Sample records for linear modelling results

  1. Identifiability Results for Several Classes of Linear Compartment Models.

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa


    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  2. Modeling results for a linear simulator of a divertor

    Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.


A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ∼1 GW/m² along the magnetic field lines and >10 MW/m² on a surface inclined at a shallow angle to the field lines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.

  3. Delta-tilde interpretation of standard linear mixed model results

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra


…effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights so that they depict what can be seen as approximately the average pairwise… data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms…

  4. Linear Models

    Searle, Shayle R


    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  5. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M


    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
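The core of the metamodeling approach above can be sketched in a few lines: simulate a probabilistic sensitivity analysis, standardize the inputs, and regress the outcome on them, so that the intercept estimates the base-case outcome and each coefficient measures a parameter's influence. All names, distributions, and coefficients below are illustrative, not taken from the paper.

```python
import random, statistics

random.seed(0)

# Hypothetical PSA: the outcome (net benefit, "nb") of one treatment depends
# on two uncertain inputs -- cure probability p and cost c (made-up values).
n = 10_000
p  = [random.gauss(0.30, 0.05) for _ in range(n)]
c  = [random.gauss(50.0, 10.0) for _ in range(n)]
nb = [1000 * pi - ci + random.gauss(0, 5) for pi, ci in zip(p, c)]

def standardize(xs):
    mu, sd = statistics.fmean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

zp, zc = standardize(p), standardize(c)

# OLS for nb ~ b0 + b1*zp + b2*zc. With centered, unit-variance and
# (near-)independent predictors the slopes reduce to mean(z * y), and the
# intercept to the mean outcome -- exactly the base-case estimate.
b0 = statistics.fmean(nb)
b1 = statistics.fmean(z * y for z, y in zip(zp, nb))
b2 = statistics.fmean(z * y for z, y in zip(zc, nb))
```

Because the inputs are standardized, b1 and b2 are directly comparable: the larger coefficient marks the parameter to which the outcome is most sensitive, which is what a tornado-style sensitivity plot would show.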

  6. Foundations of linear and generalized linear models

    Agresti, Alan


A valuable overview of the most important ideas and results in statistical analysis. Written by a highly experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models…

  7. linear-quadratic-linear model

    Tanwiwat Jaikuna


Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was not significant, with 0.00% at p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool for generating the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
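The EQD2 conversion the abstract relies on can be sketched as follows, here using only the plain linear-quadratic (LQ) relations. The LQL model additionally switches to a linear dose-response above a transition dose, which this hedged sketch omits; the α/β value in the example is illustrative.

```python
def bed(total_dose, dose_per_fraction, alpha_beta):
    """Biologically effective dose (Gy) under the linear-quadratic model:
    BED = D * (1 + d / (alpha/beta))."""
    return total_dose * (1 + dose_per_fraction / alpha_beta)

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions: EQD2 = BED / (1 + 2 / (alpha/beta))."""
    return bed(total_dose, dose_per_fraction, alpha_beta) / (1 + 2 / alpha_beta)

# A schedule already delivered in 2 Gy fractions maps to itself:
# eqd2(60, 2, 10) == 60.0
```

As a sanity check on the formulas: a hypofractionated schedule (large dose per fraction) yields an EQD2 above its physical dose for low α/β tissues, which is why the conversion matters when combining EBRT and BT contributions.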

  8. Dimension of linear models

    Høskuldsson, Agnar


Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We briefly review the basic problems in determining the dimension of linear models; then each of the eight measures is treated. The results are illustrated by examples.

  9. Results of radiotherapy in craniopharyngiomas analysed by the linear quadratic model

    Guerkaynak, M. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Oezyar, E. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Zorlu, F. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Akyol, F.H. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Lale Atahan, I. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey)


In 23 craniopharyngioma patients treated by limited surgery and external radiotherapy, the results concerning local control were analysed by the linear-quadratic formula. A biologically effective dose (BED) of 55 Gy, calculated with a time factor and an α/β value of 10 Gy, seemed to be adequate for local control. (orig.)

  10. Probe-level linear model fitting and mixture modeling results in high accuracy detection of differential gene expression

    Lemieux Sébastien


Background: The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. Results: On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% of false positives. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. Conclusion: The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.

  11. Test results for three prototype models of a linear induction launcher

    Zabar, Z.; Lu, X.N.; He, J.L.; Birenbaum, L.; Levi, E.; Kuznetsov, S.B.; Nahemow, M.D.


This paper reports on work on the linear induction launcher (LIL) that started with an analytical study, was followed by computer simulations, and was then tested with laboratory models. Two mathematical representations have been developed to describe the launcher. The first, based on the field approach with sinusoidal excitation, has been validated by static tests on a small-scale prototype fed at constant current and variable frequency. The second, a transient representation using computer simulation, allows consideration of energization by means of a capacitor bank and a power conditioner. Tests performed on three small-scale prototypes at up to 100 m/s muzzle velocities show good agreement with predicted performance.

  12. Dimension of linear models

    Høskuldsson, Agnar


Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We briefly review the basic problems in determining the dimension of linear models; then each of the eight measures is treated. The results are illustrated by examples.

  13. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    Atas, Dogu; Karadag, Özge


In the field of education, most of the data collected are multilevel structured. Cities, city-based schools, school-based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results compared to standard models when the data set has a structure going as far as individuals,…

  14. Non linear viscoelastic models

    Agerkvist, Finn T.


Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid model (SLS) are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness while the Kelvin–Voigt version does not.

  15. Linear models with R

    Faraway, Julian J


A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz…

  16. Introduction to generalized linear models

    Dobson, Annette J


    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...

  17. Comparison of TOPEX/Poseidon Sea Level and Linear Model Results forced by Various Wind Products for the Tropical Pacific

    Hackert, Eric C.; Busalacchi, Antonio J.


The goal of this paper is to compare TOPEX/Poseidon (T/P) sea level with sea level results from linear ocean model experiments forced by several different wind products for the tropical Pacific. During the period of this study (October 1992 - October 1995), available wind products include satellite winds from the ERS-1 scatterometer product of [HALP 97] and the passive microwave analysis of SSMI winds produced using the variational analysis method (VAM) of [ATLA 91]. In addition, atmospheric GCM winds from the NCEP reanalysis [KALN 96], the ECMWF analysis [ECMW 94], and the Goddard EOS-1 (GEOS-1) reanalysis experiment [SCHU 93] are available for comparison. The observed ship wind analysis of FSU [STRI 92] is also included in this study. The linear model of [CANE 84] is used as a transfer function to test the quality of each of these wind products for the tropical Pacific. The various wind products are judged by comparing the wind-forced model sea level results against the T/P sea level anomalies. Correlation and RMS difference maps show how well each wind product does in reproducing the T/P sea level signal. These results are summarized in a table showing area-average correlations and RMS differences. The large-scale low-frequency temporal signal is reproduced by all of the wind products. However, significant differences exist in both amplitude and phase on regional scales. In general, the model results forced by satellite winds do a better job reproducing the T/P signal (i.e. have a higher average correlation and lower RMS difference) than the results forced by atmospheric model winds.
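The two skill metrics used above to rank the wind products reduce to per-gridpoint statistics like the following. This is a generic sketch on made-up time series, not the paper's data.

```python
import math, statistics

def correlation(x, y):
    """Pearson correlation of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def rms_difference(x, y):
    """Root-mean-square difference between two equal-length series."""
    return math.sqrt(statistics.fmean((a - b) ** 2 for a, b in zip(x, y)))

# Hypothetical example: modelled vs. "observed" sea level anomalies (cm).
model = [1.0, 2.5, -0.5, 0.0, 1.5]
obs   = [1.2, 2.3, -0.8, 0.1, 1.9]
r, rms = correlation(model, obs), rms_difference(model, obs)
```

Computed at every grid point, these two numbers produce exactly the correlation and RMS-difference maps described in the abstract; averaging them over the basin gives the table entries used to compare wind products.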

  18. Explorative methods in linear models

    Høskuldsson, Agnar


The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  19. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth


In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network: a linear, series network element, as for example a transmission line, is switched, and the resulting harmonic increments are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch. Both methods require that harmonic measurements performed at two ends of the disconnected element are precisely synchronized.



This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total…

  1. A primer on linear models

    Monahan, John F


    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  2. Controls/CFD Interdisciplinary Research Software Generates Low-Order Linear Models for Control Design From Steady-State CFD Results

    Melcher, Kevin J.


The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers who are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended.

  3. Dynamic Linear Models with R

    Campagnoli, Patrizia; Petris, Giovanni


State space models have gained tremendous popularity in fields as disparate as engineering, economics, genetics and ecology. Introducing general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. It illustrates the fundamental steps needed to use dynamic linear models in practice, using an R package.

  4. Research on the operation characteristics of a free-piston linear generator: Numerical model and experimental results

    Guo, Chendong; Feng, Huihua; Jia, Boru; Zuo, Zhengxing; Guo, Yuyao; Roskilly, Tony


Highlights: • The operation process of the free-piston linear generator is investigated. • The larger the motor force during the starting process, the fewer reciprocating cycles the piston needs to meet the ignition condition. • The "gradually switching strategy" is the best strategy for the intermediate process. • During the generating process, the engine's indicated power is 2.9 kW with an efficiency of 37.3% under medium load. - Abstract: The free-piston linear generator (FPLG) shows unique operation characteristics due to the elimination of the crankshaft and connecting rod mechanism. This paper investigates its operation characteristics during each operating process based on simulation and experimental results. During the starting process, the larger the motor force, the fewer reciprocations the pistons needed to meet the ignition condition. When the motor force reached 300 N, the prototype could adopt a one-stroke starting strategy. During the intermediate process, it was found that the "gradually switching strategy" could help achieve smoother operation. The values of the operation parameters after the intermediate process were lower than those before it. During the generating process, cycle-to-cycle variations were observed in the piston TDC and in-cylinder gas pressure in the experimental results. According to the experimental results of the FPLG during the generating process, the calculated engine indicated power is 2.9 kW, and the corresponding indicated thermal efficiency is 37.3%. Additionally, based on a comparison of FPLG performance, it is found that the parameters during the generating process are smaller than those during the second stage of the starting process, but much higher than those during the first stage of the starting process.

  5. Multicollinearity in hierarchical linear models.

    Yu, Han; Jiang, Shanhe; Land, Kenneth C


This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model.

  6. Modelling Loudspeaker Non-Linearities

    Agerkvist, Finn T.


This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of original data, but also for their ability to work reasonably outside that range, and it is demonstrated that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the Gaussian are better suited for modelling the Bl-factor and compliance. For the inductance the sigmoid function is shown to give very good results. Finally the time-varying…

  7. (Non) linear regression modelling

    Cizek, P.; Gentle, J.E.; Hardle, W.K.; Mori, Y.


We will study causal relationships of a known form between random variables. Given a model, we distinguish one or more dependent (endogenous) variables Y = (Y1,…,Yl), l ∈ N, which are explained by a model, and independent (exogenous, explanatory) variables X = (X1,…,Xp), p ∈ N, which explain or…

  8. Generalized, Linear, and Mixed Models

    McCulloch, Charles E; Neuhaus, John M


An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m…

  9. Sparse Linear Identifiable Multivariate Modeling

    Henao, Ricardo; Winther, Ole


In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully… and is benchmarked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable…

  10. Comparing linear probability model coefficients across groups

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt


This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  11. Augmenting Data with Published Results in Bayesian Linear Regression

    de Leeuw, Christiaan; Klugkist, Irene


    In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this…

  12. Parameterized Linear Longitudinal Airship Model

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph


A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics.
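At its core, the kind of model described above is a linear state-space system x' = A x + B u whose matrix entries are filled in from stability derivatives. A toy two-state sketch of that structure follows; every numeric entry is a placeholder chosen only to be stable, not a real airship derivative.

```python
# Toy linear longitudinal model x' = A x + B u. The entries of A and B
# are illustrative placeholders, NOT real airship stability derivatives.
A = [[-0.02, -0.10],   # row for a forward-speed perturbation state
     [ 0.05, -0.30]]   # row for a pitch-like state
B = [[0.01],
     [0.002]]

def step(x, u_in, dt=0.1):
    """One forward-Euler integration step of x' = A x + B u."""
    return [x[i] + dt * (sum(A[i][j] * x[j] for j in range(2)) + B[i][0] * u_in)
            for i in range(2)]

# Free response: an initial disturbance should decay back toward trim,
# since this placeholder A has eigenvalues with negative real parts.
x = [1.0, 0.0]
for _ in range(600):   # 60 s of simulated time at dt = 0.1
    x = step(x, 0.0)
```

Specializing such a model to a particular airship amounts to recomputing the entries of A and B from that vehicle's geometric and aerodynamic data, which is exactly what avoids the per-vehicle system identification the abstract describes.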

  13. Nonlinear Modeling by Assembling Piecewise Linear Models

    Yao, Weigang; Liou, Meng-Sing


To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains accurate and robust for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
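The assembly idea above can be illustrated on a one-dimensional toy problem: blend first-order Taylor expansions of a nonlinear function, taken at a few sampling states, with normalized Gaussian radial-basis weights. The target function, centers, and width below are all illustrative choices, not the paper's airfoil setup.

```python
import math

# Sampling states and an RBF width (both illustrative tuning choices).
centers = [0.0, 1.0, 2.0, 3.0]
sigma = 0.7

def local_model(c, x):
    """First-order Taylor expansion of sin about the sampling state c."""
    return math.sin(c) + math.cos(c) * (x - c)

def assembled(x):
    """RBF-weighted assembly of the piecewise linear local models."""
    w = [math.exp(-((x - c) / sigma) ** 2) for c in centers]
    s = sum(w)
    return sum(wi / s * local_model(c, x) for wi, c in zip(w, centers))
```

Each local model is exact only near its own sampling state; the normalized weights hand off smoothly between them, so the assembled surrogate tracks the nonlinear target across the whole sampled range rather than just near one linearization point.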

  14. Decomposable log-linear models

    Eriksen, Poul Svante

    The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed-form expressions of the maximum likelihood estimate and its distribution. The model class includes what is known as decomposable graphical models, which can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models. It covers a wide range of non-hierarchical models, models with structural zeroes, models described by quasi independence, and models for level merging. Also, they have a very natural interpretation, as they may be formulated by a structured set of conditional independencies between two events given some other event. In relation to contingency…

  15. Linear and Generalized Linear Mixed Models and Their Applications

    Jiang, Jiming


    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods for the analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it includes recently developed methods, such as mixed model diagnostics, mixed model selection, and the jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested…

  16. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Ker, H. W.


    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data analytic results from three regression…

  17. Using hierarchical linear models to test differences in Swedish results from OECD’s PISA 2003: Integrated and subject-specific science education

    Maria Åström


    The possible effects of different organisations of the science curriculum in schools participating in PISA 2003 are tested with a two-level hierarchical linear model (HLM). The analysis is based on science results. Swedish schools are free to choose how they organise the science curriculum. They may choose to work subject-specifically (with Biology, Chemistry and Physics), integrated (with Science), or to mix these two. In this study, all three ways of organising science classes in compulsory school are present to some degree. None of the different ways of organising science education displayed statistically significantly better student results in scientific literacy as measured in PISA 2003. The HLM model used variables of gender, country of birth, home language, preschool attendance, and an economic, social and cultural index, as well as the teaching organisation.
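    The nesting of students in schools that motivates an HLM can be illustrated with a toy simulation (all numbers are invented and unrelated to the PISA data): a school-level random intercept induces an intraclass correlation (ICC), which a one-way ANOVA estimator recovers and which ordinary regression ignores.

    ```python
    import numpy as np

    # Students nested in schools with a school-level random intercept;
    # the true ICC is var_between / (var_between + var_within) = 1 / (1 + 4).
    rng = np.random.default_rng(7)
    n_schools, n_students = 50, 30
    school_effect = rng.normal(0.0, 1.0, n_schools)            # between-school sd = 1
    scores = school_effect[:, None] + rng.normal(0.0, 2.0, (n_schools, n_students))

    msb = n_students * scores.mean(axis=1).var(ddof=1)         # between mean square
    msw = scores.var(axis=1, ddof=1).mean()                    # within mean square
    var_between = (msb - msw) / n_students
    icc = var_between / (var_between + msw)
    print(round(icc, 2))                                       # near the true 0.2
    ```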

  18. Statistical Tests for Mixed Linear Models

    Khuri, André I; Sinha, Bimal K


    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models…

  19. Multivariate covariance generalized linear models

    Bonat, W. H.; Jørgensen, Bent


    We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated… The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions…

  20. Matrix algebra for linear models

    Gruber, Marvin H J


    Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f…

  1. Nonabelian Gauged Linear Sigma Model

    Yongbin RUAN


    The gauged linear sigma model (GLSM for short) is a 2d quantum field theory introduced by Witten twenty years ago. Since then, it has been investigated extensively in physics by Hori and others. Recently, an algebro-geometric theory (for both abelian and nonabelian GLSMs) was developed by the author and his collaborators so that one can start to rigorously compute its invariants and check them against physical predictions. The abelian GLSM is relatively better understood and is the focus of current mathematical investigation. In this article, the author would like to look over the horizon and consider the nonabelian GLSM. The nonabelian case possesses some new features unavailable in the abelian GLSM. To aid future mathematical development, the author surveys some of the key problems inspired by physics in the nonabelian GLSM.

  2. Preliminary results in implementing a model of the world economy on the CYBER 205: A case of large sparse nonsymmetric linear equations

    Szyld, D. B.


    A brief description of the Model of the World Economy implemented at the Institute for Economic Analysis is presented, together with our experience in converting the software to vector code. For each time period, the model is reduced to a linear system of over 2000 variables. The matrix of coefficients has a bordered block diagonal structure, and we show how some of the matrix operations can be carried out on all diagonal blocks at once.

  3. Non-linear Loudspeaker Unit Modelling

    Pedersen, Bo Rohde; Agerkvist, Finn T.


    Simulations of a 6½-inch loudspeaker unit are performed and compared with a displacement measurement. The non-linear loudspeaker model is based on the major nonlinear functions and expanded with time-varying suspension behaviour and flux modulation. The results are presented with FFT plots of three frequencies and different displacement levels. The model errors are discussed and analysed, including a test with a loudspeaker unit where the diaphragm is removed.

  4. From spiking neuron models to linear-nonlinear models.

    Ostojic, Srdjan; Brunel, Nicolas


    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse-correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive-timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
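    A generic LN cascade of the kind discussed here can be sketched as follows (the exponential filter, threshold-linear nonlinearity, and all parameter values are illustrative assumptions, not the analytically derived forms of the paper):

    ```python
    import numpy as np

    # Linear stage: convolve the input current with a temporal filter.
    dt = 0.001                                  # time step, s
    t = np.arange(0, 0.2, dt)
    tau = 0.02                                  # filter timescale, s (assumed)
    kernel = np.exp(-t / tau) / tau * dt        # normalized exponential filter

    rng = np.random.default_rng(0)
    current = 1.0 + 0.5 * rng.standard_normal(t.size)    # noisy input current

    filtered = np.convolve(current, kernel)[: t.size]    # linear filter output

    # Static nonlinearity: maps the filtered signal to a firing rate.
    def static_nonlinearity(u, gain=40.0, threshold=0.2):
        # rectifying nonlinearity: zero below threshold, linear above
        return gain * np.maximum(u - threshold, 0.0)

    rate = static_nonlinearity(filtered)        # firing rate estimate, Hz
    print(rate.shape, rate.min() >= 0.0)
    ```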

  5. Medium-dose-rate brachytherapy of cancer of the cervix: preliminary results of a prospectively designed schedule based on the linear-quadratic model

    Leborgne, Felix; Fowler, Jack F.; Leborgne, Jose H.; Zubizarreta, Eduardo; Curochquin, Rene


    Purpose: To compare results and complications of our previous low-dose-rate (LDR) brachytherapy schedule for early-stage cancer of the cervix with a prospectively designed medium-dose-rate (MDR) schedule, based on the linear-quadratic (LQ) model. Methods and Materials: A combination of brachytherapy, external beam pelvic and parametrial irradiation was used in 102 consecutive Stage Ib-IIb LDR treated patients (1986-1990) and 42 equally staged MDR treated patients (1994-1996). The planned MDR schedule consisted of three insertions on three treatment days with six 8-Gy brachytherapy fractions to Point A, two on each treatment day with an interfraction interval of 6 hours, plus an 18-Gy external whole pelvic dose, followed by additional parametrial irradiation. The calculated biologically effective dose (BED) for tumor was 90 Gy10 and for rectum below 125 Gy3. Results: In practice the MDR brachytherapy schedule achieved a tumor BED of 86 Gy10 and a rectal BED of 101 Gy3. The latter was better than originally planned due to a reduction from 85% to 77% in the percentage of the mean dose to the rectum in relation to Point A. The mean overall treatment time was 10 days shorter for MDR in comparison with LDR. The 3-year actuarial central control for LDR and MDR was 97% and 98% (p = NS), respectively. The Grade 2 and 3 late complications (scale 0 to 3) were 1% and 2.4%, respectively, for LDR (3-year) and MDR (2-year). Conclusions: The LQ model is a reliable tool for designing new schedules with altered fractionation and dose rates. The MDR schedule has proven to be an equivalent treatment schedule compared with LDR, with the additional advantage of a shorter overall treatment time. The mean rectal BED (Gy3) was lower than expected.
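    The BED figures quoted above follow from the standard LQ expression BED = nd(1 + d/(α/β)). A minimal sketch, ignoring the dose-rate and incomplete-repair corrections that matter for MDR in practice:

    ```python
    def bed(n, d, alpha_beta):
        """Biologically effective dose of n fractions of d Gy (standard LQ model)."""
        return n * d * (1.0 + d / alpha_beta)

    # Six 8-Gy brachytherapy fractions with alpha/beta = 10 Gy for tumor:
    tumor_bed = bed(n=6, d=8.0, alpha_beta=10.0)
    print(tumor_bed)   # ~86.4, consistent with the ~86 Gy10 reported above
    ```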

  6. Multivariate generalized linear mixed models using R

    Berridge, Damon Mark


    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  7. Research on the intermediate process of a free-piston linear generator from cold start-up to stable operation: Numerical model and experimental results

    Feng, Huihua; Guo, Chendong; Jia, Boru; Zuo, Zhengxing; Guo, Yuyao; Roskilly, Tony


    Highlights: • The intermediate process of the free-piston linear generator is investigated for the first time. • A gradual switching strategy is the best strategy for the intermediate process. • Switching at the top dead center position has the least influence on the free-piston linear generator. • After the intermediate process, the operation parameter values are smaller than those before it. - Abstract: The free-piston linear generator (FPLG) has more merits than traditional reciprocating engines (TRE) and has been under extensive investigation. Research has mainly focused on the starting process and the stable generating process of the FPLG, while there has been no report on the intermediate process from engine cold start-up to stable operation. Therefore, this paper investigates the intermediate process of the FPLG in terms of switching strategy and switching position, based on simulation and test results. Results showed that when the motor force of the linear electric machine (LEM) declined gradually from 100% to 0% with an interval of 50%, and then to a resistance force in the opposite direction of piston velocity (generator mode), the operation parameters of the FPLG showed minimal changes. Meanwhile, the engine operated more smoothly when the LEM switched its working mode from motor to generator at the piston dead center, compared with switching at mid-stroke or at a random time. More importantly, after the intermediate process, the operation parameters of the FPLG were smaller than those before it. As a result, a gradual motor/generator switching strategy is recommended, and the LEM should switch its working mode when the piston arrives at its dead center in order to achieve smooth engine operation.

  8. Linear Logistic Test Modeling with R

    Baghaei, Purya; Kubinger, Klaus D.


    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  9. The results of a non-linear mathematical model for the kinetics of 10B after BPA-F infusion in BNCT

    Ryynaenen, P.; Savolainen, S.; Hiismaeki, P.; Kangasmaeki, A.


    The aim of this study was to create a model for the kinetics of 10B in glioma patients after p-boronophenylalanine fructose complex (BPA-F) infusion, in order to predict the 10B concentration in blood during the neutron irradiations in BNCT. The more specific aim was to create a flexible model that would work with variable infusion durations and variable amounts of infused BPA, carrying out beforehand only one or two kinetic studies per trial. The previously used bi-exponential fitting and open compartmental models are capable, but heavy kinetic studies are needed before they are reliable enough. A model with a memory effect, based on phenomenological findings, was created. The model development was based on data from 10 glioblastoma multiforme patients from the Brookhaven National Laboratory BNCT trials. These patients received 290 mg BPA/kg body weight i.v. as a fructose complex during two hours. Blood samples were collected during and after the infusion. The accuracy of the model was verified by separately fitting data from 10 new glioma patients from the Finnish BNCT trials. The 10B concentration in whole-blood samples was determined by the ICP-AES method. The study concludes that the constructed non-linear model is flexible and capable of describing the kinetics of the 10B concentration in blood after a single infusion of BPA-F. (author)
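    For context, the bi-exponential description mentioned above (which the paper's non-linear memory-effect model replaces) has the form C(t) = A1·exp(-t/tau1) + A2·exp(-t/tau2); all coefficients below are invented for illustration, not fitted values from the trials:

    ```python
    import numpy as np

    def biexponential(t, a1=15.0, tau1=0.5, a2=10.0, tau2=8.0):
        """Blood 10B concentration (ppm) t hours after the end of infusion.
        Coefficients are illustrative placeholders, not fitted trial values."""
        return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

    t = np.linspace(0.0, 6.0, 25)
    c = biexponential(t)                # fast clearance phase, then slow tail
    print(c[0])                         # 25.0 ppm at t = 0 (a1 + a2)
    ```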

  10. Decomposed Implicit Models of Piecewise - Linear Networks

    J. Brzobohaty


    The general matrix form of the implicit description of a piecewise-linear (PWL) network and the symbolic block diagram of the corresponding circuit model are proposed. Their decomposed forms enable us to determine quite separately the existence of the individual breakpoints of the resultant PWL characteristic and their coordinates using independent network parameters. For the two-diode and three-diode cases all the attainable types of the PWL characteristic are introduced.

  11. Core seismic behaviour: linear and non-linear models

    Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.


    The usual methodology for core seismic behaviour analysis leads to a twofold, complementary approach: to define a core model to be included in the reactor-block seismic response analysis, simple enough but representative of the basic movements (diagrid or slab); and to define a finer core model, with basic data issued from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two different ones: DLM 1 with independent movements of the fuel and radial blanket subassemblies, and DLM 2 with a combined core movement. The non-linear model (NLM) "CORALIE" uses the same basic modelization (finite-element beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of a modal part of 2% and a part due to shocks. Finally, "CORALIE" yields the time history of the displacements and efforts on the supports, but the damping (probably greater than 2%) and the fluid-structure interaction remain to be specified. The validation experiments were performed on a RAPSODIE core mock-up at scale 1, at 1/3 similitude with respect to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis and a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and efforts as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum corresponding to a 0.1 g ground acceleration, has been carried out with these models; some aspects of these calculations are presented here.

  12. Modeling patterns in data using linear and related models

    Engelhardt, M.E.


    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models
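    The central special case, a linear regression with one covariate used for time-trend analysis, can be sketched on synthetic data (the trend and noise values are invented):

    ```python
    import numpy as np

    # Fit y = b0 + b1*t by least squares on a synthetic time trend.
    rng = np.random.default_rng(42)
    t = np.arange(20, dtype=float)               # e.g. years of operation
    y = 3.0 + 0.25 * t + 0.1 * rng.standard_normal(t.size)

    X = np.column_stack([np.ones_like(t), t])    # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1 = beta
    print(round(b1, 2))                          # estimated trend, near 0.25
    ```

    A positive and statistically significant b1 would indicate an increasing trend; the diagnostic checks discussed in the report assess whether the straight-line form is adequate.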

  13. Composite Linear Models | Division of Cancer Prevention

    By Stuart G. Baker. The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty…

  14. Actuarial statistics with generalized linear mixed models

    Antonio, K.; Beirlant, J.


    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics…

  15. Completeness Results for Linear Logic on Petri Nets

    Engberg, Uffe Henrik; Winskel, Glynn


    Completeness is shown for several versions of Girard's linear logic with respect to Petri nets as the class of models. The strongest logic considered is intuitionistic linear logic, with ⊗, ⊸, &, ⊕ and the exponential ! ("of course"), and forms of quantification. This logic…

  16. Influence of material non-linearity on the thermo-mechanical response of polymer foam cored sandwich structures - FE modelling and preliminary experimental results

    Palleti, Hara Naga Krishna Teja; Thomsen, Ole Thybo; Fruehmann, Richard.K

    In this paper, polymer foam cored sandwich structures with fibre-reinforced composite face sheets will be analyzed using the commercial FE code ABAQUS/Standard®, incorporating material and geometrical non-linearity. Large deformations are allowed, which introduces geometric non-linearity…

  17. Spaghetti Bridges: Modeling Linear Relationships

    Kroon, Cindy D.


    Mathematics and science are natural partners. One of many examples of this partnership occurs when scientific observations are made, thus providing data that can be used for mathematical modeling. Developing mathematical relationships elucidates such scientific principles. This activity describes a data-collection activity in which students employ…

  18. Non-linear finite element modeling

    Mikkelsen, Lars Pilgaard

    The note is written for courses in "Non-linear finite element method". The note has been used by the author teaching non-linear finite element modeling at Civil Engineering at Aalborg University, Computational Mechanics at Aalborg University Esbjerg, Structural Engineering at the University...

  19. Linear accelerator modeling: development and application

    Jameson, R.A.; Jule, W.D.


    Most of the parameters of a modern linear accelerator can be selected by simulating the desired machine characteristics in a computer code and observing how the parameters affect the beam dynamics. The code PARMILA is used at LAMPF for the low-energy portion of linacs. Collections of particles can be traced with a free choice of input distributions in six-dimensional phase space. Random errors are often included in order to study the tolerances which should be imposed during manufacture or in operation. An outline is given of the modifications made to the model, the results of experiments which indicate the validity of the model, and the use of the model to optimize the longitudinal tuning of the Alvarez linac
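    The core operation behind such particle-tracing codes, transporting a particle distribution through a linear element, can be sketched in a single phase-space plane (a plain drift with illustrative numbers; PARMILA itself additionally models RF gaps, random errors, and the full six-dimensional phase space):

    ```python
    import numpy as np

    # Gaussian bunch in the (x, x') phase-space plane, propagated through
    # a field-free drift of length L by a linear transfer matrix.
    rng = np.random.default_rng(5)
    n = 1000
    particles = rng.multivariate_normal(
        [0.0, 0.0], [[1e-6, 0.0], [0.0, 1e-6]], n).T   # rows: x (m), x' (rad)

    L = 0.5                                  # drift length, m (assumed)
    M = np.array([[1.0, L],
                  [0.0, 1.0]])               # drift transfer matrix
    out = M @ particles                      # x -> x + L*x', x' unchanged

    print(out.shape)
    ```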

  20. Correlations and Non-Linear Probability Models

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt


    Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
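    For the simplest case of a probit model with a single predictor, the latent-variable algebra behind such a derived correlation is short: with y* = b·x + e and e ~ N(0, 1), corr(y*, x) = b·sd(x)/sqrt(b²·var(x) + 1). A sketch of that one-predictor formula (standard latent-variable algebra; the paper's general multi-predictor treatment and significance tests are not reproduced here):

    ```python
    import math

    # Latent correlation implied by a probit coefficient b and predictor sd:
    # y* = b*x + e with e ~ N(0, 1), so var(y*) = b^2*var(x) + 1 and
    # corr(y*, x) = b*sd(x) / sqrt(b^2*var(x) + 1).
    def latent_correlation(b, sd_x):
        return b * sd_x / math.sqrt(b * b * sd_x * sd_x + 1.0)

    print(round(latent_correlation(1.0, 1.0), 4))   # 1/sqrt(2) ~ 0.7071
    ```

    Because the correlation is scale-free, it is comparable across samples in a way the raw probit coefficients are not, which is the point the abstract makes.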

  1. Extended Linear Models with Gaussian Priors

    Quinonero, Joaquin


    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model great flexibility. Support Vector Machines (SVMs) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers.

  2. Linear mixed models for longitudinal data

    Molenberghs, Geert


    This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc…

  3. Linear mixed models in sensometrics

    Kuznetsova, Alexandra

    Today's companies and researchers gather large amounts of data of different kinds. In consumer studies the objective is to collect data in order to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products… Standard statistical software packages can be used for some of the purposes; an open-source software tool, ConsumerCheck, was developed in this project and is now available for everyone. The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers as well as the ANOVA modelling part of the Consumer… This represents a major step forward for this important problem in modern consumer-driven product development, and should improve the quality of decision making in Danish as well as international food companies and other companies using the same methods.

  4. Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.

    Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad


    In the current research, the muscle equivalent linear damping coefficient, which is introduced as the force-velocity relation in a muscle model, and the corresponding time constant are investigated. In order to reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model, using a linear force-stiffness relationship (Hill-type model) and a nonlinear one, have been implemented. The OpenSim platform was used for verification of the model. Isometric activation was used for the simulation. The equivalent linear damping and the time constant of each model were extracted from the simulation results. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.
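    The notion of an equivalent linear damping coefficient can be illustrated by least-squares fitting F = -c·v to samples from a nonlinear damping law (the cubic law and its coefficients below are invented for the example, not the muscle model of the paper):

    ```python
    import numpy as np

    # Equivalent linear damping: the c that best fits F = -c*v in the
    # least-squares sense, given samples from a nonlinear damping force.
    rng = np.random.default_rng(3)
    v = rng.uniform(-1.0, 1.0, 200)            # velocity samples
    F = -0.8 * v - 0.2 * v**3                  # assumed nonlinear damping law

    c_eq = -np.sum(F * v) / np.sum(v * v)      # closed-form least-squares fit
    print(round(c_eq, 2))                      # between 0.8 and 1.0
    ```

    The fitted c_eq depends on the velocity distribution used, which is why an equivalent linear coefficient is tied to a particular operating regime.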

  5. Linear causal modeling with structural equations

    Mulaik, Stanley A


    Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation with structural equation modeling (SEM) that concerns the special case of linear causal relations. In addition to describing how the functional relation concept may be generalized to treat probabilistic causation, the book reviews historical treatments of causation and explores recent developments in experimental psychology on studies of the perception of causation. It looks at how to perceive causal

  6. Matrix Tricks for Linear Statistical Models

    Puntanen, Simo; Styan, George PH


    In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and…

  7. Modeling digital switching circuits with linear algebra

    Thornton, Mitchell A


    Modeling Digital Switching Circuits with Linear Algebra describes an approach for modeling digital information and circuitry that is an alternative to Boolean algebra. While the Boolean algebraic model has been wildly successful and is responsible for many advances in modern information technology, the approach described in this book offers new insight and different ways of solving problems. Modeling the bit as a vector instead of a scalar value in the set {0, 1} allows digital circuits to be characterized with transfer functions in the form of a linear transformation matrix. The use of transf…

  8. Updating Linear Schedules with Lowest Cost: a Linear Programming Model

    Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata


    Many civil engineering projects involve sets of tasks repeated in a predefined sequence in a number of work areas along a particular route. A useful graphical representation of schedules of such projects is the time-distance diagram, which clearly shows what process is conducted at a particular point of time and in a particular location. With repetitive tasks, the quality of project performance is conditioned by the ability of the planner to optimize workflow by synchronizing the works and resources, which usually means that resources are planned to be continuously utilized. However, construction processes are prone to risks, and a fully synchronized schedule may break down if a disturbance (bad weather, machine failure etc.) affects even one task. In such cases, works need to be rescheduled, and another optimal schedule should be built for the changed circumstances. This typically means that, to meet the fixed completion date, durations of operations have to be reduced. A number of measures are possible to achieve such reduction: working overtime, employing more resources or relocating resources from less to more critical tasks, but they all come at a considerable cost and affect the whole project. The paper investigates the problem of selecting the measures that reduce durations of tasks of a linear project so that the cost of these measures is kept to the minimum, and proposes an algorithm that could be applied to find optimal solutions as the need to reschedule arises. Considering that civil engineering projects, such as road building, usually involve fewer process types than construction projects, the complexity of scheduling problems is lower, and precise optimization algorithms can be applied. Therefore, the authors put forward a linear programming model of the problem and illustrate its principle of operation with an example.
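    The selection problem behind such an LP model (minimize total crashing cost subject to meeting the deadline and per-task reduction limits) can be illustrated on a toy serial chain, where the optimum reduces to crashing the cheapest-per-day tasks first; all task names and numbers are invented:

    ```python
    # Reduce task durations at minimum total cost so a serial chain of
    # tasks meets a deadline. For a single chain, the LP optimum is
    # obtained greedily: crash the cheapest-per-day tasks first.
    tasks = {                  # duration (days), max reduction, cost per day
        "earthworks": (10, 3, 500.0),
        "paving":     (12, 4, 800.0),
        "marking":    (5,  1, 300.0),
    }
    deadline = 22

    def crash_schedule(tasks, deadline):
        total = sum(d for d, _, _ in tasks.values())
        needed = max(0, total - deadline)      # days that must be cut
        plan, cost = {}, 0.0
        # cheapest acceleration first (exact for a serial chain)
        for name, (d, max_cut, unit_cost) in sorted(
                tasks.items(), key=lambda kv: kv[1][2]):
            cut = min(max_cut, needed)
            plan[name] = cut
            cost += cut * unit_cost
            needed -= cut
        if needed > 0:
            raise ValueError("deadline unreachable")
        return plan, cost

    plan, cost = crash_schedule(tasks, deadline)
    print(plan, cost)
    ```

    In the general multi-chain case treated by the paper, the greedy shortcut no longer applies and the full LP must be solved.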

  9. Mathematical problems in non-linear Physics: some results


    The basic results presented in this report are the following: 1) Characterization of the range and kernel of the variational derivative. 2) Determination of general conservation laws in linear evolution equations, as well as bounds for the number of polynomial conserved densities in non-linear evolution equations in two independent variables of even order. 3) Construction of the most general evolution equation which has a given family of conserved densities. 4) Regularity conditions for the validity of the Lie invariance method. 5) A simple class of perturbations in non-linear wave equations. 6) Soliton solutions in generalized KdV equations. (author)

  10. Nonlinear price impact from linear models

    Patzelt, Felix; Bouchaud, Jean-Philippe


    The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales (Patzelt and Bouchaud 2017 arXiv:1706.04163). Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear ‘propagator’ models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator in addition to the order sign history increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Therefore, our results contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.
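
    A minimal simulation of a linear propagator model, the class of models calibrated in the paper, can illustrate the basic mechanics: returns are a linear superposition of past signed order flow with a decaying kernel. The kernel shape and all parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Linear 'propagator' model sketch: r_t = sum_l G(l) * eps_{t-l},
# with iid signed trades eps (+1 buy, -1 sell) and a power-law
# decaying bare impact kernel G. Parameters are invented.
rng = np.random.default_rng(0)
T = 10_000
eps = rng.choice([-1.0, 1.0], size=T)

lags = np.arange(1, 101)
G = 0.1 * lags ** -0.5                 # decaying impact kernel

returns = np.convolve(eps, G)[:T]      # r_t = sum_k G[k] * eps[t-k]
# Lag-1 response: average return one step after a trade, approx G[1]
R1 = np.mean(returns[1:] * eps[:-1])
print(R1)
```
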

  11. A linear model of ductile plastic damage

    Lemaitre, J.


    A three-dimensional model of isotropic ductile plastic damage based on a continuum damage variable on the effective stress concept and on thermodynamics is derived. As shown by experiments on several metals and alloys, the model, integrated in the case of proportional loading, is linear with respect to the accumulated plastic strain and shows a large influence of stress triaxiality.

  12. An easy way to obtain strong duality results in linear, linear semidefinite and linear semi-infinite programming

    Pop, P.C.; Still, Georg J.


    In linear programming it is known that an appropriate non-homogeneous Farkas Lemma leads to a short proof of the strong duality results for a pair of primal and dual programs. By using a corresponding generalized Farkas lemma we give a similar proof of the strong duality results for semidefinite

  13. Electron Model of Linear-Field FFAG

    Koscielniak, Shane R


    A fixed-field alternating-gradient accelerator (FFAG) that employs only linear-field elements ushers in a new regime in accelerator design and dynamics. The linear-field machine has the ability to compact an unprecedented range in momenta within a small component aperture. With a tune variation which results from the natural chromaticity, the beam crosses many strong, uncorrectable, betatron resonances during acceleration. Further, relativistic particles in this machine exhibit a quasi-parabolic time-of-flight that cannot be addressed with a fixed-frequency rf system. This leads to a new concept of bucketless acceleration within a rotation manifold. With a large energy jump per cell, there is possibly strong synchro-betatron coupling. A few-MeV electron model has been proposed to demonstrate the feasibility of these untested acceleration features and to investigate them at length under a wide range of operating conditions. This paper presents a lattice optimized for a 1.3 GHz rf, initial technology choices f...

  14. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    Faraway, Julian J


    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  15. Latent log-linear models for handwritten digit classification.

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann


    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  16. Relevance of Linear Stability Results to Enhanced Oil Recovery

    Ding, Xueru; Daripa, Prabir


    How relevant can results based on linear stability theory be to full-scale simulation results for any given problem? Put differently, is the optimal design of a system based on linear stability results optimal, or even near-optimal, for the complex nonlinear system with certain objectives of interest in mind? We address these issues in the context of enhanced oil recovery by chemical flooding, based on ongoing work. Supported by the Qatar National Research Fund (a member of the Qatar Foundation).

  17. Ground Motion Models for Future Linear Colliders

    Seryi, Andrei


    Optimization of the parameters of a future linear collider requires comprehensive models of ground motion. Both general models of ground motion and specific models of the particular site and local conditions are essential. Existing models are not completely adequate, either because they are too general, or because they omit important peculiarities of ground motion. The model considered in this paper is based on recent ground motion measurements performed at SLAC and at other accelerator laboratories, as well as on historical data. The issues to be studied for the models to become more predictive are also discussed

  18. Application of the simplex method of linear programming model to ...

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It equally elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
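
    A toy version of such a profit-maximization problem can be solved with SciPy's HiGHS-based LP solver (the products, profits, and resource limits below are invented for illustration; they are not the Saclux Paint data):

```python
from scipy.optimize import linprog

# Toy product-mix problem: maximize 40*x1 + 30*x2 subject to two
# resource constraints. linprog minimizes, so profits are negated.
profit = [40.0, 30.0]
A_ub = [[1.0, 1.0],     # labour hours:   x1 + x2 <= 12
        [2.0, 1.0]]     # raw material: 2*x1 + x2 <= 16
b_ub = [12.0, 16.0]

res = linprog([-p for p in profit], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)  # optimal mix and the maximized profit
```

    The optimum sits at the intersection of the two constraints, which is the vertex-to-vertex search the simplex method performs.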

  19. On-line control models for the Stanford Linear Collider

    Sheppard, J.C.; Helm, R.H.; Lee, M.J.; Woodley, M.D.


    Models for computer control of the SLAC three-kilometer linear accelerator and damping rings have been developed as part of the control system for the Stanford Linear Collider. Some of these models have been tested experimentally and implemented in the control program for routine linac operations. This paper will describe the development and implementation of these models, as well as some of the operational results

  20. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Naya, H; Peñagaricano, F; Urioste, J I


    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently fertility records are generally scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of the herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). Consequently, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  1. Modelling point patterns with linear structures

    Møller, Jesper; Rasmussen, Jakob Gulddahl


    …processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line-shaped structures. We consider simulations of this model and compare with real data.

  3. Optimal designs for linear mixture models

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.


    In a recent paper Snee and Marquardt [8] considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of this

  4. Optimal designs for linear mixture models

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.


    In a recent paper Snee and Marquardt (1974) considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of

  5. Linear factor copula models and their properties

    Krupskii, Pavel; Genton, Marc G.


    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.
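
    The additive one-factor construction can be sketched by simulation: each variable is a loading times a common factor plus an independent error, and rank-transforming the margins yields a sample from the implied copula. The loadings and sample size below are illustrative assumptions:

```python
import numpy as np

# Linear one-factor construction: X_j = a_j * Z + e_j, with common
# factor Z and independent errors e_j. Gaussian inputs used here for
# simplicity; the loadings are invented.
rng = np.random.default_rng(42)
n, d = 50_000, 3
a = np.array([1.0, 0.8, 0.5])                      # factor loadings
Z = rng.standard_normal(n)
X = a * Z[:, None] + rng.standard_normal((n, d))

# Copula sample via ranks (empirical probability integral transform)
U = X.argsort(axis=0).argsort(axis=0) / (n - 1)

# Theory: corr(X_0, X_1) = a0*a1 / sqrt((1+a0^2)(1+a1^2)) ~ 0.44
corr = np.corrcoef(X, rowvar=False)
joint_tail = np.mean((U[:, 0] > 0.95) & (U[:, 1] > 0.95)) / 0.05
print(corr[0, 1], joint_tail)
```
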

  7. Application of linearized model to the stability analysis of the pressurized water reactor

    Li Haipeng; Huang Xiaojin; Zhang Liangju


    A linear time-invariant model of the pressurized water reactor is formulated through linearization of the nonlinear model. Model simulation results show that the linearized model agrees well with the nonlinear model under small perturbations. Based upon Lyapunov's first method, the linearized model is applied to the stability analysis of the pressurized water reactor. The calculation results show that the linearization methodology for stability analysis is convenient and feasible. (authors)
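
    The linearize-then-check-eigenvalues workflow can be sketched generically: approximate the Jacobian of a nonlinear state model at an equilibrium by finite differences, then apply Lyapunov's first method. The toy vector field below is a stand-in, not the reactor model from the paper:

```python
import numpy as np

# Sketch of Lyapunov's first method: linearize x' = f(x) at an
# equilibrium and check that all Jacobian eigenvalues have negative
# real parts. f is an arbitrary illustrative nonlinear system.
def f(x):
    x1, x2 = x
    return np.array([-2.0 * x1 + x1 * x2,
                     -0.5 * x2 - x1 * x1])

def jacobian(f, x0, eps=1e-6):
    """Central finite-difference Jacobian of f at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
    return J

x_eq = np.array([0.0, 0.0])            # equilibrium: f(x_eq) = 0
A = jacobian(f, x_eq)
eigvals = np.linalg.eigvals(A)
stable = np.all(eigvals.real < 0)      # asymptotic stability criterion
print(eigvals, stable)
```
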

  8. Diagnostics for Linear Models With Functional Responses

    Xu, Hongquan; Shen, Qing


    Linear models where the response is a function and the predictors are vectors are useful in analyzing data from designed experiments and other situations with functional observations. Residual analysis and diagnostics are considered for such models. Studentized residuals are defined and their properties are studied. Chi-square quantile-quantile plots are proposed to check the assumption of Gaussian error process and outliers. Jackknife residuals and an associated test are proposed to det...

  9. Evaluating the double Poisson generalized linear model.

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique


    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant), which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although the COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the negative binomial (NB) GLM. Given that the DP GLM can be estimated with inexpensive computation and that its coefficients are simpler to interpret, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
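
    The difficulty with the DP's normalizing constant can be made concrete by brute force: the DP kernel (Efron, 1986) does not sum to one, so below it is normalized by a truncated sum. The paper proposes a more accurate closed-form approximation; the values of mu and phi here are arbitrary toy choices, not from the crash data:

```python
import math

# Double Poisson kernel of Efron (1986):
#   f(y) ∝ phi^(1/2) e^(-phi*mu) (e^(-y) y^y / y!) (e*mu/y)^(phi*y)
def dp_log_kernel(y, mu, phi):
    """Unnormalized log-probability of the DP distribution at count y."""
    t = 0.5 * math.log(phi) - phi * mu - math.lgamma(y + 1)
    if y > 0:
        t += -y + y * math.log(y) + phi * y * (1 + math.log(mu) - math.log(y))
    return t

mu, phi = 2.0, 0.5          # phi < 1 corresponds to over-dispersion
ys = range(200)             # truncation point for the infinite sum
kernel = [math.exp(dp_log_kernel(y, mu, phi)) for y in ys]
total = sum(kernel)         # reciprocal of the normalizing constant
probs = [k / total for k in kernel]
mean = sum(y * p for y, p in zip(ys, probs))
print(total, mean)          # total differs from 1; mean stays near mu
```
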

  10. [From clinical judgment to linear regression model].

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O


    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful for predicting or showing the relationship between two or more variables, as long as the dependent variable is quantitative and normally distributed. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bX, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease in "Y" that occurs when the variable "X" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
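
    The quantities named above can be computed directly. A worked example with made-up data, fitting Y = a + bX by least squares and computing R²:

```python
import numpy as np

# Worked example of the regression line Y = a + bX and the
# coefficient of determination R^2, on invented data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

b, a = np.polyfit(x, y, 1)              # slope b and intercept a
y_hat = a + b * x                       # fitted values on the line
ss_res = np.sum((y - y_hat) ** 2)       # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
r2 = 1 - ss_res / ss_tot                # coefficient of determination
print(a, b, r2)
```
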

  11. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    Härdle, W.K.; Mammen, E.; Müller, M.D.


    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic that allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  12. Modeling of Volatility with Non-linear Time Series Model

    Kim Song Yon; Kim Mun Chol


    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (threshold auto-regressive) model with an AARCH (asymmetric auto-regressive conditional heteroskedasticity) error term, and its parameter estimation is studied.
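
    A two-regime threshold autoregression, the TAR part of the model above, can be simulated in a few lines (the threshold, regime coefficients, and Gaussian noise are illustrative assumptions; the AARCH error term is omitted for brevity):

```python
import numpy as np

# Toy two-regime TAR process: the AR coefficient switches according
# to the sign of the previous observation (threshold at 0).
rng = np.random.default_rng(7)
T = 5000
y = np.zeros(T)
for t in range(1, T):
    phi = 0.8 if y[t - 1] >= 0 else 0.3   # regime-dependent persistence
    y[t] = phi * y[t - 1] + rng.standard_normal()
print(y.mean(), y.std())
```

    The asymmetric persistence produces the kind of regime-dependent volatility clustering the abstract alludes to.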

  13. Linear control theory for gene network modeling.

    Shin, Yong-Jun; Bleris, Leonidas


    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
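
    As a sketch of the transfer-function view, a hypothetical two-gene cascade can be modelled as two first-order systems in series; the gains and rate constants below are invented, not taken from the paper:

```python
import numpy as np
from scipy import signal

# Cascade of two first-order stages: G(s) = k1/(s + a1) * k2/(s + a2)
k1, a1 = 1.0, 0.5       # gain and decay rate of the first stage
k2, a2 = 2.0, 1.0       # gain and decay rate of the second stage

num = np.polymul([k1], [k2])            # numerator k1*k2
den = np.polymul([1.0, a1], [1.0, a2])  # denominator (s+a1)(s+a2)
cascade = signal.TransferFunction(num, den)

t = np.linspace(0.0, 20.0, 500)
t, y = signal.step(cascade, T=t)        # step response of the cascade
steady_state = k1 * k2 / (a1 * a2)      # final-value theorem: G(0)
print(steady_state, y[-1])
```
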

  14. Thresholding projection estimators in functional linear models

    Cardot, Hervé; Johannes, Jan


    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows one to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis which permits to get easily mean squ...

  15. Stochastic linear programming models, theory, and computation

    Kall, Peter


    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  16. Variance Function Partially Linear Single-Index Models.

    Lian, Heng; Liang, Hua; Carroll, Raymond J


    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  17. Phylogenetic mixtures and linear invariants for equal input models.

    Casanellas, Marta; Steel, Mike


    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  18. Running vacuum cosmological models: linear scalar perturbations

    Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil)]; Tamayo, D.A. [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)]


    In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ to its mean energy density ρ̄_Λ ≡ Λ/8πG. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models consider only the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σᵢ ρ̄_Λᵢ. Each vacuum component Λᵢ is associated and interacting with one of the i matter components at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running-vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description leads only to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.

  19. Linear Parametric Model Checking of Timed Automata

    Hune, Tohmas Seidelin; Romijn, Judi; Stoelinga, Mariëlle


    We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state space is shown to be correct. A second contribution of this paper is the identification of a subclass of parametric timed automata (L/U automata) for which the emptiness problem is decidable, contrary to the full class, where it is known to be undecidable. We also present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach...

  20. Genetic parameters for racing records in trotters using linear and generalized linear models.

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M


    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. 
Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  1. Linear control theory for gene network modeling.

    Yong-Jun Shin

    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.

  2. Estimation and variable selection for generalized additive partial linear models

    Wang, Li


    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
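
    The variable-selection idea can be sketched on synthetic data. The paper uses a nonconcave (SCAD-type) penalized quasi-likelihood; the sketch below substitutes a simpler L1 (lasso) penalty, which is not the authors' estimator but shows the same select-while-estimating behaviour on the linear part:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic linear part: only the first two of six coefficients are
# nonzero. An L1 penalty (stand-in for the paper's SCAD penalty)
# should recover them while zeroing out the noise variables.
rng = np.random.default_rng(0)
n, p = 500, 6
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + rng.standard_normal(n)

model = LassoCV(cv=5).fit(X, y)                    # penalty chosen by CV
selected = np.flatnonzero(np.abs(model.coef_) > 0.1)
print(np.round(model.coef_, 2), selected)
```
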

  3. Preliminary experimental results from a linear reciprocating magnetic refrigerator prototype

    Tagliafico, Luca Antonio; Scarpa, Federico; Valsuani, Federico; Tagliafico, Giulio


    A linear reciprocating magnetic refrigerator prototype was designed and built with the aid of an industrial partner. The refrigerator is based on the Active Magnetic Regenerative cycle and exploits two regenerators working in parallel. The active material is gadolinium in plates, 0.8 mm thick, for a total mass of 0.36 kg. The device is described, and results of magnetic field and temperature span measurements are presented. The designed permanent magnet structure, based on an improved cross-type arrangement, generates a maximum magnetic field intensity of 1.55 T in air over a gap of (13 × 50 × 100) mm³. The maximum temperature span achieved is 5.0 K in a free-run condition. Highlights: ► We give preliminary results from a linear reciprocating magnetic refrigerator prototype. ► The design is intended to allow process visualization and investigation. ► The prototype behavior gives us various suggestions to improve its general performance.

  4. Multivariate statistical modelling based on generalized linear models

    Fahrmeir, Ludwig


    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  5. Approximating chiral quark models with linear σ-models

    Broniowski, Wojciech; Golli, Bojan


    We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form ((1)/(2))A(σ∂ μ σ+π∂ μ π) 2 . We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea

  6. Non-linear calibration models for near infrared spectroscopy

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten


    This study compares a variety of non-linear calibration techniques, including least squares support vector machines (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting when using the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small…

  7. Aspects of general linear modelling of migration.

    Congdon, P


    "This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt

  8. Model Selection with the Linear Mixed Model for Longitudinal Data

    Ryoo, Ji Hoon


    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  9. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo


    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, with accuracy assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
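Mean absolute percentage error, the accuracy measure used above, is straightforward to compute. The sketch below uses hypothetical start times, not the study's data:

```python
def mape(actual, predicted):
    """Mean absolute percentage error (%) between paired observations."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("need equally sized, non-empty sequences")
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical 5 m start times (s): measured vs. model-predicted
measured = [2.10, 2.25, 2.40, 2.05]
predicted = [2.08, 2.30, 2.35, 2.10]
print(round(mape(measured, predicted), 2))
```

A smaller MAPE means the model's predictions sit closer to the measured times in relative terms, which is why it is a natural yardstick when start times differ between swimmers.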

  10. Tensor decompositions and sparse log-linear models

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.


    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  11. Comparison between linear quadratic and early time dose models

    Chougule, A.A.; Supe, S.J.


    During the 70s, much interest was focused on fractionation in radiotherapy with the aim of improving tumor control rate without producing unacceptable normal tissue damage. To compare the radiobiological effectiveness of various fractionation schedules, empirical formulae such as Nominal Standard Dose, Time Dose Factor, Cumulative Radiation Effect and Tumour Significant Dose, were introduced and were used despite many shortcomings. It has been claimed that a recent linear quadratic model is able to predict the radiobiological responses of tumours as well as normal tissues more accurately. We compared Time Dose Factor and Tumour Significant Dose models with the linear quadratic model for tumour regression in patients with carcinomas of the cervix. It was observed that the prediction of tumour regression estimated by the Tumour Significant Dose and Time Dose factor concepts varied by 1.6% from that of the linear quadratic model prediction. In view of the lack of knowledge of the precise values of the parameters of the linear quadratic model, it should be applied with caution. One can continue to use the Time Dose Factor concept which has been in use for more than a decade as its results are within ±2% as compared to that predicted by the linear quadratic model. (author). 11 refs., 3 figs., 4 tabs
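The linear quadratic comparisons above rest on the standard biologically effective dose formula, BED = nd(1 + d/(α/β)). A minimal sketch with hypothetical fractionation schedules (not the schedules from the study):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose (Gy) under the linear quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Hypothetical comparison for a late-responding tissue (alpha/beta = 3 Gy):
conventional = bed(30, 2.0, 3.0)      # 30 x 2 Gy
hypofractionated = bed(15, 3.0, 3.0)  # 15 x 3 Gy
```

Here the two schedules deliver the same physical dose only in the first case (60 Gy vs. 45 Gy), yet their BED values (100 vs. 90 Gy) are much closer, illustrating why fraction size, not just total dose, drives the model's predictions.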

  12. Preliminary results of Linear Induction Accelerator LIA-200

    Sharma, Archana; Senthil, K; Kumar, D D Praveen; Mitra, S; Sharma, V; Patel, A; Sharma, D K; Rehim, R; Kolge, T S; Saroj, P C; Acharya, S; Amitava, Roy; Rakhee, M; Nagesh, K V; Chakravarthy, D P [Accelerator and Pulse Power Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India)]


    Repetitive pulsed power technology is being developed with its potential applications in mind: material modification, disinfection of water, treatment of timber, food pasteurization, etc. BARC has indigenously developed a Linear Induction Accelerator (LIA-200) rated for 200 kV, 4 kA, 100 ns, 10 Hz. The satisfactory performance of all the sub-systems, including the solid state power modulator, amorphous core based pulsed transformers, magnetic switches, water capacitors, water pulse-forming line, induction adder and field-emission diode, has been demonstrated. This paper presents some design details and operational results of this pulsed power system. It also highlights the need for further research and development to build reliable and economical high-average-power systems for industrial applications.

  13. Linear models in the mathematics of uncertainty

    Mordeson, John N; Clark, Terry D; Pham, Alex; Redmond, Michael A


    The purpose of this book is to present new mathematical techniques for modeling global issues. These mathematical techniques are used to determine linear equations between a dependent variable and one or more independent variables in cases where standard techniques such as linear regression are not suitable. In this book, we examine cases where the number of data points is small (effects of nuclear warfare), where the experiment is not repeatable (the breakup of the former Soviet Union), and where the data is derived from expert opinion (how conservative is a political party). In all these cases the data is difficult to measure and an assumption of randomness and/or statistical validity is questionable. We apply our methods to real world issues in international relations such as nuclear deterrence, smart power, and cooperative threat reduction. We next apply our methods to issues in comparative politics such as successful democratization, quality of life, economic freedom, political stability, and fail...

  14. Generalized Linear Models in Vehicle Insurance

    Silvie Kafková


    Actuaries in insurance companies try to find the best model for the estimation of insurance premiums, which depend on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of a portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that GLMs are not limited by inflexible preconditions. Our aim is to model the dependence of annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposes a classification analysis approach that addresses the selection of predictor variables. Models with different predictor variables are compared by analysis of deviance and the Akaike information criterion (AIC). Based on this comparison, the model giving the best estimate of annual claim frequency is chosen. All statistical calculations are computed in the R environment, which contains the stats package with functions for the estimation of GLM parameters and for analysis of deviance.
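Claim-frequency GLMs of the kind described above are typically Poisson regressions with a log link. The sketch below fits one by Newton-Raphson on invented data; it is not the authors' R code, and both the `fit_poisson_glm` name and the data are illustrative assumptions:

```python
import math

def fit_poisson_glm(x, y, iters=25):
    """Fit log(E[y]) = b0 + b1*x by Newton-Raphson (the IRLS scheme
    for a single-predictor Poisson GLM with log link)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        lam = [math.exp(b0 + b1 * xi) for xi in x]
        # score vector X'(y - lam) and information matrix X' diag(lam) X
        g0 = sum(yi - li for yi, li in zip(y, lam))
        g1 = sum((yi - li) * xi for yi, li, xi in zip(y, lam, x))
        h00 = sum(lam)
        h01 = sum(li * xi for li, xi in zip(lam, x))
        h11 = sum(li * xi * xi for li, xi in zip(lam, x))
        det = h00 * h11 - h01 * h01
        # solve the 2x2 Newton system by hand
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical portfolio: x = a driver risk score, y = annual claim counts
x = [0, 0, 1, 1, 2, 2, 3, 3]
y = [0, 1, 1, 2, 2, 3, 4, 6]
b0, b1 = fit_poisson_glm(x, y)
fitted = [math.exp(b0 + b1 * xi) for xi in x]  # expected claim frequencies
```

At convergence the score equations hold, so the fitted frequencies reproduce the observed total claim count, the Poisson-GLM analogue of the balance property actuaries rely on.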

  15. Atmospheric Deposition Modeling Results

    U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...

  16. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    Liang, Faming


    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  17. Low-energy limit of the extended Linear Sigma Model

    Divotgey, Florian [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Kovacs, Peter [Wigner Research Center for Physics, Hungarian Academy of Sciences, Institute for Particle and Nuclear Physics, Budapest (Hungary); GSI Helmholtzzentrum fuer Schwerionenforschung, ExtreMe Matter Institute, Darmstadt (Germany); Giacosa, Francesco [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Jan-Kochanowski University, Institute of Physics, Kielce (Poland); Rischke, Dirk H. [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); University of Science and Technology of China, Interdisciplinary Center for Theoretical Study and Department of Modern Physics, Hefei, Anhui (China)


    The extended Linear Sigma Model is an effective hadronic model based on the linear realization of chiral symmetry SU(N{sub f}){sub L} x SU(N{sub f}){sub R}, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the extended Linear Sigma Model (eLSM) for N{sub f} = flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. The resulting low-energy effective action is identical to Chiral Perturbation Theory (ChPT) after choosing a representative for the coset space generated by chiral symmetry breaking and expanding it in powers of (derivatives of) the pion fields. The tree-level values of the coupling constants of the effective low-energy action agree remarkably well with those of ChPT. (orig.)

  18. A variational formulation for linear models in coupled dynamic thermoelasticity

    Feijoo, R.A.; Moura, C.A. de.


    A variational formulation for linear models in coupled dynamic thermoelasticity is studied; it quite naturally motivates the design of a numerical scheme for the problem. When linked to regularization or penalization techniques, this algorithm may be applied to more general models, namely those that include non-linear constraints associated with variational inequalities. The basic postulates of Mechanics and Thermodynamics as well as some well-known mathematical techniques are described. A thorough description of the algorithm's implementation with the finite-element method is also provided. Proofs of existence and uniqueness of solutions and of convergence of the approximations are presented, and some numerical results are exhibited. (Author)

  19. Linear Equating for the NEAT Design: Parameter Substitution Models and Chained Linear Relationship Models

    Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.


    This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…
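The equating models analyzed in the paper build on the basic linear equating transform, which maps a form-X score to the form-Y scale by matching means and standard deviations. A minimal sketch with hypothetical form statistics (not values from the paper):

```python
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Map a score x on form X to the form-Y scale by matching means and SDs:
    y = mean_y + (sd_y / sd_x) * (x - mean_x)."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# Hypothetical form statistics: form X has mean 25, SD 5; form Y mean 28, SD 6
print(linear_equate(30, mean_x=25, sd_x=5, mean_y=28, sd_y=6))  # -> 34.0
```

A score one SD above the form-X mean lands one SD above the form-Y mean; the NEAT-design models in the paper differ chiefly in how the anchor test is used to estimate these means and SDs for the combined population.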

  20. Mathematical modelling and linear stability analysis of laser fusion cutting

    Hermanns, Torsten; Schulz, Wolfgang; Vossen, Georg; Thombansen, Ulrich


    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so-called stability function that describes the correlation between the setting values of the process and the amount of dynamic behavior the process exhibits.

  1. Mathematical modelling and linear stability analysis of laser fusion cutting

    Hermanns, Torsten; Schulz, Wolfgang [RWTH Aachen University, Chair for Nonlinear Dynamics, Steinbachstr. 15, 52047 Aachen (Germany); Vossen, Georg [Niederrhein University of Applied Sciences, Chair for Applied Mathematics and Numerical Simulations, Reinarzstr.. 49, 47805 Krefeld (Germany); Thombansen, Ulrich [RWTH Aachen University, Chair for Laser Technology, Steinbachstr. 15, 52047 Aachen (Germany)


    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so-called stability function that describes the correlation between the setting values of the process and the amount of dynamic behavior the process exhibits.

  2. Performances Of Estimators Of Linear Models With Autocorrelated ...

    The performances of five estimators of linear models with autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite are quite similar to the properties of the estimators when the sample size is infinite, although ...

  3. Performances of estimators of linear auto-correlated error model ...

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) estimator compares favourably with the Generalized Least Squares (GLS) estimators in ...

  4. Genetic programming over context-free languages with linear constraints for the knapsack problem: first results.

    Bruhn, Peter; Geyer-Schulz, Andreas


    In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.

  5. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.


    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  6. From linear to generalized linear mixed models: A case study in repeated measures

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  7. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A


    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare the accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists, and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias, and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences in correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN: r = 0.89, RMSE: 1.07-1.08 METs; linear models: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN: r = 0.88, RMSE: 1.12 METs; linear models: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and the ANN models outperformed both linear models for the wrist-worn accelerometers (ANN: r = 0.82-0.84, RMSE: 1.26-1.32 METs; linear models: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05). For wrist-worn accelerometers, ANN models therefore offer a significant improvement in EE prediction accuracy over linear models; conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh…

  8. Modelling and measurement of a moving magnet linear compressor performance

    Liang, Kun; Stone, Richard; Davies, Gareth; Dadd, Mike; Bailey, Paul


    A novel moving magnet linear compressor with clearance seals and flexure bearings has been designed and constructed. It is suitable for a refrigeration system with a compact heat exchanger, such as would be needed for CPU cooling. The performance of the compressor has been experimentally evaluated with nitrogen and a mathematical model has been developed to evaluate the performance of the linear compressor. The results from the compressor model and the measurements have been compared in terms of cylinder pressure, the ‘P–V’ loop, stroke, mass flow rate and shaft power. The cylinder pressure was not measured directly but was derived from the compressor dynamics and the motor magnetic force characteristics. The comparisons indicate that the compressor model is well validated and can be used to study the performance of this type of compressor, to help with design optimization and the identification of key parameters affecting the system transients. The electrical and thermodynamic losses were also investigated, particularly for the design point (stroke of 13 mm and pressure ratio of 3.0), since a full understanding of these can lead to an increase in compressor efficiency. - Highlights: • Model predictions of the performance of a novel moving magnet linear compressor. • Prototype linear compressor performance measurements using nitrogen. • Reconstruction of P–V loops using a model of the dynamics and electromagnetics. • Close agreement between the model and measurements for the P–V loops. • The design point motor efficiency was 74%, with potential improvements identified
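The 'P–V' loop comparison above reduces to integrating pressure over volume around the cycle: the enclosed area is the indicated work. A minimal sketch with a hypothetical rectangular loop, not the measured compressor data:

```python
def cycle_work(pressures, volumes):
    """Net indicated work (J) over one closed cycle, W = integral of P dV,
    by trapezoidal integration around the P-V loop."""
    n = len(volumes)
    work = 0.0
    for i in range(n):
        j = (i + 1) % n  # wrap around so the loop closes
        work += 0.5 * (pressures[i] + pressures[j]) * (volumes[j] - volumes[i])
    return work

# Hypothetical rectangular loop: expand at 3 bar, return at 1 bar, 10 cm^3 swept
P = [3e5, 3e5, 1e5, 1e5]          # Pa
V = [10e-6, 20e-6, 20e-6, 10e-6]  # m^3
print(cycle_work(P, V))           # net work done by the gas, about 2 J per cycle
```

The sign of the result encodes the traversal direction of the loop, which is one reason reconstructed P-V loops (as in the paper) are a convenient check on both the dynamics model and the measurements.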

  9. Metrical results on systems of small linear forms

    Hussain, M.; Kristensen, Simon

    In this paper the metric theory of Diophantine approximation associated with small linear forms is investigated. Khintchine-Groshev theorems are established, along with a Hausdorff measure generalization, without the monotonicity assumption on the approximating function.

  10. Practical likelihood analysis for spatial generalized linear mixed models

    Bonat, W. H.; Ribeiro, Paulo Justiniano


    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap datasets are, respectively, examples of binomial and count data modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides estimates similar to those from Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and the modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility of obtaining realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning…

  11. Stochastic modeling of mode interactions via linear parabolized stability equations

    Ran, Wei; Zare, Armin; Hack, M. J. Philipp; Jovanovic, Mihailo


    Low-complexity approximations of the Navier-Stokes equations have been widely used in the analysis of wall-bounded shear flows. In particular, the parabolized stability equations (PSE) and Floquet theory have been employed to capture the evolution of primary and secondary instabilities in spatially-evolving flows. We augment linear PSE with Floquet analysis to formally treat modal interactions and the evolution of secondary instabilities in the transitional boundary layer via a linear progression. To this end, we leverage Floquet theory by incorporating the primary instability into the base flow and accounting for different harmonics in the flow state. A stochastic forcing is introduced into the resulting linear dynamics to model the effect of nonlinear interactions on the evolution of modes. We examine the H-type transition scenario to demonstrate how our approach can be used to model nonlinear effects and capture the growth of the fundamental and subharmonic modes observed in direct numerical simulations and experiments.

  12. Results of fractionated stereotactic radiotherapy with linear accelerator

    Aoki, Masahiko; Watanabe, Sadao [Aomori Prefectural Central Hospital (Japan)]; Mariya, Yasushi [and others]


    A lot of clinical data about stereotactic radiotherapy (SRT) have been reported; however, standard fractionation schedules have not been established. In this paper, our clinical results of SRT, delivered in 3 fractions of 10 Gy, are reported. Between February 1992 and March 1995, we treated 41 patients with 7 arteriovenous malformations and 41 intracranial tumors using a stereotactic technique implemented on a standard 10 MV X-ray linear accelerator. The average age was 47.4 years (range 3-80 years) and the average follow-up time was 16.7 months (range 3.5-46.1 months). The patients received 3 fractions of 10 Gy over 3 days, delivered by multiple-arc narrow beams under 3 cm in width and length. A three-piece handmade shell was used for head fixation without any anesthetic procedures. A three-dimensional treatment planning system (Focus) was applied for the dose calculation. All patients have received at least one follow-up radiographic study and one clinical examination. In four of the 7 patients with AVM the nidus has become smaller; 9 of the 21 patients with benign intracranial tumors and 9 of the 13 patients with intracranial malignant tumors have shown complete or partial response to the therapy. In 14 patients, disease was stable or unevaluable due to the short follow-up time. In 5 patients (3 with astrocytoma, 1 each with meningioma and craniopharyngioma), disease was progressive. Only 1 patient, with falx meningioma, had a minor complication due to symptomatic brain edema around the tumor. Although further evaluation of target control (i.e. tumor and nidus) and late normal tissue damage is needed, preliminary clinical results indicate that SRT with our methods is safe and effective. (author)

  13. Deterministic operations research models and methods in linear optimization

    Rader, David J


    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear…

  14. Effective connectivity between superior temporal gyrus and Heschl's gyrus during white noise listening: linear versus non-linear models.

    Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia


    This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in the human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM, while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (p(uncorr) …). Model comparison results showed strong evidence of STG as the input centre. The winning model is preferred by 6 out of 10 participants. The results were supported by BMS results for group studies, with expected posterior probability r = 0.7830 and exceedance probability ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG (p …). The model comparison between linear and non-linear models using BMS prefers the non-linear connection (r = 0.9160, ϕ = 1.000), in which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself.
    We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with…

  15. Petri Nets as Models of Linear Logic

    Engberg, Uffe Henrik; Winskel, Glynn


    The chief purpose of this paper is to appraise the feasibility of Girard's linear logic as a specification language for parallel processes. To this end we propose an interpretation of linear logic in Petri nets, with respect to which we investigate the expressive power of the logic...

  16. Comparison of linear and non-linear models for the adsorption of fluoride onto geo-material: limonite.

    Sahin, Rubina; Tapadia, Kavita


    The three widely used isotherms, Langmuir, Freundlich and Temkin, were examined in an experiment using fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures by linear and non-linear models. A comparison of linear and non-linear regression models is given for selecting the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3, and 4) are discussed. The Langmuir isotherm parameters obtained from the four Langmuir linear equations using the linear model differed, but they were the same when using the non-linear model. Langmuir-2 is one of the linear forms, and it had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3 and 4) in linear form, whereas, for non-linear regression, Langmuir-4 fitted best among all the isotherms because it had the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is spontaneous (ΔG < 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of the F⁻ ion onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, a new technology for large-scale fluoride removal could be developed using limonite, which is cost-effective, eco-friendly and easily available in the study area.
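The linear-versus-non-linear comparison above can be illustrated with the Langmuir isotherm itself: the Langmuir-2 linearization, 1/qe = 1/qmax + (1/(K·qmax))·(1/Ce), turns the isotherm into a straight line whose slope and intercept recover the parameters. A sketch with hypothetical, noise-free parameters (qmax = 2.0 mg/g, K = 0.5 L/mg), not the limonite data:

```python
def langmuir(C, q_max, K):
    """Langmuir isotherm: adsorbed amount q at equilibrium concentration C."""
    return q_max * K * C / (1.0 + K * C)

def ols_line(x, y):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical noise-free data generated from q_max = 2.0 mg/g, K = 0.5 L/mg
C = [0.5, 1.0, 2.0, 4.0, 8.0]
q = [langmuir(c, 2.0, 0.5) for c in C]

# Langmuir-2 form: 1/q = (1/(K*q_max)) * (1/C) + 1/q_max
slope, intercept = ols_line([1.0 / c for c in C], [1.0 / qi for qi in q])
q_max_est = 1.0 / intercept   # recovers q_max (about 2.0)
K_est = intercept / slope     # recovers K (about 0.5), since slope = 1/(K*q_max)
```

With real, noisy data the different linearizations distort the error structure differently, which is exactly why the paper's linear forms give different parameter estimates while the non-linear fit gives one consistent set.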

  17. Robust Linear Models for Cis-eQTL Analysis.

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C


    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
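    A minimal sketch of the idea, assuming simulated data rather than the authors' eQTL pipeline: one common robust estimator, Huber's M-estimator fitted by iteratively reweighted least squares (not necessarily the exact model the paper uses), is compared with ordinary least squares on a dosage regression contaminated with outliers.

    ```python
    import numpy as np

    # Hypothetical data: genotype dosage g in {0,1,2} predicts expression y,
    # with a few outlying observations added to mimic heavy-tailed noise.
    rng = np.random.default_rng(1)
    n = 200
    g = rng.integers(0, 3, n).astype(float)      # allelic dosage
    beta_true = 0.5
    y = 1.0 + beta_true * g + 0.3 * rng.standard_normal(n)
    y[:8] += 6.0                                  # contamination / outliers

    X = np.column_stack([np.ones(n), g])

    # Ordinary least squares
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

    # Huber M-estimator via iteratively reweighted least squares
    def huber_fit(X, y, k=1.345, n_iter=50):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        for _ in range(n_iter):
            r = y - X @ beta
            s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale
            w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
            beta = np.linalg.solve(X.T @ (w[:, None] * X), (w * y) @ X)
        return beta

    beta_rob = huber_fit(X, y)
    ```

    On such contaminated data the robust fit stays close to the generating coefficients while the OLS estimates are pulled towards the outliers, which is the behaviour the abstract argues matters for eQTL scans.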

  18. Linear approximation model network and its formation via ...

    To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  19. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    Ho, Yuh-Shan


    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
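    The type-1 linearisation referred to above can be written t/qt = 1/(k·qe²) + t/qe, so a straight-line fit of t/qt against t yields qe from the slope and k from the intercept. A minimal sketch with illustrative values (not Ho's cadmium data):

    ```python
    import numpy as np

    # Pseudo-second-order kinetics: qt = k*qe^2*t / (1 + k*qe*t).
    # Type-1 linear form: t/qt = 1/(k*qe^2) + t/qe.
    k_true, qe_true = 0.05, 4.0
    t = np.linspace(1, 120, 25)
    qt = k_true * qe_true**2 * t / (1 + k_true * qe_true * t)

    # Straight-line fit of t/qt versus t (noise-free, so recovery is exact)
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe_est = 1.0 / slope
    k_est = 1.0 / (intercept * qe_est**2)
    ```

    On noise-free data all four linear forms and the non-linear fit agree; the discrepancies the abstract reports arise once measurement noise is transformed differently by each linearisation.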

  20. Non-linear sigma model on the fuzzy supersphere

    Kurkcuoglu, Seckin


    In this note we develop fuzzy versions of the supersymmetric non-linear sigma model on the supersphere S^(2,2). In hep-th/0212133, Bott projectors were used to obtain the fuzzy CP¹ model. Our approach uses supersymmetric extensions of these projectors. Here we obtain these (super-)projectors and quantize them in a fashion similar to that given in hep-th/0212133. We discuss the interpretation of the resulting model as a finite-dimensional matrix model. (author)

  1. Random effect selection in generalised linear models

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  2. Further results on "Robust MPC using Linear Matrix Inequalities"

    Lazar, M.; Heemels, W.P.M.H.; Munoz de la Pena, D.; Alamo, T.


    This paper presents a novel method for designing the terminal cost and the auxiliary control law (ACL) for robust MPC of uncertain linear systems, such that input-to-state stability (ISS) is a priori guaranteed for the closed-loop system. The method is based on the solution of a set of linear matrix inequalities (LMIs). An explicit relation is ...

  3. Linear regression crash prediction models : issues and proposed solutions.


    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions, namely error structure normality ...

  4. Game Theory and its Relationship with Linear Programming Models ...

    This paper shows that game theory and the linear programming problem are closely related subjects, since any computing method devised for ...


    Hasan YILDIZ


    Deep drawing is one of the main processes used in different branches of industry. Finding numerical solutions for the determination of the mechanical behaviour of this process will save time and money. For die surfaces with complex geometries, it is hard to determine the effects of the parameters of sheet metal forming, such as wrinkling, tearing, the flow of the thin sheet metal in the die, and thickness change. However, the most difficult is the determination of material properties during plastic deformation. In this study, the effects of all these parameters are analyzed before producing the dies. The explicit non-linear finite element method is chosen for the analysis. The numerical results obtained for non-linear material and contact models are compared with experiments. Good agreement between the numerical and the experimental results is obtained. The results obtained for the models are given in detail.

  6. Artificial Neural Network versus Linear Models Forecasting Doha Stock Market

    Yousif, Adil; Elfaki, Faiz


    The purpose of this study is to determine the volatility of the Doha stock market and to develop forecasting models. Linear time series models are used and compared with a non-linear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The aim is to establish the best model based on daily and monthly data collected from the Qatar Exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange and for use in several other sectors. With the help of these models, the Doha stock market index and various other sectors were predicted. The study used various time series techniques to study and analyze data trends in order to produce appropriate results. After applying several models, such as the quadratic trend model, the double exponential smoothing model and ARIMA, it was concluded that ARIMA (2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.

  7. Modeling and analysis of linear hyperbolic systems of balance laws

    Bartecki, Krzysztof


    This monograph focuses on the mathematical modeling of distributed parameter systems in which mass/energy transport or wave propagation phenomena occur and which are described by partial differential equations of hyperbolic type. The case of linear (or linearized) 2 x 2 hyperbolic systems of balance laws is considered, i.e., systems described by two coupled linear partial differential equations with two variables representing physical quantities, depending on both time and one-dimensional spatial variable. Based on practical examples of a double-pipe heat exchanger and a transportation pipeline, two typical configurations of boundary input signals are analyzed: collocated, wherein both signals affect the system at the same spatial point, and anti-collocated, in which the input signals are applied to the two different end points of the system. The results of this book emerge from the practical experience of the author gained during his studies conducted in the experimental installation of a heat exchange cente...

  8. A Note on the Identifiability of Generalized Linear Mixed Models

    Labouriau, Rodrigo


    I present here a simple proof that, under general regularity conditions, the standard parametrization of generalized linear mixed models is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...

  9. Linear models for joint association and linkage QTL mapping

    Fernando Rohan L


    Abstract. Background: Populational linkage disequilibrium and within-family linkage are commonly used for QTL mapping and marker-assisted selection. The combination of both results in more robust and accurate locations of the QTL, but models proposed so far have been either single-marker, complex in practice, or well fit only to a particular family structure. Results: We herein present linear model theory to obtain additive effects of the QTL alleles in any member of a general pedigree, conditional on observed markers and pedigree, accounting for possible linkage disequilibrium among QTLs and markers. The model is based on association analysis in the founders; further, the additive effect of the QTLs transmitted to the descendants is a weighted (by the probabilities of transmission) average of the substitution effects of founders' haplotypes. The model allows for non-complete linkage disequilibrium between QTLs and markers in the founders. Two submodels are presented: a simple and easy-to-implement Haley-Knott type regression for half-sib families, and a general mixed (variance component) model for general pedigrees. The model can use information from all markers. The performance of the regression method is compared by simulation with a more complex IBD method by Meuwissen and Goddard. Numerical examples are provided. Conclusion: The linear model theory provides a useful framework for QTL mapping with dense marker maps. Results show similar accuracies but a bias of the IBD method towards the center of the region. Computations for the linear regression model are extremely simple, in contrast with IBD methods. Extensions of the model to genomic selection and multi-QTL mapping are straightforward.




    Previous analyses have assumed that wedge absorbers are triangularly shaped with equal angles for the two faces. In this case, to linear order, the energy loss depends only on the position in the direction of the face tilt and is independent of the incoming angle. One can instead construct an absorber with entrance and exit faces oriented in rather general directions. In this case, the energy loss can depend on both the position and the angle of the particle in question. This paper demonstrates this and computes the effect to linear order.

  11. Linear Model for Optimal Distributed Generation Size Predication

    Ahmed Al Ameri


    This article presents a linear model for predicting the optimal size of Distributed Generation (DG) that minimizes power loss. The method is based fundamentally on the strong coupling between active power and voltage angle, as well as between reactive power and voltage magnitude. The paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. The linearized model gives good results and can substantially reduce the processing time required. Acceptable accuracy with less time and memory required can help the grid operator to assess power systems integrating large-scale distributed generation.

  12. An online re-linearization scheme suited for Model Predictive and Linear Quadratic Control

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the equations for the primal-dual interior-point quadratic programming problem solver used for MPC. The algorithm exploits the special structure of the MPC problem and is able to reduce the computational burden such that it scales with prediction horizon length in a linear way rather than cubic, which would be the case if the structure were not exploited. It is also shown how models used for the design of model-based controllers, e.g. linear quadratic and model predictive, can be linearized both at equilibrium and non-equilibrium points, making...

  13. A reexamination of some puzzling results in linearized elasticity

    University of North Carolina at Charlotte, Charlotte, NC 28223-0001, USA ... T̂(F) = C[ϵ] + o(∇u), where ϵ = [∇u + (∇u)ᵀ]/2 and C = DT̂(I) is the elasticity tensor; one also linearizes the body force vector to get b = Qᵀ[b* − c̈] − ω̇ × X − ω × (ω × X) − 2ω × v, (5) where X is the position ...

  14. Electromagnetic axial anomaly in a generalized linear sigma model

    Fariborz, Amir H.; Jora, Renata


    We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π⁰(137), π⁰(1300), η(547), η′(958), η(1295) and η(1760). Our results agree well with the available experimental data.

  15. Tried and True: Springing into Linear Models

    Darling, Gerald


    In eighth grade, students usually learn about forces in science class and linear relationships in math class, crucial topics that form the foundation for further study in science and engineering. An activity that links these two fundamental concepts involves measuring the distance a spring stretches as a function of how much weight is suspended…

  16. A penalized framework for distributed lag non-linear models.

    Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G


    Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  17. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

    Bambang Riyanto


    In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems, formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems, and can therefore be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will contribute to the development of control design methods for those areas as well, as shown by three examples.

  18. Ordinal Log-Linear Models for Contingency Tables

    Brzezińska Justyna


    A log-linear analysis is a method providing a comprehensive scheme to describe the association between categorical variables in a contingency table. The log-linear model specifies how the expected counts depend on the levels of the categorical variables for the cells and provides detailed information on the associations. The aim of this paper is to present theoretical as well as empirical aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: the linear-by-linear association model, the row effect model, the column effect model and Goodman's RC model. The algorithm, its advantages and disadvantages are discussed in the paper. An empirical analysis is conducted with the use of R.
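    The linear-by-linear association model mentioned above has the form log μ_ij = λ + λ_i + λ_j + β·u_i·v_j with fixed ordinal scores u_i, v_j. A minimal numpy sketch (hypothetical counts and equally spaced scores; the paper's own analysis uses R) fits it by Poisson IRLS:

    ```python
    import numpy as np

    # Linear-by-linear association for an I x J ordinal table:
    # log(mu_ij) = lam + a_i + b_j + beta * u_i * v_j
    rng = np.random.default_rng(2)
    I, J = 4, 3
    u = np.arange(1.0, I + 1)            # equally spaced row scores
    v = np.arange(1.0, J + 1)            # equally spaced column scores

    rows = np.repeat(np.arange(I), J)
    cols = np.tile(np.arange(J), I)
    X = np.column_stack(
        [np.ones(I * J)]
        + [(rows == i).astype(float) for i in range(1, I)]
        + [(cols == j).astype(float) for j in range(1, J)]
        + [u[rows] * v[cols]]            # linear-by-linear term
    )

    # Counts drawn from the model with association parameter beta = 0.1
    truth = np.array([3.0, 0.2, -0.1, 0.3, 0.1, -0.2, 0.1])
    y = rng.poisson(np.exp(X @ truth)).astype(float)

    # Newton-Raphson (IRLS) for the Poisson log-linear model
    beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]
    for _ in range(25):
        mu = np.exp(X @ beta)
        beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))

    beta_uv = beta[-1]                   # estimated association parameter
    ```

    A single scalar β thus summarises the ordinal association, which is what distinguishes this model from the unconstrained saturated log-linear model.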

  19. Alignment of the Stanford Linear Collider Arcs: Concepts and results

    Pitthan, R.; Bell, B.; Friedsam, H.; Pietryka, M.; Oren, W.; Ruland, R.


    The alignment of the Arcs for the Stanford Linear Collider at SLAC has posed problems in accelerator survey and alignment not encountered before. These problems arise less from the tight tolerances of 0.1 mm, although reaching such a tight, statistically defined accuracy in a controlled manner is difficult enough, than from the absence of a common reference plane for the Arcs. Traditional circular accelerators, including HERA and LEP, have been designed in one plane referenced to local gravity. For the SLC Arcs no such single plane exists. Methods and concepts developed to solve these and other problems connected with the unique design of the SLC range from the first use of satellites for accelerator alignment, the use of electronic laser theodolites for the placement of components, computer control of the manual adjustment process, and complete automation of the data flow incorporating the most advanced concepts of geodesy, through strict separation of survey and alignment, to linear principal component analysis for the final statistical smoothing of the mechanical components.

  20. Recent Updates to the GEOS-5 Linear Model

    Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul


    The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.

  1. Double generalized linear compound poisson models to insurance claims data

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo


    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed of a degenerate distribution … implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances…

  2. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Luo, Wen; Azen, Razia


    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
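    The building block DA extends to HLMs can be sketched for ordinary regression: a predictor's general dominance weight is its additional R² averaged over all subsets of the other predictors, first within each subset size and then across sizes. A minimal numpy sketch on hypothetical data (the article substitutes HLM model-adequacy measures for R²):

    ```python
    import numpy as np
    from itertools import combinations

    # R^2 of an OLS fit of y on an intercept plus the listed columns of X
    def r2(X, y, cols):
        Z = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        return 1.0 - resid.var() / y.var()

    # General dominance: average incremental R^2 over subset sizes 0..p-1
    def general_dominance(X, y):
        p = X.shape[1]
        weights = np.zeros(p)
        for j in range(p):
            others = [c for c in range(p) if c != j]
            by_size = []
            for k in range(p):
                deltas = [r2(X, y, list(S) + [j]) - r2(X, y, list(S))
                          for S in combinations(others, k)]
                by_size.append(sum(deltas) / len(deltas))
            weights[j] = sum(by_size) / p
        return weights

    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 3))
    X[:, 1] += 0.5 * X[:, 0]                  # correlated predictors
    y = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.standard_normal(200)

    gd = general_dominance(X, y)
    ```

    A useful sanity check is that the general dominance weights sum to the full-model R², so they partition explained variance among correlated predictors.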

  3. Thurstonian models for sensory discrimination tests as generalized linear models

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen


    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...
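    As a concrete textbook case of such a Thurstonian link (used here only to illustrate the GLM viewpoint, not reproduced from the paper): for the 2-AFC protocol the proportion correct satisfies pc = Φ(d′/√2), so the probit of pc is linear in d′, i.e. a binomial GLM with a probit-type link. Valid for 0.5 < pc < 1:

    ```python
    import math

    # Standard normal CDF
    def phi(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    # Invert pc = Phi(d'/sqrt(2)) for the 2-AFC test by bisection
    def d_prime_2afc(pc):
        lo, hi = 0.0, 10.0
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if phi(mid / math.sqrt(2.0)) < pc:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    d = d_prime_2afc(0.75)   # d' implied by 75% correct answers
    ```

    Each of the four protocols in the abstract has its own psychometric function linking d′ to the proportion correct; only the 2-AFC case is shown here because it has this particularly simple closed form.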

  4. Forecasting Volatility of Dhaka Stock Exchange: Linear Vs Non-linear models

    Masudul Islam


    Prior information about a financial market is essential for investors deciding whether to purchase shares from the stock market, which can strengthen the economy. The study examines the relative ability of various models to forecast the future volatility of daily stock indexes. The forecasting models employed range from simple to relatively complex ARCH-class models. It is found that among linear models of stock index volatility, the moving average model ranks first using the root mean square error, mean absolute percent error, Theil-U and Linex loss function criteria. We also examine five non-linear models: ARCH, GARCH, EGARCH, TGARCH and restricted GARCH. We find that the non-linear models fail to dominate the linear models under the different error measurement criteria, and the moving average model appears to be the best. We then forecast the next two months of stock index price volatility with the best (moving average) model.

  6. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.


    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  7. Dynamic generalized linear models for monitoring endemic diseases

    Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq


    The objective was to use a Dynamic Generalized Linear Model (DGLM) based on a binomial distribution with a linear trend for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control and eradication programmes based on changes in PRRS sero-prevalence was explored. Results showed a declining trend in PRRS sero-prevalence between 2007 and 2014, suggesting that Danish herds are slowly eradicating PRRS. The simulation study demonstrated the flexibility of DGLMs in adapting to changes in trends in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof of concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance...

  8. Generalised linear models for correlated pseudo-observations, with applications to multi-state models

    Andersen, Per Kragh; Klein, John P.; Rosthøj, Susanne


    Generalised estimating equation; Generalised linear model; Jackknife pseudo-value; Logistic regression; Markov model; Multi-state model.

  9. Linear and non-linear autoregressive models for short-term wind speed forecasting

    Lydia, M.; Suresh Kumar, S.; Immanuel Selvakumar, A.; Edwin Prem Kumar, G.


    Highlights: • Models for wind speed prediction at 10-min intervals up to 1 h built on time-series wind speed data. • Four different multivariate models for wind speed built based on exogenous variables. • Non-linear models built using three data mining algorithms outperform the linear models. • Autoregressive models based on wind direction perform better than other models. - Abstract: Wind speed forecasting aids in estimating the energy produced from wind farms. The soaring energy demands of the world and minimal availability of conventional energy sources have significantly increased the role of non-conventional sources of energy like solar, wind, etc. Development of models for wind speed forecasting with higher reliability and greater accuracy is the need of the hour. In this paper, models for predicting wind speed at 10-min intervals up to 1 h have been built based on linear and non-linear autoregressive moving average models with and without external variables. The autoregressive moving average models based on wind direction and annual trends have been built using data obtained from Sotavento Galicia Plc. and autoregressive moving average models based on wind direction, wind shear and temperature have been built on data obtained from Centre for Wind Energy Technology, Chennai, India. While the parameters of the linear models are obtained using the Gauss–Newton algorithm, the non-linear autoregressive models are developed using three different data mining algorithms. The accuracy of the models has been measured using three performance metrics namely, the Mean Absolute Error, Root Mean Squared Error and Mean Absolute Percentage Error.

  10. Modelling of Asphalt Concrete Stiffness in the Linear Viscoelastic Region

    Mazurek, Grzegorz; Iwański, Marek


    Stiffness modulus is a fundamental parameter used in the modelling of the viscoelastic behaviour of bituminous mixtures. On the basis of the master curve in the linear viscoelasticity range, the mechanical properties of asphalt concrete at different loading times and temperatures can be predicted. This paper discusses the construction of master curves using rheological mathematical models, i.e. the sigmoidal function model (MEPDG), the fractional model, and Bahia and co-workers' model, in comparison with the results from mechanistic rheological models, i.e. the generalized Huet-Sayegh model, the generalized Maxwell model and the Burgers model. For the purposes of this analysis, the reference asphalt concrete mix (denoted AC16W), intended for the binder course layer and for traffic category KR3 (5×10⁵ …), was tested in controlled strain mode. The fixed strain level was set at 25 με to guarantee that the stiffness modulus of the asphalt concrete would be tested in the linear viscoelasticity range. The master curve was formed using the time-temperature superposition principle (TTSP). The stiffness modulus of the asphalt concrete was determined at temperatures of 10°C, 20°C and 40°C and at loading frequencies of 0.1, 0.3, 1, 3, 10 and 20 Hz. The model parameters were fitted to the rheological models using original programs based on the nonlinear least squares method. All the rheological models under analysis were found to be capable of predicting changes in the stiffness modulus of the reference asphalt concrete to satisfactory accuracy. In the cases of the fractional model and the generalized Maxwell model, the accuracy depends on the number of elements in series. The best fit was registered for the Bahia and co-workers model, the generalized Maxwell model and the fractional model. As for predicting the phase angle parameter, the largest discrepancies between experimental and modelled results were obtained using the fractional model. Except for the Burgers model, the model matching quality was...

  11. Applicability of linear and non-linear potential flow models on a Wavestar float

    Bozonnet, Pauline; Dupin, Victor; Tona, Paolino


    Numerical models based on potential flow theory, including different types of nonlinearities, are compared and validated against experimental data for the Wavestar wave energy converter technology. Exact resolution of the rotational motion and non-linear hydrostatic and Froude-Krylov forces, as well as a model based on non-linear potential flow theory and the weak-scatterer hypothesis, are successively considered. Simple tests, such as dip tests, decay tests and captive tests, enable the improvements obtained with the introduction of nonlinearities to be highlighted. Float motion under wave action and without control action, limited to small-amplitude motion with a single float, is well predicted by the numerical models, including the linear one. Still, float velocity is better predicted by accounting for non-linear hydrostatic and Froude-Krylov forces.

  12. A linear model of population dynamics

    Lushnikov, A. A.; Kagan, A. I.


    The Malthus process of population growth is reformulated in terms of the probability w(n,t) to find exactly n individuals at time t assuming that both the birth and the death rates are linear functions of the population size. The master equation for w(n,t) is solved exactly. It is shown that w(n,t) strongly deviates from the Poisson distribution and is expressed in terms either of Laguerre’s polynomials or a modified Bessel function. The latter expression allows for considerable simplifications of the asymptotic analysis of w(n,t).
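    The master equation referred to above is, for linear birth rate b·n and death rate d·n, dw(n,t)/dt = b(n−1)w(n−1,t) + d(n+1)w(n+1,t) − (b+d)n·w(n,t). A direct numerical integration (illustrative rates, truncated state space) confirms the Malthusian mean ⟨n⟩(t) = n₀·exp((b−d)t) and the claimed deviation from the Poisson distribution, which the exact solution expresses via Laguerre polynomials or a modified Bessel function:

    ```python
    import numpy as np

    # Integrate the linear birth-death master equation by explicit Euler.
    b, d, n0, T = 1.0, 0.4, 5, 1.0
    N = 400                                  # truncation of the state space
    n = np.arange(N, dtype=float)
    w = np.zeros(N)
    w[n0] = 1.0                              # start with exactly n0 individuals

    dt = 1e-4
    for _ in range(int(round(T / dt))):
        dw = np.zeros(N)
        dw[1:] += b * n[:-1] * w[:-1]        # birth flux from state n-1
        dw[:-1] += d * n[1:] * w[1:]         # death flux from state n+1
        dw -= (b + d) * n * w                # loss out of state n
        w += dt * dw

    mean = float(n @ w)
    var = float((n - mean) ** 2 @ w)         # variance exceeds the mean
    ```

    The variance-to-mean ratio grows above 1, whereas a Poisson distribution would keep it at exactly 1; this is the over-dispersion the abstract describes.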

  13. Comparison of Linear Prediction Models for Audio Signals


    While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation of why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be appropriate only when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
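    The starting observation above is easy to verify: a single noiseless sinusoid obeys the exact order-2 all-pole recursion x[n] = 2cos(ω)x[n−1] − x[n−2], so LP of order twice the number of sinusoids predicts it perfectly. A minimal sketch (covariance-method least squares on a synthetic tone; with noise added, the low-order fit degrades, as the abstract discusses):

    ```python
    import numpy as np

    w = 2 * np.pi * 0.13                     # arbitrary normalized tone frequency
    x = np.sin(w * np.arange(200))

    # Predict x[n] from the two previous samples (order-2 LP, least squares)
    A = np.column_stack([x[1:-1], x[:-2]])
    a = np.linalg.lstsq(A, x[2:], rcond=None)[0]

    # The fit recovers a = [2*cos(w), -1] and a (near-)zero prediction error
    pred_error = x[2:] - A @ a
    ```

    The recovered coefficients place the two poles of 1/A(z) exactly on the unit circle at e^{±jω}, which is why the prediction is exact; additive noise moves the optimal low-order poles off the circle and breaks this property.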

  14. A quasi-linear gyrokinetic transport model for tokamak plasmas

    Casati, A.


    After a presentation of some basics of nuclear fusion, this research thesis introduces the framework of the tokamak strategy to deal with confinement, hence the main plasma instabilities which are responsible for turbulent transport of energy and matter in such a system. The author also briefly introduces the two principal plasma representations, the fluid and the kinetic ones, and explains why the gyro-kinetic approach has been preferred. A tokamak-relevant case is presented in order to highlight the importance of correctly accounting for the kinetic wave-particle resonance. He then discusses the issue of the quasi-linear response. Firstly, the derivation of the model, called QuaLiKiz, and its underlying hypotheses for obtaining the energy and particle turbulent fluxes are presented. Secondly, the validity of the quasi-linear response is verified against nonlinear gyro-kinetic simulations. The saturation model assumed in QuaLiKiz is presented and discussed. The author then qualifies the global outcomes of QuaLiKiz: both the quasi-linear energy and particle fluxes are compared to the expectations from nonlinear simulations across a wide scan of tokamak-relevant parameters. Next, the coupling of QuaLiKiz with the integrated transport solver CRONOS is presented: this procedure allows the time-dependent transport problem to be solved, hence the direct application of the model to experiment. The first preliminary results regarding the experimental analysis are finally discussed.

  15. Neutron stars in non-linear coupling models

    Taurines, Andre R.; Vasconcellos, Cesar A.Z.; Malheiro, Manuel; Chiapparini, Marcelo


    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, ∼ 0.72 M_sun. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  17. Linear theory for filtering nonlinear multiscale systems with model error.

    Berry, Tyrus; Harlim, John


    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering …
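    The linear Gaussian setting in which the paper's uniqueness result holds can be illustrated with an ordinary discrete-time Kalman filter. A hedged sketch (a scalar stand-in, not the authors' reduced multiscale model; dynamics and noise levels are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.9, 0.5, 1.0          # assumed dynamics, process and obs. noise variances
T = 2000

# Simulate the linear Gaussian truth and its noisy observations.
x = np.zeros(T); y = np.zeros(T)
for k in range(1, T):
    x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
    y[k] = x[k] + np.sqrt(r) * rng.standard_normal()

m, P = 0.0, 1.0                  # filter mean and covariance
est = np.zeros(T)
for k in range(1, T):
    m_pred, P_pred = a * m, a * a * P + q          # forecast step
    K = P_pred / (P_pred + r)                      # Kalman gain
    m = m_pred + K * (y[k] - m_pred)               # analysis step
    P = (1.0 - K) * P_pred
    est[k] = m

mse_filter = float(np.mean((est - x) ** 2))
mse_obs = float(np.mean((y - x) ** 2))   # filtering beats raw observations
```

    In the paper's setting, the question is which reduced-model parameters (here, the analogues of `a`, `q`, `r`) make such a filter optimal for the slow variables.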

  18. A non-linear model of information seeking behaviour

    Allen E. Foster


    The results of a qualitative, naturalistic study of information seeking behaviour are reported in this paper. The study applied the methods recommended by Lincoln and Guba for maximising credibility, transferability, dependability, and confirmability in data collection and analysis. Sampling combined purposive and snowball methods, and led to a final sample of 45 inter-disciplinary researchers from the University of Sheffield. In-depth semi-structured interviews were used to elicit detailed examples of information seeking. Coding of interview transcripts took place in multiple iterations over time and used Atlas-ti software to support the process. The results of the study are represented in a non-linear Model of Information Seeking Behaviour. The model describes three core processes (Opening, Orientation, and Consolidation) and three levels of contextual interaction (Internal Context, External Context, and Cognitive Approach), each composed of several individual activities and attributes. The interactivity and shifts described by the model show information seeking to be non-linear, dynamic, holistic, and flowing. The paper concludes by describing the whole model of behaviours as analogous to an artist's palette, in which activities remain available throughout information seeking. A summary of key implications of the model and directions for further research are included.

  19. A test for the parameters of multiple linear regression models ...

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  20. Modeling Non-Linear Material Properties in Composite Materials


    Technical Report ARWSB-TR-16013, "Modeling Non-Linear Material Properties in Composite Materials," by Michael F. Macri and Andrew G… [standard report documentation page omitted] … are increasingly incorporating composite materials into their design. Many of these systems subject the composites to environmental conditions

  1. Technical note: A linear model for predicting δ13 Cprotein.

    Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M


    Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model, δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7, possessing a high correlation (r = 0.93, r² = 0.86, P …) … analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
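    The published two-term regression is simple enough to apply directly. A minimal sketch using the coefficients quoted in the abstract; the input values in the example are illustrative, not from the paper:

```python
def predict_d13c_protein(d13c_collagen, d13c_apatite_collagen_spacing):
    """Predict d13C of dietary protein (per mil) from bone isotope values,
    using the abstract's model: 0.78*d13Cco - 0.58*D13Cap-co - 4.7."""
    return 0.78 * d13c_collagen - 0.58 * d13c_apatite_collagen_spacing - 4.7

# Illustrative (hypothetical) inputs: d13Cco = -19.0, D13Cap-co = 4.5
estimate = predict_d13c_protein(-19.0, 4.5)
```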

  2. Reliability modelling and simulation of switched linear system ...

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  3. INTRAVAL test case 1b - modelling results

    Jakob, A.; Hadermann, J.


    This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with 233U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual porosity media models taking into account advection and dispersion in water conducting zones (either tubelike veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure, yielding error estimates of the parameters, correlation coefficients and also, as a measure for the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated, and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual porosity media models are significantly more appropriate for the experimental data than the single porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm which is responsible for the tailing part of the break-through curves. The extracted parameter values for some models for both the linear and non-linear (Freundlich) sorption isotherms are consistent with the results of independent static batch sorption experiments. From the fits, it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs

  4. Predicting Madura cattle growth curve using non-linear model

    Widyas, N.; Prastowo, S.; Widi, T. S. M.; Baliarti, E.


    Madura cattle are an Indonesian native breed. It is a composite breed that has undergone hundreds of years of selection and domestication to reach nowadays remarkable uniformity. Crossbreeding has reached the isle of Madura, and the Madrasin, a cross between Madura cows and Limousine semen, emerged. This paper aimed to compare the growth curve between Madrasin and one type of pure Madura cows, the common Madura cattle (Madura), using non-linear models. Madura cattle are kept traditionally, so reliable records are hardly available. Data were collected from small holder farmers in Madura. Cows from different age classes (5 years) were observed, and body measurements (chest girth, body length and wither height) were taken. In total, 63 Madura and 120 Madrasin records were obtained. A linear model was built with cattle sub-populations and age as explanatory variables. Body weights were estimated based on the chest girth. Growth curves were built using logistic regression. Results showed that within the same age, Madrasin has a significantly larger body than Madura (p < 0.05). Logistic models fit better for Madura and Madrasin cattle data; the estimated MSE for these models were 39.09 and 759.28, with prediction accuracy of 99 and 92% for Madura and Madrasin, respectively. Prediction of growth curve using the logistic regression model performed well in both types of Madura cattle. However, attempts to administer accurate data on Madura cattle are necessary to better characterize and study these cattle.
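    One common way to fit the logistic growth curve used above is to linearize it. A hedged sketch (synthetic data, assumed parameter values; not the authors' dataset): with the asymptotic weight A treated as known, W(t) = A / (1 + b·e^{-kt}) linearizes to ln(A/W − 1) = ln b − k·t, which ordinary least squares can fit.

```python
import numpy as np

A = 350.0                                          # assumed mature weight, kg
t = np.array([6., 12., 18., 24., 36., 48., 60.])   # age in months (synthetic)
true_b, true_k = 9.0, 0.08                         # illustrative curve parameters
w = A / (1.0 + true_b * np.exp(-true_k * t))       # synthetic body weights

z = np.log(A / w - 1.0)                 # linearized response: ln(b) - k*t
slope, intercept = np.polyfit(t, z, 1)  # ordinary least squares line fit
k_hat, b_hat = -slope, np.exp(intercept)
```

    With real, noisy field records one would instead fit all three parameters by non-linear least squares, but the linearization shows why the logistic form is convenient.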

  5. Predicting birth weight with conditionally linear transformation models.

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten


    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
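    For contrast with the CLTM approach above, the classical Gaussian linear model already yields prediction intervals, which is the baseline the paper compares against. A minimal sketch with synthetic data (a single hypothetical "ultrasound parameter" predictor, assumed coefficients; parameter-estimation uncertainty is ignored for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 10, n)                    # synthetic ultrasound measurement
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)    # synthetic birth-weight response

X = np.column_stack([np.ones(n), x])         # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma = float(np.std(resid, ddof=2))         # residual standard deviation

x_new = 5.0
y_hat = float(beta[0] + beta[1] * x_new)     # point prediction
lo, hi = y_hat - 1.96 * sigma, y_hat + 1.96 * sigma   # ~95% prediction interval
```

    The CLTM generalizes this by letting the whole conditional distribution (not just mean and constant variance) depend on the predictors, so intervals can widen or skew for specific fetuses.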

  6. Linear Regression Models for Estimating True Subsurface ...


    The objective is to minimize the processing time and computer memory required to carry out inversion … to the mainland by two long bridges. … In this approach, the model converges when the squared sum of the differences …

  7. Numerical modelling in non linear fracture mechanics

    Viggo Tvergaard


    Some numerical studies of crack propagation are based on using constitutive models that account for damage evolution in the material. When a critical damage value has been reached in a material point, it is natural to assume that this point has no more carrying capacity, as is done numerically in the element-vanish technique. In the present review this procedure is illustrated for micromechanically based material models, such as a ductile failure model that accounts for the nucleation and growth of voids to coalescence, and a model for intergranular creep failure with diffusive growth of grain boundary cavities leading to micro-crack formation. The procedure is also illustrated for low cycle fatigue, based on continuum damage mechanics. In addition, the possibility of crack growth predictions for elastic-plastic solids using cohesive zone models to represent the fracture process is discussed.

  8. Wavefront Sensing for WFIRST with a Linear Optical Model

    Jurling, Alden S.; Content, David A.


    In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.

  9. Robust Comparison of the Linear Model Structures in Self-tuning Adaptive Control

    Zhou, Jianjun; Conrad, Finn


    The Generalized Predictive Controller (GPC) is extended to systems with a generalized linear model structure which contains a number of choices of linear model structures. The Recursive Prediction Error Method (RPEM) is used to estimate the unknown parameters of the linear model structures to constitute a GPC self-tuner. Different linear model structures commonly used are compared and evaluated by applying them to the extended GPC self-tuner as well as to the special cases of the GPC, the GMV and MV self-tuners. The simulation results show how the choice of model structure affects the input-output behaviour of self-tuning controllers.
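    The estimation step inside such a self-tuner can be sketched with recursive least squares, a simple member of the RPEM family. A hedged illustration (a first-order ARX structure with assumed true parameters, not the paper's model set):

```python
import numpy as np

# Identify y[k] = -a1*y[k-1] + b1*u[k-1] + e[k] online via recursive
# least squares; true parameters and noise level are assumed demo values.
rng = np.random.default_rng(2)
a1_true, b1_true = -0.7, 0.5
N = 2000
u = rng.standard_normal(N)                   # excitation signal
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1_true * y[k - 1] + b1_true * u[k - 1] \
           + 0.01 * rng.standard_normal()

theta = np.zeros(2)                          # estimates of [a1, b1]
P = np.eye(2) * 1000.0                       # large initial covariance
for k in range(1, N):
    phi = np.array([-y[k - 1], u[k - 1]])    # regressor vector
    K = P @ phi / (1.0 + phi @ P @ phi)      # update gain
    theta = theta + K * (y[k] - phi @ theta) # parameter update
    P = P - np.outer(K, phi @ P)             # covariance update

a1_hat, b1_hat = theta
```

    In a self-tuner, each new `theta` would be handed to the control-law design (GPC, GMV or MV) at every sample.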

  10. Dose fractionated gamma knife radiosurgery for large arteriovenous malformations on daily or alternate day schedule outside the linear quadratic model: Proof of concept and early results. A substitute to volume fractionation.

    Mukherjee, Kanchan Kumar; Kumar, Narendra; Tripathi, Manjul; Oinam, Arun S; Ahuja, Chirag K; Dhandapani, Sivashanmugam; Kapoor, Rakesh; Ghoshal, Sushmita; Kaur, Rupinder; Bhatt, Sandeep


    To evaluate the feasibility, safety and efficacy of dose fractionated gamma knife radiosurgery (DFGKRS) on a daily schedule beyond the linear quadratic (LQ) model, for large volume arteriovenous malformations (AVMs). Between 2012-16, 14 patients with large AVMs (median volume 26.5 cc) unsuitable for surgery or embolization were treated in 2-3 DFGKRS sessions. The Leksell G frame was kept in situ during the whole procedure. 86% (n = 12) of patients had radiologic evidence of bleed, and 43% (n = 6) had presented with a history of seizures. 57% (n = 8) of patients received a daily treatment for 3 days and 43% (n = 6) were on an alternate day (2 fractions) regimen. The marginal dose was split into 2 or 3 fractions of the ideal prescription dose of a single fraction of 23-25 Gy. The median follow up period was 35.6 months (8-57 months). In the three-fraction scheme, the marginal dose ranged from 8.9-11.5 Gy, while in the two-fraction scheme, the marginal dose ranged from 11.3-15 Gy at 50% per fraction. Headache (43%, n = 6) was the most common early postoperative complication, which was controlled with a short course of steroids. Follow up evaluation of at least three years was achieved in seven patients, who have shown complete nidus obliteration in 43% of patients, while obliteration has been in the range of 50-99% in the rest. Overall, there was a 67.8% reduction in AVM volume at 3 years. Nidus obliteration at 3 years showed a significant rank order correlation with the cumulative prescription dose (ρ = 0.95, P = 0.01), with attainment of near-total (more than 95%) obliteration rates beyond 29 Gy of cumulative prescription dose. No patient receiving a cumulative prescription dose of less than 31 Gy had any severe adverse reaction. In covariate-adjusted ordinal regression, only the cumulative prescription dose had a significant correlation with common terminology criteria for adverse events (CTCAE) severity (P = 0.04), independent of age, AVM volume …

  11. Model Order Reduction for Non Linear Mechanics

    Pinillo, Rubén


    Context: Automotive industry is moving towards a new generation of cars. Main idea: Cars are furnished with radars, cameras, sensors, etc… providing useful information about the environment surrounding the car. Goals: Provide an efficient model for the radar input/output. Reducing computational costs by means of big data techniques.

  12. Identification of Influential Points in a Linear Regression Model

    Jan Grosz


    The article deals with the detection and identification of influential points in the linear regression model. Three methods of detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. This paper briefly describes theoretical aspects of several robust methods as well. Robust statistics is a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. A simulation model of the simple linear regression is presented.
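    Two of the standard diagnostics for influential points that such articles discuss, leverage (the hat-matrix diagonal) and Cook's distance, can be computed directly. A minimal sketch with synthetic data and one deliberately planted influential observation (not the article's simulation):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
x[-1], y[-1] = 25.0, 10.0            # plant one influential outlier

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T # hat matrix
leverage = np.diag(H)                # h_ii: pull of each point on its own fit

beta = np.linalg.inv(X.T @ X) @ X.T @ y
resid = y - X @ beta
p = X.shape[1]
s2 = float(resid @ resid) / (n - p)  # residual variance estimate
# Cook's distance: combines residual size and leverage.
cooks_d = resid ** 2 / (p * s2) * leverage / (1.0 - leverage) ** 2

most_influential = int(np.argmax(cooks_d))   # index of the planted point
```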

  13. Heterotic sigma models and non-linear strings

    Hull, C.M.


    The two-dimensional supersymmetric non-linear sigma models are examined with respect to the heterotic string. The paper was presented at the workshop on 'Supersymmetry and its applications', Cambridge, United Kingdom, 1985. The non-linear sigma model with Wess-Zumino-type term, the coupling of the fermionic superfields to the sigma model, super-conformal invariance, and the supersymmetric string are all discussed. (U.K.)

  14. Linear latent variable models: the lava-package

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben


    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition, an extensive simulation …

  15. Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling

    Samar Hayat Khan Tareen


    Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of different efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has been previously employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of the environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling, which lead to refinements among the forecasted behaviors in order to find out which ones are actually possible. We have taken two case studies: an interaction of three microbial species in a freshwater pond, and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems by finding timing constraints while keeping the complexity of the model …

  16. Generalized Linear Models with Applications in Engineering and the Sciences

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J


    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs) …

  17. Modelling a linear PM motor including magnetic saturation

    Polinder, H.; Slootweg, J.G.; Compter, J.C.; Hoeijmakers, M.J.


    The use of linear permanent-magnet (PM) actuators increases in a wide variety of applications because of the high force density, robustness and accuracy. The paper describes the modelling of a linear PM motor applied in, for example, wafer steppers, including magnetic saturation. This is important

  18. Linear system identification via backward-time observer models

    Juang, Jer-Nan; Phan, Minh


    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.

  19. Linear mixing model applied to AVHRR LAC data

    Holben, Brent N.; Shimabukuro, Yosio E.


    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.93 microns channel was extracted and used with the two reflective channels 0.58-0.68 microns and 0.725-1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
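    The per-pixel unmixing step of such a model can be sketched as constrained least squares. A hedged illustration (the endmember spectra below are invented for the demo, not the paper's; the sum-to-one constraint is imposed via a heavily weighted extra equation, and non-negativity by clipping, which is cruder than a true CLS solver):

```python
import numpy as np

# Illustrative endmember reflectances (columns: vegetation, soil, shade).
E = np.array([[0.05, 0.30, 0.02],    # band 1
              [0.45, 0.35, 0.02],    # band 2
              [0.10, 0.25, 0.01]])   # band 3
f_true = np.array([0.6, 0.3, 0.1])
pixel = E @ f_true                   # noiseless synthetic mixed pixel

# Append the constraint f1 + f2 + f3 = 1 as a heavily weighted equation.
A = np.vstack([E, 100.0 * np.ones(3)])
b = np.append(pixel, 100.0)
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
f_hat = np.clip(f_hat, 0.0, 1.0)     # crude non-negativity enforcement
```

    Applied to every AVHRR pixel, the recovered fractions form the vegetation, soil, and shade fraction images the abstract describes.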

  20. Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics

    Wang, John T.


    The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR, the area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small-scale yielding condition required for an LEFM analysis to be valid, the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.

  1. Bayesian uncertainty quantification in linear models for diffusion MRI.

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans


    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
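    The closed-form machinery behind this approach is the standard Gaussian-linear-model posterior. A hedged sketch of the general recipe (a generic design matrix with a simple isotropic prior, not the paper's dMRI bases or priors): for y = Xw + noise with prior w ~ N(0, τ²I) and noise variance s², the posterior over w is Gaussian, and any affine quantity aᵀw inherits a closed-form Gaussian posterior.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 3
X = rng.standard_normal((n, p))          # stand-in design matrix
w_true = np.array([1.0, -0.5, 0.25])
s2, tau2 = 0.1, 10.0                     # assumed noise and prior variances
y = X @ w_true + np.sqrt(s2) * rng.standard_normal(n)

Sigma = np.linalg.inv(X.T @ X / s2 + np.eye(p) / tau2)   # posterior covariance
mu = Sigma @ X.T @ y / s2                                # posterior mean

a = np.array([1.0, 1.0, 0.0])            # affine functional of interest
q_mean = float(a @ mu)                   # posterior mean of a^T w
q_sd = float(np.sqrt(a @ Sigma @ a))     # posterior sd -> uncertainty map value
```

    In the paper's setting, `q_sd` is the kind of quantity that becomes a voxel-wise uncertainty map or a down-weighting factor in group analyses.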

  2. Modelling non-linear effects of dark energy

    Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis


    We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift space spectrum. We find perturbation theory does very well in capturing non-linear effects coming from dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale dependent signatures in the range of scales considered. Non-standard equations of state also give scale dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. This data is given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets and a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w=‑1.1 case.

  3. Non Linear Modelling and Control of Hydraulic Actuators

    B. Šulc


    This paper deals with non-linear modelling and control of a differential hydraulic actuator. The nonlinear state space equations are derived from basic physical laws. They are more powerful than the transfer function in the case of linear models, and they allow the application of an object oriented approach in simulation programs. The effects of all friction forces (static, Coulomb and viscous) have been modelled, and many phenomena that are usually neglected are taken into account, e.g., the static term of friction and the leakage between the two chambers and the external space. Proportional Differential (PD) and Fuzzy Logic Controllers (FLC) have been applied in order to make a comparison by means of simulation. Simulation is performed using Matlab/Simulink, and some of the results are compared graphically. The FLC is tuned in such a way that it produces a constant control signal close to its maximum (or minimum) where possible. In the case of PD control the occurrence of peaks cannot be avoided. These peaks produce a very high velocity that oversteps the allowed values.

  4. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    Almedeij, Jaber


    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. A multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in reasonable agreement with observed values.
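    A minimal sketch of the regression setup described above, with synthetic stand-in data: the transforms' exponents, coefficients, and variable ranges below are assumptions for illustration, not Kuwait's fitted values.

```python
import numpy as np

# Synthetic meteorological predictors (assumed ranges, not the Kuwait data)
rng = np.random.default_rng(1)
T = rng.uniform(15, 45, 200)        # temperature, deg C
RH = rng.uniform(10, 90, 200)       # relative humidity, %
U = rng.uniform(0.5, 8.0, 200)      # wind speed, m/s
E = 0.05 * T**1.5 + 3.0 * np.exp(-0.02 * RH) + 0.4 * U \
    + rng.normal(0, 0.2, 200)       # pan evaporation, mm/day (synthetic)

# Linearising transforms (power for T, exponential for RH),
# then ordinary multiple linear regression by least squares
A = np.column_stack([np.ones_like(T), T**1.5, np.exp(-0.02 * RH), U])
coef, *_ = np.linalg.lstsq(A, E, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((E - pred)**2) / np.sum((E - E.mean())**2)
```

    Variable selection would then compare such candidate transform combinations by fit quality.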

  5. Linear approximation model network and its formation via ...

    niques, an alternative `linear approximation model' (LAM) network approach is .... network is LPV, existing LTI theory is difficult to apply (Kailath 1980). ..... Beck J V, Arnold K J 1977 Parameter estimation in engineering and science (New York: ...

  6. Sphaleron in a non-linear sigma model

    Sogo, Kiyoshi; Fujimoto, Yasushi.


    We present an exact classical saddle point solution in a non-linear sigma model. It has a topological charge 1/2 and mediates the vacuum transition. The quantum fluctuations and the transition rate are also examined. (author)

  7. On D-branes from gauged linear sigma models

    Govindarajan, S.; Jayaraman, T.; Sarkar, T.


    We study both A-type and B-type D-branes in the gauged linear sigma model by considering worldsheets with boundary. The boundary conditions on the matter and vector multiplet fields are first considered in the large-volume phase/non-linear sigma model limit of the corresponding Calabi-Yau manifold, where we find that we need to add a contact term on the boundary. These considerations enable us to derive the boundary conditions in the full gauged linear sigma model, including the addition of the appropriate boundary contact terms, such that these boundary conditions have the correct non-linear sigma model limit. Most of the analysis is for the case of Calabi-Yau manifolds with one Kaehler modulus (including those corresponding to hypersurfaces in weighted projective space), though we comment on possible generalisations.

  8. Optimization for decision making linear and quadratic models

    Murty, Katta G


    While maintaining the rigorous linear programming instruction required, Murty's new book is unique in its focus on developing modeling skills to support valid decision-making for complex real world problems, and includes solutions to brand new algorithms.

  9. Study of linear induction motor characteristics : the Mosebach model


    This report covers the Mosebach theory of the double-sided linear induction motor, starting with the idealized model and accompanying assumptions, and ending with relations for thrust, airgap power, and motor efficiency. Solutions of the magnetic in...

  10. Study of linear induction motor characteristics : the Oberretl model


    The Oberretl theory of the double-sided linear induction motor (LIM) is examined, starting with the idealized model and accompanying assumptions, and ending with relations for predicted thrust, airgap power, and motor efficiency. The effect of varyin...

  11. Non linear permanent magnets modelling with the finite element method

    Chavanne, J.; Meunier, G.; Sabonnadiere, J.C.


    In order to perform the calculation of permanent magnets with the finite element method, it is necessary to take into account the anisotropic behaviour of hard magnetic materials (Ferrites, NdFeB, SmCo5). In linear cases, the permeability of permanent magnets is a tensor. It is fully described by the permeabilities parallel and perpendicular to the easy axis of the magnet. In non linear cases, the model uses a texture function which represents the distribution of the local easy axes of the crystallites of the magnet. This function allows a good representation of the angular dependence of the coercive field of the magnet. As a result, it is possible to express the magnetic induction B and the tensor as functions of the field and the texture parameter. This model has been implemented in the software FLUX3D where the tensor is used for the Newton-Raphson procedure. 3D demagnetization of a ferrite magnet by a NdFeB magnet is a suitable representative example. The authors analyze the results obtained for an ideally oriented ferrite magnet and a real one using a measured texture parameter

  12. Study of the critical behavior of the O(N) linear and nonlinear sigma models

    Graziani, F.R.


    A study of the large N behavior of both the O(N) linear and nonlinear sigma models is presented. The purpose is to investigate the relationship between the disordered (ordered) phase of the linear and nonlinear sigma models. Utilizing operator product expansions and stability analyses, it is shown that for 2 < d < 4 the λ_R(M) → λ* limit (λ_R(M) is the dimensionless renormalized quartic coupling and λ* is the IR fixed point) of the linear sigma model yields the nonlinear sigma model. It is also shown that stable large N linear sigma models with λ > 0 and nonlinear models are trivial. This result (i.e., triviality) is well known but only for one and two component models. Interestingly enough, the λ < 0, d = 4 linear sigma model remains nontrivial and tachyon-free

  13. Optimization Research of Generation Investment Based on Linear Programming Model

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method that assists people in scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming forms, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on the linear programming model, the optimized investment decision-making of generation is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
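    The abstract's GAMS workflow is not reproduced here, but the same kind of generation-investment LP can be sketched with SciPy. All costs, capacity limits, and the demand figure below are invented for illustration.

```python
from scipy.optimize import linprog

# Toy LP: choose installed capacities x1, x2 (MW) of two plant types to
# cover peak demand at minimum annualised cost.
# minimise  cost . x   subject to  x1 + x2 >= demand  and siting bounds.
cost = [60.0, 90.0]                 # annualised $/kW-yr (invented)
demand = 1000.0                     # MW peak demand to cover (invented)
A_ub = [[-1.0, -1.0]]               # -(x1 + x2) <= -demand
b_ub = [-demand]
bounds = [(0, 700), (0, 600)]       # per-type capacity limits (invented)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
# Cheapest plan: fill the low-cost plant to its limit, cover the rest
```

    Here the solver installs 700 MW of the cheaper type and 300 MW of the dearer one.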

  14. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    Liang, Faming; Song, Qifan; Yu, Kai


    criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening

  15. Probabilistic model of ligaments and tendons: Quasistatic linear stretching

    Bontempi, M.


    Ligaments and tendons have a significant role in the musculoskeletal system and are frequently subjected to injury. This study presents a model of collagen fibers, based on the study of a statistical distribution of fibers when they are subjected to quasistatic linear stretching. With respect to other methodologies, this model is able to describe the behavior of the bundle using less ad hoc hypotheses and is able to describe all the quasistatic stretch-load responses of the bundle, including the yield and failure regions described in the literature. It has two other important results: the first is that it is able to correlate the mechanical behavior of the bundle with its internal structure, and it suggests a methodology to deduce the fibers population distribution directly from the tensile-test data. The second is that it can follow fibers’ structure evolution during the stretching and it is possible to study the internal adaptation of fibers in physiological and pathological conditions.

  16. Linear mixing model applied to coarse resolution satellite data

    Holben, Brent N.; Shimabukuro, Yosio E.


    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of the unmixing techniques when applied to coarse resolution data for global studies.
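    A sum-to-one constrained least-squares unmixing step of the kind described above can be sketched as follows; the endmember signatures are invented stand-ins, not AVHRR calibration values.

```python
import numpy as np

# Pixel reflectance r modelled as a fraction-weighted mix of endmembers:
#   r = M f,  sum(f) = 1  (non-negativity omitted in this sketch)
M = np.array([[0.05, 0.30, 0.20],    # rows: 3 bands; columns: vegetation,
              [0.45, 0.35, 0.25],    # soil, shade endmembers (invented)
              [0.10, 0.25, 0.05]])
f_true = np.array([0.6, 0.3, 0.1])
r = M @ f_true                       # noiseless synthetic pixel

# Equality-constrained least squares via the bordered (KKT) system:
# [ M'M  1 ] [f  ]   [M'r]
# [ 1'   0 ] [lam] = [ 1 ]
p = M.shape[1]
K = np.zeros((p + 1, p + 1))
K[:p, :p] = M.T @ M
K[:p, p] = 1.0
K[p, :p] = 1.0
rhs = np.concatenate([M.T @ r, [1.0]])
f = np.linalg.solve(K, rhs)[:p]      # recovered fraction image pixel
```

    Applied per pixel, the recovered `f` values form the fraction images the abstract compares against classification results.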

  17. Synthetic Domain Theory and Models of Linear Abadi & Plotkin Logic

    Møgelberg, Rasmus Ejlers; Birkedal, Lars; Rosolini, Guiseppe


    Plotkin suggested using a polymorphic dual intuitionistic/linear type theory (PILLY) as a metalanguage for parametric polymorphism and recursion. In recent work the first two authors and R.L. Petersen have defined a notion of parametric LAPL-structure, which are models of PILLY, in which one can...... reason using parametricity and, for example, solve a large class of domain equations, as suggested by Plotkin.In this paper, we show how an interpretation of a strict version of Bierman, Pitts and Russo's language Lily into synthetic domain theory presented by Simpson and Rosolini gives rise...... to a parametric LAPL-structure. This adds to the evidence that the notion of LAPL-structure is a general notion, suitable for treating many different parametric models, and it provides formal proofs of consequences of parametricity expected to hold for the interpretation. Finally, we show how these results...

  18. Generalized linear mixed models modern concepts, methods and applications

    Stroup, Walter W


    PART I: The Big Picture. Modeling Basics. What Is a Model? Two Model Forms: Model Equation and Probability Distribution. Types of Model Effects. Writing Models in Matrix Form. Summary: Essential Elements for a Complete Statement of the Model. Design Matters. Introductory Ideas for Translating Design and Objectives into Models. Describing "Data Architecture" to Facilitate Model Specification. From Plot Plan to Linear Predictor. Distribution Matters. More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage. Goals for Inference with Models: Overview. Basic Tools of Inference. Issue I: Data

  19. A comparison of linear tyre models for analysing shimmy

    Besselink, I.J.M.; Maas, J.W.L.H.; Nijmeijer, H.


    A comparison is made between three linear, dynamic tyre models using low speed step responses and yaw oscillation tests. The match with the measurements improves with increasing complexity of the tyre model. Application of the different tyre models to a two degree of freedom trailing arm suspension

  20. Unification of three linear models for the transient visual system

    Brinker, den A.C.


    Three different linear filters are considered as a model describing the experimentally determined triphasic impulse responses of discs. These impulse responses are associated with the transient visual system. Each model reveals a different feature of the system. Unification of the models is




    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both

  2. Linearized models for a new magnetic control in MAST

    Artaserse, G., E-mail: [Associazione Euratom-ENEA sulla Fusione, Via Enrico Fermi 45, I-00044 Frascati (RM) (Italy); Maviglia, F.; Albanese, R. [Associazione Euratom-ENEA-CREATE sulla Fusione, Via Claudio 21, I-80125 Napoli (Italy); McArdle, G.J.; Pangione, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom)


    Highlights: ► We applied linearized models for a new magnetic control on MAST tokamak. ► A suite of procedures, conceived to be machine independent, have been used. ► We carried out model-based simulations, taking into account eddy currents effects. ► Comparison with the EFIT flux maps and the experimental magnetic signals are shown. ► A current driven model for the dynamic simulations of the experimental data have been performed. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes with only few mismatches probably due to broken sensors. A suite of procedures (equipped with a user friendly interface to be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has been useful also to reproduce accurately the interaction between plasma current and radial position control loops.

  4. Behavioral and macro modeling using piecewise linear techniques

    Kruiskamp, M.W.; Leenaerts, D.M.W.; Antao, B.


    In this paper we will demonstrate that most digital, analog as well as behavioral components can be described using piecewise linear approximations of their real behavior. This leads to several advantages from the viewpoint of simulation. We will also give a method to store the resulting linear

  5. Linear versus quadratic portfolio optimization model with transaction cost

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah


    Optimization models have become one of the decision making tools in investment. Hence, it is always a big challenge for investors to select the best model that can fulfill their goals in investment with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models, respectively. The application of these models has been proven to be significant and popular. However, transaction cost has been debated as one of the important aspects that should be considered for portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of transaction cost when calculating portfolio return, we formulate this paper by using data from Shariah compliant securities listed in Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over the other and shed some light on the quest to find the best decision making tools in investment for individual investors.
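    The linear (Maximin) side of such a comparison can be sketched as an LP: maximise the worst-case scenario return of the portfolio. The scenario returns below are invented, not Bursa Malaysia data, and transaction costs are omitted for brevity.

```python
import numpy as np
from scipy.optimize import linprog

# Maximin portfolio: maximise z s.t. z <= r_s . w for every scenario s,
# sum(w) = 1, w >= 0.  Decision vector x = [w_1..w_N, z].
R = np.array([[0.02, 0.05, -0.01],   # rows: return scenarios,
              [0.03, -0.02, 0.04],   # cols: three assets (invented)
              [0.01, 0.02, 0.02]])
S, N = R.shape

c = np.zeros(N + 1)
c[-1] = -1.0                          # minimise -z  <=>  maximise z
A_ub = np.hstack([-R, np.ones((S, 1))])   # z - R w <= 0, row per scenario
b_ub = np.zeros(S)
A_eq = np.zeros((1, N + 1))
A_eq[0, :N] = 1.0                     # weights sum to one
b_eq = [1.0]
bounds = [(0, None)] * N + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
worst_case_return = -res.fun
```

    Transaction costs could be added as extra linear terms penalising deviations from current holdings, keeping the problem an LP.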

  6. Phenomenology of non-minimal supersymmetric models at linear colliders

    Porto, Stefano


    The focus of this thesis is on the phenomenology of several non-minimal supersymmetric models in the context of future linear colliders (LCs). Extensions of the minimal supersymmetric Standard Model (MSSM) may accommodate the observed Higgs boson mass at about 125 GeV in a more natural way than the MSSM, with a richer phenomenology. We consider both F-term extensions of the MSSM, as for instance the non-minimal supersymmetric Standard Model (NMSSM), as well as D-term extensions arising at low energies from gauge extended supersymmetric models. The NMSSM offers a solution to the μ-problem with an additional gauge singlet supermultiplet. The enlarged neutralino sector of the NMSSM can be accurately studied at a LC and used to distinguish the model from the MSSM. We show that the power of the polarised beams of a LC can be exploited to reconstruct the neutralino and chargino sector and eventually distinguish the NMSSM even in challenging scenarios that resemble the MSSM. Non-decoupling D-term extensions of the MSSM can raise the tree-level Higgs mass with respect to the MSSM. This is done through additional contributions to the Higgs quartic potential, effectively generated by an extended gauge group. We study how this can happen and we show how these additional non-decoupling D-terms affect the SM-like Higgs boson couplings to fermions and gauge bosons. We estimate how the deviations from the SM couplings can be spotted at the Large Hadron Collider (LHC) and at the International Linear Collider (ILC), showing how the ILC would be suitable for the model identification. Since our results prove that a linear collider is a fundamental machine for studying supersymmetry phenomenology at a high level of precision, we argue that a thorough comprehension of the physics at the interaction point (IP) of a LC is also needed. Therefore, we finally consider the possibility of observing intense electromagnetic field effects and nonlinear quantum electrodynamics

  7. H∞/H2 model reduction through dilated linear matrix inequalities

    Adegas, Fabiano Daher; Stoustrup, Jakob


    This paper presents sufficient dilated linear matrix inequality (LMI) conditions for the $H_\infty$ and $H_2$ model reduction problem. A special structure of the auxiliary (slack) variables allows the original model of order $n$ to be reduced to an order $r=n/s$ where $n,r,s \in \mathbb{N}$. Arb......

  8. Non-linear Growth Models in Mplus and SAS

    Grimm, Kevin J.; Ram, Nilam


    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included.
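    Outside Mplus and SAS, the same logistic function can be fit by plain nonlinear least squares. This single-subject sketch uses synthetic data, not the preschool achievement data, and omits the random effects of a true mixed model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic growth: interpretable parameters (upper asymptote,
# growth rate, and time of fastest growth)
def logistic(t, asymptote, rate, midpoint):
    return asymptote / (1.0 + np.exp(-rate * (t - midpoint)))

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 40)
y = logistic(t, 100.0, 1.2, 5.0) + rng.normal(0, 2.0, t.size)  # synthetic

params, _ = curve_fit(logistic, t, y, p0=[90.0, 1.0, 4.0])
```

    In a mixed-effects setting (as with NLMIXED), each of these parameters would additionally receive a subject-level random effect.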

  9. A Dynamic Linear Modeling Approach to Public Policy Change

    Loftis, Matthew; Mortensen, Peter Bjerre


    Theories of public policy change, despite their differences, converge on one point of strong agreement. The relationship between policy and its causes can and does change over time. This consensus yields numerous empirical implications, but our standard analytical tools are inadequate for testing...... them. As a result, the dynamic and transformative relationships predicted by policy theories have been left largely unexplored in time-series analysis of public policy. This paper introduces dynamic linear modeling (DLM) as a useful statistical tool for exploring time-varying relationships in public...... policy. The paper offers a detailed exposition of the DLM approach and illustrates its usefulness with a time series analysis of U.S. defense policy from 1957-2010. The results point the way for a new attention to dynamics in the policy process and the paper concludes with a discussion of how...

  10. Characteristics and Properties of a Simple Linear Regression Model

    Kowal Robert


    A simple linear regression model is one of the pillars of classic econometrics. Despite the passage of time, it continues to attract interest from both the theoretical and the applied side. One of the many fundamental questions about the model concerns determining its derived characteristics and studying their properties; this paper addresses the first of these aspects. The literature provides several classic solutions in this regard. In the paper, a completely new approach is proposed, based on the direct application of variance and its properties, which follow from the non-correlation of certain estimators with the mean; with it, some fundamental dependencies among the model characteristics are obtained in a much more compact manner. The apparatus allows a simple, uniform, and intuitive demonstration of multiple dependencies and fundamental properties of the model. The results were obtained in a classic, traditional area where everything, as it might seem, has already been thoroughly studied and discovered.
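    The textbook identities such derivations rest on can be checked numerically; the data below are simulated and the paper's variance-based apparatus itself is not reproduced.

```python
import numpy as np

# In y = b0 + b1 x + e, the OLS slope equals cov(x, y) / var(x) and is
# unbiased: averaged over many simulated samples it recovers b1 = 0.7.
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 30)
slopes = []
for _ in range(2000):
    y = 2.0 + 0.7 * x + rng.normal(0, 1.0, x.size)
    b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    slopes.append(b1)
mean_slope = np.mean(slopes)
```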

  11. Matrix model and time-like linear dilaton matter

    Takayanagi, Tadashi


    We consider a matrix model description of the 2d string theory whose matter part is given by a time-like linear dilaton CFT. This is equivalent to the c=1 matrix model with a deformed, but very simple Fermi surface. Indeed, after a Lorentz transformation, the corresponding 2d spacetime is a conventional linear dilaton background with a time-dependent tachyon field. We show that the tree level scattering amplitudes in the matrix model perfectly agree with those computed in the world-sheet theory. The classical trajectories of fermions correspond to the decaying D-branes in the time-like linear dilaton CFT. We also discuss the ground ring structure. Furthermore, we study the properties of the time-like Liouville theory by applying this matrix model description. We find that its ground ring structure is very similar to that of the minimal string. (author)

  12. Vortices, semi-local vortices in gauged linear sigma model

    Kim, Namkwon


    We consider the static (2+1)D gauged linear sigma model. By analyzing the governing system of partial differential equations, we investigate various aspects of the model. We show the existence of energy finite vortices under a partially broken symmetry on R^2 with the necessary condition suggested by Y. Yang. We also introduce generalized semi-local vortices and show the existence of energy finite semi-local vortices under a certain condition. The vacuum manifold for the semi-local vortices turns out to be graded. Besides, with a special choice of a representation, we show that the O(3) sigma model of which target space is nonlinear is a singular limit of the gauged linear sigma model of which target space is linear. (author)

  13. Linear mixed models a practical guide using statistical software

    West, Brady T; Galecki, Andrzej T


    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  14. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    Martinez-Luaces, Victor E.


    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  15. Optical linear algebra processors - Noise and error-source modeling

    Casasent, D.; Ghosh, A.


    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.


    Oana CHIVU


    The present paper is concerned with the main modeling elements, as produced by means of the finite element method, of linear ultrasonic motors. First the model is designed, and then a modal and a harmonic analysis are carried out with a view to outlining the main outcomes.

  18. Linear and Nonlinear Career Models: Metaphors, Paradigms, and Ideologies.

    Buzzanell, Patrice M.; Goldzwig, Steven R.


    Examines the linear or bureaucratic career models (dominant in career research, metaphors, paradigms, and ideologies) which maintain career myths of flexibility and individualized routes to success in organizations incapable of offering such versatility. Describes nonlinear career models which offer suggestive metaphors for re-visioning careers…

  19. Model structure learning: A support vector machine approach for LPV linear-regression models

    Toth, R.; Laurain, V.; Zheng, W-X.; Poolla, K.


    Accurate parametric identification of Linear Parameter-Varying (LPV) systems requires an optimal prior selection of a set of functional dependencies for the parametrization of the model coefficients. Inaccurate selection leads to structural bias while over-parametrization results in a variance

  20. Linear model applied to the evaluation of pharmaceutical stability data

    Renato Cesar Souza


    The expiry date on the packaging of a product gives the consumer confidence that the product will retain its identity, content, quality and purity throughout the period of validity of the drug. The definition of this term in the pharmaceutical industry is based on stability data obtained during product registration. Accordingly, this work aims to apply linear regression, following guideline ICH Q1E (2003), to evaluate some aspects of a product undergoing registration in Brazil. For this purpose, the evaluation was carried out with the development center of a multinational company in Brazil, on samples of three different batches containing two active pharmaceutical ingredients in two different packages. Based on the preliminary results obtained, it was possible to observe the difference in the degradation tendency of the product in the two packages and the relationship between the variables studied, adding knowledge so that new linear models can be applied and developed for other products.
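    An ICH Q1E-style regression evaluation can be sketched as follows. The assay values, specification limit, and evaluation grid below are invented; real submissions pool batches only after the guideline's poolability tests.

```python
import numpy as np
from scipy import stats

# Fit assay (%) vs time by least squares, then take the earliest time at
# which the one-sided 95% confidence bound on the mean crosses the 95% spec.
months = np.array([0, 3, 6, 9, 12, 18], dtype=float)
assay = np.array([100.2, 99.5, 99.1, 98.4, 97.9, 96.8])   # invented data

n = months.size
slope, intercept = np.polyfit(months, assay, 1)
resid = assay - (intercept + slope * months)
s = np.sqrt(resid @ resid / (n - 2))        # residual standard error
t95 = stats.t.ppf(0.95, n - 2)              # one-sided 95% t quantile

grid = np.linspace(0, 48, 481)
sx = months - months.mean()
half_width = t95 * s * np.sqrt(1.0 / n + (grid - months.mean())**2 / (sx @ sx))
lower = intercept + slope * grid - half_width
shelf_life = grid[lower >= 95.0].max()      # last time still above spec
```

    Comparing `shelf_life` across the two packages would quantify the degradation-tendency difference the abstract describes.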

  1. Influence of the void fraction in the linear reactivity model

    Castillo, J.A.; Ramirez, J.R.; Alonso, G.


    The linear reactivity model allows multicycle analysis of pressurized water reactors in a simple and quick way. In boiling water reactors, the void fraction varies axially from 0% at the bottom of the fuel assemblies to approximately 70% at their exit. It is therefore very important to determine the average void fraction during different stages of reactor operation in order to predict the burnup of the assemblies appropriately and the slope of the linear reactivity model. In this work, the power profile is followed for different burnup steps of a typical operating cycle of a boiling water reactor. From these profiles, an algorithm is built that determines the void profile and thus its average value. The results are compared against those reported by the CM-PRESTO code, which uses another method to carry out this calculation. Finally, the range in which the average void fraction lies during a typical cycle is determined, and the impact that using this value would have on the prediction of the reactivity produced by the fuel assemblies is estimated. (Author)

  2. Human visual modeling and image deconvolution by linear filtering

    Larminat, P. de; Barba, D.; Gerber, R.; Ronsin, J.


    The problem is the numerical restoration of images degraded by passing through a known, spatially invariant linear system and by the addition of stationary noise. We propose an improvement of the Wiener filter to allow the restoration of such images. This improvement reduces the important drawbacks of the classical Wiener filter: the voluminous data processing, and the lack of consideration of the characteristics of human vision, which condition the perception of the restored image by the observer. In a first part, we describe the structure of the visual detection system and a method for modelling it. In the second part we explain a restoration method by Wiener filtering that takes the visual properties into account and that can be adapted to the local properties of the image. Finally, the results obtained on TV images and scintigrams (images obtained by a gamma camera) are discussed.
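
    The core of the restoration step is the classical Wiener filter. The sketch below applies it to a hypothetical 1-D signal with a known Gaussian point-spread function and a flat noise-to-signal ratio; the visual-model weighting and local adaptation that the paper introduces are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D signal blurred by a known Gaussian PSF plus stationary noise
n = 256
x = np.zeros(n); x[60:120] = 1.0; x[160:170] = 2.0    # ground truth
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))                  # transfer function
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + 0.02 * rng.standard_normal(n)

# Wiener filter: G = H* / (|H|^2 + N/S), with a flat noise-to-signal ratio
nsr = 1e-2
G = np.conj(H) / (np.abs(H) ** 2 + nsr)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * G))

# The restored signal should be closer to the truth than the degraded one
print(np.mean((y - x) ** 2), np.mean((x_hat - x) ** 2))
```

    In the paper's variant the flat noise-to-signal term is replaced by a frequency weighting derived from the visual model.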

  3. Convergence diagnostics for Eigenvalue problems with linear regression model

    Shi, Bo; Petrovic, Bojan


    Although the Monte Carlo method has been extensively used for criticality/eigenvalue problems, a reliable, robust, and efficient convergence diagnostics method is still desired. Most methods are based on integral parameters (multiplication factor, entropy) and either condense the local distribution information into a single value (e.g., entropy) or even disregard it. We propose to employ the detailed cycle-by-cycle local flux evolution, obtained using the mesh tally mechanism, to assess source and flux convergence. By applying a linear regression model to each individual mesh in a mesh tally for convergence diagnostics, a global convergence criterion can be obtained. We exemplify this method on two problems and obtain promising diagnostics results. (author)
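
    A toy version of the per-mesh regression diagnostic, on synthetic cycle-by-cycle tallies (all values hypothetical): a line is fitted to each mesh over a window of cycles, and convergence is declared when every fitted slope is negligibly small.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cycle-by-cycle flux tallies for a few spatial meshes:
# each mesh drifts toward its converged value and then fluctuates.
cycles = np.arange(100)
target = np.array([1.0, 0.8, 1.3])
flux = target[:, None] * (1 - np.exp(-cycles / 15)) \
       + 0.01 * rng.standard_normal((3, 100))

def converged(flux_window, tol=2e-3):
    """Fit a line to each mesh over the window; declare convergence
    when every fitted slope is below the tolerance in magnitude."""
    c = np.arange(flux_window.shape[1])
    slopes = np.polyfit(c, flux_window.T, 1)[0]   # one slope per mesh
    return bool(np.all(np.abs(slopes) < tol))

print(converged(flux[:, :30]), converged(flux[:, 70:]))
```

    Taking the conjunction over all meshes turns the local diagnostics into the global criterion described above.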

  4. Baryon and meson phenomenology in the extended Linear Sigma Model

    Giacosa, Francesco; Habersetzer, Anja; Teilab, Khaled; Eshraim, Walaa; Divotgey, Florian; Olbrich, Lisa; Gallas, Susanna; Wolkanowski, Thomas; Janowski, Stanislaus; Heinz, Achim; Deinet, Werner; Rischke, Dirk H. [Institute for Theoretical Physics, J. W. Goethe University, Max-von-Laue-Str. 1, 60438 Frankfurt am Main (Germany)]; Kovacs, Peter; Wolf, Gyuri [Institute for Particle and Nuclear Physics, Wigner Research Center for Physics, Hungarian Academy of Sciences, H-1525 Budapest (Hungary)]; Parganlija, Denis [Institute for Theoretical Physics, Vienna University of Technology, Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)]


    The vacuum phenomenology obtained within the so-called extended Linear Sigma Model (eLSM) is presented. The eLSM Lagrangian is constructed by including from the very beginning vector and axial-vector d.o.f., and by requiring dilatation invariance and chiral symmetry. After a general introduction of the approach, particular attention is devoted to the latest results. In the mesonic sector the strong decays of the scalar and the pseudoscalar glueballs, the weak decays of the tau lepton into vector and axial-vector mesons, and the description of masses and decays of charmed mesons are shown. In the baryonic sector the omega production in proton-proton scattering and the inclusion of baryons with strangeness are described.

  5. Prediction of minimum temperatures in an alpine region by linear and non-linear post-processing of meteorological models

    R. Barbiero


    Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods, aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511–L60 and LAMI-3). Temperature, of course, is a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest (RF). We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small if not negligible. RF, the best performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
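
    The two simplest techniques compared above, mean-bias correction at a single grid point and multivariate linear regression on the surrounding points, can be sketched on synthetic data (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: raw NWP 2 m temperature output at the 9 grid
# points surrounding a target station, plus the observed minimum temperature.
n = 400
X = rng.normal(2.0, 5.0, size=(n, 9))                 # NWP forecasts (deg C)
obs = X.mean(axis=1) - 1.5 + rng.normal(0.0, 1.0, n)  # station observations

# 1) Mean-bias correction against the nearest grid point (column 4)
bias = (X[:, 4] - obs).mean()
pred_bias = X[:, 4] - bias

# 2) Multivariate linear regression (MOS) on all 9 surrounding points
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
pred_mos = A @ coef

mae = lambda p: np.abs(p - obs).mean()
print(f"bias-corrected MAE = {mae(pred_bias):.2f}, MOS MAE = {mae(pred_mos):.2f}")
```

    The non-linear MOS variants in the study (Neural Networks, Random Forest) replace the linear map with a learned non-linear one over the same predictors.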

  6. Non-linear characterisation of the physical model of an ancient masonry bridge

    Fragonara, L Zanotti; Ceravolo, R; Matta, E; Quattrone, A; De Stefano, A; Pecorelli, M


    This paper presents the non-linear investigations carried out on a scaled model of a two-span masonry arch bridge. The model was built in order to study the effect of settlement of the central pier due to riverbank erosion. Progressive damage was induced in several steps by applying increasing settlements at the central pier. For each settlement step, harmonic shaker tests were conducted under different excitation levels, allowing for the non-linear identification of the progressively damaged system. The shaker tests were performed at resonance with the modal frequencies of the structure, which were determined from a previous linear identification. Estimated non-linearity parameters, which result from the systematic application of restoring-force-based identification algorithms, can corroborate models to be used in the reassessment of existing structures. The method used for non-linear identification allows monitoring the evolution of non-linear parameters or indicators, which can be used in damage and safety assessment.

  7. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    Bernstein, Andrey; Dall' Anese, Emiliano


    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
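
    A single-phase toy version of the fixed-point interpretation (the paper's multiphase wye/delta machinery is omitted, and all network values are hypothetical): the exact power flow is obtained by iterating the fixed-point map, and the linear model corresponds to one step of that map started from the no-load voltage profile.

```python
import numpy as np

# Two load buses fed from a slack bus through series admittances (p.u.)
v0 = 1.0 + 0.0j                            # slack voltage
y01, y12 = 10 - 30j, 8 - 24j               # hypothetical line admittances
Y_LL = np.array([[y01 + y12, -y12], [-y12, y12]])
Y_L0 = np.array([-y01, 0.0])
s = np.array([-0.5 - 0.2j, -0.3 - 0.1j])   # net injections (loads negative)

w = -np.linalg.solve(Y_LL, Y_L0) * v0      # zero-load voltage profile

# Exact solution via the fixed-point iteration V <- w + Y_LL^{-1} conj(s/V)
V = w.copy()
for _ in range(50):
    V = w + np.linalg.solve(Y_LL, np.conj(s / V))

# Linear model: one step of the map started from the no-load profile
V_lin = w + np.linalg.solve(Y_LL, np.conj(s / w))

print(np.abs(V), np.max(np.abs(V - V_lin)))
```

    The linearization error is second order in the voltage deviation from the no-load profile, which is why the approximation is accurate for realistic loading levels.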

  8. Differentiability of Palmer's linearization Theorem and converse result for density functions

    Castañeda, Alvaro; Robledo, Gonzalo


    We study differentiability properties in a particular case of Palmer's linearization theorem, which states the existence of a homeomorphism $H$ between the solutions of a linear ODE system having an exponential dichotomy and those of a quasilinear system. Indeed, if the linear system is uniformly asymptotically stable, sufficient conditions are given ensuring that $H$ is a $C^{2}$ orientation-preserving diffeomorphism. As an application, we generalize a converse result of density functions for a non...

  9. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set...... of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...

  10. The minimal linear σ model for the Goldstone Higgs

    Feruglio, F.; Gavela, M.B.; Kanshin, K.; Machado, P.A.N.; Rigolin, S.; Saa, S.


    In the context of the minimal SO(5) linear σ-model, a complete renormalizable Lagrangian - including gauge bosons and fermions - is considered, with the symmetry softly broken to SO(4). The scalar sector describes both the electroweak Higgs doublet and the singlet σ. Varying the σ mass allows one to sweep from the regime of perturbative ultraviolet completion to the non-linear one assumed in models in which the Higgs particle is a low-energy remnant of some strong dynamics. We analyze the phenomenological implications and constraints from precision observables and LHC data. Furthermore, we derive the d≤6 effective Lagrangian in the limit of heavy exotic fermions.

  11. Comparison of linear, mixed integer and non-linear programming methods in energy system dispatch modelling

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian


    In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation...... differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used...... as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables, increases operation...

  12. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.


    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  13. Reconstruction of real-space linear matter power spectrum from multipoles of BOSS DR12 results

    Lee, Seokcheon


    Recently, the power spectrum (PS) multipoles using the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 (DR12) sample were analyzed [1]. The model underlying the analysis is the so-called TNS quasi-linear model, and the analysis provides the multipoles up to the hexadecapole [2]. Thus, one might be able to recover the real-space linear matter PS by using combinations of multipoles to investigate the cosmology [3]. We provide the analytic form of the ratios of the quadrupole (hexadecapole) to the monopole moments of the quasi-linear PS including the Fingers-of-God (FoG) effect, in order to recover the real-space PS in the linear regime. One expects the observed values of the ratios of multipoles to be consistent with those of linear theory at large scales. Thus, we compare the ratios of multipoles of linear theory, including the FoG effect, with the measured values. From these, we recover the linear matter power spectra in real space. The recovered power spectra are consistent with the linear matter power spectra.
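
    In the limiting case without the FoG damping, the multipole-to-monopole ratios reduce to the standard linear-theory (Kaiser) expressions, which can be evaluated directly; the beta values below are hypothetical but of the order appropriate for BOSS-like galaxies.

```python
def kaiser_ratios(beta):
    """Linear-theory (no Fingers-of-God) ratios of the redshift-space
    quadrupole and hexadecapole to the monopole, from the Kaiser formula:
    P0 ∝ 1 + 2β/3 + β²/5, P2 ∝ 4β/3 + 4β²/7, P4 ∝ 8β²/35."""
    p0 = 1 + 2 * beta / 3 + beta ** 2 / 5
    p2 = 4 * beta / 3 + 4 * beta ** 2 / 7
    p4 = 8 * beta ** 2 / 35
    return p2 / p0, p4 / p0

# beta = f / b (growth rate over linear bias)
for beta in (0.3, 0.4):
    r2, r4 = kaiser_ratios(beta)
    print(f"beta = {beta}: P2/P0 = {r2:.3f}, P4/P0 = {r4:.3f}")
```

    Once beta is fixed by the measured ratios at large scales, the real-space linear PS follows as P(k) = P0(k) / (1 + 2β/3 + β²/5).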

  14. Functional linear models for association analysis of quantitative traits.

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao


    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  15. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver


    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
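
    The data "explosion" described above can be sketched directly: each subject contributes one row per baseline-hazard piece in which it was at risk, with the event indicator as the Poisson response and the log of the time at risk as the offset. The cut points, survival times, and censoring below are hypothetical.

```python
import numpy as np

# Piece boundaries for the piecewise-constant baseline hazard (fixed in advance)
cuts = np.array([0.0, 1.0, 2.0, 4.0])
times = np.array([0.5, 1.7, 3.2, 4.0])    # hypothetical survival times
events = np.array([1, 1, 0, 1])           # 1 = event, 0 = censored

rows = []
for i, (t, d) in enumerate(zip(times, events)):
    for j in range(len(cuts)):
        lo = cuts[j]
        hi = cuts[j + 1] if j + 1 < len(cuts) else np.inf
        if t <= lo:
            break
        exposure = min(t, hi) - lo         # time at risk in this piece
        y = int(d and t <= hi)             # event occurs in this piece?
        rows.append((i, j, y, np.log(exposure)))   # offset = log(exposure)

# Each row feeds a Poisson GLM(M): y ~ covariates + piece, offset = log(exposure);
# the log-normal frailty enters as a normally distributed random intercept.
print(len(rows))
```

    Fitting the exploded data with any generalized linear mixed model routine then reproduces the frailty-model estimates, which is the equivalence the %PCFrailty macro exploits.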

  16. Simultaneous Balancing and Model Reduction of Switched Linear Systems

    Monshizadeh, Nima; Trentelman, Hendrikus; Camlibel, M.K.


    In this paper, first, balanced truncation of linear systems is revisited. Then, simultaneous balancing of multiple linear systems is investigated. Necessary and sufficient conditions are introduced to identify the case where simultaneous balancing is possible. The validity of these conditions is not limited to a certain type of balancing, and they are applicable for different types of balancing corresponding to different equations, like Lyapunov or Riccati equations. The results obtained are ...

  17. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne


    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
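
    The contrast between a model that is linear on marker effects and an RKHS regression can be sketched on hypothetical binary marker data with an epistatic signal. This is a toy setup, not the study's Bayesian machinery: ridge regression stands in for the linear family and Gaussian-kernel ridge regression for RKHS, and which model wins depends on the genetic architecture, as in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy genomic-prediction data: binary markers with a non-additive
# (epistatic) component in the signal; all values are hypothetical.
n, p = 200, 40
X = rng.integers(0, 2, size=(n, p)).astype(float)
y = X[:, 0] - X[:, 1] + 2.0 * X[:, 2] * X[:, 3] \
    + 0.5 * rng.standard_normal(n)
train, test = np.arange(150), np.arange(150, 200)

# Linear model: ridge regression on marker effects
lam = 1.0
beta = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(p),
                       X[train].T @ y[train])
err_lin = np.abs(X[test] @ beta - y[test]).mean()

# Non-linear model: kernel ridge regression in an RKHS (Gaussian kernel)
def rbf(A, B, gamma=0.05):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X[train], X[train])
alpha = np.linalg.solve(K + lam * np.eye(len(train)), y[train])
err_rkhs = np.abs(rbf(X[test], X[train]) @ alpha - y[test]).mean()

print(f"linear MAE = {err_lin:.2f}, RKHS MAE = {err_rkhs:.2f}")
```

    The linear fit can only approximate the interaction term through main effects, which is the mechanism behind the non-linear models' advantage reported above.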

  18. Aeroelastic Limit-Cycle Oscillations resulting from Aerodynamic Non-Linearities

    van Rooij, A.C.L.M.


    Aerodynamic non-linearities, such as shock waves, boundary layer separation or boundary layer transition, may cause an amplitude limitation of the oscillations induced by the fluid flow around a structure. These aeroelastic limit-cycle oscillations (LCOs) resulting from aerodynamic non-linearities

  19. A study of the linear free energy model for DNA structures using the generalized Hamiltonian formalism

    Yavari, M. [Islamic Azad University, Kashan Branch (Iran, Islamic Republic of)]


    We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.

  20. Linear summation of outputs in a balanced network model of motor cortex.

    Capaday, Charles; van Vreeswijk, Carl


    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis.

  1. Linear modeling of possible mechanisms for Parkinson tremor generation

    Lohnberg, P.


    The power of Parkinson tremor is expressed in terms of possibly changed frequency response functions between relevant variables in the neuromuscular system. The derivation starts out from a linear loopless equivalent model of mechanisms for general tremor generation. Hypothetical changes in this

  2. Current algebra of classical non-linear sigma models

    Forger, M.; Laartz, J.; Schaeper, U.


    The current algebra of classical non-linear sigma models on arbitrary Riemannian manifolds is analyzed. It is found that introducing, in addition to the Noether current j μ associated with the global symmetry of the theory, a composite scalar field j, the algebra closes under Poisson brackets. (orig.)

  3. Non-linear sigma models probing the string structure

    Abdalla, E.


    The introduction of a term depending on the extrinsic curvature into the string action, and related non-linear sigma models defined on a symmetric space SO(D)/SO(2) x SO(d-2), is discussed. Couplings to fermions are also treated. (author)

  4. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Wagler, Amy E.


    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  5. Penalized Estimation in Large-Scale Generalized Linear Array Models

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard


    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  6. Expressions for linearized perturbations in ideal-fluid cosmological models

    Ratra, B.


    We present closed-form solutions of the relativistic linear perturbation equations (in synchronous gauge) that govern the evolution of inhomogeneities in homogeneous, spatially flat, ideal-fluid, cosmological models. These expressions, which are valid for irregularities on any scale, allow one to analytically interpolate between the known approximate solutions which are valid at early times and at late times

  7. S-AMP for non-linear observation models

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.


    Recently we presented the S-AMP approach, an extension of approximate message passing (AMP), to be able to handle general invariant matrix ensembles. In this contribution we extend S-AMP to non-linear observation models. We obtain generalized AMP (GAMP) as the special case when the measurement...

  8. Plane answers to complex questions the theory of linear models

    Christensen, Ronald


    This book was written to rigorously illustrate the practical application of the projective approach to linear models. To some, this may seem contradictory. I contend that it is possible to be both rigorous and illustrative and that it is possible to use the projective approach in practical applications. Therefore, unlike many other books on linear models, the use of projections and subspaces does not stop after the general theory. They are used wherever I could figure out how to do it. Solving normal equations and using calculus (outside of maximum likelihood theory) are anathema to me. This is because I do not believe that they contribute to the understanding of linear models. I have similar feelings about the use of side conditions. Such topics are mentioned when appropriate and thenceforward avoided like the plague. On the other side of the coin, I just as strenuously reject teaching linear models with a coordinate free approach. Although Joe Eaton assures me that the issues in complicated problems freq...

  9. A simulation model of a coordinated decentralized linear supply chain

    Ashayeri, Jalal; Cannella, S.; Lopez Campos, M.; Miranda, P.A.


    This paper presents a simulation-based study of a coordinated, decentralized linear supply chain (SC) system. In the proposed model, any supply tier considers its successors as part of its inventory system and generates replenishment orders on the basis of its partners’ operational information. We

  10. A non-linear dissipative model of magnetism

    Durand, P.; Paidarová, Ivana


    Vol. 89, No. 6 (2010), p. 67004 ISSN 1286-4854 R&D Projects: GA AV ČR IAA100400501 Institutional research plan: CEZ:AV0Z40400503 Keywords: non-linear dissipative model of magnetism * thermodynamics * physical chemistry Subject RIV: CF - Physical; Theoretical Chemistry

  11. Modeling and verifying non-linearities in heterodyne displacement interferometry

    Cosijns, S.J.A.G.; Haitjema, H.; Schellekens, P.H.J.


    The non-linearities in a heterodyne laser interferometer system, arising from the phase measurement system of the interferometer and from non-ideal polarization effects of the optics, are modeled into one analytical expression which includes the initial polarization state of the laser source, the

  12. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    Holst, René; Jørgensen, Bent


    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains...... a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids...... the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish....

  13. Finite element modeling of nanotube structures linear and non-linear models

    Awang, Mokhtar; Muhammad, Ibrahim Dauda


    This book presents a new approach to modeling carbon structures such as graphene and carbon nanotubes using finite element methods, and addresses the latest advances in numerical studies for these materials. Based on the available findings, the book develops an effective finite element approach for modeling the structure and the deformation of graphene-based materials. Further, the modeling procedure for single-walled and multi-walled carbon nanotubes is demonstrated in detail.

  14. Linear Dynamics Model for Steam Cooled Fast Power Reactors

    Vollmer, H


    A linear analytical dynamic model is developed for steam cooled fast power reactors. All main components of such a plant are investigated on a general though relatively simple basis. The model is distributed in the parts concerning the core but lumped for the external plant components. The coolant is considered compressible and treated by the actual steam law. Combined use of analogue and digital computers seems most attractive.

  15. One-loop dimensional reduction of the linear σ model

    Malbouisson, A.P.C.; Silva-Neto, M.B.; Svaiter, N.F.


    We perform the dimensional reduction of the linear σ model at one-loop level. The effective theory obtained from the integration over the nonzero Matsubara frequencies is exhibited. Thermal mass and coupling constant renormalization constants are given, as well as the thermal renormalization group equation which controls the dependence of the counterterms on the temperature. We also recover, for the reduced theory, the vacuum instability of the model for large N. (author)

  16. A linearized dispersion relation for orthorhombic pseudo-acoustic modeling

    Song, Xiaolei; Alkhalifah, Tariq Ali


    Wavefield extrapolation in acoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We introduce a linearized form of the dispersion relation for acoustic orthorhombic media to model acoustic wavefields. We apply the lowrank approximation approach to handle the corresponding space-wavenumber mixed-domain operator. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersion artifacts. Further, there is no coupling of qSv and qP waves, because we use the analytical dispersion relation. No constraints on Thomsen's parameters are required for stability. The linearized expression may also prove useful for parameter estimation in orthorhombic media.

  17. Transport coefficients from the SU(3) Polyakov linear-σ model

    Tawfik, A.; Diab, A.


    In the mean field approximation, the grand potential of the SU(3) Polyakov linear-σ model (PLSM) is analyzed for the order parameters of the light and strange chiral phase transitions, σ_l and σ_s, respectively, and for the deconfinement order parameters φ and φ*. Furthermore, the subtracted condensate Δ_{l,s} and the chiral order parameters M_b are compared with lattice QCD calculations. By using the dynamical quasiparticle model (DQPM), which can be considered as a system of noninteracting massive quasiparticles, we have evaluated the decay width and the relaxation time of quarks and gluons. In the framework of the LSM, with Polyakov loop corrections included, the interaction measure Δ/T^4, the specific heat c_v and the speed of sound squared c_s^2 have been determined, as well as the temperature dependence of the normalized quark number density n_q/T^3 and the quark number susceptibilities χ_q/T^2 at various values of the baryon chemical potential. The electric and heat conductivities, σ_e and κ, and the bulk and shear viscosities normalized to the thermal entropy, ζ/s and η/s, are compared with available results of lattice QCD calculations.

  20. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)]


    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attendance has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  1. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    Li, Yehua; Wang, Naisyin; Carroll, Raymond J.


    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  2. Recent results on stability and response bounds of linear systems - a review

    Pommer, Christian; Kliem, Wolfhard


    The literature on linear systems emerging from second order differential equations is extensive because such systems are ubiquitous in modeling, particularly modeling of mechanical systems. This paper offers an overview of some of the recent research in this field, in particular on the subject...

  3. Optimal difference-based estimation for partially linear models

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun


    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
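
    The class of difference-based variance estimators this record builds on can be illustrated with its simplest member, Rice's first-order difference estimator (a standard textbook sequence, not the adaptive sequence the paper proposes); a minimal sketch on invented data:

```python
import numpy as np

def rice_variance_estimator(y):
    """Rice's (1984) first-order difference estimator of the residual
    variance in y_i = f(x_i) + eps_i: successive differences cancel the
    smooth trend f, and E[(y_{i+1} - y_i)^2] is approximately 2*sigma^2."""
    d = np.diff(y)
    return np.sum(d ** 2) / (2 * (len(y) - 1))

# Smooth trend plus N(0, 0.5^2) noise on a fine grid (synthetic data).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2000)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.5, x.size)
print(round(rice_variance_estimator(y), 3))  # close to sigma^2 = 0.25
```

    The paper's optimal sequences generalize exactly this construction: longer, weighted difference sequences trade bias from the trend against variance of the estimator.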

  5. A fuzzy Bi-linear management model in reverse logistic chains

    Tadić Danijela


    The management of the waste electrical and electronic equipment (WEEE) problem in an uncertain environment has a critical effect on the economy and environmental protection of each region. The considered problem can be stated as a fuzzy non-convex optimization problem with a linear objective function and a set of linear and non-linear constraints. The original problem is reformulated, by using linear relaxation, into a fuzzy linear programming problem. The fuzzy ratings of collection-point capacities and the fixed costs of recycling centers are modeled by triangular fuzzy numbers. The optimal solution of the reformulated model is found by using an optimality concept. The proposed model is verified through an illustrative example with real-life data. The obtained results represent an input for future research, which should include a good benchmark base for the tested reverse logistic chains and their continuous improvement. [Project of the Ministry of Science of the Republic of Serbia, no. 035033: Sustainable development technology and equipment for the recycling of motor vehicles]
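
    Triangular fuzzy numbers of the kind used here for capacities and fixed costs can be sketched as follows; the arithmetic and the centroid defuzzification are standard conventions, every number is invented for illustration, and the paper's optimality concept is not reproduced:

```python
# A triangular fuzzy number (TFN) is a triple (a, m, b): smallest value,
# modal value, largest value. Addition and scalar multiplication act
# componentwise; the centroid gives one common crisp ranking value.
def tfn_add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def tfn_scale(k, x):
    return tuple(k * xi for xi in x)

def centroid(x):
    a, m, b = x
    return (a + m + b) / 3.0

fixed_cost = (100.0, 120.0, 150.0)  # fuzzy fixed cost of a recycling center
unit_cost = (2.0, 2.5, 3.0)         # fuzzy cost per collected tonne
total = tfn_add(fixed_cost, tfn_scale(40.0, unit_cost))  # 40 tonnes collected
print(total, round(centroid(total), 2))  # (180.0, 220.0, 270.0) 223.33
```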

  6. General mirror pairs for gauged linear sigma models

    Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University, Box 90320, Durham, NC 27708-0320 (United States)]


    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  8. The Linearity of Optical Tomography: Sensor Model and Experimental Verification

    Siti Zarina MOHD. MUJI


    The aim of this paper is to demonstrate the linearity of an optical sensor. Linearity of the sensor response is essential in optical tomography applications, as it affects the tomogram result. Two types of testing are used: testing with a voltage parameter and testing with a time-unit parameter. In the former, the voltage is measured while an obstacle is placed between the transmitter and receiver; the obstacle diameters range from 0.5 to 3 mm. The latter uses the same setup with a larger obstacle, 59.24 mm in diameter, and measures the time the ball takes to cross the sensing area of the circuit. Both results show a linear relation, which demonstrates that optical sensors are suitable for process tomography applications.
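
    A linearity check of this kind reduces to fitting a least-squares line to voltage-versus-diameter readings and inspecting the coefficient of determination R²; the readings below are invented for illustration, not the paper's measurements:

```python
import numpy as np

# Hypothetical receiver voltages for obstacles of 0.5-3 mm diameter placed
# between transmitter and receiver: a larger obstacle blocks more light,
# so the voltage should fall linearly with diameter.
diam_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
volt = np.array([4.51, 4.02, 3.55, 3.01, 2.52, 2.05])

slope, intercept = np.polyfit(diam_mm, volt, 1)
pred = slope * diam_mm + intercept
r2 = 1.0 - np.sum((volt - pred) ** 2) / np.sum((volt - volt.mean()) ** 2)
print(round(slope, 3), round(r2, 4))  # negative slope, R^2 very close to 1
```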

  9. A Graphical User Interface to Generalized Linear Models in MATLAB

    Peter Dunn


    Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB, software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights and user-defined distributions and link functions. MATLAB's graphical capacities are also utilized in providing a number of simple residual diagnostic plots.
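
    The common fitting machinery that GLM software of this kind wraps is iteratively reweighted least squares (IRLS); a minimal numpy sketch for a Poisson regression with log link, on synthetic data (this is a generic illustration, not GLMLAB code):

```python
import numpy as np

def irls_poisson(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least
    squares, the standard GLM fitting algorithm."""
    beta, *_ = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)  # rough start
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)              # inverse link
        z = eta + (y - mu) / mu       # working response
        W = mu                        # working weights for Poisson/log
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0, 500)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.5 + 1.2 * x))  # true coefficients (0.5, 1.2)
print(np.round(irls_poisson(X, y), 2))  # approximately [0.5, 1.2]
```

    Swapping the link and the weight formula is all it takes to move between members of the GLM family, which is the unification the abstract refers to.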

  10. MAGDM linear-programming models with distinct uncertain preference structures.

    Xu, Zeshui S; Chen, Jian


    Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.

  11. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    Beardsell, Alec; Collier, William; Han, Tao


    There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  12. Forecasting the EMU inflation rate: Linear econometric vs. non-linear computational models using genetic neural fuzzy systems

    Kooths, Stefan; Mitze, Timo Friedel; Ringhut, Eric


    This paper compares the predictive power of linear econometric and non-linear computational models for forecasting the inflation rate in the European Monetary Union (EMU). Various models of both types are developed using different monetary and real activity indicators. They are compared according...

  13. The magnitude of linear dichroism of biological tissues as a result of cancer changes

    Bojchuk, T. M.; Yermolenko, S. B.; Fedonyuk, L. Y.; Petryshen, O. I.; Guminetsky, S. G.; Prydij, O. G.


    The results of studies of linear dichroism values of different types of biological tissues (human prostate, esophageal epithelial human muscle tissue in rats) both healthy and infected tumor at different stages of development are shown here. The significant differences in magnitude of linear dichroism and its spectral dependence in the spectral range λ = 330 - 750 nm both among the objects of study, and between biotissues: healthy (or affected by benign tumors) and cancer patients are established. It is researched that in all cases in biological tissues (prostate gland, esophagus, human muscle tissue in rats) with cancer the linear dichroism arises, the value of which depends on the type of tissue and time of the tumor process. As for healthy tissues linear dichroism is absent, the results may have diagnostic value for detecting and assessing the degree of development of cancer.

  14. A non-linear model of economic production processes

    Ponzi, A.; Yasutomi, A.; Kaneko, K.


    We present a new two phase model of economic production processes which is a non-linear dynamical version of von Neumann's neoclassical model of production, including a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into observed stability of competitive capitalist economies in comparison to monopolistic economies. Capitalist economies are also shown to have low unemployment.
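
    The central nonlinearity described above, a production rate limited by the scarcest input, is a min over input supplies, as in a Leontief/von Neumann technology; a one-function sketch with invented stocks and requirements:

```python
import numpy as np

def production_rate(supplies, requirements):
    """Output rate of a process given current stocks of each input and
    the amount of each input consumed per unit of output: the process
    can only run as fast as its scarcest input allows."""
    return float(np.min(supplies / requirements))

supplies = np.array([10.0, 4.0, 9.0])     # current stocks of three inputs
requirements = np.array([2.0, 1.0, 1.5])  # input needed per unit of output
rate = production_rate(supplies, requirements)
print(rate)  # 4.0 -- the second input is the bottleneck
```

    It is this min that makes the coupled supply-demand dynamics of a production network highly non-linear: a shortage in one input propagates downstream regardless of how plentiful the others are.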

  15. Sampled-data models for linear and nonlinear systems

    Yuz, Juan I


    Sampled-data Models for Linear and Nonlinear Systems provides a fresh new look at a subject with which many researchers may think themselves familiar. Rather than emphasising the differences between sampled-data and continuous-time systems, the authors proceed from the premise that, with modern sampling rates being as high as they are, it is becoming more appropriate to emphasise connections and similarities. The text is driven by three motives: ·      the ubiquity of computers in modern control and signal-processing equipment means that sampling of systems that really evolve continuously is unavoidable; ·      although superficially straightforward, sampling can easily produce erroneous results when not treated properly; and ·      the need for a thorough understanding of many aspects of sampling among researchers and engineers dealing with applications to which they are central. The authors tackle many misconceptions which, although appearing reasonable at first sight, are in fact either p...

  16. Linear multivariate evaluation models for spatial perception of soundscape.

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu


    Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. Spatial perception is significant to soundscape, yet previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments on subjective spatial perception (SSP), an analysis among semantic parameters, the inter-aural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (L_eq), dynamic (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness the audience perceived, the worse their spatial awareness, while the closer and more directional the sound-source image variations, dynamics, and numbers of sound sources in the soundscape are, the better the spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamic, and the values of L_eq and IACC, have a suitable range for better spatial perception. Better spatial awareness also seems to promote the audience's preference slightly. Finally, setting SSPs as functions of the semantic parameters and L_eq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.

  17. Mixed models, linear dependency, and identification in age-period-cohort models.

    O'Brien, Robert M


    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
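
    The linear dependency at the heart of the identification problem is easy to exhibit numerically: with linearly coded effects, cohort = period - age, so the design matrix is rank deficient and ordinary least squares has no unique solution. A minimal demonstration on invented ages and periods:

```python
import numpy as np

# Nine observations on a 3x3 age-by-period grid (values are illustrative).
age = np.array([20, 30, 40, 20, 30, 40, 20, 30, 40])
period = np.array([1990, 1990, 1990, 2000, 2000, 2000, 2010, 2010, 2010])
cohort = period - age  # birth cohort: an exact linear function of the others

# Design matrix for an intercept plus linear age, period, cohort effects.
X = np.column_stack([np.ones_like(age), age, period, cohort])
print(np.linalg.matrix_rank(X))  # 3, not 4: one column is redundant
```

    Any single just-identifying constraint (e.g. dropping one column, or equating two coefficients) restores full rank, which is why the estimates depend on the constraint chosen.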

  18. Estimation and Inference for Very Large Linear Mixed Effects Models

    Gao, K.; Owen, A. B.


    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one); in electronic commerce applications, N can be quite large. Methods that do not account for the correlation structure can...

  19. Using Quartile-Quartile Lines as Linear Models

    Gordon, Sheldon P.


    This article introduces the notion of the quartile-quartile line as an alternative to the regression line and the median-median line to produce a linear model based on a set of data. It is based on using the first and third quartiles of a set of (x, y) data. Dynamic spreadsheets are used as exploratory tools to compare the different approaches and…
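
    One plausible reading of the quartile-quartile line is a line through the point of first quartiles (Q1(x), Q1(y)) and the point of third quartiles (Q3(x), Q3(y)); the article's exact construction may differ, so this sketch only illustrates the flavour of the method:

```python
import numpy as np

def qq_line(x, y):
    """Line through the first-quartile point and third-quartile point of
    the (x, y) data (one simple interpretation of a quartile-quartile line)."""
    q1x, q3x = np.quantile(x, [0.25, 0.75])
    q1y, q3y = np.quantile(y, [0.25, 0.75])
    slope = (q3y - q1y) / (q3x - q1x)
    return slope, q1y - slope * q1x

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = 2.0 * x + 1.0  # exactly linear synthetic data
slope, intercept = qq_line(x, y)
print(slope, intercept)  # recovers 2.0 and 1.0
```

    Like the median-median line, such a construction is resistant to outliers, which is what makes it a useful classroom contrast to the least-squares regression line.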




    For RHIC's collision lattices the dominant sources of transverse non-linearities are located in the interaction regions. The field quality is available for most of the magnets in the interaction regions from the magnetic measurements, or from extrapolations of these measurements. We discuss the implementation of these measurements in the MADX models of the Blue and the Yellow rings and their impact on beam stability

  1. Effect of Process Parameters on Friction Model in Computer Simulation of Linear Friction Welding

    A. Yamileva


    The friction model is an important part of a numerical model of linear friction welding, and its selection determines the accuracy of the results. Existing models employ the classical Amontons-Coulomb law, where the friction coefficient is either constant or linearly dependent on a single parameter. Determining the coefficient of friction is a time-consuming process that requires many experiments, so the feasibility of determining a more complex dependence should be assessed by analysing the effect of the approximating law in the friction model on the simulation results.
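
    The two friction laws contrasted above, as they would enter one step of a welding simulation (classical Amontons-Coulomb; all constants are invented and the choice of sliding velocity as the single parameter is only an example):

```python
def friction_force_constant(normal_force, mu0=0.4):
    """Amontons-Coulomb friction with a constant coefficient."""
    return mu0 * normal_force

def friction_force_linear(normal_force, velocity, mu0=0.4, k=0.02):
    """Coefficient depending linearly on one process parameter (velocity)."""
    return (mu0 + k * velocity) * normal_force

N = 5000.0  # normal (forge) force [N]
v = 2.0     # sliding velocity [m/s]
print(round(friction_force_constant(N), 1),
      round(friction_force_linear(N, v), 1))  # 2000.0 2200.0
```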

  2. Modeling winter precipitation over the Juneau Icefield, Alaska, using a linear model of orographic precipitation

    Roth, Aurora; Hock, Regine; Schuler, Thomas V.; Bieniek, Peter A.; Pelto, Mauri; Aschwanden, Andy


    Assessing and modeling precipitation in mountainous areas remains a major challenge in glacier mass balance modeling. Observations are typically scarce and reanalysis data and similar climate products are too coarse to accurately capture orographic effects. Here we use the linear theory of orographic precipitation model (LT model) to downscale winter precipitation from a regional climate model over the Juneau Icefield, one of the largest ice masses in North America (>4000 km2), for the period 1979-2013. The LT model is physically-based yet computationally efficient, combining airflow dynamics and simple cloud microphysics. The resulting 1 km resolution precipitation fields show substantially reduced precipitation on the northeastern portion of the icefield compared to the southwestern side, a pattern that is not well captured in the coarse resolution (20 km) WRF data. Net snow accumulation derived from the LT model precipitation agrees well with point observations across the icefield. To investigate the robustness of the LT model results, we perform a series of sensitivity experiments varying hydrometeor fall speeds, the horizontal resolution of the underlying grid, and the source of the meteorological forcing data. The resulting normalized spatial precipitation pattern is similar for all sensitivity experiments, but local precipitation amounts vary strongly, with greatest sensitivity to variations in snow fall speed. Results indicate that the LT model has great potential to provide improved spatial patterns of winter precipitation for glacier mass balance modeling purposes in complex terrain, but ground observations are necessary to constrain model parameters to match total amounts.
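
    A drastically simplified 1-D "upslope" caricature of the LT model's core mechanism can be sketched as follows: precipitation proportional to the moist airflow forced up the terrain gradient, truncated at zero on lee slopes. The real LT model adds airflow dynamics, cloud time delays, and hydrometeor drift in Fourier space; every constant here is invented.

```python
import numpy as np

dx = 1000.0                      # grid spacing [m]
xi = np.arange(200, dtype=float)
h = 1500.0 * np.exp(-((xi - 100.0) ** 2) / (2.0 * 20.0 ** 2))  # idealized ridge [m]
wind = 10.0                      # cross-ridge wind speed [m/s]
cw = 0.005                       # moisture sensitivity coefficient (illustrative)

uplift = wind * np.gradient(h, dx)      # terrain-forced vertical velocity [m/s]
precip = np.maximum(cw * uplift, 0.0)   # upslope precipitation proxy
print(precip.argmax() < 100, precip[150] == 0.0)  # maximum on the windward side
```

    Even this caricature reproduces the qualitative windward/leeward asymmetry the LT model resolves over the Juneau Icefield; the cloud delay terms in the full model smooth and advect this pattern downwind.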

  3. Modelling of Rotational Capacity in Reinforced Linear Elements

    Hestbech, Lars; Hagsten, Lars German; Fisker, Jakob


    The Capacity Design Method forms the basis of several seismic design codes. This design philosophy allows plastic deformations in order to decrease seismic demands in structures. However, these plastic deformations must be localized in certain zones where ductility requirements can be documented on the basis of the rotational capacity of the plastic hinges. The documentation of ductility can be a difficult task, as modelling of rotational capacity in plastic hinges of frames is not fully developed. On the basis of the Theory of Plasticity, a model is developed to determine rotational capacity in plastic hinges in linear reinforced concrete elements. The model takes several important parameters into account. Empirical values are avoided, which is considered an advantage compared to previous models. Furthermore, the model includes force variations in the reinforcement due to moment distributions and shear as well...

  4. Network Traffic Monitoring Using Poisson Dynamic Linear Models

    Merl, D. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]


    In this article, we discuss an approach for network forensics using a class of nonstationary Poisson processes with embedded dynamic linear models. As a modeling strategy, the Poisson DLM (PoDLM) provides a very flexible framework for specifying structured effects that may influence the evolution of the underlying Poisson rate parameter, including diurnal and weekly usage patterns. We develop a novel particle learning algorithm for online smoothing and prediction for the PoDLM, and demonstrate the suitability of the approach to real-time deployment settings via a new application to computer network traffic monitoring.
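
    A generative sketch of the Poisson DLM class described above: the log Poisson rate evolves as a slowly drifting level plus a diurnal harmonic, and hourly counts are conditionally Poisson. The paper's particle learning smoother is not reproduced here, and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
hours = np.arange(24 * 7)                             # one week of hourly bins
level = np.cumsum(rng.normal(0.0, 0.02, hours.size))  # random-walk level (the DLM state)
diurnal = 0.8 * np.sin(2 * np.pi * hours / 24.0)      # structured diurnal usage pattern
counts = rng.poisson(np.exp(3.0 + level + diurnal))   # conditionally Poisson traffic

peak = counts[hours % 24 == 6].mean()     # hours where the harmonic peaks
trough = counts[hours % 24 == 18].mean()  # hours where it bottoms out
print(peak > trough)  # True: simulated traffic follows the diurnal cycle
```

    Inference for such a model then amounts to tracking the latent level (and any structured effects) online from the observed counts, which is what the particle learning algorithm in the paper does.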

  5. On the chiral phase transition in the linear sigma model

    Tran Huu Phat; Nguyen Tuan Anh; Le Viet Hoa


    The Cornwall- Jackiw-Tomboulis (CJT) effective action for composite operators at finite temperature is used to investigate the chiral phase transition within the framework of the linear sigma model as the low-energy effective model of quantum chromodynamics (QCD). A new renormalization prescription for the CJT effective action in the Hartree-Fock (HF) approximation is proposed. A numerical study, which incorporates both thermal and quantum effect, shows that in this approximation the phase transition is of first order. However, taking into account the higher-loop diagrams contribution the order of phase transition is unchanged. (author)

  6. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    Sanz, Luis; Alonso, Juan Antonio


    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to replace a complex system, involving many coupled variables and processes with different time scales, with a simpler reduced model with fewer 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with a smaller number of variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.

  7. Study of Piezoelectric Vibration Energy Harvester with non-linear conditioning circuit using an integrated model

    Manzoor, Ali; Rafique, Sajid; Usman Iftikhar, Muhammad; Mahmood Ul Hassan, Khalid; Nasir, Ali


    Piezoelectric vibration energy harvester (PVEH) consists of a cantilever bimorph with piezoelectric layers pasted on its top and bottom, which can harvest power from vibrations and feed to low power wireless sensor nodes through some power conditioning circuit. In this paper, a non-linear conditioning circuit, consisting of a full-bridge rectifier followed by a buck-boost converter, is employed to investigate the issues of electrical side of the energy harvesting system. An integrated mathematical model of complete electromechanical system has been developed. Previously, researchers have studied PVEH with sophisticated piezo-beam models but employed simplistic linear circuits, such as resistor, as electrical load. In contrast, other researchers have worked on more complex non-linear circuits but with over-simplified piezo-beam models. Such models neglect different aspects of the system which result from complex interactions of its electrical and mechanical subsystems. In this work, authors have integrated the distributed parameter-based model of piezo-beam presented in literature with a real world non-linear electrical load. Then, the developed integrated model is employed to analyse the stability of complete energy harvesting system. This work provides a more realistic and useful electromechanical model having realistic non-linear electrical load unlike the simplistic linear circuit elements employed by many researchers.

  8. The Overgeneralization of Linear Models among University Students' Mathematical Productions: A Long-Term Study

    Esteley, Cristina B.; Villarreal, Monica E.; Alagia, Humberto R.


    Over the past several years, we have been exploring and researching a phenomenon that occurs among undergraduate students, which we call the extension of linear models to non-linear contexts, or the overgeneralization of linear models. This phenomenon appears when students use linear representations in situations that are non-linear. In a first phase,…

  9. A Linear Viscoelastic Model Calibration of Sylgard 184.

    Long, Kevin Nicholas; Brown, Judith Alice


    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer, for use both in Sierra / Solid Mechanics via the Universal Polymer Model and in Sierra / Structural Dynamics (Salinas) as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency-domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data differs from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia's constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40% and 20%, respectively, are compared with Sandia's legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules of the Sandia and LANL data.

  10. Effect Displays in R for Generalised Linear Models

    John Fox


    This paper describes the implementation in R of a method for tabular or graphical display of terms in a complex generalised linear model. By complex, I mean a model that contains terms related by marginality or hierarchy, such as polynomial terms, or main effects and interactions. I call these tables or graphs effect displays. Effect displays are constructed by identifying high-order terms in a generalised linear model. Fitted values under the model are computed for each such term. The lower-order "relatives" of a high-order term (e.g., main effects marginal to an interaction) are absorbed into the term, allowing the predictors appearing in the high-order term to range over their values. The values of other predictors are fixed at typical values: for example, a covariate could be fixed at its mean or median, a factor at its proportional distribution in the data, or at equal proportions in its several levels. Variations of effect displays are also described, including the representation of terms higher-order to any appearing in the model.
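    The construction described in this abstract can be sketched outside R as well. The following fragment uses hypothetical data and ordinary least squares as a stand-in for a general GLM fit: it builds an effect display for an interaction term by letting the focal predictor range over its values, absorbing its marginal main effect, and fixing the covariate at its mean.

```python
import numpy as np

# Hypothetical data for a model with an interaction: y ~ x1 + x2 + x1:x2
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)          # focal predictor
x2 = rng.normal(size=n)          # covariate to be held at a typical value
y = 1.0 + 2.0 * x1 - 0.5 * x2 + 0.8 * x1 * x2 + rng.normal(scale=0.1, size=n)

# Design matrix: intercept, main effects, interaction
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Effect display for the x1:x2 term: x1 ranges over its values,
# the marginal x1 effect is absorbed, x2 is fixed at its mean
x1_grid = np.linspace(x1.min(), x1.max(), 5)
x2_typ = x2.mean()
X_eff = np.column_stack([np.ones_like(x1_grid), x1_grid,
                         np.full_like(x1_grid, x2_typ), x1_grid * x2_typ])
effect = X_eff @ beta            # fitted values along the focal predictor
```

    The `effect` vector is what would be plotted against `x1_grid`; for a genuine GLM with a non-identity link, the same construction applies on the linear-predictor scale before back-transforming.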

  11. Global numerical modeling of magnetized plasma in a linear device

    Magnussen, Michael Løiten

    Understanding the turbulent transport in the plasma edge of fusion devices is of utmost importance in order to make precise predictions for future fusion devices. The plasma turbulence observed in linear devices shares many important features with the turbulence observed in the edge of fusion devices, and is easier to diagnose due to lower temperatures and better access to the plasma. In order to gain greater insight into this complex turbulent behavior, numerical simulations of plasma in a linear device are performed in this thesis. Here, a three-dimensional drift-fluid model is derived, and simulations are performed at different ionization levels using a simple model for plasma interaction with neutrals. It is found that the steady state and the saturated state of the system bifurcate when the neutral interaction dominates the electron-ion collisions.

  12. Iterated non-linear model predictive control based on tubes and contractive constraints.

    Murillo, M; Sánchez, G; Giovanini, L


    This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained, and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle.

  13. The implications of non-linear nitrogen chemistry in the HARM Model for use by the Environment Agency

    Metcalfe, S.E.; Whyatt, J.D.


    The findings of research into the linearity of the oxidised nitrogen chemistry in the Hull Acid Rain Model (HARM) are presented. The background and structure of the HARM model are described, together with modelling results, conclusions and recommendations. (author)

  14. Symmetry conservation in the linear chiral soliton model

    Goeke, K.


    The linear chiral soliton model with quark fields and elementary pion and sigma fields is solved in order to describe static properties of the nucleon and the delta resonance. To this end a Fock state of the system is constructed, consisting of three valence quarks in a first orbit with a generalized hedgehog spin-flavour configuration. Coherent states are used to provide a quantum description for the mesonic parts of the total wave function. The corresponding classical pion field also exhibits a generalized hedgehog structure. In a pure mean-field approximation the variation of the total energy results in the ordinary hedgehog form. In a quantized approach the generalized hedgehog baryon is projected onto states with good spin and isospin, and noticeable deviations from the simple hedgehog form appear if the relevant degrees of freedom of the wave function are varied after the projection. Various nucleon properties are calculated. These include proton and neutron charge radii, and the magnetic moment of the proton, for which good agreement with experiment is obtained. The absolute value of the neutron magnetic moment comes out too large, as do the axial vector coupling constant and the pion-nucleon-nucleon coupling constant. For the generalized hedgehog, the Goldberger-Treiman relation and a corresponding virial theorem are fulfilled. Variation of the quark-meson coupling parameter g and the sigma mass m_σ shows that g_A is always at least 40% too large compared to experiment. Hence it is concluded that either the inclusion of the polarization of the Dirac sea, and/or further mesons, possibly of vector character, or the consideration of intrinsic deformation is necessary. The concepts and results of the projections are compared with the semiclassical collective quantization method. 6 tabs., 14 figs., 43 refs

  15. A linearized dispersion relation for orthorhombic pseudo-acoustic modeling

    Song, Xiaolei


    Wavefield extrapolation in acoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We introduce a linearized form of the dispersion relation for acoustic orthorhombic media to model acoustic wavefields. We apply the low-rank approximation approach to handle the corresponding space-wavenumber mixed-domain operator. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersion. Further, there is no coupling of qSV and qP waves, because we use the analytical dispersion relation. No constraints on Thomsen's parameters are required for stability. The linearized expression may prove useful for parameter estimation in orthorhombic media.

  16. A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.

    Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S


    The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency on the dose-averaged LET (LET_d) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate whether biological dose models including non-linear LET dependencies should be considered, by introducing a LET-spectrum-based dose model. The RBE-LET relationship was investigated by fitting polynomials of 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher-degree polynomials provided better fits to the data than lower degrees. The newly developed models were compared to three published LET_d-based models for a simulated spread-out Bragg peak (SOBP) scenario. The statistical analysis of the weighted regression favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. As differences between the models were…
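    The weighted-regression step described here can be illustrated with NumPy's `polyfit`, which accepts per-point weights applied to the residuals (conventionally 1/σ for experimental uncertainties σ). The data below are synthetic, with an assumed quadratic RBE-LET trend, not the 85-point in vitro database used in the paper.

```python
import numpy as np

# Synthetic RBE vs LET data with an assumed non-linear (quadratic) trend
let = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 16.0, 20.0, 25.0, 30.0])
rbe = 1.0 + 0.04 * let + 0.002 * let**2
sigma = np.full_like(rbe, 0.05)            # assumed experimental uncertainties

# np.polyfit applies weights to the residuals; use w_i = 1/sigma_i
lin = np.polyfit(let, rbe, 1, w=1 / sigma)
quad = np.polyfit(let, rbe, 2, w=1 / sigma)

# Weighted residual sums of squares for comparing the two fits
rss_lin = np.sum(((np.polyval(lin, let) - rbe) / sigma) ** 2)
rss_quad = np.sum(((np.polyval(quad, let) - rbe) / sigma) ** 2)
```

    In a real analysis the drop in weighted RSS from `lin` to `quad` would feed a statistical test (e.g. an F-test) of whether the higher-degree polynomial is justified, which is the model-selection question the abstract addresses.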

  17. Linearized vector radiative transfer model MCC++ for a spherical atmosphere

    Postylyakov, O.V.


    Application of radiative transfer models has shown that optical remote sensing requires extra characteristics of the radiance field in addition to the radiance intensity itself. Simulation of spectral measurements, analysis of retrieval errors and development of retrieval algorithms all require derivatives of radiance with respect to the atmospheric constituents under investigation. The presented vector spherical radiative transfer model MCC++ was linearized, which allows the calculation of derivatives of all elements of the Stokes vector with respect to the volume absorption coefficient simultaneously with the radiance calculation. The model MCC++ employs a Monte Carlo algorithm for radiative transfer simulation and takes into account aerosol and molecular scattering, gas and aerosol absorption, and Lambertian surface albedo. The model treats a spherically symmetrical atmosphere. The relation of the estimated derivatives to other forms of radiance derivatives (the weighting functions used in gas retrieval and the air mass factors used in DOAS retrieval algorithms) is obtained. Validation of the model against other radiative models is overviewed. The computing time of the intensity for the MCC++ model is about the same as for radiative models treating the sphericity of the atmosphere approximately, and is significantly shorter than for the full spherical models used in the comparisons. The simultaneous calculation of all derivatives (i.e. with respect to absorption in all model atmosphere layers) and the intensity is only 1.2-2 times longer than the calculation of the intensity alone.

  18. Exactly soluble two-state quantum models with linear couplings

    Torosov, B T; Vitanov, N V


    A class of exact analytic solutions of the time-dependent Schroedinger equation is presented for a two-state quantum system coherently driven by a nonresonant external field. The coupling is a linear function of time with a finite duration and the detuning is constant. Four special models are considered in detail, namely the shark, double-shark, tent and zigzag models. The exact solution is derived by rotation of the Landau-Zener propagator at an angle of π/4 and is expressed in terms of Weber's parabolic cylinder function. Approximations for the transition probabilities are derived for all four models by using the asymptotics of the Weber function; these approximations demonstrate various effects of physical interest for each model

  19. Linear models for multivariate, time series, and spatial data

    Christensen, Ronald


    This is a companion volume to Plane Answers to Complex Questions: The Theory of Linear Models. It consists of six additional chapters written in the same spirit as the last six chapters of the earlier book. Brief introductions are given to topics related to linear model theory. No attempt is made to give a comprehensive treatment of the topics. Such an effort would be futile. Each chapter is on a topic so broad that an in-depth discussion would require a book-length treatment. People need to impose structure on the world in order to understand it. There is a limit to the number of unrelated facts that anyone can remember. If ideas can be put within a broad, sophisticatedly simple structure, not only are they easier to remember but often new insights become available. In fact, sophisticatedly simple models of the world may be the only ones that work. I have often heard Arnold Zellner say that, to the best of his knowledge, this is true in econometrics. The process of modeling is fundamental to understand...

  20. Linear mixed models a practical guide using statistical software

    West, Brady T; Galecki, Andrzej T


    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  1. Generation companies decision-making modeling by linear control theory

    Gutierrez-Alcaraz, G.; Sheble, Gerald B.


    This paper proposes four decision-making procedures to be employed by electric generating companies as part of their bidding strategies when competing in an oligopolistic market: naive, forward, adaptive, and moving average expectations. Decision-making is formulated in a dynamic framework by using linear control theory. The results reveal that interactions among all GENCOs affect market dynamics. Several numerical examples are reported, and conclusions are presented. (author)
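    Three of the four expectation rules named in this abstract map directly onto simple update equations (the fourth, forward expectations, requires a market model and is omitted). The following functions are a hypothetical sketch, not the paper's control-theoretic formulation:

```python
# Price-expectation rules for a bidding GENCO (hypothetical price series).

def naive(prices):
    # Naive expectations: next price expected to equal the last observed price
    return prices[-1]

def adaptive(prices, prev_expectation, lam=0.5):
    # Adaptive expectations: correct the previous expectation by a
    # fraction lam of the last forecast error
    return prev_expectation + lam * (prices[-1] - prev_expectation)

def moving_average(prices, window=3):
    # Moving-average expectations: average of the last `window` prices
    recent = prices[-window:]
    return sum(recent) / len(recent)
```

    In the paper these rules become the feedback laws of a linear dynamic system, so that the market dynamics under each rule can be analyzed with standard linear control theory.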

  2. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing


    We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt algorithm, requires little prior information and no auxiliary system, making it convenient to identify the tip-tilt disturbance model on-line for real-time control. It allows Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified with experimental data, replayed in simulation.

  3. Available pressure amplitude of linear compressor based on phasor triangle model

    Duan, C. X.; Jiang, X.; Zhi, X. Q.; You, X. K.; Qiu, L. M.


    The linear compressor for cryocoolers possesses the advantages of long-life operation, high efficiency, low vibration and compact structure. It is significant to study the matching mechanisms between the compressor and the cold finger, which determine the working efficiency of the cryocooler. However, the output characteristics of a linear compressor are complicated, since they are affected by many interacting parameters. The existing matching methods are simplified and mainly focus on the compressor efficiency and output acoustic power, while neglecting the important output parameter of pressure amplitude. In this study, a phasor triangle model based on an analysis of the forces on the piston is proposed. It can be used to predict not only the output acoustic power and the efficiency, but also the pressure amplitude of the linear compressor. Calculated results agree well with experimental measurements. With this phasor triangle model, the theoretical maximum output pressure amplitude of the linear compressor can be calculated simply from a known charging pressure and operating frequency. Compared with the mechanical and electrical model of the linear compressor, the new model provides an intuitive understanding of the matching mechanism with a faster computation. The model can also explain the experimentally observed proportional relationship between the output pressure amplitude and the piston displacement. By further model analysis, this phenomenon is confirmed to be an expression of an unmatched design of the compressor. The phasor triangle model may provide an alternative method for compressor design and matching with the cold finger.

  4. Spatial generalised linear mixed models based on distances.

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E


    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time.

  5. Accelerating transient simulation of linear reduced order models.

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad


    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.

  6. Behavioral modeling of the dominant dynamics in input-output transfer of linear(ized) circuits

    Beelen, T.G.J.; Maten, ter E.J.W.; Sihaloho, H.J.; Eijndhoven, van S.J.L.


    We present a powerful procedure for determining both the dominant dynamics of the input-output transfer and the corresponding most influential circuit parameters of a linear(ized) circuit. The procedure consists of several steps in which a specific (sub)problem is solved and its solution is used in…

  7. su(1,2) Algebraic Structure of XYZ Antiferromagnetic Model in Linear Spin-Wave Frame

    Jin Shuo; Xie Binghao; Yu Zhaoxian; Hou Jingmin


    The XYZ antiferromagnetic model in the linear spin-wave frame is shown explicitly to have an su(1,2) algebraic structure: the Hamiltonian can be written as a linear function of the su(1,2) algebra generators. Based on this, the energy eigenvalues are obtained by making use of similarity transformations, and the algebraic diagonalization method is investigated. Some numerical solutions are given, and the results indicate that only one group of solutions is physically acceptable.

  8. A linear model for flow over complex terrain

    Frank, H P [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark)


    A linear flow model similar to WAsP or LINCOM has been developed. Major differences are an isentropic temperature equation, which allows internal gravity waves, and vertical advection of the shear of the mean flow. The importance of these effects is illustrated by examples. Resource maps are calculated from a distribution of geostrophic winds and stratification for Pyhaetunturi Fell in northern Finland and Acqua Spruzza in Italy. Stratification becomes important if the inverse Froude number, formulated with the width of the hill, becomes of order one or greater. (au) EU-JOULE-3. 16 refs.

  9. Linear-quadratic model predictions for tumor control probability

    Yaes, R.J.


    Sigmoid dose-response curves for tumor control are calculated from the linear-quadratic model parameters α and β, obtained from human epidermoid carcinoma cell lines, and are much steeper than the clinical dose-response curves for head and neck cancers. One possible explanation is the presence of small radiation-resistant clones arising from mutations in an initially homogeneous tumor. Using the mutation theory of Delbruck and Luria and of Goldie and Coldman, the authors discuss the implications of such radiation-resistant clones for clinical radiation therapy.
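    The sigmoid curves referred to here come from the standard linear-quadratic cell-survival expression S = exp(-(αD + βdD)) for a total dose D delivered in fractions of size d, combined with the Poisson tumor-control probability TCP = exp(-N₀S). A minimal sketch with illustrative parameter values (not those of the cited cell lines):

```python
import math

def surviving_fraction(D, d, alpha, beta):
    # LQ model: fraction of clonogens surviving total dose D (Gy)
    # delivered in fractions of size d (Gy)
    return math.exp(-(alpha * D + beta * d * D))

def tcp(N0, D, d, alpha, beta):
    # Poisson TCP: probability that no clonogen out of N0 survives
    return math.exp(-N0 * surviving_fraction(D, d, alpha, beta))

# Illustrative values: alpha = 0.3 /Gy, beta = 0.03 /Gy^2, 2 Gy fractions,
# 1e7 initial clonogens; the curve rises steeply over a narrow dose range
curve = [tcp(1e7, D, 2.0, 0.3, 0.03) for D in (40, 50, 60, 70, 80)]
```

    The steepness of `curve` over a 10-20 Gy window is exactly the feature the abstract contrasts with the shallower clinical dose-response curves.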

  10. Inventory model using bayesian dynamic linear model for demand forecasting

    Marisol Valencia-Cárdenas


    An important factor in the manufacturing process is the inventory management of finished product. Industry is constantly looking for better alternatives to establish an adequate plan of production and stored quantities, with optimal cost, obtaining quantities over a time horizon, which permits defining in advance the resources and logistics needed to distribute products on time. The total absence of the historical data required by many statistical forecasting models demands the search for other kinds of accurate techniques. This work presents an alternative that not only permits forecasting in an adjusted way, but also provides optimal quantities to produce and store at an optimal cost, using Bayesian statistics. The proposal is illustrated with real data. Keywords: Bayesian statistics, optimization, inventory model, Bayesian dynamic linear model.
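    In its simplest (local-level) form, a Bayesian dynamic linear model reduces to Kalman-style forecast/update recursions. The sketch below uses hypothetical demand figures and assumed variances, not the real data or the full model of the paper:

```python
# Minimal local-level Bayesian dynamic linear model (West & Harrison style):
# level m_t evolves with variance W; observations carry variance V.
def dlm_local_level(ys, m0=0.0, C0=1e6, V=1.0, W=0.5):
    m, C = m0, C0                 # prior mean and variance of the level
    forecasts = []
    for y in ys:
        a, R = m, C + W           # prior for the next level
        f, Q = a, R + V           # one-step-ahead forecast and its variance
        forecasts.append(f)
        K = R / Q                 # adaptive gain
        m = a + K * (y - f)       # posterior mean after observing demand y
        C = R - K * R             # posterior variance
    return forecasts, m

demand = [100, 105, 98, 110, 107]      # hypothetical monthly demand
fc, level = dlm_local_level(demand)
```

    The diffuse prior (`C0=1e6`) makes the first update essentially adopt the first observation; thereafter the filtered `level` tracks demand smoothly, and `fc` supplies the one-step-ahead forecasts that an inventory optimization would consume.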

  11. Non-linear nuclear engineering models as genetic programming application

    Domingos, Roberto P.; Schirru, Roberto; Martinez, Aquilino S.


    This work presents the Genetic Programming paradigm and a nuclear application. Genetic Programming, a field of Artificial Intelligence based on the concepts of species evolution and natural selection, can be understood as a self-programming process in which the computer is the main agent responsible for the discovery of a program able to solve a given problem. In the present case, the problem was to find a mathematical expression, in symbolic form, able to express the existing relation between the equivalent ratio of a fuel cell, the enrichment of the fuel elements and the multiplication factor. Such an expression would avoid repeated execution of reactor physics codes for core optimization. The results were compared with those obtained by different techniques such as Neural Networks and Linear Multiple Regression. Genetic Programming has been shown to present performance as good as, and in some features superior to, Neural Networks and Linear Multiple Regression. (author). 10 refs., 8 figs., 1 tab.

  12. Wireless Positioning Based on a Segment-Wise Linear Approach for Modeling the Target Trajectory

    Figueiras, Joao; Pedersen, Troels; Schwefel, Hans-Peter


    Positioning solutions in infrastructure-based wireless networks generally operate by exploiting the channel information of the links between the Wireless Devices and fixed networking Access Points. The major challenge of such solutions is the modeling of both the noise properties of the channel measurements and the user mobility patterns. One class of typical human movement patterns is the segment-wise linear approach, which is studied in this paper. Current tracking solutions, such as the Constant Velocity model, hardly handle such segment-wise linear patterns. In this paper we propose a segment-wise linear model, called the Drifting Points model. The model results in increased performance when compared with traditional solutions.

  13. New results for exponential synchronization of linearly coupled ordinary differential systems

    Tong Ping; Chen Shi-Hua


    This paper investigates the exponential synchronization of linearly coupled ordinary differential systems. The intrinsic nonlinear dynamics may not satisfy the QUAD condition or weak-QUAD condition. First, it gives a new method to analyze the exponential synchronization of the systems. Second, two theorems and their corollaries are proposed for the local or global exponential synchronization of the coupled systems. Finally, an application to the linearly coupled Hopfield neural networks and several simulations are provided for verifying the effectiveness of the theoretical results. (paper)

  14. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding


    In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
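    For reference, plain SOR iteration on a small Z-matrix system (nonpositive off-diagonal entries) looks as follows; this is the textbook method, not the specific preconditioned variants compared in the paper:

```python
import numpy as np

def sor(A, b, omega=1.2, tol=1e-10, max_iter=500):
    # Successive over-relaxation for A x = b; assumes nonzero diagonal
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # sum over already-updated entries plus not-yet-updated ones
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Z-matrix example: off-diagonal entries nonpositive, diagonally dominant
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([3.0, 2.0, 3.0])
x = sor(A, b)
```

    Preconditioning in the sense of the paper replaces A and b by PA and Pb for a suitably chosen P before running Gauss-Seidel- or SOR-type sweeps, which is what the comparison theorems rank.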

  15. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi


    The objective of the present study was to assess the comparable applicability of orthogonal projections to latent structures (OPLS) statistical models vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation in the first week of admission and again six months later. All data were primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single-vessel involvement, as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary, rather than alternative, to the available classical regression models such as linear regression.
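    The PLS family behind OPLS can be sketched with a minimal NIPALS implementation of plain PLS1 (single response). This is an illustrative sketch on synthetic data, not the SIMCA P+12 implementation or the orthogonal-filtering step that distinguishes OPLS:

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    # Minimal NIPALS PLS1: returns regression coefficients and centering terms
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                    # weight: direction of X-y covariance
        w = w / np.linalg.norm(w)
        t = Xc @ w                       # scores
        tt = t @ t
        p = Xc.T @ t / tt                # X loadings
        q = (yc @ t) / tt                # y loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - q * t                  # deflate y
        W.append(w); P.append(p); Q.append(q)
    W = np.array(W).T; P = np.array(P).T; Q = np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # coefficients in original X space
    return B, X.mean(axis=0), y.mean()

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

# Synthetic check: with full components, PLS1 reproduces an exact linear relation
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 1.0
B, xm, ym = pls1_fit(X, y, n_components=3)
```

    With fewer components than predictors, PLS regularizes toward the directions most predictive of y, which is what gives PLS/OPLS its extra sensitivity with correlated predictors such as multiple TCD measures.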

  16. Non-Linear Slosh Damping Model Development and Validation

    Yang, H. Q.; West, Jeff


    Propellant tank slosh dynamics are typically represented by a mechanical spring-mass-damper model. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially filled smooth-wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime, where the slosh amplitude is small. With increasing slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth-wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool: one must resolve the thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that, with proper grid resolution, CFD can indeed accurately predict the low damping physics of smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When the slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship.
This discovery can

  17. A quantitative analysis of instabilities in the linear chiral sigma model

    Nemes, M.C.; Nielsen, M.; Oliveira, M.M. de; Providencia, J. da


    We present a method to construct a complete set of stationary states corresponding to small-amplitude motion which naturally includes the continuum solution. The energy-weighted sum rule (EWSR) is shown to provide a quantitative criterion for the importance of instabilities, which are known to occur in non-asymptotically free theories. Our results for the linear σ model should be valid for a large class of models. A unified description of baryon and meson properties in terms of the linear σ model is also given. (author)

  18. Linear collider signal of anomaly mediated supersymmetry breaking model

    Ghosh Dilip Kumar; Kundu, Anirban; Roy, Probir; Roy, Sourov


    Though the minimal model of anomaly mediated supersymmetry breaking has been significantly constrained by recent experimental and theoretical work, there are still allowed regions of the parameter space for moderate to large values of tan β. We show that these regions will be comprehensively probed at a √s = 1 TeV e+e- linear collider. Diagnostic signals to this end are studied by zeroing in on a unique and distinct feature of a large class of models in this genre: a neutral wino-like Lightest Supersymmetric Particle closely degenerate in mass with a wino-like chargino. The pair production processes e+e- → ẽ_L^+ ẽ_L^-, ẽ_R^+ ẽ_R^-, ẽ_L^± ẽ_R^∓, ν̃ anti-ν̃, χ̃_1^0 χ̃_2^0, χ̃_2^0 χ̃_2^0 are all considered at √s = 1 TeV, corresponding to the proposed TESLA linear collider, in two natural categories of mass ordering in the sparticle spectra. The signals analysed comprise multiple combinations of fast charged leptons (any of which can act as the trigger) plus displaced vertices X_D (any of which can be identified by a heavily ionizing track terminating in the detector) and/or associated soft pions with characteristic momentum distributions. (author)

  19. On Active Surge Control of Compression Systems via Characteristic Linearization and Model Nonlinearity Cancellation

    Yohannes S.M. Simamora


    A simple approach to active surge control of compression systems is presented. Specifically, nonlinear components of the pressure-ratio and rotating-speed states of the Moore-Greitzer model are transferred into the input vectors. Subsequently, the compressor characteristic is linearized into two modes, which describe the stable region and the unstable region, respectively. As a result, the system's state and input matrices both appear linear, so that linear realization and analysis are applicable. A linear quadratic regulator plus integrator is then chosen as the closed-loop controller. Simulation showed that the modified model and characteristics can describe surge behavior, while the closed-loop controller can stabilize the system in the unstable operating region; stabilization was achieved when the mass flow was 5.38 per cent less than at the surge point.
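A minimal sketch of the LQR design step (without the integrator), assuming a hypothetical discretized two-state model in place of the actual modified Moore-Greitzer matrices; the gain is found by fixed-point iteration of the discrete Riccati equation:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# hypothetical discretized two-state model (illustrative numbers only)
A = np.array([[0.98, 0.10],
              [-0.10, 0.95]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                  # state weighting
R = np.array([[1.0]])          # input weighting
K = dlqr_gain(A, B, Q, R)

# closed-loop spectral radius below one means the regulator stabilizes the model
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The same recipe applies to each of the two linearized characteristic modes, with the appropriate (A, B) pair.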

  20. Locally supersymmetric D=3 non-linear sigma models

    Wit, B. de; Tollsten, A.K.; Nicolai, H.


    We study non-linear sigma models with N local supersymmetries in three space-time dimensions. For N=1 and 2 the target space of these models is Riemannian or Kähler, respectively. All N>2 theories are associated with Einstein spaces. For N=3 the target space is quaternionic, while for N=4 it generally decomposes into two separate quaternionic spaces, associated with inequivalent supermultiplets. For N=5, 6, 8 there is a unique (symmetric) space for any given number of supermultiplets. Beyond that there are only theories based on a single supermultiplet for N=9, 10, 12 and 16, associated with coset spaces with the exceptional isometry groups F_4(-20), E_6(-14), E_7(-5) and E_8(+8), respectively. For N=3 and N ≥ 5 the D=2 theories obtained by dimensional reduction are two-loop finite. (orig.)

  1. Explicit estimating equations for semiparametric generalized linear latent variable models

    Ma, Yanyuan


    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  2. Planktonic food webs revisited: Reanalysis of results from the linear inverse approach

    Hlaili, Asma Sakka; Niquil, Nathalie; Legendre, Louis


    Identification of the trophic pathway that dominates a given planktonic assemblage is generally based on the distribution of biomasses among food-web compartments, or better, the flows of materials or energy among compartments. These flows are obtained by field observations and a posteriori analyses, including the linear inverse approach. In the present study, we re-analysed carbon flows obtained by inverse analysis at 32 stations in the global ocean and one large lake. Our results do not support two "classical" views of plankton ecology, i.e. that the herbivorous food web is dominated by mesozooplankton grazing on large phytoplankton, and the microbial food web is based on microzooplankton significantly consuming bacteria; our results suggest instead that phytoplankton are generally grazed by microzooplankton, of which they are the main food source. Furthermore, we identified the "phyto-microbial food web", where microzooplankton largely feed on phytoplankton, in addition to the already known "poly-microbial food web", where microzooplankton consume more or less equally various types of food. These unexpected results led to a (re)definition of the conceptual models corresponding to the four trophic pathways we found to exist in plankton, i.e. the herbivorous, multivorous, and two types of microbial food web. We illustrated the conceptual trophic pathways using carbon flows that were actually observed at representative stations. The latter can be calibrated to correspond to any field situation. Our study also provides researchers and managers with operational criteria for identifying the dominant trophic pathway in a planktonic assemblage, these criteria being based on the values of two carbon ratios that could be calculated from flow values that are relatively easy to estimate in the field.

  3. Thermal radiation analysis for small satellites with single-node model using techniques of equivalent linearization

    Anh, N.D.; Hieu, N.N.; Chung, P.N.; Anh, N.T.


    Highlights: • Linearization criteria are presented for a single-node satellite thermal model. • A nonlinear algebraic system for the linearization coefficients is obtained. • The temperature evolutions obtained from different methods are explored. • The temperature means and amplitudes versus the heat capacity are discussed. • The dual-criterion approach yields smaller errors than other approximate methods. - Abstract: In this paper, the method of equivalent linearization is extended to the thermal analysis of a satellite using both conventional and dual criteria of linearization. These criteria are applied to a nonlinear differential equation for a single-node model of the heat transfer of a small satellite in Low Earth Orbit. A system of nonlinear algebraic equations for the linearization coefficients is obtained in closed form and then solved by an iteration method. The temperature evolution, average values and amplitudes versus the heat capacity obtained by various approaches, including the Runge-Kutta algorithm, the conventional and dual criteria of equivalent linearization, and Grande's approach, are compared. Numerical results reveal that the temperature responses obtained from the method of linearization and Grande's approach are quite close to those obtained from the Runge-Kutta method. The dual criterion yields smaller errors than the remaining methods when the nonlinearity of the system increases, namely when the heat capacity varies in the range [1.0, 3.0] × 10^4 J K^-1.
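The core idea of replacing the nonlinear radiative term by a linear one and iterating can be sketched on the steady-state energy balance of a single node; the heat load and surface properties below are made-up values, and the tangent-line replacement of T^4 is a simplified stand-in for the paper's linearization criteria:

```python
import numpy as np

sigma = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
eps_area = 0.8 * 1.0          # emissivity times radiating area (assumed), m^2
Q = 300.0                     # absorbed heat load (assumed), W

# Equivalent linearization of the radiative term: T^4 ~ a + b*T, with the
# tangent line about the current estimate: a = -3*T^4, b = 4*T^3.
T = 250.0                     # initial guess, K
for _ in range(50):
    a, b = -3.0 * T**4, 4.0 * T**3
    # steady state of C*dT/dt = Q - eps_area*sigma*(a + b*T), solved for T
    T = (Q / (eps_area * sigma) - a) / b

T_exact = (Q / (eps_area * sigma)) ** 0.25    # direct quartic-root solution
```

The linearize-solve-relinearize loop converges to the exact radiative equilibrium; the paper's criteria play the same role for the full time-dependent problem.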

  4. Nonlinear aeroacoustic characterization of Helmholtz resonators with a local-linear neuro-fuzzy network model

    Förner, K.; Polifke, W.


    The nonlinear acoustic behavior of Helmholtz resonators is characterized by a data-based reduced-order model, which is obtained by a combination of high-resolution CFD simulation and system identification. It is shown that even in the nonlinear regime, a linear model is capable of describing the reflection behavior at a particular amplitude with quantitative accuracy. This observation motivates the choice of a local-linear model structure for this study, which consists of a network of parallel linear submodels. A so-called fuzzy-neuron layer distributes the input signal over the linear submodels, depending on the root mean square of the particle velocity at the resonator surface. The resulting model structure is referred to as a local-linear neuro-fuzzy network. System identification techniques are used to estimate the free parameters of this model from training data. The training data are generated by CFD simulations of the resonator, with persistent acoustic excitation over a wide range of frequencies and sound pressure levels. The estimated nonlinear, reduced-order models show good agreement with CFD and experimental data over a wide range of amplitudes for several test cases.

  5. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Strandén, I; Lidauer, M


    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi- and conjugate gradient-based iterative methods using iteration on data, is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient-based methods in solving large breeding value problems is supported by our findings.
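A minimal preconditioned conjugate gradient loop with a Jacobi (diagonal) preconditioner, the kind of iteration such solvers are built on; the random symmetric positive-definite system below stands in for actual mixed-model equations:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    z = M_inv_diag * r               # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
n = 50
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # symmetric positive-definite system
b = rng.standard_normal(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

In an iteration-on-data implementation the product A @ p is never formed from an explicit A; it is accumulated record by record, which is where the two- versus three-step reorganization matters.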

  6. New Inference Procedures for Semiparametric Varying-Coefficient Partially Linear Cox Models

    Yunbei Ma


    In biomedical research, one major objective is to identify risk factors and study their risk impacts, as this identification can help clinicians both to make proper decisions and to increase the efficiency of treatments and resource allocation. A two-step penalized procedure is proposed to select linear regression coefficients for the linear components and to identify significant nonparametric varying-coefficient functions for semiparametric varying-coefficient partially linear Cox models. It is shown that the resulting penalized estimators of the linear regression coefficients are asymptotically normal and have oracle properties, and that the resulting estimators of the varying-coefficient functions have optimal convergence rates. A simulation study and an empirical example are presented for illustration.

  7. Linear mixed-effects modeling approach to FMRI group analysis.

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W


    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling continuous explanatory variables (covariates) in the presence of a within-subject (repeated-measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to handle many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. Intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that LME modeling strikes a balance between control of false positives and sensitivity
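As a small illustration of the ICC mentioned above, the sketch below uses the one-way random-effects ANOVA estimator on simulated subject data; this is a simplified stand-in for a full LME fit with crossed random effects:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_rep = 30, 5
subj_sd, res_sd = np.sqrt(2.0), 1.0            # true ICC = 2/(2+1) = 2/3
u = rng.normal(0.0, subj_sd, n_subj)           # subject random effects
y = u[:, None] + rng.normal(0.0, res_sd, (n_subj, n_rep))

# one-way random-effects ANOVA estimator of the intraclass correlation
grand = y.mean()
ms_between = n_rep * ((y.mean(axis=1) - grand) ** 2).sum() / (n_subj - 1)
ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subj * (n_rep - 1))
icc = (ms_between - ms_within) / (ms_between + (n_rep - 1) * ms_within)
```

An LME fit generalizes this by estimating the two variance components directly, which is what allows crossed random effects and confounding fixed effects to be handled.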

  8. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    Li, Pu; Vu, Quoc Dong


    Parameter estimation represents one of the most significant challenges in systems biology, because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analysing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and for unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a
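The column-dependency analysis can be sketched with an SVD: for a hypothetical sensitivity matrix in which one column is a linear combination of two others, the numerical rank drops and the null-space vector exposes the interrelated parameter combination:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((100, 4))          # hypothetical output sensitivities
S[:, 2] = 2.0 * S[:, 0] - 0.5 * S[:, 1]    # induced parameter interrelationship

rank = np.linalg.matrix_rank(S)            # drops to 3: one non-identifiable direction
_, _, Vt = np.linalg.svd(S)
null_vec = Vt[-1]                          # direction along which outputs are insensitive
residual = np.linalg.norm(S @ null_vec)    # ~0 confirms the linear dependency
```

With noisy or nearly dependent columns, near-zero singular values play the same role as the exact null space here, separating practical from structural non-identifiability.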

  9. A Linearized Large Signal Model of an LCL-Type Resonant Converter

    Hong-Yu Li


    In this work, an LCL-type resonant dc/dc converter with a capacitive output filter is modeled in two stages. In the first, high-frequency ac stage, all ac signals are decomposed into two orthogonal vectors in a synchronous rotating d–q frame using multi-frequency modeling. In the dc stage, all dc quantities are represented by their average values with average state-space modeling. A nonlinear two-stage model is then created by means of a non-linear link. By aligning the transformer voltage with the d-axis, the nonlinear link can be eliminated, and the whole converter can be modeled by a single set of linear state-space equations. Furthermore, a feedback control scheme can be formed according to the steady-state solutions. Simulation and experimental results have proven that the resulting model is good for fast simulation and state-variable estimation.

  10. Direction of Effects in Multiple Linear Regression Models.

    Wiedermann, Wolfgang; von Eye, Alexander


    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
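A minimal sketch of the third-moment idea: when the true predictor is skewed and the error is normal, residuals of the correctly specified regression are nearly symmetric, while residuals of the reversed regression inherit skewness. The data-generating model is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
x = rng.exponential(1.0, n)              # skewed explanatory variable
y = 0.8 * x + rng.normal(0.0, 1.0, n)    # linear effect with normal error

def residual_skewness(a, b):
    """Third standardized central moment of residuals of the regression b ~ a."""
    X = np.column_stack([np.ones_like(a), a])
    beta = np.linalg.lstsq(X, b, rcond=None)[0]
    r = b - X @ beta
    return np.mean(r**3) / np.mean(r**2) ** 1.5

skew_correct = abs(residual_skewness(x, y))   # true direction: near zero
skew_wrong = abs(residual_skewness(y, x))     # reversed direction: skew remains
```

Comparing the two residual skewnesses is the bivariate special case; the multiple-regression extension compares competing models' residual third moments in the same spirit.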

  11. Fourth standard model family neutrino at future linear colliders

    Ciftci, A.K.; Ciftci, R.; Sultansoy, S.


    It is known that flavor democracy favors the existence of a fourth standard model (SM) family. In order to give nonzero masses to the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for the fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data, is proposed. The pair production of fourth-SM-family Dirac (ν_4) and Majorana (N_1) neutrinos at future linear colliders with √s = 500 GeV, 1 TeV, and 3 TeV is considered. The cross section for the process e+e- → ν_4 ν_4 (N_1 N_1) and the branching ratios for the possible decay modes of both neutrinos are determined. The decays of the fourth-family neutrinos into muon channels (ν_4 (N_1) → μ^∓ W^±) provide the cleanest signature at e+e- colliders; meanwhile, in our parametrization this channel is dominant. W bosons produced in decays of the fourth-family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth-family neutrinos at √s = 500 GeV linear colliders, taking into account di-muon plus four-jet events as signatures

  12. A note on probabilistic models over strings: the linear algebra approach.

    Bouchard-Côté, Alexandre


    Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proving method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low rank matrix approximation methods, to string-valued inference problems.
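The linear algebra view can be illustrated on a toy weighted automaton: the normalization over all strings reduces to a matrix inverse, provided the summed transition matrix has spectral radius below one. The matrices below are arbitrary illustrative weights, not the TKF91 model:

```python
import numpy as np

Ta = np.array([[0.2, 0.1], [0.0, 0.3]])   # transition weights for letter 'a'
Tb = np.array([[0.1, 0.2], [0.1, 0.1]])   # transition weights for letter 'b'
T = Ta + Tb                               # summed transition matrix
start = np.array([1.0, 0.0])              # initial weights
stop = np.array([0.2, 0.2])               # final weights

# normalization over strings of every length: Z = start^T (I - T)^-1 stop
Z = start @ np.linalg.solve(np.eye(2) - T, stop)

def weight_up_to(n_max):
    """Brute-force sum of weights of all strings of length <= n_max."""
    total, layer = 0.0, [np.eye(2)]       # products of letter matrices per length
    for _ in range(n_max + 1):
        total += sum(start @ M @ stop for M in layer)
        layer = [M @ L for M in layer for L in (Ta, Tb)]
    return total
```

The brute-force enumeration converges to Z as the length bound grows, because the sum of matrix products over all length-n strings equals T^n.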

  13. A simple non-linear model of immune response

    Gutnikov, Sergei; Melnikov, Yuri


    It is still unknown why the adaptive immune response in the natural immune system, based on clonal proliferation of lymphocytes, requires the interaction of at least two different cell types with the same antigen. We present a simple mathematical model illustrating that a system with separate cell types for antigen recognition and pathogen destruction provides more robust adaptive immunity than a system where just one cell type is responsible for both recognition and destruction. The model is over-simplified, as we did not intend to describe the natural immune system. However, our model provides a tool for testing the proposed approach through qualitative analysis of the immune system dynamics, in order to construct more sophisticated models of the immune systems that exist in living nature. It also opens a possibility to explore specific features of highly non-linear dynamics in nature-inspired computational paradigms like artificial immune systems and immunocomputing. We expect this paper to be of interest not only for mathematicians but also for biologists; therefore we have made an effort to explain the mathematics in sufficient detail for readers without a professional mathematical background

  14. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J


    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
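The statistical-shape-model step can be sketched with a toy PCA model of flattened landmark coordinates: training shapes generated from two modes are decomposed by SVD, and a new shape from the same generative model is recovered by projection onto the retained modes. All data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_shapes, n_coords = 20, 30                      # 15 landmarks in 2-D, flattened
mean_shape = rng.standard_normal(n_coords)
true_modes = rng.standard_normal((2, n_coords))  # two true variation modes
weights = rng.standard_normal((n_shapes, 2))
data = mean_shape + weights @ true_modes         # training shapes

# build the SSM: PCA via SVD of the centered training matrix
mu = data.mean(axis=0)
_, _, Vt = np.linalg.svd(data - mu, full_matrices=False)
modes = Vt[:2]                                   # retained principal modes

# reconstruct a new shape from the same generative model by projection
new_shape = mean_shape + np.array([0.5, -1.0]) @ true_modes
b = modes @ (new_shape - mu)                     # mode coefficients
recon = mu + b @ modes
err = np.linalg.norm(recon - new_shape)          # exact up to round-off
```

Real SSM pipelines add landmark correspondence and rigid alignment before the PCA, and reconstruct incomplete surfaces by fitting the mode coefficients to the available points.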

  15. The Non-Linear Relationship between BMI and Health Care Costs and the Resulting Cost Fraction Attributable to Obesity.

    Laxy, Michael; Stark, Renée; Peters, Annette; Hauner, Hans; Holle, Rolf; Teuner, Christina M


    This study aims to analyse the non-linear relationship between Body Mass Index (BMI) and direct health care costs, and to quantify the resulting cost fraction attributable to obesity in Germany. Five cross-sectional surveys of cohort studies in southern Germany were pooled, resulting in data on 6757 individuals (31-96 years old). Self-reported information on health care utilisation was used to estimate direct health care costs for the year 2011. The relationship between measured BMI and annual costs was analysed using generalised additive models, and the cost fraction attributable to obesity was calculated. We found a non-linear association between BMI and health care costs, with a continuously increasing slope for increasing BMI and no clear threshold. Taking the non-linear BMI-cost relationship into account, a shift in the BMI distribution that lowers each individual's BMI by one point is associated with a 2.1% reduction in mean direct costs in the population. If obesity were eliminated, and the BMI of all obese individuals lowered to 29.9 kg/m², mean direct costs would be reduced by 4.0% in the population. The results show a non-linear relationship between BMI and health care costs, with very high costs for a few individuals with high BMI. This indicates that population-based interventions in combination with selective measures for very obese individuals might be the preferred strategy.

  16. Use of multivariate extensions of generalized linear models in the analysis of data from clinical trials

    ALONSO ABAD, Ariel; Rodriguez, O.; TIBALDI, Fabian; CORTINAS ABRAHANTES, Jose


    In medical studies, categorical endpoints occur quite often. Even though some models for handling these multicategorical variables have been developed, their use is not yet common. This work shows an application of multivariate generalized linear models to the analysis of clinical trials data. After a theoretical introduction, models for ordinal and nominal responses are applied and the main results are discussed. multivariate analysis; multivariate logistic regression; multicategor...

  17. Some computer simulations based on the linear relative risk model

    Gilbert, E.S.


    This report presents the results of computer simulations designed to evaluate and compare the performance of the likelihood ratio statistic and the score statistic for making inferences about the linear relative risk model. The work was motivated by data on workers exposed to low doses of radiation, and the report includes illustrations of several procedures for obtaining confidence limits for the excess relative risk coefficient based on data from three studies of nuclear workers. The computer simulations indicate that with small sample sizes and highly skewed dose distributions, asymptotic approximations to the score statistic or to the likelihood ratio statistic may not be adequate. For testing the null hypothesis that the excess relative risk is equal to zero, the asymptotic approximation to the likelihood ratio statistic was adequate, but use of the asymptotic approximation to the score statistic rejected the null hypothesis too often. Frequently the likelihood was maximized at the lower constraint, and when this occurred, the asymptotic approximations for the likelihood ratio and score statistics did not perform well in obtaining upper confidence limits. The score statistic and the likelihood ratio statistic were found to perform comparably in terms of power and width of the confidence limits. It is recommended that with modest sample sizes, confidence limits be obtained using computer simulations based on the score statistic. Although nuclear worker studies are emphasized in this report, its results are relevant for any study investigating linear dose-response functions with highly skewed exposure distributions. 22 refs., 14 tabs
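A sketch of likelihood-ratio confidence limits for the excess relative risk coefficient in a Poisson model with rate λ₀(1 + βd) and a skewed dose distribution; the data are simulated, not the nuclear-worker data, and the baseline rate is profiled out in closed form:

```python
import numpy as np

rng = np.random.default_rng(7)
lam0_true, beta_true = 10.0, 0.5
dose = rng.exponential(1.0, 2000)            # highly skewed dose distribution
counts = rng.poisson(lam0_true * (1.0 + beta_true * dose))

def profile_loglik(beta):
    """Poisson log-likelihood with the baseline rate profiled out in closed form."""
    rr = 1.0 + beta * dose                   # linear relative risk
    if np.any(rr <= 0.0):
        return -np.inf                       # beta below the lower constraint
    lam0 = counts.sum() / rr.sum()           # MLE of the baseline for fixed beta
    mu = lam0 * rr
    return float(np.sum(counts * np.log(mu) - mu))

betas = np.linspace(-0.2, 2.0, 1101)
ll = np.array([profile_loglik(b) for b in betas])
beta_hat = betas[ll.argmax()]
# 95% likelihood-ratio limits: betas whose profile log-likelihood is
# within chi2(1, 0.95)/2 = 1.92 of the maximum
inside = betas[ll >= ll.max() - 1.92]
lo, hi = inside.min(), inside.max()
```

The hard lower constraint on β (rates must stay positive) is visible here as the -inf region, which is exactly where the asymptotic approximations discussed above break down.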

  18. A Non-linear Stochastic Model for an Office Building with Air Infiltration

    Thavlov, Anders; Madsen, Henrik


    This paper presents a non-linear heat dynamic model for a multi-room office building with air infiltration. Several linear and non-linear models, with and without air infiltration, are investigated and compared. The models are formulated using stochastic differential equations and the model...

  19. Non-linear models for the detection of impaired cerebral blood flow autoregulation.

    Chacón, Max; Jara, José Luis; Miranda, Rodrigo; Katsogridakis, Emmanuel; Panerai, Ronney B


    The ability to discriminate between normal and impaired dynamic cerebral autoregulation (CA), based on measurements of spontaneous fluctuations in arterial blood pressure (BP) and cerebral blood flow (CBF), has considerable clinical relevance. We studied 45 normal subjects at rest and under hypercapnia induced by breathing a mixture of carbon dioxide and air. Non-linear models with BP as input and CBF velocity (CBFV) as output were implemented with support vector machines (SVM), using separate recordings for learning and validation. Dynamic SVM implementations used either moving-average or autoregressive structures. The efficiency of dynamic CA was estimated from the model's derived CBFV response to a step change in BP as an autoregulation index, for both linear and non-linear models. Non-linear models with recurrences (autoregressive) showed the best results, with CA indexes of 5.9 ± 1.5 in normocapnia and 2.5 ± 1.2 in hypercapnia, with an area under the receiver operating characteristic curve of 0.955. The high performance achieved by non-linear SVM models in detecting deterioration of dynamic CA should encourage further assessment of their applicability to clinical conditions where CA might be impaired.
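The step-response-based index can be sketched with a linear ARX model fitted by least squares; the synthetic BP-to-CBFV dynamics below (a first-order, partially recovering response) are illustrative, not the SVM models of the study:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
bp = rng.standard_normal(n)                      # pressure fluctuations (input)
cbfv = np.zeros(n)                               # flow velocity (output)
a_t, b0_t, b1_t = 0.8, 1.0, -0.9                 # true ARX(1,2) coefficients
for t in range(1, n):
    cbfv[t] = (a_t * cbfv[t - 1] + b0_t * bp[t] + b1_t * bp[t - 1]
               + 0.05 * rng.standard_normal())

# least-squares ARX fit: cbfv[t] = a*cbfv[t-1] + b0*bp[t] + b1*bp[t-1]
X = np.column_stack([cbfv[1:-1], bp[2:], bp[1:-1]])
a, b0, b1 = np.linalg.lstsq(X, cbfv[2:], rcond=None)[0]

# response of the fitted model to a unit step in pressure: flow jumps,
# then settles part-way back toward baseline; the recovery shape is
# what an autoregulation index grades
y = np.zeros(30)
for t in range(30):
    y[t] = a * (y[t - 1] if t else 0.0) + b0 + b1 * (1.0 if t else 0.0)
```

An autoregulation index then scores how fast and how completely the fitted step response returns toward baseline; the SVM models of the study replace the least-squares fit with a non-linear learner but use the same step-response reasoning.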

  20. Distributing Correlation Coefficients of Linear Structure-Activity/Property Models

    Sorana D. BOLBOACA


    Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing in risk assessment, as well as to increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the pattern of the distribution of correlation coefficients associated with simple linear relationships linking compound structure with activity. A set of the most common ordnance compounds found at naval facilities, with a limited data set covering a range of toxicities on the aquatic ecosystem and a set of seven properties, was studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit the Beta distribution, 40% the Generalized Pareto distribution, and 12% the Pert distribution.

  1. Modeling and analysis of linearized wheel-rail contact dynamics

    Soomro, Z.


    The dynamics of railway vehicles are nonlinear and depend upon several factors, including vehicle speed, normal load, and adhesion level. The presence of contaminants on the railway track makes them unpredictable as well. Therefore, in order to develop an effective control strategy, it is important to analyze the effect of each factor on the dynamic response thoroughly. In this paper a linearized model of a railway wheel-set is developed and then analyzed by varying the speed and adhesion level while keeping the normal load constant. A wheel-set is the wheel-axle assembly of a railroad car. Contact-patch analysis is the study of the deformation of solids that touch each other at one or more points. (author)

  2. Non Abelian T-duality in Gauged Linear Sigma Models

    Bizet, Nana Cabo; Martínez-Merino, Aldo; Zayas, Leopoldo A. Pando; Santos-Silva, Roberto


    Abelian T-duality in Gauged Linear Sigma Models (GLSM) forms the basis of the physical understanding of Mirror Symmetry as presented by Hori and Vafa. We consider an alternative formulation of Abelian T-duality on GLSMs as a gauging of a global U(1) symmetry with the addition of appropriate Lagrange multipliers. For GLSMs with Abelian gauge groups and without superpotential we reproduce the dual models introduced by Hori and Vafa. We extend the construction to formulate non-Abelian T-duality on GLSMs with global non-Abelian symmetries. The equations of motion that lead to the dual model are obtained for a general group; in general they depend on semi-chiral superfields, while for cases such as SU(2) they depend on twisted chiral superfields. We solve the equations of motion for an SU(2) gauged group with a choice of a particular Lie algebra direction of the vector superfield. This direction covers a non-Abelian sector that can be described by a family of Abelian dualities. The dual model Lagrangian depends on twisted chiral superfields and a twisted superpotential is generated. We explore some non-perturbative aspects by making an Ansatz for the instanton corrections in the dual theories. We verify that the effective potential for the U(1) field strength in a fixed configuration on the original theory matches that of the dual theory. Imposing restrictions on the vector superfield, more general non-Abelian dual models are obtained. We analyze the dual models via the geometry of their SUSY vacua.

  3. A comparison of linear interpolation models for iterative CT reconstruction.

    Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric


    Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but the significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects
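The flavour of the interpolation choice can be seen in a toy Joseph-style projector: step along the major axis of the ray and linearly interpolate the image between the two neighbouring rows at each column. This is an illustrative sketch with simplified geometry and normalization (assumptions), not any vendor's implementation.

```python
import numpy as np

# Toy Joseph-style projector (assumed geometry: ray y = y0 + slope*x with
# |slope| <= 1, unit pixel size). At each column the image is linearly
# interpolated between the two neighbouring rows.
def joseph_projection(img, y0, slope):
    ny, nx = img.shape
    total = 0.0
    for x in range(nx):
        y = y0 + slope * x
        j = int(np.floor(y))
        w = y - j                       # interpolation weight between rows j, j+1
        if 0 <= j and j + 1 < ny:
            total += (1.0 - w) * img[j, x] + w * img[j + 1, x]
    return total * np.sqrt(1.0 + slope ** 2)   # ray length per unit step in x

img = np.ones((16, 16))
flat = joseph_projection(img, 8.0, 0.0)        # horizontal ray
tilted = joseph_projection(img, 4.0, 0.5)      # oblique ray
```

For a constant image the line integral should equal the traversed length, which is a quick sanity check on any forward projection model.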

  4. Mathematical modelling in engineering: A proposal to introduce linear algebra concepts

    Andrea Dorila Cárcamo


    The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasize the development of mathematical abilities primarily associated with modelling and interpreting, which are not limited to calculus abilities alone. With this in mind, an instructional design based on mathematical modelling and emergent heuristic models was elaborated for the construction of specific linear algebra concepts: span and spanning set. It was applied to first-year engineering students. Results suggest that this type of instructional design contributes to the construction of these mathematical concepts, can favour first-year engineering students' understanding of key linear algebra concepts, and can potentiate the development of higher-order skills.

  5. Study on non-linear bistable dynamics model based EEG signal discrimination analysis method.

    Ying, Xiaoguo; Lin, Han; Hui, Guohua


    Electroencephalography (EEG) is the recording of electrical activity along the scalp: EEG measures the voltage fluctuations resulting from ionic current flows within the neurons of the brain. The EEG signal is regarded as one of the most important signals to be studied over the next 20 years. In this paper, an EEG signal discrimination method based on a non-linear bistable dynamical model is proposed. EEG signals were processed by the non-linear bistable dynamical model, and their features were characterized by a coherence index. Experimental results showed that the proposed method could properly extract the features of different EEG signals.
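The core idea, that a bistable non-linear system responds qualitatively differently to weak and strong inputs, can be sketched with a driven double-well system dx/dt = a·x − b·x³ + s(t). All parameters and drive signals below are assumptions for illustration, not the paper's EEG model.

```python
import numpy as np

# Double-well (bistable) system with stable states near x = ±1.
a_coef, b_coef, dt = 1.0, 1.0, 0.01

def bistable_response(signal):
    x = np.zeros(len(signal))
    for k in range(1, len(signal)):
        drift = a_coef * x[k - 1] - b_coef * x[k - 1] ** 3
        x[k] = x[k - 1] + dt * (drift + signal[k - 1])
    return x

t = np.arange(0.0, 50.0, dt)
weak = bistable_response(0.1 * np.sin(0.5 * t))    # stays trapped in one well
strong = bistable_response(2.0 * np.sin(0.5 * t))  # hops between the wells
```

A coherence-type index would then quantify how faithfully the well-hopping of the output tracks the input signal.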

  6. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo


    This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule, with lower expected energy bills that, given fuel cell outages, yield potential savings exceeding 6 percent.

  7. Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor

    Metscher, Jonathan F.; Lewandowski, Edward J.


    Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and are also compared against each other. Results show that both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables a complete ASC model to be developed, and simulations can be completed quickly compared to more complex multi-dimensional models. These models allow better insight into overall Stirling convertor performance, aid Stirling power system modeling, and will in the future support NASA mission planning for Stirling-based power systems.

  8. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric


    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
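A feedstock-slate LP of the kind described reduces to a few lines with `scipy.optimize.linprog`. The margins, availabilities, and capacity below are invented for illustration, not NREL/INL data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical feedstocks: [corn stover, forest residue, switchgrass].
margin = np.array([12.0, 9.0, 11.0])     # $ of product value net of cost, per dry ton
availability = [400.0, 600.0, 300.0]     # dry tons/day available of each feedstock
capacity = 800.0                         # dry tons/day total plant throughput

# linprog minimizes, so negate the margins to maximize total margin.
res = linprog(
    c=-margin,
    A_ub=[[1.0, 1.0, 1.0]], b_ub=[capacity],
    bounds=[(0.0, a) for a in availability],
)
slate = res.x                            # optimal tons/day of each feedstock
```

The optimizer fills the plant with the highest-margin feedstocks first and tops up with the cheapest remaining one, exactly the slate-selection logic a petroleum refiner's crude LP follows; the shadow prices of the binding constraints give the breakeven-price analysis mentioned above.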

  9. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    Sorribas Albert


    Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.

  10. A Detailed Analytical Study of Non-Linear Semiconductor Device Modelling

    Umesh Kumar


    Non-linear lumped models for the Gunn diode and the SCR, and a memristive circuit model for the p-n junction diode, have been developed. The results of computer-simulated examples have been presented in each case. The non-linear lumped model for the Gunn diode is a unified model, as it describes the diffusion effects as the domain travels from cathode to anode. An additional feature of this model is that it describes the domain extinction and nucleation phenomena in Gunn diodes with the help of a simple timing circuit. The non-linear lumped model for the SCR is general and is valid under any mode of operation in any circuit environment. The memristive circuit model for p-n junction diodes is capable of simulating realistically the diode's dynamic behavior under reverse, forward and sinusoidal operating modes. The model uses the memristor, a charge-controlled resistor, to mimic various second-order effects due to conductivity modulation. It is found that both the storage time and the fall time of the diode can be accurately predicted.

  11. Mathematical Modelling in Engineering: An Alternative Way to Teach Linear Algebra

    Domínguez-García, S.; García-Planas, M. I.; Taberna, J.


    Technological advances require that basic science courses for engineering, including Linear Algebra, emphasize the development of mathematical strengths associated with modelling and the interpretation of results, which are not limited to calculus abilities alone. Based on this consideration, we have proposed a project-based learning approach, giving a dynamic…

  12. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    Ramo, Nicole L.; Puttlitz, Christian M.


    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
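The history-variable idea can be shown in its simplest, linear form: with a single exponential relaxation term, the convolution over the entire strain history collapses into one internal variable that is updated recursively, so only the previous step needs to be stored. The kernel below is an assumed single-term Prony series; the paper's contribution is making such coefficients strain-dependent.

```python
import numpy as np

# Assumed relaxation kernel g(t) = g_inf + g1 * exp(-t / tau).
g_inf, g1, tau = 1.0, 0.5, 2.0

def stress_history(strain, dt):
    """Stress via a single recursively updated history variable h."""
    decay = np.exp(-dt / tau)
    h = 0.0
    sigma = np.zeros(len(strain))
    for n in range(1, len(strain)):
        d_eps = strain[n] - strain[n - 1]
        # Update h from the previous step only, instead of re-summing the past.
        h = decay * h + g1 * np.exp(-dt / (2.0 * tau)) * d_eps
        sigma[n] = g_inf * strain[n] + h
    return sigma

dt = 0.01
t = np.arange(0.0, 10.0, dt)
strain = np.where(t >= 0.5, 0.1, 0.0)      # step strain -> stress relaxation
sigma = stress_history(strain, dt)
```

The stress jumps to (g_inf + g1)·ε at the step and relaxes toward g_inf·ε, and the cost per time step is constant regardless of how long the load history is, which is the computational advantage the abstract describes.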

  13. Wavelet-linear genetic programming: A new approach for modeling monthly streamflow

    Ravansalar, Masoud; Rajaee, Taher; Kisi, Ozgur


    Streamflow is an important and effective factor in stream ecosystems, and its accurate prediction is an essential issue in water resources and environmental engineering systems. A hybrid wavelet-linear genetic programming (WLGP) model, which combines a discrete wavelet transform (DWT) and linear genetic programming (LGP), was used in this study to predict the monthly streamflow (Q) at two gauging stations, Pataveh and Shahmokhtar, on the Beshar River near Yasuj, Iran. In the proposed WLGP model, the wavelet analysis was linked to the LGP model: the original streamflow time series were decomposed into sub-time series comprising wavelet coefficients. The results were compared with single LGP, artificial neural network (ANN), hybrid wavelet-ANN (WANN) and multiple linear regression (MLR) models, using some of the commonly utilized relevant statistics. The Nash coefficients (E) were found to be 0.877 and 0.817 for the WLGP model at the Pataveh and Shahmokhtar stations, respectively. The comparison of the results showed that the WLGP model could significantly increase the streamflow prediction accuracy at both stations. Since the results demonstrate a closer approximation of the peak streamflow values by the WLGP model, this model could be utilized for the simulation of streamflow prediction one month ahead.
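The decomposition step can be sketched with a hand-rolled one-level Haar transform, an assumed stand-in for the DWT used in the paper: the approximation series carries the smooth part of the flow record and the detail series the fluctuations, and both would be fed to the downstream LGP model.

```python
import numpy as np

def haar_dwt(x):
    """One Haar decomposition level: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one level, confirming the decomposition is lossless."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

q = np.array([3.0, 5.0, 4.0, 8.0, 2.0, 1.0, 6.0, 7.0])  # toy monthly flows
approx1, detail1 = haar_dwt(q)
```

Because the transform is orthonormal, energy is preserved across the sub-series, so nothing in the record is lost by modelling the components separately.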

  14. On the analysis of clonogenic survival data: Statistical alternatives to the linear-quadratic model

    Unkel, Steffen; Belka, Claus; Lauber, Kirsten


    The most frequently used method to quantitatively describe the response to ionizing irradiation in terms of clonogenic survival is the linear-quadratic (LQ) model. In the LQ model, the logarithm of the surviving fraction is regressed on the radiation dose by means of a second-degree polynomial. The ratio of the estimated parameters for the linear and quadratic terms, respectively, represents the dose at which both terms have the same weight in the abrogation of clonogenic survival. This ratio is known as the α/β ratio. However, there are plausible scenarios in which the α/β ratio fails to sufficiently reflect differences between dose-response curves, for example when curves with similar α/β ratios but different overall steepness are being compared. In such situations, the interpretation of the LQ model is severely limited. Colony formation assays were performed in order to measure the clonogenic survival of nine human pancreatic cancer cell lines and immortalized human pancreatic ductal epithelial cells upon irradiation at 0-10 Gy. The resulting dataset was subjected to LQ regression and non-linear log-logistic regression. Dimensionality reduction of the data was performed by cluster analysis and principal component analysis. Both the LQ model and the non-linear log-logistic regression model resulted in accurate approximations of the observed dose-response relationships in the dataset of clonogenic survival. However, in contrast to the LQ model, the non-linear regression model allowed the discrimination of curves with different overall steepness but similar α/β ratios and revealed an improved goodness-of-fit. Additionally, the estimated parameters in the non-linear model exhibit a more direct interpretation than the α/β ratio. Dimensionality reduction of clonogenic survival data by means of cluster analysis was shown to be a useful tool for classifying radioresistant and sensitive cell lines. More quantitatively, principal component analysis allowed
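The LQ fit itself is a two-parameter least-squares problem: regress −ln(SF) on D and D² with no intercept and take the ratio of the coefficients. A minimal sketch on synthetic survival data (the α and β used to generate it are assumptions):

```python
import numpy as np

# Synthetic clonogenic survival: SF(D) = exp(-(alpha*D + beta*D**2)).
dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])          # Gy
surviving_fraction = np.exp(-(0.3 * dose + 0.03 * dose ** 2))  # assumed data

# Regress -ln(SF) on [D, D**2] through the origin.
D = np.column_stack([dose, dose ** 2])
alpha, beta = np.linalg.lstsq(D, -np.log(surviving_fraction), rcond=None)[0]
alpha_beta_ratio = alpha / beta   # dose at which both terms contribute equally
```

Two curves generated with (α, β) = (0.3, 0.03) and (0.6, 0.06) would return the same ratio of 10 Gy despite very different steepness, which is exactly the limitation of the α/β ratio the abstract points out.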

  15. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Suprayitno, Hitapriya; Ratnasari, Vita


    Transport modelling is important. For certain cases the conventional model still has to be used, for which having a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and applied in trip production modelling. Therefore, investigating trip production modelling practice in Indonesia and trying to formulate a better modelling method for ensuring model quality is necessary. The results of this research are presented as follows. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the Confidence Interval of the Predicted Value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that the sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. These observations lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good Confidence Interval of the Predicted Value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
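The Confidence Interval of the Predicted Value has a closed form for simple linear regression. A sketch with invented trip-production data (all numbers are assumptions) computes a 95% prediction interval at a new predictor value x0:

```python
import numpy as np
from scipy import stats

# Hypothetical zone data: predictor x (e.g. households) vs. observed trips y.
x = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0, 400.0])
y = np.array([210.0, 300.0, 410.0, 480.0, 620.0, 690.0, 810.0])

n = len(x)
b1, b0 = np.polyfit(x, y, 1)                   # slope, intercept
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))      # residual standard error

x0 = 275.0                                     # new zone to predict
y0 = b0 + b1 * x0
se = s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
t_crit = stats.t.ppf(0.975, df=n - 2)
interval = (y0 - t_crit * se, y0 + t_crit * se)  # 95% prediction interval
```

A model can report an excellent R2 and still produce an interval too wide to be useful, which is why the abstract argues for quoting both measures rather than R2 alone.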

  16. Estimation of non-linear continuous time models for the heat exchange dynamics of building integrated photovoltaic modules

    Jimenez, M.J.; Madsen, Henrik; Bloem, J.J.


    This paper focuses on a method for linear or non-linear continuous time modelling of physical systems using discrete time data. This approach facilitates a more appropriate modelling of more realistic non-linear systems. Particularly concerning advanced building components, convective and radiative heat interchanges are non-linear effects and represent significant contributions in a variety of components such as photovoltaic integrated facades or roofs and those using these effects as passive cooling strategies. Since models are approximations of the physical system and data are encumbered with noise, it is found that a description of the non-linear heat transfer is essential. The resulting model is a non-linear first-order stochastic differential equation for the heat transfer of the PV component.

  17. Impact of using linear optimization models in dose planning for HDR brachytherapy

    Holm, Aasa; Larsson, Torbjoern; Carlsson Tedgren, Aasa


    Purpose: Dose plans generated with optimization models hitherto used in high-dose-rate (HDR) brachytherapy have shown a tendency to yield longer dwell times than manually optimized plans. Concern has been raised about the corresponding undesired hot spots, and various methods to mitigate these have been developed. The hypotheses upon which this work is based are (a) that one cause for the long dwell times is the use of objective functions comprising simple linear penalties and (b) that alternative penalties, as these are piecewise linear, would lead to reduced lengths of individual dwell times. Methods: The characteristics of the linear penalties and the piecewise linear penalties are analyzed mathematically. Experimental comparisons between the two types of penalties are carried out retrospectively for a set of prostate cancer patients. Results: When the two types of penalties are compared, significant changes can be seen in the dwell times, while most dose-volume parameters do not differ significantly. On average, total dwell times were reduced by 4.2%, with a reduction of maximum dwell times by 25%, when the alternative penalties were used. Conclusions: The use of linear penalties in optimization models for HDR brachytherapy is one cause of the undesired long dwell times that arise in mathematically optimized plans. By introducing alternative penalties, a significant reduction in dwell times can be achieved for HDR brachytherapy dose plans. Although various measures for mitigating the long dwell times are already available, the observation that linear penalties contribute to their appearance is of fundamental interest.

  18. Linear and non-linear relations between psychosocial job characteristics, subjective outcomes, and sickness absence: baseline results from SMASH

    Jonge, J. de; Reuvers, M.M.E.N.; Houtman, I.L.D.; Bongers, P.M.; Kompier, M.A.J.


    This study investigates the demand-control-support (DCS) model by (a) using a more focused measure of job control, (b) testing for interactive and non-linear relationships, and (c) further extending the model to the prediction of an objective outcome measure (i.e., company-administered sickness absence).

  19. Linear models for sound from supersonic reacting mixing layers

    Chary, P. Shivakanth; Samanta, Arnab


    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how they radiate to the far field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to nonlinear calculations; this is achieved here by solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with less spreading of the mixing layer when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to yield a pronounced effect on the slow-mode radiation by reducing its modal growth.

  20. Linear programming model can explain respiration of fermentation products

    Möller, Philip; Liu, Xiaochen; Schuster, Stefan


    Many differentiated cells rely primarily on mitochondrial oxidative phosphorylation for generating the energy, in the form of ATP, needed for cellular metabolism. In contrast, most tumor cells rely on aerobic glycolysis, leading to lactate, to about the same extent as on respiration. Warburg found that cancer cells, despite having mitochondria to support oxidative phosphorylation, tend to ferment glucose or other energy sources into lactate even in the presence of sufficient oxygen, which is an inefficient way to generate ATP. This effect also occurs in striated muscle cells, activated lymphocytes and microglia, endothelial cells and several other mammalian cell types, a phenomenon termed the “Warburg effect”. The effect is paradoxical at first glance because the ATP production rate of aerobic glycolysis is much slower than that of respiration, so the energy demand would seem better met by pure oxidative phosphorylation. We tackle this question by building a minimal model comprising three combined reactions. The new aspect in extension to earlier models is that we take into account the possible uptake and oxidation of the fermentation products. We examine the case where the cell can allocate protein on several enzymes in a varying distribution, and we model this by a linear programming problem in which the objective is to maximize the ATP production rate under different combinations of constraints on enzymes. Depending on the cost of reactions and the limitation of substrates, this leads to pure respiration, pure fermentation, or a mixture of respiration and fermentation. The model predicts that fermentation products are only oxidized when glucose is scarce or its uptake is severely limited. PMID:29415045

  1. Linear programming model can explain respiration of fermentation products.

    Möller, Philip; Liu, Xiaochen; Schuster, Stefan; Boley, Daniel


    Many differentiated cells rely primarily on mitochondrial oxidative phosphorylation for generating the energy, in the form of ATP, needed for cellular metabolism. In contrast, most tumor cells rely on aerobic glycolysis, leading to lactate, to about the same extent as on respiration. Warburg found that cancer cells, despite having mitochondria to support oxidative phosphorylation, tend to ferment glucose or other energy sources into lactate even in the presence of sufficient oxygen, which is an inefficient way to generate ATP. This effect also occurs in striated muscle cells, activated lymphocytes and microglia, endothelial cells and several other mammalian cell types, a phenomenon termed the "Warburg effect". The effect is paradoxical at first glance because the ATP production rate of aerobic glycolysis is much slower than that of respiration, so the energy demand would seem better met by pure oxidative phosphorylation. We tackle this question by building a minimal model comprising three combined reactions. The new aspect in extension to earlier models is that we take into account the possible uptake and oxidation of the fermentation products. We examine the case where the cell can allocate protein on several enzymes in a varying distribution, and we model this by a linear programming problem in which the objective is to maximize the ATP production rate under different combinations of constraints on enzymes. Depending on the cost of reactions and the limitation of substrates, this leads to pure respiration, pure fermentation, or a mixture of respiration and fermentation. The model predicts that fermentation products are only oxidized when glucose is scarce or its uptake is severely limited.
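The paper's core argument can be reproduced as a two-variable LP: allocate flux between a protein-cheap, low-yield fermentation pathway and a protein-expensive, high-yield respiration pathway. All yields and costs below are toy assumptions, but the qualitative switch between pure fermentation and pure respiration emerges from the constraints alone.

```python
import numpy as np
from scipy.optimize import linprog

# Fluxes v = [fermentation, respiration] in glucose per unit time (assumed numbers).
atp_yield = np.array([2.0, 32.0])       # ATP per glucose
enzyme_per_flux = np.array([1.0, 20.0]) # respiration is protein-expensive
enzyme_budget = 10.0

def optimal_fluxes(glucose_limit):
    res = linprog(
        c=-atp_yield,                    # linprog minimizes, so negate to maximize ATP rate
        A_ub=[list(enzyme_per_flux), [1.0, 1.0]],
        b_ub=[enzyme_budget, glucose_limit],
        bounds=[(0.0, None), (0.0, None)],
    )
    return res.x

rich = optimal_fluxes(100.0)   # glucose abundant -> pure fermentation (Warburg regime)
poor = optimal_fluxes(0.5)     # glucose scarce   -> pure respiration
```

When the enzyme budget binds, fermentation wins because it produces more ATP per unit of protein; when glucose uptake binds, respiration wins because it produces more ATP per glucose, matching the prediction stated in the abstract.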

  2. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh


    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  3. Stochastic linear hybrid systems: Modeling, estimation, and application

    Seah, Chze Eng

    Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are shown to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent on its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS…
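
The continuous-state-dependent mode transition described above can be sketched in a few lines. The dynamics, threshold, and dimensions below are invented for illustration, and the stochastic noise terms of the SLHS are omitted for clarity; this shows only the mode-switching skeleton that an estimator like the SDTHE algorithm must track.

```python
import numpy as np

def simulate_slhs(steps=100, dt=0.1):
    """Deterministic skeleton of a state-dependent hybrid system:
    mode 0 = constant velocity, mode 1 = deceleration.
    The mode switches when position crosses a threshold (a guard
    on the continuous state)."""
    A = {0: np.array([[1.0, dt], [0.0, 1.0]]),    # cruise dynamics
         1: np.array([[1.0, dt], [0.0, 0.95]])}   # decelerating dynamics
    x = np.array([0.0, 5.0])   # [position, velocity]
    mode, trajectory = 0, []
    for _ in range(steps):
        if mode == 0 and x[0] > 20.0:   # continuous-state-dependent guard
            mode = 1
        x = A[mode] @ x
        trajectory.append((mode, x.copy()))
    return trajectory

traj = simulate_slhs()
```

In the full SLHS, the guard would fire stochastically with a probability depending on the continuous state, and each mode's dynamics would carry process noise; the estimation problem is then to recover both `mode` and `x` from noisy observations.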

  4. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models.

    Pozo, Carlos; Marín-Sanguino, Alberto; Alves, Rui; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Sorribas, Albert


    Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.
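
The recasting step can be illustrated on a single Michaelis-Menten rate (a deliberately minimal toy case, not the SC models treated in the article): introducing the auxiliary variable z = Km + S turns dS/dt = -Vmax·S/(Km + S) into the pair dS/dt = dz/dt = -Vmax·S·z^-1, whose right-hand sides are power-law monomials as the GMA form requires. All parameter values below are arbitrary.

```python
def recast_demo(steps=5000, h=1e-3, Vmax=1.0, Km=0.5, S0=2.0):
    """Integrate a Michaelis-Menten ODE and its GMA recast in parallel."""
    S = S0                    # original rational-rate form
    Sg, z = S0, Km + S0       # recast: z tracks the denominator Km + S
    for _ in range(steps):
        S += h * (-Vmax * S / (Km + S))
        rate = -Vmax * Sg * z ** -1.0   # product of power laws (GMA form)
        Sg += h * rate
        z += h * rate                    # dz/dt = dS/dt since Km is constant
    return S, Sg

S_orig, S_gma = recast_demo()
```

Because z is initialized at Km + S0 and updated with the same increment as Sg, the recast system reproduces the original trajectory exactly; the gain is that the recast right-hand sides have the canonical structure that GMA global-optimization algorithms exploit.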

  5. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine


    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  6. Interactions in Generalized Linear Models: Theoretical Issues and an Application to Personal Vote-Earning Attributes

    Tsung-han Tsai


    There is some confusion in political science, and the social sciences in general, about the meaning and interpretation of interaction effects in models with non-interval, non-normal outcome variables. Often these terms are casually thrown into a model specification without observing that their presence fundamentally changes the interpretation of the resulting coefficients. This article explains the conditional nature of reported coefficients in models with interactions, defining the necessarily different interpretation required by generalized linear models. Methodological issues are illustrated with an application to voter information structured by electoral systems and resulting legislative behavior and democratic representation in comparative politics.
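
The conditional nature of interaction effects in a GLM can be made concrete with a tiny logistic example (all coefficients invented): even with a fixed interaction coefficient, the effect of x1 on the probability scale changes sign depending on where x2 is evaluated.

```python
import math

def prob(x1, x2, b0=-1.0, b1=0.8, b2=0.5, b12=-0.6):
    """P(y = 1) under a logit model with an x1*x2 interaction term."""
    eta = b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2
    return 1.0 / (1.0 + math.exp(-eta))

def effect_of_x1(x2):
    """Change in P(y = 1) as x1 goes from 0 to 1, holding x2 fixed."""
    return prob(1, x2) - prob(0, x2)

# The "effect" of x1 is not one number: it depends on the value of x2.
effect_at_low_x2 = effect_of_x1(0.0)    # positive at x2 = 0
effect_at_high_x2 = effect_of_x1(2.0)   # negative at x2 = 2
```

This is why reporting a single coefficient for an interaction in a logit or probit model, without stating the covariate values at which it is evaluated, is uninterpretable.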

  7. Identification of an Equivalent Linear Model for a Non-Linear Time-Variant RC-Structure

    Kirkegaard, Poul Henning; Andersen, P.; Brincker, Rune

    This paper considers estimation of the maximum softening for a RC-structure subjected to earthquake excitation. The so-called Maximum Softening damage indicator relates the global damage state of the RC-structure to the relative decrease of the fundamental eigenfrequency in an equivalent linear … are investigated and compared with ARMAX models used on a running window. The techniques are evaluated using simulated data generated by the non-linear finite element program SARCOF modeling a 10-storey 3-bay concrete structure subjected to amplitude modulated Gaussian white noise filtered through a Kanai…

  8. A linear ion optics model for extraction from a plasma ion source

    Dietrich, J.


    A linear ion optics model for ion extraction from a plasma ion source is presented, based on the paraxial equations which account for lens effects, space charge and finite source ion temperature. This model is applied to three- and four-electrode extraction systems with circular apertures. The results are compared with experimental data and numerical calculations in the literature. It is shown that the improved calculations of space charge effects and lens effects allow better agreement to be obtained than in earlier linear optics models. A principal result is that the model presented here describes the dependence of the optimum perveance on the aspect ratio in a manner similar to the nonlinear optics theory. (orig.)

  9. Generalized linear models with random effects unified analysis via H-likelihood

    Lee, Youngjo; Pawitan, Yudi


    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  10. Modeling and analysis of mover gaps in tubular moving-magnet linear oscillating motors

    Xuesong LUO


    A tubular moving-magnet linear oscillating motor (TMMLOM) has merits of high efficiency and excellent dynamic capability. To enhance the thrust performance, quasi-Halbach permanent magnet (PM) arrays are arranged on its mover in the application of a linear electro-hydrostatic actuator in more electric aircraft. The arrays are assembled from several individual segments, which inevitably leaves gaps between them. To investigate the effects of the gaps on the radial magnetic flux density and the machine thrust, an analytical model is built in this paper considering both axial and radial gaps. The model is validated by finite element simulations and experimental results. Distributions of the magnetic flux are described for different sizes of radial and axial gaps. Besides, the output force is also discussed for normal and end windings. Finally, the model has demonstrated that both kinds of gaps have a negative effect on the thrust, and that the linear motor is more sensitive to radial ones. Keywords: Air-gap flux density, Linear motor, Mover gaps, Quasi-Halbach array, Thrust output, Tubular moving-magnet linear oscillating motor (TMMLOM)

  11. QSAR models for prediction study of HIV protease inhibitors using support vector machines, neural networks and multiple linear regression

    Rachid Darnag


    Support vector machines (SVMs) represent one of the most promising machine learning (ML) tools that can be applied to develop predictive quantitative structure–activity relationship (QSAR) models using molecular descriptors. Multiple linear regression (MLR) and artificial neural networks (ANNs) were also utilized to construct quantitative linear and non-linear models to compare with the results obtained by SVM. The prediction results are in good agreement with the experimental values of HIV activity; the results also reveal the superiority of SVM over the MLR and ANN models. The contribution of each descriptor to the structure–activity relationship was evaluated.

  12. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Casellas, J; Bach, R


    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of a small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, the statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different numbers of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval, with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
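
A piecewise Weibull proportional hazard separates a segmented baseline hazard from a multiplicative covariate term. The sketch below shows only that structure; the change points, scales, and shape are purely illustrative (the paper's estimated values are not reproduced here), and estimation is omitted.

```python
import math

def pw_weibull_baseline(t, shape, scales, change_points):
    """Piecewise Weibull baseline hazard: the scale parameter switches at
    each change point; `scales` has one more entry than `change_points`."""
    j = sum(1 for cp in change_points if t >= cp)   # interval index of t
    scale = scales[j]
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def hazard(t, x_beta, **baseline_kwargs):
    """Proportional-hazards form: covariates scale the baseline hazard."""
    return pw_weibull_baseline(t, **baseline_kwargs) * math.exp(x_beta)
```

Under this form the hazard ratio between two animals is exp of the difference in their linear predictors at every t, while the baseline is free to change shape across segments of the lambing interval, which is what lets the PWPH model track the unorthodox distribution that defeats Gaussian linear models.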

  13. Estimating trajectories of energy intake through childhood and adolescence using linear-spline multilevel models.

    Anderson, Emma L; Tilling, Kate; Fraser, Abigail; Macdonald-Wallis, Corrie; Emmett, Pauline; Cribb, Victoria; Northstone, Kate; Lawlor, Debbie A; Howe, Laura D


    Methods for the assessment of changes in dietary intake across the life course are underdeveloped. We demonstrate the use of linear-spline multilevel models to summarize energy-intake trajectories through childhood and adolescence and their application as exposures, outcomes, or mediators. The Avon Longitudinal Study of Parents and Children assessed children's dietary intake several times between ages 3 and 13 years, using both food frequency questionnaires (FFQs) and 3-day food diaries. We estimated energy-intake trajectories for 12,032 children using linear-spline multilevel models. We then assessed the associations of these trajectories with maternal body mass index (BMI), and later offspring BMI, and also their role in mediating the relation between maternal and offspring BMIs. Models estimated average and individual energy intake at 3 years, and linear changes in energy intake from age 3 to 7 years and from age 7 to 13 years. By including the exposure (in this example, maternal BMI) in the multilevel model, we were able to estimate the average energy-intake trajectories across levels of the exposure. When energy-intake trajectories are the exposure for a later outcome (in this case offspring BMI) or a mediator (between maternal and offspring BMI), results were similar, whether using a two-step process (exporting individual-level intercepts and slopes from multilevel models and using these in linear regression/path analysis), or a single-step process (multivariate multilevel models). Trajectories were similar when FFQs and food diaries were assessed either separately, or when combined into one model. Linear-spline multilevel models provide useful summaries of trajectories of dietary intake that can be used as an exposure, outcome, or mediator.
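
The fixed-effects backbone of such a model is a linear-spline design matrix: an intercept at age 3, a slope from ages 3 to 7, and a change-of-slope term after the knot at age 7. A minimal single-level sketch with synthetic data (random effects and the real ALSPAC estimates omitted; all numbers invented):

```python
import numpy as np

ages = np.array([3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], dtype=float)

def spline_basis(age, knot=7.0, start=3.0):
    """Two-piece linear spline: slope before the knot, extra slope after."""
    s1 = age - start                      # years since age 3
    s2 = np.clip(age - knot, 0.0, None)   # years past the knot (0 before it)
    return np.column_stack([np.ones_like(age), s1, s2])

# Synthetic trajectory: 1200 kcal/d at age 3, +150/yr to age 7, +60/yr after
true = np.array([1200.0, 150.0, -90.0])   # third entry = change in slope
y = spline_basis(ages) @ true
coef, *_ = np.linalg.lstsq(spline_basis(ages), y, rcond=None)
```

In the multilevel version, each child gets individual random deviations around these average intercept and slope coefficients, which is what allows the individual-level intercepts and slopes to be exported and used as exposures, outcomes, or mediators.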

  14. Simultaneous Balancing and Model Reduction of Switched Linear Systems

    Monshizadeh, Nima; Trentelman, Hendrikus; Camlibel, M.K.


    In this paper, first, balanced truncation of linear systems is revisited. Then, simultaneous balancing of multiple linear systems is investigated. Necessary and sufficient conditions are introduced to identify the case where simultaneous balancing is possible. The validity of these conditions is not…

  15. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Oluwaseun Egbelowo


    We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response of these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
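
The flavor of an NSFD scheme can be shown on the simplest one-compartment elimination model, dC/dt = -kC (a textbook illustration, not the specific PK-PD system of the article): replacing the step size h by the denominator function phi(h) = (1 - e^(-kh))/k makes the discrete solution exact at every node, for any step size.

```python
import math

def nsfd_decay(C0, k, h, steps):
    """Nonstandard finite difference for dC/dt = -k*C using the
    denominator function phi(h) = (1 - exp(-k*h)) / k, so that each
    step C_{n+1} = C_n - k*C_n*phi(h) equals C_n * exp(-k*h) exactly."""
    phi = (1.0 - math.exp(-k * h)) / k
    C = C0
    for _ in range(steps):
        C -= k * C * phi
    return C

# Even with a step size where forward Euler collapses (1 - k*h = 0),
# the NSFD solution stays on the exact decay curve.
approx = nsfd_decay(C0=10.0, k=0.5, h=2.0, steps=5)
exact = 10.0 * math.exp(-0.5 * 2.0 * 5)
```

This denominator-function idea is what "dynamically consistent" refers to: the discrete scheme inherits the qualitative (here even quantitative) behavior of the continuous model regardless of step size.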

  16. Genomic prediction based on data from three layer lines using non-linear regression models.

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L


    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional…
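
The kernel-learning alternative mentioned above differs from a linear GBLUP-style predictor only in the choice of kernel. A toy numpy sketch with random data standing in for markers (ridge penalty, bandwidth, and data sizes arbitrary; no claim about the paper's actual models or hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))      # 50 "animals", 10 "markers"
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=50)

def kernel_ridge_fit(K, y, lam=1.0):
    """Solve (K + lam*I) alpha = y; predictions are K @ alpha."""
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

K_lin = X @ X.T                                   # linear (GBLUP-like) kernel
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-sq / (2 * 10.0))                  # RBF kernel, bandwidth 10

alpha_lin = kernel_ridge_fit(K_lin, y)
alpha_rbf = kernel_ridge_fit(K_rbf, y)
```

With the linear kernel this is equivalent to ridge regression on the markers (the GBLUP view); the RBF kernel lets predictions vary non-linearly in marker space, which is the extra flexibility the study found largely unnecessary.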

  17. Linear and nonlinear methods in modeling the aqueous solubility of organic compounds.

    Catana, Cornel; Gao, Hua; Orrenius, Christian; Stouten, Pieter F W


    Solubility data for 930 diverse compounds have been analyzed using linear Partial Least Squares (PLS) and nonlinear PLS methods, Continuum Regression (CR), and Neural Networks (NN). 1D and 2D descriptors from the MOE package in combination with E-state or ISIS keys have been used. The best model was obtained using linear PLS for a combination of 22 MOE descriptors and 65 ISIS keys. It has a correlation coefficient (r2) of 0.935 and a root-mean-square error (RMSE) of 0.468 log molar solubility (log S(w)). The model validated on a test set of 177 compounds not included in the training set has r2 0.911 and RMSE 0.475 log S(w). The descriptors were ranked according to their importance, and the 22 MOE descriptors were found at the top of the list. The CR model produced results as good as PLS, and because of the way in which cross-validation was done it is expected to be a valuable tool in prediction alongside the PLS model. The statistics obtained using nonlinear methods did not surpass those obtained with linear ones. The good statistics obtained for linear PLS and CR recommend these models for use in prediction when it is difficult or impossible to make experimental measurements, for virtual screening, combinatorial library design, and efficient lead optimization.

  18. Dynamics of edge currents in a linearly quenched Haldane model

    Mardanya, Sougata; Bhattacharya, Utso; Agarwal, Amit; Dutta, Amit


    In a finite-time quantum quench of the Haldane model, the Chern number determining the topology of the bulk remains invariant, as long as the dynamics is unitary. Nonetheless, the corresponding boundary attribute, the edge current, displays interesting dynamics. For the cases of sudden and adiabatic quenches, the postquench edge current is solely determined by the initial and the final Hamiltonians, respectively. However, for a finite-time (τ) linear quench in a Haldane nanoribbon, we show that the evolution of the edge current from the sudden to the adiabatic limit is not monotonic in τ and has a turning point at a characteristic time scale τ = τ0. For small τ, the excited states lead to a huge unidirectional surge in the edge current of both edges. On the other hand, in the limit of large τ, the edge current saturates to its expected equilibrium ground-state value. This competition between the two limits leads to the observed nonmonotonic behavior. Interestingly, τ0 seems to depend only on the Semenoff mass and the Haldane flux. A similar dynamics for the edge current is also expected in other systems with topological phases.

  19. Parameter estimation and hypothesis testing in linear models

    Koch, Karl-Rudolf


    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  20. Form factors in the projected linear chiral sigma model

    Alberto, P. (Coimbra Univ.; Bochum Univ.); Ruiz Arriola, E.; Fiolhais, M.; Urbano, J.N. (Coimbra Univ.); Goeke, K.; Gruemmer, F. (Bochum Univ.)


    Several nucleon form factors are computed within the framework of the linear chiral soliton model. To this end variational means and projection techniques applied to generalized hedgehog quark-boson Fock states are used. In this procedure the Goldberger-Treiman relation and a virial theorem for the pion-nucleon form factor are well fulfilled demonstrating the consistency of the treatment. Both proton and neutron charge form factors are correctly reproduced, as well as the proton magnetic one. The shapes of the neutron magnetic and of the axial form factors are good but their absolute values at the origin are too large. The slopes of all the form factors at zero momentum transfer are in good agreement with the experimental data. The pion-nucleon form factor exhibits to great extent a monopole shape with a cut-off mass of Λ=690 MeV. Electromagnetic form factors for the vertex γNΔ and the nucleon spin distribution are also evaluated and discussed. (orig.)

  1. An evaluation of bias in propensity score-adjusted non-linear regression models.

    Wan, Fei; Mitra, Nandita


    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not of the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
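
The non-collapsibility that drives this bias can be checked by hand: with a balanced binary covariate, identical stratum-specific odds ratios, and no confounding at all, the marginal odds ratio is still attenuated relative to the conditional one. All coefficients below are invented.

```python
import math

def odds(p):
    return p / (1.0 - p)

def expit(eta):
    return 1.0 / (1.0 + math.exp(-eta))

# Logistic outcome model: treatment effect beta, covariate effect gamma
beta, gamma = 1.5, 2.0
def p(treat, z):
    return expit(-1.0 + beta * treat + gamma * z)

# Conditional OR equals exp(beta) in both strata of the balanced covariate z
or_z0 = odds(p(1, 0)) / odds(p(0, 0))
or_z1 = odds(p(1, 1)) / odds(p(0, 1))

# Marginal OR: average the probabilities over z first, then take odds
p1 = 0.5 * (p(1, 0) + p(1, 1))
p0 = 0.5 * (p(0, 0) + p(0, 1))
or_marginal = odds(p1) / odds(p0)   # attenuated toward 1
```

The same exercise with rates instead of odds would show no attenuation, consistent with the authors' finding that the conditional rate ratio escapes this bias.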

  2. Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.

    Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander


    Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.

  3. Linear stability analysis of flow instabilities with a nodalized reduced order model in heated channel

    Paul, Subhanker; Singh, Suneet


    The prime objective of the presented work is to develop a Nodalized Reduced Order Model (NROM) to carry out linear stability analysis of flow instabilities in a two-phase flow system. The model is developed by dividing the single-phase and two-phase regions of a uniformly heated channel into N nodes, followed by time-dependent spatial linear approximations for single-phase enthalpy and two-phase quality between consecutive nodes. A moving boundary scheme has been adopted in the model, where all the node boundaries vary with time due to the variation of the boiling boundary inside the heated channel. Using a state space approach, the instability thresholds are delineated by stability maps plotted in parameter planes of phase change number (Npch) and subcooling number (Nsub). The prime feature of the present model is that, though the model equations are simpler due to the presence of linear-linear approximations for single-phase enthalpy and two-phase quality, the results are in good agreement with the existing models (Karve [33]; Dokhane [34]), where the model equations run for several pages, and with experimental data (Solberg [41]). Unlike the existing ROMs, different two-phase friction factor multiplier correlations have been incorporated in the model. The applicability of various two-phase friction factor multipliers and their effects on stability behaviour have been depicted through a comparative study. It is also observed that the Friedel model for friction factor calculations produces the most accurate results with respect to the available experimental data. (authors)

  4. Linear velocity fields in non-Gaussian models for large-scale structure

    Scherrer, Robert J.


    Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  5. Evaluating significance in linear mixed-effects models in R.

    Luke, Steven G


    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
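
The anti-conservativeness of t-as-z is simply the heavier tail of the t distribution at the modest denominator degrees of freedom that approximations like Satterthwaite produce. The sketch below compares the two p-values for the same statistic; the t tail is integrated numerically so that only the standard library is needed (statistic and degrees of freedom are invented for illustration).

```python
import math
from statistics import NormalDist

def t_pdf(t, df):
    """Density of Student's t with df degrees of freedom."""
    c = math.gamma((df + 1) / 2.0) / (math.sqrt(df * math.pi) * math.gamma(df / 2.0))
    return c * (1.0 + t * t / df) ** (-(df + 1) / 2.0)

def p_two_sided_t(tval, df, upper=60.0, n=20000):
    """Two-sided t p-value via trapezoidal integration of the tail."""
    a = abs(tval)
    h = (upper - a) / n
    s = 0.5 * (t_pdf(a, df) + t_pdf(upper, df))
    s += sum(t_pdf(a + i * h, df) for i in range(1, n))
    return 2.0 * s * h

def p_two_sided_z(tval):
    """Treat the t statistic as standard normal ("t-as-z")."""
    return 2.0 * (1.0 - NormalDist().cdf(abs(tval)))

p_t = p_two_sided_t(2.1, df=10)   # about 0.06: not significant at .05
p_z = p_two_sided_z(2.1)          # about 0.036: "significant" under t-as-z
```

For the same t = 2.1, the normal approximation declares significance that the t reference distribution with 10 degrees of freedom does not, which is exactly the Type 1 inflation the simulations report for small samples.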

  6. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique


    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
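
The distribution's flexibility is easy to verify numerically from its pmf, P(k) proportional to lam^k / (gamma)_k with (gamma)_k the rising factorial: gamma = 1 recovers the Poisson, gamma > 1 gives overdispersion, and gamma < 1 gives underdispersion. A minimal check (parameter values arbitrary):

```python
def hyper_poisson_pmf(lam, gamma, kmax=120):
    """Hyper-Poisson pmf: P(k) proportional to lam**k / (gamma)_k,
    where (gamma)_k is the rising factorial; gamma = 1 gives Poisson."""
    weights, poch = [], 1.0
    for k in range(kmax + 1):
        weights.append(lam ** k / poch)
        poch *= gamma + k          # build (gamma)_{k+1} incrementally
    z = sum(weights)               # truncated normalizing constant
    return [w / z for w in weights]

def dispersion_index(pmf):
    """Variance-to-mean ratio: 1 for Poisson, >1 over-, <1 underdispersed."""
    mean = sum(k * p for k, p in enumerate(pmf))
    var = sum(k * k * p for k, p in enumerate(pmf)) - mean ** 2
    return var / mean
```

In the GLM formulation, gamma (and hence the dispersion index) is allowed to depend on covariates, which is what lets one data set mix over- and underdispersed observations.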

  7. Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses

    Martinez-Luaces, Victor


    In engineering careers courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equations (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to the usefulness in modelling chemical kinetics,…

  8. An improved robust model predictive control for linear parameter-varying input-output models

    Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.


    This paper describes a new robust model predictive control (MPC) scheme to control discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal

  9. Results from a prototype chicane-based energy spectrometer for a linear collider

    Lyapin, A. [Univ. College London (United Kingdom); London Univ., Egham (United Kingdom). Royal Holloway; Schreiber, H.J.; Viti, M. [Deutsches Electronen Synchrotron DESY, Hamburg (Germany); Deutsches Electronen Synchrotron DESY, Zeuthen (DE)] (and others)


    The International Linear Collider (ILC) and other proposed high energy e⁺e⁻ machines aim to measure with unprecedented precision Standard Model quantities and new, not yet discovered phenomena. One of the main requirements for achieving this goal is a measurement of the incident beam energy with an uncertainty close to 10⁻⁴. This article presents the analysis of data from a prototype energy spectrometer commissioned in 2006-2007 in SLAC's End Station A beamline. The prototype was a 4-magnet chicane equipped with beam position monitors measuring small changes of the beam orbit through the chicane at different beam energies. A single bunch energy resolution close to 5·10⁻⁴ was measured, which is satisfactory for most scenarios. We also report on the operational experience with the chicane-based spectrometer and suggest ways of improving its performance. (orig.)

  10. Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.

    Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu


    This work is devoted to applications of multiple linear regression (MLR), a multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON software and selected by the inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR, and nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and by external validation using a test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A non-linear state space approach to model groundwater fluctuations

    Berendrecht, W.L.; Heemink, A.W.; Geer, F.C. van; Gehrels, J.C.


    A non-linear state space model is developed for describing groundwater fluctuations. Non-linearity is introduced by modeling the (unobserved) degree of water saturation of the root zone. The non-linear relations are based on physical concepts describing the dependence of both the actual

  12. Half-trek criterion for generic identifiability of linear structural equation models

    Foygel, R.; Draisma, J.; Drton, M.


    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  14. On-line validation of linear process models using generalized likelihood ratios

    Tylee, J.L.


    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator

  15. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William


    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines with linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 with ordinary least squares to 0.81 with linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts and the slopes among subjects, and within-subject correlation was well modeled with a first order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
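As a hedged sketch of the modelling idea, not the authors' published code, a cubic regression spline reduces to an ordinary regression on a truncated-power basis; the toy growth data below are invented:

```python
import numpy as np

def cubic_spline_basis(t, knots):
    """Design matrix [1, t, t^2, t^3, (t-k1)^3_+, ..., (t-kq)^3_+]."""
    cols = [np.ones_like(t), t, t ** 2, t ** 3]
    cols += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 200)                      # age in years (toy data)
height = 50.0 + 6.0 * t - 0.5 * t ** 2 + rng.normal(0.0, 0.5, t.size)

X = cubic_spline_basis(t, knots=[1.0, 2.0, 3.0])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)   # population-level fit only; the
fitted = X @ beta                                   # paper adds subject-specific random
resid_sd = float(np.std(height - fitted))           # intercepts/slopes and AR(1) errors
```

The full mixed-effect model in the record additionally estimates random intercepts and slopes per child and a continuous first-order autoregressive error, which this ordinary least-squares sketch omits.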

  16. The algebra of non-local charges in non-linear sigma models

    Abdalla, E.; Abdalla, M.C.B.; Brunelli, J.C.; Zadra, A.


    We derive the complete Dirac algebra satisfied by the non-local charges conserved in non-linear sigma models. Some example calculations are given for the O(N) symmetry group. The resulting algebra corresponds to a saturated cubic deformation (with only maximum-order terms) of the Kac-Moody algebra. The results are generalized to the case where a Wess-Zumino term is present. In that case the algebra contains a lower-order correction (sub-saturation). (author). 1 ref

  17. Linearity and Misspecification Tests for Vector Smooth Transition Regression Models

    Teräsvirta, Timo; Yang, Yukai

    The purpose of the paper is to derive Lagrange multiplier and Lagrange multiplier type specification and misspecification tests for vector smooth transition regression models. We report results from simulation studies in which the size and power properties of the proposed asymptotic tests in small...

  18. Multiple Linear Regression Model for Estimating the Price of a ...

    Ghana Mining Journal ... In the modeling, the Ordinary Least Squares (OLS) normality assumption, which could introduce errors in the statistical analyses, was dealt with by log transformation of the data, ensuring the data is normally ... The resultant MLRM is Ŷᵢ = xᵢ′(X′X)⁻¹X′Y, where X is the sample data matrix.
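The estimator quoted in the record is the standard ordinary least squares fit, Ŷ = X(X′X)⁻¹X′Y. A minimal numpy sketch with made-up data (not the journal's data set), fitting on a log-transformed response as the record describes:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 regressors
beta_true = np.array([3.0, 1.5, -2.0])
log_price = X @ beta_true + rng.normal(scale=0.1, size=n)   # toy log-transformed response

# beta_hat = (X'X)^{-1} X'Y, computed via a linear solve for numerical stability
beta_hat = np.linalg.solve(X.T @ X, X.T @ log_price)
y_hat = X @ beta_hat                                        # Y_hat = X (X'X)^{-1} X'Y
```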

  19. Performances of some estimators of linear model with ...

    The estimators are compared by examining the finite-sample properties of the estimators, namely the sum of biases, sum of absolute biases, sum of variances and sum of the mean squared errors of the estimated parameters of the model. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...

  20. Minimal agent based model for financial markets II. Statistical properties of the linear and multiplicative dynamics

    Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.


    We present a detailed study of the statistical properties of the Agent Based Model introduced in paper I [Eur. Phys. J. B, DOI: 10.1140/epjb/e2009-00028-4] and of its generalization to multiplicative dynamics. The aim of the model is to identify the minimal elements needed to understand the origin of the stylized facts and their self-organization. The key elements are fundamentalist agents, chartist agents, herding dynamics and price behavior. The first two elements correspond to the competition between stability and instability tendencies in the market. The herding behavior governs the possibility of the agents to change strategy and is a crucial element of this class of models. We consider a linear approximation for the price dynamics, which permits a simple interpretation of the model dynamics, and for many properties analytical results can be derived. The generalized nonlinear dynamics turns out to be far more sensitive to the parameter space and much more difficult to analyze and control. The main results for the nature and self-organization of the stylized facts are, however, very similar in the two cases. The main peculiarity of the nonlinear dynamics is an enhancement of the fluctuations and a more marked evidence of the stylized facts. We also discuss some modifications of the model to introduce more realistic elements with respect to real markets.

  1. Performance study of Active Queue Management methods: Adaptive GRED, REDD, and GRED-Linear analytical model

    Hussein Abdel-jaber


    Congestion control is one of the hot research topics that help maintain the performance of computer networks. This paper compares three Active Queue Management (AQM) methods, namely Adaptive Gentle Random Early Detection (Adaptive GRED), Random Early Dynamic Detection (REDD), and a GRED linear analytical model, with respect to different performance measures. Adaptive GRED and REDD are implemented in simulation, whereas GRED Linear is implemented as a discrete-time analytical model. Several performance measures are used to evaluate the effectiveness of the compared methods, mainly mean queue length, throughput, average queueing delay, overflow packet loss probability, and packet dropping probability. The ultimate aim is to identify the method that offers the most satisfactory performance in non-congestion or congestion scenarios. The first comparison, based on different packet arrival probability values, shows that GRED Linear provides better mean queue length, average queueing delay and packet overflow probability than the Adaptive GRED and REDD methods in the presence of congestion. Further, using the same evaluation measures, Adaptive GRED offers more satisfactory performance than REDD when heavy congestion is present. When the finite queue capacity varies, the GRED Linear model provides the most satisfactory performance with reference to mean queue length and average queueing delay, and all the compared methods provide similar throughput performance. However, when the finite capacity value is large, the compared methods have similar results in regard to the probabilities of both packet overflow and packet dropping.
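For reference (our own sketch, using the standard Gentle RED formulation rather than any code from the paper), GRED's packet-drop probability is a piecewise-linear function of the average queue length:

```python
def gred_drop_prob(avg_q, min_th, max_th, max_p):
    """Gentle RED drop probability: zero below min_th, a linear ramp up to max_p
    between min_th and max_th, then a second ramp from max_p to 1 between
    max_th and 2*max_th, and 1 beyond that."""
    if avg_q < min_th:
        return 0.0
    if avg_q < max_th:
        return max_p * (avg_q - min_th) / (max_th - min_th)
    if avg_q < 2 * max_th:
        return max_p + (1 - max_p) * (avg_q - max_th) / max_th
    return 1.0
```

An analytical queue model such as the paper's GRED Linear would feed this drop function into discrete-time balance equations for the queue-length distribution.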

  2. Fitting linear and nonlinear stochastic models to describe longitudinal tree profile

    Leonardo Machado Pires


    Polynomial models are the most widely used in Brazilian forestry for describing tree profiles, owing to their ease of fitting and their precision. The same does not hold for nonlinear models, which are harder to fit. Among the classical nonlinear models used for profile description are the Gompertz, Logistic and Weibull models. This study therefore compared linear and nonlinear models for describing tree profiles. The comparison measures were the coefficient of determination (R²), the residual standard error (s yx), the adjusted coefficient of determination (R² adjusted), residual plots and ease of fitting. The results showed that, among the nonlinear models, the best overall performance was obtained with the Logistic model, although the Gompertz model was better in terms of residual standard error. Among the linear models, the polynomial proposed by Pires & Calegario was superior to the others. When nonlinear and linear models were compared, the Logistic model performed best, mainly because the data behavior is nonlinear, the correlation between its parameters is low, and the parameters are easy to interpret, all of which facilitates convergence and fitting.

  3. Numerical study of corner separation in a linear compressor cascade using various turbulence models

    Liu Yangwei


    Three-dimensional corner separation is a common phenomenon that significantly affects compressor performance. Turbulence modeling is still a weak point of the RANS method for predicting corner separation flow accurately. In the present study, corner separation in a linear highly loaded prescribed velocity distribution (PVD) compressor cascade has been investigated numerically using seven frequently used turbulence models: the Spalart–Allmaras model, standard k–ɛ model, realizable k–ɛ model, standard k–ω model, shear stress transport k–ω model, v2–f model and Reynolds stress model. The results of these turbulence models have been compared and analyzed in detail against available experimental data. It is found that the standard k–ɛ, realizable k–ɛ, v2–f and Reynolds stress models can provide reasonable results for predicting three-dimensional corner separation in the compressor cascade, whereas the Spalart–Allmaras, standard k–ω and shear stress transport k–ω models overestimate the corner separation region at an incidence of 0°. The turbulence characteristics are discussed, and turbulence anisotropy is observed to be stronger in the corner separation region.

  4. Diet models with linear goal programming: impact of achievement functions.

    Gerdessen, J C; de Vries, J H M


    Diet models based on goal programming (GP) are valuable tools in designing diets that comply with nutritional, palatability and cost constraints. Results derived from GP models are usually very sensitive to the type of achievement function that is chosen. This paper aims to provide a methodological insight into several achievement functions. It describes the extended GP (EGP) achievement function, which enables the decision maker to use either a MinSum achievement function (which minimizes the sum of the unwanted deviations) or a MinMax achievement function (which minimizes the largest unwanted deviation), or a compromise between both. An additional advantage of EGP models is that multiple solutions can be obtained from one set of data and weights. We use small numerical examples to illustrate the 'mechanics' of achievement functions. Then, the EGP achievement function is demonstrated on a diet problem with 144 foods, 19 nutrients and several types of palatability constraints, in which the nutritional constraints are modeled with fuzzy sets. The choice of achievement function affects the results of diet models. MinSum achievement functions can give rise to solutions that are sensitive to weight changes and that pile all unwanted deviations onto a limited number of nutritional constraints. MinMax achievement functions spread the unwanted deviations as evenly as possible, but may create many (small) deviations. EGP comprises both types of achievement functions, as well as compromises between them. It can thus, from one data set, find a range of solutions with various properties.
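To make the MinSum/MinMax distinction concrete, here is a toy goal-programming example of our own (two foods, two nutrient targets, a shared budget; it uses scipy.optimize.linprog and is not the paper's 144-food model):

```python
from scipy.optimize import linprog

# Variables: [x1, x2, d1m, d1p, d2m, d2p] -- food amounts plus under/over deviations.
# Goals: 3*x1 -> 9 and 3*x2 -> 9, but the shared budget x1 + x2 <= 4 makes both
# targets unreachable, so some underachievement (d1m, d2m) is unavoidable.
A_eq = [[3, 0, 1, -1, 0, 0],
        [0, 3, 0, 0, 1, -1]]
b_eq = [9, 9]
A_ub = [[1, 1, 0, 0, 0, 0]]
b_ub = [4]

# MinSum: minimize d1m + d2m (total unwanted deviation).
res_sum = linprog(c=[0, 0, 1, 0, 1, 0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)

# MinMax: append a variable z and minimize z subject to d1m <= z and d2m <= z.
A_ub_mm = [[1, 1, 0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0, 0, -1],
           [0, 0, 0, 0, 1, 0, -1]]
A_eq_mm = [row + [0] for row in A_eq]
res_max = linprog(c=[0, 0, 0, 0, 0, 0, 1], A_ub=A_ub_mm, b_ub=[4, 0, 0],
                  A_eq=A_eq_mm, b_eq=b_eq)
```

Here the total shortfall under MinSum is 6 however it is split between the goals, while MinMax balances the shortfall so that the largest single deviation is only 3, mirroring the trade-off the abstract describes.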

  5. Developing ontological model of computational linear algebra - preliminary considerations

    Wasielewska, K.; Ganzha, M.; Paprzycki, M.; Lirkov, I.


    The aim of this paper is to propose a method for application of ontologically represented domain knowledge to support Grid users. The work is presented in the context provided by the Agents in Grid system, which aims at development of an agent-semantic infrastructure for efficient resource management in the Grid. Decision support within the system should provide functionality beyond the existing Grid middleware, specifically, help the user to choose optimal algorithm and/or resource to solve a problem from a given domain. The system assists the user in at least two situations. First, for users without in-depth knowledge about the domain, it should help them to select the method and the resource that (together) would best fit the problem to be solved (and match the available resources). Second, if the user explicitly indicates the method and the resource configuration, it should "verify" if her choice is consistent with the expert recommendations (encapsulated in the knowledge base). Furthermore, one of the goals is to simplify the use of the selected resource to execute the job; i.e., provide a user-friendly method of submitting jobs, without required technical knowledge about the Grid middleware. To achieve the mentioned goals, an adaptable method of expert knowledge representation for the decision support system has to be implemented. The selected approach is to utilize ontologies and semantic data processing, supported by multicriterial decision making. As a starting point, an area of computational linear algebra was selected to be modeled, however, the paper presents a general approach that shall be easily extendable to other domains.

  6. Genetic demixing and evolution in linear stepping stone models

    Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.


    Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. We also review how the observed patterns of genetic diversity can be used for statistical inference, and highlight the differences between the well-mixed and one-dimensional models. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial
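A minimal illustration of the segregation into monoallelic domains (our own sketch of a one-dimensional, two-allele neutral voter model on a ring, not the authors' simulation code):

```python
import random

random.seed(1)
L, steps = 200, 20000
sites = [random.randint(0, 1) for _ in range(L)]         # two neutral alleles on a ring

def n_domains(s):
    """Count monoallelic domains via boundaries between unlike neighbours."""
    b = sum(s[i] != s[(i + 1) % len(s)] for i in range(len(s)))
    return b if b else 1                                 # fixation leaves one domain

initial = n_domains(sites)
for _ in range(steps):                                   # neutral voter-model updates:
    i = random.randrange(L)                              # the individual at site i dies ...
    sites[i] = sites[(i + random.choice((-1, 1))) % L]   # ... and a neighbour replaces it
final = n_domains(sites)                                 # domains coarsen over time; drift and
                                                         # selection act only at the boundaries
```

Since copying a neighbour can never create a new boundary in the two-allele case, the domain count is non-increasing, which is the segregation (coarsening) the abstract describes.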

  7. Application of linear and non-linear low-Re k-ε models in two-dimensional predictions of convective heat transfer in passages with sudden contractions

    Raisee, M.; Hejazi, S.H.


    This paper presents comparisons between heat transfer predictions and measurements for developing turbulent flow through straight rectangular channels with sudden contractions at the mid-channel section. The present numerical results were obtained using a two-dimensional finite-volume code which solves the governing equations in a vertical plane located at the lateral mid-point of the channel. The pressure field is obtained with the well-known SIMPLE algorithm. The hybrid scheme was employed for the discretization of convection in all transport equations. For turbulence modeling, a zonal low-Reynolds number k-ε model and the linear and non-linear low-Reynolds number k-ε models with the 'Yap' and 'NYP' length-scale correction terms have been employed. The main objective of the present study is to examine the ability of the above turbulence models to predict convective heat transfer in channels with a sudden contraction at the mid-channel section. The results of this study show that a sudden contraction creates a relatively small recirculation bubble immediately downstream of the channel contraction. This separation bubble influences the distribution of the local heat transfer coefficient and increases the heat transfer levels by a factor of three. Computational results indicate that all the turbulence models employed produce similar flow fields. The zonal k-ε model produces the wrong Nusselt number distribution by underpredicting heat transfer levels in the recirculation bubble and overpredicting them in the developing region. The linear low-Re k-ε model, on the other hand, returns the correct Nusselt number distribution in the recirculation region, although it somewhat overpredicts heat transfer levels in the developing region downstream of the separation bubble. The replacement of the 'Yap' term with the 'NYP' term in the linear low-Re k-ε model results in a more accurate local Nusselt number distribution. Moreover, the application of the non-linear k

  8. Engineering model cryocooler test results

    Skimko, M.A.; Stacy, W.D.; McCormick, J.A.


    This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold end temperatures. There was a significant increase in capacity at cryogenic temperatures, and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads

  9. Three dimensional force prediction in a model linear brushless dc motor

    Moghani, J.S.; Eastham, J.F.; Akmese, R.; Hill-Cottingham, R.J. (Univ. of Bath (United Kingdom). School of Electronic and Electric Engineering)


    Practical results are presented for the three-axis forces produced on the primary of a linear brushless dc machine supplied from a three-phase delta-modulated inverter. Conditions of both lateral alignment and lateral displacement are considered. Finite element analysis using both two- and three-dimensional modeling is compared with the practical results. It is shown that a modified two-dimensional model is adequate, where it can be used, in the aligned position, and that the full three-dimensional method gives good results when the machine is axially misaligned.

  10. Convergence Guaranteed Nonlinear Constraint Model Predictive Control via I/O Linearization

    Xiaobing Kong


    Obtaining a reliable optimal solution is a key issue in nonlinear constrained model predictive control. Input-output feedback linearization is a popular method in nonlinear control. When an input-output feedback linearizing controller is used, the original linear input constraints become nonlinear and sometimes state-dependent. This paper presents an iterative quadratic program (IQP) routine for the continuous-time system. To guarantee its convergence, another iterative approach is incorporated. The proposed algorithm can reach a feasible solution over the entire prediction horizon. Simulation results on both a numerical example and a continuous stirred tank reactor (CSTR) demonstrate the effectiveness of the proposed method.

  11. Linear non-threshold (LNT) radiation hazards model and its evaluation

    Min Rui


    To introduce the linear non-threshold (LNT) model used in dose-effect studies of radiation hazards and to evaluate its application, a comprehensive literature analysis was performed. The results show that the LNT model describes biological effects more accurately at high doses than at low doses. The repairable-conditionally repairable model of cell radiation effects can account well for cell survival curves over the entire high, medium and low absorbed dose range. There are still many uncertainties in effective-dose assessment models for internal radiation based on the LNT assumptions and individual mean organ equivalent dose, and it is necessary to establish gender-specific voxel human models that take gender differences into account. In sum, the advantages and disadvantages of the various models coexist; until a new theory and model are established, the LNT model remains the most scientifically defensible choice. (author)

  12. A new approach to modeling linear accelerator systems

    Gillespie, G.H.; Hill, B.W.; Jameson, R.A.


    A novel computer code is being developed to generate system level designs of radiofrequency ion accelerators with specific applications to machines of interest to Accelerator Driven Transmutation Technologies (ADTT). The goal of the Accelerator System Model (ASM) code is to create a modeling and analysis tool that is easy to use, automates many of the initial design calculations, supports trade studies used in assessing alternate designs and yet is flexible enough to incorporate new technology concepts as they emerge. Hardware engineering parameters and beam dynamics are to be modeled at comparable levels of fidelity. Existing scaling models of accelerator subsystems were used to produce a prototype of ASM (version 1.0) working within the Shell for Particle Accelerator Related Code (SPARC) graphical user interface. A small user group has been testing and evaluating the prototype for about a year. Several enhancements and improvements are now being developed. The current version of ASM is described and examples of the modeling and analysis capabilities are illustrated. The results of an example study, for an accelerator concept typical of ADTT applications, are presented and sample displays from the computer interface are shown

  13. Linear elastic obstacles: analysis of experimental results in the case of stress dependent pre-exponentials

    Surek, T.; Kuon, L.G.; Luton, M.J.; Jones, J.J.


    For the case of linear elastic obstacles, the analysis of experimental plastic flow data is shown to have a particularly simple form when the pre-exponential factor is a single-valued function of the modulus-reduced stress. The analysis permits the separation of the stress and temperature dependence of the strain rate into those of the pre-exponential factor and the activation free energy. As a consequence, the true values of the activation enthalpy, volume and entropy also are obtained. The approach is applied to four sets of experimental data, including Zr, and the results for the pre-exponential term are examined for self-consistency in view of the assumed functional dependence

  14. Evaluation of a multiple linear regression model and SARIMA model in forecasting heat demand for district heating system

    Fang, Tingting; Lahdelma, Risto


    Highlights: • A social factor is considered for the linear regression models besides the weather data. • All the coefficients of the linear regression models are optimized simultaneously. • SARIMA combined with linear regression is used to forecast the heat demand. • The accuracy of both the linear regression and the time series models is evaluated. - Abstract: Forecasting heat demand is necessary for production and operation planning of district heating (DH) systems. In this study we first propose a simple regression model in which the hourly outdoor temperature and wind speed forecast the heat demand. The weekly rhythm of heat consumption, as a social component, is added to the model to significantly improve the accuracy. The other type of model is the seasonal autoregressive integrated moving average (SARIMA) model with exogenous variables, which combines the weather factors with the historical heat consumption data. One outstanding advantage of this model is that it pursues high accuracy for both long-term and short-term forecasts by considering both exogenous factors and the time series. The forecasting performance of both the linear regression models and the time series model is evaluated on real-life heat demand data for the city of Espoo in Finland by out-of-sample tests for the last 20 full weeks of the year. The results indicate that the proposed linear regression model (T168h), using a 168-h demand pattern with midweek holidays classified as Saturdays or Sundays, gives the highest accuracy and strong robustness among all the tested models based on the tested forecasting horizon and corresponding data. Considering the parsimony of the input, the ease of use and the high accuracy, the proposed T168h model is the best in practice. The heat demand forecasting model can also be developed for individual buildings if automated meter reading customer measurements are available. This would allow forecasting the heat demand based on more accurate heat consumption

  15. Estimating mass of σ-meson and study on application of the linear σ-model

    Ding Yibing; Li Xin; Li Xueqian; Liu Xiang; Shen Hong; Shen Pengnian; Wang Guoli; Zeng Xiaoqiang


    Whether the σ-meson (f₀(600)) exists as a real particle is a long-standing problem in both particle physics and nuclear physics. In this work, we analyse the deuteron binding energy in the linear σ-model and, by fitting the data, we are able to determine the range of mσ and also investigate the applicability of the linear σ-model to the interaction between hadrons in the MeV energy region. Our results show that the best fit to the deuteron binding energy and other data favors a narrow range for the σ-meson mass, 520 ≤ mσ ≤ 580 MeV, with the concrete values depending on input parameters such as the couplings. Conversely, by fitting the experimental data one can set constraints on the couplings and the other relevant phenomenological parameters of the model

  16. Radio-over-fiber linearization with optimized genetic algorithm CPWL model.

    Mateo, Carlos; Carro, Pedro L; García-Dúcar, Paloma; De Mingo, Jesús; Salinas, Íñigo


    This article proposes an optimized version of a canonical piecewise-linear (CPWL) digital predistorter in order to enhance the linearity of a radio-over-fiber (RoF) LTE mobile fronthaul. In this work, we propose a threshold allocation optimization process carried out by a genetic algorithm (GA) to optimize the CPWL model (GA-CPWL). Firstly, experiments show how the CPWL model outperforms the classical memory polynomial DPD in an intensity modulation/direct detection (IM/DD) RoF link. Then, the GA-CPWL predistorter is compared with the CPWL model in several scenarios to verify that the proposed DPD offers better performance in different optical transmission conditions. Experimental results reveal that with a proper threshold allocation, the GA-CPWL predistorter offers very promising outcomes.

  17. Longitudinal mathematics development of students with learning disabilities and students without disabilities: a comparison of linear, quadratic, and piecewise linear mixed effects models.

    Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz


    Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model best captured the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
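    The piecewise linear growth form that fitted best can be illustrated with a single knot: one slope before it, another after. The knot location and coefficients below are invented for illustration, not estimates from the study.

```python
import numpy as np

def piecewise_linear(t, intercept, slope1, slope2, knot):
    """Growth curve with slope1 before the knot and slope2 after it."""
    return intercept + slope1 * np.minimum(t, knot) + slope2 * np.maximum(t - knot, 0.0)

t = np.arange(0, 10)                          # years since school entry
y = piecewise_linear(t, 20.0, 8.0, 3.0, 4.0)  # hypothetical trajectory
# Yearly gains drop from 8.0 before the knot (t = 4) to 3.0 after it,
# the kind of slowdown a single straight line cannot represent.
```

    In a mixed-effects setting each student would get random deviations on the intercept and slopes; the fixed part above is what distinguishes the piecewise form from the linear and quadratic alternatives.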

  18. Downscaling of rainfall in Peru using Generalised Linear Models

    Bergin, E.; Buytaert, W.; Onof, C.; Wheater, H.


    The assessment of water resources in the Peruvian Andes is particularly important because the Peruvian economy relies heavily on agriculture. Much of the agricultural land is situated near the coast and relies on large quantities of water for irrigation. The simulation of synthetic rainfall series is thus important to evaluate the reliability of water supplies for current and future scenarios of climate change. In addition to water resources concerns, there is also a need to understand extreme heavy rainfall events, as there was significant flooding in Machu Picchu in 2010. The region exhibits a reduction of rainfall in 1983, associated with the El Niño Southern Oscillation (ENSO). NCEP Reanalysis 1 data were used to provide weather variables. Correlations were calculated for several weather variables using rain gauge data in the Andes. These were used to evaluate teleconnections and suggest covariates for the downscaling model. External covariates used in the model include sea level pressure and sea surface temperature over the region of the Humboldt Current. Relative humidity and temperature data over the region are also included, as is the Southern Oscillation Index (SOI) teleconnection. Covariates are standardised using observations for 1960-1990. The GlimClim downscaling model was used to fit a stochastic daily rainfall model to 13 sites in the Peruvian Andes. Results indicate that the model is able to reproduce rainfall statistics well, despite the large area used. Although the correlation between individual rain gauges is generally quite low, all sites are affected by similar weather patterns, which is an assumption of the GlimClim downscaling model. Climate change scenarios are considered using several GCM outputs for the A1B scenario. GCM data were corrected for bias using 1960-1990 outputs from the 20C3M scenario. Rainfall statistics for current and future scenarios are compared. The region shows an overall decrease in mean rainfall but with an increase in variance.

  19. An Introduction to the Use of Linear Models with Correlated Data

    Benoît Laplante


    conventional methods for estimating the variances of these estimates may yield biased results. These two problems are different, but they are related. This paper provides an introduction to the problems caused by correlated data and to possible solutions to these problems. First, we present the two problems and try to specify the relations between the two as clearly as possible. Second, we provide a critical presentation of random effects, mixed effects and hierarchical models that would help researchers to see their relevance in other kinds of linear models, particularly the so-called measurement models.

  20. Non-linear σ-models and string theories

    Sen, A.


    The connection between σ-models and string theories is discussed, as well as how the σ-models can be used as tools to prove various results in string theories. Closed bosonic string theory in the light cone gauge is very briefly introduced. Then, closed bosonic string theory in the presence of massless background fields is discussed. The light cone gauge is used, and it is shown that in order to obtain a Lorentz invariant theory, the string theory in the presence of background fields must be described by a two-dimensional conformally invariant theory. The resulting constraints on the background fields are found to be the equations of motion of the string theory. The analysis is extended to the case of the heterotic string theory and the superstring theory in the presence of the massless background fields. It is then shown how to use these results to obtain nontrivial solutions to the string field equations. Another application of these results is shown, namely to prove that the effective cosmological constant after compactification vanishes as a consequence of the classical equations of motion of the string theory. 34 refs

  1. Revisited global drift fluid model for linear devices

    Reiser, Dirk


    The problem of energy-conserving global drift fluid simulations is revisited. It is found that for the case of cylindrical plasmas in a homogeneous magnetic field, a straightforward reformulation is possible that avoids the simplifications leading to energetic inconsistencies. The particular new feature is the rigorous treatment of the polarisation drift by a generalization of the vorticity equation. The resulting set of model equations contains previous formulations as limiting cases and is suitable for efficient numerical techniques. Examples of applications to studies of plasma blobs and their impact on plasma-target interaction are presented. The numerical studies focus on the appearance of plasma blobs and intermittent transport and its consequences for the release of sputtered target materials into the plasma. Intermittent expulsion of particles in the radial direction can be observed, and it is found that although the neutrals released from the target show strong fluctuations in their propagation into the plasma column, the overall effect on time-averaged profiles is negligible for the conditions considered. In addition, the numerical simulations are utilised to perform an a posteriori assessment of the magnitude of energetic inconsistencies in previously used simplified models. It is found that certain popular approximations, in particular the use of simplified vorticity equations, do not significantly affect energetics. However, popular model simplifications with respect to parallel advection are found to significantly deteriorate the consistency of the model.

  2. Kalman filtering and smoothing for linear wave equations with model error

    Lee, Wonjung; McDougall, D; Stuart, A M


    Filtering is a widely used methodology for the incorporation of observed data into time-evolving systems. It provides an online approach to state estimation inverse problems when data are acquired sequentially. The Kalman filter plays a central role in many applications because it is exact for linear systems subject to Gaussian noise, and because it forms the basis for many approximate filters which are used in high-dimensional systems. The aim of this paper is to study the effect of model error on the Kalman filter, in the context of linear wave propagation problems. A consistency result is proved when no model error is present, showing recovery of the true signal in the large data limit. This result, however, is not robust: it is also proved that arbitrarily small model error can lead to inconsistent recovery of the signal in the large data limit. If the model error is in the form of a constant shift to the velocity, the filtering and smoothing distributions only recover a partial Fourier expansion, a phenomenon related to aliasing. On the other hand, for a class of wave velocity model errors which are time dependent, it is possible to recover the filtering distribution exactly, but not the smoothing distribution. Numerical results are presented which corroborate the theory, and also propose a computational approach which overcomes the inconsistency in the presence of model error, by relaxing the model
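    The mechanics behind the analysis can be illustrated with a scalar Kalman filter: predict with the assumed linear dynamics, then correct with the noisy observation. The dynamics and noise levels below are invented, and the wave-equation setting is reduced to a single state purely for brevity; with a correct model the estimate tightens as data accumulate, which is the consistency behaviour the paper studies.

```python
import numpy as np

# Scalar Kalman filter: x_{k+1} = a x_k + w_k, observed as y_k = x_k + v_k.
rng = np.random.default_rng(1)
a = 0.99            # assumed (correct) model dynamics
q, r = 1e-4, 0.5    # process / observation noise variances

x_true = 1.0
m, p = 0.0, 1.0     # filter mean and variance (prior)
for _ in range(500):
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q))
    y = x_true + rng.normal(0.0, np.sqrt(r))
    m, p = a * m, a * a * p + q                 # predict
    k = p / (p + r)                             # Kalman gain
    m, p = m + k * (y - m), (1.0 - k) * p       # update
# With no model error the posterior variance p settles near q/(1 - a*a),
# and the mean m tracks the true state closely.
```

    Introducing even a small mismatch in `a` between the simulated truth and the filter model is the scalar analogue of the inconsistency the paper proves for wave velocities.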

  3. New results on the mathematical problems in nonlinear physics



    The main topics treated in this report are: I) Existence of generalized Lagrangians. II) Conserved densities for odd-order polynomial evolution equations and linear evolution systems. III) Conservation laws for Klein-Gordon, Dirac and Maxwell equations. IV) Stability conditions for finite-energy solutions of a non-linear Klein-Gordon equation. V) Hamiltonian approach to non-linear evolution equations and Backlund transformations. VI) Anharmonic vibrations: status of results and new possible approaches. (Author) 83 refs.


    Constantin ANGHELACHE


    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the possible interpretations that can be drawn at this level.

  5. EURADOS intercomparison exercise on Monte Carlo modelling of a medical linear accelerator.

    Caccia, Barbara; Le Roy, Maïwenn; Blideanu, Valentin; Andenna, Claudio; Arun, Chairmadurai; Czarnecki, Damian; El Bardouni, Tarek; Gschwind, Régine; Huot, Nicolas; Martin, Eric; Zink, Klemens; Zoubair, Mariam; Price, Robert; de Carlan, Loïc


    In radiotherapy, Monte Carlo (MC) methods are considered a gold standard for calculating accurate dose distributions, particularly in heterogeneous tissues. EURADOS organized an international comparison with six participants applying different MC models to a real medical linear accelerator and to one homogeneous and four heterogeneous dosimetric phantoms. The aim of this exercise was to identify, by comparing different MC models against a complete experimental dataset, critical aspects useful for MC users when building and calibrating a simulation and performing a dosimetric analysis. Results show on average a good agreement between simulated and experimental data. However, some significant differences have been observed, especially in the presence of heterogeneities. Moreover, the results depend critically on the choices made for the initial electron source parameters. This intercomparison allowed the participants to identify some critical issues in MC modelling of a medical linear accelerator. Therefore, the complete experimental dataset assembled for this intercomparison will be made available to all MC users, providing them with an opportunity to build and calibrate a model for a real medical linear accelerator.

  6. A wild model of linear arithmetic and discretely ordered modules

    Glivický, Petr; Pudlák, Pavel


    Roč. 63, č. 6 (2017), s. 501-508 ISSN 0942-5616 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords : linear arithmetics Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.250, year: 2016

  7. Evaluation of linear induction motor characteristics : the Yamamura model


    The Yamamura theory of the double-sided linear induction motor (LIM) excited by a constant current source is discussed in some detail. The report begins with a derivation of thrust and airgap power using the method of vector potentials and theorem of...

  8. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    Holdaway, Daniel; Kent, James


    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
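    The kind of linearity check the abstract describes can be sketched for a single advection step F by comparing F(u + v) with F(u) + F(v): a first-order upwind step passes exactly, while a minmod-limited step does not. The grid size, CFL number and test fields below are arbitrary invented choices, and the limiter is a generic minmod rather than the specific PPM limiters used in GEOS-5.

```python
import numpy as np

def upwind_step(u, c=0.5):
    """One first-order upwind step on a periodic grid (linear in u)."""
    return u - c * (u - np.roll(u, 1))

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(u, c=0.5):
    """One slope-limited step; the limiter makes it nonlinear in u."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    face = u + 0.5 * (1.0 - c) * slope
    return u - c * (face - np.roll(face, 1))

rng = np.random.default_rng(2)
u, v = rng.normal(size=64), rng.normal(size=64)
lin_err = np.max(np.abs(upwind_step(u + v) - upwind_step(u) - upwind_step(v)))
nonlin_err = np.max(np.abs(limited_step(u + v) - limited_step(u) - limited_step(v)))
# lin_err is at round-off level; nonlin_err is order one
```

    Failures of this additivity test are exactly what makes a scheme unsuitable as its own tangent linear model.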

  9. Underprediction of human skin erythema at low doses per fraction by the linear quadratic model

    Hamilton, Christopher S.; Denham, James W.; O'Brien, Maree; Ostwald, Patricia; Kron, Tomas; Wright, Suzanne; Doerr, Wolfgang


    Background and purpose. The erythematous response of human skin to radiotherapy has proven useful for testing the predictions of the linear quadratic (LQ) model in terms of fractionation sensitivity and repair half time. No formal investigation of the response of human skin to doses less than 2 Gy per fraction has occurred. This study aims to test the validity of the LQ model for human skin at doses ranging from 0.4 to 5.2 Gy per fraction. Materials and methods. Complete erythema reaction profiles were obtained using reflectance spectrophotometry in two patient populations: 65 patients treated palliatively with 5, 10, 12 and 20 daily treatment fractions (varying thicknesses of bolus, various body sites) and 52 patients undergoing prostatic irradiation for localised carcinoma of the prostate (no bolus, 30-32 fractions). Results and conclusions. Gender, age, site and prior sun exposure influence pre- and post-treatment erythema values independently of dose administered. Out-of-field effects were also noted. The linear quadratic model significantly underpredicted peak erythema values at doses less than 1.5 Gy per fraction. This suggests that either the conventional linear quadratic model does not apply for low doses per fraction in human skin or that erythema is not exclusively initiated by radiation damage to the basal layer. The data are potentially explained by an induced repair model
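    The fractionation arithmetic that the study tests can be made concrete with the standard biologically effective dose (BED) formula derived from the LQ model. The α/β value below is a conventional illustration, not a parameter estimated in this study.

```python
# LQ-model isoeffect arithmetic: BED = n*d*(1 + d/(alpha/beta)),
# where n is the number of fractions and d the dose per fraction.

def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose under the linear quadratic model."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Same total dose (60 Gy), different doses per fraction, alpha/beta = 10 Gy:
print(bed(30, 2.0, 10.0))   # 30 x 2.0 Gy  → 72.0
print(bed(60, 1.0, 10.0))   # 60 x 1.0 Gy  → 66.0
```

    The LQ model thus predicts a smaller effect at low doses per fraction for the same total dose; the study's finding is that measured erythema at doses below 1.5 Gy per fraction exceeds this prediction.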

  10. An improved risk-explicit interval linear programming model for pollution load allocation for watershed management.

    Xia, Bisheng; Qian, Xin; Yao, Hong


    Although the risk-explicit interval linear programming (REILP) model has solved the problem of having interval solutions, it has an equity problem that can lead to unbalanced allocation between different decision variables. Therefore, an improved REILP model is proposed. This model adds an equity objective function and three constraint conditions to overcome the equity problem. In this case, pollution reduction is in proportion to pollutant load, which supports balanced development between different regional economies. The model is used to solve the problem of pollution load allocation in a small transboundary watershed. Compared with the result of the original REILP model, our model achieves equity between the upstream and downstream pollutant loads; it also overcomes the problem of the greatest pollution reduction being assigned to sources nearest to the control section. The model provides a better solution to the problem of pollution load allocation than previous versions.

  11. Huffman and linear scanning methods with statistical language models.

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris


    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
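    The core of Huffman scanning is an ordinary Huffman code built over symbol probabilities, so that likely symbols need fewer binary switch activations than a fixed row/column scan would require. A toy construction over invented probabilities:

```python
import heapq

def huffman_code(probs):
    """Return {symbol: bitstring} for a probability dict (Huffman's algorithm)."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)          # two least probable groups
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c0.items()}
        merged.update({s: "1" + b for s, b in c1.items()})
        heapq.heappush(heap, (p0 + p1, counter, merged))
        counter += 1
    return heap[0][2]

# Invented symbol probabilities; in AAC scanning these would come from a
# statistical language model conditioned on the text typed so far.
code = huffman_code({"e": 0.5, "t": 0.25, "a": 0.15, "q": 0.10})
# The most probable symbol gets the shortest switch sequence.
```

    Each bit corresponds to accepting or rejecting a highlighted group, so expected switch presses per symbol approach the entropy of the language model's distribution.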

  12. The Dangers of Estimating V̇O2max Using Linear, Nonexercise Prediction Models.

    Nevill, Alan M; Cooke, Carlton B


    This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V̇O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V̇O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V̇O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue-limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V̇O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R, maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V̇O2max for participants in their 20s and underestimate V̇O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values nor age. This probably explains the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness when estimating V̇O2max. Adopting allometric models will provide more accurate predictions of V̇O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
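    The allometric idea is that a power law in body mass times an exponential decline in age becomes linear after a log transform, so it can still be fitted by ordinary least squares. All data below are synthetic and the exponents are invented, not the studies' estimates.

```python
import numpy as np

# Synthetic cohort: VO2max as a power law in mass with exponential age decline.
rng = np.random.default_rng(4)
age = rng.uniform(20.0, 80.0, 500)
mass = rng.normal(75.0, 10.0, 500)
vo2 = 120.0 * mass**-0.33 * np.exp(-0.012 * age) * rng.lognormal(0.0, 0.05, 500)

# Allometric fit: log(VO2max) ~ const + b1*log(mass) + b2*age, via OLS.
X = np.column_stack([np.ones_like(age), np.log(mass), age])
coef, *_ = np.linalg.lstsq(X, np.log(vo2), rcond=None)
mass_exponent, age_rate = coef[1], coef[2]
# The log-linear fit recovers the generating exponents (-0.33 and -0.012).
```

    A purely additive linear model fitted to the same data would misstate the age effect at both ends of the range, which is the overestimate/underestimate pattern the abstract reports.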

  13. Admissible Estimators in the General Multivariate Linear Model with Respect to Inequality Restricted Parameter Set

    Shangli Zhang


    Full Text Available By using the methods of linear algebra and matrix inequality theory, we obtain a characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, the necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible are established.

  14. Preisach hysteresis model for non-linear 2D heat diffusion

    Jancskar, Ildiko; Ivanyi, Amalia


    This paper analyzes a non-linear heat diffusion process in which the thermal diffusivity behaviour is a hysteretic function of the temperature. To model this temperature dependence, the discrete Preisach algorithm as a general hysteresis model has been integrated into a non-linear multigrid solver. The hysteretic diffusion shows a heating-cooling asymmetry in character. The presented type of hysteresis speeds up the thermal processes in the modelled systems in a very interesting non-linear way
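    A discrete Preisach operator is a weighted sum of relay hysterons, each switching up when the input exceeds its upper threshold and down when it falls below its lower threshold. The thresholds below are invented; the point is only the heating-cooling asymmetry, i.e. that the output at the same temperature differs between the two branches.

```python
import numpy as np

class Preisach:
    """Minimal discrete Preisach model: equal-weight relay hysterons."""

    def __init__(self, thresholds):
        self.thr = thresholds              # list of (low, high), low < high
        self.state = [-1.0] * len(thresholds)

    def apply(self, u):
        for i, (low, high) in enumerate(self.thr):
            if u >= high:
                self.state[i] = 1.0        # relay switches up
            elif u <= low:
                self.state[i] = -1.0       # relay switches down
        return sum(self.state) / len(self.state)

model = Preisach([(0.2, 0.8), (0.3, 0.9), (0.1, 0.6)])
up = [model.apply(u) for u in np.linspace(0.0, 1.0, 11)]    # heating branch
down = [model.apply(u) for u in np.linspace(1.0, 0.0, 11)]  # cooling branch
# At u = 0.5 the two branches disagree: that gap is the hysteresis loop.
```

    In the paper's setting the operator output would feed the temperature-dependent thermal diffusivity inside the multigrid solver, rather than being read out directly.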

  15. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Drzewiecki, Wojciech


    In this work, nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on the obtained results the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. Heterogeneous model ensembles performed for individual time point assessments at least as well as the best individual models. In the case of imperviousness change assessment the ensembles always outperformed single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.

  16. Modeling and non-linear responses of MEMS capacitive accelerometer

    Sri Harsha C.


    Full Text Available A theoretical investigation of an electrically actuated beam has been illustrated when the electrostatically actuated micro-cantilever beam is separated from the electrode by a moderately large gap, for two distinct types of geometric configurations of a MEMS accelerometer. Higher-order nonlinear terms have been taken into account for studying the pull-in voltage analysis. A nonlinear model of gas film squeezing damping, another source of nonlinearity in MEMS devices, is included in obtaining the dynamic responses. Moreover, in the present work, the possible sources of nonlinearities in formulating the mathematical model of a MEMS accelerometer and their influences on the dynamic responses have been investigated. The theoretical results obtained by using MATLAB have been verified with the results obtained in FE software and have been found in good agreement. Criteria towards a stable micro-size accelerometer for each configuration have been investigated. This investigation clearly provides an understanding of the nonlinear static and dynamic characteristics of electrostatically actuated micro-cantilever-based devices in MEMS.

  17. Reduced-Size Integer Linear Programming Models for String Selection Problems: Application to the Farthest String Problem.

    Zörnig, Peter


    We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
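    The underlying combinatorial problem is easy to state: find a string maximizing the minimum Hamming distance to every string in a given set. On a tiny invented instance it can be solved by brute-force enumeration, which is what the integer programs in the abstract replace at realistic sizes.

```python
from itertools import product

def hamming(s, t):
    """Number of positions at which two equal-length strings differ."""
    return sum(a != b for a, b in zip(s, t))

def farthest_string(strings, alphabet="01"):
    """Brute-force farthest string: maximize the minimum Hamming distance."""
    length = len(strings[0])
    best, best_d = None, -1
    for cand in product(alphabet, repeat=length):
        cand = "".join(cand)
        d = min(hamming(cand, s) for s in strings)
        if d > best_d:
            best, best_d = cand, d
    return best, best_d

sol, d = farthest_string(["0000", "0011", "0101"])  # toy instance
print(sol, d)  # → 1110 3
```

    The search space grows as |alphabet|^length, which is why compact integer linear programming formulations with tight relaxations matter for real string selection instances.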


    Klumpp, A. R.


    This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPACK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.

  19. A Comparison of Alternative Estimators of Linearly Aggregated Macro Models

    Fikri Akdeniz


    Full Text Available This paper deals with the linear aggregation problem. For the true underlying micro relations, which explain the micro behavior of the individuals, no restrictive rank conditions are assumed. Thus the analysis is presented in a framework utilizing generalized inverses of singular matrices. We investigate several estimators for certain linear transformations of the systematic part of the corresponding macro relations. Homogeneity of micro parameters is discussed. Best linear unbiased estimation for micro parameters is described.

  20. Modified Hyperspheres Algorithm to Trace Homotopy Curves of Nonlinear Circuits Composed by Piecewise Linear Modelled Devices

    H. Vazquez-Leal


    Full Text Available We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation.
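    The basic continuation idea can be sketched on a scalar equation with the Newton homotopy H(x, λ) = f(x) - (1 - λ)·f(x0), which deforms an easy problem (λ = 0, solved by x0) into f(x) = 0 at λ = 1. The circuit-style PWL machinery and sphere tracking of the paper are replaced here by a smooth cubic and a plain predictor-corrector, purely for illustration; different start points trace paths to different operating points.

```python
# Newton homotopy continuation on f(x) = x^3 - 3x + 1, which has three
# real roots, standing in for a circuit with multiple operating points.

def f(x):
    return x**3 - 3.0 * x + 1.0

def fprime(x):
    return 3.0 * x**2 - 3.0

def continue_root(x0, steps=100):
    """Trace H(x, lam) = f(x) - (1 - lam) * f(x0) = 0 from lam = 0 to 1."""
    x, fx0 = x0, f(x0)
    for k in range(1, steps + 1):
        target = (1.0 - k / steps) * fx0
        for _ in range(20):                       # Newton corrector
            x -= (f(x) - target) / fprime(x)
    return x

roots = sorted(continue_root(x0) for x0 in (-3.0, 0.0, 3.0))
print([round(r, 3) for r in roots])  # → [-1.879, 0.347, 1.532]
```

    For PWL device models the trajectories themselves become piecewise linear, which is exactly the structure the paper exploits to replace interpolation and fine tuning with straight-line parametrisation.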

  1. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng


    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  2. Examining secular trend and seasonality in count data using dynamic generalized linear modelling

    Lundbye-Christensen, Søren; Dethlefsen, Claus; Gorst-Rasmussen, Anders

    Aims  Time series of incidence counts often show secular trends and seasonal patterns. We present a model for incidence counts capable of handling a possible gradual change in growth rates and seasonal patterns, serial correlation and overdispersion. Methods  The model resembles an ordinary time series regression model for Poisson counts. It differs in allowing the regression coefficients to vary gradually over time in a random fashion. Data  In the period January 1980 to 1999, 17,989 incidents of acute myocardial infarction were recorded in the county of Northern Jutland, Denmark. Records were updated daily. Results  The model with a seasonal pattern and an approximately linear trend was fitted to the data, and diagnostic plots indicate a good model fit. The analysis with the dynamic model revealed peaks coinciding with influenza epidemics. On average the peak-to-trough ratio is estimated...

  3. Mean-Variance-CVaR Model of Multiportfolio Optimization via Linear Weighted Sum Method

    Younes Elahi


    We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have examined optimal MVC portfolio models, the linear weighted sum method (LWSM) has not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, this approach is investigated for investment in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using three objective functions helps investors to manage their portfolio better, minimizing the risk and maximizing the return of the portfolio. The main goal of this study is to modify current models and simplify them by using LWSM to obtain better results.
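As a rough illustration of the linear weighted sum method applied to the three MVC objectives, the sketch below scalarizes mean return, variance, and CVaR for a two-asset portfolio on simulated returns. All data, weights, and function names are illustrative assumptions, not the paper's MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns for two assets (illustrative data, not from the paper)
returns = rng.normal([0.0005, 0.0008], [0.01, 0.02], size=(1000, 2))

def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: mean loss beyond the alpha-quantile."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def weighted_objective(w_assets, lam=(1.0, 1.0, 1.0)):
    """Linear weighted sum of the three MVC objectives:
    maximize mean return, minimize variance, minimize CVaR."""
    port = returns @ w_assets
    l1, l2, l3 = lam
    return -l1 * port.mean() + l2 * port.var() + l3 * cvar(-port)

# Grid search over the weight on asset 1 (asset 2 gets the remainder)
grid = np.linspace(0, 1, 101)
best = min(grid, key=lambda w: weighted_objective(np.array([w, 1 - w])))
print(f"optimal weight on asset 1: {best:.2f}")
```

Varying the weights `lam` traces out different compromise portfolios on the multiobjective frontier.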

  4. A multi-dimensional dynamic linear model for monitoring slaughter pig production

    Jensen, Dan Børge; Cornou, Cecile; Toft, Nils

    Scientists and farmers still lack an efficient way to unify the large number of different types of data series which are increasingly being generated in relation to automatic herd monitoring. Such a unifying model should be able to account for the correlations between the various types of data, resulting in a model which could potentially yield more information than can be gained from the individual components separately. Here we present such a model for monitoring slaughter pig production, in the form of a multivariate dynamic linear model. This model unifies three types of data (live weight, feed- and water consumption), measured at different levels of detail (individual pig and double-pen level) and with different observational frequencies (weekly and daily), using series collected for the Danish PigIT project. The presented three-dimensional model serves as a proof of concept...

  5. Modelling the influence of sensory dynamics on linear and nonlinear driver steering control

    Nash, C. J.; Cole, D. J.


    A recent review of the literature has indicated that sensory dynamics play an important role in the driver-vehicle steering task, motivating the design of a new driver model incorporating human sensory systems. This paper presents a full derivation of the linear driver model developed in previous work, and extends the model to control a vehicle with nonlinear tyres. Various nonlinear controllers and state estimators are compared with different approximations of the true system dynamics. The model simulation time is found to increase significantly with the complexity of the controller and state estimator. In general the more complex controllers perform best, although with certain vehicle and tyre models linearised controllers perform as well as a full nonlinear optimisation. Various extended Kalman filters give similar results, although the driver's sensory dynamics reduce control performance compared with full state feedback. The new model could be used to design vehicle systems which interact more naturally and safely with a human driver.

  6. Optics Studies for the CERN Proton Synchrotron Machine Linear and Nonlinear Modelling using Beam Based Measurements

    Cappi, R; Martini, M; Métral, Elias; Métral, G; Steerenberg, R; Müller, A S


    The CERN Proton Synchrotron machine is built using combined function magnets. The control of the linear tune as well as the chromaticity in both planes is achieved by means of special coils added to the main magnets, namely two pole-face windings and one figure-of-eight loop. As a result, the overall magnetic field configuration is rather complex, not to mention the saturation effects induced at top energy. For these reasons a linear model of the PS main magnet does not provide sufficient precision to model particle dynamics. On the other hand, a sophisticated optical model is the key element for the foreseen intensity upgrade and, in particular, for the novel extraction mode based on adiabatic capture of beam particles inside stable islands in transverse phase space. A solution was found by performing accurate measurements of the nonlinear tune as a function of both amplitude and momentum offset so as to extract both linear and nonlinear properties of the lattice. In this paper the measurement results are present...

  7. A primer for biomedical scientists on how to execute model II linear regression analysis.

    Ludbrook, John


    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
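As a minimal illustration of how OLP regression can be computed with a general purpose tool, the sketch below uses the standard geometric mean (reduced major axis) formulation, slope = sign(r)·sy/sx. It is an illustrative stand-in, not the smatr program, and it omits the bootstrapped 95% CI discussed above.

```python
import numpy as np

def olp_regression(x, y):
    """Ordinary least products (geometric mean / reduced major axis) regression,
    for Model II problems where both x and y are subject to error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Example: x and y both measured with error around a true 1:1 relationship;
# OLS would bias the slope toward zero here, OLP recovers roughly 1
rng = np.random.default_rng(1)
true = rng.uniform(0, 10, 200)
x = true + rng.normal(0, 0.5, 200)
y = true + rng.normal(0, 0.5, 200)
slope, intercept = olp_regression(x, y)
print(f"OLP slope {slope:.3f}, intercept {intercept:.3f}")
```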

  8. Nonlinear shear behavior of rock joints using a linearized implementation of the Barton–Bandis model

    Simon Heru Prassetyo


    Experiments on rock joint behavior have shown that joint surface roughness is mobilized under shearing, inducing dilation and resulting in nonlinear joint shear strength and shear stress vs. shear displacement behavior. The Barton–Bandis (BB) joint model provides the most realistic prediction of the nonlinear shear behavior of rock joints. The BB model accounts for asperity roughness and strength through the joint roughness coefficient (JRC) and joint wall compressive strength (JCS) parameters. Nevertheless, many computer codes for rock engineering analysis still use constant shear strength parameters from the linear Mohr–Coulomb (M−C) model, which is only appropriate for smooth and non-dilatant joints. This limitation prevents fractured rock models from capturing the nonlinearity of joint shear behavior. To bridge the BB and M−C models, this paper aims to provide a linearized implementation of the BB model using a tangential technique to obtain equivalent M−C parameters that can satisfy the nonlinear shear behavior of rock joints. These equivalent parameters, namely the equivalent peak cohesion, friction angle, and dilation angle, are then converted into their mobilized forms to account for the mobilization and degradation of JRC under shearing. The conversion is done by expressing the JRC in the equivalent peak parameters as a function of joint shear displacement, using proposed hyperbolic and logarithmic functions in the pre- and post-peak regions of shear displacement, respectively. Likewise, the pre- and post-peak joint shear stiffnesses are derived so that a complete shear stress-shear displacement relationship can be established. Verifications of the linearized implementation of the BB model show that the shear stress-shear displacement curves, the dilation behavior, and the shear strength envelopes of rock joints are consistent with available experimental and numerical results.
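A sketch of the tangential linearization idea, assuming the common form of the BB peak strength criterion τ = σn·tan(φr + JRC·log10(JCS/σn)). The parameter values and function names are illustrative, and the mobilized JRC functions proposed in the paper are omitted.

```python
import numpy as np

# Illustrative Barton–Bandis parameters (values assumed, not from the paper)
JRC = 10.0     # joint roughness coefficient
JCS = 100.0    # joint wall compressive strength (MPa)
PHI_R = 30.0   # residual friction angle (deg)

def bb_shear_strength(sigma_n):
    """Barton–Bandis peak shear strength (MPa) at normal stress sigma_n (MPa)."""
    return sigma_n * np.tan(np.radians(PHI_R + JRC * np.log10(JCS / sigma_n)))

def equivalent_mc(sigma_n, h=1e-4):
    """Tangential linearization: equivalent Mohr–Coulomb cohesion and
    friction angle at the operating normal stress (tangent to the BB envelope)."""
    dtau = (bb_shear_strength(sigma_n + h) - bb_shear_strength(sigma_n - h)) / (2 * h)
    phi_eq = np.degrees(np.arctan(dtau))                # equivalent friction angle
    c_eq = bb_shear_strength(sigma_n) - sigma_n * dtau  # equivalent cohesion (intercept)
    return c_eq, phi_eq

c_eq, phi_eq = equivalent_mc(2.0)
print(f"at sigma_n = 2 MPa: c_eq = {c_eq:.3f} MPa, phi_eq = {phi_eq:.1f} deg")
```

Feeding these equivalent parameters to an M−C-based code reproduces the BB strength locally around the chosen normal stress.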

  9. Skinfold creep under load of caliper. Linear visco- and poroelastic model simulations.

    Nowak, Joanna; Nowak, Bartosz; Kaczmarek, Mariusz


    This paper addresses the diagnostic idea proposed in [11] to measure the parameter called the rate of creep of the axillary fold of tissue using a modified Harpenden skinfold caliper, in order to distinguish normal and edematous tissue. Our simulations are intended to help in understanding the creep phenomenon and the creep rate parameter as a sensitive indicator of edema. The parametric analysis shows the tissue behavior under external load as well as its sensitivity to changes of crucial hydro-mechanical tissue parameters, e.g., permeability or stiffness. The linear viscoelastic and poroelastic models of normal (single-phase) and oedematous tissue (two-phase: swelled tissue with an excess of interstitial fluid), implemented in the COMSOL Multiphysics environment, are used. Simulations are performed within the range of small strains for a simplified fold geometry, material characterization and boundary conditions. The predicted creep is the result of viscosity (viscoelastic model) or pore fluid displacement (poroelastic model) in the tissue. The tissue deformations, interstitial fluid pressure and interstitial fluid velocity are discussed in a parametric analysis with respect to the elasticity modulus, relaxation time and permeability of the tissue. The creep rate determined within the models of tissue is compared and referred to the diagnostic idea in [11]. The results obtained from the two linear models of subcutaneous tissue indicate that the form of the creep curve and the creep rate are sensitive to the material parameters which characterize the tissue. However, the adopted modelling assumptions point to a limited applicability of the creep rate as a discriminant of oedema.




    Circular data are data whose values are directions and can be represented as vectors on the unit circle. The statistical analysis used for such data is circular statistics. In regression analysis, if any of the predictor or response variables, or both, are circular, the analysis is called circular regression analysis. Observational data in circular statistics, which use direction and time units, usually do not satisfy all of the parametric assumptions, which makes nonparametric regression a good alternative. The nonparametric regression function is estimated using the Epanechnikov kernel estimator for the linear variables and the von Mises kernel estimator for the circular variable. This study showed that the results of circular analysis using circular descriptive statistics are better than those of common statistics. Multiple circular-linear nonparametric regression with the Epanechnikov and von Mises kernel estimators does not yield an explicit estimated model as parametric regression does, but instead produces estimates at the observation knots.

  11. Spatial Modeling of Flood Duration in Amazonian Floodplains Through Radar Remote Sensing and Generalized Linear Models

    Ferreira-Ferreira, J.; Francisco, M. S.; Silva, T. S. F.


    Amazon floodplains play an important role in biodiversity maintenance and provide important ecosystem services. Flood duration is the prime factor modulating biogeochemical cycling in Amazonian floodplain systems, as well as influencing ecosystem structure and function. However, due to the absence of accurate terrain information, fine-scale hydrological modeling is still not possible for most of the Amazon floodplains, and little is known regarding the spatio-temporal behavior of flooding in these environments. Our study presents a new approach for spatial modeling of flood duration, using Synthetic Aperture Radar (SAR) and Generalized Linear Modeling. Our focal study site was Mamirauá Sustainable Development Reserve, in the Central Amazon. We acquired a series of L-band ALOS-1/PALSAR Fine-Beam mosaics, chosen to capture the widest possible range of river stage heights at regular intervals. We then mapped flooded area on each image, and used the resulting binary maps as the response variable (flooded/non-flooded) for multiple logistic regression. Explanatory variables were: accumulated precipitation in the 15 days prior to each image acquisition; the water stage height recorded at the Mamirauá lake gauging station on each acquisition date; Euclidean distance from the nearest drainage; and slope, terrain curvature, profile curvature, planform curvature and Height Above the Nearest Drainage (HAND) derived from the 30-m SRTM DEM. Model results were validated with water levels recorded by ten pressure transducers installed within the floodplains, from 2014 to 2016. The most accurate model included water stage height and HAND as explanatory variables, yielding a RMSE of ±38.73 days of flooding per year when compared to the ground validation sites. The largest disagreements were 57 days and 83 days for two validation sites, while the remaining locations achieved absolute errors lower than 38 days. In five out of nine validation sites, the model predicted flood durations with
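The core GLM step can be sketched as a logistic regression fitted by Newton–Raphson, shown below on toy stand-ins for two of the paper's predictors (stage height and HAND). The data and coefficient values are fabricated for illustration only.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic-regression GLM fitted by Newton–Raphson (IRLS).
    X: (n, p) design matrix including an intercept column; y: 0/1 flooded flag."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        # Newton step: beta += (X'WX)^-1 X'(y - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

# Toy stand-in for the flood-mapping setup: flooding probability driven by
# water stage height minus terrain height above the nearest drainage (HAND)
rng = np.random.default_rng(2)
stage = rng.uniform(0, 10, 500)
hand = rng.uniform(0, 10, 500)
logit = 1.5 * (stage - hand)
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(500), stage, hand])
beta = fit_logistic(X, y)
print("intercept, stage, HAND coefficients:", np.round(beta, 2))
```

The fitted signs match intuition: higher stage raises flooding probability, higher HAND lowers it.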

  12. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko


    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
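The paper's split-half protocol can be sketched as follows for the linear regression half (the fuzzy logic model is omitted); the predictors and coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in data: three psychological predictors and a recycling-behaviour score
n = 664
X = rng.normal(size=(n, 3))                       # e.g. attitudes, norms, intentions
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.5, n)

# Random split into estimation and hold-out halves, as in the paper's protocol
idx = rng.permutation(n)
train, test = idx[:n // 2], idx[n // 2:]

# Fit ordinary linear regression on the estimation half
Xt = np.column_stack([np.ones(len(train)), X[train]])
coef, *_ = np.linalg.lstsq(Xt, y[train], rcond=None)

def rmse(rows):
    """Root mean squared prediction error on the given subset."""
    Xr = np.column_stack([np.ones(len(rows)), X[rows]])
    return np.sqrt(np.mean((Xr @ coef - y[rows]) ** 2))

print(f"fit RMSE {rmse(train):.3f}, hold-out RMSE {rmse(test):.3f}")
```

Comparing fit error against hold-out error for both models is exactly the two-stage comparison the paper reports.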

  13. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

    Muayad Al-Qaisy


    In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that will be able to give optimal performance, reject high load disturbances, and track set point changes. In order to study the performance of the two model predictive controllers, a MIMO proportional-integral-derivative (PID) control strategy is used as a benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and the reactor temperature (T). NNMPC shows superior performance over the LMPC and PID controllers, presenting a smaller overshoot and a shorter settling time.

  14. A numerical study of linear and nonlinear kinematic models in fish swimming with the DSD/SST method

    Tian, Fang-Bao


    Flow over two fish (modeled by two flexible plates) in tandem arrangement is investigated by solving the incompressible Navier-Stokes equations numerically with the DSD/SST method to understand the differences between the geometrically linear and nonlinear models. In the simulation, the motions of the plates are reconstructed from a vertically flowing soap film tunnel experiment with linear and nonlinear kinematic models. Based on the simulations, the drag, lift, power consumption, vorticity and pressure fields are discussed in detail. It is found that the linear and nonlinear models are able to reasonably predict the forces and power consumption of a single plate in flow. Moreover, if multiple plates are considered, these two models yield totally different results, which implies that the nonlinear model should be used. The results presented in this work provide a guideline for future studies in fish swimming.

  15. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Liu, Yan; Cai, Wensheng; Shao, Xueguang


    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, referred to as the master and slave instruments. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
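One hedged way to sketch the idea of correcting master coefficients using only a few slave spectra is a penalized least squares that stays close in profile to the master model. This is an illustrative stand-in on simulated data, not necessarily the paper's exact constrained optimization.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated master-instrument spectra (100 samples, 50 wavelengths) and analyte values
n, p = 100, 50
X_master = rng.normal(size=(n, p))
b_true = np.sin(np.linspace(0, 3, p))            # smooth "true" coefficient profile
y = X_master @ b_true + rng.normal(0, 0.1, n)

# Master model by ridge-regularised least squares
b_master = np.linalg.solve(X_master.T @ X_master + 0.1 * np.eye(p), X_master.T @ y)

# Slave instrument: linearly shifted response; only a few slave spectra available
X_slave = 1.05 * X_master + 0.02
few = slice(0, 10)

# Correct the master coefficients toward the slave data while staying close
# in profile to b_master (penalised least squares; lam controls the trade-off)
lam = 1.0
A = X_slave[few].T @ X_slave[few] + lam * np.eye(p)
b_slave = np.linalg.solve(A, X_slave[few].T @ y[few] + lam * b_master)

pred = X_slave @ b_slave
rmsep = np.sqrt(np.mean((pred - y) ** 2))
print(f"slave-model RMSEP: {rmsep:.3f}")
```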

  16. A universal, fault-tolerant, non-linear analytic network for modeling and fault detection

    Mott, J.E.; King, R.W.; Monson, L.R.; Olson, D.L.; Staffon, J.D.


    The similarities and differences of a universal network to normal neural networks are outlined. The description and application of a universal network is discussed by showing how a simple linear system is modeled by normal techniques and by universal network techniques. A full implementation of the universal network as universal process modeling software on a dedicated computer system at EBR-II is described and example results are presented. It is concluded that the universal network provides different feature recognition capabilities than a neural network and that the universal network can provide extremely fast, accurate, and fault-tolerant estimation, validation, and replacement of signals in a real system

  17. A Non-Linear Finite Element Model for the LHC Main Dipole Coil Cross-Section

    Pojer, M; Scandale, Walter


    The production of the dipole magnets for the Large Hadron Collider is at its final stage. Nevertheless, some mechanical instabilities are still observed for which no clear explanation has been found yet. An FE model of the dipole cold mass cross-section had already been developed at CERN, mainly for magnetic analysis, taking into account conductor blocks and assuming frictionless behavior. This paper describes a new ANSYS® model of the dipole coil cross-section, featuring individual turns inside conductor blocks, and implementing friction and the mechanical non-linear behavior of insulated cables. Preliminary results, comparison with measurements performed in industry and ongoing developments are discussed.


  19. Linear identification and model adjustment of a PEM fuel cell stack

    Kunusch, C; Puleston, P F; More, J J [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Consejo de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina); Husar, A [Institut de Robotica i Informatica Industrial (CSIC-UPC), c/ Llorens i Artigas 4-6, 08028 Barcelona (Spain); Mayosky, M A [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Comision de Investigaciones Cientificas (CIC), Provincia de Buenos Aires (Argentina)


    In the context of fuel cell stack control a major challenge is modeling the interdependence of various complex subsystem dynamics. In many cases, the interaction between states is modeled through several look-up tables, decision blocks and piecewise continuous functions. Many internal variables are inaccessible for measurement and cannot be used in control algorithms. To make significant contributions in this area, it is necessary to develop reliable models for control and design purposes. In this paper, a linear model based on experimental identification of a 7-cell stack was developed. The procedure followed to obtain a linear model of the system consisted in performing spectroscopy tests of four different single-input single-output subsystems. The considered inputs for the tests were the stack current and the cathode oxygen flow rate, while the measured outputs were the stack voltage and the cathode total pressure. The resulting model can be used either for model-based control design or for on-line analysis and error detection. (author)

  20. Linear time series modeling of GPS-derived TEC observations over the Indo-Thailand region

    Suraj, Puram Sai; Kumar Dabbakuti, J. R. K.; Chowdhary, V. Rajesh; Tripathi, Nitin K.; Ratnam, D. Venkata


    This paper proposes a linear time series model to represent the climatology of the ionosphere and to investigate the characteristics of hourly averaged total electron content (TEC). The GPS-TEC observation data at the Bengaluru international global navigation satellite system (GNSS) service (IGS) station (geographic 13.02°N, 77.57°E; geomagnetic latitude 4.4°N) have been utilized for processing the TEC data during an extended period (2009-2016) in the 24th solar cycle. The solar flux F10.7p index, the geomagnetic Ap index, and periodic oscillation factors have been considered to construct the linear TEC model. It is evident from the results that the effect of solar activity on TEC is strong: TEC reaches its maximum value (˜40 TECU) during the high solar activity (HSA) year (2014) and its minimum value (˜15 TECU) during the low solar activity (LSA) year (2009). Larger magnitudes of semiannual variations are observed during the HSA periods. The geomagnetic effect on TEC is relatively low, the highest being ˜4 TECU (March 2015). The magnitude of periodic variations is more significant during HSA periods (2013-2015) and less so during LSA periods (2009-2011). A correlation coefficient of 0.89 between the observations and the model-based estimations was found. The RMSE between observed and modeled TEC is 4.0 TECU for the linear model and 4.21 TECU for the IRI2016 model. Further, the linear TEC model has been validated at different latitudes over the northern low-latitude region. The solar component (F10.7p index) value decreases with an increase in latitude. The magnitudes of the periodic component become less significant with increasing latitude. The influence of the geomagnetic component becomes less significant at the Lucknow GNSS station (26.76°N, 80.88°E) when compared to other GNSS stations. Hourly averaged TEC values have been considered, and the ionospheric features are well recovered with the linear TEC model.
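A linear TEC model of the kind described, with solar, geomagnetic, and annual/semiannual periodic terms fitted by least squares, can be sketched on synthetic data as follows (all coefficients and series below are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Daily series over 8 years: solar flux proxy, geomagnetic index, and day number
days = np.arange(8 * 365)
f107p = 100 + 50 * np.sin(2 * np.pi * days / (11 * 365)) + rng.normal(0, 5, days.size)
ap = np.abs(rng.normal(10, 5, days.size))

# Synthetic "observed" TEC: solar + geomagnetic terms plus annual/semiannual harmonics
tec = (0.2 * f107p + 0.1 * ap
       + 3 * np.cos(2 * np.pi * days / 365.25) + 2 * np.cos(4 * np.pi * days / 365.25)
       + rng.normal(0, 1, days.size))

# Linear TEC model: intercept, F10.7p, Ap, and annual + semiannual oscillations
w = 2 * np.pi * days / 365.25
X = np.column_stack([np.ones(days.size), f107p, ap,
                     np.cos(w), np.sin(w), np.cos(2 * w), np.sin(2 * w)])
coef, *_ = np.linalg.lstsq(X, tec, rcond=None)

rmse = np.sqrt(np.mean((X @ coef - tec) ** 2))
print(f"solar coefficient {coef[1]:.3f}, RMSE {rmse:.2f} TECU")
```

The recovered solar coefficient is close to the value used to generate the data, mirroring how the paper separates solar, geomagnetic, and periodic contributions.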

  1. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    Thomas, C.; Lark, R. M.


    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. In the second
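The simulation structure described, a periodic fixed-effect mean plus an iid component and a temporally correlated component with exponentially decaying correlation, can be sketched as follows. Parameter values are illustrative, not the fitted Yorkshire-buoy estimates, and a log transform stands in for the Box-Cox transform.

```python
import numpy as np

rng = np.random.default_rng(6)

# Model components (illustrative values only)
n_months = 240                                   # twenty years of monthly values
t = np.arange(n_months)
mean = 1.2 + 0.4 * np.cos(2 * np.pi * t / 12)    # fixed periodic mean (12-month cycle)

# Temporally correlated random effect: AR(1) gives exponentially decaying correlation
phi, sigma_ar = 0.7, 0.15
ar = np.zeros(n_months)
for i in range(1, n_months):
    ar[i] = phi * ar[i - 1] + rng.normal(0, sigma_ar)

iid = rng.normal(0, 0.1, n_months)               # independent residual component

# Simulated series on the transformed scale, then back-transformed (log <-> exp)
hs = np.exp(mean + ar + iid)                     # illustrative wave heights (m)
print(f"simulated significant wave height: mean {hs.mean():.2f} m, max {hs.max():.2f} m")
```

Shifting the fixed-effect mean over time would generate the "trajectory toward a new wave climate" scenarios the abstract motivates.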

  2. Utility of low-order linear nuclear-power-plant models in plant diagnostics and control

    Tylee, J.L.


    A low-order, linear model of a pressurized water reactor (PWR) plant is described and evaluated. The model consists of 23 linear, first-order difference equations and simulates all subsystems of both the primary and secondary sides of the plant. Comparisons between the calculated model response and available test data show the model to be an adequate representation of the actual plant dynamics. Suggested uses for the model in an on-line digital plant diagnostics and control system are presented

  3. Modeling Single-Phase Inverter and Its Decentralized Coordinated Control by Using Feedback Linearization

    Renke Han


    Operating a microgrid stably and efficiently is a crucial problem. Starting from the nonlinear mathematical model of the inverter established in this paper, the input-output feedback linearization method is used to transform the nonlinear model into a linear tracking synchronization and consensus regulation control problem. Based on the linear model and a multiagent consensus algorithm, a decentralized coordinated controller is proposed that brings the amplitudes and angles of the inverter output voltages to consensus and shares active and reactive power in the desired ratio. The proposed control is fully distributed because each inverter only requires local and one neighbor's information, with a sparse communication structure based on the multiagent system. The hybrid consensus algorithm keeps the amplitude of the output voltages following the leader and drives the angles of the output voltages to consensus. The microgrid can then be operated more efficiently and the circulating current between DGs can be effectively suppressed. The effectiveness of the proposed method is proved through simulation results of a typical microgrid system.

  4. Partially linear varying coefficient models stratified by a functional covariate

    Maity, Arnab; Huang, Jianhua Z.


    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric

  5. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J


    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables the use of all the well-developed modelling tools and extensions that come with generalized linear factor models. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  6. Dynamics and control of quadcopter using linear model predictive control approach

    Islam, M.; Okasha, M.; Idres, M. M.


    This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom, and includes disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple circular to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective at dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach in the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective at tracking a given reference trajectory.
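The receding-horizon idea can be sketched on a toy linearized model, here a discrete double integrator standing in for one translational axis of the quadcopter, with the unconstrained finite-horizon problem solved in batch form (a simplification of the paper's constrained MPC; all weights are assumptions):

```python
import numpy as np

# Discrete double integrator: one translational axis of the linearized quadcopter
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

N = 15                                  # prediction horizon
Qw = np.array([10.0, 1.0])              # weights on position and velocity error
R = 0.1                                 # input weight

# Prediction matrices over the horizon: stacked states X = F x0 + G U
F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])   # (2N, 2)
G = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        G[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B)[:, 0]

W = np.diag(np.tile(Qw, N))             # stage-cost weights along the horizon

def mpc_step(x, ref):
    """Solve the unconstrained finite-horizon problem in batch least-squares
    form and apply only the first input (receding horizon)."""
    r = np.tile(ref, N)
    H = G.T @ W @ G + R * np.eye(N)
    g = G.T @ W @ (F @ x - r)
    return np.linalg.solve(H, -g)[0]

# Track a 1 m step in position from rest
x = np.array([0.0, 0.0])
for _ in range(100):
    x = A @ x + B[:, 0] * mpc_step(x, np.array([1.0, 0.0]))
print(f"final position {x[0]:.3f} m, final velocity {x[1]:.3f} m/s")
```

Adding input or state constraints would turn each step into a small quadratic program, which is where the computational expense mentioned above comes from.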

  7. A linear time layout algorithm for business process models

    Gschwind, T.; Pinggera, J.; Zugal, S.; Reijers, H.A.; Weber, B.


    The layout of a business process model influences how easily it can be understood. Existing layout features in process modeling tools often rely on graph representations, but do not take the specific properties of business process models into account. In this paper, we propose an algorithm that is

  8. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.


    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography.
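The simplest of the three models above, a tumor volume changing at a constant rate, amounts to an ordinary least-squares line fit of relative volume against treatment day. A minimal sketch on synthetic stand-in data (not the study's CT measurements):

```python
import numpy as np

# Sketch of model (1): tumor volume shrinking at a constant daily rate.
# The data here are synthetic stand-ins, not the study's CT-on-rails volumes.
days = np.arange(0, 30)
true_rate = -0.02   # assumed fractional volume change per treatment day
rng = np.random.default_rng(0)
volumes = 1.0 + true_rate * days + rng.normal(0.0, 0.01, days.size)

# Ordinary least squares for slope (rate) and intercept (initial volume)
rate, v0 = np.polyfit(days, volumes, 1)

def predict(day):
    """Predicted relative tumor volume on a given treatment day."""
    return v0 + rate * day
```

The study's stronger models replace this single constant rate with a power-fit relationship to initial volume and with modes of variation learned across patients, but the fitting-and-prediction structure is the same.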

  9. Restricted DCJ-indel model: sorting linear genomes with DCJ and indels


    Background The double-cut-and-join (DCJ) is a model that is able to efficiently sort one genome into another, generalizing the typical mutations (inversions, fusions, fissions, translocations) to which genomes are subject, but allowing the existence of circular chromosomes at the intermediate steps. In the general model, many circular chromosomes can coexist in some intermediate step. However, when the compared genomes are linear, it is more plausible to use the so-called restricted DCJ model, in which a circular chromosome is reincorporated immediately after its creation. These two consecutive DCJ operations, which create and reincorporate a circular chromosome, mimic a transposition or a block-interchange. When the compared genomes have the same content, it is known that the genomic distance for the restricted DCJ model is the same as the distance for the general model. If the genomes have unequal contents, in addition to DCJ it is necessary to consider indels, which are insertions and deletions of DNA segments. Linear-time algorithms were proposed to compute the distance and to find a sorting scenario in a general, unrestricted DCJ-indel model that considers DCJ and indels. Results In the present work we consider the restricted DCJ-indel model for sorting linear genomes with unequal contents. We allow DCJ operations and indels with the following constraint: if a circular chromosome is created by a DCJ, it has to be reincorporated in the next step (no other DCJ or indel can be applied between the creation and the reincorporation of a circular chromosome). We then develop a sorting algorithm and give a tight upper bound for the restricted DCJ-indel distance. Conclusions We have given a tight upper bound for the restricted DCJ-indel distance. The question of whether this bound can be reduced so that both the general and the restricted DCJ-indel distances are equal remains open. PMID:23281630

  10. Cross-beam energy transfer: On the accuracy of linear stationary models in the linear kinetic regime

    Debayle, A.; Masson-Laborde, P.-E.; Ruyer, C.; Casanova, M.; Loiseau, P.


    We present an extensive numerical study by means of particle-in-cell simulations of the energy transfer that occurs during the crossing of two laser beams. In the linear regime, when ions are not trapped in the potential well induced by the laser interference pattern, a very good agreement is obtained with a simple linear stationary model, provided the laser intensity is sufficiently smooth. These comparisons include different plasma compositions to cover the strong and weak Landau damping regimes as well as the multispecies case. The correct evaluation of the linear Landau damping at the phase velocity imposed by the laser interference pattern is essential to estimate the energy transfer rate between the laser beams, once the stationary regime is reached. The transient evolution obtained in kinetic simulations is also analysed by means of a full analytical formula that includes 3D beam energy exchange coupled with the ion acoustic wave response. Specific attention is paid to the energy transfer when the laser presents small-scale inhomogeneities. In particular, the energy transfer is reduced when the laser inhomogeneities are comparable with the Landau damping characteristic length of the ion acoustic wave.

  11. Linear models for assessing mechanisms of sperm competition: the trouble with transformations.

    Eggert, Anne-Katrin; Reinhardt, Klaus; Sakaluk, Scott K


    Although sperm competition is a pervasive selective force shaping the reproductive tactics of males, the mechanisms underlying different patterns of sperm precedence remain obscure. Parker et al. (1990) developed a series of linear models designed to identify two of the more basic mechanisms: sperm lotteries and sperm displacement; the models can be tested experimentally by manipulating the relative numbers of sperm transferred by rival males and determining the paternity of offspring. Here we show that tests of the model derived for sperm lotteries can result in misleading inferences about the underlying mechanism of sperm precedence because the required inverse transformations may lead to a violation of fundamental assumptions of linear regression. We show that this problem can be remedied by reformulating the model using the actual numbers of offspring sired by each male, and log-transforming both sides of the resultant equation. Reassessment of data from a previous study (Sakaluk and Eggert 1996) using the corrected version of the model revealed that we should not have excluded a simple sperm lottery as a possible mechanism of sperm competition in decorated crickets, Gryllodes sigillatus.
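The corrected test described above, using actual offspring counts and log-transforming both sides, can be illustrated on simulated data. Under a simple sperm lottery (a fair raffle), the expected ratio of offspring sired equals the ratio of sperm transferred, so regressing the log offspring ratio on the log sperm ratio should yield a slope near 1 and an intercept near 0. The simulation below is illustrative only, not the crickets data:

```python
import numpy as np

# Corrected sperm-lottery test sketch: under a fair raffle, offspring ratio
# tracks sperm ratio, so on the log-log scale slope ~ 1 and intercept ~ 0.
# Simulated matings for illustration; not the Gryllodes sigillatus data.
rng = np.random.default_rng(1)
sperm_ratio = rng.uniform(0.2, 5.0, 200)            # s1 / s2 across matings
# offspring ratio with multiplicative noise around the lottery expectation
offspring_ratio = sperm_ratio * rng.lognormal(0.0, 0.1, 200)

slope, intercept = np.polyfit(np.log(sperm_ratio),
                              np.log(offspring_ratio), 1)
# slope near 1 and intercept near 0 are consistent with a simple lottery
```

Because both sides are log-transformed, the multiplicative error enters additively, so the homoscedasticity assumption of linear regression is far less likely to be violated than under the inverse transformation.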

  12. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Claudimar Pereira da Veiga


    The importance of demand forecasting as a management tool is well documented. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately assimilates the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of errors in demand forecasting on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, one can affirm that all models tested presented good results (much better than the current forecasting method used), with mean absolute percent error (MAPE) around 10%. The total financial impact for the company was 6.05% of annual sales.
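The accuracy measure quoted in the study, mean absolute percent error (MAPE), is straightforward to compute; the demand figures below are a toy example, not the company's data:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percent error: the average of |error| / actual, in %."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# toy example: monthly demand vs. forecast (illustrative numbers only)
demand   = [100, 120, 90, 110]
forecast = [ 90, 130, 95, 100]
error = mape(demand, forecast)   # about 8.2% for these numbers
```

Note that MAPE is undefined when actual demand is zero and penalizes over- and under-forecasting asymmetrically, which is one reason cost-based evaluations like the one in this study complement it.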

  13. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Giuliano de Oliveira Freitas


    PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly allocated to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to receive an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected, and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ = −0.93); the linear regression is given by the following equation: %Success = (1 − APVratio) × 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
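The fitted regression reported in the abstract is simple enough to apply directly; the 0.25 D / 1.00 D figures in the usage line are illustrative, not from the study:

```python
def percent_success(apv_post, apv_pre):
    """Alpins %Success from the Thibos APV ratio, per the fitted regression
    reported in the abstract: %Success = (1 - APVratio) x 100."""
    apv_ratio = apv_post / apv_pre
    return (1.0 - apv_ratio) * 100.0

# e.g. 0.25 D of residual astigmatism after correcting 1.00 D -> 75% success
result = percent_success(0.25, 1.00)
```

An APVratio of 0 (no residual astigmatic power) maps to 100% success and a ratio of 1 (no change) maps to 0%, which is what makes the negative correlation between the two measures so strong.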

  14. Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements

    Jesus M. de la Cruz


    This paper presents a state of the art of the estimation algorithms dealing with Out-of-Sequence (OOS) measurements for non-linearly modeled systems. The state of the art includes a critical analysis of the algorithm properties that takes into account the applicability of these techniques to autonomous mobile robot navigation based on the fusion of measurements, delayed and OOS, provided by multiple sensors. In addition, it shows a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics and both linear and non-linear sensors) and compares its performance against other approaches. The simulated results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. The real experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot navigate successfully in spite of receiving many OOS measurements. Finally, the comparison highlights that not only is the selected OOS algorithm among the best performing ones of the comparison, but it also has the lowest computational and memory cost.

  15. Free-piston engine linear generator for hybrid vehicles modeling study

    Callahan, T. J.; Ingram, S. K.


    Development of a free-piston engine linear generator was investigated for use as an auxiliary power unit for a hybrid electric vehicle. The main focus of the program was to develop an efficient linear generator concept to convert the piston motion directly into electrical power. Computer modeling techniques were used to evaluate five different designs for linear generators. These designs included permanent magnet generators, reluctance generators, linear DC generators, and two- and three-coil induction generators. The efficiency of the linear generator was highly dependent on the design concept. The two-coil induction generator was determined to be the best design, with an efficiency of approximately 90 percent.

  16. Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It

    Grünwald, P.; van Ommen, T.


    We empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data are

  17. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.


    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
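The simple-slopes idea described above reduces to a small computation: in a moderated regression y = b0 + b1·x + b2·z + b3·x·z, the slope of y on x at a chosen moderator value z0 is b1 + b3·z0. A minimal sketch on simulated data (illustrative only, not the tools the article provides):

```python
import numpy as np

# Simple-slopes sketch for a moderated multiple linear regression:
# y = b0 + b1*x + b2*z + b3*x*z, slope of y on x at z = z0 is b1 + b3*z0.
# Simulated data for illustration; the true coefficients are 1.0/0.5/0.3/0.4.
rng = np.random.default_rng(2)
n = 500
x, z = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(0.0, 0.1, n)

# Fit the four coefficients by ordinary least squares
X = np.column_stack([np.ones(n), x, z, x * z])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

def simple_slope(z0):
    """Estimated slope of y on x at moderator value z0."""
    return b1 + b3 * z0
```

Regions of significance and confidence bands extend this by asking for which values of z0 the simple slope differs significantly from zero, which requires the covariance of the coefficient estimates as well.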

  18. Genomic prediction based on data from three layer lines using non-linear regression models

    Huang, H.; Windig, J.J.; Vereijken, A.; Calus, M.P.L.


    Background - Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. Methods - In an attempt to alleviate

  19. Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties

    Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon


    Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F[subscript 0]) during anterior-posterior stretching. Method: Three materially linear and 3 materially nonlinear models were…

  20. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it

    P.D. Grünwald (Peter); T. van Ommen (Thijs)


    We empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data