Validity of tests under covariate-adaptive biased coin randomization and generalized linear models.
Shao, Jun; Yu, Xinxin
2013-12-01
Some covariate-adaptive randomization methods have been used in clinical trials for a long time, but little theoretical work had been done on testing hypotheses under covariate-adaptive randomization until Shao et al. (2010), who provided a theory with detailed discussion for responses under linear models. In this article, we establish some asymptotic results for covariate-adaptive biased coin randomization under generalized linear models with possibly unknown link functions. We show that the simple t-test without using any covariate is conservative under covariate-adaptive biased coin randomization in terms of its Type I error rate, and that a valid test using the bootstrap can be constructed. This bootstrap test, utilizing covariates in the randomization scheme, is shown to be asymptotically as efficient as Wald's test correctly using covariates in the analysis. Thus, the efficiency loss due to not using covariates in the analysis can be recovered by utilizing covariates in covariate-adaptive biased coin randomization. Our theory is illustrated with the two most popular types of discrete outcomes, binary responses and event counts under the Poisson model, as well as exponentially distributed continuous responses. We also show that an alternative simple test without using any covariate under the Poisson model has an inflated Type I error rate under simple randomization, but is valid under covariate-adaptive biased coin randomization. Effects of model misspecification on the validity of tests are also discussed. Simulation studies of the Type I error rates and powers of several tests are presented for both discrete and continuous responses. © 2013, The International Biometric Society.
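The abstract's first claim can be illustrated with a minimal simulation sketch (not the paper's procedure): an Efron-style biased coin applied within covariate strata, followed by the unadjusted two-sample t-test. The covariate effect size, sample size, and biasing probability below are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def biased_coin_assign(strata, p=0.8, rng=rng):
    """Efron-style biased coin within each covariate stratum:
    assign the under-represented arm with probability p."""
    trt = np.empty(len(strata), dtype=int)
    imbalance = {}                      # stratum -> (#treated - #control)
    for i, s in enumerate(strata):
        d = imbalance.get(s, 0)
        pr = 0.5 if d == 0 else (1 - p if d > 0 else p)
        trt[i] = rng.random() < pr
        imbalance[s] = d + (1 if trt[i] else -1)
    return trt

n, n_sim, alpha = 100, 2000, 0.05
rejections = 0
for _ in range(n_sim):
    z = rng.integers(0, 2, size=n)      # binary prognostic covariate
    t = biased_coin_assign(z)
    y = 1.5 * z + rng.normal(size=n)    # no treatment effect: null is true
    _, pval = stats.ttest_ind(y[t == 1], y[t == 0])
    rejections += pval < alpha

print(rejections / n_sim)               # empirical Type I error rate
```

Because the coin balances the covariate across arms, the pooled variance used by the t-test overstates the true variance of the mean difference, so the empirical rate falls below the nominal 0.05, matching the conservativeness the abstract describes.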
Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.
Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L
2012-03-01
Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.
Fan, Liqiong; Yeatts, Sharon D; Wolf, Bethany J; McClure, Leslie A; Selim, Magdy; Palesch, Yuko Y
2018-01-01
Under covariate-adaptive randomization, the covariate is tied to both randomization and analysis. Misclassification of such a covariate will affect the intended treatment assignment; further, it is unclear what the appropriate analysis strategy should be. We explore the impact of such misclassification on the trial's statistical operating characteristics. Simulation scenarios were created based on the misclassification rate and the covariate effect on the outcome. Models that were unadjusted, adjusted for the misclassified covariate, or adjusted for the corrected covariate were compared using logistic regression for a binary outcome and Poisson regression for a count outcome. For the binary outcome using logistic regression, the Type I error can be maintained in the adjusted model, but the test is conservative using the unadjusted model. Power decreased both with increasing covariate effect on the outcome and with increasing misclassification rate. Treatment effect estimates were biased towards the null for both the misclassified and unadjusted models. For the count outcome using a Poisson model, covariate misclassification led to inflated Type I error probabilities and reduced power in both the misclassified and the unadjusted models. The impact of covariate misclassification under covariate-adaptive randomization thus differs depending on the underlying distribution of the outcome.
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis. Written by a highly experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models ...
Ye, Dan; Chen, Mengmeng; Li, Kui
2017-11-01
In this paper, we consider the distributed containment control problem of multi-agent systems with actuator bias faults, based on an observer method. The objective is to drive the followers into the convex hull spanned by the dynamic leaders, whose inputs are unknown but bounded. By constructing an observer to estimate the states and bias faults, an effective distributed adaptive fault-tolerant controller is developed. Different from the traditional method, an auxiliary controller gain is designed to deal with the unknown inputs and bias faults together. Moreover, the coupling gain can be adjusted online through the adaptive mechanism without using global information. Furthermore, the proposed control protocol guarantees that all the signals of the closed-loop systems are bounded and that all the followers converge, with bounded residual errors, to the convex hull formed by the dynamic leaders. Finally, a decoupled linearized longitudinal motion model of the F-18 aircraft is used to demonstrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Multivariate generalized linear mixed models using R
Berridge, Damon Mark
2011-01-01
Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...
Introduction to generalized linear models
Dobson, Annette J
2008-01-01
Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...
Generalized, Linear, and Mixed Models
McCulloch, Charles E; Neuhaus, John M
2011-01-01
An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m ...
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated measures and longitudinal structures, and the third involves a spatiotemporal analysis of rainfall data. The models take non-normality into account in the conventional way by means of a variance function, and the mean structure is modelled by means of a link function and a linear predictor. The models ...
Qureshi, Nauman Khalid; Naseer, Noman; Noori, Farzan Majeed; Nazeer, Hammad; Khan, Rayyan Azam; Saleem, Sajid
2017-01-01
In this paper, a novel methodology for enhanced classification of functional near-infrared spectroscopy (fNIRS) signals utilizable in a two-class [motor imagery (MI) and rest; mental rotation (MR) and rest] brain-computer interface (BCI) is presented. First, fNIRS signals corresponding to MI and MR are acquired from the motor and prefrontal cortex, respectively, and afterward filtered to remove physiological noises. Then, the signals are modeled using the general linear model, the coefficients of which are adaptively estimated using the least squares technique. Subsequently, multiple feature combinations of estimated coefficients were used for classification. The best classification accuracies achieved for five subjects, for MI versus rest, are 79.5, 83.7, 82.6, 81.4, and 84.1%, whereas those for MR versus rest are 85.5, 85.2, 87.8, 83.7, and 84.8%, respectively, using a support vector machine. These results are compared with the best classification accuracies obtained using the conventional hemodynamic response. By means of the proposed methodology, the average classification accuracy obtained was significantly higher (p < 0.05). These results demonstrate the feasibility of developing a high-classification-performance fNIRS-BCI.
Linear Versus Non-linear Supersymmetry, in General
Ferrara, Sergio; Van Proeyen, Antoine; Wrase, Timm
2016-01-01
We study superconformal and supergravity models with constrained superfields. The underlying version of such models, with all superfields unconstrained and supersymmetry linearly realized, is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, and its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LMs: chiral superfields, linear superfields, and general complex superfields, some of which are multiplets with spin.
Linear zonal atmospheric prediction for adaptive optics
McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael
2000-07-01
We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2-meter 16-subaperture AO telescope with 5-millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back-Propagation training of non-linear multilayer neural networks are quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares training of linear predictors is two orders of magnitude faster, and it also converges to the solution with global minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the Recursive Least Squares predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global minimum linear total phase error (approximately 0.18 rad^2), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad^2), and is approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad^2). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
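The Recursive Least Squares training mentioned above can be sketched generically. This is the textbook RLS update applied to synthetic slope data with an invented linear map, not the authors' zonal predictor or their wavefront-sensor data:

```python
import numpy as np

rng = np.random.default_rng(1)

def rls_fit(X, y, lam=0.999, delta=1e3):
    """Recursive least squares: sequentially update weights w so that
    x_k @ w tracks y_k, with forgetting factor lam."""
    n, d = X.shape
    w = np.zeros(d)
    P = delta * np.eye(d)           # inverse correlation matrix estimate
    for x, t in zip(X, y):
        Px = P @ x
        k = Px / (lam + x @ Px)     # gain vector
        w = w + k * (t - x @ w)     # update on the a priori error
        P = (P - np.outer(k, Px)) / lam
    return w

# Toy stand-in for slope data: the next value is a fixed linear map of
# the last 4 slopes, plus a little noise (coefficients are invented).
w_true = np.array([0.5, -0.2, 0.1, 0.05])
s = rng.normal(size=(5000, 4))
y = s @ w_true + 0.01 * rng.normal(size=5000)
w_hat = rls_fit(s, y)
print(np.round(w_hat, 2))
```

Each sample costs O(d^2) work and no matrix inversion, which is why RLS converges in a single pass where gradient-descent rules crawl.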
Linear ubiquitination signals in adaptive immune responses.
Ikeda, Fumiyo
2015-07-01
Ubiquitin can form eight different linkage types of chains using the intrinsic Met 1 residue or one of the seven intrinsic Lys residues. Each linkage type of ubiquitin chain has a distinct three-dimensional topology, functioning as a tag that attracts specific signaling molecules, the so-called ubiquitin readers, and regulates various biological functions. Ubiquitin chains linked via Met 1 in a head-to-tail manner are called linear ubiquitin chains. Linear ubiquitination plays an important role in the regulation of cellular signaling, including the best-characterized tumor necrosis factor (TNF)-induced canonical nuclear factor-κB (NF-κB) pathway. Linear ubiquitin chains are specifically generated by an E3 ligase complex called the linear ubiquitin chain assembly complex (LUBAC) and hydrolyzed by a deubiquitinase (DUB) called ovarian tumor (OTU) DUB with linear linkage specificity (OTULIN). LUBAC linearly ubiquitinates critical molecules in the TNF pathway, such as NEMO and RIPK1. The linear ubiquitin chains are then recognized by the ubiquitin readers, including NEMO, which control the TNF pathway. Accumulating evidence indicates the importance of the LUBAC complex in the regulation of apoptosis, development, and inflammation in mice. In this article, I focus on the role of linear ubiquitin chains in adaptive immune responses, with an emphasis on the TNF-induced signaling pathways. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Actuarial statistics with generalized linear mixed models
Antonio, K.; Beirlant, J.
2007-01-01
Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics
Linear and Generalized Linear Mixed Models and Their Applications
Jiang, Jiming
2007-01-01
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested
Generalized Linear Models in Family Studies
Wu, Zheng
2005-01-01
Generalized linear models (GLMs), as defined by J. A. Nelder and R. W. M. Wedderburn (1972), unify a class of regression models for categorical, discrete, and continuous response variables. As an extension of classical linear models, GLMs provide a common body of theory and methodology for some seemingly unrelated models and procedures, such as…
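A GLM of the kind unified by Nelder and Wedderburn is classically fit by iteratively reweighted least squares. A minimal sketch for a Poisson log-linear model on synthetic data (all data and parameter values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def irls_poisson(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted
    least squares, the standard GLM fitting algorithm."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # inverse link
        z = eta + (y - mu) / mu          # working response
        W = mu                           # weights = variance function
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))
beta_hat = irls_poisson(X, y)
print(beta_hat)
```

Swapping the link, its inverse, and the variance function gives logistic, gamma, and other members of the family, which is exactly the "common body of theory and methodology" the abstract refers to.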
On linear equations with general polynomial solutions
Laradji, A.
2018-04-01
We provide necessary and sufficient conditions for which an nth-order linear differential equation has a general polynomial solution. We also give necessary conditions that can directly be ascertained from the coefficient functions of the equation.
Discrete linear canonical transform computation by adaptive method.
Zhang, Feng; Tao, Ran; Wang, Yue
2013-07-29
The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
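The computation method above builds on the LMS algorithm. The following sketch shows the standard LMS update on a toy system-identification problem; it is not the paper's block- or stream-based discrete-LCT structure, just the adaptive-filter core it relies on:

```python
import numpy as np

rng = np.random.default_rng(3)

def lms_identify(x, d, n_taps=4, mu=0.02):
    """LMS adaptive filter: adapt FIR weights so that the filtered
    input tracks the desired signal d (stochastic gradient descent)."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]   # most recent sample first
        e = d[k] - w @ xk                    # a priori error
        w = w + mu * e * xk                  # gradient step
    return w

h_true = np.array([0.8, -0.4, 0.2, 0.1])     # unknown system to identify
x = rng.normal(size=20000)
d = np.convolve(x, h_true)[:len(x)]
w = lms_identify(x, d)
print(np.round(w, 2))
```

Each tap update is independent of the others, which is the "inherent parallel structure" that makes LMS-style computation attractive for VLSI implementation.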
Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control
Nguyen, Nhan T.; Boskovic, Jovan D.
2008-01-01
This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.
Generalized Cross-Gramian for Linear Systems
DEFF Research Database (Denmark)
Shaker, Hamid Reza
2012-01-01
The cross-gramian is a well-known matrix with embedded controllability and observability information. The cross-gramian is related to the Hankel operator and the Hankel singular values of a linear square system, and it has several interesting properties. These properties make the cross-gramian popular in several applications including model reduction, control configuration selection and sensitivity analysis. The ordinary cross-gramian which has been defined in the literature is the solution of a Sylvester equation. This Sylvester equation is not always solvable, and therefore for some linear square symmetric systems the ordinary cross-gramian does not exist. To cope with this problem, a new generalized cross-gramian is introduced in this paper. In contrast to the ordinary cross-gramian, the generalized cross-gramian can be easily obtained for general linear systems and therefore can be used ...
Generalized linear model for partially ordered data.
Zhang, Qiang; Ip, Edward Haksing
2012-01-13
Within the rich literature on generalized linear models, substantial efforts have been devoted to models for categorical responses that are either completely ordered or completely unordered. Few studies have focused on the analysis of partially ordered outcomes, which arise in practically every area of study, including medicine, the social sciences, and education. To fill this gap, we propose a new class of generalized linear models, the partitioned conditional model, that includes models for both ordinal and unordered categorical data as special cases. We discuss the specification of the partitioned conditional model and its estimation. We use an application of the method to a sample of the National Longitudinal Study of Youth to illustrate how the new method is able to extract from partially ordered data useful information about smoking youths that is not possible using traditional methods. Copyright © 2011 John Wiley & Sons, Ltd.
Estimating classification images with generalized linear and additive models.
Knoblauch, Kenneth; Maloney, Laurence T
2008-12-22
Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
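The Bernoulli-GLM view of classification images can be sketched as logistic regression of simulated observer responses on noise stimuli. The template, trial count, and learning rate below are illustrative assumptions, and plain gradient ascent on the likelihood stands in for a full GLM fitter:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical observer: responds "1" when the noise stimulus correlates
# with an internal template; we recover the template as GLM coefficients.
n_trials, n_pix = 5000, 16
template = np.sin(np.linspace(0, np.pi, n_pix))   # assumed template
stim = rng.normal(size=(n_trials, n_pix))
p = 1 / (1 + np.exp(-(stim @ template)))
resp = (rng.random(n_trials) < p).astype(float)

w = np.zeros(n_pix)
for _ in range(500):                          # gradient ascent on the
    pred = 1 / (1 + np.exp(-(stim @ w)))      # Bernoulli log-likelihood
    w += 0.001 * stim.T @ (resp - pred)

corr = np.corrcoef(w, template)[0, 1]
print(round(corr, 3))
```

The fitted coefficient vector w is the classification image; the GAM extension mentioned in the abstract would additionally smooth w across neighboring pixels.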
General solution of linear vector supersymmetry
International Nuclear Information System (INIS)
Blasi, Alberto; Maggiore, Nicola
2007-01-01
We give the general solution of the Ward identity for the linear vector supersymmetry which characterizes all topological models. Such a solution, whose expression is quite compact and simple, greatly simplifies the study of theories displaying a supersymmetric algebraic structure, reducing to a few lines the proof of their possible finiteness. In particular, the cohomology technology, usually involved for the quantum extension of these theories, is completely bypassed. The case of Chern-Simons theory is taken as an example
Identification of general linear mechanical systems
Sirlin, S. W.; Longman, R. W.; Juang, J. N.
1983-01-01
Previous work in identification theory has been concerned with the general first order time derivative form. Linear mechanical systems, a large and important class, naturally have a second order form. This paper utilizes this additional structural information for the purpose of identification. A realization is obtained from input-output data, and then knowledge of the system input, output, and inertia matrices is used to determine a set of linear equations whereby we identify the remaining unknown system matrices. Necessary and sufficient conditions on the number, type and placement of sensors and actuators are given which guarantee identifiability, and less stringent conditions are given which guarantee generic identifiability. Both a priori identifiability and a posteriori identifiability are considered, i.e., identifiability being insured prior to obtaining data, and identifiability being assured with a given data set.
Gravitational Wave in Linear General Relativity
Cubillos, D. J.
2017-07-01
General relativity is the best theory currently available to describe the gravitational interaction. Within Albert Einstein's field equations this interaction is described by means of the spacetime curvature generated by the matter-energy content of the universe. Weyl worked on the existence of perturbations of the curvature of space-time that propagate at the speed of light, known as gravitational waves, obtained to a first approximation through the linearization of Einstein's field equations. Weyl's solution consists of taking the field equations in a vacuum and perturbing the metric, using the Minkowski metric slightly perturbed by a factor ε greater than zero but much smaller than one. If the feedback effect of the field is neglected, it can be considered a weak-field solution. After introducing the perturbed metric and ignoring terms of order higher than one in ε, one finds the linearized field equations in terms of the perturbation, which can then be expressed as the d'Alembertian operator of the perturbation set equal to zero. This is analogous to the linear wave equation in classical mechanics and can be interpreted as saying that gravitational effects propagate as waves at the speed of light. In addition, by studying the motion of a particle affected by this perturbation through the geodesic equation, one can show the transversal character of the gravitational wave and its two possible polarization states. It can be shown that the energy carried by the wave is of the order of 1/c^5, where c is the speed of light, which explains why its effects on matter are very small and very difficult to detect.
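The linearization described above can be summarized in the standard textbook form: writing the metric as the Minkowski metric plus a small perturbation, the vacuum field equations in the Lorenz gauge reduce to a wave equation for the trace-reversed perturbation:

```latex
g_{\mu\nu} = \eta_{\mu\nu} + \epsilon\, h_{\mu\nu}, \qquad 0 < \epsilon \ll 1,
\qquad
\bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\, h,
\qquad
\partial^{\mu}\bar{h}_{\mu\nu} = 0 \;\Rightarrow\; \Box\, \bar{h}_{\mu\nu} = 0 .
```

The d'Alembertian equation admits plane-wave solutions travelling at c, with the two transverse polarization states the abstract mentions.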
Generalized continuous linear model of international trade
Directory of Open Access Journals (Sweden)
Kostenko Elena
2014-01-01
The probability-based approach to the linear model of international trade, based on the theory of Markov processes with continuous time, is analysed. A generalized continuous model of international trade is built, in which the transition of the system from state to state is described by linear differential equations. The methodology for obtaining the intensity matrices, which are differential in nature, is shown, together with their corresponding transition matrices for the processes of purchasing and selling. In the creation of the continuous model, matrix functions and operations were used in addition to the Laplace transform, which gave the analytical form of the transition matrices and therefore the expressions for the state vectors of the system. The obtained expressions simplify analysis and calculations in comparison to other methods. The values of the continuous transition matrices include the results of the discrete model of international trade at moments in time proportional to the time step. The continuous model improves the quality of planning and the effectiveness of control of international trade agreements.
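The continuous-time transitions described above follow the matrix-exponential solution of the linear differential equations (the same object the Laplace transform yields analytically). A minimal numeric sketch with a hypothetical 3-state intensity matrix, whose values are invented for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical intensity matrix Q (off-diagonal rates >= 0, rows sum to 0),
# standing in for the purchasing/selling intensities of the trade model.
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.1, -0.4,  0.3],
              [ 0.2,  0.2, -0.4]])

def transition_matrix(Q, t):
    """Solve P'(t) = P(t) Q with P(0) = I: the continuous-time
    transition matrix over an interval of length t."""
    return expm(Q * t)

P = transition_matrix(Q, 2.0)
state0 = np.array([1.0, 0.0, 0.0])   # system starts in state 1
print(state0 @ P)                    # state distribution after t = 2
```

Evaluating P at multiples of a time step recovers the discrete-model transition matrices, which is the embedding relationship the abstract points out.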
Aspects of general linear modelling of migration.
Congdon, P
1992-01-01
"This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt
FAVORING PARTIES BY GENERAL LINEAR DIVISOR METHOD
Directory of Open Access Journals (Sweden)
Ion BOLUN
2016-03-01
Aspects of the General Linear Divisor (GLD) method's favoring of parties when distributing seats are investigated. Conditions that predispose the method to favor a particular party are identified, along with the fact that the predisposition to favor smaller parties increases, and that to favor larger parties decreases, as the value of the constant c increases. The condition of Hamilton equilibrium between two parties is defined, and special cases of Hamilton equilibrium and quasi-equilibrium are described. The regions in which the GLD method favors larger or smaller parties are identified, depending on the number of parties and on the values of the constants c and ΔM. On average, the GLD method favors large parties at c < 2 and does not favor any party at c = 2.
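Divisor (highest-averages) apportionment in general can be sketched as follows. Note this uses the classical d'Hondt and Sainte-Laguë divisor sequences as examples of larger- versus smaller-party bias, not the paper's specific GLD parameterization in c; the vote totals are invented:

```python
import heapq

def divisor_apportionment(votes, seats, divisor):
    """Highest-averages allocation: repeatedly give the next seat to the
    party with the largest quotient votes / divisor(seats_already_won)."""
    alloc = [0] * len(votes)
    heap = [(-v / divisor(0), i) for i, v in enumerate(votes)]
    heapq.heapify(heap)                 # max-heap via negated quotients
    for _ in range(seats):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        heapq.heappush(heap, (-votes[i] / divisor(alloc[i]), i))
    return alloc

votes = [340, 280, 160, 60, 15]
# d'Hondt, d(k) = k + 1, tends to favor larger parties;
# Sainte-Laguë, d(k) = 2k + 1, is closer to neutral.
print(divisor_apportionment(votes, 7, lambda k: k + 1))
print(divisor_apportionment(votes, 7, lambda k: 2 * k + 1))
```

Running both on the same votes shows the smallest parties gaining a seat under Sainte-Laguë, the kind of systematic favoring the paper quantifies as a function of its constant c.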
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
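The brute-force alternative to a closed-form approximation of the DP normalizing constant is direct truncated summation over the support. The kernel below follows my recollection of Efron's (1986) double Poisson density and should be treated as an assumption, not as the authors' approximation method:

```python
import math

def dp_unnormalized(y, mu, theta):
    """Kernel of the double Poisson density (as recalled from Efron, 1986),
    without the normalizing constant c(mu, theta); computed in logs to
    avoid overflow for large y."""
    if y == 0:
        log_base = 0.0
    else:
        log_base = (-y + y * math.log(y) - math.lgamma(y + 1)
                    + theta * y * (1 + math.log(mu) - math.log(y)))
    return math.sqrt(theta) * math.exp(-theta * mu + log_base)

def dp_inverse_constant(mu, theta, y_max=400):
    """Approximate 1/c by truncated summation of the kernel."""
    return sum(dp_unnormalized(y, mu, theta) for y in range(y_max + 1))

mu, theta = 5.0, 0.8                # theta < 1 gives over-dispersion
inv_c = dp_inverse_constant(mu, theta)
mean = sum(y * dp_unnormalized(y, mu, theta) for y in range(401)) / inv_c
print(inv_c, mean)                  # 1/c is near 1; the mean is near mu
```

The sum being so close to 1 is what makes the multiplicative-constant correction small but still worth approximating accurately, which is the hurdle the abstract describes.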
Multivariate generalized linear model for genetic pleiotropy.
Schaid, Daniel J; Tong, Xingwei; Batzler, Anthony; Sinnwell, Jason P; Qing, Jiang; Biernacka, Joanna M
2017-12-16
When a single gene influences more than one trait, known as pleiotropy, it is important to detect pleiotropy to improve the biological understanding of a gene. This can lead to improved screening, diagnosis, and treatment of diseases. Yet, most current multivariate methods to evaluate pleiotropy test the null hypothesis that none of the traits are associated with a variant; departures from the null could be driven by just one associated trait. A formal test of pleiotropy should assume a null hypothesis that one or fewer traits are associated with a genetic variant. We recently developed statistical methods to analyze pleiotropy for quantitative traits having a multivariate normal distribution. We now extend this approach to traits that can be modeled by generalized linear models, such as analysis of binary, ordinal, or quantitative traits, or a mixture of these types of traits. Based on methods from estimating equations, we developed a new test for pleiotropy. We then extended the testing framework to a sequential approach to test the null hypothesis that k+1 traits are associated, given that the null of k associated traits was rejected. This provides a testing framework to determine the number of traits associated with a genetic variant, as well as which traits, while accounting for correlations among the traits. By simulations, we illustrate the Type-I error rate and power of our new methods, describe how they are influenced by sample size, the number of traits, and the trait correlations, and apply the new methods to a genome-wide association study of multivariate traits measuring symptoms of major depression. Our new approach provides a quantitative assessment of pleiotropy, enhancing current analytic practice. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A Note on the Identifiability of Generalized Linear Mixed Models
DEFF Research Database (Denmark)
Labouriau, Rodrigo
2014-01-01
I present here a simple proof that, under general regularity conditions, the standard parametrization of generalized linear mixed models is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first- and second-order moments and some general mild regularity conditions, and is therefore extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with a dispersion parameter are identifiable when equipped with the standard parametrization.
Rear-heavy car control by adaptive linear optimal preview
Thommyppillai, M.; Evangelou, S.; Sharp, R. S.
2010-05-01
Adaptive linear optimal preview control theory is applied to a simple but non-linear car model, with parameters chosen to make the rear axle saturate first in any quasi-steady manoeuvre. The tendency of such a car to spin above a critical speed, which is a function of its running state, causes control to be especially difficult when operating near to the limit of the rear-axle force system. As in previous work, trim states and optimal gains are computed off-line for a given speed and a full range of lateral accelerations. Gain-scheduling with interpolation over trims and gain sets is used to keep the control appropriate to the running conditions, as they change. Simulations of manoeuvres are used to test and demonstrate the system capability. It is shown that utilising the rear-axle lateral-slip ratio as the scheduling variable, in the case of this rear-heavy car, gives excellent tracking, even when the tyres are run close to full saturation. It is implied by this and previous work that the general case can be treated effectively by monitoring both front- and rear-axle slips and scheduling on a worst-case basis.
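The scheduling step described above — pre-computed gain sets looked up and interpolated at the current value of the scheduling variable — can be sketched as a linear-interpolation table lookup. The grid and gain values below are invented for illustration; they are not from the car model of the paper.

```python
import bisect

def scheduled_gain(grid, gains, s):
    """Linearly interpolate a table of pre-computed gain vectors
    `gains` (one per point of the sorted `grid`) at scheduling
    variable s.  Values outside the grid are clamped to the ends."""
    if s <= grid[0]:
        return list(gains[0])
    if s >= grid[-1]:
        return list(gains[-1])
    i = bisect.bisect_right(grid, s)          # grid[i-1] < s <= grid[i]
    w = (s - grid[i - 1]) / (grid[i] - grid[i - 1])
    return [(1 - w) * a + w * b for a, b in zip(gains[i - 1], gains[i])]

# hypothetical gains tabulated at slip values 0, 1 and 2
print(scheduled_gain([0.0, 1.0, 2.0],
                     [[1.0, 0.0], [2.0, 0.0], [4.0, 0.0]], 0.5))  # [1.5, 0.0]
```

The same lookup works with any scheduling variable; the paper's point is that choosing the rear-axle lateral-slip ratio keeps the interpolated gains appropriate near saturation.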
Asymptotics for generalized piecewise linear histograms
Czech Academy of Sciences Publication Activity Database
Berlinet, A.; Hobza, Tomáš; Vajda, Igor
2002-01-01
Roč. 34, č. 3 (2002), s. 3-19 ISSN 0041-9184 R&D Projects: GA ČR GA102/99/1137 Institutional research plan: CEZ:AV0Z1075907 Keywords : nonparametric density estimation * histogram * piecewise linear histogram Subject RIV: BB - Applied Statistics, Operational Research
Linear Perturbation Adaptive Control of Hydraulically Driven Manipulators
DEFF Research Database (Denmark)
Andersen, T.O.; Hansen, M.R.; Conrad, Finn
2004-01-01
A method is presented for synthesis of a robust adaptive scheme for a hydraulically driven manipulator that takes full advantage of any known system dynamics to simplify the adaptive control problem for the unknown portion of the dynamics. The control method is based on adaptive perturbation control. Using the Lyapunov approach, under slowly time-varying assumptions, it is shown that the tracking error and the parameter error remain bounded. This bound is a function of the ideal parameters and a bounded disturbance. The control algorithm decouples and linearizes the manipulator so that each…
Adaptation-II of the surrogate methods for linear programming ...
African Journals Online (AJOL)
Adaptation-II of the surrogate methods for linear programming problems. SO Oko. No abstract. Global Journal of Mathematical Sciences Vol. 5(1) 2006: 63-71. http://dx.doi.org/10.4314/gjmas.v5i1.21381.
Adaptive ensemble Kalman filtering of non-linear systems
Directory of Open Access Journals (Sweden)
Tyrus Berry
2013-07-01
Full Text Available A necessary ingredient of an ensemble Kalman filter (EnKF) is covariance inflation, used to control filter divergence and compensate for model error. There is an on-going search for inflation tunings that can be learned adaptively. Early in the development of Kalman filtering, Mehra (1970, 1972) enabled adaptivity in the context of linear dynamics with white noise model errors by showing how to estimate the model error and observation covariances. We propose an adaptive scheme, based on lifting Mehra's idea to the non-linear case, that recovers the model error and observation noise covariances in simple cases, and in more complicated cases, results in a natural additive inflation that improves state estimation. It can be incorporated into non-linear filters such as the extended Kalman filter (EKF), the EnKF and their localised versions. We test the adaptive EnKF on a 40-dimensional Lorenz96 model and show the significant improvements in state estimation that are possible. We also discuss the extent to which such an adaptive filter can compensate for model error, and demonstrate the use of localisation to reduce ensemble sizes for large problems.
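The role of covariance inflation in the analysis step can be seen in a deliberately minimal sketch: a scalar state, a multiplicative inflation of the ensemble spread, and an unperturbed-observation update. This is our simplification for illustration, not the paper's adaptive scheme (which estimates the inflation from innovation statistics rather than fixing it).

```python
def enkf_analysis(ensemble, y, obs_var, inflation=1.0):
    """One EnKF analysis step for a scalar state.  `inflation`
    scales the forecast spread before the update, compensating
    for model error as discussed above."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    # inflate anomalies about the ensemble mean
    ens = [mean + inflation * (x - mean) for x in ensemble]
    p = sum((x - mean) ** 2 for x in ens) / (n - 1)   # sample variance
    k = p / (p + obs_var)                             # Kalman gain
    # unperturbed-observation update (a simplification; a stochastic
    # EnKF would perturb y independently for each member)
    return [x + k * (y - x) for x in ens]

post = enkf_analysis([0.0, 2.0], y=3.0, obs_var=2.0)
```

Raising the inflation increases the forecast variance, hence the gain, so the analysis mean moves further toward the observation — the mechanism by which inflation counters an over-confident, model-error-ridden forecast.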
Generalized Multicarrier CDMA: Unification and Linear Equalization
Directory of Open Access Journals (Sweden)
Wang Zhengdao
2005-01-01
Full Text Available Relying on block-symbol spreading and judicious design of user codes, this paper builds on the generalized multicarrier (GMC) quasisynchronous CDMA system that is capable of multiuser interference (MUI) elimination and intersymbol interference (ISI) suppression with guaranteed symbol recovery, regardless of the wireless frequency-selective channels. GMC-CDMA affords an all-digital unifying framework, which encompasses single-carrier and several multicarrier (MC) CDMA systems. Besides the unifying framework, it is shown that GMC-CDMA offers flexibility both in full load (maximum number of users allowed by the available bandwidth) and in reduced load settings. A novel blind channel estimation algorithm is also derived. Analytical evaluation and simulations illustrate the superior error performance and flexibility of uncoded GMC-CDMA over competing MC-CDMA alternatives, especially in the presence of uplink multipath channels.
Adaptive feedback linearization applied to steering of ships
Directory of Open Access Journals (Sweden)
Thor I. Fossen
1993-10-01
Full Text Available This paper describes the application of feedback linearization to automatic steering of ships. The flexibility of the design procedure allows the autopilot to be optimized for both course-keeping and course-changing manoeuvres. Direct adaptive versions of both the course-keeping and turning controller are derived. The advantages of the adaptive controllers are improved performance and reduced fuel consumption. The application of nonlinear control theory also allows the designer to compensate for nonlinearities in the control design in a systematic manner.
Cheong, Yuk Fai; Kamata, Akihito
2013-01-01
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
An Adaptive Wavelet Method for Semi-Linear First-Order System Least Squares
Chegini, N.; Stevenson, R.
2015-01-01
We design an adaptive wavelet scheme for solving first-order system least-squares formulations of second-order elliptic PDEs that converges with the best possible rate in linear complexity. A wavelet Riesz basis is constructed for the space H⃗_{0,Γ_N}(div; Ω) on general polygons. The theoretical findings…
Generalized Correntropy for Robust Adaptive Filtering
Chen, Badong; Xing, Lei; Zhao, Haiquan; Zheng, Nanning; Principe, Jose C.
2016-07-01
As a robust nonlinear similarity measure in kernel space, correntropy has received increasing attention in domains of machine learning and signal processing. In particular, the maximum correntropy criterion (MCC) has recently been successfully applied in robust regression and filtering. The default kernel function in correntropy is the Gaussian kernel, which is, of course, not always the best choice. In this work, we propose a generalized correntropy that adopts the generalized Gaussian density (GGD) function as the kernel (not necessarily a Mercer kernel), and present some important properties. We further propose the generalized maximum correntropy criterion (GMCC), and apply it to adaptive filtering. An adaptive algorithm, called the GMCC algorithm, is derived, and the mean square convergence performance is studied. We show that the proposed algorithm is very stable and can achieve zero probability of divergence (POD). Simulation results confirm the theoretical expectations and demonstrate the desirable performance of the new algorithm.
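The GMCC weight update for a linear adaptive filter can be sketched as a stochastic-gradient ascent on the generalized correntropy of the error. The constants of the GGD kernel are absorbed into the step size here, and the step size, kernel parameters, and toy system are all our own choices for illustration; with α = 2 this reduces to the Gaussian-kernel MCC update.

```python
import math

def gmcc_step(w, x, d, lr=0.1, alpha=2.0, beta=2.0):
    """One GMCC update of filter weights w on input vector x with
    desired output d.  The exponential factor shrinks the step for
    large errors -- the source of the robustness to outliers."""
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    if e == 0.0:
        return w, 0.0
    g = (math.exp(-(abs(e) / beta) ** alpha)
         * abs(e) ** (alpha - 1) * math.copysign(1.0, e))
    return [wi + lr * g * xi for wi, xi in zip(w, x)], e

# identify w* = [1, -2] from noise-free data (a deterministic toy run)
w_true = [1.0, -2.0]
w = [0.0, 0.0]
inputs = [[1.0, 0.0], [0.0, 1.0]]            # persistently exciting pair
for t in range(400):
    x = inputs[t % 2]
    w, e = gmcc_step(w, x, d=sum(a * b for a, b in zip(w_true, x)))
```

An impulsive outlier in d would produce a large error, which the exp(-(|e|/β)^α) factor maps to a near-zero step, so a single bad sample barely moves the weights.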
Generalized Linear Models with Applications in Engineering and the Sciences
Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J
2012-01-01
Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs)…
Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-01-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…
Minimal solution of general dual fuzzy linear systems
International Nuclear Information System (INIS)
Abbasbandy, S.; Otadi, M.; Mosleh, M.
2008-01-01
Fuzzy linear systems of equations play a major role in several applications in various areas such as engineering, physics, and economics. In this paper, we investigate the existence of a minimal solution of general dual fuzzy linear equation systems. Two necessary and sufficient conditions for the existence of a minimal solution are given. Some examples from engineering and economics are also considered.
Testing Parametric versus Semiparametric Modelling in Generalized Linear Models
Härdle, W.K.; Mammen, E.; Müller, M.D.
1996-01-01
We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.…
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
…, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides estimates similar to those from Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and the modified Laplace approximation. Some advantages…
Thurstonian models for sensory discrimination tests as generalized linear models
DEFF Research Database (Denmark)
Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2010-01-01
…as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-Not A method is shown to appear as a standard linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests…
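For the monadic A-Not A protocol, the probit-link result reduces to the classical z-transform: d' is the probit of the hit rate minus the probit of the false-alarm rate. The stdlib sketch below illustrates this standard identity; it is not the sensR implementation.

```python
from statistics import NormalDist

def dprime_a_not_a(hits, n_signal, false_alarms, n_noise):
    """d' for the A-Not A method: probit (inverse normal CDF) of
    the hit rate minus probit of the false-alarm rate.  This equals
    the treatment contrast in a binomial GLM with a probit link."""
    z = NormalDist().inv_cdf
    return z(hits / n_signal) - z(false_alarms / n_noise)

# equal hit and false-alarm rates -> no sensory difference
print(dprime_a_not_a(50, 100, 50, 100))    # 0.0
```

Framing the same quantity as a GLM contrast, as the paper does, is what makes the standard error and power calculations come out of routine GLM machinery.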
From linear to generalized linear mixed models: A case study in repeated measures
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
Interpreting Hierarchical Linear and Hierarchical Generalized Linear Models with Slopes as Outcomes
Tate, Richard
2004-01-01
Current descriptions of results from hierarchical linear models (HLM) and hierarchical generalized linear models (HGLM), usually based only on interpretations of individual model parameters, are incomplete in the presence of statistically significant and practically important "slopes as outcomes" terms in the models. For complete description of…
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
Downie, John D.; Goodman, Joseph W.
1989-10-01
The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.
Penalized maximum likelihood estimation for generalized linear point processes
DEFF Research Database (Denmark)
Hansen, Niels Richard
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper with extensions to multivariate and additive model specifications. The methods are implemented in the R package ppstat.
DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains…
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.
Adaptive phase measurements in linear optical quantum computation
International Nuclear Information System (INIS)
Ralph, T C; Lund, A P; Wiseman, H M
2005-01-01
Photon counting induces an effective non-linear optical phase shift in certain states derived by linear optics from single photons. Although this non-linearity is non-deterministic, it is sufficient in principle to allow scalable linear optics quantum computation (LOQC). The most obvious way to encode a qubit optically is as a superposition of the vacuum and a single photon in one mode, so-called 'single-rail' logic. Until now this approach was thought to be prohibitively expensive (in resources) compared to 'dual-rail' logic, where a qubit is stored by a photon across two modes. Here we attack this problem with real-time feedback control, which can realize a quantum-limited phase measurement on a single mode, as has been recently demonstrated experimentally. We show that with this added measurement resource, the resource requirements for single-rail LOQC are not substantially different from those of dual-rail LOQC. In particular, with adaptive phase measurements an arbitrary qubit state α|0⟩ + β|1⟩ can be prepared deterministically.
About one non linear generalization of the compression reflection ...
African Journals Online (AJOL)
Both the stage and spiral iteration cases are considered. A geometrical interpretation of the convergence of the generalized iteration method is given. The formula for the non-linear generalized compression reflection operator as a function of one variable is obtained.
Faraway, Julian J
2005-01-01
Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the…
QUEST+: A general multidimensional Bayesian adaptive psychometric method.
Watson, Andrew B
2017-03-01
QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
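The core loop — maintain a posterior over psychometric-function parameters and pick the stimulus whose expected posterior entropy is smallest — can be sketched on a one-parameter (threshold) grid. This toy uses a logistic psychometric function and is our own simplification of such a procedure, not Watson's implementation.

```python
import math

def p_yes(stim, threshold):
    """Logistic psychometric function (a stand-in choice)."""
    return 1.0 / (1.0 + math.exp(-(stim - threshold)))

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def bayes_update(prior, stim, outcome, thresholds):
    """Posterior over threshold hypotheses after one trial."""
    like = [p_yes(stim, t) if outcome else 1 - p_yes(stim, t)
            for t in thresholds]
    post = [pr * li for pr, li in zip(prior, like)]
    z = sum(post)
    return [q / z for q in post]

def next_stimulus(prior, stimuli, thresholds):
    """Choose the stimulus minimizing the expected posterior entropy."""
    best, best_h = None, float("inf")
    for s in stimuli:
        p1 = sum(pr * p_yes(s, t) for pr, t in zip(prior, thresholds))
        h = (p1 * entropy(bayes_update(prior, s, True, thresholds))
             + (1 - p1) * entropy(bayes_update(prior, s, False, thresholds)))
        if h < best_h:
            best, best_h = s, h
    return best
```

With a uniform prior on thresholds {-1, 0, 1}, the most informative stimulus is the middle one, and a "yes" at stimulus 0 shifts the posterior mode to the lowest threshold; QUEST+ runs the same loop over multidimensional parameter and stimulus grids.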
Testing for one Generalized Linear Single Order Parameter
DEFF Research Database (Denmark)
Ellegaard, Niels Langager; Christensen, Tage Emil; Dyre, Jeppe
We examine a linear single order parameter model for thermoviscoelastic relaxation in viscous liquids, allowing for a distribution of relaxation times. In this model the relaxation of volume and enthalpy is completely described by the relaxation of one internal order parameter. In contrast to prior work, the order parameter may be chosen to have a non-exponential relaxation. The model predictions contradict the general consensus on the properties of viscous liquids in two ways: (i) the model predicts that, following a linear isobaric temperature step, the normalized volume and enthalpy relaxation functions are identical. This prediction conflicts with some (but not all) reports utilizing the Tool-Narayanaswamy formalism to extrapolate from non-linear measurements to the linear regime. (ii) The model predicts that the theoretical "linear Prigogine-Defay" ratio is one. This ratio has never been…
A Matrix Approach for General Higher Order Linear Recurrences
2011-01-01
…properties of linear recurrences (such as the well-known Fibonacci and Pell sequences). In [2], Er defined k linear recurring sequences of order at… the nth term of the ith generalized order-k Fibonacci sequence. Communicated by Lee See Keong. Received: March 26, 2009; Revised: August 28, 2009.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
Real-time Adaptive Control Using Neural Generalized Predictive Control
Haley, Pam; Soloway, Don; Gold, Brian
1999-01-01
The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. With a neural network, the control laws are nonlinear, and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration count makes this a viable algorithm for real-time control.
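The Newton-Raphson minimization used there iterates u ← u - J''(u)⁻¹ J'(u) on the predictive cost. A scalar sketch with an invented stand-in cost (not the levitation plant's) shows why few iterations suffice per control sample:

```python
import math

def newton_minimize(grad, hess, u0, iters=10):
    """Newton-Raphson: repeatedly solve the local quadratic model.
    Converges quadratically near a minimum with positive curvature,
    which is why a handful of iterations suffice in real time."""
    u = u0
    for _ in range(iters):
        u -= grad(u) / hess(u)
    return u

# minimize J(u) = (u - 2)**2 + exp(u); gradient 2(u-2) + e^u,
# Hessian 2 + e^u > 0, so the problem is strictly convex
u_star = newton_minimize(lambda u: 2 * (u - 2) + math.exp(u),
                         lambda u: 2 + math.exp(u), u0=0.0)
```

In the multivariable GPC setting the scalar division becomes a linear solve against the Hessian matrix, which is the computational expense the abstract refers to.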
Double generalized linear compound poisson models to insurance claims data
DEFF Research Database (Denmark)
Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo
2017-01-01
This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed of a degenerate distribution… in a finite sample framework. The simulation studies are also used to validate the fitting algorithms and check the computational implementation. Furthermore, we investigate the impact of an unsuitable choice of the response variable distribution on both mean and dispersion parameter estimates. We provide an R implementation and illustrate the application of double generalized linear compound Poisson models using a data set on car insurance.
Dynamic generalized linear models for monitoring endemic diseases
DEFF Research Database (Denmark)
Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq
2016-01-01
The objective was to use a Dynamic Generalized Linear Model (DGLM) based on a binomial distribution with a linear trend for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control… in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance…
Adaptive discontinuous Galerkin methods for non-linear reactive flows
Uzunca, Murat
2016-01-01
The focus of this monograph is the development of space-time adaptive methods to solve the convection/reaction dominated non-stationary semi-linear advection diffusion reaction (ADR) equations with internal/boundary layers in an accurate and efficient way. After introducing the ADR equations and discontinuous Galerkin discretization, robust residual-based a posteriori error estimators in space and time are derived. The elliptic reconstruction technique is then utilized to derive the a posteriori error bounds for the fully discrete system and to obtain optimal orders of convergence. As coupled surface and subsurface flow over large space and time scales is described by ADR equations, the methods described in this book are of high importance in many areas of geosciences, including oil and gas recovery, groundwater contamination and sustainable use of groundwater resources, and storing greenhouse gases or radioactive waste in the subsurface.
The linear model and hypothesis a general unifying theory
Seber, George
2015-01-01
This book provides a concise and integrated overview of hypothesis testing in four important subject areas, namely linear and nonlinear models, multivariate analysis, and large sample theory. The approach used is a geometrical one based on the concept of projections and their associated idempotent matrices, thus largely avoiding the need to involve matrix ranks. It is shown that all the hypotheses encountered are either linear or asymptotically linear, and that all the underlying models used are either exactly or asymptotically linear normal models. This equivalence can be used, for example, to extend the concept of orthogonality in the analysis of variance to other models, and to show that the asymptotic equivalence of the likelihood ratio, Wald, and Score (Lagrange Multiplier) hypothesis tests generally applies.
An implicit spectral formula for generalized linear Schroedinger equations
International Nuclear Information System (INIS)
Schulze-Halberg, A.; Garcia-Ravelo, J.; Pena Gil, Jose Juan
2009-01-01
We generalize the semiclassical Bohr–Sommerfeld quantization rule to an exact, implicit spectral formula for linear, generalized Schroedinger equations admitting a discrete spectrum. Special cases include the position-dependent mass Schroedinger equation or the Schroedinger equation for weighted energy. Requiring knowledge of the potential and the solution associated with the lowest spectral value, our formula predicts the complete spectrum in its exact form. (author)
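For orientation, the semiclassical rule being generalized is the standard Bohr-Sommerfeld condition, which for a particle of mass m in a potential V with classical turning points x_1, x_2 reads:

```latex
\int_{x_1}^{x_2} \sqrt{2m\,\bigl(E_n - V(x)\bigr)}\,\mathrm{d}x
  = \left(n + \tfrac{1}{2}\right)\pi\hbar,
  \qquad n = 0, 1, 2, \dots
```

The paper's implicit formula replaces this approximate condition with an exact one, at the price of requiring the solution associated with the lowest spectral value.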
Algorithms for Generalized Cluster-wise Linear Regression
Park, Young Woong; Jiang, Yan; Klabjan, Diego; Williams, Loren
2016-01-01
Cluster-wise linear regression (CLR), a clustering problem intertwined with regression, is to find clusters of entities such that the overall sum of squared errors from regressions performed over these clusters is minimized, where each cluster may have different variances. We generalize the CLR problem by allowing each entity to have more than one observation, and refer to it as generalized CLR. We propose an exact mathematical programming based approach relying on column generation…
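A common baseline for generalized CLR is a Lloyd-type alternation: fit one regression per cluster, then reassign each entity — by the total squared error over all of its observations — to its best cluster. The sketch below implements that heuristic with invented data; unlike the exact column-generation approach of the paper, it finds local optima only.

```python
def fit_line(points):
    """Closed-form simple linear regression through `points`."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    slope = sxy / sxx
    return slope, my - slope * mx

def sse(line, points):
    a, b = line
    return sum((y - (a * x + b)) ** 2 for x, y in points)

def clr(entities, assign, k, iters=20):
    """Alternating minimization for generalized CLR: each entity
    (a list of observations) is assigned as a whole, never split
    across clusters.  Empty clusters are not handled in this sketch."""
    for _ in range(iters):
        lines = [fit_line([p for e, a in zip(entities, assign) if a == c
                           for p in e])
                 for c in range(k)]
        new = [min(range(k), key=lambda c: sse(lines[c], e))
               for e in entities]
        if new == assign:
            break
        assign = new
    return assign, lines

# two entities on y = 2x, two on y = 5 - x; a deliberately bad start
entities = [[(0, 0), (1, 2)], [(2, 4), (3, 6)],
            [(0, 5), (1, 4)], [(2, 3), (3, 2)]]
assign, lines = clr(entities, [0, 0, 0, 1], k=2)
```

Here the heuristic recovers the two generating lines exactly; on harder instances it can stall at a poor local optimum, which is the gap the exact approach closes.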
Neural Generalized Predictive Control of a non-linear Process
DEFF Research Database (Denmark)
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1998-01-01
The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller based on a neural network model which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.
New Implicit General Linear Method
Ibrahim
African Journals Online (AJOL)
A new implicit general linear method is designed for the numerical solution of stiff differential equations. The coefficient matrix is derived from the stability function. The method combines single-implicitness or diagonal implicitness with the property that the first two rows are implicit and the third and fourth rows are explicit.
Hierarchical Generalized Linear Models for the Analysis of Judge Ratings
Muckle, Timothy J.; Karabatsos, George
2009-01-01
It is known that the Rasch model is a special two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…
A MIXTURE LIKELIHOOD APPROACH FOR GENERALIZED LINEAR-MODELS
WEDEL, M; DESARBO, WS
1995-01-01
A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Torus quotients of homogeneous spaces of the general linear group ...
Indian Academy of Sciences (India)
Torus Quotients of Homogeneous Spaces of the General Linear Group and the Standard Representation of Certain Symmetric Groups. S S Kannan, Pranab Sardar. Proceedings – Mathematical Sciences, Volume 119, Issue 1, February ...
Generalizing a categorization of students’ interpretations of linear kinematics graphs
Directory of Open Access Journals (Sweden)
Laurens Bollen
2016-02-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.
Regularization Paths for Generalized Linear Models via Coordinate Descent
Directory of Open Access Journals (Sweden)
Jerome Friedman
2010-02-01
We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems, while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.
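The core of the cyclical coordinate descent described above is a one-dimensional soft-thresholding update applied to each coefficient in turn. The following is a minimal sketch of that idea for the plain lasso (linear regression, ℓ1 penalty only), not the authors' full glmnet implementation; the function names and the test problem are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator: the closed-form solution of the
    one-dimensional lasso problem."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclical coordinate descent for the lasso:
    minimize (1/(2n)) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b  # current residual
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j's contribution restored
            r = r + X[:, j] * b[j]
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
            r = r - X[:, j] * b[j]
    return b

# Illustrative sparse-recovery problem: two active coefficients out of ten.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0] + [0.0] * (p - 2))
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = lasso_cd(X, y, lam=0.1)
```

As expected for the lasso, the active coefficients are recovered with a small shrinkage bias (roughly the size of `lam`) and the inactive ones are driven to zero.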
General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles.
Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J
2017-09-29
The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
Computation of Optimal Monotonicity Preserving General Linear Methods
Ketcheson, David I.
2009-07-01
Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
Lossless Compression of Telemetry Information using Adaptive Linear Prediction
Directory of Open Access Journals (Sweden)
M. A. Elshafey
2014-01-01
A normal requirement for telemetry data compression algorithms is the ability to recover the initial data "as is", without loss of information. This feature is very important in various telemetry processing applications. Precise recovery of the telemetry data as it is acquired from the original source of information is necessary for the analysis of any kind of abnormal events, recovery of bad sites within the telemetry data stream, and for other types of post- or real-time data processing [1,2]. The effectiveness of methods of lossless compression is largely determined by the properties of the data under compression [3]. Compression algorithms show better compression ratios if they can adapt to the characteristics of the input data, which in most cases change rapidly. In this paper we present the results of studies conducted to develop an efficient method of reversible telemetry data compression based on adaptive linear prediction of telemetry data packed according to the IRIG-106 format. IRIG-106 is an open standard, developed specifically for the aerospace industry, but now used in a wide range of telemetry registration applications [4]. Data is packed into frames of fixed length and predefined internal structure. A frame can carry different sources of information: digitized samples of analog signals, as well as pure digital data. For each source a channel of the recording system is provided. The source sample in each channel is represented by a telemetry word. All words in the frame have the same bit width. The telemetry frame contains additional service information for the purpose of detecting bit errors, frame synchronization, etc. The lossless data compression algorithm can be divided into two stages: the first, a decorrelation stage, exploits the redundancy between neighboring samples in the data sequence; the second, entropy coding, takes advantage of the decreased variance and lowered entropy of the data produced by the first stage [5,6,7].
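The decorrelation stage described above can be sketched with a simple sample-by-sample adaptive (LMS) linear predictor whose residuals are what the entropy coder would then compress. This is only an illustration of the two-stage principle on a synthetic, telemetry-like signal; the IRIG-106 frame parsing and the actual entropy coder are omitted, and the predictor order and step size are assumed values.

```python
import numpy as np

def lms_predict(x, order=4, mu=0.01):
    """Adaptive linear prediction (LMS): predict x[n] from the previous
    `order` samples, updating the coefficients after every sample.
    Returns the prediction residuals (the decorrelation-stage output)."""
    w = np.zeros(order)
    res = np.empty(len(x))
    res[:order] = x[:order]  # transmit warm-up samples verbatim
    for n in range(order, len(x)):
        ctx = x[n - order:n][::-1]  # most recent sample first
        pred = w @ ctx
        e = x[n] - pred
        res[n] = e
        w += mu * e * ctx  # LMS coefficient update
    return res

# Smooth, telemetry-like test signal: slow sine plus small sensor noise.
rng = np.random.default_rng(1)
t = np.arange(5000)
signal = np.sin(2 * np.pi * t / 200) + 0.01 * rng.standard_normal(len(t))
residual = lms_predict(signal)

# Decorrelation shrinks the variance, which is what helps the entropy coder.
var_ratio = residual[10:].var() / signal.var()
```

Since the residuals are computed by exact subtraction, a decoder running the same predictor reproduces the original samples bit-for-bit, which is the lossless-recovery property the paper requires.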
Enhanced group analysis of a semilinear generalization of a general bond-pricing equation
Bozhkov, Y.; Dimas, S.
2018-01-01
The enhanced group classification of a semilinear generalization of a general bond-pricing equation is carried out by harnessing the underlying equivalence and additional equivalence transformations. We employ that classification to unearth the particular cases with a larger Lie algebra than the general case and use them to find nontrivial invariant solutions under the terminal and the barrier option conditions.
Penalized Estimation in Large-Scale Generalized Linear Array Models
DEFF Research Database (Denmark)
Lund, Adam; Vincent, Martin; Hansen, Niels Richard
2017-01-01
Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of the tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension of the parameter vector. A new design matrix free algorithm is proposed for computing the penalized maximum likelihood estimate for GLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implemented and available via the R package glamlasso. It combines several ideas...
A Unified Bayesian Inference Framework for Generalized Linear Models
Meng, Xiangming; Wu, Sheng; Zhu, Jiang
2018-03-01
In this letter, we present a unified Bayesian inference framework for generalized linear models (GLM) which iteratively reduces the GLM problem to a sequence of standard linear model (SLM) problems. This framework provides new perspectives on some established GLM algorithms derived from SLM ones and also suggests novel extensions for some other SLM algorithms. Specific instances elucidated under such framework are the GLM versions of approximate message passing (AMP), vector AMP (VAMP), and sparse Bayesian learning (SBL). It is proved that the resultant GLM version of AMP is equivalent to the well-known generalized approximate message passing (GAMP). Numerical results for 1-bit quantized compressed sensing (CS) demonstrate the effectiveness of this unified framework.
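The idea of iteratively reducing a GLM problem to a sequence of standard linear-model problems has a classical counterpart in iteratively reweighted least squares (IRLS), where each iteration solves a weighted linear regression on a working response. The sketch below shows IRLS for a logistic GLM; it is an illustration of the GLM-to-SLM reduction pattern, not the AMP/VAMP/SBL framework of the letter, and the test problem is an assumed one.

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Fit a logistic GLM by iteratively reweighted least squares:
    each iteration solves a standard weighted linear-model problem
    on the working response z."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # inverse link (logistic)
        w = np.clip(mu * (1.0 - mu), 1e-6, None)   # GLM iterative weights
        z = eta + (y - mu) / w                     # working response
        # weighted least squares step: the SLM subproblem
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

# Illustrative data: intercept plus one Gaussian covariate.
rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([-0.5, 1.5])
p_true = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = (rng.random(n) < p_true).astype(float)
beta_hat = irls_logistic(X, y)
```

Each pass through the loop is a standard linear-model solve, mirroring on a small scale the letter's strategy of attacking a GLM through a sequence of SLM problems.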
A Non-Gaussian Spatial Generalized Linear Latent Variable Model
Irincheeva, Irina
2012-08-03
We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.
A general method for enclosing solutions of interval linear equations
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří
2012-01-01
Vol. 6, No. 4 (2012), pp. 709-717. ISSN 1862-4472. R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020. Institutional research plan: CEZ:AV0Z10300504. Keywords: interval linear equations; solution set; enclosure; absolute value inequality. Subject RIV: BA - General Mathematics. Impact factor: 1.654, year: 2012
Linear relativistic gyrokinetic equation in general magnetically confined plasmas
International Nuclear Information System (INIS)
Tsai, S.T.; Van Dam, J.W.; Chen, L.
1983-08-01
The gyrokinetic formalism for linear electromagnetic waves of arbitrary frequency in general magnetic-field configurations is extended to include full relativistic effects. The derivation employs the small adiabaticity parameter ρ/L_0, where ρ is the Larmor radius and L_0 is the equilibrium scale length. The effects of plasma and magnetic-field inhomogeneities and of finite Larmor radius are also included.
Canonical perturbation theory in linearized general relativity theory
International Nuclear Information System (INIS)
Gonzales, R.; Pavlenko, Yu.G.
1986-01-01
Canonical perturbation theory in linearized general relativity theory is developed. It is shown that the evolution of an arbitrary dynamical quantity, governed by the interaction of particles with the gravitational and electromagnetic fields, can be represented as a series, each term of which corresponds to the contribution of a certain spontaneous or induced process. The main concepts of the approach are presented in the weak gravitational field approximation.
General treatment of a non-linear gauge condition
International Nuclear Information System (INIS)
Malleville, C.
1982-06-01
A non-linear gauge condition is presented in the framework of a non-abelian gauge theory broken by the Higgs mechanism. It is shown that this condition, already introduced for the standard SU(2) x U(1) model, can be generalized to any gauge model with the same type of simplification, namely the suppression of any coupling of the form: massless gauge boson, massive gauge boson, unphysical Higgs.
Non-linear, adaptive array processing for acoustic interference suppression.
Hoppe, Elizabeth; Roan, Michael
2009-06-01
A method is introduced where blind source separation of acoustical sources is combined with spatial processing to remove non-Gaussian, broadband interferers from space-time displays such as bearing track recorder displays. This differs from most standard techniques such as generalized sidelobe cancellers in that the separation of signals is not done spatially. The algorithm performance is compared to adaptive beamforming techniques such as minimum variance distortionless response beamforming. Simulations and experiments using two acoustic sources were used to verify the performance of the algorithm. Simulations were also used to determine the effectiveness of the algorithm under various signal to interference, signal to noise, and array geometry conditions. A voice activity detection algorithm was used to benchmark the performance of the source isolation.
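The minimum variance distortionless response (MVDR) beamformer that the paper compares against has a simple closed form: w = R⁻¹d / (dᴴR⁻¹d), which passes the look direction with unit gain while minimizing total output power. The sketch below computes MVDR weights for an assumed 8-element half-wavelength uniform linear array with one strong interferer; the array geometry, angles, and interference-to-noise ratio are all illustrative choices, not values from the paper.

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR weights: w = R^{-1} d / (d^H R^{-1} d)."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

def steering(n_sensors, spacing, angle_rad):
    """Narrowband steering vector for a uniform linear array
    (sensor spacing given in wavelengths)."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * k * np.sin(angle_rad))

# 8-element half-wavelength array; look direction at broadside,
# strong interferer at 40 degrees, unit-power white sensor noise.
m = 8
d_look = steering(m, 0.5, np.deg2rad(0.0))
d_intf = steering(m, 0.5, np.deg2rad(40.0))
R = 10.0 * np.outer(d_intf, d_intf.conj()) + np.eye(m)  # covariance model

w = mvdr_weights(R, d_look)
gain_look = abs(w.conj() @ d_look)  # distortionless constraint: exactly 1
gain_intf = abs(w.conj() @ d_intf)  # interferer is strongly attenuated
```

The distortionless constraint holds by construction, while the array gain toward the interferer is driven close to zero, which is the spatial-nulling behavior the blind-source-separation approach in the paper is benchmarked against.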
Adaptive fuzzy bilinear observer based synchronization design for generalized Lorenz system
International Nuclear Information System (INIS)
Baek, Jaeho; Lee, Heejin; Kim, Seungwoo; Park, Mignon
2009-01-01
This Letter proposes an adaptive fuzzy bilinear observer (FBO) based synchronization design for the generalized Lorenz system (GLS). The GLS can be described by a Takagi-Sugeno (TS) fuzzy bilinear generalized Lorenz model (FBGLM) with unmeasurable states and unknown parameters. We design an adaptive FBO based on the TS FBGLM for synchronization. Lyapunov theory is employed to guarantee the stability of the error dynamic system via linear matrix inequalities (LMIs) and to derive the adaptive laws to estimate unknown parameters. A numerical example is given to demonstrate the validity of our proposed adaptive FBO approach for synchronization.
DEFF Research Database (Denmark)
Porto da Silva, Edson; Zibar, Darko
2016-01-01
Simple analytical widely linear complex-valued models for IQ-imbalance and IQ-skew effects in multicarrier transmitters are presented. To compensate for such effects, a 4×4 MIMO widely linear adaptive equalizer is proposed and experimentally validated.
Weight Smoothing for Generalized Linear Models Using a Laplace Prior
Xia, Xi; Elliott, Michael R.
2017-01-01
When analyzing data sampled with unequal inclusion probabilities, correlations between the probability of selection and the sampled data can induce bias if the inclusion probabilities are ignored in the analysis. Weights equal to the inverse of the probability of inclusion are commonly used to correct possible bias. When weights are uncorrelated with the descriptive or model estimators of interest, highly disproportional sample designs resulting in large weights can introduce unnecessary variability, leading to an overall larger mean square error compared to unweighted methods. We describe an approach we term ‘weight smoothing’ that models the interactions between the weights and the estimators as random effects, reducing the root mean square error (RMSE) by shrinking interactions toward zero when such shrinkage is allowed by the data. This article adapts a flexible Laplace prior distribution for the hierarchical Bayesian model to gain a more robust bias-variance tradeoff than previous approaches using normal priors. Simulation and application suggest that under a linear model setting, weight-smoothing models with Laplace priors yield robust results when weighting is necessary, and provide considerable reduction in RMSE otherwise. In logistic regression models, estimates using weight-smoothing models with Laplace priors are robust, but with less gain in efficiency than in linear regression settings. PMID:29225401
Classification images and bubbles images in the generalized linear model.
Murray, Richard F
2012-07-09
Classification images and bubbles images are psychophysical tools that use stimulus noise to investigate what features people use to make perceptual decisions. Previous work has shown that classification images can be estimated using the generalized linear model (GLM), and here I show that this is true for bubbles images as well. Expressing the two approaches in terms of a single statistical model clarifies their relationship to one another, makes it possible to measure classification images and bubbles images simultaneously, and allows improvements developed for one method to be used with the other.
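The classical classification image is computed as the mean noise field on "yes" trials minus the mean noise field on "no" trials; under a linear-observer assumption this is proportional to the GLM solution the paper discusses. The sketch below simulates such an observer and recovers its template; the template shape, trial count, and internal-noise level are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_pix = 20000, 16

# Assumed internal template of the simulated observer: pixels 5-8.
template = np.zeros(n_pix)
template[5:9] = 1.0

noise = rng.standard_normal((n_trials, n_pix))
# Simulated yes/no observer: respond "yes" when the noise field
# correlates with the template, plus some internal noise.
resp = (noise @ template + 0.5 * rng.standard_normal(n_trials)) > 0

# Classical classification image: difference of mean noise fields.
cimg = noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)
```

The recovered image peaks exactly on the template pixels and is near zero elsewhere; fitting the same trials with a logistic GLM would recover the template up to a scale factor, which is the unification the paper formalizes.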
Generalized constraint neural network regression model subject to linear priors.
Qu, Ya-Jun; Hu, Bao-Gang
2011-12-01
This paper reports an extension of our previous investigations on adding transparency to neural networks. We focus on a class of linear priors (LPs), such as symmetry, ranking list, boundary, monotonicity, etc., which represent either linear-equality or linear-inequality priors. A generalized constraint neural network-LPs (GCNN-LPs) model is studied. Compared with other existing modeling approaches, the GCNN-LP model offers several advantages. First, any LP is embedded by an explicitly structural mode, which may add a higher degree of transparency than using a pure algorithm mode. Second, a direct elimination and least squares approach is adopted to study the model, which produces better performance in both accuracy and computational cost than the Lagrange multiplier techniques in experiments. Specific attention is paid to both "hard (strictly satisfied)" and "soft (weakly satisfied)" constraints for regression problems. Numerical investigations are made on synthetic examples as well as on real-world datasets. Simulation results demonstrate the effectiveness of the proposed modeling approach in comparison with other existing approaches.
Generalized space and linear momentum operators in quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
Costa, Bruno G. da, E-mail: bruno.costa@ifsertao-pe.edu.br [Instituto Federal de Educação, Ciência e Tecnologia do Sertão Pernambucano, Campus Petrolina, BR 407, km 08, 56314-520 Petrolina, Pernambuco (Brazil); Instituto de Física, Universidade Federal da Bahia, R. Barão de Jeremoabo s/n, 40170-115 Salvador, Bahia (Brazil); Borges, Ernesto P., E-mail: ernesto@ufba.br [Instituto de Física, Universidade Federal da Bahia, R. Barão de Jeremoabo s/n, 40170-115 Salvador, Bahia (Brazil)
2014-06-15
We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which leads to the definition of a Hermitian deformed linear momentum operator p̂_q and its canonically conjugate deformed position operator x̂_q. A canonical transformation maps the Hamiltonian of a position-dependent mass particle onto the Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is shown as an instance. Uncertainty and correspondence principles are analyzed.
Polymorphic Uncertain Linear Programming for Generalized Production Planning Problems
Directory of Open Access Journals (Sweden)
Xinbo Zhang
2014-01-01
A polymorphic uncertain linear programming (PULP) model is constructed to formulate a class of generalized production planning problems. In accordance with the practical environment, factors such as the consumption of raw material, the limitation of resources, and the demand for products are incorporated into the model as parameters of intervals and fuzzy subsets, respectively. Based on the theory of fuzzy interval programming and the modified possibility degree for the order of interval numbers, a deterministic equivalent formulation for this model is derived such that a robust solution for the uncertain optimization problem is obtained. A case study indicates that the constructed model and the proposed solution are useful for finding an optimal production plan for polymorphic uncertain generalized production planning problems.
Generalized linear mixed model for segregation distortion analysis.
Zhan, Haimao; Xu, Shizhong
2011-11-11
Segregation distortion is a phenomenon in which the observed genotypic frequencies of a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle a number of loci several times larger than the sample size. We used a dataset from an F(2) mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Not only can the method be used to detect segregation distortion loci, but it can also be used to map quantitative trait loci of disease traits using case-only data in humans and selected populations in plants and animals.
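Before joint mapping with a GLMM as above, segregation distortion at a single locus is commonly screened with a Pearson chi-square test of the observed genotype counts against the expected Mendelian ratio (1:2:1 in an F2 cross). The sketch below is that elementary single-locus screen, not the paper's GLMM; the counts are made-up illustrations.

```python
import numpy as np

def mendel_chi2(counts, expected_ratio=(1, 2, 1)):
    """Pearson chi-square statistic for testing observed F2 genotype
    counts (AA, Aa, aa) against the expected Mendelian 1:2:1 ratio.
    A large statistic flags possible segregation distortion."""
    counts = np.asarray(counts, dtype=float)
    ratio = np.asarray(expected_ratio, dtype=float)
    expected = counts.sum() * ratio / ratio.sum()
    return ((counts - expected) ** 2 / expected).sum()

# Undistorted locus: counts close to 1:2:1 out of 100 offspring.
chi2_ok = mendel_chi2([24, 52, 24])
# Distorted locus: strong deficit of one homozygote class.
chi2_bad = mendel_chi2([40, 50, 10])
```

Against the chi-square critical value with 2 degrees of freedom (5.99 at the 5% level), the first locus is unremarkable while the second is clearly flagged; the paper's contribution is to move beyond such locus-by-locus tests to joint genome-wide mapping.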
Adaptive Linear Parameter Varying Control for Aeroservoelastic Suppression, Phase II
National Aeronautics and Space Administration — Adaptive control offers an opportunity to fulfill aircraft safety objectives though automated vehicle recovery while maintaining performance and stability...
Bayesian Subset Modeling for High-Dimensional Generalized Linear Models
Liang, Faming
2013-06-01
This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
Model-free adaptive sliding mode controller design for generalized ...
Indian Academy of Sciences (India)
L M WANG
2017-08-16
A novel model-free adaptive sliding mode strategy is proposed for a generalized projective synchronization (GPS) ... the neural network theory, a model-free adaptive sliding mode controller is designed to guarantee asymptotic stability of the generalized ...
dglars: An R Package to Estimate Sparse Generalized Linear Models
Directory of Open Access Journals (Sweden)
Luigi Augugliaro
2014-09-01
dglars is a publicly available R package that implements the method proposed in Augugliaro, Mineo, and Wit (2013), developed to study the sparse structure of a generalized linear model. This method, called dgLARS, is based on a differential geometrical extension of the least angle regression method proposed in Efron, Hastie, Johnstone, and Tibshirani (2004). The core of the dglars package consists of two algorithms implemented in Fortran 90 to efficiently compute the solution curve: a predictor-corrector algorithm, proposed in Augugliaro et al. (2013), and a cyclic coordinate descent algorithm, proposed in Augugliaro, Mineo, and Wit (2012). The latter algorithm, as shown here, is significantly faster than the predictor-corrector algorithm. For comparison purposes, we have implemented both algorithms.
Analysis of Robust Quasi-deviances for Generalized Linear Models
Directory of Open Access Journals (Sweden)
Eva Cantoni
2004-04-01
Generalized linear models (McCullagh and Nelder 1989) are a popular technique for modeling a large variety of continuous and discrete data. They assume that the response variables Y_i, for i = 1, ..., n, come from a distribution belonging to the exponential family, such that E[Y_i] = μ_i and V[Y_i] = V(μ_i), and that η_i = g(μ_i) = x_i^T β, where β ∈ R^p is the vector of parameters, x_i ∈ R^p, and g(·) is the link function. The non-robustness of the maximum likelihood and the maximum quasi-likelihood estimators has been studied extensively in the literature. For model selection, the classical analysis-of-deviance approach shares the same poor robustness properties. To cope with this, Cantoni and Ronchetti (2001) propose a robust approach based on robust quasi-deviance functions for estimation and variable selection. We refer to that paper for a deeper discussion and the review of the literature.
Mixed Task and Data Parallel Executions in General Linear Methods
Directory of Open Access Journals (Sweden)
Thomas Rauber
2007-01-01
On many parallel target platforms it can be advantageous to implement parallel applications as a collection of multiprocessor tasks that are concurrently executed and are internally implemented with fine-grain SPMD parallelism. A class of applications which can benefit from this programming style are methods for solving systems of ordinary differential equations. Many recent solvers have been designed with an additional potential of method parallelism, but the actual effectiveness of mixed task and data parallelism depends on the specific communication and computation requirements imposed by the equation to be solved. In this paper we study mixed task and data parallel implementations for general linear methods realized using a library for multiprocessor task programming. Experiments on a number of different platforms show good efficiency results.
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
Adaptive Non-linear Control of Hydraulic Actuator Systems
DEFF Research Database (Denmark)
Hansen, Poul Erik; Conrad, Finn
1998-01-01
Presentation of two newly developed adaptive non-linear controllers for hydraulic actuator systems to give stable operation and improved performance. Results from the IMCIA project supported by the Danish Technical Research Council (STVF).
Adaptive Kronrod-Patterson integration of non-linear finite-element matrices
DEFF Research Database (Denmark)
Janssen, Hans
2010-01-01
inappropriate discretization. In response, this article develops adaptive integration, based on nested Kronrod-Patterson-Gauss integration schemes: basically, the integration order is adapted to the locally observed degree of non-linearity. Adaptive integration is developed based on a standard infiltration...
DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime requires the effective use of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution, built on two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world face, video, and object datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multivariate statistical modelling based on generalized linear models
Fahrmeir, Ludwig
1994-01-01
This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...
Generalized Functional Linear Models With Semiparametric Single-Index Interactions
Li, Yehua
2010-06-01
We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.
Generalized linear isotherm regularity equation of state applied to metals
Directory of Open Access Journals (Sweden)
H. Sun
2012-03-01
A three-parameter equation of state (EOS) without physically incorrect oscillations is proposed based on the generalized Lennard-Jones (GLJ) potential and the approach used in developing the linear isotherm regularity (LIR) EOS of Parsafar and Mason [J. Phys. Chem., 1994, 49, 3049]. The proposed generalized LIR (GLIR) EOS includes the LIR EOS as a special case. The three-parameter GLIR, Parsafar and Mason (PM) [Phys. Rev. B, 1994, 49, 3049], Shanker, Singh and Kushwah (SSK) [Physica B, 1997, 229, 419], Parsafar, Spohr and Patey (PSP) [J. Phys. Chem. B, 2009, 113, 11980], and reformulated PM (PMR) and SSK (SSKR) EOSs are applied to 30 metallic solids within wide pressure ranges. It is shown that the PM, PMR and PSP EOSs for most solids, and the SSK and SSKR EOSs for several solids, have physically incorrect turning points, with pressure becoming negative at high enough pressure. The GLIR EOS not only overcomes this negative-pressure problem of the other five EOSs, but also gives results superior to them.
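For orientation, the linear isotherm regularity that the GLIR EOS generalizes is commonly quoted as stating that, along each isotherm, (Z − 1)v² is linear in ρ²; a sketch of the relation as usually written (the exact temperature dependence of the coefficients is given in the cited papers):

```latex
(Z - 1)\,v^{2} = A + B\rho^{2}, \qquad Z = \frac{pv}{RT}, \qquad \rho = \frac{1}{v},
```

where A and B are parameters that depend only on temperature.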
Detection of Fraudulent Transactions Through a Generalized Mixed Linear Models
Directory of Open Access Journals (Sweden)
Jackelyne Gómez–Restrepo
2012-12-01
The detection of bank frauds is a topic which many financial sector companies have invested time and resources into. However, finding patterns in the methodologies used to commit fraud in banks is a job that primarily involves intimate knowledge of customer behavior, with the idea of isolating those transactions which do not correspond to what the client usually does. Thus, the solutions proposed in the literature tend to focus on identifying outliers or groups, but fail to analyse each client or forecast fraud. This paper evaluates the implementation of a generalized linear model to detect fraud. With this model, unlike conventional methods, we consider the heterogeneity of customers. We not only generate a global model, but also a model for each customer which describes the behavior of each one according to their transactional history and previously detected fraudulent transactions. In particular, a mixed logistic model is used to estimate the probability that a transaction is fraudulent, using information that has been taken by the banking systems at different moments in time.
The linearized inversion of the generalized interferometric multiple imaging
Aldawood, Ali
2016-09-06
The generalized interferometric multiple imaging (GIMI) procedure can be used to image duplex waves and other higher order internal multiples. Imaging duplex waves could help illuminate subsurface zones that are not easily illuminated by primaries, such as vertical and nearly vertical fault planes, and salt flanks. To image first-order internal multiples, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI procedure yields migrated images that suffer from low spatial resolution, migration artifacts, and cross-talk noise. To alleviate these problems, we propose a least-squares GIMI framework in which we formulate the first two steps as a linearized inversion problem when imaging first-order internal multiples. Tests on synthetic datasets demonstrate the ability to localize subsurface scatterers in their true positions, and to delineate a vertical fault plane using the proposed method. We also demonstrate the robustness of the proposed framework when imaging the scatterers or the vertical fault plane with erroneous migration velocities.
Adaptive Generation and Diagnostics of Linear Few-Cycle Light Bullets
Directory of Open Access Journals (Sweden)
Martin Bock
2013-02-01
Recently we introduced the class of highly localized wavepackets (HLWs) as a generalization of optical Bessel-like needle beams. Here we report on the progress in this field. In contrast to pulsed Bessel beams and Airy beams, ultrashort-pulsed HLWs propagate with high stability in both spatial and temporal domain, are nearly paraxial (supercollimated), have fringe-less spatial profiles and thus represent the best possible approximation to linear “light bullets”. Like Bessel beams and Airy beams, HLWs show self-reconstructing behavior. Adaptive HLWs can be shaped by ultraflat three-dimensional phase profiles (generalized axicons) which are programmed via calibrated grayscale maps of liquid-crystal-on-silicon spatial light modulators (LCoS-SLMs). Light bullets of even higher complexity can either be freely formed from quasi-continuous phase maps or discretely composed from addressable arrays of identical nondiffracting beams. The characterization of few-cycle light bullets requires spatially resolved measuring techniques. In our experiments, wavefront, pulse and phase were detected with a Shack-Hartmann wavefront sensor, 2D-autocorrelation and spectral phase interferometry for direct electric-field reconstruction (SPIDER). The combination of the unique propagation properties of light bullets with the flexibility of adaptive optics opens new prospects for applications of structured light like optical tweezers, microscopy, data transfer and storage, laser fusion, plasmon control or nonlinear spectroscopy.
Miao, Xiu-feng; Li, Long-suo; Yan, Xiu-ming
2014-11-01
This paper is concerned with the adaptive observer design problem for a class of nonlinear stochastic systems. Unknown constant parameters are assumed to be norm-bounded. In order to better use the structural knowledge of the nonlinear part, a generalized Lipschitz condition is introduced to the adaptive observer design for a class of nonlinear stochastic systems for the first time. Based on a Lyapunov-Krasovskii functional approach and stochastic Lyapunov stability theory, we present a new adaptive observer design condition, in terms of a linear matrix inequality (LMI), under which the error system is ultimately exponentially bounded in the mean-square sense. A numerical example is given to show the validity and feasibility of the results.
Non-linear and adaptive control of a refrigeration system
DEFF Research Database (Denmark)
Rasmussen, Henrik; Larsen, Lars F. S.
2011-01-01
In a refrigeration process heat is absorbed in an evaporator by evaporating a flow of liquid refrigerant at low pressure and temperature. Controlling the evaporator inlet valve and the compressor in such a way that a high degree of liquid filling in the evaporator is obtained at all compressor...... capacities ensures a high energy efficiency. The level of liquid filling is indirectly measured by the superheat. Introduction of variable speed compressors and electronic expansion valves enables the use of more sophisticated control algorithms, giving a higher degree of performance and just as important...... are capable of adapting to a variety of systems. This paper proposes a novel method for superheat and capacity control of refrigeration systems; namely by controlling the superheat by the compressor speed and the capacity by the refrigerant flow. A new low order nonlinear model of the evaporator is developed......
Cavity characterization for general use in linear electron accelerators
International Nuclear Information System (INIS)
Souza Neto, M.V. de.
1985-01-01
The main objective of this work is to develop measurement techniques for the characterization of microwave cavities used in linear electron accelerators. Methods are developed for the measurement of parameters that are essential to the design of an accelerator structure using conventional techniques of resonant cavities at low power. Disk-loaded cavities were designed and built, similar to those in most existing linear electron accelerators. As a result, the methods developed and the estimated accuracy were compared with those from other investigators. The results of this work are relevant for the design of cavities with the objective of developing linear electron accelerators. (author) [pt
A generalized adaptive mathematical morphological filter for LIDAR data
Cui, Zheng
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous point LIDAR dataset, using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often removes ground measurements incorrectly at topographic highs, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of the ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in...
An assessment of estimation methods for generalized linear mixed models with binary outcomes.
Capanu, Marinela; Gönen, Mithat; Begg, Colin B
2013-11-20
Two main classes of methodology have been developed for addressing the analytical intractability of generalized linear mixed models: likelihood-based methods and Bayesian methods. Likelihood-based methods such as the penalized quasi-likelihood approach have been shown to produce biased estimates, especially for binary clustered data with small cluster sizes. More recent methods using adaptive Gaussian quadrature perform well but can be overwhelmed by problems with large numbers of random effects, and efficient algorithms to better handle these situations have not yet been integrated into standard statistical packages. Bayesian methods, although they have good frequentist properties when the model is correct, are known to be computationally intensive and also require specialized code, limiting their use in practice. In this article, we introduce a modification of the hybrid approach of Capanu and Begg, 2011, Biometrics 67, 371-380, as a bridge between the likelihood-based and Bayesian approaches by employing Bayesian estimation for the variance components followed by Laplacian estimation for the regression coefficients. We investigate its performance as well as that of several likelihood-based methods in the setting of generalized linear mixed models with binary outcomes. We apply the methods to three datasets and conduct simulations to illustrate their properties. Simulation results indicate that for moderate to large numbers of observations per random effect, adaptive Gaussian quadrature and the Laplacian approximation are very accurate, with adaptive Gaussian quadrature preferable as the number of observations per random effect increases. The hybrid approach is overall similar to the Laplace method, and it can be superior for data with very sparse random effects. Copyright © 2013 John Wiley & Sons, Ltd.
Implications of plan-based generalization in sensorimotor adaptation.
McDougle, Samuel D; Bond, Krista M; Taylor, Jordan A
2017-07-01
Generalization is a fundamental aspect of behavior, allowing for the transfer of knowledge from one context to another. The details of this transfer are thought to reveal how the brain represents what it learns. Generalization has been a central focus in studies of sensorimotor adaptation, and its pattern has been well characterized: Learning of new dynamic and kinematic transformations in one region of space tapers off in a Gaussian-like fashion to neighboring untrained regions, echoing tuned population codes in the brain. In contrast to common allusions to generalization in cognitive science, generalization in visually guided reaching is usually framed as a passive consequence of neural tuning functions rather than a cognitive feature of learning. While previous research has presumed that maximum generalization occurs at the instructed task goal or the actual movement direction, recent work suggests that maximum generalization may occur at the location of an explicitly accessible movement plan. Here we provide further support for plan-based generalization, formalize this theory in an updated model of adaptation, and test several unexpected implications of the model. First, we employ a generalization paradigm to parameterize the generalization function and ascertain its maximum point. We then apply the derived generalization function to our model and successfully simulate and fit the time course of implicit adaptation across three behavioral experiments. We find that dynamics predicted by plan-based generalization are borne out in the data, are contrary to what traditional models predict, and lead to surprising implications for the behavioral, computational, and neural characteristics of sensorimotor adaptation. NEW & NOTEWORTHY The pattern of generalization is thought to reveal how the motor system represents learned actions. Recent work has made the intriguing suggestion that maximum generalization in sensorimotor adaptation tasks occurs at the location of the
A general maximum likelihood analysis of variance components in generalized linear models.
Aitkin, M
1999-03-01
This paper describes an EM algorithm for nonparametric maximum likelihood (ML) estimation in generalized linear models with variance component structure. The algorithm provides an alternative analysis to approximate MQL and PQL analyses (McGilchrist and Aisbett, 1991, Biometrical Journal 33, 131-141; Breslow and Clayton, 1993; Journal of the American Statistical Association 88, 9-25; McGilchrist, 1994, Journal of the Royal Statistical Society, Series B 56, 61-69; Goldstein, 1995, Multilevel Statistical Models) and to GEE analyses (Liang and Zeger, 1986, Biometrika 73, 13-22). The algorithm, first given by Hinde and Wood (1987, in Longitudinal Data Analysis, 110-126), is a generalization of that for random effect models for overdispersion in generalized linear models, described in Aitkin (1996, Statistics and Computing 6, 251-262). The algorithm is initially derived as a form of Gaussian quadrature assuming a normal mixing distribution, but with only slight variation it can be used for a completely unknown mixing distribution, giving a straightforward method for the fully nonparametric ML estimation of this distribution. This is of value because the ML estimates of the GLM parameters can be sensitive to the specification of a parametric form for the mixing distribution. The nonparametric analysis can be extended straightforwardly to general random parameter models, with full NPML estimation of the joint distribution of the random parameters. This can produce substantial computational saving compared with full numerical integration over a specified parametric distribution for the random parameters. A simple method is described for obtaining correct standard errors for parameter estimates when using the EM algorithm. Several examples are discussed involving simple variance component and longitudinal models, and small-area estimation.
A General Linear Method for Equating with Small Samples
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
Linearly convergent stochastic heavy ball method for minimizing generalization error
Loizou, Nicolas
2017-10-30
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
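A minimal sketch of the iteration under study, run on a hypothetical finite-sum quadratic (the toy data, stepsize, and momentum value below are illustrative choices, not taken from the paper):

```python
import random

def stochastic_heavy_ball(grad_sample, x0, stepsize, beta, iters, rng):
    """SGD with a fixed stepsize and a heavy ball momentum term:
    x_{k+1} = x_k - stepsize * g(x_k) + beta * (x_k - x_{k-1})."""
    x_prev = x = x0
    for _ in range(iters):
        g = grad_sample(x, rng)
        x, x_prev = x - stepsize * g + beta * (x - x_prev), x
    return x

# Toy expected-loss problem: f(x) = E_i[ 0.5 * (x - b_i)^2 ], minimiser = mean(b)
b = [1.0, 2.0, 3.0]

def grad_sample(x, rng):
    # stochastic gradient from one uniformly sampled term of the sum
    return x - rng.choice(b)

x_star = stochastic_heavy_ball(grad_sample, 0.0, 0.05, 0.5, 5000, random.Random(0))
```

With a constant stepsize the iterates hover near the minimiser (here 2.0) within a noise ball whose radius shrinks with the stepsize, matching the flavour of the convergence result described above.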
Single image super-resolution using locally adaptive multiple linear regression.
Yu, Soohwan; Kang, Wonseok; Ko, Seungyong; Paik, Joonki
2015-12-01
This paper presents a regularized super-resolution (SR) reconstruction method using locally adaptive multiple linear regression to overcome the limitation of the spatial resolution of digital images. In order to make the SR problem better posed, the proposed method incorporates the locally adaptive multiple linear regression into the regularization process as a local prior. The local regularization prior assumes that the target high-resolution (HR) pixel is generated by a linear combination of similar pixels in differently scaled patches and optimum weight parameters. In addition, we adapt a modified version of the nonlocal means filter as a smoothness prior to utilize the patch redundancy. Experimental results show that the proposed algorithm restores HR images better than existing state-of-the-art methods with respect to most objective measures in the literature.
Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm
Jansen, R.C.
A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical
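A minimal illustration of the idea, assuming the simplest instance: a two-component Poisson mixture with intercept-only components, fitted by EM (the data and starting values below are invented for the demo):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def em_poisson_mixture(data, lam, pi, iters=200):
    """EM for a two-component Poisson mixture: a minimal instance of a
    finite mixture of GLMs (each component is an intercept-only Poisson GLM)."""
    lam = list(lam)
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 for each count
        resp = []
        for k in data:
            p0 = pi * poisson_pmf(k, lam[0])
            p1 = (1 - pi) * poisson_pmf(k, lam[1])
            resp.append(p0 / (p0 + p1))
        # M-step: weighted Poisson ML updates (responsibility-weighted means)
        w0 = sum(resp)
        lam[0] = sum(r * k for r, k in zip(resp, data)) / w0
        lam[1] = sum((1 - r) * k for r, k in zip(resp, data)) / (len(data) - w0)
        pi = w0 / len(data)
    return lam, pi

lam_fit, pi_fit = em_poisson_mixture([0, 1, 0, 1, 9, 10, 11, 10], [1.0, 8.0], 0.5)
```

In the GLM framework described above, the M-step weighted means would be replaced by weighted GLM fits with covariates; the E-step is unchanged.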
Application of generalized linear models to estimate height growth
Hess, Andre Felipe; Cianorschi, Lucas; Silvestre, Raul; Scariot, Rafael; Ricken, Pollyni
2015-01-01
The analysis of height growth is of great importance in forestry, since it expresses the productive capacity of a site. Its use is associated with fitting models, with the smallest possible error, to generate estimates that allow inference with precision and reliability. The present study analysed the use of generalized linear models to predict the height growth of Pinus taeda L. as a function of age and diameter at 1.30 m height in stands on the Santa Catarina plateau. The data ...
Robust Comparison of the Linear Model Structures in Self-tuning Adaptive Control
DEFF Research Database (Denmark)
Zhou, Jianjun; Conrad, Finn
1989-01-01
The Generalized Predictive Controller (GPC) is extended to the systems with a generalized linear model structure which contains a number of choices of linear model structures. The Recursive Prediction Error Method (RPEM) is used to estimate the unknown parameters of the linear model structures...... to constitute a GPC self-tuner. Different linear model structures commonly used are compared and evaluated by applying them to the extended GPC self-tuner as well as to the special cases of the GPC, the GMV and MV self-tuners. The simulation results show how the choice of model structure affects the input...
Automated Clutch of AMT Vehicle Based on Adaptive Generalized Minimum Variance Controller
Directory of Open Access Journals (Sweden)
Ze Li
2014-11-01
Due to the influence of the non-linear dynamic characteristics of the clutch, external disturbances and parameter variation, the clutch of an automated mechanical transmission (AMT) vehicle is hard to control precisely during the engaging process. In this paper, an adaptive generalized minimum variance controller is applied to the automated clutch, which is driven by a brushless DC motor. The simulation results show that the proposed controller is effective and robust to parametric variation and external disturbance.
Model-free adaptive sliding mode controller design for generalized ...
Indian Academy of Sciences (India)
L M WANG
2017-08-16
A novel model-free adaptive sliding mode strategy is proposed for generalized projective synchronization (GPS) between two entirely unknown fractional-order chaotic systems subject to external disturbances. To overcome the difficulties arising from limited knowledge of the master–slave system ...
Nguyen, Nhan
2013-01-01
This paper presents the optimal control modification for linear uncertain plants. The Lyapunov analysis shows that the modification parameter has a limiting value depending on the nature of the uncertainty. The optimal control modification exhibits a linear asymptotic property that enables it to be analyzed in a linear time invariant framework for linear uncertain plants. The linear asymptotic property shows that the closed-loop plants in the limit possess a scaled input-output mapping. Using this property, we can derive an analytical closed-loop transfer function in the limit as the adaptive gain tends to infinity. The paper revisits the Rohrs counterexample problem that illustrates the nature of non-robustness of model-reference adaptive control in the presence of unmodeled dynamics. An analytical approach is developed to compute exactly the modification parameter for the optimal control modification that stabilizes the plant in the Rohrs counterexample. The linear asymptotic property is also used to address output feedback adaptive control for non-minimum phase plants with a relative degree 1.
A general algorithm for computing distance transforms in linear time
Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS
2000-01-01
A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the...
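The two-phase structure described above can be sketched as follows. This is a simplified squared-Euclidean variant: the first, column-wise phase uses the forward and backward scans, while the second, row-wise phase uses a brute-force minimisation where the actual algorithm uses a linear-time lower-envelope scan:

```python
INF = 1 << 20  # larger than any distance in images of practical size

def edt_two_phase(img):
    """Squared Euclidean distance transform of a 0/1 image.
    Distances are measured to the nearest 1-pixel."""
    h, w = len(img), len(img[0])
    # Phase 1 (column-wise): vertical distance to the nearest 1 in each column.
    g = [[0] * w for _ in range(h)]
    for x in range(w):
        # forward (top-to-bottom) scan
        g[0][x] = 0 if img[0][x] else INF
        for y in range(1, h):
            g[y][x] = 0 if img[y][x] else min(INF, g[y - 1][x] + 1)
        # backward (bottom-to-top) scan
        for y in range(h - 2, -1, -1):
            g[y][x] = min(g[y][x], g[y + 1][x] + 1)
    # Phase 2 (row-wise): combine the column distances.
    dt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dt[y][x] = min((x - i) ** 2 + g[y][i] ** 2 for i in range(w))
    return dt
```

Replacing the inner minimisation with the lower-envelope scan of the full algorithm recovers the linear-time behaviour the title refers to.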
Generalized Heisenberg algebra and (non linear) pseudo-bosons
Bagarello, F.; Curado, E. M. F.; Gazeau, J. P.
2018-04-01
We propose a deformed version of the generalized Heisenberg algebra by using techniques borrowed from the theory of pseudo-bosons. In particular, this analysis is relevant when non self-adjoint Hamiltonians are needed to describe a given physical system. We also discuss relations with nonlinear pseudo-bosons. Several examples are discussed.
Hobbs, Brian P.; Sargent, Daniel J.; Carlin, Bradley P.
2014-01-01
Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate alternative bias-variance trade-offs than those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model. PMID:24795786
Generalized projective synchronization of chaotic systems via adaptive learning control
International Nuclear Information System (INIS)
Yun-Ping, Sun; Jun-Min, Li; Hui-Lin, Wang; Jiang-An, Wang
2010-01-01
In this paper, a learning control approach is applied to the generalized projective synchronisation (GPS) of different chaotic systems with unknown periodically time-varying parameters. Using the Lyapunov–Krasovskii functional stability theory, a differential-difference mixed parametric learning law and an adaptive learning control law are constructed to make the states of two different chaotic systems asymptotically synchronised. The scheme is successfully applied to the generalized projective synchronisation between the Lorenz system and the Chen system. Moreover, numerical simulation results are used to verify the effectiveness of the proposed scheme.
On Self-Adaptive Method for General Mixed Variational Inequalities
Directory of Open Access Journals (Sweden)
Abdellah Bnouhachem
2008-01-01
We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.
Directory of Open Access Journals (Sweden)
Domingues M. O.
2013-12-01
We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten’s cell average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge–Kutta method for local time stepping and an embedded Runge–Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparing with the available exact solution.
Adaptive image contrast enhancement using generalizations of histogram equalization.
Stark, J A
2000-01-01
This paper proposes a scheme for adaptive image-contrast enhancement based on a generalization of histogram equalization (HE). HE is a useful technique for improving image contrast, but its effect is too severe for many purposes. However, dramatically different results can be obtained with relatively minor modifications. A concise description of adaptive HE is set out, and this framework is used in a discussion of past suggestions for variations on HE. A key feature of this formalism is a "cumulation function," which is used to generate a grey level mapping from the local histogram. By choosing alternative forms of cumulation function one can achieve a wide variety of effects. A specific form is proposed. Through the variation of one or two parameters, the resulting process can produce a range of degrees of contrast enhancement, at one extreme leaving the image unchanged, at another yielding full adaptive equalization.
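One way to realize the "range of degrees of contrast enhancement" described above is to blend the identity mapping with the full equalizing mapping through a single strength parameter; the blend below is an illustrative simplification of the idea, not Stark's specific cumulation function:

```python
def equalize(image, alpha=1.0, levels=256):
    """Histogram equalization with a strength parameter:
    alpha = 0 leaves the image unchanged, alpha = 1 gives full HE."""
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function of grey levels
    cdf, total = [0.0] * levels, 0
    for g in range(levels):
        total += hist[g]
        cdf[g] = total / n
    # blend the identity map with the equalizing map into one lookup table
    lut = [round((1 - alpha) * g + alpha * (levels - 1) * cdf[g])
           for g in range(levels)]
    return [[lut[p] for p in row] for row in image]
```

In an adaptive scheme the same construction would be applied per neighbourhood, with the lookup table derived from the local histogram instead of the global one.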
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
The Morava E-theories of finite general linear groups
Mattafirri, Sara
block detector a few centimeters in size is used. The resolution improves significantly with increasing photon energy and degrades roughly linearly with increasing distance from the detector; larger detection efficiency can be obtained at the expense of resolution or via targeted configurations of the detector. The results pave the way for image reconstruction of practical gamma-ray emitting sources.
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
Kanevski, Mikhail
2015-04-01
The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all N possible models [in the case of the wind fields, N = 2^13 - 1 = 8191] and rank them according to the cross-validation error. In both cases training was carried out using the leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
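The GRNN prediction itself is compact enough to sketch. The following is a minimal Nadaraya-Watson / GRNN estimate with an anisotropic Gaussian kernel, one bandwidth per input dimension, as the abstract describes; the function and argument names are illustrative, not from the cited software.

```python
import math

def grnn_predict(x_train, y_train, x_query, sigma):
    """Nadaraya-Watson / GRNN estimate at x_query.

    sigma holds one kernel bandwidth per input dimension (an
    anisotropic kernel); a very large bandwidth flattens the kernel
    along that axis, effectively removing the feature, which is what
    makes the approach usable for feature selection.
    """
    num = den = 0.0
    for x, y in zip(x_train, y_train):
        # Squared Mahalanobis-like distance with per-axis scaling.
        d2 = sum(((a - b) / s) ** 2 for a, b, s in zip(x, x_query, sigma))
        w = math.exp(-0.5 * d2)
        num += w * y
        den += w
    return num / den
```

The bandwidths would be tuned by leave-one-out cross-validation, as in the study above; the prediction is simply a kernel-weighted average of the training responses.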
Mielikainen, Jarno
2010-08-01
This paper extends the clustered differential pulse code modulation (C-DPCM) lossless compression method for hyperspectral images. In the C-DPCM method the spectra of a hyperspectral image are clustered, and an optimized predictor is calculated for each cluster. Prediction is performed using a linear predictor. After prediction, the difference between the predicted and original values is computed. The difference is entropy-coded using an adaptive entropy coder for each cluster. The proposed use of an adaptive prediction length is shown to have a lower bits/pixel value than the original C-DPCM method for the new AVIRIS test images. Both calibrated and uncalibrated images showed improvement over the fixed prediction length.
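The prediction/residual step at the core of DPCM-style lossless coding can be sketched as follows. This is a generic sketch, not the paper's C-DPCM with clustering or its entropy coder; the predictor coefficients are assumed given, and rounding the prediction keeps the scheme exactly invertible on integer data.

```python
def dpcm_residuals(signal, coeffs):
    """Linear prediction along one spectral vector: each sample is
    predicted from the previous len(coeffs) samples, and only the
    prediction residual would be entropy-coded."""
    p = len(coeffs)
    residuals = list(signal[:p])  # warm-up samples stored as-is
    for i in range(p, len(signal)):
        pred = sum(c * signal[i - 1 - j] for j, c in enumerate(coeffs))
        residuals.append(signal[i] - round(pred))
    return residuals

def dpcm_reconstruct(residuals, coeffs):
    """Exact inverse of dpcm_residuals: lossless reconstruction."""
    p = len(coeffs)
    signal = list(residuals[:p])
    for i in range(p, len(residuals)):
        pred = sum(c * signal[i - 1 - j] for j, c in enumerate(coeffs))
        signal.append(residuals[i] + round(pred))
    return signal
```

On smooth spectra the residuals are small and concentrated near zero, which is why they compress much better than the raw samples.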
Lei, Meizhen; Wang, Liqiang
2018-01-01
The Halbach-type linear oscillatory motor (HT-LOM) is multi-variable, highly coupled, nonlinear and uncertain, making it difficult to obtain satisfactory results with conventional PID control. An incremental adaptive fuzzy controller (IAFC) for stroke tracking is presented, which combines the merits of PID control, the fuzzy inference mechanism and the adaptive algorithm. The integral operation is added to the conventional fuzzy control algorithm. The fuzzy scale factor can be tuned online according to the load force and the stroke command. The simulation results indicate that the proposed control scheme achieves satisfactory stroke-tracking performance and is robust with respect to parameter variations and external disturbances.
International Nuclear Information System (INIS)
Barr, D.S.
1992-01-01
It is desired to design a position and angle jitter control system for pulsed linear accelerators that will increase the accuracy of correction over that achieved by currently used standard feedback jitter control systems. Interpulse or pulse-to-pulse correction is performed using the average value of each macropulse. The configuration of such a system resembles that of a standard feedback correction system with the addition of an adaptive controller that dynamically adjusts the gain-phase contour of the feedback electronics. The adaptive controller makes changes to the analog feedback system between macropulses. A simulation of such a system using real measured jitter data from the Stanford Linear Collider was shown to decrease the average rms jitter by over two and a half times. The system also increased and stabilized the correction at high frequencies, a typical problem with standard feedback systems.
International Nuclear Information System (INIS)
Barr, D.S.
1993-01-01
It is desired to design a position and angle jitter control system for pulsed linear accelerators that will increase the accuracy of correction over that achieved by currently used standard feedback jitter control systems. Interpulse or pulse-to-pulse correction is performed using the average value of each macropulse. The configuration of such a system resembles that of a standard feedback correction system with the addition of an adaptive controller that dynamically adjusts the gain-phase contour of the feedback electronics. The adaptive controller makes changes to the analog feedback system between macropulses. A simulation of such a system using real measured jitter data from the Stanford Linear Collider was shown to decrease the average rms jitter by over two and a half times. The system also increased and stabilized the correction at high frequencies, a typical problem with standard feedback systems.
Fast Linear Adaptive Skipping Training Algorithm for Training Artificial Neural Network
Manjula Devi, R.; Kuppuswami, S.; Suganthe, R. C.
2013-01-01
Artificial neural networks have been extensively used as training models for solving pattern recognition tasks. However, training on a very large data set with a complex neural network requires excessively long training times. In this correspondence, a new fast Linear Adaptive Skipping Training (LAST) algorithm for training artificial neural networks (ANN) is introduced. The core idea of this paper is to improve the training speed of an ANN by presenting only the input samples that do ...
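The skipping idea, presenting only samples the network still gets wrong and skipping well-learned ones for a linearly growing number of epochs, can be sketched on a toy perceptron. This is an assumed reading of the truncated abstract, not the published LAST schedule; the names and the skip rule are illustrative.

```python
def train_with_skipping(samples, epochs=20, lr=0.1):
    """Perceptron-style training where correctly classified samples
    are skipped for a number of epochs that grows with each
    consecutive correct classification (the adaptive skipping idea)."""
    w = [0.0, 0.0]
    b = 0.0
    skip = [0] * len(samples)    # epochs left to skip, per sample
    streak = [0] * len(samples)  # consecutive correct classifications
    for _ in range(epochs):
        for i, (x, target) in enumerate(samples):
            if skip[i] > 0:
                skip[i] -= 1     # sample is considered learned; skip it
                continue
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            if err == 0:
                streak[i] += 1
                skip[i] = streak[i]  # skip interval grows linearly
            else:
                streak[i] = 0
                w[0] += lr * err * x[0]
                w[1] += lr * err * x[1]
                b += lr * err
    return w, b
```

Because updates only ever happen on misclassified samples, the perceptron convergence argument for separable data is unaffected, while many forward passes over easy samples are saved.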
Sensitivity theory for general non-linear algebraic equations with constraints
International Nuclear Information System (INIS)
Oblow, E.M.
1977-04-01
Sensitivity theory has been developed to a high state of sophistication for applications involving solutions of the linear Boltzmann equation or approximations to it. The success of this theory in the field of radiation transport has prompted study of possible extensions of the method to more general systems of non-linear equations. Initial work in the U.S. and in Europe on the reactor fuel cycle shows that the sensitivity methodology works equally well for those non-linear problems studied to date. The general non-linear theory for algebraic equations is summarized and applied to a class of problems whose solutions are characterized by constrained extrema. Such equations form the basis of much work on energy systems modelling and the econometrics of power production and distribution. It is valuable to have a sensitivity theory available for these problem areas since it is difficult to repeatedly solve complex non-linear equations to find out the effects of alternative input assumptions or the uncertainties associated with predictions of system behavior. The sensitivity theory for a linear system of algebraic equations with constraints, which can be solved using linear programming techniques, is discussed. The role of the constraints in simplifying the problem so that sensitivity methodology can be applied is highlighted. The general non-linear method is summarized and applied to a non-linear programming problem in particular. Conclusions are drawn about the applicability of the method for practical problems.
The Generalized Logit-Linear Item Response Model for Binary-Designed Items
Revuelta, Javier
2008-01-01
This paper introduces the generalized logit-linear item response model (GLLIRM), which represents the item-solving process as a series of dichotomous operations or steps. The GLLIRM assumes that the probability function of the item response is a logistic function of a linear composite of basic parameters which describe the operations, and the…
Adaptive matching of the iota ring linear optics for space charge compensation
Energy Technology Data Exchange (ETDEWEB)
Romanov, A. [Fermilab; Bruhwiler, D. L. [RadiaSoft, Boulder; Cook, N. [RadiaSoft, Boulder; Hall, C. [RadiaSoft, Boulder
2016-10-09
Many present and future accelerators must operate with high-intensity beams, where distortions induced by space-charge forces are among the major limiting factors. A betatron tune depression above approximately 0.1 per cell leads to significant distortions of the linear optics. Many aspects of machine operation depend on proper relations between lattice functions and phase advances, and can be improved with proper treatment of space-charge effects. We implement an adaptive algorithm for linear lattice re-matching with a full account of space charge in the linear approximation for the case of Fermilab's IOTA ring. The method is based on a search for initial second moments that give a closed solution and, at the same time, a predefined set of goals for emittances, beta functions, dispersions and phase advances at and between points of interest. An iterative technique based on singular value decomposition is used to search for the optimum by varying a wide array of model parameters.
Yue, Dan; Nie, Haitao; Li, Ye; Ying, Changsheng
2018-03-01
Wavefront sensorless (WFSless) adaptive optics (AO) systems have been widely studied in recent years. To reach optimum results, such systems require an efficient correction method. This paper presents a fast wavefront correction approach for a WFSless AO system based mainly on the linear phase diversity (PD) technique. The fast closed-loop control algorithm is set up based on the linear relationship between the drive voltage of the deformable mirror (DM) and the far-field images of the system, which is obtained through the linear PD algorithm combined with the influence function of the DM. A large number of phase screens under different turbulence strengths are simulated to test the performance of the proposed method. The numerical simulation results show that the method has a fast convergence rate and strong correction ability: a few correction iterations achieve good results and effectively improve the imaging quality of the system while requiring fewer CCD measurements.
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
Generalized linear mixed models can detect unimodal species-environment relationships.
Jamil, Tahira; Ter Braak, Cajo J F
2013-01-01
Niche theory predicts that species occurrence and abundance show non-linear, unimodal relationships with respect to environmental gradients. Unimodal models, such as the Gaussian (logistic) model, are however more difficult to fit to data than linear ones, particularly in a multi-species context in ordination, with trait-modulated response, and when species phylogeny and species traits must be taken into account. Adding squared terms to a linear model is a possibility but gives uninterpretable parameters. This paper explains why and when generalized linear mixed models, even without squared terms, can effectively analyse unimodal data, and also presents a graphical tool and a statistical test for unimodal response that apply while fitting just the generalized linear mixed model. The R-code for this is supplied in Supplemental Information 1.
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
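The bias described above comes from the non-linearity of the link function: the mean of a non-linear function of the covariate is not the function of the covariate mean (Jensen's inequality). The following is a minimal numerical illustration under an assumed logistic model with made-up coefficients; the function name and the simple averaging shown here only echo the idea of targeting the population group mean, not the paper's actual estimator or its variance formula.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def group_mean_estimates(beta0, beta1, covariates):
    """Compare two 'group mean' estimates under a logistic model
    P(Y=1|x) = sigmoid(beta0 + beta1*x):

    - at_mean_x: the response evaluated at the mean covariate (what
      model-based group means in most software correspond to);
    - averaged: the mean of fitted responses over the covariate
      sample, a consistent estimate of the population group mean.
    """
    mean_x = sum(covariates) / len(covariates)
    at_mean_x = sigmoid(beta0 + beta1 * mean_x)
    averaged = sum(sigmoid(beta0 + beta1 * x) for x in covariates) / len(covariates)
    return at_mean_x, averaged
```

With a skewed covariate distribution the two quantities differ substantially, which is the bias the abstract warns about; for a linear (identity) link they would coincide.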
Generalization of adaptive neuro-fuzzy inference systems.
Azeem, M F; Hanmandlu, M; Ahmad, N
2000-01-01
The paper aims at several objectives. The adaptive network-based fuzzy inference systems (ANFIS) of Jang is extended to the generalized ANFIS (GANFIS) by proposing a generalized fuzzy model (GFM) and considering a generalized radial basis function (GRBF) network. The GFM encompasses both the Takagi-Sugeno (TS)-model and the compositional rule of inference (CRI)-model. A local model, a property of TS-model, and the index of fuzziness, a property of CRI-model define the consequent part of a rule of GFM. The conditions by which the proposed GFM converts to TS-model or the CRI-model are presented. The basis function in GRBF is a generalized Gaussian function of three parameters. The architecture of the GRBF network is devised to learn the parameters of GFM, since it has been proved in this paper that GRBF network and GFM are functionally equivalent. It is shown that GRBF network can be reduced to either the standard RBF or the Hunt's RBF network. The issue of the normalized versus the nonnormalized GRBF networks is investigated in the context of GANFIS. An interesting property of symmetry on the error surface of GRBF network is investigated in the present work. The proposed GANFIS is applied for the modeling of a multivariable system like stock market.
Design of Attitude Control System for UAV Based on Feedback Linearization and Adaptive Control
Directory of Open Access Journals (Sweden)
Wenya Zhou
2014-01-01
Full Text Available The attitude dynamic model of unmanned aerial vehicles (UAVs) is multi-input multi-output (MIMO), strongly coupled, and nonlinear. Model uncertainties and external gust disturbances should be considered when designing the attitude control system for a UAV. In this paper, feedback linearization and model reference adaptive control (MRAC) are integrated to design the attitude control system for a fixed-wing UAV. First of all, the complicated attitude dynamic model is decoupled into three single-input single-output (SISO) channels by input-output feedback linearization. Secondly, the reference models are determined, respectively, according to the performance indexes of each channel. Subsequently, the adaptive control law is obtained using MRAC theory. In order to demonstrate the performance of the attitude control system, the adaptive control law and the proportional-integral-derivative (PID) control law are, respectively, used in the coupled nonlinear simulation model. Simulation results indicate that the system performance indexes, including maximum overshoot, settling time (2% error range), and rise time, obtained by MRAC are better than those by PID. Moreover, the MRAC system has stronger robustness with respect to model uncertainties and gust disturbances.
Zhang, Ruikun; Hou, Zhongsheng; Ji, Honghai; Yin, Chenkun
2016-04-01
In this paper, an adaptive iterative learning control scheme is proposed for a class of non-linearly parameterised systems with unknown time-varying parameters and input saturations. By incorporating a saturation function, a new iterative learning control mechanism is presented which includes a feedback term and a parameter updating term. Through the use of a parameter separation technique, the non-linear parameters are separated from the non-linear function, and a saturated difference updating law is designed in the iteration domain by combining the unknown parametric term of the locally Lipschitz continuous function and the unknown time-varying gain into a single unknown time-varying function. The convergence analysis is based on a time-weighted Lyapunov-Krasovskii-like composite energy function which consists of time-weighted input, state and parameter estimation information. The proposed learning control mechanism guarantees L2[0, T] convergence of the tracking error sequence along the iteration axis. Simulation results are provided to illustrate the effectiveness of the adaptive iterative learning control scheme.
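The core iterative learning idea, refining a stored control profile from the previous trial's tracking error while respecting input saturation, can be sketched on a toy plant. This is a plain P-type ILC on an assumed scalar plant, not the paper's non-linearly parameterised scheme or its composite-energy-function analysis; all names and constants are made up.

```python
def ilc_run(iterations=30, steps=50, gain=0.8, u_max=5.0):
    """P-type iterative learning control with input saturation on the
    discrete first-order plant x[t+1] = 0.5*x[t] + u[t], tracking a
    reachable reference over a finite horizon, trial after trial."""
    ref = [0.0] + [1.0] * (steps - 1)  # reachable reference (x starts at 0)
    u = [0.0] * steps                  # control profile refined per trial
    history = []                       # max |tracking error| per trial
    for _ in range(iterations):
        # Run one trial from the same initial condition.
        x, xs = 0.0, []
        for t in range(steps):
            xs.append(x)
            x = 0.5 * x + u[t]
        e = [ref[t] - xs[t] for t in range(steps)]
        history.append(max(abs(v) for v in e))
        # Learning update with saturation: u[t] influences x[t+1],
        # so it is corrected with the shifted error e[t+1].
        for t in range(steps - 1):
            u[t] = max(-u_max, min(u_max, u[t] + gain * e[t + 1]))
    return history
```

With learning gain 0.8 the iteration-domain error contraction factor is |1 - 0.8| = 0.2 for this plant, so the tracking error shrinks geometrically along the iteration axis, which is the qualitative behaviour the paper establishes for its much more general class of systems.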
Symmetry Adaptation of the Rotation-Vibration Theory for Linear Molecules
Directory of Open Access Journals (Sweden)
Katy L. Chubb
2018-04-01
Full Text Available A numerical application of linear-molecule symmetry properties, described by the D ∞ h point group, is formulated in terms of lower-order symmetry groups D n h with finite n. Character tables and irreducible representation transformation matrices are presented for D n h groups with arbitrary n-values. These groups can subsequently be used in the construction of symmetry-adapted ro-vibrational basis functions for solving the Schrödinger equations of linear molecules. Their implementation into the symmetrisation procedure based on a set of “reduced” vibrational eigenvalue problems with simplified Hamiltonians is used as a practical example. It is shown how the solutions of these eigenvalue problems can also be extended to include the classification of basis-set functions using ℓ, the eigenvalue (in units of ℏ of the vibrational angular momentum operator L ^ z . This facilitates the symmetry adaptation of the basis set functions in terms of the irreducible representations of D n h . 12 C 2 H 2 is used as an example of a linear molecule of D ∞ h point group symmetry to illustrate the symmetrisation procedure of the variational nuclear motion program Theoretical ROVibrational Energies (TROVE.
Adaptive H∞ nonlinear velocity tracking using RBFNN for linear DC brushless motor
Tsai, Ching-Chih; Chan, Cheng-Kain; Li, Yi Yu
2012-01-01
This article presents an adaptive H∞ nonlinear velocity control for a linear DC brushless motor. A simplified model of this motor with friction is briefly recalled. The friction dynamics are described by the LuGre model, and an online-tuned radial basis function neural network (RBFNN) is used to parameterise the nonlinear friction function and unmodelled errors. An adaptive nonlinear H∞ control method is then proposed to achieve velocity tracking, by assuming that the upper bounds of the ripple force, the changeable load and the nonlinear friction can be learned by the RBFNN. The closed-loop system is proven to be uniformly bounded using Lyapunov stability theory. The feasibility and efficacy of the proposed control are exemplified by conducting two velocity tracking experiments.
Directory of Open Access Journals (Sweden)
Yunfeng Wu
2014-01-01
Full Text Available This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform a selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of the experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve a higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, in relation to the popular simple average, weighted average, and Bagging methods.
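For two component learners the constrained quadratic program has a closed-form solution, which gives a feel for how the fusion weights react to the learners' errors. A sketch under assumptions: residual vectors for each learner are available, and the nonnegativity constraint is handled by clipping to [0, 1], which is adequate only in this two-learner case; this is not the paper's general QP solver, and the name is illustrative.

```python
def alnc_weight(err1, err2):
    """Optimal convex-combination weight w for two component models,
    minimizing ||w*err1 + (1-w)*err2||^2 subject to 0 <= w <= 1
    (the two-learner special case of the constrained QP in ALNC).

    Setting the derivative of the objective to zero gives
    w = err2 . (err2 - err1) / ||err1 - err2||^2, then clipped."""
    d = [a - b for a, b in zip(err1, err2)]
    denom = sum(v * v for v in d)
    if denom == 0.0:
        return 0.5  # identical errors: any convex weight is optimal
    w = sum(b * (b - a) for a, b in zip(err1, err2)) / denom
    return max(0.0, min(1.0, w))
```

Note how anticorrelated errors drive the weight toward 0.5 so the errors cancel, while a clearly better learner takes all the weight; this is the selective-ensemble behaviour the abstract describes.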
Directory of Open Access Journals (Sweden)
Sidra Mumtaz
2018-03-01
Full Text Available In the current smart grid scenario, the development of a proficient and robust maximum power point tracking (MPPT) algorithm for a PV subsystem has become imperative due to fluctuating meteorological conditions. In this paper, an adaptive feedback-linearization-based NeuroFuzzy MPPT (AFBLNF-MPPT) algorithm for a photovoltaic (PV) subsystem in a grid-integrated hybrid renewable energy system (HRES) is proposed. The performance of the proposed AFBLNF-MPPT control strategy is validated through a comprehensive grid-tied HRES test-bed established in MATLAB/Simulink. It outperforms the incremental conductance (IC) based adaptive indirect NeuroFuzzy (IC-AIndir-NF) control scheme, the IC-based adaptive direct NeuroFuzzy (IC-ADir-NF) control scheme, the IC-based adaptive proportional-integral-derivative (IC-AdapPID) control scheme, and the conventional IC algorithm for a PV subsystem in both transient and steady-state modes for varying temperature and irradiance profiles. The comparative analyses were carried out on the basis of performance indexes and MPPT efficiency.
A speed estimation unit for induction motors based on adaptive linear combiner
International Nuclear Information System (INIS)
Marei, Mostafa I.; Shaaban, Mostafa F.; El-Sattar, Ahmed A.
2009-01-01
This paper presents a new induction motor speed estimation technique, which can estimate the rotor resistance as well, from the measured voltage and current signals. Moreover, the paper utilizes a novel adaptive linear combiner (ADALINE) structure for speed and rotor resistance estimations. This structure can deal with the multi-output systems and it is called MO-ADALINE. The model of the induction motor is arranged in a linear form, in the stationary reference frame, to cope with the proposed speed estimator. There are many advantages of the proposed unit such as wide speed range capability, immunity against harmonics of measured waveforms, and precise estimation of the speed and the rotor resistance at different dynamic changes. Different types of induction motor drive systems are used to evaluate the dynamic performance and to examine the accuracy of the proposed unit for speed and rotor resistance estimation.
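The ADALINE at the heart of such an estimator is a linear combiner trained by the least-mean-squares (LMS) rule; once the motor model is arranged in a linear form, the converged weights are the quantities of interest. The following is a generic single-output LMS sketch with illustrative names; the paper's MO-ADALINE extends this to multi-output systems.

```python
def adaline_estimate(regressors, targets, lr=0.05, epochs=200):
    """LMS training of an adaptive linear combiner (ADALINE).

    Each regressor row collects measured signal terms; the weights
    that emerge are the estimated model parameters (in the paper,
    quantities such as speed and rotor resistance enter linearly)."""
    n = len(regressors[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, d in zip(regressors, targets):
            y = sum(wi * xi for wi, xi in zip(w, x))  # combiner output
            e = d - y                                 # instantaneous error
            for j in range(n):
                w[j] += lr * e * x[j]                 # LMS weight update
    return w
```

For a consistent, noise-free linear system the weights converge geometrically to the true parameters provided the learning rate is small enough for the regressor magnitudes.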
Linear-quadratic-Gaussian control for adaptive optics systems using a hybrid model.
Looze, Douglas P
2009-01-01
This paper presents a linear-quadratic-Gaussian (LQG) design based on the equivalent discrete-time model of an adaptive optics (AO) system. The design model incorporates deformable mirror dynamics, an asynchronous wavefront sensor and zero-order hold operation, and a continuous-time model of the incident wavefront. Using the structure of the discrete-time model, the dimensions of the Riccati equations to be solved are reduced. The LQG controller is shown to improve AO system performance under several conditions.
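The LQG gains come from Riccati equations, and the scalar case shows the kind of computation whose dimensions the paper reduces. The following is a sketch by fixed-point iteration of the discrete algebraic Riccati equation for the state-feedback half of an LQG controller (the estimator half is the dual problem); the function name and iteration count are illustrative, not from the paper.

```python
def dare_gain(a, b, q, r, iters=200):
    """Solve the scalar discrete algebraic Riccati equation
        P = q + a^2*P - (a*b*P)^2 / (r + b^2*P)
    by fixed-point (value) iteration, and return the steady-state
    LQR feedback gain K = a*b*P / (r + b^2*P)."""
    p = q  # standard initialization for value iteration
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)
```

For a = b = q = r = 1 the fixed point is P = (1 + sqrt(5))/2, giving K = (sqrt(5) - 1)/2 and a stable closed loop |a - b*K| < 1; in the matrix case the same iteration runs over matrices, which is where reducing the Riccati dimensions pays off.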
A model following inverse controller with adaptive compensation for General Aviation aircraft
Bruner, Hugh S.
The theory for an adaptive inverse flight controller, suitable for use on General Aviation aircraft, is developed in this research. The objectives of this controller are to separate the normally coupled modes of the basic aircraft and thereby permit direct control of airspeed and flight-path angle, meet prescribed performance characteristics as defined by damping ratio and natural frequency, adapt to uncertainties in the physical plant, and be computationally efficient. The three basic elements of the controller are a linear prefilter, an inverse transfer function, and an adaptive neural network compensator. The linear prefilter shapes accelerations required of the overall system in order to achieve the desired system performance characteristics. The inverse transfer function is used to compute the aircraft control inputs required to achieve the necessary accelerations. The adaptive neural network compensator is used to compensate for modeling errors during design or real-time changes in the physical plant. This architecture is patterned after the work of Calise, but differs by not requiring dynamic feedback of the state variables. The controller is coded in ANSI C and integrated with a simulation of a typical General Aviation aircraft. Twenty-three cases are simulated to prove that the objectives for the controller are met. Among these cases are simulated stability and controllability failures in the physical plant, as well as several simulated failures of the neural network. With the exception of some bounded speed-tracking error, the controller is capable of continued flight with any foreseeable failure of the neural network. Recommendations are provided for follow-on investigations by other researchers.
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
Downie, John D.
1990-01-01
A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle will take a finite amount of time. Longer time delays result in larger values of residual wavefront error variance since the atmosphere continues to change during that time. Thus an optical processor may be well-suited for this task. This paper presents a study of the accuracy requirements in a general optical processor that will make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation programming problems. Global convergence is proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Schluchter, Mark D.
2008-01-01
In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…
ADAPTIVE OUTPUT CONTROL OF MULTICHANNEL LINEAR STATIONARY SYSTEMS UNDER PARAMETRIC UNCERTAINTY
Directory of Open Access Journals (Sweden)
Aleksei A. Bobtsov
2014-11-01
Full Text Available The paper deals with the problem of adaptive control for multi-channel linear stationary plants under parametric uncertainty, with arbitrary relative degree of each local subsystem. The synthesized regulator stabilizes the plant under the condition that, for each local subsystem, only output variables are measured and the relative degrees are known, while the order of the linear differential equations is unknown. For simplicity of presentation, the synthesis is described for a two-channel system. The "serial compensator" algorithm is chosen as the basic approach, together with A.L. Fradkov's passification theorem and additional filters containing high-gain constants in their structure. The stability of the closed-loop system under the indicated class of regulators is analyzed, and necessary and sufficient conditions for exponential convergence are considered. From a practical point of view, we suggest an adaptive version of the "serial compensator" method in which the gain constant is tuned by an integral-type algorithm. Computer simulation results for third- and second-order subsystems under parametric uncertainty illustrate the workability of the proposed approach. It is shown that the proposed technique makes it possible to synthesize control algorithms for multichannel systems under parametric uncertainty with minimal dynamical order compared to known foreign and domestic counterparts.
Rankin, C. C.
1988-01-01
A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.
Czech Academy of Sciences Publication Activity Database
Náhlík, Luboš; Šestáková, L.; Hutař, Pavel; Knésl, Zdeněk
2011-01-01
Roč. 452-453, - (2011), s. 445-448 ISSN 1013-9826 R&D Projects: GA AV ČR(CZ) KJB200410803; GA ČR GA101/09/1821 Institutional research plan: CEZ:AV0Z20410507 Keywords : generalized stress intensity factor * bimaterial interface * composite materials * strain energy density factor * fracture criterion * generalized linear elastic fracture mechanics Subject RIV: JL - Materials Fatigue, Friction Mechanics
General linear methods and friends: Toward efficient solutions of multiphysics problems
Sandu, Adrian
2017-07-01
Time-dependent multiphysics partial differential equations are of great practical importance, as they model diverse phenomena in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.
GLIMMIX : Software for estimating mixtures and mixtures of generalized linear models
Wedel, M
2001-01-01
GLIMMIX is a commercial WINDOWS-based computer program that implements the EM algorithm (Dempster, Laird and Rubin 1977) for the estimation of finite mixtures and mixtures of generalized linear models. The program allows for the specification of a number of distributions in the exponential family,
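GLIMMIX itself is commercial software, but the EM algorithm it implements can be illustrated on the simplest member of the model class it estimates. The sketch below (a generic illustration, not GLIMMIX's actual code) fits a two-component Poisson mixture, i.e. the intercept-only special case of a mixture of generalized linear models:

```python
import math
import numpy as np

def poisson_mixture_em(y, n_iter=300):
    """EM for a two-component Poisson mixture: the intercept-only
    special case of a mixture of generalized linear models."""
    y = np.asarray(y, dtype=float)
    lgam = np.array([math.lgamma(v + 1.0) for v in y])  # log(y!)
    lam = np.array([y.mean() * 0.5, y.mean() * 1.5])    # crude split to break symmetry
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior component probabilities, computed on the log scale
        logp = y[:, None] * np.log(lam) - lam - lgam[:, None] + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted updates of the mixing weights and Poisson means
        pi = resp.mean(axis=0)
        lam = resp.T @ y / resp.sum(axis=0)
    return pi, lam

rng = np.random.default_rng(0)
y = np.concatenate([rng.poisson(1.0, 500), rng.poisson(10.0, 500)])
pi, lam = poisson_mixture_em(y)
```

With covariates, the M-step's weighted mean is replaced by a weighted GLM fit per component; the E-step is unchanged.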
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
DEFF Research Database (Denmark)
Dlugosz, Stephan; Mammen, Enno; Wilke, Ralf
2017-01-01
observations from Germany. It is shown that estimated marginal effects of a number of covariates are sizeably affected by misclassification and missing values in the analysis data. The proposed generalized partially linear regression extends existing models by allowing a misclassified discrete covariate...
On the distribution of discounted loss reserves using generalized linear models
Hoedemakers, T.; Beirlant, J.; Goovaerts, M.J.; Dhaene, J.
2005-01-01
Renshaw and Verrall [11] specified the generalized linear model (GLM) underlying the chain-ladder technique and suggested some other GLMs which might be useful in claims reserving. The purpose of this paper is to construct bounds for the discounted loss reserve within the framework of GLMs. Exact
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
A generalized variational algebra and conserved densities for linear evolution equations
International Nuclear Information System (INIS)
Abellanas, L.; Galindo, A.
1978-01-01
The symbolic algebra of Gel'fand and Dikii is generalized to the case of n variables. Using this algebraic approach a rigorous characterization of the polynomial kernel of the variational derivative is given. This is applied to classify all the conservation laws for linear polynomial evolution equations of arbitrary order. (Auth.)
Bayesian prediction of spatial count data using generalized linear mixed models
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge
2002-01-01
Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
Directory of Open Access Journals (Sweden)
Jen-Yuan Chen
2014-01-01
Continuing from the works of Li et al. (2014), Li (2007), and Kincaid et al. (2000), we present more generalizations and modifications of iterative methods for solving large sparse symmetric and nonsymmetric indefinite systems of linear equations. We discuss a variety of iterative methods such as GMRES, MGMRES, MINRES, LQ-MINRES, QR MINRES, MMINRES, MGRES, and others.
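As a rough sketch of the Krylov iteration underlying this family of methods (not the authors' modified variants), a minimal full-memory GMRES, with no restarting or preconditioning, can be written in a few lines:

```python
import numpy as np

def gmres(A, b, tol=1e-10):
    """Minimal full-memory GMRES: Arnoldi process plus a small
    least-squares solve. No restarting or preconditioning."""
    n = len(b)
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    y = np.zeros(0)
    for j in range(n):
        v = A @ Q[:, j]                       # expand the Krylov subspace
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        # minimize ||beta*e1 - H_j y|| over the current subspace
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        resid = np.linalg.norm(H[:j + 2, :j + 1] @ y - e1)
        if H[j + 1, j] < tol * beta or resid < tol * beta:
            break                             # converged or lucky breakdown
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :len(y)] @ y

# small symmetric indefinite test system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, -3.0, 2.0],
              [0.0, 2.0, 1.0]])
x_true = np.array([1.0, -2.0, 0.5])
x = gmres(A, A @ x_true)
```

For symmetric (possibly indefinite) matrices like the one above, MINRES achieves the same minimization with a short recurrence instead of storing the full basis Q.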
On Extended Exponential General Linear Methods PSQ with S>Q ...
African Journals Online (AJOL)
This paper is concerned with the construction and numerical analysis of Extended Exponential General Linear Methods. These methods, in contrast to other methods in the literature, consider methods with the step greater than the stage order (S>Q). Numerical experiments in this study indicate that Extended Exponential ...
The Use of Hierarchical Generalized Linear Model for Item Dimensionality Assessment
Beretvas, S. Natasha; Williams, Natasha J.
2004-01-01
To assess item dimensionality, the following two approaches are described and compared: hierarchical generalized linear model (HGLM) and multidimensional item response theory (MIRT) model. Two generating models are used to simulate dichotomous responses to a 17-item test: the unidimensional and compensatory two-dimensional (C2D) models. For C2D…
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
Molenaar, D.; Tuerlinckx, F.; van der Maas, H.L.J.
2015-01-01
We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only
Toyoizumi, Taro; Rad, Kamiar Rahnama; Paninski, Liam
2009-05-01
There has recently been a great deal of interest in inferring network connectivity from the spike trains in populations of neurons. One class of useful models that can be fit easily to spiking data is based on generalized linear point process models from statistics. Once the parameters for these models are fit, the analyst is left with a nonlinear spiking network model with delays, which in general may be very difficult to understand analytically. Here we develop mean-field methods for approximating the stimulus-driven firing rates (in both the time-varying and steady-state cases), auto- and cross-correlations, and stimulus-dependent filtering properties of these networks. These approximations are valid when the contributions of individual network coupling terms are small and, hence, the total input to a neuron is approximately gaussian. These approximations lead to deterministic ordinary differential equations that are much easier to solve and analyze than direct Monte Carlo simulation of the network activity. These approximations also provide an analytical way to evaluate the linear input-output filter of neurons and how the filters are modulated by network interactions and some stimulus feature. Finally, in the case of strong refractory effects, the mean-field approximations in the generalized linear model become inaccurate; therefore, we introduce a model that captures strong refractoriness, retains all of the easy fitting properties of the standard generalized linear model, and leads to much more accurate approximations of mean firing rates and cross-correlations that retain fine temporal behaviors.
A differential-geometric approach to generalized linear models with grouped predictors
Augugliaro, Luigi; Mineo, Angelo M.; Wit, Ernst C.
We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
Bayesian estimation and hypothesis tests for a circular Generalized Linear Model
Mulder, Kees; Klugkist, Irene
2017-01-01
Motivated by a study from cognitive psychology, we develop a Generalized Linear Model for circular data within the Bayesian framework, using the von Mises distribution. Although circular data arise in a wide variety of scientific fields, the number of methods for their analysis is limited. Our model
Bayesian prediction of spatial count data using generalized linear mixed models
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge
2002-01-01
Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, we...
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique, since the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables, with the redundant parameters set to zero via a data-dependent technique. The method has the oracle property, meaning that the nonzero parameters are estimated with their standard limit distribution while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
Hydrodynamics in full general relativity with conservative adaptive mesh refinement
East, William E.; Pretorius, Frans; Stephens, Branson C.
2012-06-01
There is great interest in numerical relativity simulations involving matter due to the likelihood that binary compact objects involving neutron stars will be detected by gravitational wave observatories in the coming years, as well as to the possibility that binary compact object mergers could explain short-duration gamma-ray bursts. We present a code designed for simulations of hydrodynamics coupled to the Einstein field equations targeted toward such applications. This code has recently been used to study eccentric mergers of black hole-neutron star binaries. We evolve the fluid conservatively using high-resolution shock-capturing methods, while the field equations are solved in the generalized-harmonic formulation with finite differences. In order to resolve the various scales that may arise, we use adaptive mesh refinement (AMR) with grid hierarchies based on truncation error estimates. A noteworthy feature of this code is the implementation of the flux correction algorithm of Berger and Colella to ensure that the conservative nature of fluid advection is respected across AMR boundaries. We present various tests to compare the performance of different limiters and flux calculation methods, as well as to demonstrate the utility of AMR flux corrections.
Torque ripple reduction of brushless DC motor based on adaptive input-output feedback linearization.
Shirvani Boroujeni, M; Markadeh, G R Arab; Soltani, J
2017-09-01
Torque ripple reduction of Brushless DC Motors (BLDCs) is an interesting subject in variable speed AC drives. In this paper at first, a mathematical expression for torque ripple harmonics is obtained. Then for a non-ideal BLDC motor with known harmonic contents of back-EMF, calculation of desired reference current amplitudes, which are required to eliminate some selected harmonics of torque ripple, are reviewed. In order to inject the reference harmonic currents to the motor windings, an Adaptive Input-Output Feedback Linearization (AIOFBL) control is proposed, which generates the reference voltages for three phases voltage source inverter in stationary reference frame. Experimental results are presented to show the capability and validity of the proposed control method and are compared with the vector control in Multi-Reference Frame (MRF) and Pseudo-Vector Control (P-VC) method results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Monolithic discretization of linear thermoelasticity problems via adaptive multimesh hp-FEM
Czech Academy of Sciences Publication Activity Database
Šolín, Pavel; Červený, Jakub; Dubcová, Lenka; Andrš, David
2010-01-01
Roč. 234, č. 7 (2010), s. 2350-2357 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z20570509 Keywords : linear elasticity * monolithic discretization * adaptive multimesh hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.029, year: 2010 http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6TYH-4X1J73B-V-11&_cdi=5619&_user=640952&_pii=S0377042709005731&_origin=search&_coverDate=08%2F01%2F2010&_sk=997659992&view=c&wchp=dGLzVzb-zSkWA&md5=3665f1549355544e9c36e84a4adfd086&ie=/sdarticle.pdf
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
Evaluation of non-linear adaptive smoothing filter by digital phantom
International Nuclear Information System (INIS)
Sato, Kazuhiro; Ishiya, Hiroki; Oshita, Ryosuke; Yanagawa, Isao; Goto, Mitsunori; Mori, Issei
2008-01-01
As a result of the development of multi-slice CT, diagnoses based on three-dimensional reconstruction images and multi-planar reconstruction have spread. These applications, which require high z-resolution, make thin-slice imaging essential. However, because z-resolution always trades off against image noise, thin-slice imaging is necessarily accompanied by an increase in noise level. To improve the quality of thin-slice images, a non-linear adaptive smoothing filter has been developed and is being widely applied in clinical use. We developed a digital bar-pattern phantom for the purpose of evaluating the effect of this filter, and attempted an evaluation using an addition image of the bar-pattern phantom and the image of a water phantom. The effect of the filter changed in a complex manner with the contrast and spatial frequency of the original image. We confirmed a reduction of image noise in the low-frequency components of the image, but decreased contrast or increased noise in the high-frequency components; this reflects how the filter's adaptation changes with image content. The digital phantom was useful for this evaluation, but to understand the total effect of the filtering, considerable improvement of the digital phantom's shape is required. (author)
Linearity enhancement of TVGA based on adaptive sweep optimisation in monostatic radar receiver
Almslmany, Amir; Wang, Caiyun; Cao, Qunsheng
2016-08-01
The limited input dynamic power range of the radar receiver and the power loss due to the targets' ranges are two potential problems in the radar receivers. This paper proposes a model based on the time-varying gain amplifier (TVGA) to compensate the power loss from the targets' ranges, and using the negative impedance compensation technique to enhance the TVGA linearity based on Volterra series. The simulation has been done based on adaptive sweep optimisation (ASO) using advanced design system (ADS) and Matlab. It shows that the suppression of the third-order intermodulation products (IMR3) was carried out for two-tone test, the high-gain accuracy improved by 3 dB, and the high linearity IMR3 improved by 14 dB. The monostatic radar system was tested to detect three targets at different ranges and to compare its probability of detection with the prior models; the results show that the probability of detection has been increased for ASO/TVGA.
Molenaar, D.; Bolsinova, M.
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity
Hierarchical shrinkage priors and model fitting for high-dimensional generalized linear models.
Yi, Nengjun; Ma, Shuangge
2012-11-26
Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same groups being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions: hierarchical Cauchy and double-exponential distributions, on coefficients in generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adopting different shrinkage for different coefficients and different groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the general statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
Energy Technology Data Exchange (ETDEWEB)
Escane, J.M. [Ecole Superieure d' Electricite, 91 - Gif-sur-Yvette (France)
2005-04-01
The first part of this article defines the different elements of an electrical network and the models to represent them. Each model involves the current and the voltage as functions of time. Models involving time functions are simple, but their use is not always easy. The Laplace transformation leads to a more convenient form in which the variable is no longer directly the time. This transformation also leads to the notion of transfer function, which is the object of the second part. The third part aims at defining the fundamental operation rules of linear networks, commonly named 'general theorems': the linearity principle and superposition theorem, the duality principle, the Thevenin theorem, the Norton theorem, the Millman theorem, and the triangle-star and star-triangle transformations. These theorems allow the study of complex power networks and simplify the calculations. They rest on hypotheses, the first being that all networks considered in this article are linear. (J.S.)
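The superposition theorem named in this abstract can be checked numerically: for any linear resistive network in nodal form G v = i, the response to several sources equals the sum of the responses to each source acting alone. The conductance values below are made up for illustration, not taken from the article:

```python
import numpy as np

# Nodal conductance matrix (siemens) of a small two-node resistive network;
# values are illustrative only.
G = np.array([[1.5, -0.5],
              [-0.5, 1.2]])
i1 = np.array([2.0, 0.0])    # injection vector of source 1 acting alone
i2 = np.array([0.0, -1.0])   # injection vector of source 2 acting alone

# Superposition: solving with both sources equals summing per-source solutions.
v_both = np.linalg.solve(G, i1 + i2)
v_sup = np.linalg.solve(G, i1) + np.linalg.solve(G, i2)
```

The equality holds exactly (up to roundoff) precisely because the network equations are linear, which is the article's first hypothesis.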
Conditional Akaike information under generalized linear and proportional hazards mixed models
Donohue, M. C.; Overholser, R.; Xu, R.; Vaida, F.
2011-01-01
We study model selection for clustered data, when the focus is on cluster specific inference. Such data are often modelled using random effects, and conditional Akaike information was proposed in Vaida & Blanchard (2005) and used to derive an information criterion under linear mixed models. Here we extend the approach to generalized linear and proportional hazards mixed models. Outside the normal linear mixed models, exact calculations are not available and we resort to asymptotic approximations. In the presence of nuisance parameters, a profile conditional Akaike information is proposed. Bootstrap methods are considered for their potential advantage in finite samples. Simulations show that the performance of the bootstrap and the analytic criteria are comparable, with bootstrap demonstrating some advantages for larger cluster sizes. The proposed criteria are applied to two cancer datasets to select models when the cluster-specific inference is of interest. PMID:22822261
An analogue of Morse theory for planar linear networks and the generalized Steiner problem
International Nuclear Information System (INIS)
Karpunin, G A
2000-01-01
A study is made of the generalized Steiner problem: the problem of finding all the locally minimal networks spanning a given boundary set (terminal set). It is proposed to solve this problem by using an analogue of Morse theory developed here for planar linear networks. The space K of all planar linear networks spanning a given boundary set is constructed. The concept of a critical point and its index is defined for the length function l of a planar linear network. It is shown that locally minimal networks are local minima of l on K and are critical points of index 1. The theorem is proved that the sum of the indices of all the critical points is equal to χ(K)=1. This theorem is used to find estimates for the number of locally minimal networks spanning a given boundary set
Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R
2008-08-01
The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.
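The GLIMMIX-based formulas of this paper depend on the attrition pattern and random-effect sizes, which cannot be reproduced here. As a baseline point of comparison only, the classical per-group sample size for two independent proportions (no clustering, no attrition) follows the standard normal-approximation formula:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Classical sample size per group for comparing two proportions
    (normal approximation; ignores clustering, attrition, random effects)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided type I error
    z_b = NormalDist().inv_cdf(power)           # type II error
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

n = n_per_group(0.5, 0.3)   # 91 per group at 80% power
```

Within-subject correlation and random effects inflate this baseline, which is why the longitudinal tables in the article differ from the simple formula.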
Linear and nonlinear associations between general intelligence and personality in Project TALENT.
Major, Jason T; Johnson, Wendy; Deary, Ian J
2014-04-01
Research on the relations of personality traits to intelligence has primarily been concerned with linear associations. Yet, there are no a priori reasons why linear relations should be expected over nonlinear ones, which represent a much larger set of all possible associations. Using 2 techniques, quadratic and generalized additive models, we tested for linear and nonlinear associations of general intelligence (g) with 10 personality scales from Project TALENT (PT), a nationally representative sample of approximately 400,000 American high school students from 1960, divided into 4 grade samples (Flanagan et al., 1962). We departed from previous studies, including one with PT (Reeve, Meyer, & Bonaccio, 2006), by modeling latent quadratic effects directly, controlling the influence of the common factor in the personality scales, and assuming a direction of effect from g to personality. On the basis of the literature, we made 17 directional hypotheses for the linear and quadratic associations. Of these, 53% were supported in all 4 male grades and 58% in all 4 female grades. Quadratic associations explained substantive variance above and beyond linear effects (mean R² between 1.8% and 3.6%) for Sociability, Maturity, Vigor, and Leadership in males and Sociability, Maturity, and Tidiness in females; linear associations were predominant for other traits. We discuss how suited current theories of the personality-intelligence interface are to explain these associations, and how research on intellectually gifted samples may provide a unique way of understanding them. We conclude that nonlinear models can provide incremental detail regarding personality and intelligence associations. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
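The incremental-R² logic used in this abstract can be illustrated on simulated data (not the Project TALENT scales): fit the trait on g alone, then on g and g², and compare variance explained. The effect sizes below are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=2000)                  # standardized ability score (simulated)
trait = 0.4 * g - 0.15 * g**2 + rng.normal(scale=0.9, size=2000)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X
    (an intercept column is added internally)."""
    X = np.column_stack([np.ones_like(y), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_lin = r_squared(g[:, None], trait)
r2_quad = r_squared(np.column_stack([g, g**2]), trait)
gain = r2_quad - r2_lin    # incremental variance explained by the quadratic term
```

The "gain" here plays the role of the 1.8%-3.6% incremental R² values reported in the abstract; the paper's latent-variable modeling additionally controls the common personality factor, which this sketch omits.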
A fully general and adaptive inverse analysis method for cementitious materials
DEFF Research Database (Denmark)
Jepsen, Michael S.; Damkilde, Lars; Lövgren, Ingemar
2016-01-01
are applied when modeling the fracture mechanisms in cementitious materials, but the vast development of pseudo-strain hardening, fiber reinforced cementitious materials require inverse methods, capable of treating multi-linear σ - w functions. The proposed method is fully general in the sense that it relies...... on least square fitting between test data obtained from various kinds of test setup, three-point bending or wedge splitting test, and simulated data obtained by either FEA or analytical models. In the current paper adaptive inverse analysis is conducted on test data obtained from three-point bending...... of notched specimens and simulated data from a nonlinear hinge model. The paper shows that the results obtained by means of the proposed method is independent on the initial shape of the σ - w function and the initial guess of the tensile strength. The method provides very accurate fits, and the increased...
A general digital computer procedure for synthesizing linear automatic control systems
International Nuclear Information System (INIS)
Cummins, J.D.
1961-10-01
The fundamental concepts required for synthesizing a linear automatic control system are considered. A generalized procedure for synthesizing automatic control systems is demonstrated. This procedure has been programmed for the Ferranti Mercury and the IBM 7090 computers. Details of the programmes are given. The procedure uses the linearized set of equations which describe the plant to be controlled as the starting point. Subsequent computations determine the transfer functions between any desired variables. The programmes also compute the root and phase loci for any linear (and some non-linear) configurations in the complex plane, the open loop and closed loop frequency responses of a system, the residues of a function of the complex variable 's' and the time response corresponding to these residues. With these general programmes available the design of 'one point' automatic control systems becomes a routine scientific procedure. Also dynamic assessments of plant may be carried out. Certain classes of multipoint automatic control problems may also be solved with these procedures. Autonomous systems, invariant systems and orthogonal systems may also be studied. (author)
International Nuclear Information System (INIS)
Maldonado, G.I.; Turinsky, P.J.; Kropaczek, D.J.
1993-01-01
The computational capability to efficiently and accurately evaluate reactor core attributes (i.e., k_eff and power distributions as a function of cycle burnup) utilizing a second-order accurate advanced nodal Generalized Perturbation Theory (GPT) model has been developed. The GPT model is derived from the forward non-linear iterative Nodal Expansion Method (NEM) strategy, thereby extending its inherent savings in memory storage and high computational efficiency to GPT as well, via preservation of the finite-difference matrix structure. This development was easily implemented into the existing coarse-mesh finite-difference GPT-based in-core fuel management optimization code FORMOSA-P, thus combining the proven robustness of its adaptive Simulated Annealing (SA) multiple-objective optimization algorithm with a high-fidelity NEM GPT neutronics model to produce a powerful computational tool for generating families of near-optimum loading patterns for PWRs. (orig.)
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
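The reparameterization cited here (Pourahmadi, 2000) removes the positive-definiteness constraint by writing T Σ Tᵀ = D, with T unit lower triangular (negatives of autoregressive coefficients) and D diagonal (innovation variances). A sketch for the balanced, fully observed case, computed via the ordinary Cholesky factor:

```python
import numpy as np

def modified_cholesky(sigma):
    """Decompose a covariance matrix as T @ sigma @ T.T = D, where T is
    unit lower triangular and D diagonal; cf. Pourahmadi (2000)."""
    L = np.linalg.cholesky(sigma)        # sigma = L @ L.T
    d = np.diag(L)
    T = np.diag(d) @ np.linalg.inv(L)    # unit lower triangular by construction
    D = np.diag(d ** 2)
    return T, D

# AR(1)-like covariance for 4 equally spaced time points
rho, t = 0.6, np.arange(4)
sigma = rho ** np.abs(t[:, None] - t[None, :])
T, D = modified_cholesky(sigma)
```

For this AR(1) covariance, the subdiagonal of T recovers -rho, i.e. the lag-one autoregressive coefficient; the paper's contribution is handling the unbalanced case, where this per-subject decomposition is embedded in a larger covariance and fitted by EM.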
An Entropy-Based Approach to Path Analysis of Structural Generalized Linear Models: A Basic Idea
Directory of Open Access Journals (Sweden)
Nobuoki Eshima
2015-07-01
A path analysis method for causal systems based on generalized linear models is proposed by using entropy. A practical example is introduced, and a brief explanation of the entropy coefficient of determination is given. Direct and indirect effects of explanatory variables are discussed as log odds ratios, i.e., relative information, and a method for summarizing the effects is proposed. The example dataset is re-analyzed by using the method.
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
Synthesis of general linear networks using causal and J-isometric dilations
International Nuclear Information System (INIS)
D'Attellis, C.E.
1977-06-01
The problem of the synthesis of linear systems characterized by their scattering operator is studied. This problem is considered solved once an adequate dilation of the operator is obtained, from which the synthesis is performed following the method of Saeks (35) and Levan (19). Known results are systematized and generalized in this paper, obtaining a unique method of synthesis for different categories of operators. (Author) [es
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
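The "explode" step can be sketched as follows; the record format and cut points are illustrative, not those of the %PCFrailty macro. Each subject contributes one row per baseline-hazard piece, and a Poisson model on `died` with offset log(exposure) then reproduces the piecewise-exponential likelihood:

```python
def explode(time, event, cuts):
    """Split one survival record into piecewise-exposure rows.

    Returns (interval_index, exposure, died) triples, one row per
    baseline-hazard piece the subject was at risk in. A Poisson
    log-likelihood for `died` with offset log(exposure) matches the
    piecewise-exponential survival likelihood.
    """
    rows = []
    lower = 0.0
    for j, upper in enumerate(cuts + [float("inf")]):
        if time <= lower:
            break  # subject left the risk set in an earlier piece
        exposure = min(time, upper) - lower
        died = 1 if (event and time <= upper) else 0
        rows.append((j, exposure, died))
        lower = upper
    return rows
```

A log-normal frailty enters this setup as a normally distributed random intercept in the Poisson mixed model fitted to the exploded rows.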
A General Construction of Linear Differential Equations with Solutions of Prescribed Properties
Czech Academy of Sciences Publication Activity Database
Neuman, František
2004-01-01
Roč. 17, č. 1 (2004), s. 71-76 ISSN 0893-9659 R&D Projects: GA AV ČR IAA1019902; GA ČR GA201/99/0295 Institutional research plan: CEZ:AV0Z1019905 Keywords : construction of linear differential equations * prescribed qualitative properties of solutions Subject RIV: BA - General Mathematics Impact factor: 0.414, year: 2004
Directory of Open Access Journals (Sweden)
Tsung-han Tsai
2013-05-01
There is some confusion in political science, and the social sciences in general, about the meaning and interpretation of interaction effects in models with non-interval, non-normal outcome variables. Often these terms are casually thrown into a model specification without observing that their presence fundamentally changes the interpretation of the resulting coefficients. This article explains the conditional nature of reported coefficients in models with interactions, defining the necessarily different interpretation required by generalized linear models. Methodological issues are illustrated with an application to voter information structured by electoral systems and resulting legislative behavior and democratic representation in comparative politics.
Listening to a non-native speaker: Adaptation and generalization
Clarke, Constance M.
2004-05-01
Non-native speech can cause perceptual difficulty for the native listener, but experience can moderate this difficulty. This study explored the perceptual benefit of a brief (approximately 1 min) exposure to foreign-accented speech using a cross-modal word matching paradigm. Processing speed was tracked by recording reaction times (RTs) to visual probe words following English sentences produced by a Spanish-accented speaker. In Experiment 1, RTs decreased significantly over 16 accented utterances and by the end were equal to RTs to a native voice. In Experiment 2, adaptation to one Spanish-accented voice improved perceptual efficiency for a new Spanish-accented voice, indicating that abstract properties of accented speech are learned during adaptation. The control group in Experiment 2 also adapted to the accented voice during the test block, suggesting adaptation can occur within two to four sentences. The results emphasize the flexibility of the human speech processing system and the need for a mechanism to explain this adaptation in models of spoken word recognition. [Research supported by an NSF Graduate Research Fellowship and the University of Arizona Cognitive Science Program.]
Kun, David William
Unmanned aircraft systems (UASs) are gaining popularity in civil and commercial applications as their lightweight on-board computers become more powerful and affordable, their power storage devices improve, and the Federal Aviation Administration addresses the legal and safety concerns of integrating UASs in the national airspace. Consequently, many researchers are pursuing novel methods to control UASs in order to improve their capabilities, dependability, and safety assurance. The nonlinear control approach is a common choice as it offers several benefits for these highly nonlinear aerospace systems (e.g., the quadrotor). First, the controller design is physically intuitive and is derived from well known dynamic equations. Second, the final control law is valid in a larger region of operation, including far from the equilibrium states. And third, the procedure is largely methodical, requiring less expertise with gain tuning, which can be arduous for a novice engineer. Considering these facts, this thesis proposes a nonlinear controller design method that combines the advantages of adaptive robust control (ARC) with the powerful design tools of linear matrix inequalities (LMI). The ARC-LMI controller is designed with a discontinuous projection-based adaptation law, and guarantees a prescribed transient and steady state tracking performance for uncertain systems in the presence of matched disturbances. The norm of the tracking error is bounded by a known function that depends on the controller design parameters in a known form. Furthermore, the LMI-based part of the controller ensures the stability of the system while overcoming polytopic uncertainties, and minimizes the control effort. This can reduce the number of parameters that require adaptation, and helps to avoid control input saturation. These desirable characteristics make the ARC-LMI control algorithm well suited for the quadrotor UAS, which may have unknown parameters and may encounter external
International Nuclear Information System (INIS)
Iwayama, T; Sueyoshi, M; Watanabe, T
2013-01-01
The linear stability of parallel shear flows for an inviscid generalized two-dimensional (2D) fluid system, the so-called α-turbulence system, is studied. This system is characterized by the relation q = −(−Δ)^{α/2}ψ between the advected scalar q and the stream function ψ. Here, α is a real number not exceeding 3 and q is referred to as the generalized vorticity. In this study, a sufficient condition for linear stability of parallel shear flows is derived using the conservation of wave activity. A stability analysis is then performed for a sheet vortex that violates the stability condition. The instability of a sheet vortex in the 2D Euler system (α = 2) is referred to as a Kelvin–Helmholtz (KH) instability; such an instability for the generalized 2D fluid system is investigated for 0 < α < 3. The growth rate of the perturbation is proportional to k^{3−α} for 1 < α < 3, where k is the wavenumber of the perturbation. In contrast, for 0 < α ⩽ 1, the growth rate is infinite. In other words, a transition of the growth rate of the perturbation occurs at α = 1. A physical model for KH instability in the generalized 2D fluid system, which can explain the transition of the growth rate of the perturbation at α = 1, is proposed. (paper)
Unified Einstein-Virasoro master equation in the general non-linear sigma model
De Boer, J
1997-01-01
The Virasoro master equation (VME) describes the general affine-Virasoro construction T=L^{ab}J_aJ_b+iD^a \partial J_a in the operator algebra of the WZW model, where L^{ab} is the inverse inertia tensor and D^a is the improvement vector. In this paper, we generalize this construction to find the general (one-loop) Virasoro construction in the operator algebra of the general non-linear sigma model. The result is a unified Einstein-Virasoro master equation which couples the spacetime spin-two field L^{ab} to the background fields of the sigma model. For a particular solution L_G^{ab}, the unified system reduces to the canonical stress tensors and conventional Einstein equations of the sigma model, and the system reduces to the general affine-Virasoro construction and the VME when the sigma model is taken to be the WZW action. More generally, the unified system describes a space of conformal field theories which is presumably much larger than the sum of the general affine-Virasoro construction and the sigma model w...
Robust Adaptive Fuzzy Design for Ship Linear-tracking Control with Input Saturation
Directory of Open Access Journals (Sweden)
Yancai Hu
2017-04-01
A robust adaptive control approach is proposed for the underactuated surface ship linear path-tracking control system, based on the backstepping control method and Lyapunov stability theory. By employing a T-S fuzzy system to approximate the nonlinear uncertainties of the control system, the proposed scheme is developed by combining the "dynamic surface control" (DSC) and "minimal learning parameter" (MLP) techniques. The substantial problems of "explosion of complexity" and "curse of dimensionality" that exist in the traditional backstepping technique are circumvented, and the scheme is convenient to implement in applications. In addition, an auxiliary system is developed to deal with the effect of input saturation constraints. The control algorithm avoids the singularity problem of the controller and guarantees the stability of the closed-loop system. The tracking error converges to an arbitrarily small neighborhood. Finally, MATLAB simulation results are given for an application case of a Dalian Maritime University training ship to demonstrate the effectiveness of the proposed scheme.
Directory of Open Access Journals (Sweden)
Muhammad Ammirrul Atiqi Mohd Zainuri
2016-05-01
This paper presents an improvement of a harmonics extraction algorithm, known as the fundamental active current (FAC) adaptive linear element (ADALINE) neural network, with the integration of photovoltaics (PV) into shunt active power filters (SAPFs) as an active current source. Active PV injection in SAPFs should reduce dependency on the grid supply current to supply the system. In addition, with a better and faster harmonics extraction algorithm, the SAPF should perform well, especially under dynamic PV and load conditions. The role of the actual injection current from the SAPF after connecting PVs is evaluated, and the benefit of using FAC ADALINE is confirmed. The proposed SAPF was first simulated and evaluated in MATLAB/Simulink. An experimental laboratory prototype was then developed and tested with a PV simulator (CHROMA 62100H-600S), with the algorithm implemented on a TMS320F28335 Digital Signal Processor (DSP). From simulation and experimental results, significant improvements in terms of total harmonic distortion (THD), time response, and reduction of source power drawn from the grid have been successfully verified and achieved.
Adaptive tracking control of leader-following linear multi-agent systems with external disturbances
Lin, Hanquan; Wei, Qinglai; Liu, Derong; Ma, Hongwen
2016-10-01
In this paper, the consensus problem for leader-following linear multi-agent systems with external disturbances is investigated. Brownian motions are used to describe exogenous disturbances. A distributed tracking controller based on Riccati inequalities with an adaptive law for adjusting coupling weights between neighbouring agents is designed for leader-following multi-agent systems under fixed and switching topologies. In traditional distributed static controllers, the coupling weights depend on the communication graph. However, coupling weights associated with the feedback gain matrix in our method are updated by state errors between neighbouring agents. We further present the stability analysis of leader-following multi-agent systems with stochastic disturbances under switching topology. Most traditional literature requires the graph to be connected all the time, while the communication graph is only assumed to be jointly connected in this paper. The design technique is based on Riccati inequalities and algebraic graph theory. Finally, simulations are given to show the validity of our method.
Synchronization of general complex networks via adaptive control ...
Indian Academy of Sciences (India)
2014-03-07
The synchronization of complex dynamical networks with non-derivative and derivative coupling, as well as time-delay coupling, is investigated by adaptive control schemes.
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
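The residual-versus-raw-data point can be illustrated with a small simulation on hypothetical data: a two-group covariate makes the raw response strongly bimodal (clearly non-normal), yet the residuals of the fitted line are simply the Gaussian noise:

```python
import random
import statistics

random.seed(1)
# A two-group covariate (hypothetical data) makes the raw response
# strongly bimodal, i.e. clearly non-normal.
x = [random.choice([0.0, 10.0]) for _ in range(10000)]
y = [2.0 + 3.0 * xi + random.gauss(0.0, 1.0) for xi in x]

# Closed-form simple-regression OLS fit.
mx, my = statistics.fmean(x), statistics.fmean(y)
b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
     / sum((xi - mx) ** 2 for xi in x))
a = my - b * mx
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# sd(y) is roughly 15 (bimodal response), yet sd(residuals) is
# roughly 1: the residuals, not the raw data, are the Gaussian part.
```

Testing the normality of `y` here would wrongly suggest a transformation, even though the linear model is exactly correct.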
Generalized partial linear varying multi-index coefficient model for gene-environment interactions.
Liu, Xu; Gao, Bin; Cui, Yuehua
2017-03-01
Epidemiological studies have suggested the joint effect of simultaneous exposures to multiple environments on disease risk. However, how environmental mixtures as a whole jointly modify genetic effects on disease risk is still largely unknown. Given the importance of gene-environment (G×E) interactions for many complex diseases, rigorously assessing the interaction effect between genes and environmental mixtures as a whole could shed novel insights into the etiology of complex diseases. For this purpose, we propose a generalized partial linear varying multi-index coefficient model (GPLVMICM) to capture the genetic effect on disease risk modulated by multiple environments as a whole. GPLVMICM is semiparametric in nature, which allows different index loading parameters in different index functions. We estimate the parametric parameters by a profile procedure, and the nonparametric index functions by a B-spline backfitted kernel method. Under some regularity conditions, the proposed parametric and nonparametric estimators are shown to be consistent and asymptotically normal. We propose a generalized likelihood ratio (GLR) test to rigorously assess the linearity of the interaction effect between multiple environments and a gene, and apply a parametric likelihood test to detect linear G×E interaction effects. The finite sample performance of the proposed method is examined through simulation studies and is further illustrated through a real data analysis.
Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method
Jiang, Yuan; He, Yunxiao
2015-01-01
LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, so much biological and biomedical data have been collected and they may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding in the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study. PMID:27217599
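The pLASSO idea, a LASSO criterion augmented with a measure of discrepancy between the model and the prior information, can be sketched for linear regression. The quadratic discrepancy and the function name below are illustrative assumptions, not the paper's exact formulation:

```python
def plasso_objective(beta, X, y, beta_prior, lam, eta):
    """LASSO loss plus a prior-discrepancy penalty (sketch).

    The quadratic discrepancy ||beta - beta_prior||^2 is an
    illustrative stand-in for the paper's discrepancy measure;
    eta controls how strongly the prior information is weighted.
    """
    n = len(y)
    rss = sum((yi - sum(b * xij for b, xij in zip(beta, xi))) ** 2
              for xi, yi in zip(X, y))
    l1 = sum(abs(b) for b in beta)                     # LASSO penalty
    disc = sum((b - bp) ** 2 for b, bp in zip(beta, beta_prior))
    return rss / (2 * n) + lam * l1 + eta * disc
```

With eta = 0 the criterion reduces to the ordinary LASSO objective, matching the paper's description of pLASSO as an extension of LASSO.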
The potential in general linear electrodynamics. Causal structure, propagators and quantization
Energy Technology Data Exchange (ETDEWEB)
Siemssen, Daniel [Department of Mathematical Methods in Physics, Faculty of Physics, University of Warsaw (Poland); Pfeifer, Christian [Institute for Theoretical Physics, Leibniz Universitaet Hannover (Germany); Center of Applied Space Technology and Microgravity (ZARM), Universitaet Bremen (Germany)
2016-07-01
From an axiomatic point of view, the fundamental inputs for a theory of electrodynamics are Maxwell's equations dF=0 (or F=dA) and dH=J, and a constitutive law which relates the field strength 2-form F and the excitation 2-form H. In this talk we consider general linear electrodynamics, the theory of electrodynamics defined by a linear constitutive law. The best known application of this theory is the effective description of electrodynamics inside (linear) media (e.g. birefringence). We analyze the classical theory of the electromagnetic potential A before we use methods familiar from mathematical quantum field theory in curved spacetimes to quantize it. Our analysis of the classical theory contains the derivation of retarded and advanced propagators, the analysis of the causal structure on the basis of the constitutive law (instead of a metric) and a discussion of the classical phase space. This classical analysis sets the stage for the construction of the quantum field algebra and quantum states, including a (generalized) microlocal spectrum condition.
Adaptation of generalized Hill inequalities to anisotropic elastic ...
African Journals Online (AJOL)
Besides, it is proved that there are relations between the bulk and shear moduli and the eigenvalues of cubic and isotropic symmetry; by these relations, the results for optimization with respect to stress states are obtained from those for strain states by interchanging the corresponding terms.
Application of conditional moment tests to model checking for generalized linear models.
Pan, Wei
2002-06-01
Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimating equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.
Orthogonality of the Mean and Error Distribution in Generalized Linear Models.
Huang, Alan; Rathouz, Paul J
2017-01-01
We show that the mean-model parameter is always orthogonal to the error distribution in generalized linear models. Thus, the maximum likelihood estimator of the mean-model parameter will be asymptotically efficient regardless of whether the error distribution is known completely, known up to a finite vector of parameters, or left completely unspecified, in which case the likelihood is taken to be an appropriate semiparametric likelihood. Moreover, the maximum likelihood estimator of the mean-model parameter will be asymptotically independent of the maximum likelihood estimator of the error distribution. This generalizes some well-known results for the special cases of normal, gamma and multinomial regression models, and, perhaps more interestingly, suggests that asymptotically efficient estimation and inferences can always be obtained if the error distribution is nonparametrically estimated along with the mean. In contrast, estimation and inferences using misspecified error distributions or variance functions are generally not efficient.
Métris, Aline; George, Susie M; Ropers, Delphine
2017-01-02
Addition of salt to food is one of the most ancient and most common methods of food preservation. However, little is known of how bacterial cells adapt to such conditions. We propose to use piecewise linear approximations to model the regulatory adaptation of Escherichia coli to osmotic stress. We apply the method to eight selected genes representing the functions known to be at play during osmotic adaptation. The network is centred on the general stress response factor, sigma S, and also includes a module representing the catabolic repressor CRP-cAMP. Glutamate, potassium and supercoiling are combined to represent the intracellular regulatory signal during osmotic stress induced by salt. The output is a module where growth is represented by the concentration of stable RNAs and the transcription of the osmotic gene osmY. The time course of gene expression of the osmoprotectant transporter proP and of osmY is successfully reproduced by the network. The behaviour of the rpoS mutant predicted by the model is in agreement with experimental data. We discuss the application of the model to food-borne pathogens such as Salmonella; although the genes considered have orthologs, it seems that supercoiling is not regulated in the same way. The model is limited to a few selected genes, but the regulatory interactions are numerous and span different time scales. In addition, they seem to be condition specific: the links that are important during the transition from exponential to stationary phase are not all needed during osmotic stress. This model is one of the first steps towards modelling adaptation to stress in food safety and has scope to be extended to other genes and pathways, other stresses relevant to the food industry, and food-borne pathogens. The method offers a good compromise between systems of ordinary differential equations, which would be unmanageable because of the size of the system and for which insufficient data are available.
Non-cooperative stochastic differential game theory of generalized Markov jump linear systems
Zhang, Cheng-ke; Zhou, Hai-ying; Bin, Ning
2017-01-01
This book systematically studies the stochastic non-cooperative differential game theory of generalized linear Markov jump systems and its application in the field of finance and insurance. The book is an in-depth study of continuous-time and discrete-time linear quadratic stochastic differential games, in order to establish a relatively complete framework of dynamic non-cooperative differential game theory. It uses the method of the dynamic programming principle and the Riccati equation to derive various existence conditions and calculation methods for the equilibrium strategies of dynamic non-cooperative differential games. Based on the game theory method, this book studies the corresponding robust control problem, especially the existence condition and design method of the optimal robust control strategy. The book discusses the theoretical results and their applications in risk control, option pricing, and the optimal investment problem in the field of finance and insurance, enriching the...
Liu, Ying; Xu, Zhenhuan; Li, Yuguo
2018-04-01
We present a goal-oriented adaptive finite element (FE) modelling algorithm for 3-D magnetotelluric fields in generally anisotropic conductivity media. The model consists of a background layered structure containing anisotropic blocks. Each block and layer may be anisotropic by assigning to them 3 × 3 conductivity tensors. The second-order partial differential equations are solved using the adaptive finite element method (FEM). The computational domain is subdivided into unstructured tetrahedral elements, which allow for complex geometries including bathymetry and dipping interfaces. The grid refinement process is guided by a global a posteriori error estimator and is performed iteratively. The system of linear FE equations for the electric field E is solved with the direct solver MUMPS. The magnetic field H can then be found, with the required derivatives computed numerically using cubic spline interpolation. The 3-D FE algorithm has been validated by comparisons with both a 3-D finite-difference solution and 2-D FE results. Two model types are used to demonstrate the effects of anisotropy upon 3-D magnetotelluric responses: horizontal and dipping anisotropy. Finally, a 3-D sea-hill model is simulated to study the effect of oblique interfaces and dipping anisotropy.
International Nuclear Information System (INIS)
Yan Zhenya; Yu Pei
2007-01-01
In this paper, we study chaos (lag) synchronization of a new LC chaotic system, which can exhibit not only a two-scroll attractor but also two double-scroll attractors for different parameter values, via three types of state feedback controls: (i) linear feedback control; (ii) adaptive feedback control; and (iii) a combination of linear feedback and adaptive feedback controls. As a consequence, ten families of new feedback control laws are designed to obtain global chaos lag synchronization for τ < 0 and global chaos synchronization for τ = 0 of the LC system. Numerical simulations are used to illustrate these theoretical results. Each family of these obtained feedback control laws, including two linear (adaptive) functions or one linear function and one adaptive function, is added to two equations of the LC system. This is simpler than the known synchronization controllers, which apply controllers to all equations of the LC system. Moreover, based on the obtained results of the LC system, we also derive the control laws for chaos (lag) synchronization of another new type of chaotic system
Use of generalized linear mixed models for network meta-analysis.
Tu, Yu-Kang
2014-10-01
In the past decade, a new statistical method, network meta-analysis, has been developed to address limitations in traditional pairwise meta-analysis. Network meta-analysis incorporates all available evidence into a general statistical framework for comparisons of multiple treatments. Bayesian network meta-analysis, as proposed by Lu and Ades, also known as "mixed treatments comparisons," provides a flexible modeling framework to take into account complexity in the data structure. This article shows how to implement the Lu and Ades model in the frequentist generalized linear mixed model. Two examples are provided to demonstrate how centering the covariates for random effects estimation within each trial can yield correct estimation of random effects. Moreover, under the correct specification for random effects estimation, the dummy coding and contrast basic parameter coding schemes will yield the same results. It is straightforward to incorporate covariates, such as moderators and confounders, into the generalized linear mixed model to conduct meta-regression for multiple treatment comparisons. Moreover, this approach may be extended easily to other types of outcome variables, such as continuous, counts, and multinomial. © The Author(s) 2014.
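The within-trial centering device described above can be sketched as follows; the record format and function name are illustrative, not the article's notation:

```python
def centered_dummies(arms, treatments):
    """Build treatment dummies centered within each trial (sketch).

    `arms` is a list of (trial, treatment) records. Centering each
    dummy around its within-trial mean is the device the article
    describes for correct random-effects estimation in the GLMM.
    """
    by_trial = {}
    for trial, trt in arms:
        by_trial.setdefault(trial, []).append(trt)
    rows = []
    for trial, trt in arms:
        group = by_trial[trial]
        rows.append([(1.0 if trt == t else 0.0) - group.count(t) / len(group)
                     for t in treatments])
    return rows
```

After centering, each dummy column sums to zero within every trial, so the trial-level random effects no longer absorb part of the treatment contrast.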
Chen, Hsiang-Chun; Wehrly, Thomas E
2015-02-20
The classic concordance correlation coefficient measures the agreement between two variables. In recent studies, concordance correlation coefficients have been generalized to deal with responses from a distribution from the exponential family using the univariate generalized linear mixed model. Multivariate data arise when responses on the same unit are measured repeatedly by several methods. The relationship among these responses is often of interest. In clustered mixed data, the correlation could be present between repeated measurements either within the same observer or between different methods on the same subjects. Indices for measuring such association are needed. This study proposes a series of indices, namely, intra-correlation, inter-correlation, and total correlation coefficients to measure the correlation under various circumstances in a multivariate generalized linear model, especially for joint modeling of clustered count and continuous outcomes. The proposed indices are natural extensions of the concordance correlation coefficient. We demonstrate the methodology with simulation studies. A case example of osteoarthritis study is provided to illustrate the use of these proposed indices. Copyright © 2014 John Wiley & Sons, Ltd.
Generalized adaptive strategies for edge detection in digital imagery
Sundaram, Ramakrishnan
1998-10-01
Edges in digital imagery can be identified from the zero-crossings of Laplacian of Gaussian (LOG) filtered images. Time- or frequency-sampled LOG filters have been developed for the detection and localization of edges in digital image data. The image is decomposed into overlapping subblocks and processed in the transform domain. Adaptive algorithms are developed to minimize spurious edge classifications. In order to achieve accurate and efficient implementations, the discrete symmetric cosine transform of the input data is employed in conjunction with adaptive filters. The adaptive selection of the filter coefficients is based on the gradient criterion. For instance, in the case of the frequency-sampled LOG filter, the filter parameter is systematically varied to force the rejection of false or weak edges. In addition, the proposed algorithms extend easily to higher dimensions. This is useful where 3D medical image data containing edge information have been corrupted by noise. This paper employs isotropic and non-isotropic filters to track edges in such images.
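A minimal one-dimensional sketch of LoG zero-crossing detection; the kernel width and the fixed rejection tolerance are illustrative choices, not the paper's adaptive, transform-domain scheme:

```python
import math

def log_kernel(sigma, radius):
    """Sampled 1-D Laplacian-of-Gaussian kernel, shifted to zero sum."""
    k = [(x * x / sigma**4 - 1.0 / sigma**2) * math.exp(-x * x / (2.0 * sigma**2))
         for x in range(-radius, radius + 1)]
    mean = sum(k) / len(k)
    return [v - mean for v in k]  # zero sum: no response on flat regions

def zero_crossings(signal, kernel, tol=1e-9):
    """Indices where the LoG-filtered signal changes sign (edges)."""
    r = len(kernel) // 2
    resp = [sum(kernel[j] * signal[i + j - r] for j in range(len(kernel)))
            for i in range(r, len(signal) - r)]
    # The tolerance rejects the numerically-zero response over flat
    # regions, a crude stand-in for the paper's adaptive rejection
    # of false or weak edges.
    return [i + r for i in range(len(resp) - 1)
            if resp[i] * resp[i + 1] < -tol]

# A step edge between samples 9 and 10 of a 1-D signal.
signal = [0.0] * 10 + [1.0] * 10
edges = zero_crossings(signal, log_kernel(1.0, 4))
```

Raising the tolerance plays the role of the varied filter parameter: weak responses are discarded while the strong sign change at the true edge survives.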
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
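The dispersion flexibility described above can be sketched directly from the hyper-Poisson probability mass function, P(X = x) ∝ λ^x/(γ)_x with normalizing constant ₁F₁(1; γ; λ): γ = 1 recovers the Poisson, γ > 1 yields overdispersion, and γ < 1 underdispersion. This is an editorial illustration of the distribution itself, not the authors' GLM fitting procedure; the series truncation length is an assumption.

```python
import math

def hyper_poisson_pmf(x, lam, gamma, terms=60):
    # P(X = x) = lam^x / (gamma)_x / 1F1(1; gamma; lam),
    # where (gamma)_x is the rising factorial (Pochhammer symbol)
    def poch(g, k):
        p = 1.0
        for i in range(k):
            p *= g + i
        return p
    norm = sum(lam ** k / poch(gamma, k) for k in range(terms))  # truncated 1F1
    return lam ** x / poch(gamma, x) / norm

def mean_var(lam, gamma, terms=60):
    # numerical mean and variance from the (truncated) pmf
    ps = [hyper_poisson_pmf(x, lam, gamma, terms) for x in range(terms)]
    m = sum(x * p for x, p in enumerate(ps))
    v = sum((x - m) ** 2 * p for x, p in enumerate(ps))
    return m, v

# gamma = 1 collapses (gamma)_x to x!, recovering the Poisson pmf exactly
p2 = hyper_poisson_pmf(2, 1.5, 1.0)
```

Computing `mean_var` for γ above and below 1 reproduces the over/underdispersion behavior the abstract exploits for the Toronto and Korea data sets.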
Random generalized linear model: a highly accurate and interpretable ensemble predictor.
Song, Lin; Langfelder, Peter; Horvath, Steve
2013-01-16
Ensemble predictors such as the random forest are known to have superior accuracy, but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable, especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal, several articles have explored GLM-based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM-based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a "thinned" ensemble predictor (involving few features) that retains excellent predictive accuracy. RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM.
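The bagging-plus-random-subspace recipe can be sketched in miniature: each ensemble member is a logistic GLM fit to a bootstrap sample restricted to a random feature subset, and predictions are combined by majority vote. This is a simplified editorial stand-in, not the randomGLM implementation (which adds forward selection and optional interaction terms); the toy data and all tuning constants are assumptions.

```python
import math, random

def fit_logistic(X, y, epochs=300, lr=0.5):
    # logistic regression by stochastic gradient ascent; w[0] is the intercept
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            g = yi - p
            w[0] += lr * g
            for j, xj in enumerate(xi):
                w[j + 1] += lr * g * xj
    return w

def rglm_fit(X, y, n_models=15, subspace=2, seed=0):
    # each member sees a bootstrap sample and a random subset of features
    rng = random.Random(seed)
    n, ensemble = len(X), []
    for _ in range(n_models):
        boot = [rng.randrange(n) for _ in range(n)]
        feats = sorted(rng.sample(range(len(X[0])), subspace))
        Xb = [[X[i][j] for j in feats] for i in boot]
        ensemble.append((feats, fit_logistic(Xb, [y[i] for i in boot])))
    return ensemble

def rglm_predict(ensemble, x):
    votes = sum(
        (w[0] + sum(w[k + 1] * x[j] for k, j in enumerate(feats))) > 0
        for feats, w in ensemble)
    return int(2 * votes > len(ensemble))   # majority vote

# toy data: the class depends on the first two of three features
X = [[i / 30, ((i * 7) % 30) / 30, ((i * 11) % 30) / 30] for i in range(30)]
y = [int(xi[0] + xi[1] > 1.0) for xi in X]
ens = rglm_fit(X, y)
acc = sum(rglm_predict(ens, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

Because each member sees only a feature subset, averaging over the member coefficients is also what makes the thinned, interpretable ensemble described in the abstract possible.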
Generalized Coherent States of a Particle in a Time-Dependent Linear Potential
International Nuclear Information System (INIS)
Krache, L.; Maamache, M.; Saadi, Y.; Beniaiche, A.
2009-01-01
We show, using an invariant operator method and a unitary transformation approach, that the Schrödinger equation with a time-dependent linear potential possesses an infinite string of shape-preserving wave-packet states |φα,λ(t)⟩ having classical motion. The qualitative properties of the invariant eigenvalue spectrum (discrete or continuous) are described separately for the different values of the frequency ω of a harmonic oscillator. It is also shown that, for a discrete eigenvalue spectrum, the states |φα,n(t)⟩ can be obtained from the coherent state |φα,0(t)⟩. (general)
A generalization of Dirac non-linear electrodynamics, and spinning charged particles
International Nuclear Information System (INIS)
Rodrigues Junior, W.A.; Vaz Junior, J.; Recami, E.
1992-08-01
The Dirac non-linear electrodynamics is generalized by introducing two potentials (namely, the vector potential A and the pseudo-vector potential γ5B of the electromagnetic theory with charges and magnetic monopoles), and by imposing the pseudoscalar part of the product WW* to be zero, with W = A + γ5B. It is also demonstrated that the field equations of such a theory possess a soliton-like solution which can a priori represent a charged particle. (L.C.J.A.)
Donmez, Orhan
We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive-Mesh Refinement (AMR) and to model an accretion disk around a black hole. To do this, the GRH equations are written in a conservative form to exploit their hyperbolic character. The numerical solutions of the general relativistic hydrodynamic equations are obtained with High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D and 3D. Second, we solve the GRH equations and use the general relativistic test problems to compare the numerical solutions with analytic ones. For this purpose, we couple the flux part of the general relativistic hydrodynamic equations with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around the black hole. Finally, we apply the code to accretion disk problems around a black hole, using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk, which are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, they create shock waves which destroy the accretion disk.
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Lo, Steson; Andrews, Sally
2015-01-01
Linear mixed-effects models (LMMs) are increasingly widely used in psychology to analyse multi-level research designs. Because they do not average across individual responses, LMMs address some of the problems identified by Speelman and McGann (2013) about the use of mean data. However, recent guidelines for using LMMs to analyse the skewed reaction time (RT) data collected in many cognitive psychology studies recommend applying non-linear transformations to satisfy assumptions of normality. Uncritical adoption of this recommendation has important theoretical implications and can yield misleading conclusions. For example, Balota et al. (2013) showed that analyses of raw RT produced additive effects of word frequency and stimulus quality on word identification, which conflicted with the interactive effects observed in analyses of transformed RT. Generalized linear mixed-effects models (GLMMs) provide a solution to this problem by satisfying normality assumptions without the need for transformation. This allows differences between individuals to be properly assessed, using the metric most appropriate to the researcher's theoretical context. We outline the major theoretical decisions involved in specifying a GLMM and illustrate them by reanalysing Balota et al.'s datasets. We then consider the broader benefits of using GLMMs to investigate individual differences. PMID:26300841
Prospects of measuring general Higgs couplings at e{sup +}e{sup -} linear colliders
Energy Technology Data Exchange (ETDEWEB)
Hagiwara, K. [KEK, Ibaraki (Japan). Theory Group; Ishihara, S. [KEK, Ibaraki (Japan). Theory Group; Department of Physics, Hyogo University of Education, 941-1 Shimokume, Yashiro, Kato, Hyogo 673-1494 (Japan); Kamoshita, J. [Department of Physics, Ochanomizu University, 2-1-1 Otsuka, Bunkyo, Tokyo 112-8610 (Japan); Kniehl, B.A. [II. Institut fuer Theoretische Physik, Universitaet Hamburg, Luruper Chaussee 149, 22761 Hamburg (Germany)
2000-06-01
We examine how accurately the general HZV couplings, with V=Z{gamma}, may be determined by studying e{sup +}e{sup -}{yields}Hf anti f processes at future e{sup +}e{sup -} linear colliders. By using the optimal-observable method, which makes use of all available experimental information, we find out which combinations of the various HZV coupling terms may be constrained most efficiently with high luminosity. We also assess the benefits of measuring the tau-lepton helicities, identifying the bottom-hadron charges, polarizing the electron beam and running at two different collider energies. The HZZ couplings are generally found to be well constrained, even without these options, while the HZ{gamma} couplings are not. The constraints on the latter may be significantly improved by beam polarization. (orig.)
Vector generalized linear and additive models with an implementation in R
Yee, Thomas W
2015-01-01
This book presents a statistical framework that expands generalized linear models (GLMs) for regression modelling. The framework shared in this book allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole. This is possible through the approximately half-a-dozen major classes of statistical models included in the book and the software infrastructure component, which makes the models easily operable. The book’s methodology and accompanying software (the extensive VGAM R package) are directed at these limitations, and this is the first time the methodology and software are covered comprehensively in one volume. Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. The demands of practical data analysis, however, require a flexibility that GLMs do not have. Data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. This book ...
Directory of Open Access Journals (Sweden)
Nicola Koper
2012-03-01
Full Text Available Resource selection functions (RSF) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM) and generalized estimating equations (GEE) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross-validation to assess the fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
Robust-BD Estimation and Inference for General Partially Linear Models
Directory of Open Access Journals (Sweden)
Chunming Zhang
2017-11-01
Full Text Available The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of "robust-Bregman divergence (BD)" estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally efficient procedure for obtaining "robust-BD" estimators and establish the consistency and asymptotic normality of the "robust-BD" estimator of the parametric component β_o. For inference procedures on β_o in the GPLM, we show that the Wald-type test statistic W_n constructed from the "robust-BD" estimators is asymptotically distribution-free under the null, whereas the likelihood-ratio-type test statistic Λ_n is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang 2005) between W_n and Λ_n in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed "robust-BD" estimators and robust Wald-type test in the presence of outlying observations.
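The core robustness mechanism (bounding the influence of outlying observations on the parametric fit) can be illustrated with a much simpler stand-in than the paper's Bregman-divergence machinery: Huber-weighted iteratively reweighted least squares for a straight-line fit. Everything here is an editorial sketch under assumed data; delta and the iteration count are arbitrary.

```python
def wls_line(x, y, w=None):
    # weighted least-squares fit of y = a + b*x
    w = w or [1.0] * len(x)
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return my - b * mx, b

def huber_line(x, y, delta=0.5, iters=25):
    # IRLS with Huber weights: points with residual |r| > delta
    # get weight delta/|r|, so one gross outlier cannot dominate
    a, b = wls_line(x, y)
    for _ in range(iters):
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        w = [1.0 if abs(ri) <= delta else delta / abs(ri) for ri in r]
        a, b = wls_line(x, y, w)
    return a, b

x = list(range(10))
y = [2.0 * xi for xi in x]        # true slope 2
y[9] = 100.0                      # one gross outlier
_, b_ols = wls_line(x, y)         # non-robust fit, badly pulled
_, b_rob = huber_line(x, y)       # robust fit, close to 2
```

The same contrast, in far greater generality, is what separates the non-robust quadratic-loss estimators from the robust-BD estimators in the abstract.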
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, there has been considerable work recently developed for consistent estimation of causal relative risks and causal odds ratios. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
Directory of Open Access Journals (Sweden)
David T Redden
2006-08-01
Full Text Available Individual genetic admixture estimates, determined both across the genome and at specific genomic regions, have been proposed for use in identifying specific genomic regions harboring loci influencing phenotypes in regional admixture mapping (RAM). Estimates of individual ancestry can be used in structured association tests (SAT) to reduce confounding induced by various forms of population substructure. Although presented as two distinct approaches, we provide a conceptual framework in which both RAM and SAT are special cases of a more general linear model. We clarify which variables are sufficient to condition upon in order to prevent spurious associations and also provide a simple closed form "semiparametric" method of evaluating the reliability of individual admixture estimates. An estimate of the reliability of individual admixture estimates is required to make an inherent errors-in-variables problem tractable. Casting RAM and SAT methods as a general linear model offers enormous flexibility enabling application to a rich set of phenotypes, populations, covariates, and situations, including interaction terms and multilocus models. This approach should allow far wider use of RAM and SAT, often using standard software, in addressing admixture as either a confounder of association studies or a tool for finding loci influencing complex phenotypes in species as diverse as plants, humans, and nonhuman animals.
Model-free adaptive sliding mode controller design for generalized ...
Indian Academy of Sciences (India)
To solve the difficulties from the little knowledge about the master–slave system and to overcome the bad effects of the external disturbances on the generalized projective synchronization, the radial basis function neural networks are used to approach the packaged unknown master system and the packaged unknown ...
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Ultra Linear Low-loss Varactors & Circuits for Adaptive RF Systems
Huang, C.
2010-01-01
With the evolution of wireless communication, varactors can play an important role in enabling adaptive transceivers as well as phase-diversity systems. This thesis presents various varactor diode-based circuit topologies that facilitate RF adaptivity. The proposed varactor configurations can act as
MCMC Methods for Multi-Response Generalized Linear Mixed Models: The MCMCglmm R Package
Directory of Open Access Journals (Sweden)
Jarrod Had
2010-02-01
Full Text Available Generalized linear mixed models provide a flexible framework for modeling a range of data, although with non-Gaussian response variables the likelihood cannot be obtained in closed form. Markov chain Monte Carlo methods solve this problem by sampling from a series of simpler conditional distributions that can be evaluated. The R package MCMCglmm implements such an algorithm for a range of model-fitting problems. More than one response variable can be analyzed simultaneously, and these variables are allowed to follow Gaussian, Poisson, multi(bi)nomial, exponential, zero-inflated and censored distributions. A range of variance structures are permitted for the random effects, including interactions with categorical or continuous variables (i.e., random regression), and more complicated variance structures that arise through shared ancestry, either through a pedigree or through a phylogeny. Missing values are permitted in the response variable(s), and data can be known up to some level of measurement error, as in meta-analysis. All simulation is done in C/C++ using the CSparse library for sparse linear systems.
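The central MCMC idea (sampling an intractable non-Gaussian posterior through a chain of simple proposals) can be shown in miniature with random-walk Metropolis for the log-mean of Poisson counts under a vague normal prior. This is a generic illustration, not MCMCglmm's algorithm, which also updates latent variables and variance structures; the step size, prior standard deviation, and data are assumptions.

```python
import math, random

def metropolis_poisson(counts, n_iter=5000, prior_sd=10.0, step=0.2, seed=1):
    # random-walk Metropolis on b = log(mean) of a Poisson model
    rng = random.Random(seed)

    def log_post(b):
        # Poisson log-likelihood (dropping the constant sum of log y!)
        # plus a vague normal prior on b
        lam = math.exp(b)
        return sum(y * b - lam for y in counts) - b * b / (2 * prior_sd ** 2)

    b, draws = 0.0, []
    for _ in range(n_iter):
        prop = b + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_post(prop) - log_post(b):
            b = prop                      # accept the proposal
        draws.append(b)
    return draws

counts = [3, 5, 4, 6, 2, 5, 4, 3]         # sample mean 4.0
draws = metropolis_poisson(counts)
kept = draws[1000:]                        # discard burn-in
post_mean_rate = sum(math.exp(b) for b in kept) / len(kept)
```

With a vague prior the posterior mean of the rate sits close to the sample mean; MCMCglmm applies the same accept/reject logic within much richer conditional updates.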
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environment variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
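The BLUP-ridge equivalence stated in the abstract can be checked numerically on a toy random-intercept model y = Zu + e with u ~ N(0, σ²_u I) and e ~ N(0, σ²_e I): the BLUP σ²_u Z'(σ²_u ZZ' + σ²_e I)⁻¹ y coincides with the ridge estimator (Z'Z + λI)⁻¹ Z'y at λ = σ²_e/σ²_u. A minimal sketch with assumed numbers, not the paper's GLMM setting:

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small system
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# random-intercept design: 4 observations in 2 groups
Z = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
y = [1.2, 0.8, -0.5, -0.9]
su2, se2 = 2.0, 1.0               # variance components
lam = se2 / su2                   # implied ridge penalty

Zt = transpose(Z)
# ridge estimator: (Z'Z + lam*I)^{-1} Z'y
ZtZ = [[sum(a * b for a, b in zip(r, c)) for c in zip(*Z)] for r in Zt]
A = [[ZtZ[i][j] + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
ridge = solve(A, matvec(Zt, y))
# BLUP: su2 * Z'(su2*ZZ' + se2*I)^{-1} y
ZZt = [[sum(a * b for a, b in zip(r, c)) for c in zip(*Zt)] for r in Z]
V = [[su2 * ZZt[i][j] + (se2 if i == j else 0.0) for j in range(4)]
     for i in range(4)]
blup = [su2 * v for v in matvec(Zt, solve(V, y))]
```

Both vectors agree, which is exactly why the variance-component ratio can stand in for a cross-validated ridge penalty.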
A distributed-memory hierarchical solver for general sparse linear systems
Energy Technology Data Exchange (ETDEWEB)
Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering
2017-12-20
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang
2015-11-17
Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for physicians' decisions on office location, covering demand-side factors and a consumption-time function. To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association with highly populated districts than GPs. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. If regional disparities are to be addressed through policy action, the focus should be on counteracting the parameters representing physicians' preferences in over- and undersupplied regions.
General theories of linear gravitational perturbations to a Schwarzschild black hole
Tattersall, Oliver J.; Ferreira, Pedro G.; Lagos, Macarena
2018-02-01
We use the covariant formulation proposed by Tattersall, Lagos, and Ferreira [Phys. Rev. D 96, 064011 (2017), 10.1103/PhysRevD.96.064011] to analyze the structure of linear perturbations about a spherically symmetric background in different families of gravity theories, and hence study how quasinormal modes of perturbed black holes may be affected by modifications to general relativity. We restrict ourselves to single-tensor, scalar-tensor and vector-tensor diffeomorphism-invariant gravity models in a Schwarzschild black hole background. We show explicitly the full covariant form of the quadratic actions in such cases, which allow us to then analyze odd parity (axial) and even parity (polar) perturbations simultaneously in a straightforward manner.
Hennelly, Bryan M.; Sheridan, John T.
2005-05-01
By use of matrix-based techniques it is shown how the space-bandwidth product (SBP) of a signal, as indicated by the location of the signal energy in the Wigner distribution function, can be tracked through any quadratic-phase optical system whose operation is described by the linear canonical transform. Then, applying the regular uniform sampling criteria imposed by the SBP and linking the criteria explicitly to a decomposition of the optical matrix of the system, it is shown how numerical algorithms (employing interpolation and decimation), which exhibit both invertibility and additivity, can be implemented. Algorithms appearing in the literature for a variety of transforms (Fresnel, fractional Fourier) are shown to be special cases of our general approach. The method is shown to allow the existing algorithms to be optimized and is also shown to permit the invention of many new algorithms.
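The matrix decomposition underlying such algorithms can be sketched concretely: any unit-determinant ABCD matrix with a ≠ 0 factors into a chirp multiplication (lens), a scaling (magnification), and a chirp convolution (free-space Fresnel step), each of which maps onto an elementary numerical operation. This particular factorization is one standard choice for illustration, not necessarily the one the authors optimize.

```python
import math

def mat2(A, B):
    # 2x2 matrix product
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def decompose(M):
    # M = [[a,b],[c,d]], det M = 1, a != 0:
    # M = [[1,0],[c/a,1]] * [[a,0],[0,1/a]] * [[1,b/a],[0,1]]
    (a, b), (c, d) = M
    assert abs(a * d - b * c - 1.0) < 1e-12 and a != 0
    lens = [[1.0, 0.0], [c / a, 1.0]]       # chirp multiplication
    mag = [[a, 0.0], [0.0, 1.0 / a]]        # scaling / magnification
    fresnel = [[1.0, b / a], [0.0, 1.0]]    # chirp convolution
    return lens, mag, fresnel

theta = 0.7   # a fractional-Fourier-transform rotation angle
M = [[math.cos(theta), math.sin(theta)],
     [-math.sin(theta), math.cos(theta)]]
L, S, U = decompose(M)
R = mat2(mat2(L, S), U)   # recompose to verify the factorization
```

Sampling rules follow the same bookkeeping: each factor rescales or chirps the Wigner-domain support, so tracking the factors tracks the space-bandwidth product.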
Directory of Open Access Journals (Sweden)
Nurdan Cetin
2014-01-01
Full Text Available We consider a multiobjective linear fractional transportation problem (MLFTP) with several fractional criteria, such as the maximization of transport profitability ratios like profit/cost or profit/time, defined over sources and destinations. Our aim is to introduce the MLFTP, which has not been studied in the literature before, and to provide a fuzzy approach which obtains a compromise Pareto-optimal solution for this problem. To do this, we first present a theorem showing that the MLFTP is always solvable. Then, reducing the MLFTP to Zimmermann's "min"-operator model, which is a max-min problem, we construct a Generalized Dinkelbach Algorithm for solving the resulting problem. Furthermore, we provide an illustrative numerical example to explain this fuzzy approach.
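Dinkelbach-type algorithms reduce a fractional objective max f/g to a sequence of parametric problems max f − λg, updating λ to the current ratio until the parametric optimum hits zero. A minimal single-ratio sketch over a finite candidate set (a hypothetical route-selection toy, far simpler than the paper's fuzzy multiobjective transportation setting):

```python
def dinkelbach(candidates, f, g, tol=1e-9, max_iter=100):
    # maximize f(x)/g(x) with g > 0 by repeatedly solving
    # the parametric problem max_x f(x) - lam * g(x)
    lam = 0.0
    for _ in range(max_iter):
        x = max(candidates, key=lambda v: f(v) - lam * g(v))
        if abs(f(x) - lam * g(x)) < tol:
            return x, lam          # parametric optimum is zero: done
        lam = f(x) / g(x)          # update lam to the achieved ratio
    return x, lam

# toy example: pick the route maximizing profit/time
routes = [(30.0, 10.0), (45.0, 18.0), (12.0, 3.0)]   # (profit, time)
best, ratio = dinkelbach(routes, f=lambda r: r[0], g=lambda r: r[1])
```

In the paper, the inner maximization is a linear program over the transportation polytope rather than a scan over a finite list, but the λ-update logic is the same.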
Directory of Open Access Journals (Sweden)
Wen-Min Zhou
2013-01-01
Full Text Available This paper is concerned with the consensus problem of general linear discrete-time multiagent systems (MASs) with random packet dropout occurring during information exchange between agents. The packet dropout phenomenon is characterized as a Bernoulli random process. A distributed consensus protocol with a weighted graph is proposed to address packet dropout. Through introducing a new disagreement vector, a new framework is established to solve the consensus problem. Based on control theory, the perturbation argument, and matrix theory, the necessary and sufficient condition for the MASs to reach mean-square consensus is derived in terms of the stability of an array of low-dimensional matrices. Moreover, mean-square consensusability conditions with regard to network topology and agent dynamic structure are also provided. Finally, the effectiveness of the theoretical results is demonstrated through an illustrative example.
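The Bernoulli-dropout setting can be simulated directly: at each step, every agent moves toward the neighbor states whose packets actually arrived, with each packet independently lost with some probability. This plain single-integrator simulation is an editorial illustration of the dropout mechanism, not the paper's general linear dynamics or its mean-square analysis; the ring topology, gain, and dropout rate are assumptions.

```python
import random

def consensus_with_dropout(x0, neighbors, gamma=0.2, p_drop=0.3,
                           steps=200, seed=1):
    # each step, agent i adds gamma*(x[j]-x[i]) for every neighbor j
    # whose packet arrived (each packet lost independently w.p. p_drop)
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        nxt = x[:]
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                if rng.random() >= p_drop:      # packet from j arrived
                    nxt[i] += gamma * (x[j] - x[i])
        x = nxt
    return x

ring = [[1, 3], [0, 2], [1, 3], [0, 2]]         # 4 agents on a ring
x = consensus_with_dropout([0.0, 1.0, 2.0, 3.0], ring)
spread = max(x) - min(x)
```

With gamma small enough that each update is a convex combination of current states, the disagreement contracts despite randomly failing links, which is the behavior the paper's mean-square conditions characterize exactly.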
Qamar, Shamsul; Uche, David U; Khan, Farman U; Seidel-Morgenstern, Andreas
2017-05-05
This work is concerned with the analytical solutions and moment analysis of a linear two-dimensional general rate model (2D-GRM) describing the transport of a solute through a chromatographic column of cylindrical geometry. Analytical solutions are derived through successive implementation of finite Hankel and Laplace transformations for two different sets of boundary conditions. The process is further analyzed by deriving analytical temporal moments from the Laplace domain solutions. Radial gradients are typically neglected in liquid chromatography studies which are particularly important in the case of non-perfect injections. Several test problems of single-solute transport are considered. The derived analytical results are validated against the numerical solutions of a high resolution finite volume scheme. The derived analytical results can play an important role in further development of liquid chromatography. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimal Stochastic Control Problem for General Linear Dynamical Systems in Neuroscience
Directory of Open Access Journals (Sweden)
Yan Chen
2017-01-01
This paper considers a d-dimensional stochastic optimization problem in neuroscience. Assuming the arm's movement trajectory is modeled by a high-order linear stochastic differential dynamic system in d-dimensional space, the optimal trajectory, velocity, and variance are explicitly obtained using the stochastic control method, which allows us to analytically establish exact relationships between various quantities. Moreover, the optimal trajectory is almost a straight line for a reaching movement, the optimal velocity is bell-shaped, and the optimal variance is consistent with the experimental Fitts law; that is, the longer the time of a reaching movement, the higher the accuracy of arriving at the target position. The results can be directly applied to designing a reaching movement performed by a robotic arm in a more general environment.
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-05-01
We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.
Generalized partially linear single-index model for zero-inflated count data.
Wang, Xiaoguang; Zhang, Jun; Yu, Liang; Yin, Guosheng
2015-02-28
Count data often arise in biomedical studies, while there could be a special feature with excessive zeros in the observed counts. The zero-inflated Poisson model provides a natural approach to accounting for the excessive zero counts. In the semiparametric framework, we propose a generalized partially linear single-index model for the mean of the Poisson component, the probability of zero, or both. We develop the estimation and inference procedure via a profile maximum likelihood method. Under some mild conditions, we establish the asymptotic properties of the profile likelihood estimators. The finite sample performance of the proposed method is demonstrated by simulation studies, and the new model is illustrated with a medical care dataset. Copyright © 2014 John Wiley & Sons, Ltd.
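As a minimal sketch of the zero-inflated Poisson likelihood underlying this class of models (intercept-only, fit to simulated data with hypothetical parameter values; the authors' actual method uses a partially linear single-index structure estimated by profile likelihood):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    """Mean negative log-likelihood of a zero-inflated Poisson:
    params[0] is the logit of the zero-inflation probability pi,
    params[1] is the log of the Poisson mean lam."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))
    lam = np.exp(params[1])
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))          # structural or Poisson zero
    ll_pos = np.log(1.0 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.mean(np.where(y == 0, ll_zero, ll_pos))

rng = np.random.default_rng(0)
n = 5000
true_pi, true_lam = 0.3, 2.0
zeros = rng.random(n) < true_pi                    # structural zeros
y = np.where(zeros, 0, rng.poisson(true_lam, n))

res = minimize(zip_negloglik, x0=np.zeros(2), args=(y,))
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

With covariates, the constant logit and log parameters would be replaced by linear (or single-index) predictors for the zero probability and the Poisson mean.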
Directory of Open Access Journals (Sweden)
Enrique Calderín-Ojeda
2017-11-01
Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
No Evidence for a Low Linear Energy Transfer Adaptive Response in Irradiated RKO Cells
Energy Technology Data Exchange (ETDEWEB)
Sowa, Marianne B.; Goetz, Wilfried; Baulch, Janet E.; Lewis, Adam J.; Morgan, William F.
2011-01-06
It has become increasingly evident from reports in the literature that there are many confounding factors capable of modulating radiation-induced non-targeted responses such as the bystander effect and the adaptive response. In this paper we examine recent data suggesting that non-targeted responses may not be universally observable for differing radiation qualities. We conducted a study of the adaptive response following low-LET exposures for human colon carcinoma cells and failed to observe adaptation for the endpoints of clonogenic survival or micronucleus formation.
Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P
2017-01-01
Feature selection and prediction are the most important tasks for big data mining. The common strategies for feature selection in big data mining are L1, SCAD, and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, it is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultra-high-dimensional big data are developed. The proposed approach outperforms other cutting-edge regularization methods, including SCAD and MC+, in simulations. When applied to integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, multilevel gene signatures associated with suboptimal debulking are identified simultaneously. The biological significance and potential clinical importance of those genes are further explored. The developed software, L0ADRIDGE, in MATLAB is available at https://github.com/liuzqx/L0adridge.
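The adaptive-ridge surrogate for the L0 penalty can be sketched for squared-error loss (a simplified illustration on simulated data, not the L0ADRIDGE implementation; the weight update w_j = 1/(beta_j^2 + eps) is the standard adaptive-ridge device that makes the weighted ridge penalty behave like a count of nonzero coefficients):

```python
import numpy as np

def l0_adaptive_ridge(X, y, lam=1.0, eps=1e-4, n_iter=50):
    """Approximate L0-penalized least squares by iteratively reweighted
    ridge regression: with w_j = 1/(beta_j**2 + eps), the penalty
    lam * sum(w_j * beta_j**2) approaches lam * (number of nonzeros)."""
    p = X.shape[1]
    beta = np.zeros(p)
    w = np.ones(p)
    for _ in range(n_iter):
        beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
        w = 1.0 / (beta ** 2 + eps)
    beta[np.abs(beta) < np.sqrt(eps)] = 0.0  # zero out numerically dead features
    return beta

rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[:3] = [3.0, -2.0, 1.5]           # only 3 truly active features
y = X @ true_beta + 0.1 * rng.standard_normal(n)
beta_hat = l0_adaptive_ridge(X, y)
```

Coefficients of inactive features are driven toward zero because their weights grow without bound, while well-supported coefficients incur a nearly constant penalty.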
Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun
2018-03-15
Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive shrinkage amount on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method in several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Contact: nyi@uab.edu. Supplementary data are available at Bioinformatics online.
Generalized linear model for mapping discrete trait loci implemented with LASSO algorithm.
Directory of Open Access Journals (Sweden)
Jun Xing
Generalized estimating equation (GEE) algorithm under a heterogeneous residual variance model is an extension of the iteratively reweighted least squares (IRLS) method for continuous traits to discrete traits. In contrast to the mixture model-based expectation-maximization (EM) algorithm, the GEE algorithm can effectively detect quantitative trait loci (QTLs), especially large-effect QTLs located in large marker intervals, with high computing speed. Based on a single-QTL model, however, the GEE algorithm has very limited statistical power to detect multiple QTLs because it ignores other linked QTLs. In this study, the fast least absolute shrinkage and selection operator (LASSO) is derived for the generalized linear model (GLM) with all possible link functions. Under a heterogeneous residual variance model, the LASSO for GLM is used to iteratively estimate the non-zero genetic effects of loci over the entire genome. The iteratively reweighted LASSO is thereby extended to mapping QTLs for discrete traits, such as ordinal, binary, and Poisson traits. Simulated and real data analyses are conducted to demonstrate the efficiency of the proposed method in simultaneously identifying multiple QTLs for binary and Poisson traits as examples.
Østergaard, Jacob; Kramer, Mark A; Eden, Uri T
2018-01-01
To understand neural activity, two broad categories of models exist: statistical and dynamical. While statistical models possess rigorous methods for parameter estimation and goodness-of-fit assessment, dynamical models provide mechanistic insight. In general, these two categories of models are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
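A toy version of the statistical-model side can be sketched by simulating a spike train with a stimulus term and a suppressive spike-history term, then fitting a Poisson GLM by IRLS (hypothetical parameters and a single history lag, rather than the Izhikevich dynamics and multi-lag filters used in the letter):

```python
import numpy as np

# Simulate a spike train: rate depends on a stimulus and on whether
# the previous bin contained a spike (refractory-like suppression).
rng = np.random.default_rng(7)
T = 20000
stim = rng.standard_normal(T)
b0, b_stim, b_hist = -3.0, 1.0, -2.0       # hypothetical "true" weights
spikes = np.zeros(T)
for t in range(1, T):
    lam = np.exp(b0 + b_stim * stim[t] + b_hist * spikes[t - 1])
    spikes[t] = float(rng.random() < 1.0 - np.exp(-lam))   # at least one event

# Design matrix: constant, stimulus, one-bin spike history.
X = np.column_stack([np.ones(T - 1), stim[1:], spikes[:-1]])
y = spikes[1:]

def poisson_irls(X, y, n_iter=30):
    """Fisher scoring (IRLS) for a Poisson GLM with log link."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu            # working response
        XtWX = X.T @ (X * mu[:, None])     # weights W = mu for log link
        beta = np.linalg.solve(XtWX, X.T @ (mu * z))
    return beta

theta = poisson_irls(X, y)                 # recovers [b0, b_stim, b_hist]
```

The fitted history weight is strongly negative, reproducing the multiplicative suppression built into the simulation.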
Generalized linear model for mapping discrete trait loci implemented with LASSO algorithm.
Xing, Jun; Gao, Huijiang; Wu, Yang; Wu, Yani; Li, Hongwang; Yang, Runqing
2014-01-01
Generalized estimating equation (GEE) algorithm under a heterogeneous residual variance model is an extension of the iteratively reweighted least squares (IRLS) method for continuous traits to discrete traits. In contrast to the mixture model-based expectation-maximization (EM) algorithm, the GEE algorithm can effectively detect quantitative trait loci (QTLs), especially large-effect QTLs located in large marker intervals, with high computing speed. Based on a single-QTL model, however, the GEE algorithm has very limited statistical power to detect multiple QTLs because it ignores other linked QTLs. In this study, the fast least absolute shrinkage and selection operator (LASSO) is derived for the generalized linear model (GLM) with all possible link functions. Under a heterogeneous residual variance model, the LASSO for GLM is used to iteratively estimate the non-zero genetic effects of loci over the entire genome. The iteratively reweighted LASSO is thereby extended to mapping QTLs for discrete traits, such as ordinal, binary, and Poisson traits. Simulated and real data analyses are conducted to demonstrate the efficiency of the proposed method in simultaneously identifying multiple QTLs for binary and Poisson traits as examples.
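For reference, the IRLS scheme that the GEE and iteratively reweighted LASSO extensions build on can be sketched for a plain logistic GLM (a generic textbook implementation on simulated data, not the authors' heterogeneous-residual-variance version):

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Iteratively reweighted least squares for a logistic GLM:
    at each step solve the weighted normal equations with
    W = diag(mu * (1 - mu)) and working response z."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)
        z = eta + (y - mu) / w
        XtWX = X.T @ (X * w[:, None])
        beta = np.linalg.solve(XtWX, X.T @ (w * z))
    return beta

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
true_beta = np.array([-0.5, 1.2])
p = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = (rng.random(n) < p).astype(float)
beta_hat = irls_logistic(X, y)
```

Changing the variance function and link inside the loop turns the same skeleton into IRLS for other GLM families.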
Chen, Baojiang; Zhou, Xiao-Hua
2013-01-01
In observational studies, interest often lies in estimation of the population-level relationship between the explanatory variables and dependent variables, and the estimation is often done using longitudinal data. Longitudinal data often feature sampling error and bias due to non-random drop-out. However, inclusion of population-level information can increase estimation efficiency. In this paper we consider a generalized partially linear model for incomplete longitudinal data in the presence of the population-level information. A pseudo-empirical likelihood-based method is introduced to incorporate population-level information, and non-random drop-out bias is corrected by using a weighted generalized estimating equations method. A three-step estimation procedure is proposed, which makes the computation easier. Several methods that are often used in practice are compared in simulation studies, which demonstrate that our proposed method can correct the non-random drop-out bias and increase the estimation efficiency, especially for small sample size or when the missing proportion is high. We apply this method to an Alzheimer's disease study. PMID:23413768
Sparse generalized functional linear model for predicting remission status of depression patients.
Liu, Yashu; Nie, Zhi; Zhou, Jiayu; Farnum, Michael; Narayan, Vaibhav A; Wittenberg, Gayle; Ye, Jieping
2014-01-01
Complex diseases such as major depression affect people over time in complicated patterns. Longitudinal data analysis is thus crucial for understanding and prognosis of such diseases and has received considerable attention in the biomedical research community. Traditional classification and regression methods have been commonly applied in a simple (controlled) clinical setting with a small number of time points. However, these methods cannot be easily extended to the more general setting for longitudinal analysis, as they are not inherently built for time-dependent data. Functional regression, in contrast, is capable of identifying the relationship between features and outcomes along with time information by assuming features and/or outcomes as random functions over time rather than independent random variables. In this paper, we propose a novel sparse generalized functional linear model for the prediction of treatment remission status of the depression participants with longitudinal features. Compared to traditional functional regression models, our model enables high-dimensional learning, smoothness of functional coefficients, longitudinal feature selection and interpretable estimation of functional coefficients. Extensive experiments have been conducted on the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) data set and the results show that the proposed sparse functional regression method achieves significantly higher prediction power than existing approaches.
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic, originally developed for logistic GLMCCs (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
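The grouping-based GOF statistics compared here share a common pattern: bin observations by fitted probability and contrast observed with expected counts. A minimal sketch of the Hosmer-Lemeshow statistic with decile-of-risk groups (illustrative only; here it is evaluated at the true probabilities of simulated data rather than at fitted values, so no degrees-of-freedom correction for estimation is shown):

```python
import numpy as np

def hosmer_lemeshow(y, p_hat, g=10):
    """Hosmer-Lemeshow GOF statistic: sort observations by predicted
    probability into g groups and sum (observed - expected)^2 / variance."""
    order = np.argsort(p_hat)
    chi2 = 0.0
    for idx in np.array_split(order, g):
        obs = y[idx].sum()
        exp = p_hat[idx].sum()
        n_k = len(idx)
        v = exp * (1.0 - exp / n_k)    # approximate binomial variance in the group
        chi2 += (obs - exp) ** 2 / v
    return chi2                        # compare with a chi-square reference

rng = np.random.default_rng(3)
n = 1000
x = rng.standard_normal(n)
p_true = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * x)))
y = (rng.random(n) < p_true).astype(float)
hl = hosmer_lemeshow(y, p_true)
```

With a correctly specified model the statistic stays near the number of groups; systematic lack of fit inflates it.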
Calabrese, Ana; Schumacher, Joseph W; Schneider, David M; Paninski, Liam; Woolley, Sarah M N
2011-01-11
In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: 1) a stimulus filter (STRF); and 2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
Zhi, Longxiao; Gu, Hanming
2018-03-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on an approximate expression of the Zoeppritz equations. Though the approximate expression is concise and convenient to use, it has certain limitations: it applies only when the difference in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the reflection coefficient of the PP wave, and in constructing the objective function for inversion we use a Taylor series expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data in the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not require such assumptions and can estimate more parameters simultaneously, so it has better applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise. The results demonstrate the validity and noise robustness of our method. We also apply the inversion to actual field data and demonstrate the feasibility of our method in a practical setting.
International Nuclear Information System (INIS)
Huang, Zhibin; Mayr, Nina A.; Lo, Simon S.; Wang, Jian Z.; Jia Guang; Yuh, William T. C.; Johnke, Roberta
2012-01-01
Purpose: It has been conventionally assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggests that this may not be the case. Rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown in repair suggests that an exponential repair pattern would not necessarily predict the repair process accurately. The purpose of this study was therefore to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. Methods: The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage, with T_r as the repair half-time. When a reciprocal pattern of the repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. Published animal data were adopted to test the reciprocal formulas. Results: A generalized LQ model describing the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced χ²_min of 2.0 vs. 4.3, p = 0.11 vs. 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, T_r = 3.2-3.9 h for rat feet skin, and α/β = 0.9 Gy, T_r = 1.1 h for rat spinal cord. Conclusions: These results suggest that the generalized LQ model incorporating the reciprocal time of sublethal damage repair fits better than the exponential repair model. These formulas can be used to analyze experimental and clinical data where a slowing-down repair process appears during the course of radiation therapy.
Huang, Zhibin; Mayr, Nina A; Lo, Simon S; Wang, Jian Z; Jia, Guang; Yuh, William T C; Johnke, Roberta
2012-01-01
It has been conventionally assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggests that this may not be the case. Rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown in repair suggests that an exponential repair pattern would not necessarily predict the repair process accurately. The purpose of this study was therefore to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage, with T_r as the repair half-time. When a reciprocal pattern of the repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. Published animal data were adopted to test the reciprocal formulas. A generalized LQ model describing the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced χ²_min of 2.0 vs. 4.3, p = 0.11 vs. 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, T_r = 3.2-3.9 h for rat feet skin, and α/β = 0.9 Gy, T_r = 1.1 h for rat spinal cord. These results suggest that the generalized LQ model incorporating the reciprocal time of sublethal damage repair fits better than the exponential repair model. These formulas can be used to analyze experimental and clinical data where a slowing-down repair process appears during the course of radiation therapy.
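For reference, the standard LQ machinery this abstract generalizes can be written with the Lea-Catcheside dose-protraction factor; the reciprocal-time kernel in the last line is a sketch of the modification described, not necessarily the authors' exact form:

```latex
% Surviving fraction in the LQ model with dose protraction:
S(D) \;=\; \exp\!\bigl(-\alpha D \;-\; \beta\, G\, D^{2}\bigr),
\qquad
G \;=\; \frac{2}{D^{2}}\int_{0}^{T}\!\dot{D}(t)\,\mathrm{d}t
        \int_{0}^{t}\!\dot{D}(t')\,\varphi(t-t')\,\mathrm{d}t'.

% Conventional exponential repair kernel, with repair half-time T_r:
\varphi(\tau) \;=\; e^{-\mu\tau}, \qquad \mu = \ln 2 / T_{r}.

% Reciprocal-time repair (sketch of the slowing-down generalization):
\varphi(\tau) \;=\; \frac{1}{1 + \tau/T_{r}}.
```

For a single acute dose the protraction factor reduces to G = 1, recovering the familiar S(D) = exp(-αD - βD²).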
Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel
Kleinschmidt, Dave F.; Jaeger, T. Florian
2016-01-01
Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker’s /p/ might be physically indistinguishable from another talker’s /b/ (cf. lack of invariance). We characterize the computational problem posed by such a subjectively non-stationary world and propose that the speech perception system overcomes this challenge by (1) recognizing previously encountered situations, (2) generalizing to other situations based on previous similar experience, and (3) adapting to novel situations. We formalize this proposal in the ideal adapter framework: (1) to (3) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on two critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires listeners learn to represent the structured component of cross-situation variability in the speech signal. We discuss how these two aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension. PMID:25844873
Geedipally, Srinivas Reddy; Lord, Dominique; Dhavala, Soma Sekhar
2012-03-01
There has been a considerable amount of work devoted by transportation safety analysts to the development and application of new and innovative models for analyzing crash data. One important characteristic about crash data that has been documented in the literature is related to datasets that contain a large number of zeros and a long or heavy tail (which creates highly dispersed data). For such datasets, the number of sites where no crash is observed is so large that traditional distributions and regression models, such as the Poisson and Poisson-gamma or negative binomial (NB) models, cannot be used efficiently. To overcome this problem, the NB-Lindley (NB-L) distribution has recently been introduced for analyzing count data that are characterized by excess zeros. The objective of this paper is to document the application of a NB generalized linear model with Lindley mixed effects (NB-L GLM) for analyzing traffic crash data. The study objective was accomplished using simulated and observed datasets. The simulated dataset was used to show the general performance of the model. The model was then applied to two datasets based on observed data. One of the datasets was characterized by a large number of zeros. The NB-L GLM was compared with the NB and zero-inflated models. Overall, the research study shows that the NB-L GLM not only offers superior performance over the NB and zero-inflated models when datasets are characterized by a large number of zeros and a long tail, but also when the crash dataset is highly dispersed. Published by Elsevier Ltd.
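As a baseline for the NB-L GLM, the plain NB2 model it is compared against can be fit by direct likelihood maximization (a sketch on simulated gamma-Poisson data with hypothetical parameters, not the NB-Lindley mixed-effects model itself):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def nb2_negloglik(params, X, y):
    """Mean negative log-likelihood of an NB2 GLM: mean mu = exp(X beta),
    variance mu + alpha * mu^2 (alpha > 0 captures overdispersion)."""
    beta, alpha = params[:-1], np.exp(params[-1])
    mu = np.exp(X @ beta)
    r = 1.0 / alpha
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -ll.mean()

rng = np.random.default_rng(4)
n = 3000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
true_beta, true_alpha = np.array([0.5, 0.7]), 0.8
mu = np.exp(X @ true_beta)
# NB as a gamma-Poisson mixture: mixing multiplier has mean 1, variance alpha
y = rng.poisson(mu * rng.gamma(1.0 / true_alpha, true_alpha, n))

res = minimize(nb2_negloglik, x0=np.zeros(3), args=(X, y))
beta_hat, alpha_hat = res.x[:2], np.exp(res.x[2])
```

Replacing the gamma mixing distribution with a Lindley-type one is what produces the heavier-tailed, excess-zero behavior the paper exploits.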
Using Generalized Linear Mixed Models to Evaluate Inconsistency within a Network Meta-Analysis.
Tu, Yu-Kang
2015-12-01
Network meta-analysis compares multiple treatments by incorporating direct and indirect evidence into a general statistical framework. One issue with the validity of network meta-analysis is inconsistency between direct and indirect evidence within a loop formed by three treatments. Recently, the inconsistency issue has been explored further and a complex design-by-treatment interaction model proposed. The aim of this article was to show how to evaluate the design-by-treatment interaction model using the generalized linear mixed model. We proposed an arm-based approach to evaluating the design-by-treatment inconsistency, which is flexible in modeling different types of outcome variables. We used the smoking cessation data to compare results from our arm-based approach with those from the standard contrast-based approach. Because the contrast-based approach requires transformation of data, our example showed that such a transformation may yield biases in the treatment effect and inconsistency evaluation, when event rates were low in some treatments. We also compared contrast-based and arm-based models in the evaluation of design inconsistency when different heterogeneity variances were estimated, and the arm-based model yielded more accurate results. Because some statistical software commands can detect the collinearity among variables and automatically remove the redundant ones, we can use this advantage to help with placing the inconsistency parameters. This could be very useful for a network meta-analysis involving many designs and treatments. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Generalized Jeans' Escape of Pick-Up Ions in Quasi-Linear Relaxation
Moore, T. E.; Khazanov, G. V.
2011-01-01
Jeans escape is a well-validated formulation of upper atmospheric escape that we have generalized to estimate plasma escape from ionospheres. It involves the computation of the parts of particle velocity space that are unbound by the gravitational potential at the exobase, followed by a calculation of the flux carried by such unbound particles as they escape from the potential well. To generalize this approach for ions, we superposed an electrostatic ambipolar potential and a centrifugal potential, for motions across and along a divergent magnetic field. We then considered how the presence of superthermal electrons, produced by precipitating auroral primary electrons, controls the ambipolar potential. We also showed that the centrifugal potential plays a small role in controlling the mass escape flux from the terrestrial ionosphere. We then applied the transverse ion velocity distribution produced when ions, picked up by supersonic (i.e., auroral) ionospheric convection, relax via quasi-linear diffusion, as estimated for cometary comas [1]. The results provide a theoretical basis for observed ion escape response to electromagnetic and kinetic energy sources. They also suggest that super-sonic but sub-Alfvenic flow, with ion pick-up, is a unique and important regime of ion-neutral coupling, in which plasma wave-particle interactions are driven by ion-neutral collisions at densities for which the collision frequency falls near or below the gyro-frequency. As another possible illustration of this process, the heliopause ribbon discovered by the IBEX mission involves interactions between the solar wind ions and the interstellar neutral gas, in a regime that may be analogous [2].
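The classical Jeans formulation that this work generalizes computes the escaping flux from the unbound tail of a Maxwellian at the exobase. A sketch with illustrative (not paper-specific) numbers for atomic hydrogen at Earth's exobase:

```python
import math

def jeans_escape_flux(n_exo, T, m, M_planet, r_exo):
    """Classical Jeans escape flux [particles m^-2 s^-1] at the exobase:
    only the part of the Maxwellian with energy above the gravitational
    potential escapes. lam is the dimensionless escape parameter."""
    G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
    k = 1.381e-23    # Boltzmann constant [J K^-1]
    v_th = math.sqrt(2.0 * k * T / m)              # most probable speed
    lam = G * M_planet * m / (k * T * r_exo)       # escape parameter
    return n_exo * v_th / (2.0 * math.sqrt(math.pi)) * (1.0 + lam) * math.exp(-lam)

# Illustrative values: hydrogen, T = 1000 K, exobase ~500 km above Earth
m_H = 1.67e-27                                     # H atom mass [kg]
flux = jeans_escape_flux(n_exo=1e11, T=1000.0, m=m_H,
                         M_planet=5.97e24, r_exo=6.871e6)
```

The generalization in the abstract adds ambipolar and centrifugal terms to the gravitational potential before evaluating which parts of velocity space are unbound.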
Elliott, J.; de Souza, R. S.; Krone-Martins, A.; Cameron, E.; Ishida, E. E. O.; Hilbe, J.
2015-04-01
Machine learning techniques offer a precious tool box for use within astronomy to solve problems involving so-called big data. They provide a means to make accurate predictions about a particular system without prior knowledge of the underlying physical processes of the data. In this article, and the companion papers of this series, we present the set of Generalized Linear Models (GLMs) as a fast alternative method for tackling general astronomical problems, including the ones related to the machine learning paradigm. To demonstrate the applicability of GLMs to inherently positive and continuous physical observables, we explore their use in estimating the photometric redshifts of galaxies from their multi-wavelength photometry. Using the gamma family with a log link function we predict redshifts from the PHoto-z Accuracy Testing simulated catalogue and a subset of the Sloan Digital Sky Survey from Data Release 10. We obtain fits that result in catastrophic outlier rates as low as ∼1% for simulated and ∼2% for real data. Moreover, we can easily obtain such levels of precision within a matter of seconds on a normal desktop computer and with training sets that contain merely thousands of galaxies. Our software is made publicly available as a user-friendly package developed in Python, R and via an interactive web application. This software allows users to apply a set of GLMs to their own photometric catalogues and generates publication quality plots with minimum effort. By facilitating their ease of use to the astronomical community, this paper series aims to make GLMs widely known and to encourage their implementation in future large-scale projects, such as the Large Synoptic Survey Telescope.
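The gamma-family, log-link GLM used here for inherently positive observables can be sketched with plain IRLS (a generic illustration on simulated data, not the authors' Python/R package): for the gamma family with log link, V(mu) = mu² and dmu/deta = mu, so the IRLS weights are constant and each step is an ordinary least-squares fit of the working response.

```python
import numpy as np

def gamma_glm_log_link(X, y, n_iter=50):
    """IRLS for a gamma GLM with log link. Constant weights mean each
    iteration is an OLS fit of the working response z on X."""
    beta = np.linalg.lstsq(X, np.log(y), rcond=None)[0]   # warm start
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu
        beta = np.linalg.lstsq(X, z, rcond=None)[0]
    return beta

rng = np.random.default_rng(5)
n = 4000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
true_beta = np.array([0.2, 0.5])
mu = np.exp(X @ true_beta)
shape = 5.0                           # gamma shape: mean mu, variance mu^2/shape
y = rng.gamma(shape, mu / shape, n)
beta_hat = gamma_glm_log_link(X, y)
```

In the photometric-redshift setting, the columns of X would hold multi-band magnitudes or colors and y the (positive) redshift.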
Effect of Spatial Decorrelation on Nulling Performance of Linear Adaptive Array
1983-06-01
fully adaptive arrays are considered, more analysis and evaluation is carried out for the …
Feng, Jianfeng; Gao, Yongfei; Ji, Yijun; Zhu, Lin
2018-03-05
Predicting the toxicity of chemical mixtures is difficult because of the additive, antagonistic, or synergistic interactions among the mixture components. Antagonistic and synergistic interactions are dominant in metal mixtures, and their distributions may correlate with exposure concentrations. However, whether the interaction types of metal mixtures change at different time points during toxicodynamic (TD) processes is undetermined because of insufficient appropriate models and metal bioaccumulation data at different time points. In the present study, the generalized linear model (GLM) was used to illustrate the combined toxicities of binary metal mixtures, such as Cu-Zn, Cu-Cd, and Cd-Pb, to zebrafish larvae (Danio rerio), and to identify possible interaction types relative to the traditional concentration addition (CA) and independent action (IA) models. The GLMs were then applied to quantify the different possible interaction types for metal mixture toxicity (Cu-Zn, Cu-Cd, and Cd-Pb to D. rerio and Ni-Co to the oligochaete Enchytraeus crypticus) during the TD process at different exposure times. We found different metal interaction responses in the TD process, and the interaction coefficients changed significantly at different exposure times (p < 0.05), offering new insight into mixture toxicology on organisms. Moreover, care should be taken when evaluating interactions in toxicity prediction because results may vary at different time points. The GLM could be an alternative or complementary approach to the biotic ligand model (BLM) for analyzing and predicting metal mixture toxicity. Copyright © 2017 Elsevier B.V. All rights reserved.
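A minimal sketch of using a GLM to flag interaction type, under assumptions not taken from the paper (a logistic link, a made-up concentration grid, and invented coefficients): the sign of the product-term coefficient separates synergism (positive) from antagonism (negative).

```python
import math

def fit_logistic(X, y, lr=0.2, steps=10000):
    """Full-batch gradient ascent on the logistic log-likelihood (toy-scale)."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(steps):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            eta = sum(b * v for b, v in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-eta))
            for j in range(p):
                grad[j] += (yi - mu) * xi[j]
        beta = [b + lr * g / len(X) for b, g in zip(beta, grad)]
    return beta

# Hypothetical design: grid of two metal concentrations, rows [1, c1, c2, c1*c2]
conc = [0.0, 0.5, 1.0, 1.5, 2.0]
X = [[1.0, c1, c2, c1 * c2] for c1 in conc for c2 in conc]
true_beta = [-2.0, 1.0, 1.0, -0.8]  # negative product term = antagonism
# Fractional responses equal to the true probabilities keep the toy deterministic
y = [1.0 / (1.0 + math.exp(-sum(b * v for b, v in zip(true_beta, xi)))) for xi in X]
beta = fit_logistic(X, y)
print(beta[3] < 0)  # recovered interaction coefficient is negative -> antagonism
```

In a real analysis the responses would be observed mortality counts and the fit would come from a proper GLM routine; the point here is only how the interaction term enters the design matrix.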
Spatial generalized linear mixed models of electric power outages due to hurricanes and ice storms
International Nuclear Information System (INIS)
Liu Haibin; Davidson, Rachel A.; Apanasovich, Tatiyana V.
2008-01-01
This paper presents new statistical models that predict the number of hurricane- and ice storm-related electric power outages likely to occur in each 3 km × 3 km grid cell in a region. The models are based on a large database of recent outages experienced by three major East Coast power companies in six hurricanes and eight ice storms. A spatial generalized linear mixed modeling (GLMM) approach was used in which spatial correlation is incorporated through random effects. Models were fitted using a composite likelihood approach and the covariance matrix was estimated empirically. A simulation study was conducted to test the model estimation procedure, and model training, validation, and testing were done to select the best models and assess their predictive power. The final hurricane model includes number of protective devices, maximum gust wind speed, hurricane indicator, and company indicator covariates. The final ice storm model includes number of protective devices, ice thickness, and ice storm indicator covariates. The models should be useful for power companies as they plan for future storms. The statistical modeling approach offers a new way to assess the reliability of electric power and other infrastructure systems in extreme events.
Generalized linear discriminant analysis: a unified framework and efficient model selection.
Ji, Shuiwang; Ye, Jieping
2008-10-01
High-dimensional data are common in many domains, and dimensionality reduction is the key to cope with the curse-of-dimensionality. Linear discriminant analysis (LDA) is a well-known method for supervised dimensionality reduction. When dealing with high-dimensional and low sample size data, classical LDA suffers from the singularity problem. Over the years, many algorithms have been developed to overcome this problem, and they have been applied successfully in various applications. However, there is a lack of a systematic study of the commonalities and differences of these algorithms, as well as their intrinsic relationships. In this paper, a unified framework for generalized LDA is proposed, which elucidates the properties of various algorithms and their relationships. Based on the proposed framework, we show that the matrix computations involved in LDA-based algorithms can be simplified so that the cross-validation procedure for model selection can be performed efficiently. We conduct extensive experiments using a collection of high-dimensional data sets, including text documents, face images, gene expression data, and gene expression pattern images, to evaluate the proposed theories and algorithms.
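As a generic illustration of the LDA idea discussed here (not one of the specific algorithms the paper unifies), the sketch below computes the two-class Fisher discriminant direction in 2-D, with a small ridge term as one simple way around a singular within-class scatter matrix:

```python
def fisher_direction(X0, X1, ridge=1e-3):
    """Two-class Fisher discriminant in 2-D: w ∝ (Sw + ridge*I)^(-1) (m1 - m0).
    The ridge term is one simple fix when the within-class scatter is singular."""
    def mean(X):
        n = len(X)
        return [sum(r[0] for r in X) / n, sum(r[1] for r in X) / n]
    m0, m1 = mean(X0), mean(X1)
    s = [[ridge, 0.0], [0.0, ridge]]          # regularized within-class scatter
    for X, m in ((X0, m0), (X1, m1)):
        for r in X:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    b = [m1[0] - m0[0], m1[1] - m0[1]]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    return [(s[1][1] * b[0] - s[0][1] * b[1]) / det,
            (s[0][0] * b[1] - s[1][0] * b[0]) / det]

# Hypothetical two-class toy data separated along the first axis
X0 = [[0.0, 0.0], [0.2, 0.1], [0.1, -0.1]]
X1 = [[2.0, 0.0], [2.2, 0.1], [2.1, -0.1]]
w = fisher_direction(X0, X1)
proj = lambda r: w[0] * r[0] + w[1] * r[1]
print(max(proj(r) for r in X0) < min(proj(r) for r in X1))  # classes separate: True
```

In the high-dimensional regime the paper addresses, the scatter matrices are genuinely singular and the algorithms it unifies are far more careful than a plain ridge; this sketch only fixes the geometry of the projection.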
Directory of Open Access Journals (Sweden)
Tülin Acar
2012-01-01
The aim of this research is to compare the results of determining differential item functioning (DIF) with the hierarchical generalized linear model (HGLM) technique against the results obtained with the logistic regression (LR) and item response theory-likelihood ratio (IRT-LR) techniques on test items. First, it was determined whether students encounter DIF, according to socioeconomic status (SES), in the Turkish, Social Sciences, and Science subtest items of the Secondary School Institutions Examination, using the HGLM, LR, and IRT-LR techniques. When inspecting the correlations among the techniques in identifying items with DIF, a significant correlation was found between the results of the IRT-LR and LR techniques in all subtests; only in the Science subtest was the correlation between the HGLM and IRT-LR techniques significant. DIF analyses can also be performed on test items with other DIF techniques that were beyond the scope of this research, and the results obtained with the DIF techniques in different sample sizes can be compared.
Feng, Jian-Ying; Zhang, Jin; Zhang, Wen-Jie; Wang, Shi-Bo; Han, Shi-Feng; Zhang, Yuan-Ming
2013-01-01
Many important phenotypic traits in plants are ordinal. However, relatively little is known about the methodologies for ordinal trait association studies. In this study, we proposed a hierarchical generalized linear mixed model for mapping quantitative trait loci (QTL) of ordinal traits in crop cultivars. In this model, all the main-effect QTL and QTL-by-environment interactions were treated as random, while population mean, environmental effect and population structure were fixed. In the estimation of parameters, the pseudo-data normal approximation of the likelihood function and an empirical Bayes approach were adopted. A series of Monte Carlo simulation experiments was performed to confirm the reliability of the new method. The results showed that the new method works well, with satisfactory statistical power and precision. The new method was also adopted to dissect the genetic basis of soybean alkaline-salt tolerance in 257 soybean cultivars obtained, by stratified random sampling, from 6 geographic ecotypes in China. As a result, 6 main-effect QTL and 3 QTL-by-environment interactions were identified.
Sun, Yanqing; Sun, Liuquan; Zhou, Jie
2013-07-01
This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some covariates and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation, and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates, without specifically modelling such dependence. A [Formula: see text]-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
Mitigating Bias in Generalized Linear Mixed Models: The Case for Bayesian Nonparametrics.
Antonelli, Joseph; Trippa, Lorenzo; Haneuse, Sebastien
2016-02-01
Generalized linear mixed models are a common statistical tool for the analysis of clustered or longitudinal data where correlation is accounted for through cluster-specific random effects. In practice, the distribution of the random effects is typically taken to be a Normal distribution, although if this does not hold then the model is misspecified and standard estimation/inference may be invalid. An alternative is to perform a so-called nonparametric Bayesian analysis in which one assigns a Dirichlet process (DP) prior to the unknown distribution of the random effects. In this paper we examine operating characteristics for estimation of fixed effects and random effects based on such an analysis under a range of "true" random effects distributions. As part of this we investigate various approaches for selection of the precision parameter of the DP prior. In addition, we illustrate the use of the methods with an analysis of post-operative complications among n = 18,643 female Medicare beneficiaries who underwent a hysterectomy procedure at N = 503 hospitals in the US. Overall, we conclude that using the DP prior in modeling the random effects distribution results in large reductions of bias with little loss of efficiency. While no single choice for the precision parameter will be optimal in all settings, certain strategies such as importance sampling or empirical Bayes can be used to obtain reasonable results in a broad range of data scenarios.
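The DP prior on the random-effects distribution can be sampled by stick-breaking. The sketch below draws a truncated approximation, with the concentration parameter, base measure, and truncation level chosen purely for illustration:

```python
import random

def stick_breaking(alpha, base_draw, k=200, rng=random):
    """Truncated stick-breaking draw from a DP(alpha, G0):
    proportions v_j ~ Beta(1, alpha) are broken off the remaining stick,
    and atoms are drawn i.i.d. from the base measure G0."""
    weights, atoms, stick = [], [], 1.0
    for _ in range(k):
        v = rng.betavariate(1.0, alpha)
        weights.append(stick * v)
        atoms.append(base_draw())
        stick *= 1.0 - v
    return weights, atoms

random.seed(1)
# Hypothetical choices: concentration alpha = 2, standard-normal base measure
w, a = stick_breaking(alpha=2.0, base_draw=lambda: random.gauss(0.0, 1.0))
print(sum(w))  # total mass left in the truncated tail is negligible
```

A draw from this discrete random measure would then play the role of the random-effects distribution inside the GLMM, rather than a fixed Normal.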
Fast inference in generalized linear models via expected log-likelihoods
Ramirez, Alexandro D.; Paninski, Liam
2015-01-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
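The expected log-likelihood idea is easy to illustrate for a Poisson GLM with Gaussian covariates, where E[exp(θx)] has a closed form. The 1-D setting, parameter values, and sampler below are illustrative assumptions, not the paper's retinal-data analysis:

```python
import math, random

def poisson_draw(lam, rng=random):
    """Knuth's Poisson sampler; fine for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def exact_ll(theta, xs, ys):
    """Exact Poisson GLM log-likelihood (canonical log link), dropping log(y!)."""
    return sum(y * theta * x - math.exp(theta * x) for x, y in zip(xs, ys))

def expected_ll(theta, xs, ys, mu, sigma):
    """Expected log-likelihood: the sum of exp(theta*x_i) terms is replaced by
    n * E[exp(theta*x)] for x ~ N(mu, sigma^2), available in closed form."""
    dot = sum(y * theta * x for x, y in zip(xs, ys))
    return dot - len(xs) * math.exp(theta * mu + 0.5 * (theta * sigma) ** 2)

random.seed(0)
mu, sigma, theta = 0.0, 1.0, 0.7
xs = [random.gauss(mu, sigma) for _ in range(20000)]
ys = [float(poisson_draw(math.exp(theta * x))) for x in xs]
gap = abs(exact_ll(theta, xs, ys) - expected_ll(theta, xs, ys, mu, sigma)) / len(xs)
print(gap < 0.1)  # the two criteria agree closely per observation
```

The computational payoff in the paper comes from the fact that the expectation is computed once, while the exact sum must be re-evaluated over all observations at every optimization step.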
Bivariate Random Effects Meta-analysis of Diagnostic Studies Using Generalized Linear Mixed Models
GUO, HONGFEI; ZHOU, YIJIE
2011-01-01
Bivariate random effect models are currently one of the main methods recommended to synthesize diagnostic test accuracy studies. However, only the logit-transformation on sensitivity and specificity has been previously considered in the literature. In this paper, we consider a bivariate generalized linear mixed model to jointly model the sensitivities and specificities, and discuss the estimation of the summary receiver operating characteristic curve (ROC) and the area under the ROC curve (AUC). As the special cases of this model, we discuss the commonly used logit, probit and complementary log-log transformations. To evaluate the impact of misspecification of the link functions on the estimation, we present two case studies and a set of simulation studies. Our study suggests that point estimation of the median sensitivity and specificity, and AUC is relatively robust to the misspecification of the link functions. However, the misspecification of link functions has a noticeable impact on the standard error estimation and the 95% confidence interval coverage, which emphasizes the importance of choosing an appropriate link function to make statistical inference. PMID:19959794
Jung, Jee-Hyun; Choi, Seung Bae; Hong, Sang Hee; Chae, Young Sun; Kim, Ha Na; Yim, Un Hyuk; Ha, Sung Yong; Han, Gi Myung; Kim, Dae Jung; Shim, Won Joon
2014-01-15
To evaluate the health status at six different study areas, we used the generalized linear model approach with selected biochemical markers in resident fish from uncontaminated and contaminated sites. We also confirmed the independence between the biochemical indices and the morphometric indices including the hepato-somatic index (HSI), gonado-somatic index (GSI), and condition factor (CF) in fish from the sampling areas. The effect of area on the presence of biotransformation markers (ethoxyresorufin-O-deethylase activity; EROD) was significantly high in Masan Bay. The area with the greatest effect on acetylcholinesterase (AChE) activity was Jindong Bay, while there was no significant effect of GSI, HSI, CF, and sex in the EROD model and HSI, CF and sex in the AChE model. These results clarify that fish from Masan, Gwangyang and Jindong Bay were affected by pollutant stress, and the analysis of sensitive biochemical responses allowed for an improved interpretation of the results. Copyright © 2013 Elsevier Ltd. All rights reserved.
Establishment of a new initial dose plan for vancomycin using the generalized linear mixed model.
Kourogi, Yasuyuki; Ogata, Kenji; Takamura, Norito; Tokunaga, Jin; Setoguchi, Nao; Kai, Mitsuhiro; Tanaka, Emi; Chiyotanda, Susumu
2017-04-08
When administering vancomycin hydrochloride (VCM), the initial dose is adjusted to ensure that the steady-state trough value (Css-trough) remains within the effective concentration range. However, the Css-trough (population mean method predicted value [PMMPV]) calculated using the population mean method (PMM) often deviates from the effective concentration range. In this study, we used the generalized linear mixed model (GLMM) for initial dose planning to create a model that accurately predicts Css-trough, and subsequently assessed its prediction accuracy. The study included 46 subjects whose trough values were measured after receiving VCM. We calculated the Css-trough (Bayesian estimate predicted value [BEPV]) from the Bayesian estimates of trough values. Using the patients' medical data, we created models that predict the BEPV and selected the model with the minimum information criterion (GLMM best model). We then calculated the Css-trough (GLMMPV) from the GLMM best model and compared the BEPV correlation with GLMMPV and with PMMPV. The GLMM best model was {[0.977 + (males: 0.029 or females: -0.081)] × PMMPV + 0.101 × BUN/adjusted SCr - 12.899 × SCr adjusted amount}. The coefficients of determination for BEPV/GLMMPV and BEPV/PMMPV were 0.623 and 0.513, respectively. We demonstrated that the GLMM best model was more accurate in predicting the Css-trough than the PMM.
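The reported GLMM best model can be transcribed directly into code. The function below is only a transcription of the formula quoted in the abstract, with variable meanings as stated there; the input numbers are illustrative, and this is not a validated dosing tool:

```python
def glmm_css_trough(pmmpv, bun_over_adj_scr, scr_adjusted, male):
    """GLMM best model from the abstract:
    [0.977 + (0.029 if male else -0.081)] * PMMPV
      + 0.101 * (BUN / adjusted SCr) - 12.899 * (SCr adjusted amount)."""
    sex_term = 0.029 if male else -0.081
    return (0.977 + sex_term) * pmmpv + 0.101 * bun_over_adj_scr - 12.899 * scr_adjusted

# Illustrative numbers only (not taken from the study)
print(round(glmm_css_trough(10.0, 20.0, 0.5, male=True), 4))  # 5.6305
```
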
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
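The COM-Poisson distribution underlying this GLM has pmf P(y) ∝ λ^y/(y!)^ν, with ν = 1 recovering the Poisson, ν < 1 overdispersion, and ν > 1 underdispersion. A sketch of the truncated normalization (the truncation point is an assumption adequate for the rates used here):

```python
import math

def com_poisson_pmf(lam, nu, ymax=200):
    """COM-Poisson probabilities P(y) ∝ lam^y / (y!)^nu, truncated at ymax.
    Computed in log space and shifted by the max for numerical stability."""
    logw = [y * math.log(lam) - nu * math.lgamma(y + 1) for y in range(ymax + 1)]
    m = max(logw)
    w = [math.exp(v - m) for v in logw]
    z = sum(w)
    return [v / z for v in w]

def mean_var(p):
    mu = sum(y * py for y, py in enumerate(p))
    var = sum((y - mu) ** 2 * py for y, py in enumerate(p))
    return mu, var

p = com_poisson_pmf(lam=4.0, nu=1.0)
print(mean_var(p))  # the nu = 1 special case reproduces Poisson: mean = var = 4
```

Varying ν with λ fixed shows the dispersion behavior the abstract describes: ν = 0.5 yields variance above the mean, ν = 2 yields variance below it.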
Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G
2016-09-01
A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as a dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance of modelled results against the measured data was achieved by the model restricted to air temperatures above 25°C, compared with the model considering all ranges of air temperature and with the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and methodology could be adopted for other cities to predict PM10 concentrations when such data are not available from measurements by air quality monitoring stations or other acquisition means.
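The Poisson-with-log-link regression used here can be sketched via IRLS. The single covariate and the noise-free toy data are illustrative assumptions, not the Barreiro air quality data:

```python
import math

def fit_poisson_glm_log(x, y, iters=50):
    """Poisson GLM with log link via IRLS (single covariate).
    Working weights w = mu, working response z = eta + (y - mu) / mu,
    followed by a weighted least-squares solve at each step."""
    b0, b1 = math.log(sum(y) / len(y)), 0.0
    for _ in range(iters):
        sw = swx = swz = swxx = swxz = 0.0
        for xi, yi in zip(x, y):
            eta = b0 + b1 * xi
            mu = math.exp(eta)
            z = eta + (yi - mu) / mu
            sw += mu; swx += mu * xi; swz += mu * z
            swxx += mu * xi * xi; swxz += mu * xi * z
        b1 = (sw * swxz - swx * swz) / (sw * swxx - swx * swx)
        b0 = (swz - b1 * swx) / sw
    return b0, b1

# Hypothetical "pollutant -> PM10 count" toy data generated from the model itself
xs = [0.1 * i for i in range(20)]
ys = [math.exp(0.3 + 0.8 * v) for v in xs]
print(fit_poisson_glm_log(xs, ys))  # converges to (0.3, 0.8)
```

Unlike the gamma/log-link case, the Poisson working weights vary with the fitted mean, so each IRLS step is a weighted rather than ordinary least-squares solve.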
Directory of Open Access Journals (Sweden)
Hairong Huang
This study identified potential general influencing factors for a mathematical prediction of implant stability quotient (ISQ) values in clinical practice. We collected the ISQ values of 557 implants from 2 different brands (SICace and Osstem) placed by 2 surgeons in 336 patients. Surgeon 1 placed 329 SICace implants, and surgeon 2 placed 113 SICace implants and 115 Osstem implants. ISQ measurements were taken at T1 (immediately after implant placement) and T2 (before dental restoration). A multivariate linear regression model was used to analyze the influence of the following 11 candidate factors for stability prediction: sex, age, maxillary/mandibular location, bone type, immediate/delayed implantation, bone grafting, insertion torque, I-stage or II-stage healing pattern, implant diameter, implant length, and T1-T2 time interval. The need for bone grafting as a predictor significantly influenced ISQ values in all three groups at T1 (weight coefficients ranging from -4 to -5). In contrast, implant diameter consistently influenced the ISQ values in all three groups at T2 (weight coefficients ranging from 3.4 to 4.2). Other factors, such as sex, age, I/II-stage implantation and bone type, did not significantly influence ISQ values at T2, and implant length did not significantly influence ISQ values at T1 or T2. These findings provide a rational basis for mathematical models to quantitatively predict the ISQ values of implants in clinical practice.
Directory of Open Access Journals (Sweden)
Miguel Flores
2016-11-01
This work aims to classify DNA sequences as healthy or malignant. For this, supervised and unsupervised classification methods from a functional context are used; i.e., each strand of DNA is an observation. The observations are discretized, so different ways to represent them with functions are evaluated. In addition, an exploratory study is done, estimating the functional mean and variance for each type of cancer. For the unsupervised classification method, hierarchical clustering with different measures of functional distance is used. For the supervised classification method, a functional generalized linear model is used, in which the first and second derivatives are included as discriminating variables. It has been verified that one of the advantages of working in the functional context is obtaining a model that classifies the cancers correctly 100% of the time. For the implementation of the methods, the fda.usc R package has been used, which includes all the functional data analysis techniques used in this work, in addition to others developed in recent decades. For more details of these techniques, see Ramsay, J. O. and Silverman (2005) and Ferraty et al. (2006).
Mainardi, Francesco; Masina, Enrico; Spada, Giorgio
2018-02-01
We present a new rheological model depending on a real parameter ν ∈ [0, 1], which reduces to the Maxwell body for ν = 0 and to the Becker body for ν = 1. The corresponding creep law is expressed in an integral form in which the exponential function of the Becker model is replaced and generalized by a Mittag-Leffler function of order ν. Then the corresponding non-dimensional creep function and its rate are studied as functions of time for different values of ν in order to visualize the transition from the classical Maxwell body to the Becker body. Based on the hereditary theory of linear viscoelasticity, we also approximate the relaxation function by solving numerically a Volterra integral equation of the second kind. In turn, the relaxation function is shown versus time for different values of ν to visualize again the transition from the classical Maxwell body to the Becker body. Furthermore, we provide a full characterization of the new model by computing, in addition to the creep and relaxation functions, the so-called specific dissipation Q^{-1} as a function of frequency, which is of particular relevance for geophysical applications.
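The Mittag-Leffler function that generalizes the Becker exponential can be evaluated from its power series for modest arguments. This is a naive sketch (large or strongly negative arguments need more careful algorithms), with the test values chosen from the classical special cases ν = 0 (geometric series, the Maxwell end) and ν = 1 (exponential, matching the Becker end):

```python
import math

def mittag_leffler(z, nu, terms=120):
    """Power series E_nu(z) = sum_k z^k / Gamma(nu*k + 1).
    Adequate for modest |z|; not suitable far from the origin."""
    return sum(z ** k / math.gamma(nu * k + 1.0) for k in range(terms))

print(round(mittag_leffler(1.0, 1.0), 6))  # E_1(z) = exp(z) -> 2.718282
```

Known closed forms give handy checks: E_0(z) = 1/(1-z) for |z| < 1 and E_{1/2}(z) = exp(z²)·erfc(-z).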
Cressman, Erin K; Henriques, Denise Y P
2015-07-01
Visuomotor learning results in changes in both motor and sensory systems (Cressman EK, Henriques DY. J Neurophysiol 102: 3505-3518, 2009), such that reaches are adapted and sense of felt hand position recalibrated after reaching with altered visual feedback of the hand. Moreover, visuomotor learning has been shown to generalize such that reach adaptation achieved at a trained target location can influence reaches to novel target directions (Krakauer JW, Pine ZM, Ghilardi MF, Ghez C. J Neurosci 20: 8916-8924, 2000). We looked to determine whether proprioceptive recalibration also generalizes to novel locations. Moreover, we looked to establish the relationship between reach adaptation and changes in sense of felt hand position by determining whether proprioceptive recalibration generalizes to novel targets in a similar manner as reach adaptation. On training trials, subjects reached to a single target with aligned or misaligned cursor-hand feedback, in which the cursor was either rotated or scaled in extent relative to hand movement. After reach training, subjects reached to the training target and novel targets (including targets from a second start position) without visual feedback to assess generalization of reach adaptation. Subjects then performed a proprioceptive estimation task, in which they indicated the position of their hand relative to visual reference markers placed at similar locations as the trained and novel reach targets. Results indicated that shifts in hand position generalized across novel locations, independent of reach adaptation. Thus these distinct sensory and motor generalization patterns suggest that reach adaptation and proprioceptive recalibration arise from independent error signals and that changes in one system cannot guide adjustments in the other. Copyright © 2015 the American Physiological Society.
Adaptations of ArcGIS' Linear Referencing System to the Coastal Environment
DEFF Research Database (Denmark)
Balstrøm, Thomas
2008-01-01
For many years it has been problematic to store information for the coastal environment in a GIS. However, a system named "Linear Referencing System", based upon a dynamic segmentation principle implemented in ESRI's ArcGIS 9 software, has now made it possible to store and analyze information…
Dos Santos, P Lopes; Deshpande, Sunil; Rivera, Daniel E; Azevedo-Perdicoúlis, T-P; Ramos, J A; Younger, Jarred
2013-12-31
There is good evidence that naltrexone, an opioid antagonist, has a strong neuroprotective role and may be a potential drug for the treatment of fibromyalgia. In previous work, some of the authors used experimental clinical data to identify input-output linear time-invariant models that were used to extract useful information about the effect of this drug on fibromyalgia symptoms. Additional factors, such as anxiety, stress, mood, and headache, were considered as additive disturbances. However, it seems reasonable to think that these factors do not affect the drug action itself, but only the way in which a participant perceives how the drug acts on her. Under this hypothesis the linear time-invariant models can be replaced by state-space affine linear parameter-varying models where the disturbances are seen as a scheduling signal acting only on the parameters of the output equation. In this paper a new algorithm for identifying such a model is proposed. This algorithm minimizes a quadratic criterion of the output error. Since the output error is a linear function of some parameters, the affine linear parameter-varying system identification is formulated as a separable nonlinear least-squares problem. As in other identification algorithms using gradient optimization methods, several parameter derivatives are dynamical systems that must be simulated. In order to increase time efficiency, a canonical parametrization that minimizes the number of systems to be simulated is chosen. The effectiveness of the algorithm is assessed in a case study where an affine parameter-varying model is identified from the experimental data used in the previous study and compared with the time-invariant model.
International Nuclear Information System (INIS)
Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.
2014-01-01
Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography.
Jackson, Kate; Correia, Carlos; Lardière, Olivier; Andersen, Dave; Bradley, Colin
2015-01-15
We use a theoretical framework to analytically assess temporal prediction error functions on von Kármán turbulence when a zonal representation of wavefronts is assumed. The linear prediction models analyzed include auto-regressive models of an order up to three, bilinear interpolation functions, and a minimum mean square error predictor. This is an extension of the authors' previously published work, Correia et al. [J. Opt. Soc. Am. A 31, 101 (2014)], in which the efficacy of various temporal prediction models was established. Here we examine the tolerance of these algorithms to specific forms of model errors, thus defining the expected change in behavior of the previous results under less ideal conditions. Results show that ±100% wind speed error and ±50 deg are tolerable before the best linear predictor delivers poorer performance than the no-prediction case.
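The flavor of the linear temporal predictors analyzed here can be sketched with a least-squares AR(2) one-step predictor fitted to a synthetic damped oscillation; the turbulence statistics and model orders of the paper are not reproduced:

```python
def fit_ar2(series):
    """Least-squares one-step AR(2) fit: x[t] ≈ a1*x[t-1] + a2*x[t-2],
    solved via the 2x2 normal equations."""
    rows = [(series[t - 1], series[t - 2], series[t]) for t in range(2, len(series))]
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * r[2] for r in rows)
    b2 = sum(r[1] * r[2] for r in rows)
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

# Synthetic damped oscillation obeying x[t] = 1.5*x[t-1] - 0.9*x[t-2] exactly
x = [1.0, 0.8]
for _ in range(60):
    x.append(1.5 * x[-1] - 0.9 * x[-2])
print(fit_ar2(x))  # recovers (1.5, -0.9)
```

In an adaptive-optics setting the same least-squares machinery would be applied per zone of the wavefront, and model errors of the kind the paper studies (e.g. wrong wind speed) enter through a mismatch between the fitted and true dynamics.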
Energy Technology Data Exchange (ETDEWEB)
Sun, Winston Y. [Univ. of California, Berkeley, CA (United States)
1993-04-01
This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.
DEFF Research Database (Denmark)
Bergami, Leonardo; Poulsen, Niels Kjølstad
2015-01-01
The paper proposes a smart rotor configuration where adaptive trailing edge flaps (ATEFs) are employed for active alleviation of the aerodynamic loads on the blades of the NREL 5 MW reference turbine. The flaps extend for 20% of the blade length and are controlled by a linear quadratic (LQ...... signals described by simple functions of the blade azimuthal position are included in the identification to avoid biases from the periodic load variations observed on a rotating blade. The LQ controller uses the same periodic disturbance signals to handle anticipation of the loads periodic component...
Directory of Open Access Journals (Sweden)
Wanfang Shen
2012-01-01
Full Text Available The mathematical formulation for a quadratic optimal control problem governed by a linear quasiparabolic integrodifferential equation is studied. The control constraints are given in an integral sense: Uad = {u ∈ X; ∫ΩU u ⩾ 0, t ∈ [0,T]}. A posteriori error estimates in the L∞(0,T;H1(Ω))-norm and L2(0,T;L2(Ω))-norm are then derived for both the state and the control approximation.
Estimating organ doses from tube current modulated CT examinations using a generalized linear model.
Bostani, Maryam; McMillan, Kyle; Lu, Peiyun; Kim, Grace Hyun J; Cody, Dianna; Arbique, Gary; Greenberg, S Bruce; DeMarco, John J; Cagnon, Chris H; McNitt-Gray, Michael F
2017-04-01
Currently available computed tomography dose metrics are mostly based on fixed tube current Monte Carlo (MC) simulations and/or physical measurements such as the size specific dose estimate (SSDE). In addition to not being able to account for Tube Current Modulation (TCM), these dose metrics do not represent actual patient dose. The purpose of this study was to generate and evaluate a dose estimation model based on the Generalized Linear Model (GLM), which extends the ability to estimate organ dose from tube current modulated examinations by incorporating regional descriptors of patient size, scanner output, and other scan-specific variables as needed. The collection of a total of 332 patient CT scans at four different institutions was approved by each institution's IRB and used to generate and test organ dose estimation models. The patient population consisted of pediatric and adult patients and included thoracic and abdomen/pelvis scans. The scans were performed on three different CT scanner systems. Manual segmentation of organs, depending on the examined anatomy, was performed on each patient's image series. In addition to the collected images, detailed TCM data were collected for all patients scanned on Siemens CT scanners, while for all GE and Toshiba patients, data representing z-axis-only TCM, extracted from the DICOM header of the images, were used for TCM simulations. A validated MC dosimetry package was used to perform detailed simulation of CT examinations on all 332 patient models to estimate dose to each segmented organ (lungs, breasts, liver, spleen, and kidneys), denoted as reference organ dose values. Approximately 60% of the data were used to train a dose estimation model, while the remaining 40% were used to evaluate performance. Two different methodologies were explored using GLM to generate a dose estimation model: (a) using the conventional exponential relationship between normalized organ dose and size with regional water equivalent diameter
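The exponential size-dose relationship mentioned above can be sketched as a log-linear least-squares fit. The water-equivalent diameters, doses, and coefficients below are synthetic illustrations, not data or results from the study:

```python
import numpy as np

# Hedged sketch: fit dose = exp(a + b * WED) by ordinary least squares on the
# log scale. All values here are invented for illustration.
rng = np.random.default_rng(6)
wed = rng.uniform(15, 40, 100)                      # water-equivalent diameter, cm
dose = np.exp(3.0 - 0.05 * wed) * np.exp(0.05 * rng.standard_normal(100))

A = np.column_stack([np.ones_like(wed), wed])       # design matrix [1, WED]
coef, *_ = np.linalg.lstsq(A, np.log(dose), rcond=None)
a_hat, b_hat = coef                                 # recovered intercept and slope
```

A full GLM with a log link would weight observations differently than this log-scale least-squares shortcut, but the fitted exponential trend is the same idea.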
Gorissen, B.L.; Blanc, J.P.C.; den Hertog, D.; Ben-Tal, A.
We propose a new way to derive tractable robust counterparts of a linear program based on the duality between the robust (“pessimistic”) primal problem and its “optimistic” dual. First we obtain a new convex reformulation of the dual problem of a robust linear program, and then show how to construct
Generalized linear mixed models can detect unimodal species-environment relationships
Jamil, Tahira; Braak, ter C.J.F.
2013-01-01
Niche theory predicts that species occurrence and abundance show non-linear, unimodal relationships with respect to environmental gradients. Unimodal models, such as the Gaussian (logistic) model, are however more difficult to fit to data than linear ones, particularly in a multi-species context in
Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M
2017-06-01
Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
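A minimal sketch of the finite-mixture EM idea described above, using a Gaussian outcome for brevity (the paper's setting covers GLMs fitted with standard software; all data and values here are synthetic):

```python
import numpy as np

# Sketch: outcome y depends on an unobserved binary exposure z; only a
# predicted probability p = P(z=1) is available per subject. EM alternates
# between posterior class probabilities (E-step) and weighted fits (M-step).
rng = np.random.default_rng(1)
n = 2000
z = (rng.random(n) < 0.4).astype(float)                        # true, unobserved
p = np.clip(np.where(z == 1, 0.8, 0.2)
            + 0.1 * rng.standard_normal(n), 0.05, 0.95)        # predicted P(z=1)
y = 1.0 + 2.0 * z + 0.5 * rng.standard_normal(n)               # true effect = 2.0

def npdf(resid, s):
    return np.exp(-0.5 * (resid / s) ** 2) / (s * np.sqrt(2 * np.pi))

b0, b1, sigma = 0.0, 0.0, 1.0
for _ in range(200):
    # E-step: posterior probability that each subject is truly exposed
    l1 = p * npdf(y - b0 - b1, sigma)
    l0 = (1 - p) * npdf(y - b0, sigma)
    w = l1 / (l1 + l0)
    # M-step: closed-form weighted least squares over the two completed classes
    m1 = (w * y).sum() / w.sum()          # mean outcome among "exposed"
    b1 = (m1 - y.mean()) / (1 - w.mean())
    b0 = m1 - b1
    sigma = np.sqrt(np.mean(w * (y - b0 - b1) ** 2 + (1 - w) * (y - b0) ** 2))
```

With an informative predicted probability, the EM estimate of the exposure effect lands near the truth, whereas naive dichotomization of p would attenuate it.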
Protein structure validation by generalized linear model root-mean-square deviation prediction.
Bagaria, Anurag; Jaravine, Victor; Huang, Yuanpeng J; Montelione, Gaetano T; Güntert, Peter
2012-02-01
Large-scale initiatives for obtaining spatial protein structures by experimental or computational means have accentuated the need for the critical assessment of protein structure determination and prediction methods. These include blind test projects such as the critical assessment of protein structure prediction (CASP) and the critical assessment of protein structure determination by nuclear magnetic resonance (CASD-NMR). An important aim is to establish structure validation criteria that can reliably assess the accuracy of a new protein structure. Various quality measures derived from the coordinates have been proposed. A universal structural quality assessment method should combine multiple individual scores in a meaningful way, which is challenging because of their different measurement units. Here, we present a method based on a generalized linear model (GLM) that combines diverse protein structure quality scores into a single quantity with intuitive meaning, namely the predicted coordinate root-mean-square deviation (RMSD) value between the present structure and the (unavailable) "true" structure (GLM-RMSD). For two sets of structural models from the CASD-NMR and CASP projects, this GLM-RMSD value was compared with the actual accuracy given by the RMSD value to the corresponding, experimentally determined reference structure from the Protein Data Bank (PDB). The correlation coefficients between actual (model vs. reference from PDB) and predicted (model vs. "true") heavy-atom RMSDs were 0.69 and 0.76, for the two datasets from CASD-NMR and CASP, respectively, which is considerably higher than those for the individual scores (-0.24 to 0.68). The GLM-RMSD can thus predict the accuracy of protein structures more reliably than individual coordinate-based quality scores. Copyright © 2011 The Protein Society.
Modeling psychophysical data at the population-level: the generalized linear mixed model.
Moscatelli, Alessandro; Mezzetti, Maura; Lacquaniti, Francesco
2012-10-25
In psychophysics, researchers usually apply a two-level model for the analysis of the behavior of the single subject and the population. This classical model has two main disadvantages. First, the second level of the analysis discards information on trial repetitions and subject-specific variability. Second, the model does not easily allow assessing the goodness of fit. As an alternative to this classical approach, here we propose the Generalized Linear Mixed Model (GLMM). The GLMM separately estimates the variability of fixed and random effects, it has a higher statistical power, and it allows an easier assessment of the goodness of fit compared with the classical two-level model. GLMMs have been frequently used in many disciplines since the 1990s; however, they have been rarely applied in psychophysics. Furthermore, to our knowledge, the issue of estimating the point-of-subjective-equivalence (PSE) within the GLMM framework has never been addressed. Therefore the article has two purposes: It provides a brief introduction to the usage of the GLMM in psychophysics, and it evaluates two different methods to estimate the PSE and its variability within the GLMM framework. We compare the performance of the GLMM and the classical two-level model on published experimental data and simulated data. We report that the estimated values of the parameters were similar between the two models and Type I errors were below the confidence level in both models. However, the GLMM has a higher statistical power than the two-level model. Moreover, one can easily compare the fit of different GLMMs according to different criteria. In conclusion, we argue that the GLMM can be a useful method in psychophysics.
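The PSE estimate discussed above can be illustrated on a single simulated observer: fit a logistic psychometric function by Newton/IRLS and take PSE = -intercept/slope. This is a single-subject sketch with invented stimulus levels, not a GLMM with random effects:

```python
import numpy as np

# Sketch only: simulate binary responses from a logistic psychometric
# function, fit it by iteratively reweighted least squares, recover the PSE.
rng = np.random.default_rng(2)
levels = np.linspace(-2.0, 2.0, 9)
x = np.repeat(levels, 40)                            # 40 trials per level
true_pse, true_slope = 0.5, 2.0
prob = 1.0 / (1.0 + np.exp(-true_slope * (x - true_pse)))
y = (rng.random(x.size) < prob).astype(float)        # simulated responses

# Newton/IRLS iterations for the logistic fit y ~ b0 + b1 * x
X = np.column_stack([np.ones_like(x), x])
b = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ b)))
    grad = X.T @ (y - mu)
    H = X.T @ (X * (mu * (1.0 - mu))[:, None])       # Fisher information
    b = b + np.linalg.solve(H, grad)

pse = -b[0] / b[1]                                   # point of subjective equivalence
```

In a GLMM the same ratio is formed from the fixed-effect estimates, with the delta method or bootstrap used for its variability, as the article discusses.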
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the difference in effects between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of the design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Generalized Functional Linear Models for Gene-based Case-Control Association Studies
Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao
2014-01-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683
The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.
Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun
2017-01-01
Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
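The cyclic coordinate descent loop at the core of such algorithms can be sketched for a plain lasso linear model; the spike-and-slab extension (EM-adapted, coefficient-specific penalties) is not shown, and all data here are synthetic:

```python
import numpy as np

# Sketch: cyclic coordinate descent for lasso-penalized least squares,
# minimizing (1/2n)||y - X b||^2 + lam * ||b||_1 with soft-thresholding.
rng = np.random.default_rng(5)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]                 # a few real effects, rest zero
y = X @ beta_true + 0.5 * rng.standard_normal(n)

lam = 0.1                                        # lasso penalty
beta = np.zeros(p)
resid = y - X @ beta
col_ss = (X ** 2).sum(axis=0)
for _ in range(100):                             # full sweeps over coordinates
    for j in range(p):
        resid = resid + X[:, j] * beta[j]        # put coefficient j back
        rho = X[:, j] @ resid / n
        beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / (col_ss[j] / n)
        resid = resid - X[:, j] * beta[j]        # remove updated fit
```

The spike-and-slab lasso replaces the single `lam` with a per-coefficient penalty that EM steps re-estimate between sweeps, so large effects are shrunk weakly and noise coefficients strongly.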
The Adapted Ordering Method for Lie algebras and superalgebras and their generalizations
Energy Technology Data Exchange (ETDEWEB)
Gato-Rivera, Beatriz [Instituto de Matematicas y Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); NIKHEF-H, Kruislaan 409, NL-1098 SJ Amsterdam (Netherlands)]
2008-02-01
In 1998 the Adapted Ordering Method was developed for the representation theory of the superconformal algebras in two dimensions. It allows us to determine maximal dimensions for a given type of space of singular vectors, to identify all singular vectors by only a few coefficients, to spot subsingular vectors and to set the basis for constructing embedding diagrams. In this paper we present the Adapted Ordering Method for general Lie algebras and superalgebras and their generalizations, provided they can be triangulated. We also review briefly the results obtained for the Virasoro algebra and for the N = 2 and Ramond N = 1 superconformal algebras.
Ma, Yaping; Xiao, Yegui; Wei, Guo; Sun, Jinwei
2016-01-01
In this paper, a multichannel nonlinear adaptive noise canceller (ANC) based on the generalized functional link artificial neural network (FLANN, GFLANN) is proposed for fetal electrocardiogram (FECG) extraction. A FIR filter and a GFLANN are equipped in parallel in each reference channel to respectively approximate the linearity and nonlinearity between the maternal ECG (MECG) and the composite abdominal ECG (AECG). A fast scheme is also introduced to reduce the computational cost of the FLANN and the GFLANN. Two (2) sets of ECG time sequences, one synthetic and one real, are utilized to demonstrate the improved effectiveness of the proposed nonlinear ANC. The real dataset is derived from the Physionet non-invasive FECG database (PNIFECGDB) including 55 multichannel recordings taken from a pregnant woman. It contains two subdatasets that consist of 14 and 8 recordings, respectively, with each recording being 90 s long. Simulation results based on these two datasets reveal, on the whole, that the proposed ANC does enjoy higher capability to deal with nonlinearity between MECG and AECG as compared with previous ANCs in terms of fetal QRS (FQRS)-related statistics and morphology of the extracted FECG waveforms. In particular, for the second real subdataset, the F1-measure results produced by the PCA-based template subtraction (TSpca) technique and six (6) single-reference channel ANCs using LMS- and RLS-based FIR filters, Volterra filter, FLANN, GFLANN, and adaptive echo state neural network (ESNa) are 92.47%, 93.70%, 94.07%, 94.22%, 94.90%, 94.90%, and 95.46%, respectively. The same F1-measure statistical results from five (5) multi-reference channel ANCs (LMS- and RLS-based FIR filters, Volterra filter, FLANN, and GFLANN) for the second real subdataset turn out to be 94.08%, 94.29%, 94.68%, 94.91%, and 94.96%, respectively. These results indicate that the ESNa and GFLANN perform best, with the ESNa being slightly better than the GFLANN but about four times more
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
Directory of Open Access Journals (Sweden)
K. R. Subhashini
2014-01-01
synthesis is termed the variation in element excitation amplitude, and nonlinear synthesis is the process of varying element angular position. Both ADE and AFA are high-performance stochastic evolutionary algorithms used to solve N-dimensional problems. These methods are used to determine a set of parameters of antenna elements that provide the desired radiation pattern. The effectiveness of the algorithms for the design of conformal antenna arrays is shown by means of numerical results. Comparisons with other methods are made whenever possible. The results reveal that nonlinear synthesis, aided by the discussed techniques, provides considerable enhancements compared to linear synthesis.
Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza
2017-03-01
Linear-array-based photoacoustic computed tomography is a popular methodology for deep and high resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as the minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm which uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over the existing reconstruction methods.
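Coherence factor weighting can be sketched on toy, delay-aligned channel data: samples that are coherent across the array keep weights near 1, while noise-dominated samples are suppressed toward 1/N. This is an illustrative sketch, not the authors' full-view HOG-based algorithm:

```python
import numpy as np

# Sketch: delay-and-sum (DAS) beamforming with coherence factor weighting.
# Channel data are synthetic and assumed already delay-aligned on the target.
rng = np.random.default_rng(3)
n_ch, n_t = 32, 200
signal = np.zeros(n_t)
signal[100] = 1.0                                  # point target after alignment
channels = signal[None, :] + 0.3 * rng.standard_normal((n_ch, n_t))

das = channels.mean(axis=0)                        # plain delay-and-sum output
num = np.abs(channels.sum(axis=0)) ** 2            # coherent sum power
den = n_ch * (np.abs(channels) ** 2).sum(axis=0)   # incoherent sum power
cf = num / np.maximum(den, 1e-12)                  # coherence factor in [0, 1]
weighted = cf * das                                # CF-weighted image line
```

At the target sample the channels add in phase and CF approaches 1; at noise-only samples CF hovers near 1/N, which is what suppresses side-lobes and aberration artifacts.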
Robinson, Tyler D.; Crisp, David
2018-05-01
Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state: the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach and then apply it to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
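The core linearization idea can be sketched with a toy stand-in for the full-physics model: precompute fluxes and a finite-difference Jacobian at a reference state, then adapt fluxes to nearby states linearly rather than re-running the expensive model (the function and numbers below are invented for illustration):

```python
import numpy as np

# Sketch: linear adaptation of "flux" outputs around a reference state.
def full_model(state):
    """Stand-in for an expensive radiative transfer calculation (invented)."""
    return np.array([np.exp(-state[0]) * state[1], state[0] * np.sqrt(state[1])])

state0 = np.array([0.5, 2.0])
flux0 = full_model(state0)                         # reference fluxes

eps = 1e-6
J = np.zeros((2, 2))
for j in range(2):                                 # finite-difference Jacobian
    d = np.zeros(2)
    d[j] = eps
    J[:, j] = (full_model(state0 + d) - flux0) / eps

new_state = state0 + np.array([0.01, -0.02])       # small state perturbation
flux_lin = flux0 + J @ (new_state - state0)        # cheap linear adaptation
flux_ref = full_model(new_state)                   # expensive recomputation
```

For small perturbations the linear update agrees with the full recomputation to second order, which is what makes derivative-based flux adaptation attractive inside climate model time stepping.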
Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission
Directory of Open Access Journals (Sweden)
Tarek Chehade
2015-01-01
Full Text Available In multiple-input multiple-output (MIMO transmission systems, the channel state information (CSI at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly join precoded closed-loop MIMO systems and nonbinary low density parity check (NB-LDPC. The q elements in the Galois field, GF(q, are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to perfectly fit with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin criterion. These results are theoretically supported by extrinsic information transfer (EXIT analysis and are confirmed by numerical simulations.
International Nuclear Information System (INIS)
Xu Yuhua; Zhou Wuneng; Fang Jian'an; Lu Hongqian
2009-01-01
This Letter proposes an approach to identify the topological structure and unknown parameters for uncertain general complex networks simultaneously. By designing effective adaptive controllers, we achieve synchronization between two complex networks. The unknown network topological structure and system parameters of uncertain general complex dynamical networks are identified simultaneously in the process of synchronization. Several useful criteria for synchronization are given. Finally, an illustrative example is presented to demonstrate the application of the theoretical results.
Directory of Open Access Journals (Sweden)
Bahita Mohamed
2011-01-01
Full Text Available In this work, we introduce an adaptive neural network controller for a class of nonlinear systems. The approach uses two Radial Basis Functions, RBF networks. The first RBF network is used to approximate the ideal control law which cannot be implemented since the dynamics of the system are unknown. The second RBF network is used for on-line estimating the control gain which is a nonlinear and unknown function of the states. The updating laws for the combined estimator and controller are derived through Lyapunov analysis. Asymptotic stability is established with the tracking errors converging to a neighborhood of the origin. Finally, the proposed method is applied to control and stabilize the inverted pendulum system.
Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Caillet, Vincent; Hewson, Emily; Poulsen, Per Rugaard; Bromley, Regina; Bell, Linda; Eade, Thomas; Kneebone, Andrew; Martin, Jarad; Booth, Jeremy T
2018-02-07
Until now, real-time image guided adaptive radiation therapy (IGART) has been the domain of dedicated cancer radiotherapy systems. The purpose of this study was to clinically implement and investigate real-time IGART using a standard linear accelerator. We developed and implemented two real-time technologies for standard linear accelerators: (1) Kilovoltage Intrafraction Monitoring (KIM) that finds the target and (2) multileaf collimator (MLC) tracking that aligns the radiation beam to the target. Eight prostate SABR patients were treated with this real-time IGART technology. The feasibility, geometric accuracy and the dosimetric fidelity were measured. Thirty-nine out of forty fractions with real-time IGART were successful (95% confidence interval 87-100%). The geometric accuracy of the KIM system was -0.1 ± 0.4, 0.2 ± 0.2 and -0.1 ± 0.6 mm in the LR, SI and AP directions, respectively. The dose reconstruction showed that real-time IGART more closely reproduced the planned dose than that without IGART. For the largest motion fraction, with real-time IGART 100% of the CTV received the prescribed dose; without real-time IGART only 95% of the CTV would have received the prescribed dose. The clinical implementation of real-time image-guided adaptive radiotherapy on a standard linear accelerator using KIM and MLC tracking is feasible. This achievement paves the way for real-time IGART to be a mainstream treatment option. Copyright © 2018 Elsevier B.V. All rights reserved.
Adaptive vision-based control of an unmanned aerial vehicle without linear velocity measurements.
Jabbari Asl, Hamed; Yoon, Jungwon
2016-11-01
In this paper, an image-based visual servo controller is designed for an unmanned aerial vehicle. The main objective is to use the flow of image features as the velocity cue to compensate for the low quality of linear velocity information obtained from accelerometers. Nonlinear observers are designed to estimate this flow. The proposed controller is bounded, which can help to keep the target points in the field of view of the camera. The main advantages over previous full dynamic observer-based methods are that the controller is robust with respect to unknown image depth and that no yaw information is required. The complete stability analysis is presented and asymptotic convergence of the error signals is guaranteed. Simulation results show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Dauda GuliburYAKUBU
2012-12-01
Full Text Available Accurate solutions to initial value systems of ordinary differential equations may be approximated efficiently by Runge-Kutta methods or linear multistep methods, each of which has limitations of one sort or another. In this paper we consider, as a middle ground, the derivation of continuous general linear methods for the solution of stiff systems of initial value problems in ordinary differential equations. These methods are designed to combine the advantages of both Runge-Kutta and linear multistep methods. In particular, methods possessing the property of A-stability are identified as promising methods within this large class of general linear methods. We show that the continuous general linear methods are self-starting and better able to solve stiff systems of ordinary differential equations than the discrete ones: the initial value systems are solved directly, without recourse to any other method to start the integration process. This desirable feature of the proposed approach leads to very high accuracy in the solution of the given problem. Illustrative examples are given to demonstrate the novelty and reliability of the methods.
Ferreira-Ferreira, J.; Francisco, M. S.; Silva, T. S. F.
2017-12-01
Amazon floodplains play an important role in biodiversity maintenance and provide important ecosystem services. Flood duration is the prime factor modulating biogeochemical cycling in Amazonian floodplain systems, as well as influencing ecosystem structure and function. However, due to the absence of accurate terrain information, fine-scale hydrological modeling is still not possible for most of the Amazon floodplains, and little is known regarding the spatio-temporal behavior of flooding in these environments. Our study presents a new approach for spatial modeling of flood duration, using Synthetic Aperture Radar (SAR) and Generalized Linear Modeling. Our focal study site was the Mamirauá Sustainable Development Reserve, in the Central Amazon. We acquired a series of L-band ALOS-1/PALSAR Fine-Beam mosaics, chosen to capture the widest possible range of river stage heights at regular intervals. We then mapped flooded area on each image, and used the resulting binary maps as the response variable (flooded/non-flooded) for multiple logistic regression. Explanatory variables were accumulated precipitation in the 15 days prior to each image acquisition date, the water stage height recorded at the Mamirauá lake gauging station on each acquisition date, Euclidean distance from the nearest drainage, and slope, terrain curvature, profile curvature, planform curvature, and Height Above the Nearest Drainage (HAND) derived from the 30-m SRTM DEM. Model results were validated against water levels recorded by ten pressure transducers installed within the floodplains from 2014 to 2016. The most accurate model included water stage height and HAND as explanatory variables, yielding an RMSE of ±38.73 days of flooding per year when compared to the ground validation sites. The largest disagreements were 57 days and 83 days for two validation sites, while the remaining locations achieved absolute errors lower than 38 days. In five out of nine validation sites, the model predicted flood durations with
Peyton Jones, James C; Muske, Kenneth R
2009-10-01
Linear look-up tables are widely used to approximate and characterize complex nonlinear functional relationships between system input and output. However, both the initial calibration and subsequent real-time adaptation of these tables can be time consuming and prone to error as a result of the large number of table parameters typically required to map the system and the uncertainties and noise in the experimental data on which the calibration is based. In this paper, a new method is presented for identifying or adapting the look-up table parameters using a recursive least-squares based approach. The new method differs from standard recursive least squares algorithms because it exploits the structure of the look-up table equations in order to perform the identification process in a way that is highly computationally and memory efficient. The technique can therefore be implemented within the constraints of typical embedded applications. In the present study, the technique is applied to the identification of the volumetric efficiency look-up table commonly used in gasoline engine fueling strategies. The technique is demonstrated on a Ford 2.0L I4 Duratec engine using time-delayed feedback from a sensor in the exhaust manifold in order to adapt the table parameters online.
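The underlying recursion is standard recursive least squares with a forgetting factor; the paper's contribution is exploiting the look-up-table structure for memory and compute efficiency, which this generic sketch does not reproduce (the two-parameter plant y = 2·x1 + 3·x2 and the forgetting factor are illustrative):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam.

    theta: (n, 1) parameter estimate, P: (n, n) inverse-correlation matrix,
    phi: (n, 1) regressor vector, y: scalar measurement.
    """
    denom = lam + (phi.T @ P @ phi).item()
    k = P @ phi / denom                      # gain vector
    e = y - (phi.T @ theta).item()           # a priori prediction error
    theta = theta + k * e
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Identify y = 2*x1 + 3*x2 from noisy samples (stand-in for table parameters).
rng = np.random.default_rng(1)
theta = np.zeros((2, 1))
P = 1000.0 * np.eye(2)                       # large initial P: uninformative prior
true = np.array([[2.0], [3.0]])
for _ in range(300):
    phi = rng.normal(size=(2, 1))
    y = (phi.T @ true).item() + rng.normal(scale=0.01)
    theta, P = rls_update(theta, P, phi, y)
```

In the table-adaptation setting, phi would hold the interpolation weights of the current operating point, so only a few entries are nonzero at each step — the sparsity the structured algorithm exploits.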
Directory of Open Access Journals (Sweden)
Guangtao Chen
2018-03-01
Full Text Available Functional electrical stimulation (FES is important in gait rehabilitation for patients with dropfoot. Since there are time-varying velocities during FES-assisted walking, it is difficult to achieve a good movement performance during walking. To account for the time-varying walking velocities, seven poststroke subjects were recruited and fuzzy logic control and a linear model were applied in FES-assisted walking to enable intensity- and duration-adaptive stimulation (IDAS for poststroke subjects with dropfoot. In this study, the performance of IDAS was evaluated using kinematic data, and was compared with the performance under no stimulation (NS, FES-assisted walking triggered by heel-off stimulation (HOS, and speed-adaptive stimulation. A larger maximum ankle dorsiflexion angle in the IDAS condition than those in other conditions was observed. The ankle plantar flexion angle in the IDAS condition was similar to that of normal walking. Improvement in the maximum ankle dorsiflexion and plantar flexion angles in the IDAS condition could be attributed to having the appropriate stimulation intensity and duration. In summary, the intensity- and duration-adaptive controller can attain better movement performance and may have great potential in future clinical applications.
Yoo, Yun Joo; Sun, Lei; Poirier, Julia G; Paterson, Andrew D; Bull, Shelley B
2017-02-01
By jointly analyzing multiple variants within a gene, instead of one at a time, gene-based multiple regression can improve power, robustness, and interpretation in genetic association analysis. We investigate multiple linear combination (MLC) test statistics for analysis of common variants under realistic trait models with linkage disequilibrium (LD) based on HapMap Asian haplotypes. MLC is a directional test that exploits LD structure in a gene to construct clusters of closely correlated variants recoded such that the majority of pairwise correlations are positive. It combines variant effects within the same cluster linearly, and aggregates cluster-specific effects in a quadratic sum of squares and cross-products, producing a test statistic with reduced degrees of freedom (df) equal to the number of clusters. By simulation studies of 1000 genes from across the genome, we demonstrate that MLC is a well-powered and robust choice among existing methods across a broad range of gene structures. Compared to minimum P-value, variance-component, and principal-component methods, the mean power of MLC is never much lower than that of other methods, and can be higher, particularly with multiple causal variants. Moreover, the variation in gene-specific MLC test size and power across 1000 genes is less than that of other methods, suggesting it is a complementary approach for discovery in genome-wide analysis. The cluster construction of the MLC test statistics helps reveal within-gene LD structure, allowing interpretation of clustered variants as haplotypic effects, while multiple regression helps to distinguish direct and indirect associations. © 2016 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
DEFF Research Database (Denmark)
Brooks, Mollie Elizabeth; Kristensen, Kasper; van Benthem, Koen J.
2017-01-01
Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmm...
Raymond L. Czaplewski
1973-01-01
A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...
Wang, Bing; Ninomiya, Yasuharu; Tanaka, Kaoru; Maruyama, Kouichi; Varès, Guillaume; Eguchi-Kasai, Kiyomi; Nenoi, Mitsuru
2012-12-01
Adaptive response (AR) of low linear energy transfer (LET) irradiations for protection against teratogenesis induced by high LET irradiations is not well documented. In this study, induction of AR by X-rays against teratogenesis induced by accelerated heavy ions was examined in fetal mice. Irradiations of pregnant C57BL/6J mice were performed by delivering a priming low dose from X-rays at 0.05 or 0.30 Gy on gestation day 11 followed one day later by a challenge high dose from either X-rays or accelerated heavy ions. Monoenergetic beams of carbon, neon, silicon, and iron with the LET values of about 15, 30, 55, and 200 keV/μm, respectively, were examined. Significant suppression of teratogenic effects (fetal death, malformation of live fetuses, or low body weight) was used as the endpoint for judgment of a successful AR induction. Existence of AR induced by low-LET X-rays against teratogenic effect induced by high-LET accelerated heavy ions was demonstrated. The priming low dose of X-rays significantly reduced the occurrence of prenatal fetal death, malformation, and/or low body weight induced by the challenge high dose from either X-rays or accelerated heavy ions of carbon, neon or silicon but not iron particles. Successful AR induction appears to be a radiation quality event, depending on the LET value and/or the particle species of the challenge irradiations. These findings would provide a new insight into the study on radiation-induced AR in utero. © 2012 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Priyantha Wijayatunga
2016-06-01
Full Text Available Measuring the strength or degree of statistical dependence between two random variables is a common problem in many domains. Pearson's correlation coefficient ρ is an accurate measure of linear dependence. We show that ρ is a normalized, Euclidean-type distance between the joint probability distribution of the two random variables and the distribution obtained when their independence is assumed while keeping their marginal distributions. The normalizing constant is the geometric mean of two maximal distances: each between the joint probability distribution under full linear dependence, with the respective marginal distributions preserved, and the distribution under independence. Its usage is restricted to linear dependence because it is based on Euclidean-type distances that are generally not metrics, and the full dependence considered is linear. Therefore, we argue that if a suitable distance metric is used while considering all possible maximal dependences, then it can measure any non-linear dependence. But then one must define all the full dependences. The Hellinger distance, which is a metric, can be used as the distance measure between probability distributions to obtain a generalization of ρ for the discrete case.
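For the discrete case, the core quantity can be sketched directly: the Hellinger distance between the joint pmf and the product of its marginals, which is zero exactly under independence (a minimal illustration; the paper additionally normalizes by maximal-dependence distances):

```python
import numpy as np

def hellinger_dependence(joint):
    """Hellinger distance between a discrete joint pmf (2-D array) and the
    product of its marginals; equals 0 iff the two variables are independent."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)    # marginal of X
    py = joint.sum(axis=0, keepdims=True)    # marginal of Y
    indep = px * py                          # joint pmf under independence
    return np.sqrt(0.5 * np.sum((np.sqrt(joint) - np.sqrt(indep)) ** 2))

indep_joint = np.outer([0.5, 0.5], [0.3, 0.7])   # X and Y independent
diag_joint = np.array([[0.5, 0.0],               # X = Y: full dependence
                       [0.0, 0.5]])
```

Because the Hellinger distance is a proper metric and bounded by 1, this quantity behaves as a normalized dependence measure without requiring dependence to be linear.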
Li, Xianwei; Gao, Huijun; Yu, Xinghuo
2011-10-01
In this paper, the robust global asymptotic stability (RGAS) of generalized static neural networks (SNNs) with linear fractional uncertainties and a constant or time-varying delay is investigated within a novel input-output framework. The activation functions in the model are assumed to satisfy a more general condition than the usually used Lipschitz-type ones. First, by four steps of technical transformations, the original generalized SNN model is equivalently converted into the interconnection of two subsystems, where the forward one is a linear time-invariant system with a constant delay while the feedback one bears the norm-bounded property. Then, based on the scaled small gain theorem, delay-dependent sufficient conditions for the RGAS of generalized SNNs are derived via combining a complete Lyapunov functional and the celebrated discretization scheme. All the results are given in terms of linear matrix inequalities so that the RGAS problem of generalized SNNs is projected into the feasibility of convex optimization problems that can be readily solved by effective numerical algorithms. The effectiveness and superiority of our results over the existing ones are demonstrated by two numerical examples.
Recent advances toward a general purpose linear-scaling quantum force field.
Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M
2014-09-16
Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as ”the pre-image problem”. Since the feature space mapping is typi...
A general solution for a class of weakly constrained linear regression problems
ten Berge, J.M.F.
1991-01-01
This paper contains a globally optimal solution for a class of functions composed of a linear regression function and a penalty function for the sum of squared regression weights. Global optimality is obtained from inequalities rather than from partial derivatives of a Lagrangian function.
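In its simplest instance, the penalized objective described above reduces to the familiar ridge-type problem, whose global optimum follows from the normal equations; a minimal sketch on synthetic data (the paper's class of weakly constrained problems is more general):

```python
import numpy as np

def penalized_ls(X, y, k):
    """Global minimizer of ||y - X b||^2 + k * ||b||^2 via the normal equations."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

b0 = penalized_ls(X, y, 0.0)     # k = 0: ordinary least squares
b5 = penalized_ls(X, y, 5.0)     # larger k shrinks the regression weights
```

The solution is globally optimal because the objective is strictly convex for k > 0, which is the inequality-based argument the paper formalizes without Lagrangian derivatives.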
Motor learning and general adaptation syndrome
Directory of Open Access Journals (Sweden)
E. M. Ordoño
2010-09-01
Full Text Available
This work examines the General Adaptation Syndrome as a suitable framework for explaining motor learning processes. Human motor behaviour is viewed as a complex system continuously interacting with its environment. Motor learning is proposed as a process of adaptation to task constraints. Training load and practice load are considered analogous. Practice is the vehicle of learning, but it must be applied with a sufficient load to produce an adaptation to a new level of performance. The principles of sport training are presented in relation to motor learning topics. Common principles are proposed to explain the learning of motor skills, regardless of task complexity and performer level, providing basic criteria that should help in designing learning tasks.
Key Words: Motor learning, adaptation, complex systems, training, motor skills.
Sugihara, Toshio; Yokoyama, Akihiko; Izena, Atsushi
In this study, an adaptive PSS using measurable state variables at generator buses is developed. The PSS parameters are tuned based on eigenvalue analysis of a simple low-order linear model of each generator obtained by identification. The low-order model consists of the PSS block diagram and the relationship from PSS output to PSS input, with a limited set of variables identified by the least-squares method using ΔPe and Δω measured at each generator bus. The identification for PSS parameter tuning is repeated, and the PSS parameters are retuned every second to keep the power system stable. Digital simulations for transient stability analysis are carried out on the IEEJ WEST 10-machine system model. They show that stability is improved only when the dominant oscillation is identified at the generator bus.
Linder, Mats; Ranganathan, Anirudh; Brinck, Tore
2013-02-12
We present a structure-based parametrization of the Linear Interaction Energy (LIE) method and show that it allows for the prediction of absolute protein-ligand binding energies. We call the new model "Adapted" LIE (ALIE) because the α and β coefficients are defined by system-dependent descriptors and therefore do not require any empirical γ term. The best formulation attains a mean average deviation of 1.8 kcal/mol for a diverse test set and depends on only one fitted parameter. It is robust with respect to additional fitting and cross-validation. We compare this new approach with standard LIE by Åqvist and co-workers and the LIE + γSASA model (initially suggested by Jorgensen and co-workers) against in-house and external data sets and discuss their applicabilities.
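For reference, the standard LIE estimate that ALIE adapts has the linear form ΔG_bind ≈ α·Δ⟨V_vdW⟩ + β·Δ⟨V_el⟩ + γ. A minimal sketch of that arithmetic — the coefficient values and energy inputs below are illustrative assumptions, not the paper's fitted parameters:

```python
def lie_binding_energy(d_vdw, d_elec, alpha=0.18, beta=0.5, gamma=0.0):
    """Standard LIE form: dG = alpha * d<V_vdW> + beta * d<V_el> + gamma.

    d_vdw, d_elec: differences in average ligand-surroundings interaction
    energies (kcal/mol) between the bound and free-in-solvent simulations.
    alpha is empirical; beta = 0.5 is the linear-response value.
    """
    return alpha * d_vdw + beta * d_elec + gamma

# Illustrative numbers only: 0.18*(-10) + 0.5*(-8) = -5.8 kcal/mol
dG = lie_binding_energy(-10.0, -8.0)
```

In ALIE, α and β would instead be computed from system-dependent structural descriptors, removing the need for the constant γ offset.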
Directory of Open Access Journals (Sweden)
Luis Gavete
2018-01-01
Full Text Available We apply a 3D adaptive refinement procedure using the meshless generalized finite difference method for solving elliptic partial differential equations. This adaptive refinement, based on an octree structure, allows nodes to be added in a regular way in order to obtain smooth transitions between different nodal densities in the model. For this purpose, we define an error indicator as the stopping condition of the refinement, a criterion for choosing the nodes with the highest errors, and a limit on the number of nodes to be added in each adaptive stage. Equations of this kind often appear in engineering problems such as the simulation of heat conduction, electrical potential, seepage through porous media, or irrotational flow of fluids. The numerical results show the high accuracy obtained.
Large deformation image classification using generalized locality-constrained linear coding.
Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian
2013-01-01
Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration, and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest to use initial momentum instead as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique-locality-constrained linear coding (LLC)--can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.
Gális, Martin; Moczo, Peter; Kristek, Jozef; Kristekova, Miriam
2010-05-01
We present an adaptive smoothing algorithm for reducing spurious high-frequency oscillations of the slip-rate time histories in the finite-element—traction-at-split-node modeling of dynamic rupture propagation on planar faults with the linear slip-weakening friction law. The algorithm spatially smoothes trial traction on the fault plane. The smoothed value of the trial traction at a grid point and time level is calculated if the slip is larger than 0 simultaneously at the grid point and 8 neighboring grid points on the fault. The smoothed value is a weighted average of the Gaussian-filtered and unfiltered values. The weighting coefficients vary with slip. Numerical tests for different rupture propagation conditions demonstrate that the adaptive smoothing algorithm effectively reduces spurious high-frequency oscillations of the slip-rate time histories without affecting rupture time. The algorithm does not need an artificial damping term in the equation of motion. We implemented the smoothing algorithm in the finite-element part of the 3D hybrid finite-difference—finite-element method. This makes it possible to efficiently simulate dynamic rupture propagation inside a finite-element sub-domain surrounded by the finite-difference sub-domain covering major part of the whole computational domain.
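The blending rule described above — a weighted average of Gaussian-filtered and unfiltered trial traction, with weights varying with slip — can be sketched as follows. This is a simplified illustration: the kernel, the linear weight ramp with characteristic slip `d_c`, and the slip-only gating (the paper also requires slip > 0 at the 8 neighbouring gridpoints) are assumptions:

```python
import numpy as np

def adaptive_smooth(traction, slip, d_c=0.4):
    """Weighted average of Gaussian-filtered and raw trial traction on a fault grid.

    The weight of the filtered value grows with slip (linearly up to d_c,
    an illustrative choice). Smoothing is applied only where slip > 0.
    """
    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]]) / 16.0            # 3x3 Gaussian filter
    pad = np.pad(traction, 1, mode="edge")
    filt = np.empty_like(traction)
    for i in range(traction.shape[0]):
        for j in range(traction.shape[1]):
            filt[i, j] = np.sum(pad[i:i + 3, j:j + 3] * kernel)
    w = np.clip(slip / d_c, 0.0, 1.0)                   # weight varies with slip
    return np.where(slip > 0.0, (1.0 - w) * traction + w * filt, traction)
```

Because the weight vanishes where slip is zero, unruptured parts of the fault are untouched, which is how the scheme avoids affecting rupture times while damping spurious oscillations behind the front.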
Galis, Martin; Moczo, Peter; Kristek, Jozef; Kristekova, Miriam
2010-01-01
We present an adaptive smoothing algorithm for reducing spurious high-frequency oscillations of the slip-rate time histories in the finite-element (FE)-traction-at-split-node modelling of dynamic rupture propagation on planar faults with the linear slip-weakening friction law. The algorithm spatially smoothes trial traction on the fault plane. The smoothed value of the trial traction at a gridpoint and time level is calculated if the slip is larger than 0 simultaneously at the gridpoint and eight neighbouring gridpoints on the fault. The smoothed value is a weighted average of the Gaussian-filtered and unfiltered values. The weighting coefficients vary with slip. Numerical tests for different rupture propagation conditions demonstrate that the adaptive smoothing algorithm effectively reduces spurious high-frequency oscillations of the slip-rate time histories without affecting rupture time. The algorithm does not need an artificial damping term in the equation of motion. We implemented the smoothing algorithm in the FE part of the 3-D hybrid finite-difference (FD)-FE method. This makes it possible to efficiently simulate dynamic rupture propagation inside a FE subdomain surrounded by the FD subdomain covering major part of the whole computational domain.
Non-linear partial differential equations an algebraic view of generalized solutions
Rosinger, Elemer E
1990-01-01
A massive transition of interest from solving linear partial differential equations to solving nonlinear ones has taken place during the last two or three decades. The availability of better computers has often made numerical experimentation progress faster than the theoretical understanding of nonlinear partial differential equations. The three most important nonlinear phenomena observed so far, both experimentally and numerically, and studied theoretically in connection with such equations, have been solitons, shock waves, and turbulence or chaotic processes. In many ways, these phenomen
Continuity and general perturbation of the Drazin inverse for closed linear operators
Directory of Open Access Journals (Sweden)
N. Castro González
2002-01-01
Full Text Available We study perturbations and continuity of the Drazin inverse of a closed linear operator A and obtain explicit error estimates in terms of the gap between closed operators and the gap between ranges and nullspaces of operators. The results are used to derive a theorem on the continuity of the Drazin inverse for closed operators and to describe the asymptotic behavior of operator semigroups.
Generalized linear differential equations in a Banach space : continuous dependence on a parameter
Czech Academy of Sciences Publication Activity Database
Monteiro, G.A.; Tvrdý, Milan
2013-01-01
Roč. 33, č. 1 (2013), s. 283-303 ISSN 1078-0947 Institutional research plan: CEZ:AV0Z10190503 Keywords : generalized differential equations * continuous dependence * Kurzweil-Stieltjes integral Subject RIV: BA - General Mathematics Impact factor: 0.923, year: 2013 http://aimsciences.org/journals/displayArticlesnew.jsp?paperID=7615
Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models
DEFF Research Database (Denmark)
Lanne, Markku; Nyberg, Henri
We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use...
Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove
2018-02-01
We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested and no other choices with regards to the fit-basis functions need to be taken. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
Impact of co-channel interference on the performance of adaptive generalized transmit beamforming
Radaydeh, Redha Mahmoud Mesleh
2011-08-01
The impact of co-channel interference on the performance of adaptive generalized transmit beamforming for low-complexity multiple-input single-output (MISO) configuration is investigated. The transmit channels are assumed to be sufficiently separated and undergo Rayleigh fading conditions. Due to the limited space, a single receive antenna is employed to capture the desired user transmission. The number of active transmit channels is adjusted adaptively based on statistically unordered and/or ordered instantaneous signal-to-noise ratios (SNRs), where the transmitter has no information about the statistics of undesired signals. The adaptation thresholds are identified to guarantee a target performance level, and the adaptation schemes with enhanced spectral efficiency or power efficiency are studied and their performance is compared under various channel conditions. To facilitate comparison studies, results for the statistics of the instantaneous combined signal-to-interference-plus-noise ratio (SINR) are derived, which can be applied for different fading conditions of interfering signals. The statistics for combined SNR and combined SINR are then used to quantify various performance measures, considering the impact of non-ideal estimation of the desired user channel state information (CSI) and the randomness in the number of active interferers. Numerical and simulation comparisons for the achieved performance of the adaptation schemes are presented. © 2006 IEEE.
Foam: Multi-Dimensional General Purpose Monte Carlo Generator With Self-Adapting Simplical Grid
Jadach, S.
1999-01-01
A new general purpose Monte Carlo event generator with self-adapting grid consisting of simplices is described. In the process of initialization, the simplex-shaped cells divide into daughter subcells in such a way that: (a) cell density is biggest in areas where integrand is peaked, (b) cells elongate themselves along hyperspaces where integrand is enhanced/singular. The grid is anisotropic, i.e. memory of the axes directions of the primary reference frame is lost. In particular, the algorit...
Foam: Multi-Dimensional General Purpose Monte Carlo Generator With Self-Adapting Symplectic Grid
Jadach, Stanislaw
2000-01-01
A new general purpose Monte Carlo event generator with self-adapting grid consisting of simplices is described. In the process of initialization, the simplex-shaped cells divide into daughter subcells in such a way that: (a) cell density is biggest in areas where integrand is peaked, (b) cells elongate themselves along hyperspaces where integrand is enhanced/singular. The grid is anisotropic, i.e. memory of the axes directions of the primary reference frame is lost. In particular, the algorit...
Tapsoba, Jean de Dieu; Lee, Shen-Ming; Wang, Ching-Yun
2014-02-20
Data collected in many epidemiological or clinical research studies are often contaminated with measurement errors that may be of classical or Berkson error type. The measurement error may also be a combination of both classical and Berkson errors and failure to account for both errors could lead to unreliable inference in many situations. We consider regression analysis in generalized linear models when some covariates are prone to a mixture of Berkson and classical errors, and calibration data are available only for some subjects in a subsample. We propose an expected estimating equation approach to accommodate both errors in generalized linear regression analyses. The proposed method can consistently estimate the classical and Berkson error variances based on the available data, without knowing the mixture percentage. We investigated its finite-sample performance numerically. Our method is illustrated by an application to real data from an HIV vaccine study. Copyright © 2013 John Wiley & Sons, Ltd.
Beynon, R J
1985-01-01
Software for non-linear curve fitting has been written in BASIC to execute on the British Broadcasting Corporation Microcomputer. The program uses the direct search algorithm Pattern-search, a robust algorithm that has the additional advantage of needing specification of the function without inclusion of the partial derivatives. Although less efficient than gradient methods, the program can be readily configured to solve low-dimensional optimization problems that are normally encountered in life sciences. In writing the software, emphasis has been placed upon the 'user interface' and making the most efficient use of the facilities provided by the minimal configuration of this system.
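The pattern-search idea the program implements — derivative-free exploratory moves along each coordinate, shrinking the step when no move improves the objective — can be sketched in a few lines of modern code (a simplified Hooke-Jeeves-style variant, not a transcription of the BBC BASIC program):

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Minimize f by derivative-free coordinate exploratory moves.

    No partial derivatives are needed: only function evaluations, which is
    the property the abstract highlights for curve-fitting objectives.
    """
    x = np.array(x0, dtype=float)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):          # probe +step, then -step
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink                   # contract the pattern
    return x, fx

# Fit-style demo: minimize a 2-D quadratic with optimum at (2, -1).
x, fx = pattern_search(lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2,
                       [0.0, 0.0])
```

In a curve-fitting application, `f` would be the sum of squared residuals between the model and the data as a function of the model parameters.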
de Souza, Juliana Bottoni; Reisen, Valdério Anselmo; Santos, Jane Méri; Franco, Glaura Conceição
2014-01-01
OBJECTIVE To analyze the association between concentrations of air pollutants and admissions for respiratory causes in children. METHODS Ecological time series study. Daily figures for hospital admissions of children aged < 6, and daily concentrations of air pollutants (PM10, SO2, NO2, O3 and CO) were analyzed in the Região da Grande Vitória, ES, Southeastern Brazil, from January 2005 to December 2010. For statistical analysis, two techniques were combined: Poisson regression with generalized additive models and principal model component analysis. Those analysis techniques complemented each other and provided more significant estimates in the estimation of relative risk. The models were adjusted for temporal trend, seasonality, day of the week, meteorological factors and autocorrelation. In the final adjustment of the model, it was necessary to include models of the Autoregressive Moving Average Models (p, q) type in the residuals in order to eliminate the autocorrelation structures present in the components. RESULTS For every 10.49 μg/m3 increase (interquartile range) in levels of the pollutant PM10 there was a 3.0% increase in the relative risk estimated using the generalized additive model analysis of main components-seasonal autoregressive – while in the usual generalized additive model, the estimate was 2.0%. CONCLUSIONS Compared to the usual generalized additive model, in general, the proposed aspect of generalized additive model − principal component analysis, showed better results in estimating relative risk and quality of fit. PMID:25119940
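The reported percentages follow from the log link of the Poisson model: a coefficient β implies a 100·(exp(β·Δ) − 1)% change in relative risk for a Δ-unit increase in the pollutant. A quick sketch of that arithmetic (the coefficient below is back-derived from the 3% figure purely for illustration):

```python
import math

def rr_percent_increase(beta, delta):
    """Percent increase in relative risk for a `delta`-unit increase in the
    covariate, given a Poisson (log link) regression coefficient `beta`."""
    return (math.exp(beta * delta) - 1.0) * 100.0

iqr = 10.49                       # interquartile range of PM10 (ug/m3)
beta = math.log(1.03) / iqr       # coefficient consistent with a 3% rise per IQR
```

Evaluating `rr_percent_increase(beta, iqr)` recovers the 3.0% figure; the same transformation with the usual GAM coefficient would yield the 2.0% estimate.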
Souza, Juliana Bottoni de; Reisen, Valdério Anselmo; Santos, Jane Méri; Franco, Glaura Conceição
2014-06-01
OBJECTIVE To analyze the association between concentrations of air pollutants and admissions for respiratory causes in children. METHODS Ecological time series study. Daily figures for hospital admissions of children aged < 6, and daily concentrations of air pollutants (PM10, SO2, NO2, O3 and CO) were analyzed in the Região da Grande Vitória, ES, Southeastern Brazil, from January 2005 to December 2010. For statistical analysis, two techniques were combined: Poisson regression with generalized additive models and principal model component analysis. Those analysis techniques complemented each other and provided more significant estimates in the estimation of relative risk. The models were adjusted for temporal trend, seasonality, day of the week, meteorological factors and autocorrelation. In the final adjustment of the model, it was necessary to include models of the Autoregressive Moving Average Models (p, q) type in the residuals in order to eliminate the autocorrelation structures present in the components. RESULTS For every 10.49 μg/m3 increase (interquartile range) in levels of the pollutant PM10 there was a 3.0% increase in the relative risk estimated using the generalized additive model analysis of main components-seasonal autoregressive - while in the usual generalized additive model, the estimate was 2.0%. CONCLUSIONS Compared to the usual generalized additive model, in general, the proposed aspect of generalized additive model - principal component analysis, showed better results in estimating relative risk and quality of fit.
Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L
2014-01-01
Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models" and "multilevel generalized linear model", refined by the Science Technology research domain. Papers reporting methodological considerations without application, and those that were not involved in clinical medicine or written in English, were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel designs, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Most of the useful information about GLMMs was not reported in most cases. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of reporting has room for improvement regarding the
International Nuclear Information System (INIS)
Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.
2001-01-01
This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper
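The abstract above does not spell out the paper's particular GLCP reformulation. As a related, standard illustration of recasting complementarity conditions as a smooth minimization over simple bounds, the sketch below solves a plain LCP (find x ≥ 0 with w = Mx + q ≥ 0 and xᵀw = 0) by minimizing the squared Fischer-Burmeister merit function, whose global minima of value zero are exactly the LCP solutions; the matrices are illustrative and this is not the authors' formulation:

```python
import numpy as np
from scipy.optimize import minimize

def fb(a, b):
    # Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0
    return np.sqrt(a**2 + b**2) - a - b

def merit(x, M, q):
    # squared FB merit; differentiable, and zero exactly at LCP solutions
    return 0.5 * np.sum(fb(x, M @ x + q) ** 2)

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite toy example
q = np.array([-5.0, -6.0])

res = minimize(merit, x0=np.zeros(2), args=(M, q), method="L-BFGS-B",
               bounds=[(0, None)] * 2)   # simple bounds on the variables
x = res.x
w = M @ x + q
print(np.round(x, 3), np.round(w, 3))
```

For this M and q the interior solution solves Mx + q = 0, giving x = (4/3, 7/3) with w = 0.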
He, Fei; Xiao, Rendong; Yu, Tingting; Zhang, Xin; Liu, Zhiqiang; Cai, Lin
2015-08-01
The purpose of this study was to use data on lung cancer in Han Chinese in Fujian province to explore the value of a generalized linear model and to investigate the impact of environmental factors on lung cancer, as well as their independent and interaction effects on its development. SAS 9.2 was used to build a generalized linear model to evaluate the influence factors and interactions for lung cancer in both smokers and non-smokers. Results showed that the relationship among the factors was multiplicative. Under logistic regression analysis, seven risk factors were noticed in smokers and nine in non-smokers. Heavy smoking and lung disease showed a positive multiplicative effect in smokers, while passive smoking and fresh fruit intake showed positive multiplicative effects in non-smokers. Generalized linear models could screen suitable models, facilitating further research on the interactions and enabling a comprehensive, rational analysis of related epidemiological data.
Iterative solution of general sparse linear systems on clusters of workstations
Energy Technology Data Exchange (ETDEWEB)
Lo, Gen-Ching; Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)
1996-12-31
Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious challenge, is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels such as inner products could ruin any gains gained from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.
Examining secular trend and seasonality in count data using dynamic generalized linear modelling
DEFF Research Database (Denmark)
Lundbye-Christensen, Søren; Dethlefsen, Claus; Gorst-Rasmussen, Anders
Aims Time series of incidence counts often show secular trends and seasonal patterns. We present a model for incidence counts capable of handling a possible gradual change in growth rates and seasonal patterns, serial correlation and overdispersion. Methods The model resembles an ordinary time series regression model for Poisson counts. It differs in allowing the regression coefficients to vary gradually over time in a random fashion. Data In the period January 1980 to 1999, 17,989 incidents of acute myocardial infarction were recorded in the county of Northern Jutland, Denmark. Records were updated daily. Results The model with a seasonal pattern and an approximately linear trend was fitted to the data, and diagnostic plots indicate a good model fit. The analysis with the dynamic model revealed peaks coinciding with influenza epidemics. On average the peak-to-trough ratio is estimated...
General rigid motion correction for computed tomography imaging based on locally linear embedding
Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge
2018-02-01
Patient motion can degrade the quality of computed tomography (CT) images, which are typically acquired in cone-beam geometry. Rigid patient motion in this geometry is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based All Scale Tomographic Reconstruction Antwerp (ASTRA) toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
Fracchia, F.; Filippi, Claudia; Amovilli, C.
2012-01-01
We propose a new class of multideterminantal Jastrow–Slater wave functions constructed with localized orbitals and designed to describe complex potential energy surfaces of molecular systems for use in quantum Monte Carlo (QMC). Inspired by the generalized valence bond formalism, we elaborate a
M. Nool (Margreet); A. van der Ploeg (Auke)
1997-01-01
We study the solution of generalized eigenproblems generated by a model which is used for stability investigation of tokamak plasmas. The eigenvalue problems are of the form $Ax = \lambda Bx$, in which the complex matrices $A$ and $B$ are block tridiagonal, and $B$ is Hermitian positive
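For small pencils, a generalized eigenproblem of the form $Ax = \lambda Bx$ with Hermitian positive definite $B$ can be handed directly to a dense solver; the block-tridiagonal structure and the specialized parallel methods studied in the paper are not exploited in this toy sketch (matrices are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

# toy Hermitian-definite pencil: A Hermitian, B Hermitian positive definite
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

vals, vecs = eigh(A, B)            # solves A v = lam * B v, eigenvalues ascending
for lam, v in zip(vals, vecs.T):
    # each pair satisfies the generalized eigenvalue relation
    assert np.allclose(A @ v, lam * (B @ v))
print(vals)
```

Here the eigenvalues are those of B⁻¹A, i.e. the roots of λ² − 4λ + 2.5 = 0.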
Equilibrium arrival times to queues with general service times and non-linear utility functions
DEFF Research Database (Denmark)
Breinbjerg, Jesper
2017-01-01
by a general utility function which is decreasing in the waiting time and service completion time of each customer. Applications of such queueing games range from people choosing when to arrive at a grand opening sale to travellers choosing when to line up at the gate when boarding an airplane. We develop...
The energy and the linear momentum of space-times in general relativity
International Nuclear Information System (INIS)
Schoen, R.; Yau, S.T.
1981-01-01
We extend our previous proof of the positive mass conjecture to allow a more general asymptotic condition proposed by York. Hence we are able to prove that for an isolated physical system, the energy momentum four vector is a future timelike vector unless the system is trivial. Furthermore, we allow singularities of the type of black holes. (orig.)
Aldao, Amelia; Mennin, Douglas S
2012-02-01
Recent models of generalized anxiety disorder (GAD) have expanded on Borkovec's avoidance theory by delineating emotion regulation deficits associated with the excessive worry characteristic of this disorder (see Behar, DiMarco, Hekler, Mohlman, & Staples, 2009). However, it has been difficult to determine whether emotion regulation is simply a useful heuristic for the avoidant properties of worry or an important extension to conceptualizations of GAD. Some of this difficulty may arise from a focus on purported maladaptive regulation strategies, which may be confounded with symptomatic distress components of the disorder (such as worry). We examined the implementation of adaptive regulation strategies by participants with and without a diagnosis of GAD while watching emotion-eliciting film clips. In a between-subjects design, participants were randomly assigned to accept, reappraise, or were not given specific regulation instructions. Implementation of adaptive regulation strategies produced differential effects in the physiological (but not subjective) domain across diagnostic groups. Whereas participants with GAD demonstrated lower cardiac flexibility when implementing adaptive regulation strategies than when not given specific instructions on how to regulate, healthy controls showed the opposite pattern, suggesting they benefited from the use of adaptive regulation strategies. We discuss the implications of these findings for the delineation of emotion regulation deficits in psychopathology. Copyright © 2011 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Wu Xiangjun; Lu Hongtao
2011-01-01
Highlights: → Adaptive generalized function projective lag synchronization (AGFPLS) is proposed. → Two uncertain chaos systems are lag synchronized up to a scaling function matrix. → The synchronization speed is sensitively influenced by the control gains. → The AGFPLS scheme is robust against noise perturbation. - Abstract: In this paper, a novel projective synchronization scheme called adaptive generalized function projective lag synchronization (AGFPLS) is proposed. In the AGFPLS method, the states of two different chaotic systems with fully uncertain parameters are asymptotically lag synchronized up to a desired scaling function matrix. By means of the Lyapunov stability theory, an adaptive controller with corresponding parameter update rule is designed for achieving AGFPLS between two diverse chaotic systems and estimating the unknown parameters. This technique is employed to realize AGFPLS between the uncertain Lü chaotic system and the uncertain Liu chaotic system, and between the Chen hyperchaotic system and the Lorenz hyperchaotic system with fully uncertain parameters, respectively. Furthermore, AGFPLS between two different uncertain chaotic systems can still be achieved effectively with the existence of noise perturbation. The corresponding numerical simulations are performed to demonstrate the validity and robustness of the presented synchronization method.
Generalized W^{1,1}-Young Measures and Relaxation of Problems with Linear Growth
Czech Academy of Sciences Publication Activity Database
Baia, M.; Krömer, Stefan; Kružík, Martin
2018-01-01
Roč. 50, č. 1 (2018), s. 1076-1119 ISSN 0036-1410 R&D Projects: GA ČR GA14-15264S; GA ČR(CZ) GF16-34894L Institutional support: RVO:67985556 Keywords: lower semicontinuity * quasiconvexity * Young measures Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.648, year: 2016 http://library.utia.cas.cz/2018/MTR/kruzik-0487019.pdf
Zhao, Mingbo; Zhang, Zhao; Chow, Tommy W S; Li, Bing
2014-07-01
Dealing with high-dimensional data has always been a major problem in research of pattern recognition and machine learning, and Linear Discriminant Analysis (LDA) is one of the most popular methods for dimension reduction. However, it only uses labeled samples while neglecting unlabeled samples, which are abundant and can be easily obtained in the real world. In this paper, we propose a new dimension reduction method, called "SL-LDA", by using unlabeled samples to enhance the performance of LDA. The new method first propagates label information from the labeled set to the unlabeled set via a label propagation process, where the predicted labels of unlabeled samples, called "soft labels", can be obtained. It then incorporates the soft labels into the construction of scatter matrices to find a transformed matrix for dimension reduction. In this way, the proposed method can preserve more discriminative information, which is preferable when solving the classification problem. We further propose an efficient approach for solving SL-LDA under a least squares framework, and a flexible method of SL-LDA (FSL-LDA) to better cope with datasets sampled from a nonlinear manifold. Extensive simulations are carried out on several datasets, and the results show the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
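The two-step pipeline described above can be imitated with off-the-shelf components: propagate labels from the labeled to the unlabeled samples, then fit LDA on all samples using the propagated labels. Note this sketch uses hard propagated labels rather than the paper's soft labels inside the scatter matrices, and all dataset parameters are made up:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelPropagation
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
y_obs = y.copy()
y_obs[rng.rand(300) < 0.8] = -1        # hide 80% of the labels (-1 = unlabeled)

# step 1: propagate label information to the unlabeled samples
lp = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y_obs)
labels = lp.transduction_              # predicted labels for every sample

# step 2: fit LDA on all samples using the propagated labels
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, labels)
Z = lda.transform(X)                   # reduced 2-D representation
print(Z.shape)
```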
MRSA model of learning and adaptation: a qualitative study among the general public
Directory of Open Access Journals (Sweden)
Rohde Rodney E
2012-04-01
Full Text Available Abstract Background More people in the US now die from Methicillin Resistant Staphylococcus aureus (MRSA) infections than from HIV/AIDS. Often acquired in healthcare facilities or during healthcare procedures, the extremely high incidence of MRSA infections and the dangerously low levels of literacy regarding antibiotic resistance in the general public are on a collision course. Traditional medical approaches to infection control and the conventional attitude healthcare practitioners adopt toward public education are no longer adequate to avoid this collision. This study helps us understand how people acquire and process new information and then adapt behaviours based on learning. Methods Using constructivist theory, semi-structured face-to-face and phone interviews were conducted to gather pertinent data. This allowed participants to tell their stories so their experiences could deepen our understanding of this crucial health issue. Interview transcripts were analysed using grounded theory and sensitizing concepts. Results Our findings were classified into two main categories, each of which in turn included three subthemes. First, in the category of Learning, we identified how individuals used their Experiences with MRSA, to answer the questions: What was learned? and, How did learning occur? The second category, Adaptation, gave us insights into Self-reliance, Reliance on others, and Reflections on the MRSA journey. Conclusions This study underscores the critical importance of educational programs for patients, and improved continuing education for healthcare providers. Five specific results of this study can reduce the vacuum that currently exists between the knowledge and information available to healthcare professionals, and how that information is conveyed to the public. These points include: 1) a common model of MRSA learning and adaptation; 2) the self-directed nature of adult learning; 3) the focus on general MRSA information, care and
Time evolution of linear and generalized Heisenberg algebra nonlinear Pöschl-Teller coherent states
Rego-Monteiro, M. A.; Curado, E. M. F.; Rodrigues, Ligia M. C. S.
2017-11-01
We analyze the time evolution of two kinds of coherent states for a particle in a Pöschl-Teller potential. We find a pair of canonically conjugate operators and compare the behavior of their time evolution for both coherent states. The nonlinear ones are more localized. The trajectory in the phase space of the mean values of these two operators is a kind of generalization of the Rose algebraic curves. The new pair of canonically conjugate variables leads to a fourth-order Schrödinger equation which has the same energy spectrum as the Pöschl-Teller system.
General, database-driven fast-feedback system for the Stanford Linear Collider
International Nuclear Information System (INIS)
Rouse, F.; Allison, S.; Castillo, S.; Gromme, T.; Hall, B.; Hendrickson, L.; Himel, T.; Krauter, K.; Sass, B.; Shoaee, H.
1991-05-01
A new feedback system has been developed for stabilizing the SLC beams at many locations. The feedback loops are designed to sample and correct at the 60 Hz repetition rate of the accelerator. Each loop can be distributed across several of the standard 80386 microprocessors which control the SLC hardware. A new communications system, KISNet, has been implemented to pass signals between the microprocessors at this rate. The software is written in a general fashion using the state space formalism of digital control theory. This allows a new loop to be implemented by just setting up the online database and perhaps installing a communications link. 3 refs., 4 figs
Lee, Dongyul; Lee, Chaewoo
2014-01-01
The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm.
Directory of Open Access Journals (Sweden)
Dongyul Lee
2014-01-01
Full Text Available The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm.
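The layer-to-MCS assignment described above is a small integer linear program: each layer picks exactly one MCS, total airtime must fit in a frame, and total utility is maximized. A toy sketch with SciPy's `milp`; the utilities, airtimes and frame budget are made-up numbers, not the paper's 802.16m model:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

layers, mcs = 3, 3
utility = np.array([[30, 20, 10],     # utility[l, m]: value of layer l sent with MCS m
                    [20, 14, 7],
                    [10, 7, 3]])
airtime = np.array([[10, 6, 4],       # airtime[l, m]: time slots consumed
                    [8, 5, 3],
                    [6, 4, 2]])
budget = 20                            # slots available per frame

c = -utility.ravel().astype(float)     # milp minimizes, so negate the utility
# each layer is assigned exactly one MCS
pick_one = np.kron(np.eye(layers), np.ones(mcs))
cons = [LinearConstraint(pick_one, 1, 1),
        LinearConstraint(airtime.ravel()[None, :].astype(float), 0, budget)]
res = milp(c, constraints=cons, integrality=np.ones(layers * mcs),
           bounds=Bounds(0, 1))
assignment = res.x.reshape(layers, mcs).round().astype(int)
print(assignment, -res.fun)
```

For these numbers the optimum sends layers 0 and 1 at their best MCS and drops layer 2 to its cheapest MCS to respect the airtime budget.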
Directory of Open Access Journals (Sweden)
Masuda Hiroshi
2017-12-01
Full Text Available It is very important to design electrical machinery with high efficiency from the point of view of saving energy. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO can achieve a design with a much higher degree of freedom in terms of structure, there is a possibility of deriving novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. A magnetic shielding model, in which there are many local minima, is first employed as a benchmark for performance evaluation among several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
Yedavalli, R. K.
1992-01-01
The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.
International Nuclear Information System (INIS)
Chiou, J-S; Liu, M-T
2008-01-01
As a powerful machine-learning approach to pattern recognition problems, the support vector machine (SVM) is known to generalize well. More importantly, it works very well in a high-dimensional feature space. This paper presents a nonlinear active suspension controller which achieves high-level performance by compensating for actuator dynamics. We use a linear quadratic regulator (LQR) to ensure optimal control of nonlinear systems. The LQR is used to solve the state feedback problem and an SVM is used to address the estimation and examination of the state. The two are then combined and designed to yield output feedback control. The real-time simulation demonstrates that an active suspension using the combined SVM-LQR controller provides passengers with a much more comfortable ride and better road handling
A Self-adaptive Bit-level Color Image Encryption Algorithm Based on Generalized Arnold Map
Directory of Open Access Journals (Sweden)
Ye Rui-Song
2017-01-01
Full Text Available A self-adaptive bit-level color image encryption algorithm based on the generalized Arnold map is proposed. The red, green, and blue components of the plain-image with height H and width W are decomposed into 8-bit planes, and one three-dimensional bit matrix of size H×W×24 is obtained. The generalized Arnold map is used to generate pseudo-random sequences to scramble the resulting three-dimensional bit matrix by a sort-based approach. The scrambled 3D bit matrix is then rearranged into one scrambled color image. Chaotic sequences produced by another generalized Arnold map are used to diffuse the resulting red, green, and blue components in a cross way to achieve more encryption effects. A self-adaptive strategy is adopted in both the scrambling stage and the diffusion stage, meaning that the key streams are all related to the content of the plain-image; therefore, the encryption algorithm shows strong robustness against known/chosen plaintext attacks. Other performance analyses are carried out, including key space, key sensitivity, histogram, correlation coefficients between adjacent pixels, information entropy and differential attack analysis, etc. All the experimental results show that the proposed image encryption algorithm is secure and effective for practical application.
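The generalized Arnold map used for scrambling is the lattice map (x, y) → (x + a·y, b·x + (a·b + 1)·y) mod N; its matrix has determinant 1, so it permutes the N×N grid invertibly. A minimal position-scrambling sketch (the paper's sort-based bit-level scrambling, cross-channel diffusion and self-adaptive key streams are omitted; parameters a, b, rounds are illustrative):

```python
import numpy as np

def arnold_scramble(img, a, b, rounds=1):
    """Permute pixel positions with the generalized Arnold map.
    (x, y) -> (x + a*y, b*x + (a*b + 1)*y) mod N is a bijection on the grid."""
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx = (x + a * y) % n
        ny = (b * x + (a * b + 1) * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]
        out = scrambled
    return out

def arnold_unscramble(img, a, b, rounds=1):
    # inverse map uses the inverse matrix [[a*b + 1, -a], [-b, 1]] mod n
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx = ((a * b + 1) * x - a * y) % n
        ny = (-b * x + y) % n
        restored = np.empty_like(out)
        restored[nx, ny] = out[x, y]
        out = restored
    return out

img = np.arange(64).reshape(8, 8)
enc = arnold_scramble(img, a=3, b=5, rounds=4)
dec = arnold_unscramble(enc, a=3, b=5, rounds=4)
print(np.array_equal(dec, img))
```

Because the map is a bijection, decryption simply applies the inverse matrix the same number of rounds.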
Directory of Open Access Journals (Sweden)
Martí Casals
Full Text Available BACKGROUND: Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. METHODS: A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models" and "multilevel generalized linear model", refined by the Science Technology research domain. Papers reporting methodological considerations without application, and those that were not involved in clinical medicine or written in English, were excluded. RESULTS: A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel designs, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Most of the useful information about GLMMs was not reported in most cases. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. CONCLUSIONS: During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the
A simulation-based goodness-of-fit test for random effects in generalized linear mixed models
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...
A simulation-based goodness-of-fit test for random effects in generalized linear mixed models
DEFF Research Database (Denmark)
Waagepetersen, Rasmus Plenge
The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function...
Directory of Open Access Journals (Sweden)
Matthew P. Dannenberg
2016-08-01
Full Text Available Classifying land cover is perhaps the most common application of remote sensing, yet classification at frequent temporal intervals remains a challenging task due to radiometric differences among scenes, time and budget constraints, and semantic differences among class definitions from different dates. The automatic adaptive signature generalization (AASG) algorithm overcomes many of these limitations by locating stable sites between two images and using them to adapt class spectral signatures from a high-quality reference classification to a new image, which mitigates the impacts of radiometric and phenological differences between images and ensures that class definitions remain consistent between the two classifications. We refined AASG to adapt stable site identification parameters to each individual land cover class, while also incorporating improved input data and a random forest classifier. In the Research Triangle region of North Carolina, our new version of AASG demonstrated an improved ability to update existing land cover classifications compared to the initial version of AASG, particularly for low intensity developed, mixed forest, and woody wetland classes. Topographic indices were particularly important for distinguishing woody wetlands from other forest types, while multi-seasonal imagery contributed to improved classification of water, developed, forest, and hay/pasture classes. These results demonstrate both the flexibility of the AASG algorithm and the potential for using it to produce high-quality land cover classifications that can utilize the entire temporal range of the Landsat archive in an automated fashion while maintaining consistent class definitions through time.
International Nuclear Information System (INIS)
Rath, J.; Freeman, A.J.
1975-01-01
A detailed study of the generalized susceptibility χ(q) of Sc metal determined from an accurate augmented-plane-wave method calculation of its energy-band structure is presented. The calculations were done by means of a computational scheme for χ(q) derived as an extension of the work of Jepsen and Andersen and Lehmann and Taut on the density-of-states problem. The procedure yields simple analytic expressions for the χ(q) integral inside a tetrahedral microzone of the Brillouin zone which depend only on the volume of the tetrahedron and the differences of the energies at its corners. Constant-matrix-element results have been obtained for Sc which show very good agreement with the results of Liu, Gupta, and Sinha (but with one less peak) and exhibit a first maximum in χ(q) at (0, 0, 0.31) 2π/c [vs (0, 0, 0.35) 2π/c obtained by Liu et al.] which relates very well to dilute rare-earth alloy magnetic ordering at q_m = (0, 0, 0.28) 2π/c and to the kink in the LA-phonon dispersion curve at (0, 0, 0.27) 2π/c. (U.S.)
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Energy Technology Data Exchange (ETDEWEB)
Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
Lee, Woojoo; Kim, Jeonghwan; Lee, Youngjo; Park, Taesung; Suh, Young Ju
2015-01-01
We explored a hierarchical generalized linear model (HGLM) in combination with dispersion modeling to improve sib-pair linkage analysis based on the revised Haseman-Elston regression model for a quantitative trait. A dispersion modeling technique was investigated for sib-pair linkage analysis using simulation studies and real data applications. We considered 4 heterogeneous dispersion settings according to the signal-to-noise ratio (SNR) in various statistical models based on the Haseman-Elston regression model. Our numerical studies demonstrated that susceptibility loci could be detected well by modeling the dispersion parameter appropriately. In particular, the HGLM had better performance than the linear regression model and the ordinary linear mixed model when the SNR was low, i.e., when substantial noise was present in the data. The study shows that the HGLM in combination with dispersion modeling can be utilized to accurately identify multiple markers showing linkage to familial complex traits. Appropriate dispersion modeling may be more powerful for identifying the markers closest to the major genes that determine a quantitative trait. © 2015 S. Karger AG, Basel.
Molenaar, Dylan; Bolsinova, Maria
2017-05-01
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei
2014-01-01
The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. Opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of the different models was compared. The frequency of traffic conflicts follows a negative binomial distribution. The linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions. Accordingly, the effects of conflicting traffic volumes on conflict frequency vary across different traffic conditions. The occurrence of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has the potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
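A negative binomial regression of the kind this abstract favors over linear regression can be sketched with Fisher scoring in plain numpy. The data below are simulated, the overdispersion parameter is treated as known, and the single covariate is a stand-in for the study's conflicting-volume variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated conflict counts: log-mean linear in one exposure variable.
n, alpha = 2000, 0.5                  # alpha = NB overdispersion (assumed known)
beta_true = np.array([0.5, 0.8])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = np.exp(X @ beta_true)
lam = rng.gamma(1 / alpha, alpha * mu)  # gamma heterogeneity ...
y = rng.poisson(lam)                    # ... makes the counts negative binomial

# Fisher scoring for NB regression with log link and known alpha.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    w = mu / (1 + alpha * mu)                        # working weights
    score = X.T @ ((y - mu) / (1 + alpha * mu))      # score vector
    info = X.T @ (w[:, None] * X)                    # Fisher information
    beta = beta + np.linalg.solve(info, score)
```

With enough data the recovered coefficients land close to the simulated truth, which is the basic check that the NB family, unlike ordinary linear regression, respects the count nature and overdispersion of conflict frequencies.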
Saccade adaptation as a model of flexible and general motor learning.
Herman, James P; Blangero, Annabelle; Madelain, Laurent; Khan, Afsheen; Harwood, Mark R
2013-09-01
The rapid point-to-point movements of the eyes called saccades are the most commonly made movement by humans, yet differ from nearly every other type of motor output in that they are completed too quickly to be adjusted during their execution by visual feedback. Saccadic accuracy remains quite high over a lifetime despite inevitable changes to the physical structures controlling the eyes, indicating that the oculomotor system actively monitors and adjusts motor commands to achieve consistent behavioral production. Indeed, it seems that beyond the ability to compensate for slow, age-related bodily changes, saccades can be modified following traumatic injury or pathology that affects their production, or in response to more short-term systematic alterations to post-saccadic visual feedback in a laboratory setting. These forms of plasticity rely on the visual detection of accuracy errors by a unified set of mechanisms that support the process known as saccade adaptation. Saccade adaptation has been mostly studied as a phenomenon in its own right, outside of motor learning in general. Here, we highlight the commonalities between eye and arm movement adaptation by reviewing the literature across these fields wherever there are compelling overlapping theories or data. Recent exciting findings are challenging previous interpretations of the underlying mechanisms of saccade adaptation with the incorporation of concepts including prediction, reinforcement and contextual learning. We review the emerging ideas and evidence with particular emphasis on the important contributions made by Josh Wallman in this sphere over the past 15 years. Copyright © 2013 Elsevier Ltd. All rights reserved.
Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia
We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local smoothing kernels to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion had not been studied prior to this work, even in the independent data case.
Directory of Open Access Journals (Sweden)
Sun-Ah Hwang
2016-04-01
Although close relationships between the water quality of streams and the types of land use within their watersheds have been well documented in previous studies, many aspects of these relationships remain unclear. We examined the relationships between urban land use and water quality using data collected from 527 sample points in five major rivers in Korea—the Han, Geum, Nakdong, Younsan, and Seomjin Rivers. Water quality data were derived from samples collected and analyzed under the guidelines of the Korean National Aquatic Ecological Monitoring Program, and land use was quantified using products provided by the Korean Ministry of the Environment, which were used to create a Geographic Information System. Linear models (LMs) and generalized additive models were developed to describe the relationships between urban land use and stream water quality, including biological oxygen demand (BOD), total nitrogen (TN), and total phosphorus (TP). A comparison between the LMs and non-linear models (in terms of R² and Akaike's information criterion values) indicated that the generalized additive models had a better fit, suggesting a non-linear relationship between urban land use and water quality. Non-linear models for BOD, TN, and TP showed that each parameter had a similar relationship with urban land use, with two breakpoints. The non-linear models suggested that the relationships between urban land use and water quality could be categorized into three regions based on the proportion of urban land use. Under moderate urban land use, negative impacts of urban land use on water quality were observed, which confirmed the findings of previous studies. However, the relationships were different under very low or very high urbanization. Our results could be used to develop strategies for more efficient stream restoration and management, which would enhance water quality based on the degree of urbanization in watersheds. In
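The linear-versus-nonlinear AIC comparison central to this abstract can be sketched with simulated data. A cubic polynomial stands in for a GAM smooth here (the study used generalized additive models), and the threshold-shaped response and noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical water-quality response that flattens at low and high urban
# land use (a smooth threshold shape), plus noise.
urban = rng.uniform(0, 1, 300)
bod = 2 + 3 / (1 + np.exp(-12 * (urban - 0.5))) + rng.normal(0, 0.3, 300)

def gaussian_aic(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k    # Gaussian AIC, up to a constant

# Linear model vs a flexible (cubic polynomial) stand-in for a GAM smooth.
lin = np.polyval(np.polyfit(urban, bod, 1), urban)
cub = np.polyval(np.polyfit(urban, bod, 3), urban)

aic_lin = gaussian_aic(bod, lin, k=2)
aic_cub = gaussian_aic(bod, cub, k=4)  # lower AIC favors the nonlinear fit
```

When the underlying relationship has breakpoints, as the abstract reports, the flexible model wins the AIC comparison despite its extra parameters.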
Milquez-Sanabria, Harvey; Blanco-Cocom, Luis; Alzate-Gaviria, Liliana
2016-10-03
Agro-industrial wastes are an energy source for different industries. However, their application has not reached small industries. Previous and current research on the acidogenic phase of two-phase anaerobic digestion processes deals particularly with process optimization of acid-phase reactors operating with a wide variety of substrates, both soluble and complex in nature. Mathematical models for anaerobic digestion have been developed to understand and improve the efficient operation of the process. At present, linear models, which have the advantages of requiring less data, predicting future behavior and updating when a new set of data becomes available, have been developed. The aim of this research was to contribute to the reduction of organic solid waste, generate biogas and develop a simple but accurate mathematical model to predict the behavior of the UASB reactor. The system was kept separate for 14 days, during which hydrolytic and acetogenic bacteria broke down onion waste and produced and accumulated volatile fatty acids. The two reactors were then coupled and the system continued for 16 more days. The biogas and methane yields and the volatile solids reduction were 0.6 ± 0.05 m³ (kg VS removed)⁻¹, 0.43 ± 0.06 m³ (kg VS removed)⁻¹ and 83.5 ± 9.8%, respectively. The model showed good prediction of all process parameters considered; the maximum error between experimental and predicted values was 1.84%, for the alkalinity profile. A linear predictive adaptive model for anaerobic digestion of onion waste in a two-stage process was determined under batch-fed conditions. The organic loading rate (OLR) was kept constant for the entire operation by modifying the hydrolysis reactor effluent fed to the UASB reactor. This condition avoids intoxication of the UASB reactor and also limits external buffer addition.
Forkman, Johannes
2017-06-15
Linear mixed-effects models are linear models with several variance components. Models with a single random-effects factor have two variance components: the random-effects variance, i.e., the inter-subject variance, and the residual error variance, i.e., the intra-subject variance. In many applications, it is common practice to report variance components as coefficients of variation. The intra- and inter-subject coefficients of variation are the square roots of the corresponding variances divided by the mean. This article proposes methods for computing confidence intervals for intra- and inter-subject coefficients of variation using generalized pivotal quantities. The methods are illustrated through two examples. In the first example, precision is assessed within and between runs in a bioanalytical method validation. In the second example, variation is estimated within and between main plots in an agricultural split-plot experiment. Coverage of the generalized confidence intervals is investigated through simulation and shown to be close to the nominal value.
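The generalized-pivotal-quantity idea can be illustrated in the simplest possible setting: a single normal sample rather than a mixed model, with a GPQ interval for the coefficient of variation σ/μ. The sample size and parameters below are hypothetical, and the paper's actual constructions handle the intra- and inter-subject components of a mixed model:

```python
import numpy as np

rng = np.random.default_rng(3)

# One normal sample (hypothetical bioanalytical measurements).
y = rng.normal(loc=100.0, scale=8.0, size=30)
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

# Generalized pivotal quantities: chi-square pivot for sigma^2,
# normal pivot for mu, combined into a pivotal quantity for sigma/mu.
draws = 100_000
u = rng.chisquare(n - 1, draws)
z = rng.normal(size=draws)
sigma_g = np.sqrt((n - 1) * s2 / u)
mu_g = ybar - z * sigma_g / np.sqrt(n)
cv_g = sigma_g / mu_g

lo, hi = np.percentile(cv_g, [2.5, 97.5])   # 95% generalized confidence interval
```

Percentiles of the simulated pivotal quantity give the generalized confidence interval; the same recipe extends to functions of several variance components.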
Iwasaki, Yuichi; Brinkman, Stephen F
2015-04-01
Increased concerns about the toxicity of chemical mixtures have led to greater emphasis on analyzing the interactions among the mixture components based on observed effects. The authors applied a generalized linear mixed model (GLMM) to analyze survival of brown trout (Salmo trutta) acutely exposed to metal mixtures that contained copper and zinc. Compared with dominant conventional approaches based on an assumption of concentration addition and the concentration of a chemical that causes x% effect (ECx), the GLMM approach has 2 major advantages. First, binary response variables such as survival can be modeled without any transformations, and thus sample size can be taken into consideration. Second, the importance of the chemical interaction can be tested in a simple statistical manner. Through this application, the authors investigated whether the estimated concentration of the 2 metals binding to humic acid, which is assumed to be a proxy of nonspecific biotic ligand sites, provided a better prediction of survival effects than dissolved and free-ion concentrations of metals. The results suggest that the estimated concentration of metals binding to humic acid is a better predictor of survival effects, and thus the metal competition at the ligands could be an important mechanism responsible for effects of metal mixtures. Application of the GLMM (and the generalized linear model) presents an alternative or complementary approach to analyzing mixture toxicity. © 2015 SETAC.
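The abstract's second advantage, testing a chemical interaction "in a simple statistical manner," can be sketched with a binomial GLM (fixed effects only, no random terms) and a likelihood-ratio test for the interaction coefficient. All concentrations and coefficients below are simulated, not the brown trout data:

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_logistic(X, y, iters=50):
    """Newton-Raphson for a binomial GLM with logit link; returns (beta, loglik)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (W[:, None] * X) + 1e-8 * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    p = 1 / (1 + np.exp(-X @ beta))
    return beta, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical survival data under a copper + zinc mixture (additive truth).
n = 500
cu = rng.uniform(0, 2, n)
zn = rng.uniform(0, 2, n)
p_true = 1 / (1 + np.exp(-(2.0 - 1.0 * cu - 0.8 * zn)))
surv = rng.binomial(1, p_true)

X_red = np.column_stack([np.ones(n), cu, zn])
X_full = np.column_stack([X_red, cu * zn])    # add the interaction term

_, ll_red = fit_logistic(X_red, surv)
_, ll_full = fit_logistic(X_full, surv)
lr = 2 * (ll_full - ll_red)   # compare to chi-square(1); 3.84 at the 5% level
```

A mixed-model version would add a random effect per exposure tank; the interaction test itself takes the same likelihood-ratio form.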
Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J
2014-12-10
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration, and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. Copyright © 2014 John Wiley & Sons, Ltd.
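Of the three correction approaches the abstract compares, SIMEX is the easiest to sketch: deliberately add extra measurement error at increasing levels, track how the estimate degrades, and extrapolate back to the no-error case. The example below corrects a single attenuated regression slope with a known error variance; all values are simulated:

```python
import numpy as np

rng = np.random.default_rng(5)

n, beta0, beta1 = 1000, 1.0, 2.0
sigma_u = 0.7                                  # known measurement-error SD
x = rng.normal(0, 1, n)                        # true covariate (e.g. a mediator)
y = beta0 + beta1 * x + rng.normal(0, 0.5, n)
w = x + rng.normal(0, sigma_u, n)              # error-prone measurement

def slope(wv):
    return np.polyfit(wv, y, 1)[0]

naive = slope(w)                               # attenuated toward zero

# SIMEX: add extra error at levels lambda, then extrapolate to lambda = -1.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
means = []
for lam in lams:
    if lam == 0.0:
        means.append(naive)
        continue
    reps = [slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n)) for _ in range(200)]
    means.append(np.mean(reps))

coef = np.polyfit(lams, means, 2)              # quadratic extrapolant
simex = np.polyval(coef, -1.0)                 # corrected slope estimate
```

The quadratic extrapolation recovers most, though typically not all, of the attenuation; the paper's setting with nonlinearities and exposure-mediator interaction requires the more careful machinery it develops.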
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resource objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by varying the credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e., λ and γ) indicates that the weight coefficient is the main factor for system efficiency, compared with the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.
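The linear fractional programming core, maximizing a ratio of linear functions, can be sketched with Dinkelbach's algorithm (a standard alternative to the Charnes-Cooper transformation often used for LFP). The tiny problem below uses hypothetical "benefit" and "resource" coefficients and box constraints, so each subproblem is solvable coordinate-wise without an LP solver:

```python
import numpy as np

# Dinkelbach's algorithm for a small linear fractional program (sketch):
#   maximize (c@x + a) / (d@x + b)  s.t.  lo <= x <= hi,  with d@x + b > 0.
c = np.array([3.0, 1.0])    # "benefit" coefficients (hypothetical)
d = np.array([1.0, 2.0])    # "resource" coefficients (hypothetical)
a, b = 0.0, 1.0
lo = np.array([0.0, 0.0])
hi = np.array([4.0, 4.0])

q = 0.0
for _ in range(50):
    g = c - q * d
    x = np.where(g > 0, hi, lo)          # box-LP maximizer of a linear objective
    q_new = (c @ x + a) / (d @ x + b)    # updated ratio value
    if abs(q_new - q) < 1e-12:
        break
    q = q_new
# q now approximates the optimal benefit-per-resource ratio.
```

Each iteration solves the linearized problem max (c − q·d)ᵀx and updates q; the sequence of ratios converges monotonically to the optimum. The GFCCFP model layers fuzzy credibility constraints on top of this ratio-optimization structure.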
Koss, Hans; Rance, Mark; Palmer, Arthur G
2017-01-01
Exploration of dynamic processes in proteins and nucleic acids by spin-locking NMR experiments has been facilitated by the development of theoretical expressions for the R1ρ relaxation rate constant covering a variety of kinetic situations. Herein, we present a generalized approximation to the chemical exchange component Rex of R1ρ for arbitrary kinetic schemes, assuming the presence of a dominant major site population, derived from the negative reciprocal trace of the inverse Bloch-McConnell evolution matrix. This approximation is equivalent to first-order truncation of the characteristic polynomial derived from the Bloch-McConnell evolution matrix. For three- and four-site chemical exchange, the first-order approximations are sufficient to distinguish different kinetic schemes. We also introduce an approach to calculate R1ρ for linear N-site schemes, using the matrix determinant lemma to reduce the corresponding 3N×3N Bloch-McConnell evolution matrix to a 3×3 matrix. The first- and second-order expansions of the determinant of this 3×3 matrix are closely related to previously derived equations for two-site exchange. The second-order approximations for linear N-site schemes can be used to obtain more accurate approximations for non-linear N-site schemes, such as triangular three-site or star four-site topologies. The expressions presented herein provide powerful means for the estimation of Rex contributions for both low (CEST-limit) and high (R1ρ-limit) radiofrequency field strengths, provided that the population of one state is dominant. The general nature of the new expressions allows for consideration of complex kinetic situations in the analysis of NMR spin relaxation data. Copyright © 2016 Elsevier Inc. All rights reserved.
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Directory of Open Access Journals (Sweden)
Biman Jana
2016-08-01
A structure-based model of the myosin motor is built in the same spirit as our earlier work on kinesin-1 and Ncd, towards a physical understanding of its mechanochemical cycle. We find a structural adaptation of the motor head domain in the post-powerstroke state that signals faster ADP release from it compared to release from the motor head in the pre-powerstroke state. For dimeric myosin, an additional forward strain on the trailing head, originating from the postponed powerstroke state of the leading head in the waiting state of myosin, further increases the rate of ADP release. This coordination between the two heads is the essence of the processivity of the cycle. Our model provides a structural description of the powerstroke step of the cycle as an allosteric transition of the converter domain in response to the Pi release. Additionally, the variation in structural elements peripheral to the catalytic motor domain is the deciding factor behind the diverse directionalities of myosin motors (myosin V and VI). Finally, we observe that there are general rules for functional molecular motors across the different families. Allosteric structural adaptation of the catalytic motor head in different nucleotide states is crucial for mechanochemistry. Strain-mediated coordination between motor heads is essential for processivity, and the variation of peripheral structural elements is essential for their diverse functionalities.
Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke
2018-02-01
In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proven to converge in finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., a lower upper bound), and thus accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of the general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-05-18
This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.
Pekkanen, Jami; Lappi, Otto
2017-12-18
We introduce a conceptually novel method for eye-movement signal analysis. The method is general in that it does not place severe restrictions on sampling frequency, measurement noise or subject behavior. Event identification is based on segmentation that simultaneously denoises the signal and determines event boundaries. The full gaze position time series is segmented into an approximately optimal piecewise linear function in O(n) time. Gaze feature parameters for classification into fixations, saccades, smooth pursuits and post-saccadic oscillations are derived from human labeling in a data-driven manner. The range of oculomotor events identified and the powerful denoising performance make the method usable for both low-noise controlled laboratory settings and high-noise complex field experiments. This is desirable for harmonizing the gaze behavior (in the wild) and oculomotor event identification (in the laboratory) approaches to eye movement behavior. Denoising and classification performance are assessed using multiple datasets. A full open-source implementation is included.
International Nuclear Information System (INIS)
Shimizu, Yoshiaki
1991-01-01
In recent complicated nuclear systems, there are increasing demands for developing highly advanced procedures for solving various problems. Among them, keen interest has been paid to man-machine communication to improve both safety and economy. Many optimization methods are well suited to address these points. In this preliminary note, we are concerned with the application of linear programming (LP) for this purpose. First, we present a new, superior version of the generalized PAPA method (GEPAPA) to solve LP problems. We then examine its effectiveness when applied to derive dynamic matrix control (DMC) as the LP solution. The approach aims at the above goal through quality control of processes that appear in the system. (author)
Directory of Open Access Journals (Sweden)
Tao eWang
2015-03-01
The generalized linear mixed model (GLMM) is a useful tool for modeling genetic correlation among family data in genetic association studies. However, when dealing with families of varied sizes and diverse genetic relatedness, the GLMM has a special correlation structure which often makes it difficult to specify using standard statistical software. In this study, we propose a Cholesky decomposition based re-formulation of the GLMM so that the re-formulated GLMM can be specified conveniently via `proc nlmixed` and `proc glimmix` in SAS, or in OpenBUGS via the R package BRugs. The performance of these procedures in fitting the re-formulated GLMM is examined through simulation studies. We also apply the re-formulated GLMM to analyze a real data set from the Type 1 Diabetes Genetics Consortium (T1DGC).
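The mechanism behind the Cholesky re-formulation can be checked numerically: if K is the family relatedness matrix and L its Cholesky factor, then u = Lz with iid standard-normal z has covariance K, so software that only handles iid random effects can fit the correlated model after the L transformation. The kinship values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Kinship-style correlation matrix for one family of 4 (hypothetical values).
K = np.array([[1.00, 0.50, 0.50, 0.25],
              [0.50, 1.00, 0.50, 0.25],
              [0.50, 0.50, 1.00, 0.25],
              [0.25, 0.25, 0.25, 1.00]])
L = np.linalg.cholesky(K)          # K = L @ L.T

# Correlated family effects from iid effects: u = L @ z, z ~ N(0, I).
z = rng.normal(size=(100_000, 4))
u = z @ L.T
emp = np.cov(u, rowvar=False)      # empirical covariance should approximate K
```

In the re-formulated GLMM, the linear predictor carries the columns of L as known covariates multiplying iid random effects, which is exactly what `proc nlmixed`-style syntax can express.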
Tsai, Miao-Yu
2015-03-01
The problem of variable selection in generalized linear mixed models (GLMMs) is pervasive in statistical practice. For the purpose of variable selection, many methodologies for determining the best subset of explanatory variables currently exist, according to the model complexity and differences between applications. In this paper, we develop a "higher posterior probability model with bootstrap" (HPMB) approach to select explanatory variables without fitting all possible GLMMs involving a small or moderate number of explanatory variables. Furthermore, to save computational load, we propose an efficient approximation approach using Laplace's method and Taylor expansion to approximate intractable integrals in GLMMs. Simulation studies and an application to HapMap data provide evidence that this selection approach is computationally feasible and reliable for exploring true candidate genes and gene-gene associations, after adjusting for complex structures among clusters. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yan, Fang-Rong; Huang, Yuan; Liu, Jun-Lin; Lu, Tao; Lin, Jin-Guan
2013-01-01
This article provides a fully Bayesian approach for modeling single-dose and complete pharmacokinetic data in a population pharmacokinetic (PK) model. To overcome the impact of outliers and the difficulty of computation, a generalized linear model is chosen with the hypothesis that the errors follow a multivariate Student t distribution, which is a heavy-tailed distribution. The aim of this study is to investigate and implement the performance of the multivariate t distribution in analyzing population pharmacokinetic data. Bayesian predictive inference and Metropolis-Hastings algorithm schemes are used to handle the intractable posterior integration. The precision and accuracy of the proposed model are illustrated with simulated data and a real example of theophylline data.
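The combination of a Student t error model and Metropolis-Hastings sampling can be sketched on a toy univariate regression. The degrees of freedom and scale are treated as known, the prior is flat, and the data are simulated; the paper's multivariate population PK setting is considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated heavy-tailed data: Student-t errors (df = 3, scale known here).
n, nu, scale = 300, 3.0, 0.5
x = rng.uniform(0, 10, n)
y = 1.0 + 0.4 * x + scale * rng.standard_t(nu, n)

def log_post(b):
    # t log-likelihood (up to constants) with a flat prior on (b0, b1).
    r = (y - b[0] - b[1] * x) / scale
    return -0.5 * (nu + 1) * np.sum(np.log1p(r ** 2 / nu))

# Random-walk Metropolis-Hastings over the regression coefficients.
b = np.zeros(2)
lp = log_post(b)
draws = []
for it in range(20_000):
    prop = b + rng.normal(0, [0.05, 0.01])
    lp_p = log_post(prop)
    if np.log(rng.uniform()) < lp_p - lp:
        b, lp = prop, lp_p
    if it >= 10_000:                 # keep post-burn-in draws
        draws.append(b.copy())

post = np.mean(draws, axis=0)        # posterior means of intercept and slope
```

The heavy t tails downweight outlying observations automatically, which is the robustness property the abstract invokes against outliers in PK data.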
Wang, Xulong; Philip, Vivek M; Ananda, Guruprasad; White, Charles C; Malhotra, Ankit; Michalski, Paul J; Karuturi, Krishna R Murthy; Chintalapudi, Sumana R; Acklin, Casey; Sasner, Michael; Bennett, David A; De Jager, Philip L; Howell, Gareth R; Carter, Gregory W
2018-03-05
Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized linear mixed model (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo (MCMC) sampling and maximal likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer's Disease Sequencing Project (ADSP). This study contains 570 individuals from 111 families, each with Alzheimer's disease diagnosed at one of four confidence levels. With Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer's disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The coded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with AD-related neuropathology. In summary, this work provides an implementation of a flexible, generalized mixed-model approach in a Bayesian framework for association studies. Copyright © 2018, Genetics.
Noh, Maengseok; Lee, Youngjo; Oh, Seungyoung; Chu, Chaeshin; Gwack, Jin; Youn, Seung-Ki; Cho, Shin Hyeong; Lee, Won Ja; Huh, Sun
2012-12-01
Spatial and temporal correlations were estimated to determine the Plasmodium vivax malaria transmission pattern in Korea from 2001-2011 with a hierarchical generalized linear model. Malaria cases reported to the Korea Centers for Disease Control and Prevention from 2001 to 2011 were analyzed with descriptive statistics, and the incidence was estimated according to age, sex, and year by the hierarchical generalized linear model. Spatial and temporal correlation was estimated and the best model was selected from nine models. Results were presented as disease maps according to age and sex. The incidence according to age was highest in the 20-25-year-old group (244.52 infections/100,000). The mean ages of infected males and females were 31.0 years and 45.3 years, with incidences of 7.8 infections/100,000 and 7.1 infections/100,000 after estimation. The mean month of infection was mid-July, with an incidence of 10.4 infections/100,000. The best-fit model showed that there was a spatial and temporal correlation in the malarial transmission. Incidence was very low or negligible in areas distant from the demilitarized zone between the Republic of Korea and the Democratic People's Republic of Korea (North Korea) if the 20-29-year-old male group was omitted from the disease maps. Malarial transmission in a region of Korea was influenced by the incidence in adjacent regions in recent years. Since malaria in Korea mainly originates from mosquitoes from North Korea, there will be a continuous decrease if there is no further outbreak in North Korea.
Content-adaptive pentary steganography using the multivariate generalized Gaussian cover model
Sedighi, Vahid; Fridrich, Jessica; Cogranne, Rémi
2015-03-01
The vast majority of steganographic schemes for digital images stored in the raster format limit the amplitude of embedding changes to the smallest possible value. In this paper, we investigate the possibility of further improving the empirical security by allowing the embedding changes in highly textured areas to have a larger amplitude, and thus embedding a larger payload there. Our approach is entirely model-driven in the sense that the probabilities with which the cover pixels should be changed by a certain amount are derived from the cover model to minimize the power of an optimal statistical test. The embedding consists of two steps. First, the sender estimates the cover model parameters, the pixel variances, modeling the pixels as a sequence of independent but not identically distributed generalized Gaussian random variables. Then, the embedding change probabilities for changing each pixel by 1 or 2, which can be transformed into costs for practical embedding using syndrome-trellis codes, are computed by solving a pair of non-linear algebraic equations. Using rich models and selection-channel-aware features, we compare the security of our scheme based on the generalized Gaussian model with pentary versions of two popular embedding algorithms: HILL and S-UNIWARD.
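The general recipe — derive per-pixel change probabilities from estimated variances by minimizing the power of an optimal detector subject to a payload constraint — can be illustrated in its simpler ternary (±1) form, the MiPOD-style construction, rather than this paper's pentary scheme. In the sketch below, the Fisher-information weighting I_i = 2/σ_i⁴ and the Lagrangian condition β_i·I_i·λ = ln((1−2β_i)/β_i) are the standard ternary Gaussian-model versions and stand in for the paper's pair of equations:

```python
import numpy as np

def ternary_entropy(beta):
    # entropy (nats) of change probabilities (beta, beta, 1 - 2*beta)
    return -2*beta*np.log(beta) - (1 - 2*beta)*np.log(1 - 2*beta)

def solve_beta(lam, I, iters=60):
    # per-pixel bisection for beta*I*lam = ln((1-2*beta)/beta), beta in (0, 1/3)
    lo = np.full_like(I, 1e-9)
    hi = np.full_like(I, 1/3 - 1e-9)
    for _ in range(iters):
        mid = (lo + hi) / 2
        f = mid * I * lam - np.log((1 - 2*mid) / mid)
        hi = np.where(f > 0, mid, hi)
        lo = np.where(f <= 0, mid, lo)
    return (lo + hi) / 2

def embedding_probs(sigma2, payload_nats, iters=60):
    I = 2.0 / sigma2**2        # per-pixel Fisher information (Gaussian model)
    lo, hi = 1e-6, 1e6         # outer search for the Lagrange multiplier
    for _ in range(iters):
        lam = np.sqrt(lo * hi)
        beta = solve_beta(lam, I)
        if ternary_entropy(beta).sum() > payload_nats:
            lo = lam           # embedding too large: increase the multiplier
        else:
            hi = lam
    return beta

rng = np.random.default_rng(1)
sigma2 = rng.uniform(0.5, 20.0, size=1000)           # estimated pixel variances
beta = embedding_probs(sigma2, 0.2 * 1000 * np.log(2))  # 0.2 bits per pixel
```

High-variance (textured) pixels receive larger change probabilities, which is exactly the behavior the paper exploits; the β values would then be converted to embedding costs for syndrome-trellis codes.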
Directory of Open Access Journals (Sweden)
Jie Wang
2017-03-01
Full Text Available Deep convolutional neural networks (CNNs have been widely used to obtain high-level representation in various computer vision tasks. However, in the field of remote sensing, there are not sufficient images to train a useful deep CNN. Instead, we tend to transfer successful pre-trained deep CNNs to remote sensing tasks. In the transferring process, the generalization power of the features in the pre-trained deep CNNs plays the key role. In this paper, we propose two promising architectures to extract general features from pre-trained deep CNNs for remote scene classification. These two architectures suggest two directions for improvement. First, before the pre-trained deep CNNs, we design a linear PCA network (LPCANet to synthesize spatial information of remote sensing images in each spectral channel. This design shortens the spatial “distance” of target and source datasets for pre-trained deep CNNs. Second, we introduce quaternion algebra to LPCANet, which further shortens the spectral “distance” between remote sensing images and the images used to pre-train deep CNNs. With five well-known pre-trained deep CNNs, experimental results on three independent remote sensing datasets demonstrate that our proposed framework obtains state-of-the-art results without fine-tuning and feature fusing. This paper also provides a baseline for transferring fresh pre-trained deep CNNs to other remote sensing tasks.
Loley, Christina; König, Inke R; Hothorn, Ludwig; Ziegler, Andreas
2013-12-01
The analysis of genome-wide genetic association studies generally starts with univariate statistical tests of each single-nucleotide polymorphism. The standard approach is the Cochran-Armitage trend test or its logistic regression equivalent, although this approach can lose considerable power if the underlying genetic model is not additive. An alternative is the MAX test, which is robust against the three basic modes of inheritance. Here, the asymptotic distribution of the MAX test is derived using the generalized linear model together with the delta method and multiple contrasts. The approach is applicable to binary, quantitative, and survival traits. It may be used for unrelated individuals, family-based studies, and matched pairs. The approach provides point and interval effect estimates and allows selecting the most plausible genetic model using the minimum P-value. R code is provided. A Monte Carlo simulation study shows that the asymptotic MAX test framework maintains type I error levels well and has good power and good model-selection properties for minor allele frequencies ≥0.3. Pearson's χ²-test is superior for lower minor allele frequencies with low frequencies of the rare homozygous genotype. In these cases, the model selection procedure should be used with caution. The use of the MAX test is illustrated by reanalyzing findings from seven genome-wide association studies including case-control, matched pairs, and quantitative trait data.
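In its simplest case-control form, the MAX test takes the maximum of three Cochran-Armitage trend statistics computed under recessive, additive, and dominant genotype scores ("MAX3"). A small numpy sketch for a 2×3 genotype table; the asymptotic null distribution via multiple contrasts, which the paper derives, is not reproduced here:

```python
import numpy as np

SCORES = {"recessive": (0, 0, 1), "additive": (0, 0.5, 1), "dominant": (0, 1, 1)}

def catt_z(cases, controls, x):
    """Cochran-Armitage trend statistic for genotype counts (AA, Aa, aa)."""
    r = np.asarray(cases, float)
    s = np.asarray(controls, float)
    x = np.asarray(x, float)
    n = r + s
    N, R = n.sum(), r.sum()
    phi = R / N                                    # overall case fraction
    U = (x * (r - phi * n)).sum()                  # score statistic
    V = phi * (1 - phi) * ((x**2 * n).sum() - (x * n).sum()**2 / N)
    return U / np.sqrt(V)

def max3(cases, controls):
    """MAX statistic over the three basic modes of inheritance."""
    return max(abs(catt_z(cases, controls, x)) for x in SCORES.values())

print(max3((10, 40, 50), (50, 40, 10)))  # risk-allele enrichment: large value
print(max3((50, 40, 10), (50, 40, 10)))  # identical distributions -> 0.0
```

Because the three trend statistics are highly correlated, MAX3 is not standard normal under the null; that is precisely why the paper's derivation of its asymptotic distribution (or a permutation approach) is needed for valid P-values.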
Appukuttan, DP; Vinayagavel, M; Balasundaram, A; Damodaran, LK; Shivaraman, P; Gunasshegaran, K
2015-01-01
Background: Oral health has an impact on quality of life; hence, for research purposes, validation of a Tamil version of the General Oral Health Assessment Index would enable it to be used as a valuable tool among the Tamil-speaking population. Aim: In this study, we aimed to assess the psychometric properties of the translated Tamil version of the General Oral Health Assessment Index (GOHAI-Tml). Subjects and Methods: Linguistic adaptation involved a forward and backward blind translation process. Reliability was analyzed using test-retest, Cronbach's alpha, and split-half reliability. Inter-item and item-total correlations were evaluated using Spearman rank correlation. Convenience sampling was done, and 265 consecutive patients aged 20-70 years attending the outpatient department were recruited. Subjects were requested to fill in a self-reporting questionnaire along with the Tamil GOHAI version. Clinical examination was done on the same visit. Concurrent validity was measured by assessing the relationship between GOHAI scores and self-perceived oral health and general health status, satisfaction with oral health, need for dental treatment, and esthetic satisfaction. Discriminant validity was evaluated by comparing the GOHAI scores with the objectively assessed clinical parameters. Exploratory factor analysis was done to examine the factor structure. Results: Mean GOHAI-Tml score was 52.7 (6.8; range 22-60; median 54). The mean number of negative impacts was 2 (2.4; range 0-11; median 1). The Spearman rank correlation for test-retest ranged from 0.8 to 0.9. The GOHAI-Tml demonstrated acceptable psychometric properties, so that it can be used as an efficient tool in identifying the impact of oral health on quality of life among the Tamil-speaking population. PMID:27057380
Directory of Open Access Journals (Sweden)
Jalalifar Mehran
2007-01-01
Full Text Available In this paper, an adaptive rotor flux observer based on the adaptive backstepping approach is proposed; it estimates the stator and rotor resistances simultaneously for an induction motor used in a series hybrid electric vehicle. The controller of the induction motor (IM) is designed using the input-output feedback linearization technique. Combining this controller with the adaptive backstepping observer makes the system robust against rotor and stator resistance uncertainties. In addition, the mechanical components of a hybrid electric vehicle are called from the Advanced Vehicle Simulator software library and linked with the electric motor. Finally, a typical series hybrid electric vehicle is modeled and investigated. Various tests, such as acceleration, traversing a ramp, and fuel consumption and emission, are performed on the proposed model of a series hybrid vehicle. The computer simulation results obtained confirm the validity and performance of the proposed IM control approach for a series hybrid electric vehicle.
Directory of Open Access Journals (Sweden)
Xiuchun Li
2013-01-01
Full Text Available When the parameters of both the drive and response systems are all unknown, an adaptive sliding mode controller, strongly robust to exotic perturbations, is designed for realizing generalized function projective synchronization. A sliding mode surface is given, and the controlled system is asymptotically stable on this surface as time passes. Based on the adaptation laws and Lyapunov stability theory, an adaptive sliding controller is designed to ensure the occurrence of the sliding motion. Finally, numerical simulations are presented to verify the effectiveness and robustness of the proposed method even when both drive and response systems are perturbed by external disturbances.
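The Lyapunov-based adaptation idea used here — choose a Lyapunov function containing the parameter-estimation error, then pick the adaptation law so the derivative is negative semidefinite — can be demonstrated on a much smaller problem. Below is a minimal scalar analogue (plain adaptive stabilization of a plant with one unknown parameter, not the paper's full sliding-mode projective-synchronization design):

```python
# Plant: xdot = a*x + u with unknown a. Controller u = -(a_hat + k)*x,
# adaptation law a_hat_dot = gamma*x**2, derived from the Lyapunov function
# V = x**2/2 + (a - a_hat)**2/(2*gamma), whose derivative is Vdot = -k*x**2 <= 0.
a, k, gamma, dt = 2.0, 1.0, 5.0, 1e-3
x, a_hat = 1.0, 0.0
for _ in range(20000):                 # 20 s of explicit Euler integration
    u = -(a_hat + k) * x
    x += dt * (a * x + u)
    a_hat += dt * gamma * x**2
print(x, a_hat)   # x is driven toward zero despite never knowing a
```

Note that V is non-increasing, so both the state and the estimation error stay bounded, and the state is driven to zero even though the estimate a_hat need not converge to the true parameter; the paper's sliding-mode construction adds a switching surface on top of this mechanism to gain robustness to disturbances.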
McTaggart-Cowan, Helen M; O'Cathain, Alicia; Tsuchiya, Aki; Brazier, John E
2012-04-01
To understand the effect of an adaptation exercise (AE) on general population values for rheumatoid arthritis (RA) states. A sequential mixed methods design was employed: an analysis of a dataset to develop RA states for valuing in later phases of the study; a qualitative interview study with members of the general population to identify how an AE affected valuing of the RA states and to help design a questionnaire for the final phase; and a quantitative quasi-experimental study to identify factors that influence change in values after being informed about adaptation. Three RA states were developed using Rasch and cluster analyses. Participants in the qualitative phase identified a range of ways in which information about adaptation affected their values. For example, they realized they could adapt to RA because their family and friends who had RA, or similar conditions, could cope. A 25-item questionnaire was developed and used during the final phase to identify that younger and healthier individuals were more likely to increase their values after being informed about disease adaptation. The qualitative findings were revisited and found to support the quantitative results. This approach facilitated understanding of whether and how an AE affected valuing of health states. Each phase affected the next phase of the study, leading to the conclusion that general population respondents who have little experience of disease will likely increase their health state values after being informed about adaptation because they understand that they could cope with the disease.
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.
Hughes, Vanessa K; Langlois, Neil E I
2010-12-01
Bruises can have medicolegal significance such that the age of a bruise may be an important issue. This study sought to determine if colorimetry or reflectance spectrophotometry could be employed to objectively estimate the age of bruises. Based on a previously described method, reflectance spectrophotometric scans were obtained from bruises using a Cary 100 Bio spectrophotometer fitted with a fibre-optic reflectance probe. Measurements were taken from the bruise and a control area. Software was used to calculate the first derivative at 490 and 480 nm; the proportion of oxygenated hemoglobin was calculated using an isobestic point method, and a software application converted the scan data into colorimetry data. In addition, data on factors that might be associated with the determination of the age of a bruise were recorded: subject age, subject sex, degree of trauma, bruise size, skin color, body build, and depth of bruise. From 147 subjects, 233 reflectance spectrophotometry scans were obtained for analysis. The age of the bruises ranged from 0.5 to 231.5 h. A General Linear Model analysis method was used. This revealed that colorimetric measurement of the yellowness of a bruise accounted for 13% of the bruise age. By incorporation of the other recorded data (as above), yellowness could predict up to 32% of the age of a bruise, implying that 68% of the variation was dependent on other factors. However, critical appraisal of the model revealed that the colorimetry method of determining the age of a bruise was affected by skin tone and required a measure of the proportion of oxygenated hemoglobin, which is obtained by spectrophotometric methods. Using spectrophotometry, the first derivative at 490 nm alone accounted for 18% of the bruise age estimate. When additional factors (subject sex, bruise depth and oxygenation of hemoglobin) were included in the General Linear Model this increased to 31%, implying that 69% of the variation was dependent on other factors.
Foam Multi-Dimensional General Purpose Monte Carlo Generator With Self-Adapting Symplectic Grid
Jadach, Stanislaw
2000-01-01
A new general purpose Monte Carlo event generator with self-adapting grid consisting of simplices is described. In the process of initialization, the simplex-shaped cells divide into daughter subcells in such a way that: (a) cell density is biggest in areas where the integrand is peaked, (b) cells elongate themselves along hyperspaces where the integrand is enhanced/singular. The grid is anisotropic, i.e. memory of the axes directions of the primary reference frame is lost. In particular, the algorithm is capable of dealing with distributions featuring strong correlation among variables (like a ridge along the diagonal). The presented algorithm is complementary to others known and commonly used in Monte Carlo event generators. It is, in principle, more effective than any other one for distributions with very complicated patterns of singularities - the price to pay is that it is memory-hungry. It is therefore aimed at a small number of integration dimensions (<10). It should be combined with other methods for higher ...
Foam: Multi-dimensional general purpose Monte Carlo generator with self-adapting simplical grid
Jadach, S.
2000-08-01
A new general purpose Monte Carlo event generator with self-adapting grid consisting of simplices is described. In the process of initialization, the simplex-shaped cells divide into daughter subcells in such a way that: (a) cell density is biggest in areas where integrand is peaked, (b) cells elongate themselves along hyperspaces where integrand is enhanced/singular. The grid is anisotropic, i.e. memory of the axes directions of the primary reference frame is lost. In particular, the algorithm is capable of dealing with distributions featuring strong correlation among variables (like ridge along diagonal). The presented algorithm is complementary to others known and commonly used in the Monte Carlo event generators. It is, in principle, more effective than any other one for distributions with very complicated patterns of singularities - the price to pay is that it is memory-hungry. It is therefore aimed at a small number of integration dimensions ( <10 ). It should be combined with other methods for higher dimension. The source code in Fortran 77 is available from http://home.cern.ch/ hadach.
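Foam's central idea — grow a grid of cells by repeatedly splitting the cell that contributes most to the integration error, so that cell density tracks the integrand's peaks — can be sketched far more simply in one dimension with interval cells instead of simplices. A hedged illustration (adaptive stratified Monte Carlo, not the actual Foam algorithm or its cell-division criterion):

```python
import numpy as np

def adaptive_mc(f, a, b, n_cells=200, n_per_cell=200, seed=0):
    """Adaptively split the cell where f varies most, then stratified-sample."""
    rng = np.random.default_rng(seed)
    cells = [(a, b)]

    def roughness(lo, hi):
        # spread of f over a few probe points, scaled by the cell width
        t = np.linspace(lo, hi, 9)
        v = f(t)
        return (v.max() - v.min()) * (hi - lo)

    while len(cells) < n_cells:
        i = max(range(len(cells)), key=lambda j: roughness(*cells[j]))
        lo, hi = cells.pop(i)              # split the "worst" cell in half
        mid = (lo + hi) / 2
        cells += [(lo, mid), (mid, hi)]

    total = 0.0                            # stratified Monte Carlo estimate
    for lo, hi in cells:
        u = rng.uniform(lo, hi, n_per_cell)
        total += (hi - lo) * f(u).mean()
    return total

# sharply peaked integrand; the exact value is 20*arctan(5) ~ 27.468
est = adaptive_mc(lambda x: 1.0 / ((x - 0.5)**2 + 0.01), 0.0, 1.0)
print(est)
```

The refinement concentrates tiny cells around the peak at x = 0.5, which is what makes stratified sampling efficient here; Foam does the analogous thing with anisotropic simplices in many dimensions, at the memory cost the abstract mentions.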
A general hybrid radiation transport scheme for star formation simulations on an adaptive grid
Energy Technology Data Exchange (ETDEWEB)
Klassen, Mikhail; Pudritz, Ralph E. [Department of Physics and Astronomy, McMaster University 1280 Main Street W, Hamilton, ON L8S 4M1 (Canada); Kuiper, Rolf [Max Planck Institute for Astronomy Königstuhl 17, D-69117 Heidelberg (Germany); Peters, Thomas [Institut für Computergestützte Wissenschaften, Universität Zürich Winterthurerstrasse 190, CH-8057 Zürich (Switzerland); Banerjee, Robi; Buntemeyer, Lars, E-mail: klassm@mcmaster.ca [Hamburger Sternwarte, Universität Hamburg Gojenbergsweg 112, D-21029 Hamburg (Germany)
2014-12-10
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
Directory of Open Access Journals (Sweden)
Al-Jumah KA
2014-03-01
Full Text Available Khalaf Ali Al-Jumah,1 Mohamed Azmi Hassali,2 Ibrahem Al-Zaagi3 1Al Amal Psychiatric Hospital, Riyadh, Saudi Arabia; 2School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang, Malaysia; 3King Saud Medical City, Riyadh, Saudi Arabia. Objective: The aim of this study was to cross-culturally adapt the Armando Patient Satisfaction Questionnaire into Arabic and validate its use in the general population. Methods: The translation was conducted based on the principles of the most widely used model in questionnaire translation, namely Brislin's back-translation model. A written authorization allowing translation into Arabic was obtained from the original author. The Arabic version of the questionnaire was distributed to 480 participants to evaluate construct validity. Statistical Package for the Social Sciences version 17.0 for Windows was used for the statistical analysis. Results: The response rate of this study was 96%; most of the respondents (52.5%) were female. Internal consistency was assessed using Cronbach's α, which showed that this questionnaire provides a high reliability coefficient (reaching 0.9299) and a high degree of consistency and thus can be relied upon in future patient satisfaction research. Keywords: cross-cultural, Arabic, survey
Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei
2017-09-25
It is challenging for current statistical models to predict the clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. A multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. Dynamic prediction was performed for both internal and external subjects using samples from the posterior distributions of the parameter estimates and random effects, and predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB), and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20), compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized mixed models hold promise for predicting the clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available ( https://pdbp.ninds.nih.gov/data-management ).
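The three accuracy criteria used above are standard and simple to compute directly. A small numpy sketch; the ROC AUC is computed here as the normalized Mann-Whitney U statistic, which is equivalent to the area under the empirical ROC curve:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error between observed and predicted values."""
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat))**2)))

def absolute_bias(y, yhat):
    """Mean absolute deviation between observed and predicted values."""
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(yhat))))

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # count pairs where the positive outranks the negative (ties count 1/2)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` counts three of four positive-negative pairs correctly ordered, giving 0.75.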
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation, generalizes several notions from this equation to systems of linear equations, and introduces the main ideas using matrices. It then offers a detailed chapter on determinants, presenting the main ideas with detailed proofs. The third chapter introduces Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including rigorous proofs of all the main results, and linear transformations - areas that are ignored or poorly explained in many textbooks. Chapter 6 introduces the idea of matrices via linear transformations, which is easier to understand than the usual theory-of-matrices approach. The final two chapters are more advanced, introducing t...
Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D
2018-01-15
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods, including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation, and its computation is constrained by the number of quadrature points; while the ML method also suffers from this constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
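The bias of half-detection-limit imputation is visible even in the simplest setting: estimating the mean of a left-censored normal sample. A hedged sketch (one parameter, σ assumed known; the paper's mixed-model setting is far richer) comparing HDL substitution with the censored-data likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=1.0, size=5000)   # latent biomarker values
L = 0.5                                          # lower limit of detection
observed = y[y >= L]
n_cens = (y < L).sum()

# Ad hoc estimate: replace censored values with half the detection limit
hdl_mean = (observed.sum() + n_cens * L / 2) / len(y)

# Maximum likelihood for a left-censored normal (sigma assumed known = 1):
# observed points contribute the density, censored points the CDF at L
def negloglik(mu):
    return -(norm.logpdf(observed, mu, 1.0).sum()
             + n_cens * norm.logcdf(L, mu, 1.0))

mle = minimize_scalar(negloglik, bounds=(-2, 4), method="bounded").x
print(hdl_mean, mle)  # the HDL estimate is biased; the MLE recovers ~1.0
```

With these simulation settings the HDL estimate is biased by roughly 0.1 while the likelihood-based estimate is essentially unbiased, which mirrors the paper's comparison in miniature.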
Directory of Open Access Journals (Sweden)
Cyril R Pernet
2014-01-01
Full Text Available This tutorial presents several misconceptions related to the use of the General Linear Model (GLM) in functional Magnetic Resonance Imaging (fMRI). The goal is not to present mathematical proofs but to educate using examples and computer code (in Matlab). In particular, I address issues related to (i) model parameterization (modelling baseline or null events and scaling of the design matrix); (ii) hemodynamic modelling using basis functions; and (iii) computing percentage signal change. Using a simple controlled block design and an alternating block design, I first show why 'baseline' should not be modelled (model over-parameterization), and how this affects effect sizes. I also show that, depending on what is tested, over-parameterization does not necessarily impact upon statistical results. Next, using a simple periodic vs. random event-related design, I show how the haemodynamic model (haemodynamic function only or using derivatives) can affect parameter estimates, as well as detail the role of orthogonalization. I then relate the above results to the computation of percentage signal change. Finally, I discuss how these issues affect group analysis and give some recommendations.
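One recurring quantity in the tutorial, percentage signal change, is simply the task parameter estimate scaled by the regressor's peak and expressed relative to the fitted constant. A minimal noiseless numpy sketch (the HRF parameters below are illustrative, not SPM's canonical values, and the design deliberately contains no separate "baseline" regressor, only the task regressor and a constant):

```python
import numpy as np

TR, n_scans = 2.0, 100
t = np.arange(n_scans) * TR

# Boxcar task regressor (20 s on / 20 s off), convolved with a simple
# double-gamma HRF-like kernel
boxcar = ((t // 20) % 2 == 0).astype(float)
ht = np.arange(0, 32, TR)
hrf = (ht**5 * np.exp(-ht)) / 120 - 0.1 * (ht**15 * np.exp(-ht)) / 1.3e12
x = np.convolve(boxcar, hrf)[:n_scans]

# Design matrix: task regressor plus a constant (no baseline regressor)
X = np.column_stack([x, np.ones(n_scans)])

# Noiseless simulated signal with a known effect size and baseline level
beta_true, baseline = 2.0, 100.0
y = baseline + beta_true * x

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
# Percent signal change: estimate scaled by the regressor peak, relative
# to the fitted constant
psc = 100.0 * beta_hat[0] * x.max() / beta_hat[1]
```

Because the signal is noiseless, least squares recovers the generating parameters exactly, which makes the scaling step transparent: psc depends on how the regressor was scaled (its peak), which is exactly the parameterization pitfall the tutorial warns about.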
Cadmium-hazard mapping using a general linear regression model (Irr-Cad) for rapid risk assessment.
Simmons, Robert W; Noble, Andrew D; Pongsakul, P; Sukreeyapongse, O; Chinabut, N
2009-02-01
Research undertaken over the last 40 years has identified the irrefutable relationship between long-term consumption of cadmium (Cd)-contaminated rice and human Cd disease. In order to protect public health and livelihood security, the ability to accurately and rapidly determine spatial Cd contamination is of high priority. During 2001-2004, a general linear regression model, Irr-Cad, was developed to predict the spatial distribution of soil Cd in a Cd/Zn co-contaminated cascading irrigated rice-based system in Mae Sot District, Tak Province, Thailand (longitude E 98 degrees 59'-E 98 degrees 63' and latitude N 16 degrees 67'-16 degrees 66'). The results indicate that Irr-Cad accounted for 98% of the variance in mean Field Order total soil Cd. Preliminary validation indicated that Irr-Cad-predicted mean Field Order total soil Cd was significantly correlated with observed values. The approach is potentially relevant to other contaminated rice-based systems in the region (Myanmar, Lao PDR, Thailand, and Yunnan Province, China). These countries also have actively and historically mined Zn, Pb, and Cu deposits where Cd is likely to be a potential hazard if uncontrolled discharge/runoff enters areas of rice cultivation. As such, it is envisaged that the Irr-Cad model could be applied for Cd hazard assessment and effectively form the basis of intervention options and policy decisions to protect public health, livelihoods, and export security.
Huang, Zhibin; Mayr, Nina A; Yuh, William T; Wang, Jian Z; Lo, Simon S
2013-06-01
Using the generalized linear-quadratic (gLQ) model, we reanalyzed published dosimetric data from patients with radiation myelopathy (RM) after reirradiation with spinal stereotactic body radiotherapy (SBRT). Based on a published study, the thecal sac doses of five RM patients and 14 non-RM patients were reanalyzed using the gLQ model. Maximum point doses (Pmax) in the thecal sac were obtained. The gLQ-based biologically effective doses were calculated and normalized (nBEDgLQ) to a 2-Gy equivalent dose (nBEDgLQ = Gy2/2_gLQ). The initial conventional radiotherapy dose, converted to Gy2/2_gLQ, was added. Total (conventional radiotherapy + SBRT) mean Pmax nBEDgLQ was lower in non-RM than in RM patients: 59.2 Gy2/2_gLQ (range: 37.5-101.9) versus 94.8 Gy2/2_gLQ (range: 70.2-133.4) (p = 0.0016). The proportion of the total Pmax nBEDgLQ accounted for by the SBRT Pmax nBEDgLQ was higher for RM patients. No RM was seen below a total spinal cord nBEDgLQ of 70 Gy2/2_gLQ. The gLQ-derived spinal cord tolerance for total nBEDgLQ was 70 Gy2/2_gLQ.
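The normalization used here generalizes the familiar linear-quadratic conversion of a fractionation scheme to a 2-Gy-equivalent dose. A sketch of the plain LQ version; note that the gLQ model modifies the quadratic term at high doses per fraction, so these plain-LQ numbers overestimate the single-fraction SBRT term relative to the paper's gLQ values, and α/β = 2 Gy is only a commonly assumed spinal-cord value:

```python
def bed(n_fx, d, ab):
    """Biologically effective dose under the standard LQ model (Gy)."""
    return n_fx * d * (1 + d / ab)

def eqd2(n_fx, d, ab):
    """Equivalent total dose delivered in 2-Gy fractions (Gy_2/2)."""
    return bed(n_fx, d, ab) / (1 + 2 / ab)

AB_CORD = 2.0  # commonly assumed alpha/beta ratio for spinal cord (Gy)

conv = eqd2(20, 2.0, AB_CORD)   # 40 Gy in 20 x 2 Gy fractions -> 40.0
sbrt = eqd2(1, 18.0, AB_CORD)   # 18 Gy in a single fraction    -> 90.0
total = conv + sbrt             # cumulative 2-Gy-equivalent cord dose
```

Summing the 2-Gy-equivalent doses of the initial course and the reirradiation, as in the example above, is exactly the bookkeeping the paper performs, only with the gLQ rather than the LQ conversion.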
Yan, Qi; Tiwari, Hemant K; Yi, Nengjun; Gao, Guimin; Zhang, Kui; Lin, Wan-Yu; Lou, Xiang-Yang; Cui, Xiangqin; Liu, Nianjun
2015-01-01
The existing methods for identifying multiple rare variants underlying complex diseases in family samples are underpowered. Therefore, we aim to develop a new set-based method for an association study of dichotomous traits in family samples. We introduce a framework for testing the association of genetic variants with diseases in family samples based on a generalized linear mixed model. Our proposed method is based on a kernel machine regression and can be viewed as an extension of the sequence kernel association test (SKAT and famSKAT) for application to family data with dichotomous traits (F-SKAT). Our simulation studies show that the original SKAT has inflated type I error rates when applied directly to family data. By contrast, our proposed F-SKAT has the correct type I error rate. Furthermore, in all of the considered scenarios, F-SKAT, which uses all family data, has higher power than both SKAT, which uses only unrelated individuals from the family data, and another method, which uses all family data. We propose a set-based association test that can be used to analyze family data with dichotomous phenotypes while handling genetic variants with the same or opposite directions of effects as well as any types of family relationships. © 2015 S. Karger AG, Basel.
Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin
2017-09-27
Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed model (WGLIMM) was used to deal with this hierarchical data structure. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalences in the four residence groups - urban, second city, suburban, and town/rural - were 45.8%, 46.9%, 53.7%, and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect. Weighted generalized linear mixed models are useful to deal with clustered survey data. Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions. Creative Commons Attribution License.
Planeta, Josef; Karásek, Pavel; Hohnová, Barbora; Sťavíková, Lenka; Roth, Michal
2012-08-10
Biphasic solvent systems composed of an ionic liquid (IL) and supercritical carbon dioxide (scCO(2)) have become frequently used in synthesis, extractions and electrochemistry. In the design of related applications, information on the interphase partitioning of the target organics is essential, and the infinite-dilution partition coefficients of organic solutes in IL-scCO(2) systems can conveniently be obtained by supercritical fluid chromatography. The database of experimental partition coefficients obtained previously in this laboratory has been employed to test a generalized predictive model for the solute partition coefficients. The model is an amended version of that described before by Hiraga et al. (J. Supercrit. Fluids, in press). Because of the difficulty of the problem to be modeled, the model combines several different concepts: linear solvation energy relationships, density-dependent solvent power of scCO(2), regular solution theory, and the Flory-Huggins theory of athermal solutions. The model shows moderate success in correlating the infinite-dilution solute partition coefficients (K-factors) in individual IL-scCO(2) systems at varying temperature and pressure. However, larger K-factor data sets involving multiple IL-scCO(2) systems appear to be beyond the reach of the model, especially when the ILs involved pertain to different cation classes. Copyright © 2012 Elsevier B.V. All rights reserved.
Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L
2012-12-01
The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least square (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. IRWLS is applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).
High-fidelity linear time-invariant model of a smart rotor with adaptive trailing edge flaps
DEFF Research Database (Denmark)
Bergami, Leonardo; Hansen, Morten Hartvig
2017-01-01
A high-fidelity linear time-invariant model of the aero-servo-elastic response of a wind turbine with trailing-edge flaps is presented and used for systematic tuning of an individual flap controller. The model includes the quasi-steady aerodynamic effects of trailing-edge flaps on wind turbine bl...
Radaydeh, Redha Mahmoud Mesleh
2010-09-01
The impact of co-channel interference and nonideal estimation of the desired user channel state information (CSI) on the performance of an adaptive threshold-based generalized transmit diversity for low-complexity multiple-input single-output configuration is investigated. The adaptation to channel conditions is assumed to be based on the desired user CSI, and the number of active transmit antennas is adjusted accordingly to guarantee predetermined target performance. To facilitate comparisons between different adaptation schemes, new analytical results for the statistics of combined signal-to-interference-plus-noise ratio (SINR) are derived, which can be applied for different fading conditions of interfering signals. Selected numerical results are presented to validate the analytical development and to compare the outage performance of the considered adaptation schemes. © 2010 IEEE.
Boulehouache, Soufiane; Maamri, Ramdane; Sahnoun, Zaidi
2015-01-01
Pedagogical Agents (PAs) for Mobile Learning (m-learning) must be able not only to adapt the teaching to the learner's knowledge level and profile but also to ensure pedagogical efficiency within unpredictably changing runtime contexts. To deal with this issue, this paper proposes a Context-aware Self-Adaptive Fractal Component…
Dias, Sofia; Sutton, Alex J; Ades, A E; Welton, Nicky J
2013-07-01
We set out a generalized linear model framework for the synthesis of data from randomized controlled trials. A common model is described, taking the form of a linear regression for both fixed and random effects synthesis, which can be implemented with normal, binomial, Poisson, and multinomial data. The familiar logistic model for meta-analysis with binomial data is a generalized linear model with a logit link function, which is appropriate for probability outcomes. The same linear regression framework can be applied to continuous outcomes, rate models, competing risks, or ordered category outcomes by using other link functions, such as identity, log, complementary log-log, and probit link functions. The common core model for the linear predictor can be applied to pairwise meta-analysis, indirect comparisons, synthesis of multiarm trials, and mixed treatment comparisons, also known as network meta-analysis, without distinction. We take a Bayesian approach to estimation and provide WinBUGS program code for a Bayesian analysis using Markov chain Monte Carlo simulation. An advantage of this approach is that it is straightforward to extend to shared parameter models where different randomized controlled trials report outcomes in different formats but from a common underlying model. Use of the generalized linear model framework allows us to present a unified account of how models can be compared using the deviance information criterion and how goodness of fit can be assessed using the residual deviance. The approach is illustrated through a range of worked examples for commonly encountered evidence formats.
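In the simplest case of this framework, a fixed-effect synthesis of binomial data on the logit link, the pooled effect reduces to inverse-variance weighting of per-trial log odds ratios. The sketch below illustrates that closed form only; the 2x2 counts are invented, and the article's own Bayesian MCMC estimation in WinBUGS is not reproduced here:

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its large-sample variance for one trial's
    2x2 table: a/b = events/non-events (treatment), c/d = (control)."""
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def fixed_effect_pool(tables):
    """Inverse-variance (fixed-effect) pooling of log odds ratios,
    the closed-form counterpart of a logit-link fixed-effect synthesis."""
    num = den = 0.0
    for table in tables:
        lor, var = log_odds_ratio(*table)
        w = 1.0 / var
        num += w * lor
        den += w
    return num / den, math.sqrt(1.0 / den)   # pooled log OR and its SE

# Two invented two-arm trials.
trials = [(15, 85, 10, 90), (30, 70, 20, 80)]
pooled, se = fixed_effect_pool(trials)
```

Swapping the link function (identity, log, cloglog, probit) changes only how each trial's effect and variance are computed; the pooling step is unchanged.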
Cross-Cultural adaptation of the General Functioning Scale of the Family.
Pires, Thiago; Assis, Simone Gonçalves de; Avanci, Joviana Quintes; Pesce, Renata Pires
2016-06-27
To describe the process of cross-cultural adaptation of the General Functioning Scale of the Family, a subscale of the McMaster Family Assessment Device, for the Brazilian population. The General Functioning Scale of the Family was translated into Portuguese and administered to 500 guardians of children in the second grade of elementary school in public schools of Sao Gonçalo, Rio de Janeiro, Southeastern Brazil. The types of equivalences investigated were: conceptual and of items, semantic, operational, and measurement. The study involved discussions with experts, translations and back-translations of the instrument, and psychometric assessment. Reliability and validity studies were carried out by internal consistency testing (Cronbach's alpha), Guttman split-half correlation model, Pearson correlation coefficient, and confirmatory factor analysis. Associations between General Functioning of the Family and variables theoretically associated with the theme (father's or mother's drunkenness and violence between parents) were estimated by odds ratio. Semantic equivalence was between 90.0% and 100%. Cronbach's alpha ranged from 0.79 to 0.81, indicating good internal consistency of the instrument. Pearson correlation coefficient ranged between 0.303 and 0.549. Statistical association was found between the general functioning of the family score and the theoretically related variables, as well as good fit quality of the confirmatory analysis model. The results indicate the feasibility of administering the instrument to the Brazilian population, as it is easy to understand and a good measurement of the construct of interest.
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2017-01-01
In photoacoustic imaging (PA), the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, the so-called Minimum Variance-Based D...
Directory of Open Access Journals (Sweden)
Ge Shuzhi S
2010-12-01
Abstract Background Near-infrared spectroscopy (NIRS) is a non-invasive neuroimaging technique that has recently been developed to measure the changes of cerebral blood oxygenation associated with brain activities. To date, for functional brain mapping applications, there is no standard on-line method for analysing NIRS data. Methods In this paper, a novel on-line NIRS data analysis framework taking advantage of both the general linear model (GLM) and the Kalman estimator is devised. The Kalman estimator is used to update the GLM coefficients recursively, and one critical coefficient regarding brain activities is then passed to a t-statistical test. The t-statistical test result is used to update a topographic brain activation map. Meanwhile, a set of high-pass filters is plugged into the GLM to remove very low-frequency noise, and an autoregressive (AR) model is used to prevent the temporal correlation caused by physiological noise in NIRS time series. A set of data recorded in finger tapping experiments is studied using the proposed framework. Results The obtained results suggest that the method can effectively track the task-related brain activation areas and prevent noise distortion in the estimation while the experiment is running. Thereby, the potential of the proposed method for real-time NIRS-based brain imaging was demonstrated. Conclusions This paper presents a novel on-line approach for analysing NIRS data for functional brain mapping applications. This approach demonstrates the potential of a real-time-updating topographic brain activation map.
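The recursive coefficient update at the heart of such a framework can be illustrated in its linear-Gaussian special case, where the Kalman filter for a static coefficient vector reduces to recursive least squares. This is a minimal sketch on invented toy data; the paper's full pipeline (high-pass filtering, AR noise modeling, on-line t-tests) is not reproduced:

```python
import numpy as np

def rls_update(theta, P, x, y, r=1.0):
    """One Kalman-filter step for a static coefficient vector in the
    linear model y = x' theta + e with Var(e) = r: the linear-Gaussian
    special case of recursively updating model coefficients."""
    x = x.reshape(-1, 1)
    innov = y - (x.T @ theta).item()              # prediction error
    gain = P @ x / (r + (x.T @ P @ x).item())     # Kalman gain
    theta = theta + gain * innov
    P = P - gain @ x.T @ P                        # posterior covariance
    return theta, P

# Recover the slope 2.0 from noise-free observations of y = 2x (toy data).
theta = np.zeros((1, 1))
P = np.eye(1) * 1000.0                            # diffuse prior
for xv, yv in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    theta, P = rls_update(theta, P, np.array([xv]), yv)
```

Each new sample refines the estimate without refitting the whole time series, which is what makes an on-line, recursively updated activation map feasible.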
Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.
2012-05-01
The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. The paper has three main parts: the parameter estimation procedure, a simulation study, and an application of the model to real data. For the parameter estimation procedure, the concepts of threshold, nested random effects, and the computational algorithm are described. Simulated data are generated under three conditions to examine the effect of different parameter values of the random effect distributions. The last part is the application of the model to data on poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan) nested in district, and districts (kabupaten) are nested in province. For the simulation results, ARB (absolute relative bias) and RRMSE (relative root mean square error) are used. They show that the province parameters have the highest bias but the most stable RRMSE in all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, in the application of the model to the data, only the number of farmer families and the number of health personnel have significant contributions to the level of poverty in Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
Wang, Jian Z; Huang, Zhibin; Lo, Simon S; Yuh, William T C; Mayr, Nina A
2010-07-07
Conventional radiation therapy for cancer usually consists of multiple treatments (called fractions) with low doses of radiation. These dose schemes are planned with the guidance of the linear-quadratic (LQ) model, which has been the most prevalent model for designing dose schemes in radiation therapy. The high-dose fractions used in newer advanced radiosurgery, stereotactic radiation therapy, and high-dose rate brachytherapy techniques, however, cannot be accurately calculated with the traditional LQ model. To address this problem, we developed a generalized LQ (gLQ) model that encompasses the entire range of possible dose delivery patterns and derived formulas for special radiotherapy schemes. We show that the gLQ model can naturally derive the traditional LQ model for low-dose and low-dose rate irradiation and the target model for high-dose irradiation as two special cases of gLQ. LQ and gLQ models were compared with published data obtained in vitro from Chinese hamster ovary cells across a wide dose range [0 to approximately 11.5 gray (Gy)] and from animals with dose fractions up to 13.5 Gy. The gLQ model provided consistent interpretation across the full dose range, whereas the LQ model generated parameters that depended on dose range, fitted only data with doses of 3.25 Gy or less, and failed to predict high-dose responses. Therefore, the gLQ model is useful for analyzing experimental radiation response data across wide dose ranges and translating common low-dose clinical experience into high-dose radiotherapy schemes for advanced radiation treatments.
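The traditional LQ model discussed above has a simple closed form. The sketch below shows the standard LQ survival fraction and the derived biologically effective dose (BED); the fractionation schemes are illustrative only, and the gLQ generalization developed in the article, which modifies exactly this high-dose behavior, is not reproduced here:

```python
import math

def lq_survival(d, n, alpha, beta):
    """Surviving fraction after n fractions of dose d (Gy) under the
    traditional LQ model: SF = exp(-n * (alpha*d + beta*d**2))."""
    return math.exp(-n * (alpha * d + beta * d * d))

def bed(d, n, alpha_beta):
    """Biologically effective dose: BED = n*d*(1 + d / (alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

# Illustrative comparison: conventional 30 x 2 Gy vs hypofractionated
# 3 x 15 Gy, both with alpha/beta = 10 Gy.
bed_conventional = bed(2.0, 30, 10.0)
bed_hypo = bed(15.0, 3, 10.0)
```

The quadratic term makes large fractions disproportionately potent under LQ, and it is precisely this extrapolation that the abstract reports failing above roughly 3.25 Gy per fraction.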
Diegelmann, Mona; Jansen, Carl-Philipp; Wahl, Hans-Werner; Schilling, Oliver K; Schnabel, Eva-Luisa; Hauer, Klaus
2017-04-18
Physical activity (PA) may counteract depressive symptoms in nursing home (NH) residents through biological, psychological, and person-environment transactional pathways. Empirical results, however, have remained inconsistent. Addressing potential shortcomings of previous research, we examined the effect of a whole-ecology PA intervention program on NH residents' depressive symptoms using generalized linear mixed models (GLMMs). We used longitudinal data from residents of two German NHs who were included without any pre-selection regarding physical and mental functioning (n = 163, M age = 83.1, range 53-100 years; 72% female) and assessed on four occasions, each three months apart. Residents willing to participate received a 12-week PA training program. Afterwards, the training was implemented in weekly activity schedules by NH staff. We ran GLMMs with a gamma distribution to account for the highly skewed depressive symptoms outcome measure (12-item Geriatric Depression Scale-Residential). Exercising (n = 78) and non-exercising residents (n = 85) showed a comparable level of depressive symptoms at pretest. For exercising residents, depressive symptoms stabilized between pretest and posttest, whereas an increase was observed for non-exercising residents. The intervention group's stabilization in depressive symptoms was maintained at follow-up, whereas depressive symptoms increased further for non-exercising residents. Implementing an innovative PA intervention appears to be a promising approach to preventing the increase of NH residents' depressive symptoms. At the data-analytical level, GLMMs seem to be a promising tool for intervention research at large, because all longitudinally available data points and the non-normality of the outcome data can be taken into account.
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes
Lombardo, Luigi; Castro-Camilo, Daniela; Mai, Martin; Jie, Dou; Huser, Raphaël
2017-04-01
Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically-oriented output. In this contribution, we generate an SU-based landslide susceptibility model for Sado Island in Japan. This island is characterized by deep-seated landslides which, we assume, can only limitedly be explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyze the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and the quantiles from 0.05 to 0.95 with a step of 0.05, which were then used as predictors for landslide susceptibility. In addition, we combined shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to reduce the predictor set to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to identify the geomorphic features that primarily control the SU-based susceptibility within the test area while producing high predictive performances. Level 4 validation procedures were implemented to assess the uncertainty and quality of the models through a set of 500 randomly generated replicates.
Salihu, Hamisu M; Salemi, Jason L; Nash, Michelle C; Chandler, Kristen; Mbah, Alfred K; Alio, Amina P
2014-08-01
Lack of paternal involvement has been shown to be associated with adverse pregnancy outcomes, including infant morbidity and mortality, but the impact on health care costs is unknown. Various methodological approaches have been used in cost minimization and cost effectiveness analyses and it remains unclear how cost estimates vary according to the analytic strategy adopted. We illustrate a methodological comparison of decision analysis modeling and generalized linear modeling (GLM) techniques using a case study that assesses the cost-effectiveness of potential father involvement interventions. We conducted a 12-year retrospective cohort study using a statewide enhanced maternal-infant database that contains both clinical and nonclinical information. A missing name for the father on the infant's birth certificate was used as a proxy for lack of paternal involvement, the main exposure of this study. Using decision analysis modeling and GLM, we compared all infant inpatient hospitalization costs over the first year of life. Costs were calculated from hospital charges using department-level cost-to-charge ratios and were adjusted for inflation. In our cohort of 2,243,891 infants, 9.2% had a father uninvolved during pregnancy. Lack of paternal involvement was associated with higher rates of preterm birth, small-for-gestational age, and infant morbidity and mortality. Both analytic approaches estimate significantly higher per-infant costs for father uninvolved pregnancies (decision analysis model: $1,827, GLM: $1,139). This paper provides sufficient evidence that healthcare costs could be significantly reduced through enhanced father involvement during pregnancy, and buttresses the call for a national program to involve fathers in antenatal care.
Camilo, Daniela Castro
2017-08-30
Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically-oriented interpretation. In this contribution, we generate an SU-based landslide susceptibility model for Sado Island in Japan. This island is characterized by deep-seated landslides which, we assume, can only limitedly be explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and the quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of elevation and the 5% quantile of plan curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performances. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
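The LASSO-penalized selection step can be sketched in its simplest (Gaussian) form with a proximal-gradient solver, showing how the L1 penalty carries only the strongest covariates into the model. This is an illustrative toy, not the LUDARA script: the data, penalty level, and solver are all invented for the sketch:

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=1000):
    """Proximal-gradient (ISTA) solver for the LASSO problem
    min_b (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n = X.shape[0]
    step = n / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n         # gradient of the smooth part
        b = soft_threshold(b - step * grad, lam * step)
    return b

# Invented data: 5 candidate predictors, only the first two informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]
b = lasso_ista(X, y, lam=0.1)
```

The uninformative coefficients are driven to (essentially) zero, which is the shrink-back-to-an-interpretable-set behavior the abstract relies on.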
Directory of Open Access Journals (Sweden)
Minar Naomi Damanik-Ambarita
2016-07-01
The biotic integrity of the Guayas River basin in Ecuador is at environmental risk due to extensive anthropogenic activities. We investigated the potential impacts of hydromorphological and chemical variables on biotic integrity using macroinvertebrate-based bioassessments. The bioassessment methods utilized included the Biological Monitoring Working Party adapted for Colombia (BMWP-Col) and the average score per taxon (ASPT), via an extensive sampling campaign that was completed throughout the river basin at 120 sampling sites. The BMWP-Col classification ranged from very bad to good, and from probable severe pollution to clean water based on the ASPT scores. Generalized linear models (GLMs) and sensitivity analysis were used to relate the bioassessment index to hydromorphological and chemical variables. It was found that elevation, nitrate-N, sediment angularity, logs, presence of macrophytes, flow velocity, turbidity, bank shape, land use and chlorophyll were the key environmental variables affecting the BMWP-Col. From the analyses, it was observed that the rivers at the upstream, higher elevations of the river basin were in better condition compared to lowland systems and that a higher flow velocity was linked to a better BMWP-Col score. The nitrate concentrations were very low in the entire river basin and were not related to a negative impact on the macroinvertebrate communities. Although the results of the models provided insights into the ecosystem, cross-fold model development and validation also showed that there was a level of uncertainty in the outcomes. However, the results of the models and the sensitivity analysis can support water management actions to determine and focus on alterable variables, such as the land use at different elevations, monitoring of nitrate and chlorophyll concentrations, macrophyte presence, sediment transport and bank stability.
DEFF Research Database (Denmark)
Yang, Z.; Izadi-Zamanabadi, Roozbeh; Blanke, M.
2000-01-01
Based on the model-matching strategy, an adaptive control reconfiguration method for a class of nonlinear control systems is proposed by using the multiple-model scheme. Instead of requiring the nominal and faulty nonlinear systems to match each other directly in some proper sense, three sets of LTI models are employed to approximate the faulty, reconfigured and nominal nonlinear systems respectively with respect to the on-line information of the operating system, and a set of compensating modules are proposed and designed so as to make the local LTI model approximate the reconfigured...
Block-structured adaptive meshes and reduced grids for atmospheric general circulation models.
Jablonowski, Christiane; Oehmke, Robert C; Stout, Quentin F
2009-11-28
Adaptive mesh refinement techniques offer a flexible framework for future variable-resolution climate and weather models since they can focus their computational mesh on certain geographical areas or atmospheric events. Adaptive meshes can also be used to coarsen a latitude-longitude grid in polar regions. This allows for the so-called reduced grid setups. A spherical, block-structured adaptive grid technique is applied to the Lin-Rood finite-volume dynamical core for weather and climate research. This hydrostatic dynamics package is based on a conservative and monotonic finite-volume discretization in flux form with vertically floating Lagrangian layers. The adaptive dynamical core is built upon a flexible latitude-longitude computational grid and tested in two- and three-dimensional model configurations. The discussion is focused on static mesh adaptations and reduced grids. The two-dimensional shallow water setup serves as an ideal testbed and allows the use of shallow water test cases like the advection of a cosine bell, moving vortices, a steady-state flow, the Rossby-Haurwitz wave or cross-polar flows. It is shown that reduced grid configurations are viable candidates for pure advection applications but should be used moderately in nonlinear simulations. In addition, static grid adaptations can be successfully used to resolve three-dimensional baroclinic waves in the storm-track region.
Wang, Ji; Pi, Yangjun; Hu, Yumei; Zhu, Zhencai; Zeng, Lingbin
2017-11-01
In this paper, a new synthesized motion and vibration control system, a linear quadratic regulator/strain rate feedback (LQR/SRF) controller with adaptive disturbance attenuation, is presented for a multi flexible-link mechanism subjected to uncertain harmonic disturbances with arbitrary frequencies and unknown magnitudes. In the proposed controller, nodal strain rates are introduced into the model of the multi flexible-link mechanism, based upon which a synthesized LQR controller is designed in which both rigid-body motion and elastic deformation are considered. The uncertain harmonic disturbances are canceled in the feedback loop by an approximated value computed online via an adaptive update law. Asymptotic stability of the closed-loop system is proved by Lyapunov analysis. The effectiveness of the proposed controller is shown via simulation.
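The LQR component of such a controller is obtained from a Riccati equation. Below is a minimal sketch of a discrete-time LQR gain computed by fixed-point Riccati iteration on a toy double integrator; the system matrices are invented, and the paper's flexible-link model, strain-rate feedback, and adaptive disturbance term are not reproduced:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain by iterating the Riccati recursion
    P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA to a fixed point;
    the optimal feedback is u = -K x with K = (R + B'PB)^{-1} B'PA."""
    P = Q.copy()
    for _ in range(iters):
        BtPA = B.T @ P @ A
        K = np.linalg.solve(R + B.T @ P @ B, BtPA)
        P = Q + A.T @ P @ A - BtPA.T @ K
    return K

# Toy plant: double integrator (position, velocity) with a force input.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
closed_loop = A - B @ K
```

The resulting closed-loop matrix A - BK has all eigenvalues inside the unit circle, i.e. the regulator stabilizes the plant; the disturbance-attenuation term in the paper is added on top of this baseline.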
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB of sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS improves the full-width-half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB of sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
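The DAS/DMAS contrast described above can be sketched on already-delayed channel samples. This minimal sketch follows the standard DMAS formulation from the literature (signed square roots summed over channel pairs); the MV-weighted inner terms of MVB-DMAS are not reproduced, and the toy signals are invented:

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: plain sum of the M delayed channel samples."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay-multiply-and-sum: sum over channel pairs of s_i*s_j, with
    s_i the signed square root of sample i so the output keeps the
    dimensionality of pressure. Uses the identity
    sum_{i<j} s_i s_j = ((sum_i s_i)^2 - sum_i s_i^2) / 2."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = s.sum(axis=0)
    return 0.5 * (total ** 2 - (s ** 2).sum(axis=0))

# Three channels, one sample: a coherent target vs incoherent clutter.
coherent = np.ones((3, 1))                     # all channels agree
incoherent = np.array([[1.0], [-1.0], [0.0]])  # channels disagree
```

Coherent inputs are amplified through the pairwise products while incoherent inputs partially cancel, which is the mechanism behind DMAS's lower sidelobes.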
Rahmim, Arman; Zhou, Yun; Tang, Jing; Lu, Lijun; Sossi, Vesna; Wong, Dean F.
2012-01-01
Due to high noise levels in voxel kinetics, the development of reliable parametric imaging algorithms remains one of the most active areas in dynamic brain PET imaging, which in the vast majority of cases involves receptor/transporter studies with reversibly binding tracers. As such, the focus of this work has been to develop a novel direct 4D parametric image reconstruction scheme for such tracers. Based on a relative equilibrium (RE) graphical analysis formulation (Zhou et al., 2009b), we developed a closed-form 4D EM algorithm to directly reconstruct distribution volume (DV) parametric images within a plasma input model, as well as DV ratio (DVR) images within a reference tissue model scheme (wherein an initial reconstruction was used to estimate the reference tissue time-activity curves). A particular challenge with the direct 4D EM formulation is that the intercept parameters in graphical (linearized) analysis of reversible tracers (e.g. Logan or RE analysis) are commonly negative (unlike for irreversible tracers, e.g. using Patlak analysis). Subsequently, we focused our attention on the AB-EM algorithm, derived by Byrne (1998) to allow inclusion of prior information about the lower (A) and upper (B) bounds for image values. We then generalized this algorithm to the 4D EM framework, thus allowing negative intercept parameters. Furthermore, our 4D AB-EM algorithm incorporated, and emphasized the use of, spatially varying lower bounds to achieve enhanced performance. As validation, the means of parameters estimated from 55 human 11C-raclopride dynamic PET studies were used for extensive simulations using a mathematical brain phantom. Images were reconstructed using conventional indirect as well as the proposed direct parametric imaging methods. Noise vs. bias quantitative measurements were performed in various regions of the brain. Direct 4D EM reconstruction resulted in notable qualitative and quantitative accuracy improvements (over 35% noise reduction, with matched
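The 4D AB-EM scheme described above builds on the basic MLEM fixed-point update for Poisson data. As a minimal sketch of that underlying update (static case only; the system matrix `H` and the temporal/bound extensions of the paper are not modeled here):

```python
import numpy as np

def mlem_step(x, H, y, eps=1e-12):
    """One MLEM iteration for the Poisson model y ~ Poisson(H x).

    x: current image estimate (nonnegative), H: system matrix,
    y: measured counts. Returns the updated image estimate.
    """
    proj = H @ x                                   # forward projection
    back = H.T @ (y / np.maximum(proj, eps))       # backproject data ratio
    sens = np.maximum(H.T @ np.ones_like(y), eps)  # sensitivity image
    return x * back / sens
```

The AB-EM variant of Byrne (1998) modifies this multiplicative update so the iterates stay within per-voxel bounds [A, B], which is what allows the negative intercept parameters mentioned in the abstract.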
Directory of Open Access Journals (Sweden)
Kyle A McQuisten
2009-10-01
Full Text Available Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement about which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address what effect the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are
Zeng, Ping; Zhao, Yang; Li, Hongliang; Wang, Ting; Chen, Feng
2015-04-22
In many medical studies the likelihood ratio test (LRT) has been widely applied to examine whether the random effects variance component is zero within the mixed effects models framework, whereas little work on likelihood-ratio-based variance component tests has been done in generalized linear mixed models (GLMMs), where the response is discrete and the log-likelihood cannot be computed exactly. Before applying the LRT for a variance component in a GLMM, several difficulties need to be overcome, including the computation of the log-likelihood, the parameter estimation and the derivation of the null distribution for the LRT statistic. To overcome these problems, in this paper we make use of the penalized quasi-likelihood algorithm and calculate the LRT statistic based on the resulting working response and the quasi-likelihood. A permutation procedure is used to obtain the null distribution of the LRT statistic. We evaluate the permutation-based LRT via simulations and compare it with the score-based variance component test and the tests based on mixtures of chi-square distributions. Finally, we apply the permutation-based LRT to multilocus association analysis in a case-control study, where the problem can be investigated under the framework of a logistic mixed effects model. The simulations show that the permutation-based LRT can effectively control the type I error rate, while the score test is sometimes slightly conservative and the tests based on mixtures cannot maintain the type I error rate. Our studies also show that the permutation-based LRT has higher power than these existing tests and still maintains a reasonably high power even when the random effects do not follow a normal distribution. The application to GAW17 data also demonstrates that the proposed LRT has a higher probability to identify the association signals than the score test and the tests based on mixtures. In the present paper the permutation-based LRT was developed for variance
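The permutation step used to obtain the null distribution can be sketched generically as follows. This is only the outer permutation machinery, with the statistic left as a user-supplied function; the paper's actual statistic is the quasi-likelihood-based LRT, which is not reproduced here.

```python
import numpy as np

def permutation_pvalue(stat_fn, y, groups, n_perm=999, seed=None):
    """Permutation p-value for a scalar test statistic stat_fn(y, groups),
    where larger values indicate stronger evidence against the null."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(y, groups)
    count = 0
    for _ in range(n_perm):
        # permuting the group labels simulates the null of no group effect
        if stat_fn(y, rng.permutation(groups)) >= observed:
            count += 1
    # add-one correction keeps the p-value away from exactly zero
    return (count + 1) / (n_perm + 1)
```

For a variance component test, `groups` would encode the random-effect clusters whose labels are exchangeable under the null.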
Szadkowski, Zbigniew; Fraenkel, E. D.; van den Berg, Ad M.
2013-10-01
We present the FPGA/NIOS implementation of an adaptive finite impulse response (FIR) filter based on linear prediction to suppress radio frequency interference (RFI). This technique will be used for experiments that observe coherent radio emission from extensive air showers induced by ultra-high-energy cosmic rays. These experiments are designed to make a detailed study of the development of the electromagnetic part of air showers. Therefore, these radio signals provide information that is complementary to that obtained by water-Cherenkov detectors which are predominantly sensitive to the particle content of an air shower at ground. The radio signals from air showers are caused by the coherent emission due to geomagnetic and charge-excess processes. These emissions can be observed in the frequency band between 10-100 MHz. However, this frequency range is significantly contaminated by narrow-band RFI and other human-made distortions. A FIR filter implemented in the FPGA logic segment of the front-end electronics of a radio sensor significantly improves the signal-to-noise ratio. In this paper we discuss an adaptive filter which is based on linear prediction. The coefficients for the linear predictor (LP) are dynamically refreshed and calculated in the embedded NIOS processor, which is implemented in the same FPGA chip. The Levinson recursion, used to obtain the filter coefficients, is also implemented in the NIOS and is partially supported by direct multiplication in the DSP blocks of the logic FPGA segment. Tests confirm that the LP can be an alternative to other methods involving multiple time-to-frequency domain conversions using an FFT procedure. These multiple conversions draw heavily on the power consumption of the FPGA and are avoided by the linear prediction approach. Minimization of the power consumption is an important issue because the final system will be powered by solar panels. The FIR filter has been successfully tested in the Altera development kits
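The Levinson recursion mentioned above solves the Toeplitz normal equations for the linear-predictor coefficients. A minimal reference implementation (floating-point Python rather than the paper's fixed-point NIOS/DSP code) is:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson recursion for linear prediction.

    r: autocorrelation sequence r[0..order].
    Returns the prediction-error filter a (with a[0] == 1) and the
    final prediction-error power.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        # reflection coefficient from the current residual correlation
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / e
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        e *= (1.0 - k * k)
    return a, e
```

The predicted narrow-band RFI component is then subtracted from the input, which is how the FIR filter suppresses interference without time-to-frequency conversions.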
DEFF Research Database (Denmark)
Yang, Z.; Izadi-Zamanabadi, R.; Blanke, Mogens
2000-01-01
Based on the model-matching strategy, an adaptive control reconfiguration method for a class of nonlinear control systems is proposed by using the multiple-model scheme. Instead of requiring the nominal and faulty nonlinear systems to match each other directly in some proper sense, three sets of LTI models are employed to approximate the faulty, reconfigured and nominal nonlinear systems respectively with respect to the on-line information of the operating system, and a set of compensating modules are proposed and designed so as to make the local LTI model approximating the reconfigured nonlinear system match the corresponding LTI model approximating the nominal nonlinear system in some optimal sense. The compensating modules are designed by the Pseudo-Inverse Method based on the local LTI models for the nominal and faulty nonlinear systems. Moreover, these modules should update…
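The Pseudo-Inverse Method referenced above chooses a reconfigured gain so that the faulty closed-loop dynamics approximate the nominal ones. A minimal sketch for the static state-feedback case (the function name and matrix shapes are illustrative; the paper applies this per local LTI model):

```python
import numpy as np

def pim_reconfigured_gain(A_nom, B_nom, K_nom, A_f, B_f):
    """Pseudo-Inverse Method: choose K_f so that the faulty closed loop
    A_f - B_f @ K_f best matches the nominal A_nom - B_nom @ K_nom
    in the least-squares sense."""
    return np.linalg.pinv(B_f) @ (A_f - A_nom + B_nom @ K_nom)
```

When `B_f` has full column rank and the match is feasible, the reconfigured closed loop reproduces the nominal one exactly; otherwise the pseudo-inverse gives the least-squares approximation.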
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of its algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N(≅2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resultant value of linear complexity; therefore, the linear complexity is generally given as an estimate. A linearization method, by contrast, calculates from the algorithm of the PRNG, so it can determine the lower bound of the linear complexity.
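For contrast with the linearization method, the Berlekamp-Massey algorithm the abstract compares against computes the linear complexity of an observed binary sequence directly. A standard GF(2) implementation:

```python
def berlekamp_massey(bits):
    """Linear complexity of a binary sequence via Berlekamp-Massey over GF(2).

    bits: list of 0/1 values. Returns the length L of the shortest LFSR
    generating the sequence.
    """
    n = len(bits)
    c = [1] + [0] * n   # current connection polynomial
    b = [1] + [0] * n   # polynomial before the last length change
    L, m = 0, -1
    for i in range(n):
        # discrepancy between prediction and the next bit
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift + 1):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

This O(N^2) cost in the sequence length N is exactly the term quoted above, and the dependence on the observed output is why the result can vary with the PRNG's initial value.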
Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin
2014-03-01
In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated by using the adaptive control method. Different from some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate for the effect of the unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of LMIs, which can be easily solved by the LMI Toolbox in Matlab. A numerical example is given to illustrate the effectiveness of the method. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Lundahl, P. Johan
2011-01-01
This article presents a new design of flow-orientation device for the study of bio-macromolecules, including DNA and protein complexes, as well as aggregates such as amyloid fibrils and liposome membranes, using Linear Dichroism (LD) spectroscopy. The design provides a number of technical advantages that should make the device inexpensive to manufacture, easier to use and more reliable than existing techniques. The degree of orientation achieved is of the same order of magnitude as that of the commonly used concentric-cylinder Couette flow cell; however, since the device exploits a set of flat strain-free quartz plates, a number of problems associated with refraction and birefringence of light are eliminated, increasing the sensitivity and accuracy of measurement. The device provides shear rates similar to those of the Couette cell but is superior in that the shear rate is constant across the gap. Other major advantages of the design are the possibility to change parts and to vary sample volume and path length easily and at low cost. © 2011 The Royal Society of Chemistry.
CSIR Research Space (South Africa)
Khuluse, S
2013-11-01
Full Text Available compare ordinary and regression kriging models to the Poisson log-linear spatial model (Diggle et al. 1998, Diggle et al. 2007) with and without covariate information in mapping annual average exceedance frequencies of the South African PM10 air quality...
Goldstein, Benjamin A.; Polley, Eric C.; Briggs, Farren B. S.; van der Laan, Mark J.; Hubbard, Alan
2016-01-01
Comparing the relative fit of competing models can be used to address many different scientific questions. In classical statistics one can, if appropriate, use likelihood ratio tests and information based criterion, whereas clinical medicine has tended to rely on comparisons of fit metrics like C-statistics. However, for many data adaptive modelling procedures such approaches are not suitable. In these cases, statisticians have used cross-validation, which can make inference challenging. In t...
Zeghlache, Samir; Benslimane, Tarak; Bouguerra, Abderrahmen
2017-11-01
In this paper, a robust controller for three degree of freedom (3-DOF) helicopter control is proposed in the presence of actuator and sensor faults. For this purpose, an interval type-2 fuzzy logic control approach (IT2FLC) and the sliding mode control (SMC) technique are used to design a controller, named active fault tolerant interval type-2 fuzzy sliding mode controller (AFTIT2FSMC), based on a non-linear adaptive observer to estimate and detect the system faults for each subsystem of the 3-DOF helicopter. The proposed control scheme avoids difficult modeling, attenuates the chattering effect of the SMC, and reduces the number of rules of the fuzzy controller. Exponential stability of the closed loop is guaranteed by using the Lyapunov method. The simulation results show that the AFTIT2FSMC can greatly alleviate the chattering effect and provides good tracking performance, even in the presence of actuator and sensor faults. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
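Chattering attenuation in sliding mode control is commonly obtained by smoothing the discontinuous switching term with a boundary layer. The sketch below shows that generic idea only (a tanh boundary layer on a first-order sliding surface); it is not the paper's interval type-2 fuzzy design, and the gains are illustrative.

```python
import numpy as np

def smc_control(e, edot, c=1.0, k=2.0, phi=0.1):
    """Sliding-mode control law with a tanh boundary layer.

    s = c*e + edot is the sliding surface; tanh(s/phi) replaces sign(s)
    to attenuate chattering near s = 0. Gains c, k, phi are illustrative.
    """
    s = c * e + edot
    return -k * np.tanh(s / phi)
```

Far from the surface the law behaves like the classical -k*sign(s); inside the boundary layer of width phi it becomes a smooth high-gain feedback, which is the chattering-reduction trade-off.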
Brunner, D; Kuang, A Q; LaBombard, B; Burke, W
2017-07-01
A new servomotor drive system has been developed for the horizontal reciprocating probe on the Alcator C-Mod tokamak. Real-time measurements of plasma temperature and density - through use of a mirror Langmuir probe bias system - combined with a commercial linear servomotor and controller enable self-adaptive position control. Probe surface temperature and its rate of change are computed in real time and used to control probe insertion depth. It is found that a universal trigger threshold can be defined in terms of these two parameters; if the probe is triggered to retract when crossing the trigger threshold, it will reach the same ultimate surface temperature, independent of velocity, acceleration, or scrape-off layer heat flux scale length. In addition to controlling the probe motion, the controller is used to monitor and control all aspects of the integrated probe drive system.
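One simple way to realize a trigger defined jointly on surface temperature and its rate of change is a short linear extrapolation against a temperature limit. This is a hypothetical illustration of that idea only; the numbers and the functional form of the actual C-Mod threshold curve are not taken from the paper.

```python
def should_retract(temp, rate, temp_limit=1300.0, horizon=0.02):
    """Hypothetical retract trigger on (temperature, heating rate).

    Retract if a linear extrapolation of surface temperature over a short
    horizon (seconds) would exceed the limit. All values illustrative.
    """
    return temp + horizon * rate >= temp_limit
```

Because the decision depends on both temperature and its rate, a fast, shallow heat pulse and a slow, deep one can trip the same threshold, which mirrors the velocity-independence property described in the abstract.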
Rosenblum, Michael; van der Laan, Mark J
2010-04-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
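For the special case described above, the main-terms Poisson working model's treatment coefficient targets the marginal log rate ratio, which has a simple plug-in form. A minimal sketch of that estimand (not the paper's targeted maximum likelihood machinery):

```python
import numpy as np

def marginal_log_rate_ratio(y, a):
    """Unadjusted marginal log rate ratio log(E[Y|A=1] / E[Y|A=0]).

    y: nonnegative outcomes (e.g. event counts), a: 0/1 treatment indicator.
    """
    return np.log(y[a == 1].mean() / y[a == 0].mean())
```

In a Poisson regression containing only an intercept and the treatment indicator, the MLE of the treatment coefficient coincides with this quantity, which is why the estimator remains interpretable even when the working model is misspecified.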
Directory of Open Access Journals (Sweden)
Eusebio Eduardo Hernández Martinez
2013-01-01
Full Text Available In robotics, solving the direct kinematics problem (DKP) for parallel robots is very often more difficult and time consuming than for their serial counterparts. The problem is stated as follows: given the joint variables, the Cartesian variables should be computed, namely the pose of the mobile platform. Most of the time, the DKP requires solving a non-linear system of equations. In addition, given that the system could be non-convex, Newton or Quasi-Newton (Dogleg) based solvers get trapped in local minima. The capacity of such solvers to find an adequate solution strongly depends on the starting point. A well-known problem is the selection of such a starting point, which requires a priori information about the neighbouring region of the solution. In order to circumvent this issue, this article proposes an efficient method to select and generate the starting point based on probabilistic learning. Experiments and discussion are presented to show the method's performance. The method successfully avoids getting trapped in local minima without the need for human intervention, which increases its robustness when compared with a single Dogleg approach. This proposal can be extended to other structures, to any non-linear system of equations, and, of course, to non-linear optimization problems.
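A crude baseline for data-driven starting-point selection is a nearest-neighbor lookup in a database of previously solved (joint vector, pose) pairs; the paper's probabilistic learning approach is more sophisticated, and this sketch only illustrates the general idea with hypothetical names.

```python
import numpy as np

def select_start(q, q_db, x_db):
    """Pick a Newton starting pose for joint vector q.

    q_db: (m, n_joints) stored joint vectors; x_db: (m, n_pose) their
    known pose solutions. Returns the pose of the nearest stored q.
    """
    i = int(np.argmin(np.linalg.norm(q_db - q, axis=1)))
    return x_db[i]
```

Seeding a Dogleg or Newton iteration from such a point places it in the basin of attraction of the correct DKP branch, which is what avoids the local-minimum trapping described above.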
Heng, Henry H
2017-02-01
Big-data omics have promised the success of precision medicine. However, most common diseases belong to adaptive systems, where precision is difficult to achieve. In this commentary, I propose a heterogeneity-mediated cellular adaptive model to search for a general model of diseases, which also illustrates why, in most non-infectious non-Mendelian diseases, the involvement of cellular evolution is less predictable when gene profiles are used. This synthesis is based on the following new observations/concepts: 1) the gene only codes "parts inheritance" while the genome codes "system inheritance", or the entire blueprint; 2) the nature of somatic genetic coding is fuzzy rather than precise, and genetic alterations are not just the results of genetic error but are in fact generated by internal adaptive mechanisms in response to environmental dynamics; 3) stress response is less specific within the cellular evolutionary context when compared to known biochemical specificities; and 4) most medical interventions have unavoidable uncertainties and can often function as negative, harmful stresses as trade-offs. The acknowledgment of diseases as adaptive systems calls for action to integrate genome- (not simply individual gene-) mediated cellular evolution into molecular medicine. © 2016 John Wiley & Sons, Ltd.
Bagherpoor, H M; Salmasi, Farzad R
2015-07-01
In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed for both SISO and MIMO systems, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which results in a tracking error while preserving system stability. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can assure that all the signals of the closed-loop system are bounded in faulty conditions. Finally, the validity and performance of the new schemes have been illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
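The conventional dead-zone rule that the paper improves upon simply freezes parameter adaptation when the tracking error is small. A minimal sketch of that baseline (gains, dead-zone width, and the gradient-type update form are illustrative, not the paper's improved law):

```python
import numpy as np

def dead_zone_update(theta, phi, e, gamma=0.1, e0=0.05):
    """Conventional dead-zone adaptive law.

    theta: parameter estimate vector, phi: regressor, e: tracking error.
    Adaptation is frozen when |e| <= e0, trading residual tracking error
    for robustness against noise and disturbances.
    """
    if abs(e) > e0:
        return theta + gamma * e * np.asarray(phi)
    return theta
```

The contribution described in the abstract is precisely to add a corrective control term inside the dead zone so that this residual error is driven to zero rather than merely bounded.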
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
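The generalized Born energy discussed above uses the standard pairwise functional form of Still and co-workers. A minimal sketch (reduced units with the Coulomb constant omitted; the OBC refinement computes the effective Born radii, which are treated as given inputs here):

```python
import numpy as np

def gb_energy(q, r, born, eps_in=1.0, eps_out=78.5):
    """Generalized Born solvation energy, Still-type functional form.

    q: charges, r: (n, 3) coordinates, born: effective Born radii.
    Includes self terms (i == j). Coulomb prefactor omitted (reduced units).
    """
    pre = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    E = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            d2 = np.sum((r[i] - r[j]) ** 2)
            # f_GB interpolates between Coulomb (large d) and Born (d = 0)
            f = np.sqrt(d2 + born[i] * born[j]
                        * np.exp(-d2 / (4.0 * born[i] * born[j])))
            E += pre * q[i] * q[j] / f
    return E
```

At zero separation f reduces to the Born radius (recovering the Born self-energy), and at large separation it tends to the distance, recovering screened Coulomb interactions.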
Directory of Open Access Journals (Sweden)
M. L. Kleptsyna
2001-01-01
Full Text Available The optimal filtering problem for multidimensional continuous possibly non-Markovian, Gaussian processes, observed through a linear channel driven by a Brownian motion, is revisited. Explicit Volterra type filtering equations involving the covariance function of the filtered process are derived both for the conditional mean and for the covariance of the filtering error. The solution of the filtering problem is applied to obtain a Cameron-Martin type formula for Laplace transforms of a quadratic functional of the process. Particular cases for which the results can be further elaborated are investigated.
Wang, Cheng; Guan, Wei; Wang, J. Y.; Zhong, Bineng; Lai, Xiongming; Chen, Yewang; Xiang, Liang
2018-02-01
To adaptively identify the transient modal parameters of linear weakly damped structures with slow time-varying characteristics under unmeasured stationary random ambient loads, this paper proposes a novel operational modal analysis (OMA) method based on the frozen-in coefficient method and limited memory recursive principal component analysis (LMRPCA). In the modal coordinate, the random vibration response signals of mechanical weakly damped structures can be decomposed into the inner product of modal shapes and modal responses, from which the natural frequencies and damping ratios can be well acquired by a single-degree-of-freedom (SDOF) identification approach such as FFT. Hence, for the OMA method based on principal component analysis (PCA), it becomes crucial to examine the relation between the transformation matrix and the modal shape matrix, to find the association between the principal components (PCs) matrix and the modal response matrix, and to turn the operational modal parameter identification problem into PCA of the stationary random vibration response signals of weakly damped mechanical structures. Based on the theory of "time-freezing", the method of frozen-in coefficients, and the assumptions of "short time invariant" and "quasi-stationary", the non-stationary random response signals of weakly damped, slow linear time-varying (LTV) structures can approximately be seen as stationary random response time series of weakly damped linear time invariant (LTI) structures within a short interval. Thus, the adaptive identification of time-varying operational modal parameters is turned into decomposing the PCs of stationary random vibration response signal subsections of weakly damped mechanical structures after choosing an appropriate limited memory window. Finally, a three-degree-of-freedom (DOF) structure with weakly damped and slowly time-varying mass is presented to illustrate this method of identification. Results show that the LMRPCA
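The PCA decomposition at the core of the method factors the multichannel response into shape-like and response-like matrices. A minimal batch sketch via the SVD (the paper's contribution is the limited-memory recursive variant, which is not reproduced here):

```python
import numpy as np

def pca_modes(X, k):
    """PCA of multichannel vibration response data via SVD.

    X: (n_sensors, n_samples). Returns (shapes, responses) such that
    shapes @ responses reconstructs the row-centered data when k equals
    the rank; shapes play the role of mode-shape estimates and responses
    of modal response time series.
    """
    Xc = X - X.mean(axis=1, keepdims=True)            # remove channel means
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k], s[:k, None] * Vt[:k]
```

Each recovered response row can then be fed to an SDOF identifier (e.g. an FFT peak fit) to extract a natural frequency and damping ratio, as described above.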
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
Kim, Hyunwoo J; Adluru, Nagesh; Collins, Maxwell D; Chung, Moo K; Bendlin, Barbara B; Johnson, Sterling C; Davidson, Richard J; Singh, Vikas
2014-06-23
Linear regression is a parametric model which is ubiquitous in scientific analysis. The classical setup where the observations and responses, i.e., (x_i, y_i) pairs, are Euclidean is well studied. The setting where y_i is manifold valued is a topic of much interest, motivated by applications in shape analysis, topic modeling, and medical imaging. Recent work gives strategies for max-margin classifiers, principal components analysis, and dictionary learning on certain types of manifolds. For parametric regression specifically, results within the last year provide mechanisms to regress one real-valued parameter, x_i ∈ R, against a manifold-valued variable, y_i ∈ M. We seek to substantially extend the operating range of such methods by deriving schemes for multivariate multiple linear regression - a manifold-valued dependent variable against multiple independent variables, i.e., f : R^n → M. Our variational algorithm efficiently solves for multiple geodesic bases on the manifold concurrently via gradient updates. This allows us to answer questions such as: what is the relationship of the measurement at voxel y to disease when conditioned on age and gender. We show applications to statistical analysis of diffusion weighted images, which give rise to regression tasks on the manifold GL(n)/O(n) for diffusion tensor images (DTI) and the Hilbert unit sphere for orientation distribution functions (ODF) from high angular resolution acquisition. The companion open-source code is available on nitrc.org/projects/riem_mglm.
Jäntschi, Lorentz; Bálint, Donatella; Bolboacă, Sorana D
2016-01-01
Multiple linear regression analysis is widely used to link an outcome with predictors for better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any constrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different from the convenient value of two when the Gauss-Laplace distribution was used to relax the constrictive assumption of a normal distribution of the errors. Therefore, the Gauss-Laplace distribution of the errors could not be rejected, while the hypothesis that the power of the error from the Gauss-Laplace distribution is normally distributed also failed to be rejected.
DEFF Research Database (Denmark)
Guo, Meng; Elmedyb, Thomas Bo; Jensen, Søren Holdt
2011-01-01
In this paper, we analyze a general multiple-microphone and single-loudspeaker system, where an adaptive algorithm is used to cancel acoustic feedback/echo and a beamformer processes the feedback/echo canceled signals. This system can be viewed as part of a typical hearing aid system and/or a traditional acoustic echo cancelation system. We introduce and derive an approximation of a useful frequency domain measure - the power transfer function - and show how to predict the system stability bound, convergence rate and the steady-state behavior across time and frequency. Furthermore, we show how…
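A typical adaptive algorithm for such feedback/echo cancelation is normalized LMS (NLMS), whose convergence and steady-state behavior are exactly the kind of quantities the power transfer function analysis predicts. A minimal single-channel sketch (the paper analyzes the general multi-microphone beamformer setting, not this simplified case):

```python
import numpy as np

def nlms(x, d, order, mu=0.5, eps=1e-8):
    """Normalized LMS canceler: adapt w so that w . u[n] tracks d[n].

    x: far-end/loudspeaker signal, d: microphone signal, order: filter
    length. Returns the final weights and the error (canceled) signal.
    """
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]       # most recent sample first
        e[n] = d[n] - w @ u                    # residual after cancelation
        w += mu * e[n] * u / (eps + u @ u)     # normalized gradient step
    return w, e
```

With a noiseless FIR echo path the weights converge to the true path and the residual goes to zero; with feedback present, stability bounds on mu are precisely what the cited power-transfer-function analysis provides.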
International Nuclear Information System (INIS)
Steinbrecher, Gyoergy; Weyssow, B.
2004-01-01
The extreme heavy tail and the power-law decay of the turbulent flux correlation observed in hot magnetically confined plasmas are modeled by a system of coupled Langevin equations describing a continuous time linear randomly amplified stochastic process where the amplification factor is driven by a superposition of colored noises which, in a suitable limit, generate a fractional Brownian motion. An exact analytical formula for the power-law tail exponent β is derived. The extremely small value of the heavy tail exponent and the power-law distribution of laminar times also found experimentally are obtained, in a robust manner, for a wide range of input values, as a consequence of the (asymptotic) self-similarity property of the noise spectrum. As a by-product, a new representation of the persistent fractional Brownian motion is obtained
Wang, Ching-Yun; Tapsoba, Jean De Dieu; Duggan, Catherine; Campbell, Kristin L; McTiernan, Anne
2016-05-10
In many biomedical studies, covariates of interest may be measured with errors. However, frequently in a regression analysis, the quantiles of the exposure variable are used as covariates. Because of measurement errors in the continuous exposure variable, there could be misclassification in the quantiles of the exposure variable. Misclassification in the quantiles could lead to biased estimation of the association between the exposure variable and the outcome variable. Adjustment for misclassification is challenging when gold standard variables are not available. In this paper, we develop two regression calibration estimators to reduce bias in effect estimation. The first estimator is normal likelihood-based. The second estimator is linearization-based, and it provides a simple and practical correction. Finite sample performance is examined via a simulation study. We apply the methods to a four-arm randomized clinical trial that tested exercise and weight loss interventions in women aged 50-75 years. Copyright © 2015 John Wiley & Sons, Ltd.
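Classical regression calibration, which both of the paper's estimators build on, replaces the error-prone measurement W = X + U by its best linear prediction E[X | W]. A minimal sketch for the simplest case of a single mismeasured covariate with known error variance (the paper's quantile-misclassification corrections go beyond this):

```python
import numpy as np

def regression_calibration(w, sigma_u2):
    """Classical regression calibration for W = X + U, U independent error.

    w: observed measurements, sigma_u2: known measurement-error variance.
    Returns E[X | W] = mu + lambda*(W - mu) under a normal linear model,
    where lambda is the reliability ratio var(X)/var(W).
    """
    mu = w.mean()
    lam = max(w.var() - sigma_u2, 0.0) / w.var()   # reliability ratio
    return mu + lam * (w - mu)
```

Using these calibrated values (or quantiles computed from them) in place of the raw measurements shrinks the attenuation bias that motivates the corrections described above.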
Sugiyama, Toshihiro; Meakin, Lee B; Browne, William J; Galea, Gabriel L; Price, Joanna S; Lanyon, Lance E
2012-08-01
There is a widely held view that the relationship between mechanical loading history and adult bone mass/strength includes an adapted state or "lazy zone" where the bone mass/strength remains constant over a wide range of strain magnitudes. Evidence to support this theory is circumstantial. We investigated the possibility that the "lazy zone" is an artifact and that, across the range of normal strain experience, features of bone architecture associated with strength are linearly related in size to their strain experience. Skeletally mature female C57BL/6 mice were right sciatic neurectomized to minimize natural loading in their right tibiae. From the fifth day, these tibiae were subjected to a single period of external axial loading (40, 10-second rest interrupted cycles) on alternate days for 2 weeks, with a peak dynamic load magnitude ranging from 0 to 14 N (peak strain magnitude: 0-5000 µε) and a constant loading rate of 500 N/s (maximum strain rate: 75,000 µε/s). The left tibiae were used as internal controls. Multilevel regression analyses suggest no evidence of any discontinuity in the progression of the relationships between peak dynamic load and three-dimensional measures of bone mass/strength in both cortical and cancellous regions. These are essentially linear between the low-peak locomotor strains associated with disuse (∼300 µε) and the high-peak strains derived from artificial loading and associated with the lamellar/woven bone transition (∼5000 µε). The strain:response relationship and minimum effective strain are site-specific, probably related to differences in the mismatch in strain distribution between normal and artificial loading at the locations investigated. Copyright © 2012 American Society for Bone and Mineral Research.
Diffractive generalized phase contrast for adaptive phase imaging and optical security
DEFF Research Database (Denmark)
Palima, Darwin; Glückstad, Jesper
2012-01-01
We analyze the properties of Generalized Phase Contrast (GPC) when the input phase modulation is implemented using diffractive gratings. In GPC applications for patterned illumination, the use of a dynamic diffractive optical element for encoding the GPC input phase allows for on-the-fly optimization of the input aperture parameters according to desired output characteristics. For wavefront sensing, the achieved aperture control opens a new degree of freedom for improving the accuracy of quantitative phase imaging. Diffractive GPC input modulation also fits well with grating-based optical security applications and can be used to create phase-based information channels for enhanced information security.
Maia, M. R. G.; Pires, N.; Gimenes, H. S.
2015-01-01
Interactions between cosmic fluids may appear in many cosmological scenarios that go far beyond the usually studied energy exchange in the dark sector. In the absence of known microscopic interaction mechanisms, phenomenological ansatzes are usually proposed in order to describe such models. In this paper, we derive a generalization of one of the most frequently used of such ansatzes: the one based on an initial proposal of Shapiro, Solà, España-Bonet and Ruiz-Lapuente, who described a time-...
Energy Technology Data Exchange (ETDEWEB)
Amini, Nina H. [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); CNRS, Laboratoire des Signaux et Systemes (L2S) CentraleSupelec, Gif-sur-Yvette (France); Miao, Zibo; Pan, Yu; James, Matthew R. [Australian National University, ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Canberra, ACT (Australia); Mabuchi, Hideo [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States)
2015-12-15
The purpose of this paper is to study the problem of generalizing the Belavkin-Kalman filter to the case where the classical measurement signal is replaced by a fully quantum non-commutative output signal. We formulate a least mean squares estimation problem that involves a non-commutative system as the filter processing the non-commutative output signal. We solve this estimation problem within the framework of non-commutative probability. Also, we find the necessary and sufficient conditions which make these non-commutative estimators physically realizable. These conditions are restrictive in practice. (orig.)
DEFF Research Database (Denmark)
Jacobsen, Martin; Martinussen, Torben
2016-01-01
Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al. (Biometrika, 90, 2003, 335), who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally by Graw et al. (Lifetime Data Anal., 15, 2009, 241), who derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper...
Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza
2018-03-01
In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares (IRLS). The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
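The robust reweighting at the heart of such an estimator can be sketched as a plain IRLS loop for a straight-line fit with t-distributed errors. This is a minimal illustration of the t-weighting step only, assuming fixed degrees of freedom nu and omitting the AR error model and the full ECME machinery of the paper; the data and the injected outlier are invented.

```python
import math

def irls_t_line(t, y, nu=4.0, iters=100):
    """IRLS for y = a + b*t with scaled t-distributed errors: each pass
    downweights observation i by w_i = (nu + 1) / (nu + r_i^2 / sigma^2),
    so gross outliers get near-zero influence."""
    n = len(t)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        # weighted least squares for intercept a and slope b
        sw = sum(w)
        tbar = sum(wi * ti for wi, ti in zip(w, t)) / sw
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (ti - tbar) ** 2 for wi, ti in zip(w, t))
        sxy = sum(wi * (ti - tbar) * (yi - ybar)
                  for wi, ti, yi in zip(w, t, y))
        b = sxy / sxx
        a = ybar - b * tbar
        r = [yi - a - b * ti for ti, yi in zip(t, y)]
        sigma2 = sum(wi * ri ** 2 for wi, ri in zip(w, r)) / n
        w = [(nu + 1.0) / (nu + ri ** 2 / sigma2) for ri in r]
    return a, b

t = list(range(20))
y = [1.0 + 2.0 * ti + 0.1 * math.sin(ti) for ti in t]
y[19] += 50.0                      # one gross outlier
a_hat, b_hat = irls_t_line(t, y)   # slope stays near 2 despite the outlier
```

Plain OLS on the same data would drag the slope towards 2.7; the t-weights shrink the outlier's influence until the fit tracks the clean points.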
Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and steer a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are often unable to form the radiation beam precisely towards the target user and are not effective enough at reducing interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique for LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in Matlab. PMID:25147859
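The baseline that those metaheuristics start from can be sketched with the textbook single-constraint LCMV (MVDR) weights, w = R⁻¹a / (aᴴR⁻¹a), here for a two-element array so the 2x2 inverse can be written out by hand. The angles and the 20 dB interferer power are illustrative assumptions, not values from the paper, and none of the PSO/DM-AIS/GSA refinement is included.

```python
import cmath
import math

def steering(theta_deg, n_elems=2, spacing=0.5):
    """Steering vector of a uniform linear array (spacing in wavelengths)."""
    phase = 2.0 * math.pi * spacing * math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * phase * k) for k in range(n_elems)]

def lcmv_weights_2x2(R, a):
    """w = R^{-1} a / (a^H R^{-1} a): unit gain towards a, minimum output
    power elsewhere (2x2 complex inverse written explicitly)."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    Rinv = [[R[1][1] / det, -R[0][1] / det],
            [-R[1][0] / det, R[0][0] / det]]
    Ra = [Rinv[0][0] * a[0] + Rinv[0][1] * a[1],
          Rinv[1][0] * a[0] + Rinv[1][1] * a[1]]
    denom = a[0].conjugate() * Ra[0] + a[1].conjugate() * Ra[1]
    return [Ra[0] / denom, Ra[1] / denom]

a = steering(0.0)    # desired user at broadside
b = steering(30.0)   # strong interferer at 30 degrees
# covariance: unit noise plus a 20 dB interferer along b
R = [[(1.0 if i == j else 0.0) + 100.0 * b[i] * b[j].conjugate()
      for j in range(2)] for i in range(2)]
w = lcmv_weights_2x2(R, a)
gain_desired = sum(wi.conjugate() * ai for wi, ai in zip(w, a))  # exactly 1
gain_interf = sum(wi.conjugate() * bi for wi, bi in zip(w, b))   # deep null
```

The unit-gain constraint towards the desired user holds by construction, while the strong interferer is suppressed by roughly 43 dB in this toy configuration.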
Directory of Open Access Journals (Sweden)
Anthony C. Akpanta
2017-11-01
The act of violence against a wife is condemnable and attracts various legal penalties globally. This article attempts to find a link between spousal age difference and violence (emotional, physical, and sexual) against wives in Nigeria. The results show that wives who are older than their partners are more likely to experience sexual and emotional violence; wives who are the same age as their husbands are more likely to experience sexual violence; wives who are 1-4 years younger than their husbands are more likely to experience physical violence; while wives 5 or more years younger than their husbands are generally less likely to experience any form of violence.
Energy Technology Data Exchange (ETDEWEB)
Lipparini, Filippo, E-mail: flippari@uni-mainz.de [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Scalmani, Giovanni; Frisch, Michael J. [Gaussian, Inc., 340 Quinnipiac St. Bldg. 40, Wallingford, Connecticut 06492 (United States); Lagardère, Louis [Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Stamm, Benjamin [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Cancès, Eric [Université Paris-Est, CERMICS, Ecole des Ponts and INRIA, 6 and 8 avenue Blaise Pascal, 77455 Marne-la-Vallée Cedex 2 (France); Maday, Yvon [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Institut Universitaire de France, Paris, France and Division of Applied Maths, Brown University, Providence, Rhode Island 02912 (United States); Piquemal, Jean-Philip [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Mennucci, Benedetta [Dipartimento di Chimica e Chimica Industriale, Università di Pisa, Via Risorgimento 35, 56126 Pisa (Italy)
2014-11-14
We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed into its various steps; the relative weights of these contributions are discussed for both ddCOSMO and the fastest available alternative discretization of the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significant new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.
Directory of Open Access Journals (Sweden)
Somayyeh Lotfi Noghabi
2012-07-01
Introduction: Epilepsy is a clinical syndrome in which seizures have a tendency to recur. Sodium valproate is the most effective drug in the treatment of all types of generalized seizures. Finding the optimal dosage (the lowest effective dose) of sodium valproate is a real challenge for neurologists. In this study, a new approach based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) is presented for estimating the optimal dosage of sodium valproate in patients with idiopathic generalized epilepsy (IGE). Methods: 40 patients with IGE, who were referred to the neurology department of Mashhad University of Medical Sciences between 2006 and 2011, were included in this study. ANFIS constructs a Fuzzy Inference System (FIS) whose membership function parameters are tuned using either a back-propagation algorithm alone or in combination with a least-squares method (hybrid algorithm). In this study, we used the hybrid method for adjusting the parameters. Results: The R-square of the proposed system was 59.8% and the Pearson correlation coefficient was significant (P < 0.05). Although the accuracy of the model was not high, it was good enough to be applied for treating IGE patients with sodium valproate. Discussion: This paper presented a new application of ANFIS for estimating the optimal dosage of sodium valproate in IGE patients. Fuzzy set theory plays an important role in dealing with uncertainty when making decisions in medical applications. Collectively, it seems that ANFIS has a high capacity to be applied in medical sciences, especially neurology.
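The network that ANFIS tunes is a first-order Sugeno fuzzy system, whose forward pass can be sketched in a few lines: Gaussian membership functions give rule firing strengths, and the output is the firing-strength-weighted average of linear consequents. The membership centers and consequent coefficients below are invented for illustration and are not fitted to any clinical data.

```python
import math

def gauss_mf(x, c, s):
    """Gaussian membership function with center c and spread s."""
    return math.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def sugeno_forward(x, rules):
    """Forward pass of a tiny first-order Sugeno system: normalized
    firing strengths weight the linear rule outputs p*x + q."""
    ws = [gauss_mf(x, c, s) for (c, s, _, _) in rules]
    total = sum(ws)
    outs = [p * x + q for (_, _, p, q) in rules]
    return sum(w * o for w, o in zip(ws, outs)) / total

# two rules: a "low input" trend and a "high input" trend
rules = [(0.0, 1.0, 0.5, 10.0),   # (center, spread, p, q)
         (4.0, 1.0, 1.0, 12.0)]
dose_mid = sugeno_forward(2.0, rules)   # halfway: both rules fire equally
```

At x = 2 both memberships are equal, so the output is the plain average of the two rule outputs (11 and 14), i.e. 12.5; ANFIS training adjusts the (c, s) and (p, q) parameters so such outputs match observed dosages.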
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected in complex data structures, involving multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide a deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
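The data structure such a model assumes can be sketched by simulating binary word-knowledge responses from a logistic GLMM with crossed person and word random effects. The linear-predictor form is standard; the effect sizes and design (half the words instructed) are invented for illustration, not the study's estimates.

```python
import math
import random

def simulate_word_knowledge(n_students, n_words, beta0=-0.5, beta_instr=0.8,
                            sd_person=1.0, sd_word=0.7, seed=1):
    """Simulate logit P(correct) = beta0 + beta_instr*instructed
    + u_person + v_word, with crossed normal random effects."""
    rng = random.Random(seed)
    u = [rng.gauss(0.0, sd_person) for _ in range(n_students)]
    v = [rng.gauss(0.0, sd_word) for _ in range(n_words)]
    data = []
    for i in range(n_students):
        for j in range(n_words):
            instructed = j % 2   # half the words receive instruction
            eta = beta0 + beta_instr * instructed + u[i] + v[j]
            p = 1.0 / (1.0 + math.exp(-eta))
            data.append((i, j, instructed, 1 if rng.random() < p else 0))
    return data

data = simulate_word_knowledge(200, 40)
rate_instr = (sum(y for (_, _, g, y) in data if g == 1)
              / sum(1 for d in data if d[2] == 1))
rate_ctrl = (sum(y for (_, _, g, y) in data if g == 0)
             / sum(1 for d in data if d[2] == 0))
```

Fitting the GLMM to such data recovers the instruction effect while attributing the remaining variability to persons and words separately, which is exactly what a simpler pooled logistic regression cannot do.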
Nishimura, G; Nagai, T
1998-01-01
The case of a Japanese girl with a unique combination of congenital malformations is reported. The malformations include craniofacial dysmorphism, congenital heart defects, coccygeal skin folds, generalized skeletal alterations, and hemihypertrophy with linear skin hypopigmentation that indicated somatic mosaicism of a mutated gene or a submicroscopic chromosomal aberration. The phenotype in our patient overlapped significantly with, but was not completely consistent with, that of ter Haar syndrome, a recently elucidated malformation syndrome with an autosomal recessive trait. The present patient may have represented a previously undescribed malformation syndrome, or an atypical manifestation of ter Haar syndrome due to somatic mosaicism.
Cho, Sun-Joo; Brown-Schmidt, Sarah; Lee, Woo-Yeol
2018-02-07
As a method to ascertain person and item effects in psycholinguistics, the generalized linear mixed effect model (GLMM) with crossed random effects has met limitations in handling serial dependence across persons and items. This paper presents an autoregressive GLMM with crossed random effects that accounts for variability in lag effects across persons and items. The model is shown to be applicable to intensive binary time-series eye-tracking data when researchers are interested in detecting experimental condition effects while controlling for previous responses. In addition, a simulation study shows that ignoring lag effects can lead to biased estimates and underestimated standard errors for the experimental condition effects.
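The serial dependence the model targets can be sketched by simulating one person's binary series whose linear predictor includes the previous response as a lag term. The coefficients are illustrative assumptions, not estimates from the paper, and the random effects are omitted for brevity.

```python
import math
import random

def simulate_lagged_binary(n_trials, beta_cond=0.6, beta_lag=1.2, seed=7):
    """Simulate logit P(y_t = 1) = beta_cond*condition_t + beta_lag*y_{t-1}:
    an autoregressive lag term carries dependence between adjacent trials."""
    rng = random.Random(seed)
    y_prev, ys, conds = 0, [], []
    for t_idx in range(n_trials):
        cond = t_idx % 2   # alternate experimental conditions
        eta = beta_cond * cond + beta_lag * y_prev
        p = 1.0 / (1.0 + math.exp(-eta))
        y = 1 if rng.random() < p else 0
        ys.append(y)
        conds.append(cond)
        y_prev = y
    return ys, conds

ys, conds = simulate_lagged_binary(10000)
# responses following a success are more likely to be successes
after1 = [ys[t] for t in range(1, len(ys)) if ys[t - 1] == 1]
after0 = [ys[t] for t in range(1, len(ys)) if ys[t - 1] == 0]
```

A GLMM fitted without the lag term would absorb this dependence into the condition effect and its standard error, which is the bias the simulation study in the paper documents.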
Directory of Open Access Journals (Sweden)
Vivek Singh Bawa
2017-06-01
Advanced driver assistance systems (ADAS) have been developed to automate and modify vehicles for safety and a better driving experience. Among all computer vision modules in ADAS, 360-degree surround view generation of the immediate surroundings of the vehicle is very important, due to applications in on-road traffic assistance, parking assistance, etc. This paper presents a novel algorithm for fast and computationally efficient transformation of input fisheye images into the required top-down view. It also presents a generalized framework for generating the top-down view of images captured by fisheye-lens cameras mounted on vehicles, irrespective of pitch or tilt angle. The proposed approach comprises two major steps: correcting the fisheye images to rectilinear images, and generating the top-view perspective of the corrected images. Images captured through a fisheye lens exhibit barrel distortion, which is corrected using a nonlinear and non-iterative method. Thereafter, homography is used to obtain the top-down view of the corrected images. The paper also aims to reconstruct the vehicle's surroundings with a wider, distortion-free field of view and a camera-perspective-independent top-down view, at minimum computational cost, which is essential given the limited computing power available on vehicles.
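The two steps can be sketched at the level of a single pixel: a non-iterative radial correction followed by a homography warp. The division model used below is one common non-iterative choice for barrel distortion and the matrix values are illustrative; the paper's exact distortion model and calibrated homography may differ.

```python
def undistort_division(xd, yd, k):
    """Non-iterative division-model correction: (xu, yu) = (xd, yd) / (1 + k*r^2),
    with k < 0 moving barrel-distorted points back outwards."""
    r2 = xd * xd + yd * yd
    s = 1.0 + k * r2
    return xd / s, yd / s

def apply_homography(H, x, y):
    """Map an image point through a 3x3 homography H (the top-down warp)."""
    xn = H[0][0] * x + H[0][1] * y + H[0][2]
    yn = H[1][0] * x + H[1][1] * y + H[1][2]
    wn = H[2][0] * x + H[2][1] * y + H[2][2]
    return xn / wn, yn / wn

# step 1: a barrel-distorted point is moved back outwards
xu, yu = undistort_division(0.8, 0.0, -0.2)   # -> (0.917..., 0.0)

# step 2: an illustrative perspective-removing homography
H = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.002, 1.0]]
xt, yt = apply_homography(H, xu, 100.0)
```

In a full pipeline both mappings are precomputed per pixel into a lookup table, which is what makes the per-frame warp cheap enough for in-vehicle hardware.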
Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y
2008-02-18
The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structural and pharmacological diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.
Directory of Open Access Journals (Sweden)
Pérez-Páramo María
2010-01-01
Abstract Background: Generalized anxiety disorder (GAD) is a prevalent mental health condition which is underestimated worldwide. This study carried out the cultural adaptation into Spanish of the 7-item self-administered GAD-7 scale, which is used to identify probable patients with GAD. Methods: The adaptation was performed by an expert panel using a conceptual equivalence process, including forward and backward translations in duplicate. Content validity was assessed by interrater agreement. Criterion validity was explored using ROC curve analysis, and sensitivity, specificity, positive predictive value and negative predictive value for different cut-off values were determined. Concurrent validity was also explored using the HAM-A, HADS, and WHO-DAS-II scales. Results: The study sample consisted of 212 subjects (106 patients with GAD) with a mean age of 50.38 years (SD = 16.76). Average completion time was 2'30''. No items of the scale were left blank. Floor and ceiling effects were negligible. No patients with GAD had to be assisted to fill in the questionnaire. The scale was shown to be one-dimensional through factor analysis (explained variance = 72%). A cut-off point of 10 showed adequate values of sensitivity (86.8%) and specificity (93.4%), with the AUC being statistically significant (AUC = 0.957-0.985; p < 0.001). Limitations: Elderly people, particularly the very old, may need some help to complete the scale. Conclusion: After the cultural adaptation process, a Spanish version of the GAD-7 scale was obtained. The validity of its content and the relevance and adequacy of items in the Spanish cultural context were confirmed.
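The per-cutoff quantities behind such a ROC analysis can be sketched directly. The scores below are invented toy values, not the study's data; the function simply tabulates sensitivity and specificity for the rule "score >= cutoff means screen positive".

```python
def sens_spec(case_scores, control_scores, cutoff):
    """Sensitivity = true positives / cases, specificity = true
    negatives / controls, for the screen 'score >= cutoff'."""
    tp = sum(1 for s in case_scores if s >= cutoff)
    tn = sum(1 for s in control_scores if s < cutoff)
    return tp / len(case_scores), tn / len(control_scores)

cases = [12, 15, 9, 11, 14, 8, 13, 16]   # hypothetical GAD patients
controls = [3, 5, 10, 2, 7, 4, 6, 1]     # hypothetical non-cases
sens, spec = sens_spec(cases, controls, cutoff=10)   # (0.75, 0.875)
```

Sweeping the cutoff over all observed scores and plotting sensitivity against 1 − specificity traces the ROC curve whose area (AUC) the study reports; the chosen cutoff of 10 is the point balancing the two quantities.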