Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis. Written by a highly experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models, ...
Ye, Dan; Chen, Mengmeng; Li, Kui
2017-11-01
In this paper, we consider the distributed containment control problem of multi-agent systems with actuator bias faults, using an observer-based method. The objective is to drive the followers into the convex hull spanned by the dynamic leaders, whose input is unknown but bounded. By constructing an observer to estimate the states and bias faults, an effective distributed adaptive fault-tolerant controller is developed. Different from the traditional method, an auxiliary controller gain is designed to deal with the unknown inputs and bias faults together. Moreover, the coupling gain can be adjusted online through the adaptive mechanism without using global information. Furthermore, the proposed control protocol guarantees that all the signals of the closed-loop system are bounded and that all the followers converge, with bounded residual errors, to the convex hull formed by the dynamic leaders. Finally, a decoupled linearized longitudinal motion model of the F-18 aircraft is used to demonstrate the effectiveness of the proposed scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Multivariate generalized linear mixed models using R
Berridge, Damon Mark
2011-01-01
Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...
Introduction to generalized linear models
Dobson, Annette J
2008-01-01
Contents: Introduction (Background; Scope; Notation; Distributions Related to the Normal Distribution; Quadratic Forms; Estimation). Model Fitting (Introduction; Examples; Some Principles of Statistical Modeling; Notation and Coding for Explanatory Variables). Exponential Family and Generalized Linear Models (Introduction; Exponential Family of Distributions; Properties of Distributions in the Exponential Family; Generalized Linear Models; Examples). Estimation (Introduction; Example: Failure Times for Pressure Vessels; Maximum Likelihood Estimation; Poisson Regression Example). Inference (Introduction; Sampling Distribution for Score Statistics; Taylor Series Approximations; Sampling Distribution for MLEs; Log-Likelihood Ratio Statistic; Sampling Distribution for the Deviance; Hypothesis Testing). Normal Linear Models (Introduction; Basic Results; Multiple Linear Regression; Analysis of Variance; Analysis of Covariance; General Linear Models). Binary Variables and Logistic Regression (Probability Distributions; ...)
Generalized, Linear, and Mixed Models
McCulloch, Charles E; Neuhaus, John M
2011-01-01
An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed models.
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data; the second involves response variables of mixed types, combined with repeated ...
Directory of Open Access Journals (Sweden)
Nauman Khalid Qureshi
2017-07-01
In this paper, a novel methodology for enhanced classification of functional near-infrared spectroscopy (fNIRS) signals utilizable in a two-class [motor imagery (MI) versus rest; mental rotation (MR) versus rest] brain–computer interface (BCI) is presented. First, fNIRS signals corresponding to MI and MR are acquired from the motor and prefrontal cortex, respectively, and afterward filtered to remove physiological noise. Then, the signals are modeled using the general linear model, the coefficients of which are adaptively estimated using the least squares technique. Subsequently, multiple feature combinations of the estimated coefficients were used for classification. The best classification accuracies achieved for five subjects for MI versus rest are 79.5, 83.7, 82.6, 81.4, and 84.1%, whereas those for MR versus rest are 85.5, 85.2, 87.8, 83.7, and 84.8%, respectively, using a support vector machine. These results are compared with the best classification accuracies obtained using the conventional hemodynamic response. With the proposed methodology, the average classification accuracy obtained was significantly higher (p < 0.05). These results demonstrate the feasibility of developing a high-classification-performance fNIRS-BCI.
Linear versus non-linear supersymmetry, in general
Energy Technology Data Exchange (ETDEWEB)
Ferrara, Sergio [Theoretical Physics Department, CERN, CH-1211 Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati, Via Enrico Fermi 40, I-00044 Frascati (Italy); Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States); Kallosh, Renata [SITP and Department of Physics, Stanford University, Stanford, California 94305 (United States); Proeyen, Antoine Van [Institute for Theoretical Physics, Katholieke Universiteit Leuven, Celestijnenlaan 200D, B-3001 Leuven (Belgium); Wrase, Timm [Institute for Theoretical Physics, Technische Universität Wien, Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)
2016-04-12
We study superconformal and supergravity models with constrained superfields. The underlying version of such models, with all superfields unconstrained and supersymmetry linearly realized, is presented here; in addition to the physical multiplets, there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, and its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LMs: chiral superfields, linear superfields, and general complex superfields, some of which are multiplets with spin.
Linear versus non-linear supersymmetry, in general
International Nuclear Information System (INIS)
Ferrara, Sergio; Kallosh, Renata; Proeyen, Antoine Van; Wrase, Timm
2016-01-01
We study superconformal and supergravity models with constrained superfields. The underlying version of such models, with all superfields unconstrained and supersymmetry linearly realized, is presented here; in addition to the physical multiplets, there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, and its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LMs: chiral superfields, linear superfields, and general complex superfields, some of which are multiplets with spin.
Linear zonal atmospheric prediction for adaptive optics
McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael
2000-07-01
We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2-meter 16-subaperture AO telescope with 5-millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back-Propagation training of non-linear multilayer neural networks are quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares training of linear predictors is two orders of magnitude faster, and it also converges to the solution with globally minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the Recursive Least Squares predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global-minimum linear total phase error (approximately 0.18 rad²), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad²), and is approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad²). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
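The recursive-least-squares training credited above with fast, globally convergent behavior can be sketched with a generic RLS one-step predictor. This is a minimal illustration, not the authors' wavefront code: the synthetic AR(2) test signal, filter order, and forgetting factor are all assumptions.

```python
import numpy as np

def rls_predictor(x, order=4, lam=0.999, delta=100.0):
    """Recursive least squares: predict x[n] from the previous `order` samples."""
    w = np.zeros(order)            # predictor weights
    P = delta * np.eye(order)      # estimate of the inverse input correlation matrix
    errors = []
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]   # most recent sample first
        e = x[n] - w @ u           # a priori prediction error
        k = P @ u / (lam + u @ P @ u)   # gain vector
        w = w + k * e              # weight update along the gain vector
        P = (P - np.outer(k, u @ P)) / lam
        errors.append(e)
    return w, np.asarray(errors)

rng = np.random.default_rng(0)
x = np.zeros(3000)
for n in range(2, len(x)):         # synthetic AR(2) stand-in for a turbulence slope signal
    x[n] = 1.6 * x[n - 1] - 0.8 * x[n - 2] + 0.1 * rng.standard_normal()
w, e = rls_predictor(x[500:])      # discard the signal's start-up transient
```

For this signal the optimal linear predictor is the AR(2) recursion itself, so the learned weights should approach (1.6, -0.8, 0, 0) and the late prediction error should fall to roughly the driving-noise level.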
Linear ubiquitination signals in adaptive immune responses.
Ikeda, Fumiyo
2015-07-01
Ubiquitin can form eight different linkage types of chains using the intrinsic Met 1 residue or one of the seven intrinsic Lys residues. Each linkage type of ubiquitin chain has a distinct three-dimensional topology, functioning as a tag to attract specific signaling molecules, which are so-called ubiquitin readers, and regulates various biological functions. Ubiquitin chains linked via Met 1 in a head-to-tail manner are called linear ubiquitin chains. Linear ubiquitination plays an important role in the regulation of cellular signaling, including the best-characterized tumor necrosis factor (TNF)-induced canonical nuclear factor-κB (NF-κB) pathway. Linear ubiquitin chains are specifically generated by an E3 ligase complex called the linear ubiquitin chain assembly complex (LUBAC) and hydrolyzed by a deubiquitinase (DUB) called ovarian tumor (OTU) DUB with linear linkage specificity (OTULIN). LUBAC linearly ubiquitinates critical molecules in the TNF pathway, such as NEMO and RIPK1. The linear ubiquitin chains are then recognized by the ubiquitin readers, including NEMO, which control the TNF pathway. Accumulating evidence indicates an importance of the LUBAC complex in the regulation of apoptosis, development, and inflammation in mice. In this article, I focus on the role of linear ubiquitin chains in adaptive immune responses with an emphasis on the TNF-induced signaling pathways. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Actuarial statistics with generalized linear mixed models
Antonio, K.; Beirlant, J.
2007-01-01
Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics
Linear and Generalized Linear Mixed Models and Their Applications
Jiang, Jiming
2007-01-01
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested
Generalized Cross-Gramian for Linear Systems
DEFF Research Database (Denmark)
Shaker, Hamid Reza
2012-01-01
The cross-gramian is a well-known matrix with embedded controllability and observability information. The cross-gramian is related to the Hankel operator and the Hankel singular values of a linear square system, and it has several interesting properties. These properties make the cross-gramian ... For systems other than square symmetric systems, the ordinary cross-gramian does not exist. To cope with this problem, a new generalized cross-gramian is introduced in this paper. In contrast to the ordinary cross-gramian, the generalized cross-gramian can be easily obtained for general linear systems and can therefore be used ...
Discrete linear canonical transform computation by adaptive method.
Zhang, Feng; Tao, Ran; Wang, Yue
2013-07-29
The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
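The LMS recursion at the heart of the computation method can be sketched in its generic system-identification form. This is a minimal illustration of the Widrow-Hoff update only, not the paper's discrete-LCT formulation; the filter order, step size, and "unknown" FIR system are assumptions.

```python
import numpy as np

def lms_filter(x, d, order=4, mu=0.05):
    """Widrow-Hoff LMS: adapt weights w so the filter output tracks d[n]."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]  # current and past inputs, newest first
        e = d[n] - w @ u                  # instantaneous error
        w += 2 * mu * e * u               # stochastic-gradient step
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])       # hypothetical unknown FIR system
d = np.convolve(x, h)[:len(x)]            # desired signal: the system's response
w = lms_filter(x, d)
```

Because the desired signal here is exactly linear in the inputs, the weights converge to the true impulse response; in a noisy setting they would instead hover around it with a misadjustment proportional to the step size.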
Generalized non-linear Schroedinger hierarchy
International Nuclear Information System (INIS)
Aratyn, H.; Gomes, J.F.; Zimerman, A.H.
1994-01-01
The importance of studying completely integrable models has become evident in recent years due to the fact that these models possess an extremely rich algebraic structure, providing a natural setting for the description of solitons. These models can be described through non-linear differential equations, pseudo-linear operators (Lax formulation), or a matrix formulation. Integrability implies the existence of a conservation law associated with each degree of freedom. Each conserved charge Q_i can be associated with a Hamiltonian, defining a time evolution with respect to a time t_i through the Hamilton equation ∂A/∂t_i = [A, Q_i]. In particular, a two-dimensional field theory has infinitely many degrees of freedom and consequently infinitely many conservation laws, describing the time evolution with respect to infinitely many times. The Hamilton equation defines a hierarchy of models, each possessing an infinite set of conservation laws. This paper studies the generalized non-linear Schroedinger hierarchy.
General solution of linear vector supersymmetry
International Nuclear Information System (INIS)
Blasi, Alberto; Maggiore, Nicola
2007-01-01
We give the general solution of the Ward identity for the linear vector supersymmetry which characterizes all topological models. Such a solution, whose expression is quite compact and simple, greatly simplifies the study of theories displaying a supersymmetric algebraic structure, reducing to a few lines the proof of their possible finiteness. In particular, the cohomology technology, usually involved for the quantum extension of these theories, is completely bypassed. The case of Chern-Simons theory is taken as an example
Identification of general linear mechanical systems
Sirlin, S. W.; Longman, R. W.; Juang, J. N.
1983-01-01
Previous work in identification theory has been concerned with the general first order time derivative form. Linear mechanical systems, a large and important class, naturally have a second order form. This paper utilizes this additional structural information for the purpose of identification. A realization is obtained from input-output data, and then knowledge of the system input, output, and inertia matrices is used to determine a set of linear equations whereby we identify the remaining unknown system matrices. Necessary and sufficient conditions on the number, type and placement of sensors and actuators are given which guarantee identifiability, and less stringent conditions are given which guarantee generic identifiability. Both a priori identifiability and a posteriori identifiability are considered, i.e., identifiability being ensured prior to obtaining data, and identifiability being assured with a given data set.
Dynamic generalized linear models for monitoring endemic diseases
DEFF Research Database (Denmark)
Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq
2016-01-01
The objective was to use a Dynamic Generalized Linear Model (DGLM) based on a binomial distribution with a linear trend for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control and eradication programmes based on changes in PRRS sero-prevalence was explored. Results showed a declining trend in PRRS sero-prevalence between 2007 and 2014, suggesting that Danish herds are slowly eradicating PRRS. The simulation study demonstrated the flexibility of DGLMs in adapting to changes in trends in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance ...
Gravitational Wave in Linear General Relativity
Cubillos, D. J.
2017-07-01
General relativity is the best theory currently available to describe the interaction due to gravity. Within Albert Einstein's field equations this interaction is described by means of the spatiotemporal curvature generated by the matter-energy content in the universe. Weyl worked on the existence of perturbations of the curvature of space-time that propagate at the speed of light, which are known as gravitational waves, obtained to a first approximation through the linearization of Einstein's field equations. Weyl's solution consists of taking the field equations in vacuum and disturbing the metric, using the Minkowski metric slightly perturbed by a factor ɛ greater than zero but much smaller than one. If the feedback effect of the field is neglected, it can be considered a weak-field solution. After introducing the disturbed metric and ignoring ɛ terms of order greater than one, we can find the linearized field equations in terms of the perturbation, which can then be expressed as the d'Alembertian operator of the perturbation set equal to zero. This is analogous to the linear wave equation of classical mechanics, which can be interpreted by saying that gravitational effects propagate as waves at the speed of light. In addition to this, by studying the motion of a particle affected by this perturbation through the geodesic equation, one can show the transversal character of the gravitational wave and its two possible states of polarization. It can be shown that the energy carried by the wave is of the order of 1/c^5, where c is the speed of light, which explains why its effects on matter are very small and very difficult to detect.
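The linearization described above takes the standard textbook form (a generic reconstruction in the usual notation, not the author's own symbols):

```latex
g_{\mu\nu} = \eta_{\mu\nu} + \epsilon\, h_{\mu\nu}, \qquad 0 < \epsilon \ll 1 .
```

Defining the trace-reversed perturbation and imposing the Lorenz gauge,

```latex
\bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\, h, \qquad \partial^{\mu}\bar{h}_{\mu\nu} = 0,
```

the vacuum field equations reduce, to first order in \(\epsilon\), to a wave equation propagating at the speed of light:

```latex
\Box\, \bar{h}_{\mu\nu} \;=\; \left(-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}} + \nabla^{2}\right)\bar{h}_{\mu\nu} \;=\; 0 .
```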
Aspects of general linear modelling of migration.
Congdon, P
1992-01-01
"This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt
Generalized Linear Models in Vehicle Insurance
Directory of Open Access Journals (Sweden)
Silvie Kafková
2014-01-01
Actuaries in insurance companies try to find the best model for the estimation of insurance premiums. These depend on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of a portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that GLMs are not limited by inflexible preconditions. Our aim is to model the dependence of annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposed a classification analysis approach that addresses the selection of predictor variables. The models with different predictor variables are compared by analysis of deviance and the Akaike information criterion (AIC). Based on this comparison, the model for the best estimate of annual claim frequency is chosen. All statistical calculations are computed in the R environment, which contains the stats package with functions for the estimation of GLM parameters and for analysis of deviance.
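A claim-frequency GLM of the kind described can be sketched with the iteratively reweighted least squares (IRLS) algorithm that glm-style routines use internally. This is a generic Poisson/log-link illustration on synthetic data; the "age" covariate and the coefficient values are assumptions, not the paper's portfolio.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # inverse log link
        z = eta + (y - mu) / mu          # working response
        W = mu                           # working weights (Poisson variance function)
        XtW = X.T * W                    # scale each observation's column by its weight
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(2)
n = 20000
age = rng.uniform(-1, 1, n)              # hypothetical standardized driver-age covariate
X = np.column_stack([np.ones(n), age])
true_beta = np.array([-2.0, 0.5])        # assumed intercept and age effect
y = rng.poisson(np.exp(X @ true_beta))   # simulated annual claim counts
beta = poisson_irls(X, y)
```

With 20 000 simulated policies the IRLS estimates recover the assumed coefficients closely; in practice one would compare competing covariate sets by deviance and AIC, as the abstract describes.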
Adaptive H∞ synchronization of chaotic systems via linear and nonlinear feedback control
International Nuclear Information System (INIS)
Fu Shi-Hui; Lu Qi-Shao; Du Ying
2012-01-01
Adaptive H∞ synchronization of chaotic systems via linear and nonlinear feedback control is investigated. The chaotic systems are redesigned by using the generalized Hamiltonian systems and observer approach. Based on Lyapunov's stability theory, linear and nonlinear feedback control of adaptive H∞ synchronization is established in order not only to guarantee stable synchronization of both master and slave systems but also to reduce the effect of external disturbance subject to an H∞-norm constraint. Adaptive H∞ synchronization of chaotic systems via three kinds of control is investigated with applications to the Lorenz and Chen systems. Numerical simulations are also given to verify the effectiveness of the theoretical analysis.
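The master-slave structure underlying such schemes can be sketched with plain (non-adaptive) linear feedback coupling of two Lorenz systems. This is a minimal illustration only: the gain, step size, and initial states are assumptions, and the paper's adaptive H∞ design is not reproduced here.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Vector field of the Lorenz system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, k = 0.001, 30.0                  # Euler step and linear coupling gain (assumed values)
m = np.array([1.0, 1.0, 1.0])        # master state
s = np.array([5.0, -5.0, 10.0])      # slave starts far from the master
err0 = np.linalg.norm(m - s)
for _ in range(20000):               # 20 time units of simulation
    dm = lorenz(m)
    ds = lorenz(s) + k * (m - s)     # slave receives linear feedback of the sync error
    m, s = m + dt * dm, s + dt * ds
err = np.linalg.norm(m - s)
```

With a sufficiently large gain the synchronization error contracts exponentially, so `err` ends far below `err0`; the adaptive and H∞ machinery in the paper additionally bounds the effect of disturbances and removes the need to fix the gain in advance.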
Robust adaptive synchronization of general dynamical networks ...
Indian Academy of Sciences (India)
Pramana – Journal of Physics, Volume 86, Issue 6. A robust adaptive synchronization scheme for these general complex networks with multiple delays and uncertainties is established by employing the robust adaptive control principle and the Lyapunov stability theory. We choose ...
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
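The normalizing-constant issue can be made concrete by summing the unnormalized double Poisson mass directly and comparing against Efron's classical closed-form approximation. This brute-force sketch is for illustration only: the paper's own approximation method differs, and the parameter values here are arbitrary.

```python
import math

def dp_unnormalized(y, mu, theta):
    """Unnormalized double Poisson mass in Efron's (1986) form."""
    if y == 0:
        base = 1.0
    else:
        # (y^y e^{-y} / y!) * (e*mu/y)^(theta*y), computed in log space for stability
        base = math.exp(y * math.log(y) - y - math.lgamma(y + 1)
                        + theta * y * (1.0 + math.log(mu) - math.log(y)))
    return math.sqrt(theta) * math.exp(-theta * mu) * base

def dp_norm_constant(mu, theta, y_max=500):
    """Normalizing constant by brute-force truncated summation over the support."""
    total = sum(dp_unnormalized(y, mu, theta) for y in range(y_max + 1))
    return 1.0 / total

mu, theta = 4.0, 0.5           # an over-dispersed case (theta < 1); values assumed
c_sum = dp_norm_constant(mu, theta)
# Efron's closed-form approximation to the same constant
c_efron = 1.0 / (1.0 + (1.0 - theta) / (12.0 * theta * mu) * (1.0 + 1.0 / (theta * mu)))
```

The truncated sum and the closed-form value agree closely for moderate means, which is why approximating this constant well (as the paper proposes with a new method) makes the DP GLM practical to estimate.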
A Note on the Identifiability of Generalized Linear Mixed Models
DEFF Research Database (Denmark)
Labouriau, Rodrigo
2014-01-01
I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first- and second-order moments and some general mild regularity conditions, and is therefore extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with a dispersion parameter are identifiable when equipped with the standard parametrization.
Adaptive Inference on General Graphical Models
Acar, Umut A.; Ihler, Alexander T.; Mettu, Ramgopal; Sumer, Ozgur
2012-01-01
Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional ...
Linear Perturbation Adaptive Control of Hydraulically Driven Manipulators
DEFF Research Database (Denmark)
Andersen, T.O.; Hansen, M.R.; Conrad, Finn
2004-01-01
A method for synthesis of a robust adaptive scheme for a hydraulically driven manipulator, which takes full advantage of any known system dynamics to simplify the adaptive control problem for the unknown portion of the dynamics, is presented. The control method is based on adaptive perturbation control. Using the Lyapunov approach, under slowly time-varying assumptions, it is shown that the tracking error and the parameter error remain bounded. This bound is a function of the ideal parameters and a bounded disturbance. The control algorithm decouples and linearizes the manipulator so that each ...
Generalized 2-vector spaces and general linear 2-groups
Elgueta, Josep
2008-01-01
In this paper a notion of generalized 2-vector space is introduced which includes Kapranov and Voevodsky 2-vector spaces. Various kinds of generalized 2-vector spaces are considered and examples are given. The existence of non-free generalized 2-vector spaces and of generalized 2-vector spaces which are non-Karoubian (hence, non-abelian) categories is discussed, and it is shown how any generalized 2-vector space can be identified with a full subcategory of an (abelian) functor category ...
Smooth generalized linear models for aggregated data
Ayma Anza, Diego Armando
2016-01-01
International Mention in the doctoral degree. Aggregated data commonly appear in areas such as epidemiology, demography, and public health. Generally, the aggregation process is done to protect the privacy of patients, to facilitate compact presentation, or to make the data comparable with other, coarser datasets. However, this process may hinder the visualization of the underlying distribution that the data follow. It also prohibits the direct analysis of relationships between ag...
Generalized local homology and cohomology for linearly compact modules
International Nuclear Information System (INIS)
Tran Tuan Nam
2006-07-01
We study generalized local homology for linearly compact modules. By duality, we get some properties of generalized local cohomology modules and extend well-known properties of the local cohomology of A. Grothendieck.
Rapid, generalized adaptation to asynchronous audiovisual speech.
Van der Burg, Erik; Goodbourn, Patrick T
2015-04-07
The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Double generalized linear compound Poisson models to insurance claims data
DEFF Research Database (Denmark)
Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo
2017-01-01
This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed of a degenerate distribution... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurance.
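The compound Poisson response structure motivating these models (a point mass at zero plus a continuous positive part) can be simulated directly. This is an illustrative sketch with invented parameters, not the authors' estimation code:

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_poisson(n, lam, shape, scale, rng):
    """Draw n compound Poisson variables: the sum of N ~ Poisson(lam)
    independent Gamma(shape, scale) claim amounts (exactly 0 when N == 0)."""
    counts = rng.poisson(lam, size=n)
    return np.array([rng.gamma(shape, scale, size=k).sum() for k in counts])

y = compound_poisson(10000, lam=0.8, shape=2.0, scale=500.0, rng=rng)
p_zero = np.mean(y == 0)   # mass at zero, theoretically exp(-0.8) ~ 0.45
```

The exact zeros are the "degenerate distribution" component the abstract refers to; the positive part is continuous.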
Generalized Multicarrier CDMA: Unification and Linear Equalization
Directory of Open Access Journals (Sweden)
Wang Zhengdao
2005-01-01
Full Text Available Relying on block-symbol spreading and judicious design of user codes, this paper builds on the generalized multicarrier (GMC) quasisynchronous CDMA system that is capable of multiuser interference (MUI) elimination and intersymbol interference (ISI) suppression with guaranteed symbol recovery, regardless of the wireless frequency-selective channels. GMC-CDMA affords an all-digital unifying framework, which encompasses single-carrier and several multicarrier (MC) CDMA systems. Besides the unifying framework, it is shown that GMC-CDMA offers flexibility both in full load (maximum number of users allowed by the available bandwidth) and in reduced load settings. A novel blind channel estimation algorithm is also derived. Analytical evaluation and simulations illustrate the superior error performance and flexibility of uncoded GMC-CDMA over competing MC-CDMA alternatives, especially in the presence of uplink multipath channels.
Adaptive feedback linearization applied to steering of ships
Directory of Open Access Journals (Sweden)
Thor I. Fossen
1993-10-01
Full Text Available This paper describes the application of feedback linearization to automatic steering of ships. The flexibility of the design procedure allows the autopilot to be optimized for both course-keeping and course-changing manoeuvres. Direct adaptive versions of both the course-keeping and turning controller are derived. The advantages of the adaptive controllers are improved performance and reduced fuel consumption. The application of nonlinear control theory also allows the designer to compensate for nonlinearities in the control design in a systematic manner.
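The feedback-linearization idea behind such an autopilot can be sketched on a hypothetical first-order Norrbin-type steering model (all parameter values and the model itself are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical Norrbin-style steering model: T*rdot + H(r) = K*delta,
# with nonlinear manoeuvring characteristic H(r) = n3*r**3 + n1*r.
T, K, n1, n3 = 31.0, 0.5, 1.0, 0.4   # illustrative constants, not a real ship

def H(r):
    return n3 * r**3 + n1 * r

dt, r, r_d, lam = 0.05, 0.0, 0.02, 1.0   # desired constant yaw rate r_d (rad/s)
for _ in range(10000):
    a_c = -lam * (r - r_d)               # commanded yaw acceleration
    delta = (T * a_c + H(r)) / K         # feedback-linearizing rudder law
    rdot = (K * delta - H(r)) / T        # plant; equals a_c under a perfect model
    r += dt * rdot                       # explicit Euler integration

err = abs(r - r_d)
```

With perfect model cancellation, the closed loop reduces to the linear error dynamics de/dt = -lam*e, so the yaw-rate error decays exponentially; robustness to model mismatch is what the paper's adaptive versions address.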
Generalizing a categorization of students' interpretations of linear kinematics graphs
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-01-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and ...
Generalized Linear Models with Applications in Engineering and the Sciences
Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J
2012-01-01
Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma
Testing Parametric versus Semiparametric Modelling in Generalized Linear Models
Härdle, W.K.; Mammen, E.; Müller, M.D.
1996-01-01
We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.
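The parametric-versus-semiparametric comparison can be mimicked with a crude simulated F-type statistic, using a polynomial basis as a stand-in for the paper's smoother; all specifics below (identity link, data-generating process, basis) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=n)
T = rng.uniform(0, 1, size=n)
m = np.sin(2 * np.pi * T)                # a genuinely nonlinear m(T)
y = 1.5 * X + m + 0.3 * rng.normal(size=n)

def rss(design, y):
    """Residual sum of squares of a least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return resid @ resid

ones = np.ones(n)
rss_lin = rss(np.column_stack([ones, X, T]), y)           # (i): m linear in T
basis = np.column_stack([ones, X, T, T**2, T**3, T**4])   # crude flexible "m"
rss_semi = rss(basis, y)

# Classical F-type statistic for the nested comparison (3 extra parameters)
f_stat = ((rss_lin - rss_semi) / 3) / (rss_semi / (n - 6))
```

A large statistic favors the semiparametric alternative, as it should here since m(T) = sin(2*pi*T) is far from linear.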
Minimal solution of general dual fuzzy linear systems
International Nuclear Information System (INIS)
Abbasbandy, S.; Otadi, M.; Mosleh, M.
2008-01-01
Fuzzy linear systems of equations play a major role in several applications in various areas such as engineering, physics, and economics. In this paper, we investigate the existence of a minimal solution of general dual fuzzy linear equation systems. Two necessary and sufficient conditions for the existence of a minimal solution are given. Also, some examples in engineering and economics are considered.
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
Downie, John D.; Goodman, Joseph W.
1989-10-01
The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.
From linear to generalized linear mixed models: A case study in repeated measures
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
Adaptive phase measurements in linear optical quantum computation
International Nuclear Information System (INIS)
Ralph, T C; Lund, A P; Wiseman, H M
2005-01-01
Photon counting induces an effective non-linear optical phase shift in certain states derived by linear optics from single photons. Although this non-linearity is non-deterministic, it is sufficient in principle to allow scalable linear optics quantum computation (LOQC). The most obvious way to encode a qubit optically is as a superposition of the vacuum and a single photon in one mode, so-called 'single-rail' logic. Until now this approach was thought to be prohibitively expensive (in resources) compared to 'dual-rail' logic, where a qubit is stored by a photon across two modes. Here we attack this problem with real-time feedback control, which can realize a quantum-limited phase measurement on a single mode, as has been recently demonstrated experimentally. We show that with this added measurement resource, the resource requirements for single-rail LOQC are not substantially different from those of dual-rail LOQC. In particular, with adaptive phase measurements an arbitrary qubit state α|0> + β|1> can be prepared deterministically.
About one non linear generalization of the compression reflection ...
African Journals Online (AJOL)
Both the stage and spiral iteration cases are considered. A geometrical interpretation of the convergence of the generalized iteration method is given. The formula for the non-linear generalized compression reflection operator as a function of one variable is obtained.
McDonald Generalized Linear Failure Rate Distribution
Directory of Open Access Journals (Sweden)
Ibrahim Elbatal
2014-10-01
Full Text Available We introduce in this paper a new six-parameter generalized version of the generalized linear failure rate (GLFR) distribution, which is called the McDonald generalized linear failure rate (McGLFR) distribution. The new distribution is quite flexible and can be used effectively in modeling survival data and reliability problems. It can have constant, decreasing, increasing, upside-down bathtub- and bathtub-shaped failure rate functions depending on its parameters. It includes some well-known lifetime distributions as special sub-models. Some structural properties of the new distribution are studied. Moreover, we discuss maximum likelihood estimation of the unknown parameters of the new model.
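The hazard shapes claimed above can be checked numerically for the underlying GLFR sub-model; this is a sketch of the three-parameter GLFR only (the six-parameter McGLFR adds McDonald shape parameters), with parameter values chosen for illustration:

```python
import numpy as np

def glfr_hazard(t, a, b, alpha):
    """Hazard of the GLFR distribution F(t) = (1 - exp(-(a t + b t^2/2)))^alpha:
    h(t) = f(t) / (1 - F(t)) computed from the closed-form cdf and pdf."""
    g = a * t + 0.5 * b * t**2
    gp = a + b * t
    F = (1.0 - np.exp(-g))**alpha
    f = alpha * (1.0 - np.exp(-g))**(alpha - 1.0) * np.exp(-g) * gp
    return f / (1.0 - F)

t = np.linspace(0.05, 3.0, 200)
h_inc = glfr_hazard(t, a=0.5, b=1.0, alpha=1.0)   # reduces to h = a + b t: increasing
h_dec = glfr_hazard(t, a=1.0, b=0.0, alpha=0.3)   # exponentiated-exponential case: decreasing
```

Different parameter corners give the constant, increasing, decreasing, and bathtub-type shapes the abstract lists.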
Gradient-based adaptation of general gaussian kernels.
Glasmachers, Tobias; Igel, Christian
2005-10-01
Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.
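The exponential-map parameterization can be sketched as follows; variable names are ours and the gradient step itself is omitted, so this only shows why the parameterization keeps the kernel matrix valid:

```python
import numpy as np

def general_gaussian_kernel(x, y, A):
    """k(x, y) = exp(-(x-y)^T M (x-y)) with M = expm(A) for symmetric A.
    The matrix exponential guarantees M is symmetric positive definite for
    any symmetric A, so unconstrained gradient steps on A stay on the
    manifold of valid kernel parameters."""
    w, V = np.linalg.eigh(A)            # A = V diag(w) V^T
    M = (V * np.exp(w)) @ V.T           # expm(A) via the eigendecomposition
    d = x - y
    return np.exp(-d @ M @ d), M

A = np.array([[0.2, 0.1],
              [0.1, -0.3]])             # symmetric parameter matrix (illustrative)
k, M = general_gaussian_kernel(np.array([1.0, 0.0]), np.array([0.0, 1.0]), A)
```

Note that det(M) = exp(trace(A)), which is why restricting the optimization to a constant-trace subspace of A fixes the kernel "size" (volume) as the abstract describes.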
Extending the Linear Model with R
Faraway, Julian J
2005-01-01
Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...
Adaptive linear rank tests for eQTL studies.
Szymczak, Silke; Scheinhardt, Markus O; Zeller, Tanja; Wild, Philipp S; Blankenberg, Stefan; Ziegler, Andreas
2013-02-10
Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal-Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. Copyright © 2012 John Wiley & Sons, Ltd.
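The two-stage selection idea can be sketched as follows; the selector statistics are Hogg-type quantile measures and the cutoffs and labels are illustrative, not the published procedure:

```python
import numpy as np

def select_rank_test(x):
    """Toy two-stage selector in the spirit of adaptive linear rank tests:
    estimate skewness and tail weight of the pooled sample from quantiles,
    then pick a score function accordingly (cutoffs are invented)."""
    x = np.sort(np.asarray(x))
    n = len(x)
    q = lambda p: x[int(p * (n - 1))]          # crude empirical quantile
    skew = (q(0.975) - q(0.5)) / max(q(0.5) - q(0.025), 1e-12)
    tails = (q(0.975) - q(0.025)) / max(q(0.875) - q(0.125), 1e-12)
    if skew > 2.0:
        return "skewed scores"
    if tails > 2.0:
        return "long-tailed (median-type) scores"
    return "Wilcoxon scores"

rng = np.random.default_rng(2)
choice_exp = select_rank_test(rng.exponential(size=2000))   # strongly skewed
choice_norm = select_rank_test(rng.normal(size=2000))       # symmetric, light tails
```

The selector routes skewed expression data away from symmetric scores, mirroring the motivation for adaptive tests in eQTL settings.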
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
Adaptive discontinuous Galerkin methods for non-linear reactive flows
Uzunca, Murat
2016-01-01
The focus of this monograph is the development of space-time adaptive methods to solve the convection/reaction dominated non-stationary semi-linear advection diffusion reaction (ADR) equations with internal/boundary layers in an accurate and efficient way. After introducing the ADR equations and discontinuous Galerkin discretization, robust residual-based a posteriori error estimators in space and time are derived. The elliptic reconstruction technique is then utilized to derive the a posteriori error bounds for the fully discrete system and to obtain optimal orders of convergence. As coupled surface and subsurface flow over large space and time scales is described by the ADR equation, the methods described in this book are of high importance in many areas of Geosciences, including oil and gas recovery, groundwater contamination and sustainable use of groundwater resources, and storing greenhouse gases or radioactive waste in the subsurface.
An implicit spectral formula for generalized linear Schroedinger equations
International Nuclear Information System (INIS)
Schulze-Halberg, A.; Garcia-Ravelo, J.; Pena Gil, Jose Juan
2009-01-01
We generalize the semiclassical Bohr–Sommerfeld quantization rule to an exact, implicit spectral formula for linear, generalized Schroedinger equations admitting a discrete spectrum. Special cases include the position-dependent mass Schroedinger equation or the Schroedinger equation for weighted energy. Requiring knowledge of the potential and the solution associated with the lowest spectral value, our formula predicts the complete spectrum in its exact form. (author)
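For reference, the classical semiclassical rule that the paper's exact implicit formula generalizes can be written in its standard WKB form (this is the textbook rule, not the paper's result):

```latex
% Semiclassical Bohr--Sommerfeld quantization rule (standard WKB form),
% for a particle of mass m in potential V with turning points x_1, x_2:
\oint p \,\mathrm{d}x
  \;=\; 2\int_{x_1}^{x_2} \sqrt{2m\,\bigl(E_n - V(x)\bigr)}\;\mathrm{d}x
  \;=\; \Bigl(n+\tfrac{1}{2}\Bigr)\,2\pi\hbar,
\qquad n = 0,1,2,\dots
```

The paper replaces this approximate rule by an exact implicit spectral formula valid for generalized (e.g. position-dependent mass or weighted-energy) Schroedinger equations.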
Solution of generalized shifted linear systems with complex symmetric matrices
International Nuclear Information System (INIS)
Sogabe, Tomohiro; Hoshi, Takeo; Zhang, Shao-Liang; Fujiwara, Takeo
2012-01-01
We develop the shifted COCG method [R. Takayama, T. Hoshi, T. Sogabe, S.-L. Zhang, T. Fujiwara, Linear algebraic calculation of Green’s function for large-scale electronic structure theory, Phys. Rev. B 73 (165108) (2006) 1–9] and the shifted WQMR method [T. Sogabe, T. Hoshi, S.-L. Zhang, T. Fujiwara, On a weighted quasi-residual minimization strategy of the QMR method for solving complex symmetric shifted linear systems, Electron. Trans. Numer. Anal. 31 (2008) 126–140] for solving generalized shifted linear systems with complex symmetric matrices that arise from the electronic structure theory. The complex symmetric Lanczos process with a suitable bilinear form plays an important role in the development of the methods. The numerical examples indicate that the methods are highly attractive when the inner linear systems can efficiently be solved.
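The property such shifted Krylov solvers exploit is the shift invariance of Krylov subspaces, K_m(A, b) = K_m(A + sigma*I, b). The small numerical check below demonstrates standard shifts only; the generalized (A + sigma*B) systems treated in the paper require the authors' extensions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, sigma = 8, 3, 1.7
A = rng.normal(size=(n, n))
b = rng.normal(size=n)

def krylov_basis(A, b, m):
    """Orthonormal basis of K_m(A, b) = span{b, Ab, ..., A^(m-1) b}."""
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(m)])
    Q, _ = np.linalg.qr(K)
    return Q

Q1 = krylov_basis(A, b, m)
Q2 = krylov_basis(A + sigma * np.eye(n), b, m)
# If the subspaces coincide, projecting Q2 onto span(Q1) reproduces Q2 exactly
gap = np.linalg.norm(Q1 @ (Q1.T @ Q2) - Q2)
```

Because one Krylov basis serves every shift, a single iteration sequence can solve all shifted systems at once, which is the efficiency the shifted COCG and shifted WQMR methods build on.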
New Implicit General Linear Method | Ibrahim | Journal of the ...
African Journals Online (AJOL)
A new implicit general linear method is designed for the numerical solution of stiff differential equations. The coefficient matrix is derived from the stability function. The method combines single-implicitness or diagonal implicitness with the property that the first two rows are implicit and the third and fourth rows are explicit.
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Penalized Estimation in Large-Scale Generalized Linear Array Models
DEFF Research Database (Denmark)
Lund, Adam; Vincent, Martin; Hansen, Niels Richard
2017-01-01
Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
Generalizing a categorization of students’ interpretations of linear kinematics graphs
Directory of Open Access Journals (Sweden)
Laurens Bollen
2016-02-01
Full Text Available We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.
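The "incorrectly dividing two coordinates" error the study reports is easy to make concrete; the numbers below are a hypothetical example, not an item from the study:

```python
# For a linear position-time graph x(t) = 2t + 3 (speed 2), dividing a
# single coordinate pair x/t does not give the speed unless the line
# passes through the origin; the slope (rise over run) always does.
def x(t):
    return 2.0 * t + 3.0

t1, t2 = 3.0, 5.0
slope = (x(t2) - x(t1)) / (t2 - t1)   # correct: (13 - 9) / (5 - 3) = 2.0
ratio = x(t1) / t1                    # coordinate division: 9 / 3 = 3.0, not the speed
```

The two quantities coincide only when the intercept is zero, which is exactly why the error goes unnoticed on graphs through the origin.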
General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles.
Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J
2017-09-29
The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
Computation of Optimal Monotonicity Preserving General Linear Methods
Ketcheson, David I.
2009-07-01
Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
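The strong stability preservation property these optimal methods are designed for can be demonstrated with the classical three-stage SSP Runge-Kutta method (Shu-Osher form) on upwind-discretized advection; this is a generic sketch of the property, not the paper's optimization algorithm:

```python
import numpy as np

N, c = 100, 0.8                 # grid cells and CFL number (c <= 1 required)
dx = 1.0 / N
dt = c * dx

def L(u):
    """Periodic first-order upwind operator for u_t + u_x = 0."""
    return -(u - np.roll(u, 1)) / dx

def ssprk3_step(u, dt):
    """Shu-Osher SSPRK3: convex combinations of forward Euler steps,
    so it inherits Euler's total-variation bound at the same CFL."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

def total_variation(u):
    return np.abs(np.diff(np.append(u, u[0]))).sum()

cells = np.arange(N) * dx
u = np.where((cells > 0.3) & (cells < 0.6), 1.0, 0.0)   # step profile, TV = 2
tv0 = total_variation(u)
for _ in range(200):
    u = ssprk3_step(u, dt)
tv_end = total_variation(u)
```

Because each stage is a convex combination of forward Euler steps, the total variation of the discontinuous profile never grows, which is the "prevent the growth of propagated errors" behavior the abstract describes.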
Adaptive fuzzy bilinear observer based synchronization design for generalized Lorenz system
International Nuclear Information System (INIS)
Baek, Jaeho; Lee, Heejin; Kim, Seungwoo; Park, Mignon
2009-01-01
This Letter proposes an adaptive fuzzy bilinear observer (FBO) based synchronization design for the generalized Lorenz system (GLS). The GLS can be described by a TS fuzzy bilinear generalized Lorenz model (FBGLM) with its states unmeasurable and its parameters unknown. We design an adaptive FBO based on the TS FBGLM for synchronization. Lyapunov theory is employed to guarantee the stability of the error dynamic system via linear matrix inequalities (LMIs) and to derive the adaptive laws to estimate unknown parameters. A numerical example is given to demonstrate the validity of our proposed adaptive FBO approach for synchronization.
Linear hypergeneralization of learned dynamics across movement speeds reveals anisotropic, gain-encoding primitives for motor adaptation
Joiner, Wilsaan M; Ajayi, Obafunso; Sing, Gary C; Smith, Maurice A
2011-01-01
The ability to generalize learned motor actions to new contexts is a key feature of the motor system. For example, the ability to ride a bicycle or swing a racket is often first developed at lower speeds and later applied to faster velocities. A number of previous studies have examined the generalization of motor adaptation across movement directions and found that the learned adaptation decays in a pattern consistent with the existence of motor primitives that display narrow Gaussian tuning. However, few studies have examined the generalization of motor adaptation across movement speeds. Following adaptation to linear velocity-dependent dynamics during point-to-point reaching arm movements at one speed, we tested the ability of subjects to transfer this adaptation to short-duration higher-speed movements aimed at the same target. We found near-perfect linear extrapolation of the trained adaptation with respect to both the magnitude and the time course of the velocity profiles associated with the high-speed movements: a 69% increase in movement speed corresponded to a 74% extrapolation of the trained adaptation. The close match between the increase in movement speed and the corresponding increase in adaptation beyond what was trained indicates linear hypergeneralization. Computational modeling shows that this pattern of linear hypergeneralization across movement speeds is not compatible with previous models of adaptation in which motor primitives display isotropic Gaussian tuning of motor output around their preferred velocities. Instead, we show that this generalization pattern indicates that the primitives involved in the adaptation to viscous dynamics display anisotropic tuning in velocity space and encode the gain between motor output and motion state rather than motor output itself.
Neural Generalized Predictive Control of a non-linear Process
DEFF Research Database (Denmark)
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1998-01-01
The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that real-time implementation is non-trivial. In this paper we introduce a predictive controller, based on a neural network model, which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.
Genetic parameters for racing records in trotters using linear and generalized linear models.
Suontama, M; van der Werf, J H J; Juga, J; Ojala, M
2012-09-01
Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.
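The study's use of log and fourth-root scales to correct nonnormality can be illustrated numerically; the sketch below uses simulated right-skewed data standing in for single-race earnings, not the Finnhorse or Standardbred records:

```python
import numpy as np

rng = np.random.default_rng(4)

def skewness(x):
    """Sample skewness: third central moment over cubed standard deviation."""
    x = np.asarray(x)
    return np.mean((x - x.mean())**3) / x.std()**3

# Heavily right-skewed "earnings"-like data (lognormal is illustrative only)
earnings = rng.lognormal(mean=6.0, sigma=1.2, size=20000)
skew_raw = skewness(earnings)           # far from normal
skew_root = skewness(earnings**0.25)    # fourth-root scale, much closer to symmetric
```

The transformed values are far less skewed, which is the rationale for analyzing racing time on the log scale and earnings on the fourth-root scale before fitting linear models.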
A Non-Gaussian Spatial Generalized Linear Latent Variable Model
Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.
2012-08-03
We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap datasets are, respectively, examples of binomial and count data modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
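The Laplace approximation at the heart of such algorithms can be sketched for a one-dimensional GLMM-style integral; the integrand below is an invented Poisson-with-normal-random-effect example, not the paper's spatial model:

```python
import numpy as np

# Laplace approximation: I = integral of exp(-f(u)) du is approximated by
# exp(-f(u*)) * sqrt(2*pi / f''(u*)), where u* minimizes f.
y, tau2 = 3.0, 0.5                       # toy count and random-effect variance

def f(u):
    return np.exp(u) - y * u + u**2 / (2 * tau2)

def fpp(u):
    return np.exp(u) + 1.0 / tau2

u = 0.0                                  # Newton iterations for the mode u*
for _ in range(50):
    grad = np.exp(u) - y + u / tau2
    u -= grad / fpp(u)

laplace = np.exp(-f(u)) * np.sqrt(2 * np.pi / fpp(u))

# Brute-force quadrature on a fine grid for comparison
grid = np.linspace(-10.0, 10.0, 200001)
h = grid[1] - grid[0]
exact = np.exp(-f(grid)).sum() * h
rel_err = abs(laplace - exact) / exact
```

For smooth, well-peaked integrands like this one the approximation is accurate to a fraction of a percent, which is why the paper finds it competitive with Monte Carlo alternatives at far lower cost.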
Testing for one Generalized Linear Single Order Parameter
DEFF Research Database (Denmark)
Ellegaard, Niels Langager; Christensen, Tage Emil; Dyre, Jeppe
We examine a linear single order parameter model for thermoviscoelastic relaxation in viscous liquids, allowing for a distribution of relaxation times. In this model the relaxation of volume and enthalpy is completely described by the relaxation of one internal order parameter. In contrast to prior...... work the order parameter may be chosen to have a non-exponential relaxation. The model predictions contradict the general consensus of the properties of viscous liquids in two ways: (i) The model predicts that following a linear isobaric temperature step, the normalized volume and enthalpy relaxation...... responses or extrapolate from measurements of a glassy state away from equilibrium. Starting from a master equation description of inherent dynamics, we calculate the complex thermodynamic response functions. We devise a way of testing for the generalized single order parameter model by measuring 3 complex...
Linear relativistic gyrokinetic equation in general magnetically confined plasmas
International Nuclear Information System (INIS)
Tsai, S.T.; Van Dam, J.W.; Chen, L.
1983-08-01
The gyrokinetic formalism for linear electromagnetic waves of arbitrary frequency in general magnetic-field configurations is extended to include full relativistic effects. The derivation employs the small adiabaticity parameter ρ/L₀, where ρ is the Larmor radius and L₀ the equilibrium scale length. The effects of plasma and magnetic-field inhomogeneities and of finite Larmor radius are also included.
A general method for enclosing solutions of interval linear equations
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří
2012-01-01
Roč. 6, č. 4 (2012), s. 709-717 ISSN 1862-4472 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval linear equations * solution set * enclosure * absolute value inequality Subject RIV: BA - General Mathematics Impact factor: 1.654, year: 2012
General treatment of a non-linear gauge condition
International Nuclear Information System (INIS)
Malleville, C.
1982-06-01
A non-linear gauge condition is presented in the framework of a non-abelian gauge theory broken by the Higgs mechanism. It is shown that this condition, already introduced for the standard SU(2) x U(1) model, can be generalized to any gauge model with the same type of simplification, namely the suppression of any coupling of the form: massless gauge boson, massive gauge boson, unphysical Higgs.
Canonical perturbation theory in linearized general relativity theory
International Nuclear Information System (INIS)
Gonzales, R.; Pavlenko, Yu.G.
1986-01-01
Canonical perturbation theory in linearized general relativity theory is developed. It is shown that the evolution of an arbitrary dynamical quantity, conditioned by the interaction of particles, gravitation, and electromagnetic fields, can be presented in the form of a series, each term of which corresponds to the contribution of a certain spontaneous or induced process. The main concepts of the approach are presented in the weak gravitational field approximation.
Electromagnetic axial anomaly in a generalized linear sigma model
Fariborz, Amir H.; Jora, Renata
2017-06-01
We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other with a four-quark content. We compute in the leading order of this framework the two-photon decays of six pseudoscalars: π0(137), π0(1300), η(547), η′(958), η(1295) and η(1760). Our results agree well with the available experimental data.
Combined adapter for the upgraded cryomodule of the linear collider
International Nuclear Information System (INIS)
Budagov, Yu.; Shirkov, G.; Sabirov, B.; Dobrushin, L.; Bryzgalin, A.; Pekar, E.; Illarionov, S.; Bedeschi, F.; Basti, A.; Fabbricatore, P.
2015-01-01
As part of work on the ILC Project, research was performed on techniques to make the construction of the cryomodules at the core of the main linac simpler, more reliable, and cheaper. In the current ILC TDR design, both the helium vessel surrounding the niobium RF cavities and the connected pipes which channel the exhaust helium gas are made of expensive titanium, one of the few metals that can be welded to niobium by the electron-beam technique. In this paper we describe the construction and performance of transition elements, obtained by explosion welding, that can couple the niobium cavity with a stainless steel helium vessel, thus saving large amounts of titanium. A new design, including a minimal titanium intermediate layer, has been built. Preliminary tests showed that the bond strongly resists extreme temperature shocks, from electron-beam welding to exposure to cryogenic temperatures. The developed technology allows a trimetallic billet for manufacturing an adapter to be made such that the niobium-titanium bond is free of intermetallic compounds and the effect of the difference in the linear expansion coefficients of the ensemble components is eliminated.
Robust adaptive synchronization of general dynamical networks ...
Indian Academy of Sciences (India)
Robust adaptive synchronization; dynamical network; multiple delays; multiple uncertainties. ... Networks such as neural networks, communication transmission networks, social relationship networks etc. ..... a very good effect.
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
Energy Technology Data Exchange (ETDEWEB)
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de; Kühn, Oliver [Institute of Physics, Rostock University, Universitätsplatz 3, 18055 Rostock (Germany)
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system under study, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
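The frequency-domain idea can be caricatured in a few lines: if a memory-kernel convolution m(t) = (K * v)(t) is observed, pointwise division of Fourier transforms recovers K. This toy (the signals, the discretization, and the probe choice are mine; the paper's actual GLE parametrization is far more involved) illustrates why the frequency domain is the natural place for the deconvolution:

```python
import numpy as np

n, dt = 256, 0.1
t = np.arange(n) * dt
K_true = np.exp(-t)            # assumed exponential memory kernel
v = np.exp(-0.05 * t)          # probe signal with a well-conditioned spectrum
# circular convolution m = (K * v) dt, computed via the FFT
m = np.fft.ifft(np.fft.fft(K_true) * np.fft.fft(v)).real * dt
# frequency-domain "parametrization": pointwise division recovers the kernel
K_est = np.fft.ifft(np.fft.fft(m) / (np.fft.fft(v) * dt)).real
```

Doing the same recovery in the time domain requires solving a Volterra-type equation step by step, where discretization errors accumulate; the division above is a single well-conditioned operation.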
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
International Nuclear Information System (INIS)
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver
2015-01-01
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system under study, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
Hypothermic general cold adaptation induced by local cold acclimation.
Savourey, G; Barnavol, B; Caravel, J P; Feuerstein, C; Bittel, J H
1996-01-01
To study relationships between local cold adaptation of the lower limbs and general cold adaptation, eight subjects were submitted both to a cold foot test (CFT, 5 degrees C water immersion, 5 min) and to a whole-body standard cold air test (SCAT, 1 degree C, 2 h, nude at rest) before and after a local cold acclimation (LCA) of the lower limbs effected by repeated cold water immersions. The LCA induced a local cold adaptation confirmed by higher skin temperatures of the lower limbs during CFT and a hypothermic insulative general cold adaptation (decreased rectal temperature and mean skin temperature P adaptation was related to the habituation process confirmed by decreased plasma concentrations of noradrenaline (NA) during LCA (P general cold adaptation was unrelated either to local cold adaptation or to the habituation process, because an increased NA during SCAT after LCA (P syndrome" occurring during LCA.
Cazzulani, Gabriele; Resta, Ferruccio; Ripamonti, Francesco
2012-04-01
In recent years, more and more mechanical applications have seen the introduction of active control strategies. In particular, the need to improve performance and/or system health is very often associated with vibration suppression. This goal can be achieved with both passive and active solutions. In this sense, many active control strategies have been developed, such as the Independent Modal Space Control (IMSC) or the resonant controllers (PPF, IRC, . . .). In all these cases, in order to tune and optimize the control strategy, knowledge of the system's dynamic behaviour is very important, and it can be obtained either from a numerical model of the system or through an experimental identification process. However, when dealing with non-linear or time-varying systems, a tool able to identify the system parameters online becomes a key point for the control logic synthesis. The aim of the present work is the definition of a real-time technique, based on ARMAX models, that estimates the system parameters starting from the measurements of piezoelectric sensors. These parameters are returned to the control logic, which automatically adapts itself to the system dynamics. The problem is numerically investigated considering a carbon-fiber plate model forced through a piezoelectric patch.
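The online-identification step can be sketched with plain recursive least squares on a first-order ARX model, a simplification of the ARMAX estimator described above (the system, gains, and initial covariance below are invented for illustration):

```python
import numpy as np

# Noise-free ARX(1,1) plant: y[k] = a*y[k-1] + b*u[k-1]
rng = np.random.default_rng(42)
a_true, b_true = 0.8, 0.5
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

# Recursive least squares: update the estimate sample by sample
theta = np.zeros(2)          # running estimate of [a, b]
P = np.eye(2) * 1000.0       # covariance of the estimate (large = vague prior)
for k in range(1, N):
    phi = np.array([y[k - 1], u[k - 1]])        # regressor
    K = P @ phi / (1.0 + phi @ P @ phi)         # gain
    theta = theta + K * (y[k] - phi @ theta)    # parameter update
    P = P - np.outer(K, phi @ P)                # covariance update
```

In the adaptive-control setting, theta would be handed to the controller at each step; a forgetting factor (not shown) lets the estimator track time-varying parameters.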
Thurstonian models for sensory discrimination tests as generalized linear models
DEFF Research Database (Denmark)
Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2010-01-01
as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard......Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed
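For the 2-AFC protocol mentioned above, the Thurstonian link is pc = Φ(d′/√2), so the d′ estimate is an inverse-normal transform of the proportion correct, exactly the kind of nonstandard link a generalized linear model can carry. A minimal sketch (the function name and the data are mine):

```python
import numpy as np
from scipy.stats import norm

def dprime_2afc(correct, total):
    """Invert the 2-AFC Thurstonian link pc = Phi(d'/sqrt(2))."""
    pc = correct / total
    return np.sqrt(2.0) * norm.ppf(pc)

d = dprime_2afc(84, 100)   # 84 % correct answers observed
```

Each discrimination protocol (triangle, duo-trio, 3-AFC) has its own psychometric function and hence its own link; the GLM formulation also delivers the standard error of d′ as routine output.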
A Graphical User Interface to Generalized Linear Models in MATLAB
Directory of Open Access Journals (Sweden)
Peter Dunn
1999-07-01
Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB, software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights, and user-defined distributions and link functions. MATLAB's graphical capacities are also utilized in providing a number of simple residual diagnostic plots.
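The fitting engine behind any such GLM package is iteratively reweighted least squares (IRLS). A bare-bones Poisson/log-link version (my own toy, not GLMLAB code; the simulated data and fixed iteration count are arbitrary choices):

```python
import numpy as np

def glm_poisson_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # inverse of the log link
        W = mu                           # working weights for Poisson/log
        z = eta + (y - mu) / mu          # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)  # weighted LS step
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.uniform(-1, 1, 200)])
y = rng.poisson(np.exp(0.3 + 0.7 * X[:, 1]))
beta = glm_poisson_irls(X, y)
```

Swapping the three lines defining mu, W, and z changes the family and link; that uniformity is what lets one GUI cover the whole GLM class.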
General mirror pairs for gauged linear sigma models
Energy Technology Data Exchange (ETDEWEB)
Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University,Box 90320, Durham, NC 27708-0320 (United States)
2015-11-05
We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.
General mirror pairs for gauged linear sigma models
International Nuclear Information System (INIS)
Aspinwall, Paul S.; Plesser, M. Ronen
2015-01-01
We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.
Polymorphic Uncertain Linear Programming for Generalized Production Planning Problems
Directory of Open Access Journals (Sweden)
Xinbo Zhang
2014-01-01
A polymorphic uncertain linear programming (PULP) model is constructed to formulate a class of generalized production planning problems. In accordance with the practical environment, factors such as raw-material consumption, resource limitations, and product demand are incorporated into the model as interval and fuzzy-subset parameters, respectively. Based on the theory of fuzzy interval programming and a modified possibility degree for the ordering of interval numbers, a deterministic equivalent formulation for this model is derived such that a robust solution for the uncertain optimization problem is obtained. A case study indicates that the constructed model and the proposed solution method are useful for finding an optimal production plan for polymorphic uncertain generalized production planning problems.
Generalized space and linear momentum operators in quantum mechanics
International Nuclear Information System (INIS)
Costa, Bruno G. da; Borges, Ernesto P.
2014-01-01
We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which implies the definition of a Hermitian deformed linear momentum operator p̂_q and its canonically conjugate deformed position operator x̂_q. A canonical transformation leads the Hamiltonian of a position-dependent-mass particle to another Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is shown as an example. Uncertainty and correspondence principles are analyzed.
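The q-exponential factor driving the deformation can be checked numerically: exp_q(x) = [1 + (1−q)x]^{1/(1−q)} reduces to the ordinary exponential as q → 1. A small scalar-only sketch (the cutoff convention where the base goes non-positive is an assumption on my part):

```python
import numpy as np

def exp_q(x, q):
    """q-exponential: [1 + (1-q) x]^(1/(1-q)), with exp(x) recovered at q = 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# approach the ordinary exponential at x = 1 as q -> 1
vals = [exp_q(1.0, q) for q in (0.5, 0.9, 0.999, 1.0)]
```

For q = 0.5 the value is (1.5)² = 2.25; as q approaches 1 the values climb toward e, which is the correspondence-principle limit mentioned in the abstract.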
A Sawmill Manager Adapts To Change With Linear Programming
George F. Dutrow; James E. Granskog
1973-01-01
Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent hike in volume processed.
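A toy version of such a sawmill LP (all numbers invented, not from the study): maximize revenue from two products under a log-supply and a mill-hours constraint, using SciPy's linprog.

```python
from scipy.optimize import linprog

# maximize 40*x1 + 30*x2  ->  minimize -40*x1 - 30*x2
res = linprog(c=[-40, -30],
              A_ub=[[1, 1],      # logs:       x1 +   x2 <= 100
                    [2, 1]],     # mill hours: 2*x1 + x2 <= 160
              b_ub=[100, 160],
              bounds=[(0, None), (0, None)])
revenue = -res.fun               # optimal plan: x1 = 60, x2 = 40
```

The dual values (shadow prices) of such a model are what drive the stumpage-purchasing guidance: they price an extra log or an extra mill hour at the margin.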
Adaptive Kronrod-Patterson integration of non-linear finite-element matrices
DEFF Research Database (Denmark)
Janssen, Hans
2010-01-01
inappropriate discretization. In response, this article develops adaptive integration, based on nested Kronrod-Patterson-Gauss integration schemes: basically, the integration order is adapted to the locally observed degree of non-linearity. Adaptive integration is developed based on a standard infiltration...
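The core idea, raising the integration order until the estimate stops changing, can be sketched with plain Gauss-Legendre rules (the article uses nested Kronrod-Patterson rules precisely because they reuse points between orders; plain Gauss, as here, does not, so this is only an illustration of the adaptation loop):

```python
import numpy as np

def adaptive_gauss(f, a, b, tol=1e-10, max_order=40):
    """Raise the Gauss-Legendre order until two successive estimates agree."""
    prev = None
    for order in range(2, max_order + 1):
        x, w = np.polynomial.legendre.leggauss(order)
        xm = 0.5 * (b - a) * x + 0.5 * (a + b)   # map nodes to [a, b]
        est = 0.5 * (b - a) * np.sum(w * f(xm))
        if prev is not None and abs(est - prev) < tol:
            return est, order
        prev = est
    return prev, max_order

val, order = adaptive_gauss(np.exp, 0.0, 1.0)   # integral of e^x on [0, 1]
```

For a smooth integrand the loop terminates at a low order; for a locally non-linear one (e.g. near a moisture front) the order climbs, which is exactly the behavior the element-matrix integration needs.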
Bayesian Subset Modeling for High-Dimensional Generalized Linear Models
Liang, Faming
2013-06-01
This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
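The extended Bayesian information criterion that the BSR posterior mode approximates can be computed directly for a Gaussian submodel: EBIC(s) = n·log(RSS/n) + |s|·log n + 2γ|s|·log p (Chen and Chen's form; the toy data and subsets below are mine). The true submodel should minimize it:

```python
import numpy as np

def ebic(X, y, subset, p, gamma=1.0):
    """Extended BIC of an OLS fit on the given column subset."""
    n = len(y)
    Xs = X[:, subset]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    k = len(subset)
    return n * np.log(rss / n) + k * np.log(n) + 2.0 * gamma * k * np.log(p)

rng = np.random.default_rng(7)
n, p = 100, 20
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

true_model = ebic(X, y, [0, 3], p)      # the data-generating subset
padded_model = ebic(X, y, [0, 3, 5], p)  # one spurious predictor added
under_model = ebic(X, y, [0], p)         # one true predictor missing
```

The extra 2γ|s|·log p term is what keeps the criterion consistent when p grows with n, which is the high-dimensional regime the article targets.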
Adaptive Non-linear Control of Hydraulic Actuator Systems
DEFF Research Database (Denmark)
Hansen, Poul Erik; Conrad, Finn
1998-01-01
Presentation of two newly developed adaptive non-linear controllers for hydraulic actuator systems to give stable operation and improved performance. Results from the IMCIA project supported by the Danish Technical Research Council (STVF).
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
First, new sufficient conditions for the identifiability of simultaneous input and parameter estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on a tokamak plasma heat transport model using simulated data.
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. The success of supervised DAL in this "small sample" regime therefore requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object. Copyright © 2015 Elsevier Ltd. All rights reserved.
Adjoint based model adaptation for a linear problem
Cnossen, J.M.; Bijl, H.; Koren, B.; Brummelen, van E.H.
2004-01-01
In aerospace engineering CFD is often applied to obtain values for quantities of interest which are global functionals of the solution. To optimise the balance between accuracy of the computed functional and CPU time we focus on dual-weighted adaptive hierarchical modelling of fluid flow. In this
DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains...... a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids...... the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish....
Multivariate statistical modelling based on generalized linear models
Fahrmeir, Ludwig
1994-01-01
This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...
Generalized Functional Linear Models With Semiparametric Single-Index Interactions
Li, Yehua
2010-06-01
We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.
The linearized inversion of the generalized interferometric multiple imaging
Aldawood, Ali
2016-09-06
The generalized interferometric multiple imaging (GIMI) procedure can be used to image duplex waves and other higher order internal multiples. Imaging duplex waves could help illuminate subsurface zones that are not easily illuminated by primaries such as vertical and nearly vertical fault planes, and salt flanks. To image first-order internal multiple, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI procedure yields migrated images that suffer from low spatial resolution, migration artifacts, and cross-talk noise. To alleviate these problems, we propose a least-squares GIMI framework in which we formulate the first two steps as a linearized inversion problem when imaging first-order internal multiples. Tests on synthetic datasets demonstrate the ability to localize subsurface scatterers in their true positions, and delineate a vertical fault plane using the proposed method. We, also, demonstrate the robustness of the proposed framework when imaging the scatterers or the vertical fault plane with erroneous migration velocities.
Generalized Functional Linear Models With Semiparametric Single-Index Interactions
Li, Yehua; Wang, Naisyin; Carroll, Raymond J.
2010-01-01
We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.
A generalized adaptive mathematical morphological filter for LIDAR data
Cui, Zheng
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the most widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements at topographic highs, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in
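The morphological core of the PM filter is easy to see in one dimension (the real filter works on 2-D point clouds with progressively growing windows and slope-dependent thresholds; the window size, threshold, and profile here are invented): an opening — erosion followed by dilation — flattens objects narrower than the window, and points far above the opened surface are flagged as non-ground.

```python
import numpy as np

def morphological_open(z, w):
    """Grey-scale opening of a 1-D elevation profile with window width w."""
    half = w // 2
    n = len(z)
    eroded = np.array([z[max(0, i - half):i + half + 1].min() for i in range(n)])
    opened = np.array([eroded[max(0, i - half):i + half + 1].max() for i in range(n)])
    return opened

z = np.linspace(0.0, 2.0, 50)      # gently sloping ground profile
z_obj = z.copy()
z_obj[20:24] += 5.0                # a "building": 5 m tall, 4 samples wide
# points rising more than 0.5 m above the opened surface -> non-ground
ground_mask = (z_obj - morphological_open(z_obj, 9)) < 0.5
```

Because the opening follows the gentle slope, the ramp survives while the narrow building is flagged; the "cut-off" problem appears when the same fixed threshold meets a steep natural hill, which is what the adaptive thresholds in the dissertation address.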
Synchronization of generalized Henon map by using adaptive fuzzy controller
Energy Technology Data Exchange (ETDEWEB)
Xue Yueju E-mail: xueyj@mail.tsinghua.edu.cn; Yang Shiyuan E-mail: ysy-dau@tsinghua.edu.cn
2003-08-01
In this paper, an adaptive fuzzy control method is presented to synchronize the model-unknown discrete-time generalized Henon map. The proposed method is robust to approximation errors and disturbances because it integrates the merits of adaptive fuzzy control and variable structure control. Moreover, it can realize synchronization of non-identical chaotic systems. Simulation results for the generalized Henon map show that the method not only synchronizes the model-unknown generalized Henon map but is also robust against system noise. These merits are advantageous for engineering realization.
Synchronization of generalized Henon map by using adaptive fuzzy controller
International Nuclear Information System (INIS)
Xue Yueju; Yang Shiyuan
2003-01-01
In this paper, an adaptive fuzzy control method is presented to synchronize the model-unknown discrete-time generalized Henon map. The proposed method is robust to approximation errors and disturbances because it integrates the merits of adaptive fuzzy control and variable structure control. Moreover, it can realize synchronization of non-identical chaotic systems. Simulation results for the generalized Henon map show that the method not only synchronizes the model-unknown generalized Henon map but is also robust against system noise. These merits are advantageous for engineering realization.
Solving Fully Fuzzy Linear System of Equations in General Form
Directory of Open Access Journals (Sweden)
A. Yousefzadeh
2012-06-01
In this work, we propose an approach for computing the positive solution of a fully fuzzy linear system where the coefficient matrix is a fuzzy $n\times n$ matrix. To do this, we use the arithmetic operations on fuzzy numbers introduced by Kaufmann and convert the fully fuzzy linear system into two $n\times n$ and $2n\times 2n$ crisp linear systems. If the solutions of these linear systems do not satisfy the positive fuzzy solution condition, we introduce a constrained least-squares problem to obtain the optimal fuzzy vector solution by applying a ranking function to the given fully fuzzy linear system. Using our proposed method, the fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.
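The reduction of a fuzzy system to crisp systems can be illustrated in a deliberately simplified setting: symmetric triangular fuzzy numbers represented as (center, spread) pairs and a crisp non-negative coefficient matrix, rather than the paper's fully fuzzy matrix. Under these assumptions the fuzzy system splits into one crisp system for the centers and one for the spreads.

```python
import numpy as np

# Simplified sketch (not the paper's construction): crisp non-negative A,
# right-hand side given as symmetric triangular fuzzy numbers (center, spread).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b_center = np.array([5.0, 10.0])
b_spread = np.array([1.0, 1.5])

# The fuzzy system A x = b splits into two crisp linear systems.
x_center = np.linalg.solve(A, b_center)
x_spread = np.linalg.solve(A, b_spread)  # valid because A >= 0 elementwise

# A positive fuzzy solution requires center - spread >= 0 in every component.
positive = bool(np.all(x_center - x_spread >= 0))
```

When the positivity condition fails, the abstract's approach switches to a constrained least-squares problem guided by a ranking function; that step is omitted here.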
Adaptive Generation and Diagnostics of Linear Few-Cycle Light Bullets
Directory of Open Access Journals (Sweden)
Martin Bock
2013-02-01
Recently we introduced the class of highly localized wavepackets (HLWs as a generalization of optical Bessel-like needle beams. Here we report on the progress in this field. In contrast to pulsed Bessel beams and Airy beams, ultrashort-pulsed HLWs propagate with high stability in both spatial and temporal domain, are nearly paraxial (supercollimated, have fringe-less spatial profiles and thus represent the best possible approximation to linear “light bullets”. Like Bessel beams and Airy beams, HLWs show self-reconstructing behavior. Adaptive HLWs can be shaped by ultraflat three-dimensional phase profiles (generalized axicons which are programmed via calibrated grayscale maps of liquid-crystal-on-silicon spatial light modulators (LCoS-SLMs. Light bullets of even higher complexity can either be freely formed from quasi-continuous phase maps or discretely composed from addressable arrays of identical nondiffracting beams. The characterization of few-cycle light bullets requires spatially resolved measuring techniques. In our experiments, wavefront, pulse and phase were detected with a Shack-Hartmann wavefront sensor, 2D-autocorrelation and spectral phase interferometry for direct electric-field reconstruction (SPIDER. The combination of the unique propagation properties of light bullets with the flexibility of adaptive optics opens new prospects for applications of structured light like optical tweezers, microscopy, data transfer and storage, laser fusion, plasmon control or nonlinear spectroscopy.
Non-linear and adaptive control of a refrigeration system
DEFF Research Database (Denmark)
Rasmussen, Henrik; Larsen, Lars F. S.
2011-01-01
In a refrigeration process heat is absorbed in an evaporator by evaporating a flow of liquid refrigerant at low pressure and temperature. Controlling the evaporator inlet valve and the compressor in such a way that a high degree of liquid filling in the evaporator is obtained at all compressor capacities ensures a high energy efficiency. The level of liquid filling is indirectly measured by the superheat. The introduction of variable-speed compressors and electronic expansion valves enables the use of more sophisticated control algorithms, giving a higher degree of performance and, just as important, the capability of adapting to a variety of systems. This paper proposes a novel method for superheat and capacity control of refrigeration systems, namely controlling the superheat by the compressor speed and the capacity by the refrigerant flow. A new low-order nonlinear model of the evaporator is developed.
DEFF Research Database (Denmark)
Porto da Silva, Edson; Zibar, Darko
2016-01-01
Simple analytical widely linear complex-valued models for IQ-imbalance and IQ-skew effects in multicarrier transmitters are presented. To compensate for such effects, a 4×4 MIMO widely linear adaptive equalizer is proposed and experimentally validated.
Cavity characterization for general use in linear electron accelerators
International Nuclear Information System (INIS)
Souza Neto, M.V. de.
1985-01-01
The main objective of this work is to develop measurement techniques for the characterization of microwave cavities used in linear electron accelerators. Methods are developed for the measurement of parameters that are essential to the design of an accelerator structure using conventional techniques of resonant cavities at low power. Disk-loaded cavities were designed and built, similar to those in most existing linear electron accelerators. As a result, the methods developed and the estimated accuracy were compared with those from other investigators. The results of this work are relevant for the design of cavities with the objective of developing linear electron accelerators. (author)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The BLSA method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by some stability metrics to achieve robustness. By applying the BLSA method, the adaptive gain is adjusted during adaptation in order to meet certain phase-margin requirements. The analysis of metrics-driven adaptive control is evaluated for a linear damaged twin-engine generic transport aircraft model. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.
Generalization of the linear algebraic method to three dimensions
International Nuclear Information System (INIS)
Lynch, D.L.; Schneider, B.I.
1991-01-01
We present a numerical method for the solution of the Lippmann-Schwinger equation for electron-molecule collisions. By performing a three-dimensional numerical quadrature, this approach avoids both a basis-set representation of the wave function and a partial-wave expansion of the scattering potential. The resulting linear equations, analogous in form to the one-dimensional linear algebraic method, are solved with the direct iteration-variation method. Several numerical examples are presented. The prospect of using this numerical quadrature scheme for electron-polyatomic-molecule collisions is discussed.
Directory of Open Access Journals (Sweden)
Shangli Zhang
2009-01-01
By using the methods of linear algebra and matrix inequality theory, we obtain a characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible are established.
Linearly convergent stochastic heavy ball method for minimizing generalization error
Loizou, Nicolas; Richtarik, Peter
2017-01-01
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss…
General guidelines solution for linear programming with fuzzy coefficients
Directory of Open Access Journals (Sweden)
Sergio Gerardo de los Cobos Silva
2013-08-01
This work introduces possibilistic programming and fuzzy programming as paradigms for solving linear programming problems in which the coefficients of the model, or of its constraints, are fuzzy numbers rather than exact (crisp) numbers. Some examples based on [1] are presented.
A General Linear Method for Equating with Small Samples
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
Unmasking the linear behaviour of slow motor adaptation to prolonged convergence.
Erkelens, Ian M; Thompson, Benjamin; Bobier, William R
2016-06-01
Adaptation to changing environmental demands is central to maintaining optimal motor system function. Current theories suggest that adaptation in both the skeletal-motor and oculomotor systems involves a combination of fast (reflexive) and slow (recalibration) mechanisms. Here we used the oculomotor vergence system as a model to investigate the mechanisms underlying slow motor adaptation. Unlike reaching with the upper limbs, vergence is less susceptible to changes in cognitive strategy that can affect the behaviour of motor adaptation. We tested the hypothesis that mechanisms of slow motor adaptation reflect early neural processing by assessing the linearity of adaptive responses over a large range of stimuli. Using varied disparity stimuli in conflict with accommodation, the slow adaptation of tonic vergence was found to exhibit a linear response whereby the rate (R² = 0.85, P < 0.0001) and amplitude (R² = 0.65, P < 0.0001) of the adaptive effects increased proportionally with stimulus amplitude. These results suggest that this slow adaptive mechanism is an early neural process, implying a fundamental physiological nature that is potentially dominated by subcortical and cerebellar substrates. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
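The linearity test reported above amounts to an ordinary least-squares fit of adaptive response against stimulus amplitude plus a coefficient of determination. A minimal sketch with synthetic data (the numbers below are made up, not the study's measurements):

```python
import numpy as np

# Synthetic stimulus amplitudes and adaptation amplitudes, roughly linear.
stim = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
ampl = np.array([0.9, 2.1, 2.9, 4.2, 4.9])

# Least-squares line and coefficient of determination R^2.
slope, intercept = np.polyfit(stim, ampl, 1)
pred = slope * stim + intercept
r2 = 1 - np.sum((ampl - pred) ** 2) / np.sum((ampl - np.mean(ampl)) ** 2)
```

An R² close to 1 with a positive slope is what "rate and amplitude increase proportionally with stimulus amplitude" means operationally.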
Dark energy cosmology with generalized linear equation of state
International Nuclear Information System (INIS)
Babichev, E; Dokuchaev, V; Eroshenko, Yu
2005-01-01
Dark energy with the usually used equation of state p = wρ, where w = const < 0, is hydrodynamically unstable. To overcome this problem we consider the cosmology with a generalized linear equation of state p = α(ρ − ρ₀), where the constants α and ρ₀ are free parameters. This non-homogeneous linear equation of state provides the description of both hydrodynamically stable (α > 0) and unstable (α < 0) fluids. In particular, the considered cosmological model describes the hydrodynamically stable dark (and phantom) energy. The possible types of cosmological scenarios in this model are determined and classified in terms of attractors and unstable points by using phase trajectory analysis. For the dark energy case, some distinctive types of cosmological scenarios are possible: (i) the universe with the de Sitter attractor at late times, (ii) the bouncing universe, (iii) the universe with the big rip and with the anti-big rip. In the framework of the linear equation of state the universe filled with phantom energy, w < −1, may have either the de Sitter attractor or the big rip.
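The late-time de Sitter attractor for α > 0 can be illustrated by integrating the exact continuity equation dρ/dN = −3(ρ + p) in e-fold time N = ln a with p = α(ρ − ρ₀). The parameter values and step size below are illustrative; the fixed point, where ρ + p = 0, is ρ* = αρ₀/(1 + α).

```python
# Illustrative integration of the continuity equation with the linear
# equation of state p = alpha*(rho - rho0); units are arbitrary.
alpha, rho0 = 1.0, 2.0
rho = 10.0
dN = 1e-3
for _ in range(2000):
    p = alpha * (rho - rho0)
    rho += -3.0 * (rho + p) * dN      # explicit Euler step in N = ln a

# De Sitter fixed point: rho + p = 0  =>  rho* = alpha*rho0/(1 + alpha).
rho_star = alpha * rho0 / (1 + alpha)
```

For α > −1 the density relaxes exponentially (in e-folds) to ρ*, which is the attractor behaviour of scenario (i) in the abstract.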
On Self-Adaptive Method for General Mixed Variational Inequalities
Directory of Open Access Journals (Sweden)
Abdellah Bnouhachem
2008-01-01
We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since the general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.
Generalized projective synchronization of chaotic systems via adaptive learning control
International Nuclear Information System (INIS)
Yun-Ping, Sun; Jun-Min, Li; Hui-Lin, Wang; Jiang-An, Wang
2010-01-01
In this paper, a learning control approach is applied to the generalized projective synchronisation (GPS) of different chaotic systems with unknown periodically time-varying parameters. Using the Lyapunov–Krasovskii functional stability theory, a differential-difference mixed parametric learning law and an adaptive learning control law are constructed to make the states of two different chaotic systems asymptotically synchronised. The scheme is successfully applied to the generalized projective synchronisation between the Lorenz system and the Chen system. Moreover, numerical simulation results are used to verify the effectiveness of the proposed scheme.
Linearly convergent stochastic heavy ball method for minimizing generalization error
Loizou, Nicolas
2017-10-30
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
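Since the analysis above constrains itself to quadratic loss, the method is easy to sketch: SGD steps with a fixed stepsize plus a heavy ball momentum term. The stepsize, momentum coefficient, noise level, and iteration count below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])        # quadratic loss f(x) = 0.5 * x^T A x
x = np.array([4.0, -2.0])
v = np.zeros(2)
step, beta = 0.1, 0.5             # fixed stepsize and momentum (illustrative)

for _ in range(200):
    g = A @ x + 0.01 * rng.standard_normal(2)  # stochastic gradient
    v = beta * v - step * g                    # heavy ball momentum term
    x = x + v                                  # SGD step with momentum
```

With a fixed stepsize the iterates contract linearly down to a noise floor set by the gradient noise, which is the regime the convergence result describes.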
Single image super-resolution using locally adaptive multiple linear regression.
Yu, Soohwan; Kang, Wonseok; Ko, Seungyong; Paik, Joonki
2015-12-01
This paper presents a regularized super-resolution (SR) reconstruction method using locally adaptive multiple linear regression to overcome the limitation of spatial resolution of digital images. In order to make the SR problem better-posed, the proposed method incorporates the locally adaptive multiple linear regression into the regularization process as a local prior. The local regularization prior assumes that the target high-resolution (HR) pixel is generated by a linear combination of similar pixels in differently scaled patches and optimum weight parameters. In addition, we adopt a modified version of the nonlocal means filter as a smoothness prior to utilize the patch redundancy. Experimental results show that the proposed algorithm better restores HR images than existing state-of-the-art methods in terms of the most common objective measures in the literature.
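The local prior idea, that a target pixel is a linear combination of similar pixels with weights fitted by least squares, can be sketched on synthetic data. This is only an illustration of the regression step; the paper's actual method embeds it in a regularized SR reconstruction with patch search.

```python
import numpy as np

# Synthetic training data: each sample holds 4 'similar pixel' intensities,
# and the target pixel is (approximately) a fixed linear combination of them.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
w_true = np.array([0.4, 0.3, 0.2, 0.1])       # hypothetical true weights
y = X @ w_true + 0.01 * rng.standard_normal(50)

# Optimum weights by least squares, then reconstruct one target pixel.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
hr_pixel = X[0] @ w
```

The fitted weights recover the generating combination closely, which is why a locally fitted linear model makes a useful prior when the patches really are similar.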
General solutions of second-order linear difference equations of Euler type
Directory of Open Access Journals (Sweden)
Akane Hongyo
2017-01-01
The purpose of this paper is to give general solutions of linear difference equations which are related to the Euler-Cauchy differential equation \(y''+(\lambda/t^2)y=0\) or more general linear differential equations. We also show that the asymptotic behavior of solutions of the linear difference equations is similar to that of solutions of the linear differential equations.
Directory of Open Access Journals (Sweden)
Hussein Abdel-jaber
2015-10-01
Congestion control is one of the hot research topics that help maintain the performance of computer networks. This paper compares three Active Queue Management (AQM) methods, namely Adaptive Gentle Random Early Detection (Adaptive GRED), Random Early Dynamic Detection (REDD), and the GRED Linear analytical model, with respect to different performance measures. Adaptive GRED and REDD are implemented based on simulation, whereas GRED Linear is implemented as a discrete-time analytical model. Several performance measures are used to evaluate the effectiveness of the compared methods, mainly mean queue length, throughput, average queueing delay, overflow packet loss probability, and packet dropping probability. The ultimate aim is to identify the method that offers the highest satisfactory performance in non-congestion or congestion scenarios. The first comparison results, based on different packet arrival probability values, show that GRED Linear provides better mean queue length, average queueing delay and packet overflow probability than the Adaptive GRED and REDD methods in the presence of congestion. Further, using the same evaluation measures, Adaptive GRED offers a more satisfactory performance than REDD when heavy congestion is present. When the finite queue capacity varies, the GRED Linear model provides the highest satisfactory performance with reference to mean queue length and average queueing delay, and all the compared methods provide similar throughput performance. However, when the finite capacity value is large, the compared methods have similar results in regard to the probabilities of both packet overflowing and packet dropping.
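The RED family of AQM methods compared above shares one core mechanism: the packet drop probability ramps linearly with the average queue length between two thresholds. A minimal sketch of that classic RED-style rule (illustrative parameter values, not the paper's GRED Linear model):

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED-style dropping rule: no drops below min_th, a linear ramp
    of the drop probability between min_th and max_th, certain drop above."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

# Example: thresholds 5 and 15 packets, maximum ramp probability 0.1.
p_low = red_drop_probability(2, 5, 15, 0.1)    # below min_th: never drop
p_mid = red_drop_probability(10, 5, 15, 0.1)   # halfway up the ramp
p_hi = red_drop_probability(20, 5, 15, 0.1)    # above max_th: always drop
```

Variants such as Gentle RED soften the jump at max_th with a second ramp, and adaptive variants tune max_p online; the linear ramp itself is what the analytical queue models capture.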
Generalization in adaptation to stable and unstable dynamics.
Directory of Open Access Journals (Sweden)
Abdelhamid Kadiallah
Humans skillfully manipulate objects and tools despite their inherent instability. In order to succeed at these tasks, the sensorimotor control system must build an internal representation of both the force and the mechanical impedance. As it is not practical to either learn or store motor commands for every possible future action, the sensorimotor control system generalizes a control strategy for a range of movements based on learning performed over a set of movements. Here, we introduce a computational model for this learning and generalization, which specifies how feedforward muscle activity is learned as a function of the state space. Specifically, by incorporating co-activation as a function of error into the feedback command, we are able to derive an algorithm from a gradient descent minimization of motion error and effort, subject to maintaining a stability margin. This algorithm can be used to learn to coordinate any of a variety of motor primitives such as force fields, muscle synergies, physical models or artificial neural networks. This model of human learning and generalization is able to adapt to both stable and unstable dynamics, and provides a controller for generating efficient adaptive motor behavior in robots. Simulation results exhibit predictions consistent with all experiments on learning of novel dynamics requiring adaptation of force and impedance, and enable us to re-examine some of the previous interpretations of experiments on generalization.
Generalized linear mixed models modern concepts, methods and applications
Stroup, Walter W
2012-01-01
PART I (The Big Picture): Modeling Basics; What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model; Design Matters; Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication; Setting the Stage; Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data
Directory of Open Access Journals (Sweden)
Domingues M. O.
2013-12-01
We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge–Kutta method for local time stepping and an embedded Runge–Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparing with the available exact solution.
Robust Comparison of the Linear Model Structures in Self-tuning Adaptive Control
DEFF Research Database (Denmark)
Zhou, Jianjun; Conrad, Finn
1989-01-01
The Generalized Predictive Controller (GPC) is extended to systems with a generalized linear model structure which contains a number of choices of linear model structures. The Recursive Prediction Error Method (RPEM) is used to estimate the unknown parameters of the linear model structures to constitute a GPC self-tuner. Different commonly used linear model structures are compared and evaluated by applying them to the extended GPC self-tuner as well as to the special cases of the GPC, the GMV and MV self-tuners. The simulation results show how the choice of model structure affects the input-output behaviour of self-tuning controllers.
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
Kanevski, Mikhail
2015-04-01
The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed by geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
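As the abstract notes, a GRNN is essentially the Nadaraya-Watson kernel regression estimator: the prediction is a kernel-weighted average of the training targets. A minimal one-dimensional sketch with an isotropic Gaussian kernel (the adaptive, anisotropic-kernel version used in the study additionally tunes a bandwidth per input dimension):

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """GRNN / Nadaraya-Watson prediction: Gaussian-kernel weighted average
    of the training targets, with bandwidth sigma."""
    d2 = (x_train - x_query) ** 2
    w = np.exp(-d2 / (2 * sigma ** 2))
    return np.sum(w * y_train) / np.sum(w)

# Noise-free samples of y = x^2; predict between the two middle points.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 4.0, 9.0])
pred = grnn_predict(x, y, 1.5, sigma=0.3)
```

With a small bandwidth the query at 1.5 is dominated by its two nearest neighbours, so the prediction is close to their average; feature selection in the anisotropic variant works by letting irrelevant dimensions get very large bandwidths.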
Nguyen, Nhan
2013-01-01
This paper presents the optimal control modification for linear uncertain plants. The Lyapunov analysis shows that the modification parameter has a limiting value depending on the nature of the uncertainty. The optimal control modification exhibits a linear asymptotic property that enables it to be analyzed in a linear time invariant framework for linear uncertain plants. The linear asymptotic property shows that the closed-loop plants in the limit possess a scaled input-output mapping. Using this property, we can derive an analytical closed-loop transfer function in the limit as the adaptive gain tends to infinity. The paper revisits the Rohrs counterexample problem that illustrates the nature of non-robustness of model-reference adaptive control in the presence of unmodeled dynamics. An analytical approach is developed to compute exactly the modification parameter for the optimal control modification that stabilizes the plant in the Rohrs counterexample. The linear asymptotic property is also used to address output feedback adaptive control for non-minimum phase plants with a relative degree 1.
[Pregnancy in the context of general adaptation syndrome].
Gur'ianov, V A; Pyregov, A V; Tolmachev, G N; Volodin, A V
2007-01-01
Based on their own findings and the data available in the literature on pregnancy, including pregnancy complicated by gestosis, the authors consider these conditions in the context of Selye's general adaptation syndrome. They identify its basic links (the autonomic nervous and cardiovascular systems), the function of which is affected by all the physiological and pathophysiological processes involved in its development. There is a high likelihood of baseline impairment of adaptation processes in these links, which may lead to an inability to adapt (dysadaptation) by the moment of delivery. The paper gives the current interpretation of functional disorders, called Zangemeister's triad in 1913, from the present-day point of view of the evaluation of pregnancy as a systemic inflammatory response syndrome and, probably, an adaptation disease. Based on the results of analyzing the data available in the literature, the authors physiologically substantiate the basic trends in the modulation of impaired development processes of the general adaptation syndrome towards the completion of pregnancy and surgical delivery.
Intelligent control of non-linear dynamical system based on the adaptive neurocontroller
Engel, E.; Kovalev, I. V.; Kobezhicov, V.
2015-10-01
This paper presents an adaptive neuro-controller for intelligent control of a non-linear dynamical system. The adaptive neuro-controller, formed as a fuzzy selective neural net, creates an effective control signal from the system's state under random perturbations. The validity and advantages of the proposed adaptive neuro-controller are demonstrated by numerical simulations. The simulation results show that the proposed controller scheme achieves real-time control speed and competitive performance compared to PID and fuzzy logic controllers.
A general algorithm for computing distance transforms in linear time
Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS
2000-01-01
A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases, each consisting of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the…
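The forward/backward scan structure described above can be illustrated in one dimension with the city-block metric; the actual algorithm generalizes this to 2-D images (and to the exact Euclidean metric) via its separate column and row phases. A minimal 1-D sketch:

```python
def distance_transform_1d(binary):
    """Two-scan (forward + backward) city-block distance transform of a 1-D
    binary image: distance from each pixel to the nearest foreground (1)."""
    INF = len(binary) + 1
    d = [0 if v else INF for v in binary]
    for i in range(1, len(d)):             # forward scan: nearest 1 to the left
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(len(d) - 2, -1, -1):    # backward scan: nearest 1 to the right
        d[i] = min(d[i], d[i + 1] + 1)
    return d

d = distance_transform_1d([1, 0, 0, 1, 0])  # nearest-foreground distances
```

Each scan propagates distances in one direction in linear time, so the whole transform is linear in the number of pixels, which is the property the title refers to.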
Generalized Heisenberg algebra and (non linear) pseudo-bosons
Bagarello, F.; Curado, E. M. F.; Gazeau, J. P.
2018-04-01
We propose a deformed version of the generalized Heisenberg algebra by using techniques borrowed from the theory of pseudo-bosons. In particular, this analysis is relevant when non self-adjoint Hamiltonians are needed to describe a given physical system. We also discuss relations with nonlinear pseudo-bosons. Several examples are discussed.
Adaptive LINE-P: An Adaptive Linear Energy Prediction Model for Wireless Sensor Network Nodes.
Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul
2018-04-05
In the context of wireless sensor networks, energy prediction models are increasingly useful tools that can facilitate the power management of the wireless sensor network (WSN) nodes. However, most of the existing models suffer from the so-called fixed weighting parameter, which limits their applicability when it comes to, e.g., solar energy harvesters with varying characteristics. Thus, in this article we propose the Adaptive LINE-P (all cases) model that calculates adaptive weighting parameters based on the stored energy profiles. Furthermore, we also present a profile compression method to reduce the memory requirements. To determine the performance of our proposed model, we have used real data for the solar and wind energy profiles. The simulation results show that our model achieves 90-94% accuracy and that the compressed method reduces memory overheads by 50% as compared to state-of-the-art models.
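The idea of replacing a fixed weighting parameter with one derived from stored energy profiles can be sketched as a blend of the stored profile value and the current observation. Both functions below are hypothetical illustrations of that idea, not the published Adaptive LINE-P equations; the adaptive rule here simply trusts whichever source has the smaller recent error.

```python
def predict_energy(profile, observed, alpha):
    """One-step energy prediction: EWMA-style blend of the stored profile
    value for the next slot and the currently observed energy."""
    return alpha * observed + (1 - alpha) * profile

def adaptive_alpha(errors_obs, errors_prof):
    """Hypothetical adaptive weighting: the weight on the observation grows
    when the profile's recent absolute errors dominate, and vice versa."""
    e_o, e_p = sum(errors_obs), sum(errors_prof)
    return e_p / (e_o + e_p) if (e_o + e_p) > 0 else 0.5

# Profile has been erring twice as much as the observation lately,
# so the observation gets weight 2/3.
alpha = adaptive_alpha([1.0, 1.0], [3.0, 1.0])
pred = predict_energy(profile=10.0, observed=16.0, alpha=alpha)
```

A fixed-alpha model is recovered by skipping `adaptive_alpha`; the point of the adaptive variant is that alpha tracks harvester characteristics (e.g. solar vs wind) instead of being tuned once.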
Energy Technology Data Exchange (ETDEWEB)
Yavari, M., E-mail: yavari@iaukashan.ac.ir [Islamic Azad University, Kashan Branch (Iran, Islamic Republic of)
2016-06-15
We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.
Cheung, Y M; Leung, W M; Xu, L
1997-01-01
We propose a prediction model called Rival Penalized Competitive Learning (RPCL) with the Combined Linear Predictor method (CLP), which involves a set of local linear predictors such that a prediction is made by combining some activated predictors through a gating network (Xu et al., 1994). Furthermore, we present an improved variant named Adaptive RPCL-CLP that includes an adaptive learning mechanism as well as a data pre- and post-processing scheme. We compare them with some existing models by demonstrating their performance on two real-world financial time series: a Chinese stock price and an exchange-rate series of US Dollar (USD) versus Deutschmark (DEM). Experiments have shown that Adaptive RPCL-CLP not only outperforms the other approaches with the smallest prediction error and training costs, but also brings in considerably high profits in the trading simulation of the foreign exchange market.
Linear-time general decoding algorithm for the surface code
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
The Morava E-theories of finite general linear groups
Mattafirri, Sara
…a block detector a few centimeters in size is used. The resolution significantly improves with increasing energy of the photons, and it degrades roughly linearly with increasing distance from the detector. Larger detection efficiency can be obtained at the expense of resolution or via targeted configurations of the detector. Results pave the way for image reconstruction of practical gamma-ray emitting sources.
A fully general and adaptive inverse analysis method for cementitious materials
DEFF Research Database (Denmark)
Jepsen, Michael S.; Damkilde, Lars; Lövgren, Ingemar
2016-01-01
The paper presents an adaptive method for inverse determination of the tensile σ-w relationship, direct tensile strength, and Young's modulus of cementitious materials. The method facilitates an inverse analysis with a multi-linear σ-w function. Usually, simple bi- or tri-linear functions … are applied when modeling the fracture mechanisms in cementitious materials, but the rapid development of pseudo-strain-hardening, fiber-reinforced cementitious materials requires inverse methods capable of treating multi-linear σ-w functions. The proposed method is fully general in the sense that it relies … of notched specimens and simulated data from a nonlinear hinge model. The paper shows that the results obtained by means of the proposed method are independent of the initial shape of the σ-w function and the initial guess of the tensile strength. The method provides very accurate fits, and the increased …
Lei, Meizhen; Wang, Liqiang
2018-01-01
The Halbach-type linear oscillatory motor (HT-LOM) is multi-variable, highly coupled, nonlinear, and uncertain, making it difficult to obtain satisfactory results with conventional PID control. An incremental adaptive fuzzy controller (IAFC) for stroke tracking is presented, which combines the merits of PID control, the fuzzy inference mechanism, and an adaptive algorithm. An integral operation is added to the conventional fuzzy control algorithm. The fuzzy scale factor can be tuned online according to the load force and the stroke command. The simulation results indicate that the proposed control scheme achieves satisfactory stroke-tracking performance and is robust with respect to parameter variations and external disturbances.
Nonlinear discrete-time multirate adaptive control of non-linear vibrations of smart beams
Georgiou, Georgios; Foutsitzi, Georgia A.; Stavroulakis, Georgios E.
2018-06-01
The nonlinear adaptive digital control of a smart piezoelectric beam is considered. It is shown that, in a sampled-data context, a multirate control strategy provides an appropriate framework for achieving vibration regulation while ensuring the stability of the whole control system. Under parametric uncertainties in the model parameters (damping ratios, frequencies, levels of nonlinearity and cross-coupling, control input parameters), the scheme is completed with an adaptation law deduced from hyperstability concepts. This results in the asymptotic satisfaction of the control objectives at the sampling instants. Simulation results are presented.
Parallel adaptation of general three-dimensional hybrid meshes
International Nuclear Information System (INIS)
Kavouklis, Christos; Kallinderis, Yannis
2010-01-01
A new parallel dynamic mesh adaptation and load-balancing algorithm for general hybrid grids has been developed. The meshes considered in this work are composed of four kinds of elements: tetrahedra, prisms, hexahedra, and pyramids, which poses a challenge to parallel mesh adaptation. The additional complexity imposed by the presence of multiple element types especially affects data migration and updates of local and interpartition data structures. Efficient partitioning of hybrid meshes has been accomplished by transforming them into suitable graphs and using serial graph-partitioning algorithms. Communication among processors is based on the faces of the interpartition boundary, and Dijkstra's termination-detection algorithm is employed to ensure proper flagging of edges for refinement. An inexpensive dynamic load-balancing strategy is introduced to redistribute the work load among processors after adaptation. In particular, only the initial coarse mesh, with proper weighting, is balanced, which yields savings in computation time and a relatively simple implementation of mesh-quality preservation rules, while facilitating coarsening of refined elements. Special algorithms are employed for (i) data migration and dynamic updates of the local data structures, (ii) determination of the resulting interpartition boundary, and (iii) identification of the communication pattern of processors. Several representative applications are included to evaluate the method.
International Nuclear Information System (INIS)
Barr, D.S.
1993-01-01
It is desired to design a position and angle jitter control system for pulsed linear accelerators that increases the accuracy of correction over that achieved by currently used standard feedback jitter control systems. Interpulse (pulse-to-pulse) correction is performed using the average value of each macropulse. The configuration of such a system resembles that of a standard feedback correction system, with the addition of an adaptive controller that dynamically adjusts the gain-phase contour of the feedback electronics. The adaptive controller makes changes to the analog feedback system between macropulses. A simulation of such a system using real measured jitter data from the Stanford Linear Collider was shown to decrease the average rms jitter by a factor of more than two and a half. The system also increased and stabilized the correction at high frequencies, a typical problem with standard feedback systems.
Nonparametric adaptive estimation of linear functionals for low frequency observed Lévy processes
Kappus, Johanna
2012-01-01
For a Lévy process X having finite variation on compact sets and finite first moments, µ(dx) = x ν(dx) is a finite signed measure which completely describes the jump dynamics. We construct kernel estimators for linear functionals of µ and provide rates of convergence under regularity assumptions. Moreover, we consider adaptive estimation via model selection and propose a new strategy for the data-driven choice of the smoothing parameter.
On-line validation of linear process models using generalized likelihood ratios
International Nuclear Information System (INIS)
Tylee, J.L.
1981-12-01
A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see whether the process has moved far enough away from the nominal linear model's operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural-circulation steam generator.
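The core test described above can be sketched in a few lines: for Gaussian residuals with known variance, the generalized likelihood ratio for a mean shift reduces to a simple statistic on the windowed residual mean. Everything below (signals, variance, threshold) is invented for illustration and is not the paper's steam-generator model:

```python
import numpy as np

def glr_mean_shift(residuals, sigma2):
    # log generalized likelihood ratio: H0 (zero-mean residuals, model valid)
    # vs H1 (residual mean shifted to its MLE, model has drifted)
    n = len(residuals)
    return n * np.mean(residuals) ** 2 / (2.0 * sigma2)

rng = np.random.default_rng(0)
sigma2 = 0.04
valid = rng.normal(0.0, np.sqrt(sigma2), 200)    # model still valid
drifted = rng.normal(0.5, np.sqrt(sigma2), 200)  # process left the operating point

lam_valid = glr_mean_shift(valid, sigma2)
lam_drifted = glr_mean_shift(drifted, sigma2)
threshold = 10.0                                  # hypothetical decision level
print(lam_valid, lam_drifted)
```

When the statistic exceeds the threshold, generation of a new linear model would be triggered.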
Adaptive matching of the iota ring linear optics for space charge compensation
Energy Technology Data Exchange (ETDEWEB)
Romanov, A. [Fermilab; Bruhwiler, D. L. [RadiaSoft, Boulder; Cook, N. [RadiaSoft, Boulder; Hall, C. [RadiaSoft, Boulder
2016-10-09
Many present and future accelerators must operate with high-intensity beams, where distortions induced by space-charge forces are among the major limiting factors. A betatron tune depression above approximately 0.1 per cell leads to significant distortion of the linear optics. Many aspects of machine operation depend on proper relations between lattice functions and phase advances, and can be improved with proper treatment of space-charge effects. We implement an adaptive algorithm for linear lattice re-matching with a full account of space charge in the linear approximation for the case of Fermilab's IOTA ring. The method is based on a search for initial second moments that yield a closed solution and, at the same time, satisfy a predefined set of goals for emittances, beta functions, dispersions, and phase advances at and between points of interest. An iterative technique based on singular value decomposition is used to search for the optimum by varying a wide array of model parameters.
Sensitivity theory for general non-linear algebraic equations with constraints
International Nuclear Information System (INIS)
Oblow, E.M.
1977-04-01
Sensitivity theory has been developed to a high state of sophistication for applications involving solutions of the linear Boltzmann equation or approximations to it. The success of this theory in the field of radiation transport has prompted study of possible extensions of the method to more general systems of non-linear equations. Initial work in the U.S. and in Europe on the reactor fuel cycle shows that the sensitivity methodology works equally well for those non-linear problems studied to date. The general non-linear theory for algebraic equations is summarized and applied to a class of problems whose solutions are characterized by constrained extrema. Such equations form the basis of much work on energy-systems modelling and the econometrics of power production and distribution. It is valuable to have a sensitivity theory available for these problem areas, since it is difficult to repeatedly solve complex non-linear equations to find the effects of alternative input assumptions or the uncertainties associated with predictions of system behavior. The sensitivity theory for a linear system of algebraic equations with constraints, which can be solved using linear programming techniques, is discussed. The role of the constraints in simplifying the problem so that sensitivity methodology can be applied is highlighted. The general non-linear method is summarized and applied to a non-linear programming problem in particular. Conclusions are drawn about the applicability of the method to practical problems.
Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.
2014-01-01
We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive model for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
Martini, Ruud; Kersten, P.H.M.
1983-01-01
Using 1-1 mappings, the complete symmetry groups of contact transformations of general linear second-order ordinary differential equations are determined from two independent solutions of those equations, and applied to the harmonic oscillator with and without damping.
General purpose graphic processing unit implementation of adaptive pulse compression algorithms
Cai, Jingxiao; Zhang, Yan
2017-07-01
This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. However, the model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimator. Simulation showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
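The bias described above is essentially Jensen's inequality: with a nonlinear link, the inverse link evaluated at the mean covariate differs from the mean of the subject-level fitted responses. A minimal numeric sketch with a hypothetical logistic model (coefficients and data invented here, not from the paper) shows the gap:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(0.0, 2.0, n)            # baseline covariate
beta0, beta1 = -1.0, 1.5               # hypothetical true coefficients
logistic = lambda t: 1.0 / (1.0 + np.exp(-t))

# "model-based" group mean: response evaluated at the mean covariate
mean_at_mean_x = logistic(beta0 + beta1 * x.mean())
# consistent group mean: average of the subject-level predicted responses
mean_of_means = logistic(beta0 + beta1 * x).mean()
print(mean_at_mean_x, mean_of_means)
```

For a linear (identity) link the two quantities coincide, which is why the issue appears only in generalized linear models.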
Symmetry Adaptation of the Rotation-Vibration Theory for Linear Molecules
Directory of Open Access Journals (Sweden)
Katy L. Chubb
2018-04-01
A numerical application of linear-molecule symmetry properties, described by the D∞h point group, is formulated in terms of lower-order symmetry groups Dnh with finite n. Character tables and irreducible-representation transformation matrices are presented for Dnh groups with arbitrary n. These groups can subsequently be used in the construction of symmetry-adapted ro-vibrational basis functions for solving the Schrödinger equations of linear molecules. Their implementation into the symmetrisation procedure based on a set of "reduced" vibrational eigenvalue problems with simplified Hamiltonians is used as a practical example. It is shown how the solutions of these eigenvalue problems can also be extended to include the classification of basis-set functions using ℓ, the eigenvalue (in units of ℏ) of the vibrational angular momentum operator L̂z. This facilitates the symmetry adaptation of the basis-set functions in terms of the irreducible representations of Dnh. ¹²C₂H₂ is used as an example of a linear molecule of D∞h point-group symmetry to illustrate the symmetrisation procedure of the variational nuclear-motion program TROVE (Theoretical ROVibrational Energies).
An adaptive noise cancelling system used for beam control at the Stanford Linear Accelerator Center
International Nuclear Information System (INIS)
Himel, T.; Allison, S.; Grossberg, P.; Hendrickson, L.; Sass, R.; Shoaee, H.
1993-06-01
The SLAC Linear Collider now has a total of twenty-four beam-steering feedback loops used to keep the electron and positron beams on their desired trajectories. Seven of these loops measure and control the same beam as it proceeds down the linac, through the arcs, to the final focus. Ideally, each loop should correct only for disturbances that occur between it and the loop immediately upstream. In fact, in the original system each loop corrected for all upstream disturbances. This resulted in undesirable over-correction and ringing. We added MIMO (multiple-input multiple-output) adaptive noise cancellers to separate the signal we wish to correct from disturbances further upstream. This adaptive control improved performance in the 1992 run.
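The idea of cancelling upstream disturbances can be sketched with a single-tap LMS noise canceller: learn how much of the upstream loop's reading leaks into the local measurement, and subtract it. The signals, leakage gain, and step size below are invented for illustration and are not SLAC's actual loop parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
upstream = rng.normal(0.0, 1.0, n)                      # disturbance already handled upstream
local = 0.1 * np.sin(2 * np.pi * 0.01 * np.arange(n))   # disturbance this loop should correct
primary = local + 0.8 * upstream                        # what the local sensor actually sees

# single-tap LMS canceller: estimate the leakage gain online
w, mu = 0.0, 0.001
cleaned = np.empty(n)
for k in range(n):
    e = primary[k] - w * upstream[k]    # canceller output = local estimate
    w += 2 * mu * e * upstream[k]       # LMS weight update
    cleaned[k] = e

# after convergence the residual should track only the local disturbance
tail = slice(n // 2, None)
err_before = np.mean((primary[tail] - local[tail]) ** 2)
err_after = np.mean((cleaned[tail] - local[tail]) ** 2)
print(w, err_before, err_after)
```

The learned weight converges to the (invented) leakage gain of 0.8, so the loop stops over-correcting for disturbances the upstream loop already removed.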
Directory of Open Access Journals (Sweden)
Yunfeng Wu
2014-01-01
This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine component radial basis function networks (RBFNs) to achieve better function approximation and regression. The optimal fusion weights are obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform selective ensembling of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of experiments on eight synthetic function-approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve higher accuracy (measured in terms of mean-squared error) and better fidelity of approximation (characterized by the normalized correlation coefficient), relative to the popular simple-average, weighted-average, and Bagging methods.
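For two component learners, a constrained quadratic program over convex combination weights reduces to a one-dimensional problem with a closed-form, clipped solution. This is a toy sketch with a synthetic target and invented component errors, not the paper's RBFN ensemble:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(-3, 3, 400)
y = np.sinc(x)                          # target function

# two hypothetical component approximators with different error patterns
f1 = y + rng.normal(0.0, 0.05, x.size)  # noisy but unbiased
f2 = y + 0.1 * np.sin(3 * x)            # smooth but systematically off

# optimal convex combination w*f1 + (1-w)*f2: minimize ||w*d + (f2 - y)||^2
# over w in [0, 1], a 1-D constrained quadratic program with a clipped solution
d = f1 - f2
w = np.clip(np.dot(y - f2, d) / np.dot(d, d), 0.0, 1.0)
combined = w * f1 + (1 - w) * f2

mse = lambda f: np.mean((f - y) ** 2)
print(mse(f1), mse(f2), mse(combined))
```

Because the component errors are (roughly) uncorrelated, the optimal combination beats either learner alone, which is the motivation for solving for the fusion weights rather than simple averaging.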
Adaptive H∞ nonlinear velocity tracking using RBFNN for linear DC brushless motor
Tsai, Ching-Chih; Chan, Cheng-Kain; Li, Yi Yu
2012-01-01
This article presents an adaptive H∞ nonlinear velocity control for a linear DC brushless motor. A simplified model of this motor with friction is briefly recalled. The friction dynamics are described by the LuGre model, and an online-tuned radial basis function neural network (RBFNN) is used to parameterise the nonlinear friction function and unmodelled errors. An adaptive nonlinear H∞ control method is then proposed to achieve velocity tracking, by assuming that the upper bounds of the ripple force, the changeable load, and the nonlinear friction can be learned by the RBFNN. The closed-loop system is proven to be uniformly bounded using Lyapunov stability theory. The feasibility and efficacy of the proposed control are exemplified by two velocity-tracking experiments.
A speed estimation unit for induction motors based on adaptive linear combiner
International Nuclear Information System (INIS)
Marei, Mostafa I.; Shaaban, Mostafa F.; El-Sattar, Ahmed A.
2009-01-01
This paper presents a new induction-motor speed estimation technique, which can also estimate the rotor resistance, from the measured voltage and current signals. Moreover, the paper utilizes a novel adaptive linear combiner (ADALINE) structure, called MO-ADALINE, which can deal with multi-output systems. The model of the induction motor is arranged in a linear form, in the stationary reference frame, to cope with the proposed speed estimator. The proposed unit has many advantages, such as wide speed-range capability, immunity against harmonics in the measured waveforms, and precise estimation of the speed and the rotor resistance under different dynamic changes. Different types of induction-motor drive systems are used to evaluate the dynamic performance and to examine the accuracy of the proposed unit for speed and rotor-resistance estimation.
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
Downie, John D.
1990-01-01
A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle will take a finite amount of time. Longer time delays result in larger values of residual wavefront error variance since the atmosphere continues to change during that time. Thus an optical processor may be well-suited for this task. This paper presents a study of the accuracy requirements in a general optical processor that will make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. The method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit distribution while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
Hydrodynamics in full general relativity with conservative adaptive mesh refinement
East, William E.; Pretorius, Frans; Stephens, Branson C.
2012-06-01
There is great interest in numerical relativity simulations involving matter due to the likelihood that binary compact objects involving neutron stars will be detected by gravitational wave observatories in the coming years, as well as to the possibility that binary compact object mergers could explain short-duration gamma-ray bursts. We present a code designed for simulations of hydrodynamics coupled to the Einstein field equations targeted toward such applications. This code has recently been used to study eccentric mergers of black hole-neutron star binaries. We evolve the fluid conservatively using high-resolution shock-capturing methods, while the field equations are solved in the generalized-harmonic formulation with finite differences. In order to resolve the various scales that may arise, we use adaptive mesh refinement (AMR) with grid hierarchies based on truncation error estimates. A noteworthy feature of this code is the implementation of the flux correction algorithm of Berger and Colella to ensure that the conservative nature of fluid advection is respected across AMR boundaries. We present various tests to compare the performance of different limiters and flux calculation methods, as well as to demonstrate the utility of AMR flux corrections.
The General Adaptation Syndrome: Potential misapplications to resistance exercise.
Buckner, Samuel L; Mouser, J Grant; Dankel, Scott J; Jessee, Matthew B; Mattocks, Kevin T; Loenneke, Jeremy P
2017-11-01
Within the resistance-training literature, one of the most commonly cited tenets with respect to exercise programming is the "General Adaptation Syndrome" (GAS), which is cited as a central theory behind the periodization of resistance exercise. However, after examining the original stress research by Hans Selye, the application of the GAS to resistance exercise may not be appropriate. Our aim was to examine the original work of Hans Selye, as well as the foundational papers through which the GAS was established as a central theory for periodized resistance exercise. We conducted a review of Selye's work on the GAS, as well as the foundational papers through which this concept was applied to resistance exercise. The work of Hans Selye focused on the universal physiological stress responses noted upon exposure to toxic levels of a variety of pharmacological agents and stimuli. The extrapolations that have been made to resistance exercise appear loosely based on this concept and may not be an appropriate basis for applying the GAS to resistance exercise. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch-and-bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are obtained simultaneously by solving a sequence of linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Mortazavi, S.M.J.; Motamedifar, M.; Namdari, G.; Taheri, M.; Mortazavi, A.R.; Shokrpour, N.
2013-01-01
Substantial evidence indicates that the adaptive response induced by low doses of ionizing radiation can result in resistance to the damage caused by a subsequent high-dose exposure, or in cross-resistance to other non-radiation stressors. The adaptive response contradicts the linear no-threshold (LNT) dose-response model for ionizing radiation. We have previously reported that exposure of laboratory animals to radiofrequency radiation can induce a survival adaptive response. Furthermore, we ha...
Czech Academy of Sciences Publication Activity Database
Náhlík, Luboš; Šestáková, L.; Hutař, Pavel; Knésl, Zdeněk
2011-01-01
Roč. 452-453, - (2011), s. 445-448 ISSN 1013-9826 R&D Projects: GA AV ČR(CZ) KJB200410803; GA ČR GA101/09/1821 Institutional research plan: CEZ:AV0Z20410507 Keywords : generalized stress intensity factor * bimaterial interface * composite materials * strain energy density factor * fracture criterion * generalized linear elastic fracture mechanics Subject RIV: JL - Materials Fatigue, Friction Mechanics
International Nuclear Information System (INIS)
Frank, T.D.
2002-01-01
We study many particle systems in the context of mean field forces, concentration-dependent diffusion coefficients, generalized equilibrium distributions, and quantum statistics. Using kinetic transport theory and linear nonequilibrium thermodynamics we derive for these systems a generalized multivariate Fokker-Planck equation. It is shown that this Fokker-Planck equation describes relaxation processes, has stationary maximum entropy distributions, can have multiple stationary solutions and stationary solutions that differ from Boltzmann distributions
The theory of a general quantum system interacting with a linear dissipative system
International Nuclear Information System (INIS)
Feynman, R.P.; Vernon, F.L.
2000-01-01
A formalism has been developed, using Feynman's space-time formulation of nonrelativistic quantum mechanics whereby the behavior of a system of interest, which is coupled to other external quantum systems, may be calculated in terms of its own variables only. It is shown that the effect of the external systems in such a formalism can always be included in a general class of functionals (influence functionals) of the coordinates of the system only. The properties of influence functionals for general systems are examined. Then, specific forms of influence functionals representing the effect of definite and random classical forces, linear dissipative systems at finite temperatures, and combinations of these are analyzed in detail. The linear system analysis is first done for perfectly linear systems composed of combinations of harmonic oscillators, loss being introduced by continuous distributions of oscillators. Then approximately linear systems and restrictions necessary for the linear behavior are considered. Influence functionals for all linear systems are shown to have the same form in terms of their classical response functions. In addition, a fluctuation-dissipation theorem is derived relating temperature and dissipation of the linear system to a fluctuating classical potential acting on the system of interest which reduces to the Nyquist-Johnson relation for noise in the case of electric circuits. Sample calculations of transition probabilities for the spontaneous emission of an atom in free space and in a cavity are made. Finally, a theorem is proved showing that within the requirements of linearity all sources of noise or quantum fluctuation introduced by maser-type amplification devices are accounted for by a classical calculation of the characteristics of the maser
On Extended Exponential General Linear Methods PSQ with S>Q ...
African Journals Online (AJOL)
This paper is concerned with the construction and numerical analysis of Extended Exponential General Linear Methods. These methods, in contrast to others in the literature, consider a step number greater than the stage order (S > Q). Numerical experiments in this study indicate that Extended Exponential ...
Directory of Open Access Journals (Sweden)
Jen-Yuan Chen
2014-01-01
Continuing the works of Li et al. (2014), Li (2007), and Kincaid et al. (2000), we present further generalizations and modifications of iterative methods for solving large sparse symmetric and nonsymmetric indefinite systems of linear equations. We discuss a variety of iterative methods, such as GMRES, MGMRES, MINRES, LQ-MINRES, QR MINRES, MMINRES, MGRES, and others.
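A minimal dense-matrix GMRES, one of the methods listed above, can be written directly from the Arnoldi recurrence. This is a textbook sketch (zero initial guess, no restarting, no preconditioning) for illustration only; the test matrix is invented:

```python
import numpy as np

def gmres(A, b, m=None, tol=1e-10):
    """Plain full-memory GMRES via Arnoldi with modified Gram-Schmidt."""
    n = len(b)
    m = n if m is None else m
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):              # orthogonalize against previous basis vectors
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        # small least-squares problem: minimize ||beta*e1 - H y|| over the Krylov space
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if H[j + 1, j] < tol:               # happy breakdown: exact solution found
            return Q[:, :j + 1] @ y
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :m] @ y

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 30)) + 30 * np.eye(30)   # well-conditioned nonsymmetric test matrix
x_true = rng.normal(size=30)
x = gmres(A, A @ x_true)
print(np.linalg.norm(x - x_true))
```

In exact arithmetic, full GMRES on an n×n system terminates within n steps; production variants restart and precondition, which this sketch omits.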
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
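The kind of output GLMR describes (coefficient estimates plus a regression ANOVA table) can be reproduced in a few lines of linear algebra; the data and coefficients below are simulated for illustration and are not from the original program:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta = np.array([2.0, 1.0, -0.5])             # hypothetical true coefficients
y = X @ beta + rng.normal(0.0, 0.3, n)

# least-squares fit and the classical regression ANOVA decomposition
bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ bhat
sse = np.sum((y - yhat) ** 2)                 # residual (error) sum of squares
ssr = np.sum((yhat - y.mean()) ** 2)          # regression sum of squares
p = X.shape[1] - 1                            # number of slopes, excluding intercept
F = (ssr / p) / (sse / (n - p - 1))           # overall F statistic
r2 = ssr / (ssr + sse)
print(bhat, F, r2)
```

The residuals `y - yhat` are what GLMR would emit for plotting, and strong correlation among the columns of `X` is what its multicollinearity check would flag.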
Bayesian prediction of spatial count data using generalized linear mixed models
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge
2002-01-01
Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
A generalized variational algebra and conserved densities for linear evolution equations
International Nuclear Information System (INIS)
Abellanas, L.; Galindo, A.
1978-01-01
The symbolic algebra of Gel'fand and Dikii is generalized to the case of n variables. Using this algebraic approach a rigorous characterization of the polynomial kernel of the variational derivative is given. This is applied to classify all the conservation laws for linear polynomial evolution equations of arbitrary order. (Auth.)
A differential-geometric approach to generalized linear models with grouped predictors
Augugliaro, Luigi; Mineo, Angelo M.; Wit, Ernst C.
We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
Adaptive Digital Predistortion Schemes to Linearize RF Power Amplifiers with Memory Effects
Institute of Scientific and Technical Information of China (English)
ZHANG Peng; WU Si-liang; ZHANG Qin
2008-01-01
To compensate for nonlinear distortion introduced by RF power amplifiers (PAs) with memory effects, two correlated models, namely an extended memory polynomial (EMP) model and a memory lookup table (LUT) model, are proposed for predistorter design. Two adaptive digital predistortion (ADPD) schemes with indirect learning architecture are presented. One adopts the EMP model and the recursive least square (RLS) algorithm, and the other utilizes the memory LUT model and the least mean square (LMS) algorithm. Simulation results demonstrate that the EMP-based ADPD yields the best linearization performance in terms of suppressing spectral regrowth. It is also shown that the ADPD based on memory LUT makes optimum tradeoff between performance and computational complexity.
Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties
Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui
2017-10-01
In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed such that the unmatched uncertainties are compensated by the actuators in the control design. On the other hand, for unmatched uncertainties whose projection onto the unmatched space is nonzero, additive functions based on a (vector) relative degree condition are designed to compensate for the uncertainties in the output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.
Torque ripple reduction of brushless DC motor based on adaptive input-output feedback linearization.
Shirvani Boroujeni, M; Markadeh, G R Arab; Soltani, J
2017-09-01
Torque ripple reduction of Brushless DC Motors (BLDCs) is an interesting subject in variable speed AC drives. In this paper, a mathematical expression for the torque ripple harmonics is first obtained. Then, for a non-ideal BLDC motor with known harmonic content of the back-EMF, the calculation of the desired reference current amplitudes required to eliminate selected harmonics of the torque ripple is reviewed. To inject the reference harmonic currents into the motor windings, an Adaptive Input-Output Feedback Linearization (AIOFBL) control is proposed, which generates the reference voltages for a three-phase voltage source inverter in the stationary reference frame. Experimental results are presented to show the capability and validity of the proposed control method, and are compared with the results of vector control in the Multi-Reference Frame (MRF) and of the Pseudo-Vector Control (P-VC) method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
Directory of Open Access Journals (Sweden)
Elias Giannakis
2016-10-01
Full Text Available The development of green space along urban rivers could mitigate urban heat island effects, enhance the physical and mental well-being of city dwellers, and improve flood resilience. A linear park has been recently created along the ephemeral Pedieos River in the urban area of Nicosia, Cyprus. Questionnaire surveys and micrometeorological measurements were conducted to explore people’s perceptions and satisfaction regarding the services of the urban park. People’s main reasons to visit the park were physical activity and exercise (67%, nature (13%, and cooling (4%. The micrometeorological measurements in and near the park revealed a relatively low cooling effect (0.5 °C of the park. However, the majority of the visitors (84% were satisfied or very satisfied with the cooling effect of the park. Logistic regression analysis indicated that the odds of individuals feeling very comfortable under a projected 3 °C future increase in temperature would be 0.34 times lower than the odds of feeling less comfortable. The discrepancies between the observed thermal comfort index and people’s perceptions revealed that people in semi-arid environments are adapted to the hot climatic conditions; 63% of the park visitors did not feel uncomfortable at temperatures between 27 °C and 37 °C. Further research is needed to assess other key ecosystems services of this urban green river corridor, such as flood protection, air quality regulation, and biodiversity conservation, to contribute to integrated climate change adaptation planning.
Evaluation of non-linear adaptive smoothing filter by digital phantom
International Nuclear Information System (INIS)
Sato, Kazuhiro; Ishiya, Hiroki; Oshita, Ryosuke; Yanagawa, Isao; Goto, Mitsunori; Mori, Issei
2008-01-01
As a result of the development of multi-slice CT, diagnoses based on three-dimensional reconstruction images and multi-planar reconstruction have become widespread. For these applications, which require high z-resolution, thin-slice imaging is essential. However, because z-resolution always trades off against image noise, thin-slice imaging is necessarily accompanied by an increase in noise level. To improve the quality of thin-slice images, a non-linear adaptive smoothing filter has been developed and is widely applied in clinical use. We developed a digital bar-pattern phantom to evaluate the effect of this filter, and attempted an evaluation based on an addition image of the bar-pattern phantom and the image of a water phantom. The effect of the filter varied in a complex manner with the contrast and spatial frequency of the original image. We confirmed a noise-reduction effect in the low-frequency component of the image, but decreased contrast or an increased amount of noise in the high-frequency component. This result reflects the changing adaptation of the filter. The digital phantom was useful for this evaluation, but to understand the total effect of the filtering, considerable improvement of the shape of the digital phantom is required. (author)
Tharrey, Marion; Olaya, Gilma A; Fewtrell, Mary; Ferguson, Elaine
2017-12-01
The aim of the study was to use linear programming (LP) analyses to adapt New Complementary Feeding Guidelines (NCFg) designed for infants aged 6 to 12 months living in poor socioeconomic circumstances in Bogota to ensure dietary adequacy for young children aged 12 to 23 months. A secondary data analysis was performed using dietary and anthropometric data collected from 12-month-old infants (n = 72) participating in a randomized controlled trial. LP analyses were performed to identify nutrients whose requirements were difficult to achieve using local foods as consumed; and to test and compare the NCFg and alternative food-based recommendations (FBRs) on the basis of dietary adequacy, for 11 micronutrients, at the population level. Thiamine recommended nutrient intakes for these young children could not be achieved given local foods as consumed. NCFg focusing only on meat, fruits, vegetables, and breast milk ensured dietary adequacy at the population level for only 4 micronutrients, increasing to 8 of 11 modelled micronutrients when the FBRs promoted legumes, dairy, vitamin A-rich vegetables, and chicken giblets. None of the FBRs tested ensured population-level dietary adequacy for thiamine, niacin, and iron unless a fortified infant food was recommended. The present study demonstrated the value of using LP to adapt NCFg for a different age group than the one for which they were designed. Our analyses suggest that to ensure dietary adequacy for 12- to 23-month olds these adaptations should include legumes, dairy products, vitamin A-rich vegetables, organ meat, and a fortified food.
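The LP analyses described above can be sketched as follows. The foods, nutrient contents, requirements, and costs below are invented placeholders, not the study's data, and `scipy.optimize.linprog` merely stands in for whatever solver the authors used:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy version of a food-based-recommendation LP: all food
# names, compositions, requirements, and costs are illustrative only.
foods = ["legumes", "dairy", "vitA_vegetables", "giblets"]
# rows: iron, thiamine, vitamin A (per serving, made-up values)
nutrients = np.array([
    [2.5, 0.1, 1.0, 8.0],    # iron (mg)
    [0.3, 0.1, 0.1, 0.2],    # thiamine (mg)
    [0.0, 50., 400., 300.],  # vitamin A (ug RAE)
])
requirement = np.array([5.0, 0.5, 400.0])  # assumed daily intakes
cost = np.array([1.0, 2.0, 1.5, 3.0])      # relative cost per serving

# minimise cost s.t. nutrients @ x >= requirement, 0 <= x <= 3 servings
res = linprog(c=cost, A_ub=-nutrients, b_ub=-requirement,
              bounds=[(0, 3)] * len(foods), method="highs")
assert res.success
print(dict(zip(foods, np.round(res.x, 2))))
```

An infeasible solve (as the study found for thiamine with local foods as consumed) is exactly how LP flags "problem nutrients" whose requirements cannot be met without, e.g., a fortified food.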
Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.
García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M
2014-12-01
Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed aiming at maximising COD conversion into methane, but simultaneously maintaining a digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further optimisations in terms of methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating for several months different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/Ld and hydraulic retention times between 32 and 40 days at mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
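A minimal sketch of such a blending LP, with invented methane yields and a single digestate-quality (nitrogen) restriction standing in for the paper's full set of experimental and heuristic constraints:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative blend optimisation in the spirit of the abstract; the
# methane yields, nitrogen contents, and bounds are invented values.
substrates = ["glycerine", "gelatine", "pig_manure"]
methane_yield = np.array([0.40, 0.30, 0.20])  # m3 CH4 per kg COD (assumed)
nitrogen = np.array([0.00, 0.12, 0.05])       # kg N per kg COD (assumed)

# maximise methane = minimise -yield @ x, subject to:
#   fixed organic loading of 1 kg COD/d, nitrogen below a quality threshold,
#   and limited glycerine availability (upper bound 0.5).
A_eq = np.ones((1, 3)); b_eq = [1.0]
A_ub = nitrogen.reshape(1, -1); b_ub = [0.06]
res = linprog(-methane_yield, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 0.5), (0, 1), (0, 1)], method="highs")
assert res.success
print(dict(zip(substrates, np.round(res.x, 3))),
      "CH4:", round(float(methane_yield @ res.x), 3))
```

The "adaptive" element of the method corresponds to relaxing `b_ub` between successive solves once a constraint is confirmed non-limiting in operation.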
Listening to a non-native speaker: Adaptation and generalization
Clarke, Constance M.
2004-05-01
Non-native speech can cause perceptual difficulty for the native listener, but experience can moderate this difficulty. This study explored the perceptual benefit of a brief (approximately 1 min) exposure to foreign-accented speech using a cross-modal word matching paradigm. Processing speed was tracked by recording reaction times (RTs) to visual probe words following English sentences produced by a Spanish-accented speaker. In experiment 1, RTs decreased significantly over 16 accented utterances and by the end were equal to RTs to a native voice. In experiment 2, adaptation to one Spanish-accented voice improved perceptual efficiency for a new Spanish-accented voice, indicating that abstract properties of accented speech are learned during adaptation. The control group in Experiment 2 also adapted to the accented voice during the test block, suggesting adaptation can occur within two to four sentences. The results emphasize the flexibility of the human speech processing system and the need for a mechanism to explain this adaptation in models of spoken word recognition. [Research supported by an NSF Graduate Research Fellowship and the University of Arizona Cognitive Science Program.] a)Currently at SUNY at Buffalo, Dept. of Psych., Park Hall, Buffalo, NY 14260, cclarke2@buffalo.edu
International Nuclear Information System (INIS)
Chang, H.; Weinberg, W.H.
1977-01-01
A generalized expression is developed that relates the ''reaction product vector'', ε exp(−iφ), to the kinetic parameters of a linear system. The formalism is appropriate for the analysis of modulated molecular beam mass spectrometry data and facilitates the correlation of experimental results with (proposed) linear models. A study of stability criteria appropriate for modulated molecular beam mass spectrometry experiments is also presented. This investigation has led to interesting inherent limitations which have not heretofore been emphasized, as well as a delineation of the conditions under which stable chemical oscillations may occur in the reacting system.
An analogue of Morse theory for planar linear networks and the generalized Steiner problem
International Nuclear Information System (INIS)
Karpunin, G A
2000-01-01
A study is made of the generalized Steiner problem: the problem of finding all the locally minimal networks spanning a given boundary set (terminal set). It is proposed to solve this problem by using an analogue of Morse theory developed here for planar linear networks. The space K of all planar linear networks spanning a given boundary set is constructed. The concept of a critical point and its index is defined for the length function l of a planar linear network. It is shown that locally minimal networks are local minima of l on K and are critical points of index 1. The theorem is proved that the sum of the indices of all the critical points is equal to χ(K)=1. This theorem is used to find estimates for the number of locally minimal networks spanning a given boundary set
Energy Technology Data Exchange (ETDEWEB)
Escane, J.M. [Ecole Superieure d' Electricite, 91 - Gif-sur-Yvette (France)
2005-04-01
The first part of this article defines the different elements of an electrical network and the models used to represent them. Each model relates the current and the voltage as functions of time. Models involving time functions are simple, but their use is not always easy. The Laplace transformation leads to a more convenient form in which the variable is no longer directly the time; it also leads to the notion of transfer function, which is the object of the second part. The third part aims at defining the fundamental operating rules of linear networks, commonly named 'general theorems': the linearity principle and superposition theorem, the duality principle, Thevenin's theorem, Norton's theorem, Millman's theorem, and the triangle-star and star-triangle transformations. These theorems allow the study of complex power networks and simplify the calculations. They are based on hypotheses, the first of which is that all networks considered in this article are linear. (J.S.)
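A small numerical illustration of the Thevenin theorem mentioned above, on an assumed voltage-divider network (the component values are arbitrary):

```python
import numpy as np

# Thevenin equivalent of a voltage divider: source Vs in series with R1,
# then R2 to ground; the output port is the node between R1 and R2.
Vs, R1, R2 = 10.0, 2.0, 3.0

# Open-circuit voltage at the output node (one-unknown nodal analysis).
V_oc = Vs * R2 / (R1 + R2)
# Short-circuit current: output tied to ground, all of Vs drops across R1.
I_sc = Vs / R1
R_th = V_oc / I_sc            # Thevenin resistance = R1 || R2
print(V_oc, R_th)

# Check: the equivalent predicts the same load voltage as the full network.
R_load = 4.0
V_load_equiv = V_oc * R_load / (R_th + R_load)
G = np.array([[1/R1 + 1/R2 + 1/R_load]])  # full nodal conductance matrix
I = np.array([Vs / R1])
V_load_full = np.linalg.solve(G, I)[0]
assert abs(V_load_equiv - V_load_full) < 1e-12
```

This is the simplification the 'general theorems' buy: once (V_oc, R_th) is known, any load calculation reduces to a two-element circuit.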
International Nuclear Information System (INIS)
Maldonado, G.I.; Turinsky, P.J.; Kropaczek, D.J.
1993-01-01
A computational capability has been developed to efficiently and accurately evaluate reactor core attributes (i.e., k_eff and power distributions as a function of cycle burnup) utilizing a second-order accurate advanced nodal Generalized Perturbation Theory (GPT) model. The GPT model is derived from the forward non-linear iterative Nodal Expansion Method (NEM) strategy, thereby extending its inherent savings in memory storage and high computational efficiency to also encompass GPT via the preservation of the finite-difference matrix structure. The above development was easily implemented into the existing coarse-mesh finite-difference GPT-based in-core fuel management optimization code FORMOSA-P, thus combining the proven robustness of its adaptive Simulated Annealing (SA) multiple-objective optimization algorithm with a high-fidelity NEM GPT neutronics model to produce a powerful computational tool for generating families of near-optimum loading patterns for PWRs. (orig.)
A general digital computer procedure for synthesizing linear automatic control systems
International Nuclear Information System (INIS)
Cummins, J.D.
1961-10-01
The fundamental concepts required for synthesizing a linear automatic control system are considered. A generalized procedure for synthesizing automatic control systems is demonstrated. This procedure has been programmed for the Ferranti Mercury and the IBM 7090 computers. Details of the programmes are given. The procedure uses the linearized set of equations which describe the plant to be controlled as the starting point. Subsequent computations determine the transfer functions between any desired variables. The programmes also compute the root and phase loci for any linear (and some non-linear) configurations in the complex plane, the open loop and closed loop frequency responses of a system, the residues of a function of the complex variable 's' and the time response corresponding to these residues. With these general programmes available the design of 'one point' automatic control systems becomes a routine scientific procedure. Also dynamic assessments of plant may be carried out. Certain classes of multipoint automatic control problems may also be solved with these procedures. Autonomous systems, invariant systems and orthogonal systems may also be studied. (author)
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
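The Cholesky-based reparameterization this work builds on (Pourahmadi, 2000) can be shown numerically: any positive-definite covariance factors as T Σ Tᵀ = D with T unit lower-triangular, so the free parameters (off-diagonal entries of T, log of diag D) are unconstrained and can be modelled with covariates like a GLM. The AR(1) covariance below is purely illustrative:

```python
import numpy as np

# Modified Cholesky decomposition of a covariance matrix: T Sigma T' = D,
# removing the positive-definiteness constraint from the modelling problem.
t = np.arange(4)
Sigma = 0.8 ** np.abs(t[:, None] - t[None, :])  # AR(1) covariance, rho=0.8

L = np.linalg.cholesky(Sigma)       # Sigma = L @ L.T
D_half = np.diag(np.diag(L))
T = D_half @ np.linalg.inv(L)       # unit lower-triangular
D = D_half ** 2                     # diagonal of innovation variances

# T's sub-diagonal entries are (negatives of) autoregressive coefficients;
# diag(D) holds prediction-error variances: both are unconstrained.
assert np.allclose(T @ Sigma @ T.T, D, atol=1e-8)
print(np.diag(D))
```

For this AR(1) structure the innovation variances come out as 1 − ρ² = 0.36 after the first time point, matching the one-step prediction error of an AR(1) process.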
Pei, Soo-Chang; Ding, Jian-Jiun
2005-03-01
Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.
Linear relations in microbial reaction systems: a general overview of their origin, form, and use.
Noorman, H J; Heijnen, J J; Ch A M Luyben, K
1991-09-01
In microbial reaction systems, there are a number of linear relations among net conversion rates. These can be very useful in the analysis of experimental data. This article provides a general approach for the formation and application of such linear relations. Two types of system description, one considering the biomass as a black box and the other based on metabolic pathways, are encountered. These are defined in a linear vector and matrix algebra framework. A correct a priori description can be obtained by three useful tests: the independency, consistency, and observability tests. The linear relations provided by the two descriptions are different. The black box approach provides only conservation relations. They are derived from element, electrical charge, energy, and Gibbs energy balances. The metabolic approach provides, in addition to the conservation relations, metabolic and reaction relations. These result from component, energy, and Gibbs energy balances. It is thus more attractive to use the metabolic description than the black box approach. A number of different types of linear relations given in the literature are reviewed. They are classified according to the different categories that result from the black box or the metabolic system description. Validation of hypotheses related to metabolic pathways can be supported by experimental validation of the linear metabolic relations. However, definite proof from biochemical evidence remains indispensable.
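A toy numerical illustration of a black-box conservation test: elemental balances E q = 0 impose linear relations among the net conversion rates q. The compound compositions (biomass simplified to CH1.8O0.5N0.2, glucose per C-mol as CH2O) and the rate values are textbook-style assumptions, not data from this article:

```python
import numpy as np

# Elemental composition matrix E (rows: C, H, O, N) for a toy conversion
#            glucose  O2    NH3  biomass  CO2  H2O
E = np.array([
    [1.0,    0.0,  0.0, 1.0,     1.0, 0.0],  # C
    [2.0,    0.0,  3.0, 1.8,     0.0, 2.0],  # H
    [1.0,    2.0,  0.0, 0.5,     2.0, 1.0],  # O
    [0.0,    0.0,  1.0, 0.2,     0.0, 0.0],  # N
])
# Candidate net conversion rates (uptake negative, per C-mol basis).
q = np.array([-1.0, -0.475, -0.1, 0.5, 0.5, 0.7])

residual = E @ q      # non-zero entries would flag inconsistent data
print(residual)
# Degrees of freedom: 6 compounds - rank(E) independent balances
print(6 - np.linalg.matrix_rank(E))
```

With rank(E) = 4, only two rates can be chosen independently; the other four follow from the conservation relations, which is exactly how such relations reduce the burden of experimental measurement.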
ALONSO ABAD, Ariel; Rodriguez, O.; TIBALDI, Fabian; CORTINAS ABRAHANTES, Jose
2002-01-01
In medical studies, categorical endpoints are quite common. Even though models for handling these multicategorical variables have been developed, their use is not yet widespread. This work shows an application of Multivariate Generalized Linear Models to the analysis of clinical trials data. After a theoretical introduction, models for ordinal and nominal responses are applied and the main results are discussed. multivariate analysis; multivariate logistic regression; multicategor...
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
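The data "explosion" step mentioned above can be sketched as follows. The cut points and subject records are invented; the resulting rows would then be passed to any Poisson GLMM routine (e.g. the %PCFrailty macro described in the abstract) with the event indicator as response and the log-exposure as offset:

```python
import numpy as np

# Split each subject's follow-up into the pieces of a piecewise-constant
# baseline hazard: one Poisson row per (subject, piece) with
# response = event-in-piece indicator, offset = log(exposure in piece).
cuts = np.array([0.0, 1.0, 2.0, 3.0])      # 3 pieces (assumed cut points)
subjects = [(2.4, 1), (0.7, 0)]            # (survival time, event flag)

rows = []   # (subject id, piece id, event, log-offset)
for sid, (time, event) in enumerate(subjects):
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], cuts[j + 1]
        if time <= lo:
            break                          # no exposure in later pieces
        exposure = min(time, hi) - lo
        d = int(bool(event) and time <= hi)  # event falls in this piece?
        rows.append((sid, j, d, np.log(exposure)))

for r in rows:
    print(r)
```

Subject 0 (time 2.4, event) contributes three rows with the event in the last piece; subject 1 (censored at 0.7) contributes a single row, showing how the data set grows with the number of pieces.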
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
Synthesis of general linear networks using causal and J-isometric dilations
International Nuclear Information System (INIS)
D'Attellis, C.E.
1977-06-01
The problem of the synthesis of linear systems characterized by their scattering operator is studied. This problem is considered solved once an adequate dilation of the operator is obtained, from which the synthesis is performed following the method of Saeks (35) and Levan (19). Known results are systematized and generalized in this paper, obtaining a single method of synthesis for different categories of operators. (Author) [es
A General Construction of Linear Differential Equations with Solutions of Prescribed Properties
Czech Academy of Sciences Publication Activity Database
Neuman, František
2004-01-01
Roč. 17, č. 1 (2004), s. 71-76 ISSN 0893-9659 R&D Projects: GA AV ČR IAA1019902; GA ČR GA201/99/0295 Institutional research plan: CEZ:AV0Z1019905 Keywords : construction of linear differential equations * prescribed qualitative properties of solutions Subject RIV: BA - General Mathematics Impact factor: 0.414, year: 2004
Directory of Open Access Journals (Sweden)
Tsung-han Tsai
2013-05-01
Full Text Available There is some confusion in political science, and the social sciences in general, about the meaning and interpretation of interaction effects in models with non-interval, non-normal outcome variables. Often these terms are casually thrown into a model specification without observing that their presence fundamentally changes the interpretation of the resulting coefficients. This article explains the conditional nature of reported coefficients in models with interactions, defining the necessarily different interpretation required by generalized linear models. Methodological issues are illustrated with an application to voter information structured by electoral systems and resulting legislative behavior and democratic representation in comparative politics.
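A small numeric illustration of this conditional interpretation for a logit model with an interaction term (all coefficients assumed purely for the example):

```python
import numpy as np

# In a logit model with an interaction, the "effect of x1" is conditional:
# on the log-odds scale it is b1 + b12*x2, so the reported coefficient b1
# is only the effect of x1 at x2 = 0 (and the probability-scale effect
# additionally varies with the nonlinearity of the link).
b0, b1, b2, b12 = -1.0, 0.8, 0.5, -0.6

def p(x1, x2):
    eta = b0 + b1*x1 + b2*x2 + b12*x1*x2   # linear predictor
    return 1.0 / (1.0 + np.exp(-eta))      # inverse logit link

for x2 in (0.0, 1.0, 2.0):
    log_odds_effect = b1 + b12 * x2        # conditional on x2
    prob_effect = p(1, x2) - p(0, x2)      # effect on the probability
    print(x2, round(log_odds_effect, 2), round(prob_effect, 3))
```

With these values the effect of x1 even changes sign across x2, which is exactly the kind of pattern a marginal reading of the coefficient table would miss.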
International Nuclear Information System (INIS)
LaChapelle, J.
2004-01-01
A path integral is presented that solves a general class of linear second order partial differential equations with Dirichlet/Neumann boundary conditions. Elementary kernels are constructed for both Dirichlet and Neumann boundary conditions. The general solution can be specialized to solve elliptic, parabolic, and hyperbolic partial differential equations with boundary conditions. This extends the well-known path integral solution of the Schroedinger/diffusion equation in unbounded space. The construction is based on a framework for functional integration introduced by Cartier/DeWitt-Morette
Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data
Directory of Open Access Journals (Sweden)
Mingwu Jin
2012-01-01
Full Text Available Local canonical correlation analysis (CCA is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM, a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
Donmez, Orhan
We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive Mesh Refinement (AMR) and model an accretion disk around a black hole. To do this, the GRH equations are written in conservative form to exploit their hyperbolic character. The numerical solution of the general relativistic hydrodynamic equations is performed with High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D and 3D. Second, we solve the GRH equations and use the general relativistic test problems to compare the numerical solutions with analytic ones. To do this, we couple the flux part of the general relativistic hydrodynamic equations with a source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around the black hole. Finally, we apply this code to accretion disk problems around a black hole using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk, which are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, they create shock waves which destroy the accretion disk.
Adaptive tracking control of leader-following linear multi-agent systems with external disturbances
Lin, Hanquan; Wei, Qinglai; Liu, Derong; Ma, Hongwen
2016-10-01
In this paper, the consensus problem for leader-following linear multi-agent systems with external disturbances is investigated. Brownian motions are used to describe exogenous disturbances. A distributed tracking controller based on Riccati inequalities with an adaptive law for adjusting coupling weights between neighbouring agents is designed for leader-following multi-agent systems under fixed and switching topologies. In traditional distributed static controllers, the coupling weights depend on the communication graph. However, coupling weights associated with the feedback gain matrix in our method are updated by state errors between neighbouring agents. We further present the stability analysis of leader-following multi-agent systems with stochastic disturbances under switching topology. Most traditional literature requires the graph to be connected all the time, while the communication graph is only assumed to be jointly connected in this paper. The design technique is based on Riccati inequalities and algebraic graph theory. Finally, simulations are given to show the validity of our method.
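A heavily simplified scalar sketch of the adaptive-coupling idea: each follower's weight is driven by its own tracking error rather than fixed from the communication graph. The gains, the topology (each follower connected directly to a static leader), and the absence of disturbances are all simplifying assumptions relative to the paper's setting:

```python
import numpy as np

# Leader-following with adaptive coupling weights (scalar toy version):
#   follower dynamics  x_i' = -c_i (x_i - x0)
#   adaptive law       c_i' = (x_i - x0)^2
# so each weight grows only while its local tracking error persists.
dt, steps = 0.01, 5000
x0 = 1.0                           # static leader state
x = np.array([0.0, 2.5, -1.0])     # follower initial states
c = np.zeros(3)                    # coupling weights, not graph-dependent

for _ in range(steps):
    e = x - x0
    x = x + dt * (-c * e)          # forward-Euler follower update
    c = c + dt * e ** 2            # weights driven by state errors

print(np.round(x, 3), np.round(c, 3))
```

All followers reach the leader while the weights settle at finite values, mirroring the paper's point that the gains adapt online instead of being designed from global graph information.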
Study on sampling of continuous linear system based on generalized Fourier transform
Li, Huiguang
2003-09-01
In the research of signals and systems, a signal's spectrum and a system's frequency characteristic can be discussed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals such as the impulse function and the signum signal do not satisfy Riemann integration or Lebesgue integration; in mathematics they are called generalized functions. This paper introduces a new definition -- the Generalized Fourier Transform (GFT) -- and discusses generalized functions, the Fourier Transform and the Laplace Transform under a unified frame. When a continuous linear system is sampled, this paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform (GFT). Causal and non-causal systems are studied, and a sampling method to maintain the system's dynamic performance is presented. The results can be used for ordinary sampling and non-Nyquist sampling, and they also have practical meaning for research on the "discretization of continuous linear systems" and "non-Nyquist sampling of signals and systems." In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is just an applicable example of this paper.
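In the classical Fourier setting, the spectrum-overlap question addressed here reduces to the familiar aliasing condition; a quick numerical check (the frequencies are chosen arbitrarily for illustration):

```python
import numpy as np

# Aliasing demonstration: a 7 Hz sinusoid sampled below the Nyquist rate
# (fs = 10 Hz) shows a spectral peak at |7 - 10| = 3 Hz, while sampling
# at fs = 20 Hz preserves the true frequency.
def dominant_freq(f_signal, f_sample, n=200):
    t = np.arange(n) / f_sample
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1 / f_sample)
    return freqs[np.argmax(spectrum)]

print(dominant_freq(7, 20))   # adequately sampled
print(dominant_freq(7, 10))   # aliased: spectrum overlap has occurred
```

The paper's contribution is to pose this overlap test for signals that only exist as generalized functions, where the ordinary FT above does not directly apply.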
Directory of Open Access Journals (Sweden)
S. Alonso-Quesada
2010-01-01
Full Text Available This paper presents a strategy for designing a robust discrete-time adaptive controller for stabilizing linear time-invariant (LTI) continuous-time dynamic systems. Such systems may be unstable and non-inversely stable in the worst case. A reduced-order model is considered to design the adaptive controller. The control design is based on the discretization of the system with the use of a multirate sampling device with a fast-sampled control signal. A suitable online adaptation of the multirate gains guarantees the stability of the inverse of the discretized estimated model, which is used to parameterize the adaptive controller. A dead zone is included in the parameter estimation algorithm for robustness under the presence of unmodeled dynamics in the controlled system. The adaptive controller guarantees the boundedness of the system's measured signal for all time. Some examples illustrate the efficacy of this control strategy.
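The discretization step underlying such designs can be sketched with a standard zero-order-hold conversion of a continuous-time LTI model; this illustrates only the sampling of the plant (here a toy double integrator), not the paper's multirate gain adaptation or dead-zone estimator:

```python
import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import expm

# Continuous-time LTI plant dx/dt = A x + B u (toy double integrator)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
D = np.zeros((2, 1))
T = 0.1  # sampling period

# Zero-order-hold discretization: x[k+1] = Ad x[k] + Bd u[k]
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method='zoh')
print(Ad)  # equals the matrix exponential expm(A*T) under ZOH sampling
```

Multirate schemes refine this step by applying several control values within one output sampling period, which gives the extra degrees of freedom used to place the discretized zeros.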
Métris, Aline; George, Susie M; Ropers, Delphine
2017-01-02
Addition of salt to food is one of the most ancient and most common methods of food preservation. However, little is known of how bacterial cells adapt to such conditions. We propose to use piecewise linear approximations to model the regulatory adaptation of Escherichia coli to osmotic stress. We apply the method to eight selected genes representing the functions known to be at play during osmotic adaptation. The network is centred on the general stress response factor, sigma S, and also includes a module representing the catabolic repressor CRP-cAMP. Glutamate, potassium and supercoiling are combined to represent the intracellular regulatory signal during osmotic stress induced by salt. The output is a module where growth is represented by the concentration of stable RNAs and the transcription of the osmotic gene osmY. The time course of gene expression of osmoprotectant transport, represented by the symporter proP, and of osmY is successfully reproduced by the network. The behaviour of the rpoS mutant predicted by the model is in agreement with experimental data. We discuss the application of the model to food-borne pathogens such as Salmonella; although the genes considered have orthologs, it seems that supercoiling is not regulated in the same way. The model is limited to a few selected genes, but the regulatory interactions are numerous and span different time scales. In addition, they seem to be condition specific: the links that are important during the transition from exponential to stationary phase are not all needed during osmotic stress. This model is one of the first steps towards modelling adaptation to stress in food safety and has scope to be extended to other genes and pathways, other stresses relevant to the food industry, and food-borne pathogens. The method offers a good compromise between systems of ordinary differential equations, which would be unmanageable because of the size of the system and for which insufficient data are available
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
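The point about residuals versus raw data is easy to demonstrate numerically; in this hypothetical example the response is strongly skewed because the covariate is, yet the residuals from the correct linear model are well behaved:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 2000
x = rng.exponential(scale=1.0, size=n)    # strongly skewed covariate
y = 2.0 + 3.0 * x + rng.normal(size=n)    # normal errors, so y itself is skewed

# Fit a simple linear regression and inspect the residuals
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)

print(stats.skew(y), stats.skew(resid))   # y is markedly skewed, residuals are not
```

A normality test applied to `y` would reject, tempting the analyst toward transformations or nonparametric methods, even though the model's actual assumption (normal residuals) holds.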
The potential in general linear electrodynamics. Causal structure, propagators and quantization
Energy Technology Data Exchange (ETDEWEB)
Siemssen, Daniel [Department of Mathematical Methods in Physics, Faculty of Physics, University of Warsaw (Poland); Pfeifer, Christian [Institute for Theoretical Physics, Leibniz Universitaet Hannover (Germany); Center of Applied Space Technology and Microgravity (ZARM), Universitaet Bremen (Germany)
2016-07-01
From an axiomatic point of view, the fundamental inputs for a theory of electrodynamics are Maxwell's equations dF = 0 (or F = dA) and dH = J, together with a constitutive law relating the field strength 2-form F and the excitation 2-form H. In this talk we consider general linear electrodynamics, the theory of electrodynamics defined by a linear constitutive law. The best-known application of this theory is the effective description of electrodynamics inside (linear) media (e.g. birefringence). We analyze the classical theory of the electromagnetic potential A before we use methods familiar from mathematical quantum field theory in curved spacetimes to quantize it. Our analysis of the classical theory contains the derivation of retarded and advanced propagators, the analysis of the causal structure on the basis of the constitutive law (instead of a metric) and a discussion of the classical phase space. This classical analysis sets the stage for the construction of the quantum field algebra and quantum states, including a (generalized) microlocal spectrum condition.
International Nuclear Information System (INIS)
Yan Zhenya; Yu Pei
2007-01-01
In this paper, we study chaos (lag) synchronization of a new LC chaotic system, which can exhibit not only a two-scroll attractor but also two double-scroll attractors for different parameter values, via three types of state feedback controls: (i) linear feedback control; (ii) adaptive feedback control; and (iii) a combination of linear feedback and adaptive feedback controls. As a consequence, ten families of new feedback control laws are designed to obtain global chaos lag synchronization for τ < 0 and global chaos synchronization for τ = 0 of the LC system. Numerical simulations are used to illustrate these theoretical results. Each family of these feedback control laws, including two linear (adaptive) functions or one linear function and one adaptive function, is added to only two equations of the LC system. This is simpler than the known synchronization controllers, which apply controllers to all equations of the LC system. Moreover, based on the obtained results for the LC system, we also derive the control laws for chaos (lag) synchronization of another new type of chaotic system.
Non-cooperative stochastic differential game theory of generalized Markov jump linear systems
Zhang, Cheng-ke; Zhou, Hai-ying; Bin, Ning
2017-01-01
This book systematically studies the stochastic non-cooperative differential game theory of generalized linear Markov jump systems and its applications in the fields of finance and insurance. The book is an in-depth study of the continuous-time and discrete-time linear quadratic stochastic differential game, aiming to establish a relatively complete framework of dynamic non-cooperative differential game theory. It uses the dynamic programming principle and the Riccati equation to derive existence conditions and calculation methods for the equilibrium strategies of dynamic non-cooperative differential games. Based on the game-theoretic method, this book studies the corresponding robust control problem, especially the existence condition and design method of the optimal robust control strategy. The book discusses the theoretical results and their applications to risk control, option pricing, and the optimal investment problem in the field of finance and insurance, enriching the...
International Nuclear Information System (INIS)
Speliotopoulos, A.D.; Chiao, Raymond Y.
2004-01-01
The coupling of gravity to matter is explored in the linearized gravity limit. The usual derivation of gravity-matter couplings within the quantum-field-theoretic framework is reviewed. A number of inconsistencies between this derivation of the couplings and the known results of tidal effects on test particles according to classical general relativity are pointed out. As a step towards resolving these inconsistencies, a general laboratory frame fixed on the worldline of an observer is constructed. In this frame, the dynamics of nonrelativistic test particles in the linearized gravity limit is studied, and their Hamiltonian dynamics is derived. It is shown that for stationary metrics this Hamiltonian reduces to the usual Hamiltonian for nonrelativistic particles undergoing geodesic motion. For nonstationary metrics with long-wavelength gravitational waves (GWs) present, it reduces to the Hamiltonian for a nonrelativistic particle undergoing geodesic deviation motion. Arbitrary-wavelength GWs couple to the test particle through a vector-potential-like field N_a, the net result of the tidal forces that the GW induces in the system, namely, a local velocity field on the system induced by tidal effects, as seen by an observer in the general laboratory frame. Effective electric and magnetic fields, which are related to the electric and magnetic parts of the Weyl tensor, are constructed from N_a and obey equations of the same form as Maxwell's equations. A gedanken gravitational Aharonov-Bohm-type experiment using N_a to measure the interference of quantum test particles is presented.
Analysis of dental caries using generalized linear and count regression models
Directory of Open Access Journals (Sweden)
Javali M. Phil
2013-11-01
Full Text Available Generalized linear models (GLMs) are a generalization of linear regression models that allow regression models to be fitted to response data in all the sciences, especially the medical and dental sciences, that follow a general exponential family. They are a flexible and widely used class of models that can accommodate non-normal response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data-generating processes: one generates only zeros, and the other is either a Poisson or a negative binomial process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present a framework for evaluating the suitability of applying the GLM, Poisson, NB, ZIP and ZINB models to a dental caries data set where the count data may exhibit evidence of many zeros and overdispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measures for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
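The hyper-Poisson probability mass function can be written via the confluent hypergeometric function 1F1 and the Pochhammer symbol; the sketch below (with hypothetical parameter values, not the Toronto or Korea data) checks normalization and shows how the dispersion parameter γ moves the variance-to-mean ratio below 1 (γ < 1), to 1 (γ = 1, ordinary Poisson), or above 1 (γ > 1):

```python
import numpy as np
from scipy.special import hyp1f1, poch

def hyper_poisson_pmf(y, lam, gamma):
    # P(Y = y) = lam**y / ((gamma)_y * 1F1(1; gamma; lam)),
    # where (gamma)_y is the Pochhammer (rising factorial) symbol
    return lam ** y / (poch(gamma, y) * hyp1f1(1.0, gamma, lam))

y = np.arange(100)
for gamma in (0.5, 1.0, 3.0):   # under-, equi-, and overdispersed cases
    p = hyper_poisson_pmf(y, 2.0, gamma)
    mean = (y * p).sum()
    var = ((y - mean) ** 2 * p).sum()
    print(gamma, round(p.sum(), 6), round(var / mean, 3))
```

At γ = 1 the Pochhammer symbol reduces to y! and 1F1(1; 1; λ) = e^λ, recovering the Poisson distribution; the GLM formulation in the paper further lets γ depend on covariates.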
Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander
2015-04-01
Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.
A generalization of Dirac non-linear electrodynamics, and spinning charged particles
International Nuclear Information System (INIS)
Rodrigues Junior, W.A.; Vaz Junior, J.; Recami, E.
1992-08-01
The Dirac non-linear electrodynamics is generalized by introducing two potentials (namely, the vector potential A and the pseudo-vector potential γ⁵B of the electromagnetic theory with charges and magnetic monopoles), and by imposing the pseudoscalar part of the product WW* to be zero, with W = A + γ⁵B. It is also demonstrated that the field equations of such a theory possess a soliton-like solution which can a priori represent a charged particle. (L.C.J.A.)
Analysis of positron lifetime spectra using quantified maximum entropy and a general linear filter
International Nuclear Information System (INIS)
Shukla, A.; Peter, M.; Hoffmann, L.
1993-01-01
Two new approaches are used to analyze positron annihilation lifetime spectra. A general linear filter is designed to filter the noise from lifetime data. The quantified maximum entropy method is used to solve the inverse problem of finding the lifetimes and intensities present in data. We determine optimal values of parameters needed for fitting using Bayesian methods. Estimates of errors are provided. We present results on simulated and experimental data with extensive tests to show the utility of this method and compare it with other existing methods. (orig.)
General formulae for polarization observables in deuteron electrodisintegration and linear relations
International Nuclear Information System (INIS)
Arenhoevel, H.; Leidemann, W.; Tomusiak, E.L.
1993-01-01
Formal expressions are derived for all possible polarization observables in deuteron electrodisintegration with longitudinally polarized incoming electrons, oriented deuteron targets and polarization analysis of the outgoing nucleons. They are given in terms of general structure functions which can be determined experimentally. These structure functions are Hermitian forms of the T-matrix elements which, in principle, allow the determination of all T-matrix elements up to an arbitrary common phase. Since the set of structure functions is overcomplete, linear relations among the various structure functions exist, which are derived explicitly.
Adaptation of a general circulation model to ocean dynamics
Turner, R. E.; Rees, T. H.; Woodbury, G. E.
1976-01-01
A primitive-variable general circulation model of the ocean was formulated in which fast external gravity waves are suppressed with rigid-lid surface constraint pressures, which also provide a means for simulating the effects of large-scale free-surface topography. The surface pressure method is simpler to apply than the conventional stream function models, and the resulting model can be applied to both global-ocean and limited-region situations. Strengths and weaknesses of the model are also presented.
Directory of Open Access Journals (Sweden)
Nicola Koper
2012-03-01
Full Text Available Resource selection functions (RSFs) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMMs) and generalized estimating equations (GEEs) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
International Nuclear Information System (INIS)
Sanchez, Richard.
1975-11-01
The Integral Transform Method for the neutron transport equation has been developed in recent years by Asaoka and others. The method uses Fourier transform techniques to solve isotropic one-dimensional transport problems in homogeneous media. The method has been extended to linearly anisotropic transport in one-dimensional homogeneous media. Series expansions were also obtained, using Hembd techniques, for the new anisotropic matrix elements in cylindrical geometry. Carlvik's spatial spherical-harmonics method was generalized to solve the same problem. By applying a relation between the isotropic and anisotropic one-dimensional kernels, it was demonstrated that anisotropic matrix elements can be calculated by a linear combination of a few isotropic matrix elements. In practice, this means that the anisotropic problem of order N can be solved with the N+2 isotropic matrix for plane and spherical geometries, and with the N+1 isotropic matrix for cylindrical geometries. A method of solving linearly anisotropic one-dimensional transport problems in homogeneous media was defined by applying the observations of Mika and Stankiewicz: isotropic matrix elements were computed by Hembd series, and anisotropic matrix elements were then calculated from recursive relations. The method has been applied to albedo and critical problems in cylindrical geometries. Finally, a number of results were computed with 12-digit accuracy for use as benchmarks.
Vector generalized linear and additive models with an implementation in R
Yee, Thomas W
2015-01-01
This book presents a statistical framework that expands generalized linear models (GLMs) for regression modelling. The framework shared in this book allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole. This is possible through the approximately half-a-dozen major classes of statistical models included in the book and the software infrastructure component, which makes the models easily operable. The book’s methodology and accompanying software (the extensive VGAM R package) are directed at these limitations, and this is the first time the methodology and software are covered comprehensively in one volume. Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. The demands of practical data analysis, however, require a flexibility that GLMs do not have. Data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. This book ...
Prospects of measuring general Higgs couplings at e⁺e⁻ linear colliders
Energy Technology Data Exchange (ETDEWEB)
Hagiwara, K. [KEK, Ibaraki (Japan). Theory Group; Ishihara, S. [KEK, Ibaraki (Japan). Theory Group; Department of Physics, Hyogo University of Education, 941-1 Shimokume, Yashiro, Kato, Hyogo 673-1494 (Japan); Kamoshita, J. [Department of Physics, Ochanomizu University, 2-1-1 Otsuka, Bunkyo, Tokyo 112-8610 (Japan); Kniehl, B.A. [II. Institut fuer Theoretische Physik, Universitaet Hamburg, Luruper Chaussee 149, 22761 Hamburg (Germany)
2000-06-01
We examine how accurately the general HZV couplings, with V = Z, γ, may be determined by studying e⁺e⁻ → Hff̄ processes at future e⁺e⁻ linear colliders. By using the optimal-observable method, which makes use of all available experimental information, we find out which combinations of the various HZV coupling terms may be constrained most efficiently with high luminosity. We also assess the benefits of measuring the tau-lepton helicities, identifying the bottom-hadron charges, polarizing the electron beam and running at two different collider energies. The HZZ couplings are generally found to be well constrained, even without these options, while the HZγ couplings are not. The constraints on the latter may be significantly improved by beam polarization. (orig.)
Directory of Open Access Journals (Sweden)
Engin Cemal MENGÜÇ
2018-03-01
Full Text Available In this study, an adaptive noise cancellation (ANC) system based on linear and widely linear (WL) complex-valued least mean square (LMS) algorithms is designed for removing electrooculography (EOG) artifacts from electroencephalography (EEG) signals. The real-valued EOG and EEG signals (Fp1 and Fp2) given in the dataset are first expressed as a complex-valued signal in the complex domain. Then, using the proposed ANC system, the EOG artifacts are eliminated from the EEG signals in the complex domain. Expressing these signals in the complex domain allows us to remove EOG artifacts from two EEG channels simultaneously. Moreover, this study shows that the complex-valued EEG signal exhibits noncircular behavior, in which case the WL-CLMS algorithm enhances the performance of the ANC system compared to the real-valued LMS and CLMS algorithms. Simulation results support the proposed approach.
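The complex LMS update at the core of such an ANC system is short; the sketch below cancels a complex reference noise passed through an unknown two-tap path from a synthetic tone standing in for the EEG signal (all signals here are simulated and hypothetical, not the dataset used in the study, and the widely linear extension is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
# Complex white reference noise and its filtered version contaminating the signal
ref = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
h = np.array([0.8 + 0.2j, -0.3 + 0.1j])               # unknown contamination path
noise = np.convolve(ref, h)[:N]
clean = np.exp(1j * 2 * np.pi * 0.01 * np.arange(N))  # desired "EEG-like" tone
d = clean + noise                                     # primary (contaminated) input

# Complex LMS: adapt a 2-tap filter so its output tracks the noise,
# leaving the error e ≈ clean signal
w = np.zeros(2, dtype=complex)
mu = 0.01
e = np.zeros(N, dtype=complex)
for n in range(1, N):
    u = np.array([ref[n], ref[n - 1]])                # reference tap vector
    y = np.vdot(w, u)                                 # filter output w^H u
    e[n] = d[n] - y
    w = w + mu * np.conj(e[n]) * u                    # CLMS weight update
print(np.abs(e[-500:] - clean[-500:]).mean())         # small residual after adaptation
```

Because the reference noise is uncorrelated with the tone, the filter converges toward the contamination path and the error output recovers the clean signal up to LMS misadjustment.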
Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models
Directory of Open Access Journals (Sweden)
A. Cabezas
2010-08-01
Full Text Available Sediment, total organic carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15-year return interval) were examined at one reach of the Middle Ebro River (NE Spain) to elucidate spatial patterns. To achieve this goal, four areas with different geomorphological features located within the study reach were examined using artificial grass mats. Within each area, 1 m^{2} study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. The TOC, TN and particle-size composition of the deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, site-specific effects and spatial autocorrelation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although other factors such as vegetation pattern should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework for dealing with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.
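A random-intercept model of this kind, with geomorphic area as the grouping factor, can be sketched on simulated plot data (hypothetical variable names, units and effect sizes, not the Ebro measurements):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
areas, plots = 4, 12
area = np.repeat(np.arange(areas), plots)
dist = rng.uniform(0, 50, size=areas * plots)          # distance to channel (m)
area_eff = rng.normal(scale=1.0, size=areas)[area]     # area-level random effect
sed = 10.0 - 0.1 * dist + area_eff + rng.normal(scale=0.5, size=areas * plots)

df = pd.DataFrame({"sed": sed, "dist": dist, "area": area})
# Random intercept per geomorphic area absorbs the clustered sampling design
m = smf.mixedlm("sed ~ dist", df, groups=df["area"]).fit()
print(m.params["dist"])   # fixed-effect slope recovered near the simulated -0.1
```

Ignoring the grouping (a plain OLS fit) would understate the standard errors, since plots within one geomorphic area are not independent observations.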
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, considerable work has recently been developed for consistent estimation of the causal relative risk and the causal odds ratio. Such models can sometimes suffer from identification issues with weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between the instrumental variable effects on the intermediate exposure and the instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
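The instrumental-variable idea can be sketched with the basic two-stage least squares estimator on simulated data with three genetic instruments and an unmeasured confounder; this illustrates the generic Mendelian-randomization setup, not the paper's structural-mean-model estimators:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
z = rng.binomial(2, 0.3, size=(n, 3))        # three genetic "instruments" (0/1/2)
u = rng.normal(size=n)                        # unmeasured confounder
x = z @ np.array([0.4, 0.3, 0.2]) + u + rng.normal(size=n)   # exposure
y = 0.5 * x + u + rng.normal(size=n)          # outcome; true causal effect = 0.5

# Stage 1: regress exposure on instruments; Stage 2: regress outcome on fitted x
Z = np.column_stack([np.ones(n), z])
xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
beta2 = np.linalg.lstsq(np.column_stack([np.ones(n), xhat]), y, rcond=None)[0]
print(beta2[1])   # close to 0.5 despite confounding by u

# Naive OLS of y on x is biased upward by the confounder
naive = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]
print(naive)
```

Because the instruments affect `y` only through `x` and are independent of `u`, the two-stage fit removes the confounding bias that inflates the naive regression coefficient.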
Robust-BD Estimation and Inference for General Partially Linear Models
Directory of Open Access Journals (Sweden)
Chunming Zhang
2017-11-01
Full Text Available The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of "robust-Bregman divergence (BD)" estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally efficient procedure for obtaining "robust-BD" estimators and establish the consistency and asymptotic normality of the "robust-BD" estimator of the parametric component β_o. For inference procedures on β_o in the GPLM, we show that the Wald-type test statistic W_n constructed from the "robust-BD" estimators is asymptotically distribution free under the null, whereas the likelihood ratio-type test statistic Λ_n is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang 2005) between W_n and Λ_n in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed "robust-BD" estimators and the robust Wald-type test in the presence of outlying observations.
Directory of Open Access Journals (Sweden)
Andrea Nobili
2015-01-01
Full Text Available Three generalizations of the Timoshenko beam model according to the linear theory of micropolar elasticity or its special cases, that is, the couple stress theory or the modified couple stress theory, recently developed in the literature, are investigated and compared. The analysis is carried out in a variational setting, making use of Hamilton's principle. It is shown that both the Timoshenko and the (possibly modified) couple stress models are based on a microstructural kinematics which is governed by kinosthenic (ignorable) terms in the Lagrangian. Despite their differences, in a beam-plane theory all models bring in only one microstructural material parameter. Besides, the micropolar model formally reduces to the couple stress model upon introducing the proper constraint on the microstructure kinematics, although the material parameter is generally different. Line loading on the microstructure results in a nonconservative force potential. Finally, the Hamiltonian form of the micropolar beam model is derived and the canonical equations are presented along with their general solution. The latter exhibits a general oscillatory pattern for the microstructure rotation and stress, whose behavior matches the numerical findings.
Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2018-06-01
In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient, called the costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and, finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
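The VI-mode convergence described above can be illustrated with a minimal numerical sketch. This is not the authors' multirate scheme: it is the classic single-rate policy-evaluation/policy-improvement pair for a discrete-time LQR problem, and the system matrices below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the paper's multirate GPI): iterate policy
# improvement and policy evaluation for the discrete-time LQR
# x_{k+1} = A x_k + B u_k with stage cost x'Qx + u'Ru.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.zeros((2, 2))              # value-function matrix, VI-mode start
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # policy improvement
    P = Q + A.T @ P @ (A - B @ K)                      # policy evaluation
print(np.round(P, 3))             # converges to the Riccati fixed point
```

Starting from P = 0 gives the monotonically increasing "VI-mode" convergence mentioned in the abstract.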
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with those of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
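The two simpler model families named in the abstract can be sketched in a few lines. The data below are synthetic (an exact power law with an invented exponent, initial volume, and time grid), not patient data.

```python
import numpy as np

# Hedged sketch of (1) a linear model with constant volume change and
# (2) a power-law fit of daily volume against the initial volume.
# All numbers are invented for illustration.
days = np.arange(1.0, 11.0)
v0 = 10.0
volumes = v0 * days ** -0.3            # synthetic shrinking tumor

# (1) linear model: V(t) ~ a + b*t, by least squares
b, a = np.polyfit(days, volumes, 1)

# (2) power model: V(t) ~ v0 * t**p, fit in log-log space
p = np.polyfit(np.log(days), np.log(volumes / v0), 1)[0]
print(round(float(p), 3))              # recovers the exponent -0.3
```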
Ultra Linear Low-loss Varactors & Circuits for Adaptive RF Systems
Huang, C.
2010-01-01
With the evolution of wireless communication, varactors can play an important role in enabling adaptive transceivers as well as phase-diversity systems. This thesis presents various varactor diode-based circuit topologies that facilitate RF adaptivity. The proposed varactor configurations can act as
Galaxy bias and non-linear structure formation in general relativity
International Nuclear Information System (INIS)
Baldauf, Tobias; Seljak, Uroš; Senatore, Leonardo; Zaldarriaga, Matias
2011-01-01
Length scales probed by the large scale structure surveys are becoming closer and closer to the horizon scale. Further, it has been recently understood that non-Gaussianity in the initial conditions could show up in a scale dependence of the bias of galaxies at the largest possible distances. It is therefore important to take General Relativistic effects into account. Here we provide a General Relativistic generalization of the bias that is valid both for Gaussian and for non-Gaussian initial conditions. The collapse of objects happens on very small scales, while long-wavelength modes are always in the quasi linear regime. Around every small collapsing region, it is therefore possible to find a reference frame that is valid for arbitrary times and where the space time is almost flat: the Fermi frame. Here the Newtonian approximation is applicable and the equations of motion are the ones of the standard N-body codes. The effects of long-wavelength modes are encoded in the mapping from the cosmological frame to the local Fermi frame. At the level of the linear bias, the effect of the long-wavelength modes on the dynamics of the short scales is all encoded in the local curvature of the Universe, which allows us to define a General Relativistic generalization of the bias in the standard Newtonian setting. We show that the bias due to this effect goes to zero as the square of the ratio between the physical wavenumber and the Hubble scale for modes longer than the horizon, confirming the intuitive picture that modes longer than the horizon do not have any dynamical effect. On the other hand, the bias due to non-Gaussianities does not need to vanish for modes longer than the Hubble scale, and for non-Gaussianities of the local kind it goes to a constant. As a further application of our setup, we show that it is not necessary to perform large N-body simulations to extract information about long-wavelength modes: N-body simulations can be done on small scales and long
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator-Activated Receptor Gamma (PPARG) gene associated with diabetes.
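The BLUP-ridge equivalence invoked in the abstract can be checked numerically. The sketch below assumes a plain linear mixed model y = Xb + e with b ~ N(0, τ²I) and e ~ N(0, σ²I), for which the BLUP of b coincides with the ridge estimator under penalty λ = σ²/τ²; all dimensions and variance values are illustrative.

```python
import numpy as np

# Hedged sketch: BLUP of random coefficients equals ridge regression
# with penalty lambda = sigma^2 / tau^2. Data are synthetic.
rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

sigma2, tau2 = 1.0, 0.5
lam = sigma2 / tau2

# Ridge estimator: (X'X + lam*I)^{-1} X'y
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# BLUP: E[b | y] = tau^2 X' (tau^2 X X' + sigma^2 I)^{-1} y
b_blup = tau2 * X.T @ np.linalg.solve(tau2 * X @ X.T + sigma2 * np.eye(n), y)
print(np.allclose(b_ridge, b_blup))    # True, by the Woodbury identity
```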
Diagnostics for generalized linear hierarchical models in network meta-analysis.
Zhao, Hong; Hodges, James S; Carlin, Bradley P
2017-09-01
Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.
Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang
2015-11-17
Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for the physicians' decision on office allocation, covering demand-side factors and a consumption time function. To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association with more highly populated districts than GPs do. Although indicators for regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters representing physicians' preferences in over- and undersupplied regions.
Optimal Stochastic Control Problem for General Linear Dynamical Systems in Neuroscience
Directory of Open Access Journals (Sweden)
Yan Chen
2017-01-01
Full Text Available This paper considers a d-dimensional stochastic optimization problem in neuroscience. Supposing the arm’s movement trajectory is modeled by a high-order linear stochastic differential dynamic system in d-dimensional space, the optimal trajectory, velocity, and variance are explicitly obtained by using the stochastic control method, which allows us to analytically establish exact relationships between various quantities. Moreover, the optimal trajectory is almost a straight line for a reaching movement; the optimal velocity is bell-shaped; and the optimal variance is consistent with the experimental Fitts law, that is, the longer the time of a reaching movement, the higher the accuracy of arriving at the target position. The results can be directly applied to designing a reaching movement performed by a robotic arm in a more general environment.
Wu, Jiayang; Cao, Pan; Hu, Xiaofeng; Jiang, Xinhong; Pan, Ting; Yang, Yuxing; Qiu, Ciyuan; Tremblay, Christine; Su, Yikai
2014-10-20
We propose and experimentally demonstrate an all-optical temporal differential-equation solver that can be used to solve ordinary differential equations (ODEs) characterizing general linear time-invariant (LTI) systems. The photonic device implemented by an add-drop microring resonator (MRR) with two tunable interferometric couplers is monolithically integrated on a silicon-on-insulator (SOI) wafer with a compact footprint of ~60 μm × 120 μm. By thermally tuning the phase shifts along the bus arms of the two interferometric couplers, the proposed device is capable of solving first-order ODEs with two variable coefficients. The operation principle is theoretically analyzed, and system testing of solving ODE with tunable coefficients is carried out for 10-Gb/s optical Gaussian-like pulses. The experimental results verify the effectiveness of the fabricated device as a tunable photonic ODE solver.
DEFF Research Database (Denmark)
Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.
2018-01-01
Dynamical models and statistical models of neural spiking are often separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-05-01
We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.
DEFF Research Database (Denmark)
Dlugosz, Stephan; Mammen, Enno; Wilke, Ralf
2017-01-01
Large data sets that originate from administrative or operational activity are increasingly used for statistical analysis as they often contain very precise information and a large number of observations. But there is evidence that some variables can be subject to severe misclassification or contain missing values. Given the size of the data, a flexible semiparametric misclassification model would be a good choice, but its use in practice is scarce. To close this gap, a semiparametric model for the probability of observing labour market transitions is estimated using a sample of 20 m observations from Germany. It is shown that estimated marginal effects of a number of covariates are sizeably affected by misclassification and missing values in the analysis data. The proposed generalized partially linear regression extends existing models by allowing a misclassified discrete covariate…
Directory of Open Access Journals (Sweden)
Nurdan Cetin
2014-01-01
Full Text Available We consider a multiobjective linear fractional transportation problem (MLFTP) with several fractional criteria, such as the maximization of transport profitability like profit/cost or profit/time, subject to source and destination constraints. Our aim is to introduce the MLFTP, which has not been studied in the literature before, and to provide a fuzzy approach which obtains a compromise Pareto-optimal solution for this problem. To do this, first, we present a theorem which shows that the MLFTP is always solvable. And then, reducing the MLFTP to Zimmermann’s “min” operator model, which is the max-min problem, we construct the Generalized Dinkelbach’s Algorithm for solving the obtained problem. Furthermore, we provide an illustrative numerical example to explain this fuzzy approach.
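Dinkelbach-type algorithms reduce a fractional objective to a sequence of parametric subproblems. The sketch below is a single-ratio toy version (not the multiobjective, fuzzy variant of the paper), with an invented profit/cost ratio maximized over a grid of feasible points.

```python
# Hedged sketch of Dinkelbach's algorithm for max N(x)/D(x):
# repeatedly solve the parametric problem max_x N(x) - lam*D(x)
# and update lam = N(x)/D(x) until the parametric optimum is zero.
def dinkelbach(N, D, xs, tol=1e-10):
    lam = 0.0
    for _ in range(100):
        x = max(xs, key=lambda x: N(x) - lam * D(x))
        if abs(N(x) - lam * D(x)) < tol:    # zero => lam is the optimal ratio
            return x, lam
        lam = N(x) / D(x)
    return x, lam

# maximize (3x + 1)/(x + 2) over a grid on [0, 5]; optimum at x = 5
xs = [i / 100 for i in range(0, 501)]
x_star, lam_star = dinkelbach(lambda x: 3 * x + 1, lambda x: x + 2, xs)
print(x_star, round(lam_star, 4))           # 5.0 2.2857
```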
Shen, Peiping; Zhang, Tongli; Wang, Chunfeng
2017-01-01
This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. Based on the computational complexity result, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank of the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
Directory of Open Access Journals (Sweden)
Wen-Min Zhou
2013-01-01
Full Text Available This paper is concerned with the consensus problem of general linear discrete-time multiagent systems (MASs) with random packet dropout that happens during information exchange between agents. The packet dropout phenomenon is characterized as a Bernoulli random process. A distributed consensus protocol with a weighted graph is proposed to address the packet dropout phenomenon. Through introducing a new disagreement vector, a new framework is established to solve the consensus problem. Based on control theory, the perturbation argument, and matrix theory, the necessary and sufficient condition for MASs to reach mean-square consensus is derived in terms of the stability of an array of low-dimensional matrices. Moreover, mean-square consensusable conditions with regard to network topology and agent dynamic structure are also provided. Finally, the effectiveness of the theoretical results is demonstrated through an illustrative example.
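A minimal sketch of consensus under Bernoulli packet dropout, assuming a simple undirected line graph with symmetric link failures and invented gain and dropout parameters rather than the protocol of the paper.

```python
import numpy as np

# Hedged sketch: average consensus on a 4-agent line graph where each
# link independently drops its packet with probability p_drop per step
# (Bernoulli); a dropped link contributes nothing that step.
rng = np.random.default_rng(1)
edges = [(0, 1), (1, 2), (2, 3)]
x = np.array([1.0, 3.0, 5.0, 7.0])
eps, p_drop = 0.3, 0.2

for _ in range(200):
    update = np.zeros_like(x)
    for i, j in edges:
        if rng.random() > p_drop:          # packet got through
            update[i] += eps * (x[j] - x[i])
            update[j] += eps * (x[i] - x[j])
    x = x + update
print(np.round(x, 3))   # all agents close to the average 4.0
```

Because the link failures here are symmetric, the state average is preserved and the agents contract toward it despite the dropouts.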
Hennelly, Bryan M.; Sheridan, John T.
2005-05-01
By use of matrix-based techniques it is shown how the space-bandwidth product (SBP) of a signal, as indicated by the location of the signal energy in the Wigner distribution function, can be tracked through any quadratic-phase optical system whose operation is described by the linear canonical transform. Then, applying the regular uniform sampling criteria imposed by the SBP and linking the criteria explicitly to a decomposition of the optical matrix of the system, it is shown how numerical algorithms (employing interpolation and decimation), which exhibit both invertibility and additivity, can be implemented. Algorithms appearing in the literature for a variety of transforms (Fresnel, fractional Fourier) are shown to be special cases of our general approach. The method is shown to allow the existing algorithms to be optimized and is also shown to permit the invention of many new algorithms.
A distributed-memory hierarchical solver for general sparse linear systems
Energy Technology Data Exchange (ETDEWEB)
Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering
2017-12-20
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
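The core compression idea, low-rank approximation of fill-in blocks, can be sketched with a truncated SVD on a synthetic, numerically low-rank block; the sizes, rank, and tolerance below are illustrative assumptions, not the solver's internals.

```python
import numpy as np

# Hedged sketch: compress a numerically low-rank off-diagonal block
# with a truncated SVD, the trade-off that lets a hierarchical solver
# act as a direct solver (tight tolerance) or preconditioner (loose).
rng = np.random.default_rng(4)
U = rng.normal(size=(200, 5))
V = rng.normal(size=(5, 200))
block = U @ V + 1e-8 * rng.normal(size=(200, 200))   # rank ~5 plus noise

u, s, vt = np.linalg.svd(block, full_matrices=False)
r = int(np.sum(s > 1e-6 * s[0]))                     # numerical rank
approx = (u[:, :r] * s[:r]) @ vt[:r]                 # rank-r factors
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
print(r, rel_err < 1e-5)                             # 5 True
```

Storing the rank-r factors needs 2·200·r numbers instead of 200², which is the memory saving the abstract refers to.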
Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel
Kleinschmidt, Dave F.; Jaeger, T. Florian
2016-01-01
Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker’s /p/ might be physically indistinguishable from another talker’s /b/ (cf. lack of invariance). We characterize the computational problem posed by such a subjectively non-stationary world and propose that the speech perception system overcomes this challenge by (1) recognizing previously encountered situations, (2) generalizing to other situations based on previous similar experience, and (3) adapting to novel situations. We formalize this proposal in the ideal adapter framework: (1) to (3) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on two critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires listeners learn to represent the structured component of cross-situation variability in the speech signal. We discuss how these two aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension. PMID:25844873
Directory of Open Access Journals (Sweden)
Enrique Calderín-Ojeda
2017-11-01
Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun
2018-03-15
Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive shrinkage amount on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.
Non-linear general instability of ring-stiffened conical shells under external hydrostatic pressure
International Nuclear Information System (INIS)
Ross, C T F; Kubelt, C; McLaughlin, I; Etheridge, A; Turner, K; Paraskevaides, D; Little, A P F
2011-01-01
The paper presents the experimental results for 15 ring-stiffened circular steel conical shells, which failed by non-linear general instability. The results of these investigations were compared with various theoretical analyses, including an ANSYS eigen buckling analysis and another ANSYS analysis, which involved a step-by-step method until collapse, where both material and geometrical nonlinearity were considered. The investigation also involved an analysis using BS5500 (PD 5500), together with the method of Ross of the University of Portsmouth. The ANSYS eigen buckling analysis tended to overestimate the predicted buckling pressures, whereas the ANSYS nonlinear results compared favourably with the experimental results. The PD5500 analysis was very time consuming and tended to grossly underestimate the experimental buckling pressures and, in some cases, to overestimate them. In contrast to PD5500 and ANSYS, the design charts of Ross of the University of Portsmouth were the easiest of all these methods to use and generally only slightly underestimated the experimental collapse pressures. The ANSYS analyses gave some excellent graphical displays.
Jamali, R. M. Jalal Uddin; Hashem, M. M. A.; Hasan, M. Mahfuz; Rahman, Md. Bazlar
2013-01-01
Solving a set of simultaneous linear equations is probably the most important topic in numerical methods. For solving linear equations, iterative methods are preferred over the direct methods, especially when the coefficient matrix is sparse. The rate of convergence of an iterative method is increased by using the Successive Relaxation (SR) technique. But the SR technique is very sensitive to the relaxation factor, ω. Recently, hybridization of classical Gauss-Seidel based successive relaxation t...
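For reference, a minimal Gauss-Seidel based Successive Relaxation (SOR) sweep of the kind the abstract builds on; the test matrix and relaxation factor ω are invented for illustration.

```python
import numpy as np

# Hedged sketch of SOR: sweep the unknowns in order, blending each
# Gauss-Seidel update with the old value via the relaxation factor omega.
def sor(A, b, omega, iters=100):
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]            # off-diagonal part
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = sor(A, b, omega=1.2)
print(np.round(x, 6))    # matches np.linalg.solve(A, b)
```

For this diagonally dominant matrix any 0 < ω < 2 converges, but the rate varies sharply with ω, which is the sensitivity the abstract highlights.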
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J2) statistics can be applied directly. In a simulation study, TG, HL, and J2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J2. © 2015 John Wiley & Sons Ltd/London School of Economics.
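A sketch of the grouped observed-versus-expected construction behind Hosmer-Lemeshow-type statistics, using synthetic logistic data and, for simplicity, treating the true probabilities as the fitted values; the group count, coefficients, and sample size are arbitrary assumptions.

```python
import numpy as np

# Hedged sketch: sort by fitted probability, split into g groups
# ("deciles of risk"), and sum (observed - expected)^2 / variance.
rng = np.random.default_rng(2)
n, g = 1000, 10
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + x)))   # true logistic model
y = rng.binomial(1, p_true)

p_hat = p_true                               # pretend fitted = truth
order = np.argsort(p_hat)
groups = np.array_split(order, g)

hl = 0.0
for idx in groups:
    obs, expct = y[idx].sum(), p_hat[idx].sum()
    var = expct * (1.0 - p_hat[idx].mean())  # approximate group variance
    hl += (obs - expct) ** 2 / var
print(round(hl, 2))
```

In practice the statistic is referred to a chi-square distribution with roughly g − 2 degrees of freedom when the model is correct.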
Directory of Open Access Journals (Sweden)
Ana Calabrese
2011-01-01
Full Text Available In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: (1) a stimulus filter (STRF); and (2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than an instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation-limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
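The generative form of the GLM described above (a stimulus filter plus a post-spike filter feeding an exponential nonlinearity) can be sketched as a small simulation; the filters, baseline rate, and white-noise stimulus are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

# Hedged sketch of the GLM's generative form: stimulus filter and
# post-spike filter drive an exponential nonlinearity, and spikes are
# drawn as Bernoulli events in 1 ms bins.
rng = np.random.default_rng(3)
T, dt = 2000, 0.001
stim = rng.normal(size=T)
k = np.array([0.8, 0.4, 0.2])      # stimulus filter (most recent first)
h = np.array([-5.0, -2.0, -0.5])   # post-spike filter (refractoriness)
b = np.log(20.0)                   # baseline log-rate (~20 spikes/s)

spikes = np.zeros(T)
for t in range(3, T):
    drive = b + k @ stim[t-3:t][::-1] + h @ spikes[t-3:t][::-1]
    rate = np.exp(drive)           # conditional intensity in spikes/s
    spikes[t] = float(rng.random() < min(rate * dt, 1.0))
print(int(spikes.sum()), "spikes in", T * dt, "s")
```

The negative post-spike weights implement the history dependence the abstract describes: a spike transiently suppresses the intensity in the following bins.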
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-01-01
A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
The Adapted Ordering Method for Lie algebras and superalgebras and their generalizations
Energy Technology Data Exchange (ETDEWEB)
Gato-Rivera, Beatriz [Instituto de Matematicas y Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); NIKHEF-H, Kruislaan 409, NL-1098 SJ Amsterdam (Netherlands)
2008-02-01
In 1998 the Adapted Ordering Method was developed for the representation theory of the superconformal algebras in two dimensions. It allows us to determine maximal dimensions for a given type of space of singular vectors, to identify all singular vectors by only a few coefficients, to spot subsingular vectors and to set the basis for constructing embedding diagrams. In this paper we present the Adapted Ordering Method for general Lie algebras and superalgebras and their generalizations, provided they can be triangulated. We also review briefly the results obtained for the Virasoro algebra and for the N = 2 and Ramond N = 1 superconformal algebras.
International Nuclear Information System (INIS)
Huang, Zhibin; Mayr, Nina A.; Lo, Simon S.; Wang, Jian Z.; Jia Guang; Yuh, William T. C.; Johnke, Roberta
2012-01-01
Purpose: It has been conventionally assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggests that this may not be the case; rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown would suggest that an exponential repair pattern does not necessarily predict the repair process accurately. The purpose of this study was therefore to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. Methods: The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage, with T_r as the repair half-time. When a reciprocal pattern of the repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. Published animal data were adopted to test the reciprocal formulas. Results: A generalized LQ model describing the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced χ²_min of 2.0 vs 4.3, p = 0.11 vs 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, T_r = 3.2-3.9 h for rat feet skin, and α/β = 0.9 Gy, T_r = 1.1 h for rat spinal cord. Conclusions: These results suggest that the generalized LQ model incorporating a reciprocal time dependence of sublethal damage repair fits the data better than the exponential repair model. These formulas can be used to analyze experimental and clinical data where a slowing-down repair process appears during the course of radiation therapy.
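The practical consequence of a slower-than-exponential repair kernel can be illustrated with a split-dose calculation. Both kernels and all parameter values below are illustrative assumptions for the sketch, not the paper's fitted forms:

```python
import numpy as np

# Two equal fractions of dose d separated by a gap dt.  In the
# incomplete-repair LQ model the surviving fraction is
#   SF = exp(-(2*alpha*d + beta*d^2*(2 + 2*h(dt)))),
# where h(dt) is the unrepaired-damage factor: exp(-ln2*dt/Tr) for
# exponential repair, and here 1/(1 + ln2*dt/Tr) as an assumed
# reciprocal-time (slowing-down) alternative.
alpha, beta = 0.3, 0.03                     # Gy^-1, Gy^-2 (illustrative)
Tr = 1.5                                    # repair half-time in hours
d = 5.0                                     # dose per fraction, Gy
dts = np.linspace(0.0, 12.0, 49)            # inter-fraction gap, hours

h_exp = np.exp(-np.log(2) * dts / Tr)       # exponential repair
h_rec = 1.0 / (1.0 + np.log(2) * dts / Tr)  # reciprocal-time repair

def sf(h):
    return np.exp(-(2 * alpha * d + beta * d * d * (2 + 2 * h)))

print("SF (exp repair) at 6 h gap:", sf(h_exp)[24])
print("SF (rec repair) at 6 h gap:", sf(h_rec)[24])
```

Because 1/(1+x) > exp(-x) for x > 0, the reciprocal kernel leaves more unrepaired damage at long gaps, so the predicted surviving fraction is lower than under exponential repair, which is the qualitative behavior the paper exploits when fitting split-dose data.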
DEFF Research Database (Denmark)
Hundebøll, Martin; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani
2014-01-01
This work studies the potential and impact of the FRANC network coding protocol for delivering high quality Dynamic Adaptive Streaming over HTTP (DASH) in wireless networks. Although DASH aims to tailor the video quality rate based on the available throughput to the destination, it relies...
An implicit adaptation algorithm for a linear model reference control system
Mabius, L.; Kaufman, H.
1975-01-01
This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.
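Model reference adaptive control of this kind can be sketched for a first-order plant. The explicit Lyapunov-rule update below is the standard textbook scheme, not the paper's implicit algorithm (which avoids explicit parameter estimation); gains and signals are illustrative:

```python
import numpy as np

# Plant: x' = a*x + u (a = 1, unstable).  Reference: xm' = -2*xm + 2*r.
# Control u = kr*r - kx*x; the adaptation laws follow from requiring
# V = e^2/2 + (parameter errors)^2/(2*gamma) to be non-increasing.
a, am, bm = 1.0, -2.0, 2.0
gamma, dt = 5.0, 1e-3
x = xm = 0.0
kx = kr = 0.0
errs = []
for i in range(40000):
    r = 1.0 if (i // 5000) % 2 == 0 else -1.0   # square-wave reference
    u = kr * r - kx * x
    e = x - xm                                  # model-following error
    x += dt * (a * x + u)
    xm += dt * (am * xm + bm * r)
    kx += dt * (gamma * e * x)                  # Lyapunov adaptation laws
    kr += dt * (-gamma * e * r)
    errs.append(e * e)

print("mean sq. error, first vs last quarter:",
      np.mean(errs[:10000]), np.mean(errs[-10000:]))
```

The ideal gains here are kx = a − am = 3 and kr = bm = 2; the squared model-following error shrinks as the adapted gains approach them.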
International Nuclear Information System (INIS)
Ma, Yaping; Wei, Guo; Sun, Jinwei; Xiao, Yegui
2016-01-01
In this paper, a multichannel nonlinear adaptive noise canceller (ANC) based on the generalized functional link artificial neural network (FLANN, GFLANN) is proposed for fetal electrocardiogram (FECG) extraction. A FIR filter and a GFLANN are equipped in parallel in each reference channel to respectively approximate the linearity and nonlinearity between the maternal ECG (MECG) and the composite abdominal ECG (AECG). A fast scheme is also introduced to reduce the computational cost of the FLANN and the GFLANN. Two (2) sets of ECG time sequences, one synthetic and one real, are utilized to demonstrate the improved effectiveness of the proposed nonlinear ANC. The real dataset is derived from the Physionet non-invasive FECG database (PNIFECGDB) including 55 multichannel recordings taken from a pregnant woman. It contains two subdatasets that consist of 14 and 8 recordings, respectively, with each recording being 90 s long. Simulation results based on these two datasets reveal, on the whole, that the proposed ANC does enjoy higher capability to deal with nonlinearity between MECG and AECG as compared with previous ANCs in terms of fetal QRS (FQRS)-related statistics and morphology of the extracted FECG waveforms. In particular, for the second real subdataset, the F1-measure results produced by the PCA-based template subtraction (TS_pca) technique and six (6) single-reference channel ANCs using LMS- and RLS-based FIR filters, Volterra filter, FLANN, GFLANN, and adaptive echo state neural network (ESN_a) are 92.47%, 93.70%, 94.07%, 94.22%, 94.90%, 94.90%, and 95.46%, respectively. The same F1-measure statistical results from five (5) multi-reference channel ANCs (LMS- and RLS-based FIR filters, Volterra filter, FLANN, and GFLANN) for the second real subdataset turn out to be 94.08%, 94.29%, 94.68%, 94.91%, and 94.96%, respectively. These results indicate that the ESN_a and GFLANN perform best, with the ESN_a being slightly better than the GFLANN but about four times
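The linear (FIR) branch of such a canceller reduces to a classic LMS adaptive noise canceller, sketched below on synthetic data with a hypothetical mixing path; the paper's GFLANN branch for the nonlinear coupling is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Primary channel: a weak "fetal" signal buried in a filtered version
# of the reference ("maternal") signal.  The adaptive FIR filter learns
# the mixing path, so the residual e approximates the fetal component.
N, L, mu = 20000, 8, 0.01
t = np.arange(N)
fetal = 0.2 * np.sin(2 * np.pi * t / 37.0)
ref = rng.normal(size=N)                   # reference (maternal) signal
path = np.array([0.8, -0.4, 0.2, 0.1])     # unknown linear path (assumed)
primary = fetal + np.convolve(ref, path)[:N]

w = np.zeros(L)
err = np.zeros(N)
for n in range(L, N):
    u = ref[n - L + 1:n + 1][::-1]         # most recent sample first
    e = primary[n] - w @ u
    w += 2 * mu * e * u                    # LMS weight update
    err[n] = e

# After convergence the residual should be close to the fetal signal.
mse = np.mean((err[-5000:] - fetal[-5000:]) ** 2)
print("residual-vs-fetal MSE:", mse)
```

The learned taps converge toward the mixing path because the fetal component is uncorrelated with the reference; a real FECG problem adds the nonlinear MECG-to-AECG distortion that motivates the GFLANN branch.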
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
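The COM-Poisson pmf P(k) ∝ λ^k/(k!)^ν makes the dispersion behavior easy to verify numerically. The sketch below (truncated normalization, illustrative parameter values) recovers over-, equi-, and under-dispersion as ν moves below, at, and above 1:

```python
import numpy as np

def com_poisson_pmf(lam, nu, kmax=200):
    # P(k) proportional to lam**k / (k!)**nu, truncated at kmax.
    k = np.arange(kmax + 1)
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, kmax + 1)))))
    logp = k * np.log(lam) - nu * log_fact
    p = np.exp(logp - logp.max())           # stabilize before normalizing
    return p / p.sum()

def mean_var(lam, nu):
    p = com_poisson_pmf(lam, nu)
    k = np.arange(p.size)
    m = (k * p).sum()
    return m, ((k - m) ** 2 * p).sum()

for nu in (0.5, 1.0, 2.0):                  # over-, equi-, under-dispersed
    m, v = mean_var(3.0, nu)
    print(f"nu={nu}: mean={m:.2f} var={v:.2f} ratio={v/m:.2f}")
```

In the COM-Poisson GLM, λ (or a centering transform of it) is linked to the covariates while ν controls dispersion, which is what lets one model handle under-, equi-, and overdispersed counts.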
Directory of Open Access Journals (Sweden)
Miguel Flores
2016-11-01
This work aims to classify DNA sequences as healthy or malignant. Supervised and unsupervised classification methods are used in a functional-data context, i.e., each DNA strand is one observation. Because the observations are discretized, different ways of representing them as functions are evaluated. In addition, an exploratory study is done, estimating the functional mean and variance for each type of cancer. For the unsupervised classification method, hierarchical clustering with different measures of functional distance is used. For the supervised classification method, a functional generalized linear model is used, with the first and second derivatives included as discriminating variables. It has been verified that one of the advantages of working in the functional context is obtaining a model that classifies the cancers with 100% accuracy. The methods were implemented with the fda.usc R package, which includes the functional data analysis techniques used in this work as well as others developed in recent decades. For more details of these techniques, see Ramsay and Silverman (2005) and Ferraty et al. (2006).
Population decoding of motor cortical activity using a generalized linear model with hidden states.
Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam
2010-06-15
Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. Copyright (c) 2010 Elsevier B.V. All rights reserved.
A new approach in simulating RF linacs using a general, linear real-time signal processor
International Nuclear Information System (INIS)
Young, A.; Jachim, S.P.
1991-01-01
Strict requirements on the tolerances of the amplitude and phase of the radio frequency (RF) cavity field are necessary to advance the field of accelerator technology. Due to these stringent requirements upon modern accelerators, a new approach to modeling and simulation is essential in developing and understanding their characteristics. This paper describes the implementation of a general, linear model of an RF cavity which is used to develop a real-time signal processor. This device fully emulates the response of an RF cavity upon receiving characteristic parameters (Q₀, ω₀, Δω, R_S, Z₀). Simulating an RF cavity with a real-time signal processor is beneficial to an accelerator designer because the device allows one to answer fundamental questions on the response of the cavity to a particular stimulus without operating the accelerator. In particular, the complex interactions between the RF power and the control systems, the beam, and the cavity fields can simply be observed in the real-time domain. The signal processor can also be used upon initialization of the accelerator as a diagnostic device and as a dummy load for determining the closed-loop error of the control system. In essence, the signal processor is capable of providing information that allows an operator to determine whether the control systems and peripheral devices are operating properly without going through the tedious procedure of running the beam through a cavity.
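A common linear cavity description that such a processor could emulate is the first-order complex-envelope (baseband) model. The sketch below (illustrative parameter values and drive coupling) integrates it and checks the steady state against the closed-form response:

```python
import numpy as np

# Baseband cavity-mode model:  dV/dt = (-w_half + 1j*dw)*V + w_half*K*u(t),
# with w_half = w0/(2*Q0) the half bandwidth and dw the detuning.
# All parameter values and the drive coupling K are assumptions.
w0 = 2 * np.pi * 1.3e9            # resonance frequency, rad/s
Q0 = 3.0e4                        # unloaded quality factor
w_half = w0 / (2 * Q0)
dw = 0.5 * w_half                 # detuning: half the cavity bandwidth
K = 1.0
dt = 1.0 / (50 * w_half)          # resolve the envelope time constant
V = 0.0 + 0.0j
for _ in range(20000):            # step response to a unit drive u = 1
    V += dt * ((-w_half + 1j * dw) * V + w_half * K * 1.0)

# Closed-form steady state:  V = w_half*K / (w_half - 1j*dw), i.e.
# |V| = K / sqrt(1 + (dw/w_half)^2), phase = atan(dw/w_half).
print("|V| =", abs(V), "phase (deg) =", np.degrees(np.angle(V)))
```

A hardware emulator implements essentially this difference equation per sample, which is why the abstract's parameter set (bandwidth, detuning, impedances) suffices to reproduce the cavity's response to an arbitrary stimulus.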
Directory of Open Access Journals (Sweden)
Tülin Acar
2012-01-01
The aim of this research is to compare the results of differential item functioning (DIF) detection with the hierarchical generalized linear model (HGLM) technique against the results of DIF detection with the logistic regression (LR) and item response theory-likelihood ratio (IRT-LR) techniques on the same test items. To this end, it is first determined with the HGLM, LR, and IRT-LR techniques whether students encounter DIF, according to socioeconomic status (SES), in the Turkish, Social Sciences, and Science subtest items of the Secondary School Institutions Examination. When inspecting the agreement among the techniques in identifying items exhibiting DIF, a significant correlation was found between the results of the IRT-LR and LR techniques in all subtests; only in the Science subtest was the correlation between the HGLM and IRT-LR techniques found significant. DIF analyses can be carried out on test items with other DIF techniques not within the scope of this research, and results obtained with these techniques in different sample sizes can be compared.
Spatial generalized linear mixed models of electric power outages due to hurricanes and ice storms
International Nuclear Information System (INIS)
Liu Haibin; Davidson, Rachel A.; Apanasovich, Tatiyana V.
2008-01-01
This paper presents new statistical models that predict the number of hurricane- and ice storm-related electric power outages likely to occur in each 3 km × 3 km grid cell in a region. The models are based on a large database of recent outages experienced by three major East Coast power companies in six hurricanes and eight ice storms. A spatial generalized linear mixed modeling (GLMM) approach was used in which spatial correlation is incorporated through random effects. Models were fitted using a composite likelihood approach and the covariance matrix was estimated empirically. A simulation study was conducted to test the model estimation procedure, and model training, validation, and testing were done to select the best models and assess their predictive power. The final hurricane model includes number of protective devices, maximum gust wind speed, hurricane indicator, and company indicator covariates. The final ice storm model includes number of protective devices, ice thickness, and ice storm indicator covariates. The models should be useful for power companies as they plan for future storms. The statistical modeling approach offers a new way to assess the reliability of electric power and other infrastructure systems in extreme events.
International Nuclear Information System (INIS)
Manrique, John Peter O.; Costa, Alessandro M.
2016-01-01
The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to a patient undergoing radiation therapy, treatment planning systems (TPS) are used; these employ convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum to perform three-dimensional dose calculations, ensuring better accuracy in the tumor control probabilities while keeping the normal tissue complication probabilities low. In this work we obtained the photon fluence spectrum of the 6 MV X-ray beam of a Siemens ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the quasi-equilibrium statistics of Tsallis. For validation of the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV beam, using Monte Carlo simulation with the PENELOPE code, and from the PDD we then calculated the beam quality index TPR20/10. (author)
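The inverse problem can be sketched as recovering spectral bin weights from a transmission curve. The toy example below uses classical Metropolis simulated annealing (not the Tsallis-based generalized form) with made-up attenuation coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)

# Transmission through thickness x of absorber: T(x) = sum_i w_i*exp(-mu_i*x).
# We recover the (normalized) bin weights w from "measured" T(x).
mu_bins = np.array([0.5, 0.2, 0.08])        # cm^-1, illustrative energy bins
w_true = np.array([0.2, 0.5, 0.3])
x = np.linspace(0.0, 10.0, 21)              # absorber thicknesses, cm
A = np.exp(-np.outer(x, mu_bins))           # forward model matrix
T_meas = A @ w_true

def cost(w):
    return np.sum((A @ w - T_meas) ** 2)

w = np.full(3, 1.0 / 3.0)                   # start from a flat spectrum
c = cost(w)
temp = 1e-3
for step in range(20000):
    prop = np.clip(w + rng.normal(scale=0.02, size=3), 0.0, None)
    prop /= prop.sum()                      # keep weights on the simplex
    cp = cost(prop)
    if cp < c or rng.random() < np.exp((c - cp) / temp):
        w, c = prop, cp                     # Metropolis acceptance
    temp *= 0.9997                          # geometric cooling schedule

print("recovered weights:", np.round(w, 3), "cost:", c)
```

Generalized (Tsallis) simulated annealing replaces the Gaussian proposal and Boltzmann acceptance with heavier-tailed, q-parameterized forms, which helps escape local minima in the real, ill-conditioned spectrum-unfolding problem.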
Adaptions of ArcGIS' Linear Referencing System to the Coastal Environment
DEFF Research Database (Denmark)
Balstrøm, Thomas
2008-01-01
For many years it has been problematic to store information for the coastal environment in a GIS. However, a system named "Linear Referencing System", based upon a dynamic segmentation principle implemented in ESRI's ArcGIS 9 software, has now made it possible to store and analyze information...
Dos Santos, P Lopes; Deshpande, Sunil; Rivera, Daniel E; Azevedo-Perdicoúlis, T-P; Ramos, J A; Younger, Jarred
2013-12-31
There is good evidence that naltrexone, an opioid antagonist, has a strong neuroprotective role and may be a potential drug for the treatment of fibromyalgia. In previous work, some of the authors used experimental clinical data to identify input-output linear time invariant models that were used to extract useful information about the effect of this drug on fibromyalgia symptoms. Additional factors such as anxiety, stress, mood, and headache were considered as additive disturbances. However, it seems reasonable to think that these factors do not affect the drug actuation, but only the way in which a participant perceives how the drug acts on herself. Under this hypothesis the linear time invariant models can be replaced by State-Space Affine Linear Parameter Varying models where the disturbances are seen as a scheduling signal acting only on the parameters of the output equation. In this paper a new algorithm for identifying such a model is proposed. This algorithm minimizes a quadratic criterion of the output error. Since the output error is a linear function of some parameters, the Affine Linear Parameter Varying system identification is formulated as a separable nonlinear least squares problem. As in other identification algorithms using gradient optimization methods, several parameter derivatives are dynamical systems that must be simulated. In order to increase time efficiency, a canonical parametrization that minimizes the number of systems to be simulated is chosen. The effectiveness of the algorithm is assessed in a case study where an Affine Linear Parameter Varying model is identified from the experimental data used in the previous study and compared with the time-invariant model.
Energy Technology Data Exchange (ETDEWEB)
Xu Yuhua, E-mail: yuhuaxu2004@163.co [College of Information Science and Technology, Donghua University, Shanghai 201620 (China) and Department of Maths, Yunyang Teacher's College, Hubei 442000 (China); Zhou Wuneng, E-mail: wnzhou@163.co [College of Information Science and Technology, Donghua University, Shanghai 201620 (China); Fang Jian'an [College of Information Science and Technology, Donghua University, Shanghai 201620 (China); Lu Hongqian [Shandong Institute of Light Industry, Shandong Jinan 250353 (China)
2009-12-28
This Letter proposes an approach to identify the topological structure and unknown parameters for uncertain general complex networks simultaneously. By designing effective adaptive controllers, we achieve synchronization between two complex networks. The unknown network topological structure and system parameters of uncertain general complex dynamical networks are identified simultaneously in the process of synchronization. Several useful criteria for synchronization are given. Finally, an illustrative example is presented to demonstrate the application of the theoretical results.
Energy Technology Data Exchange (ETDEWEB)
Wang, Shi-bing, E-mail: wang-shibing@dlut.edu.cn, E-mail: wangxy@dlut.edu.cn [School of Computer and Information Engineering, Fuyang Normal University, Fuyang 236041 (China); Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024 (China); Wang, Xing-yuan, E-mail: wang-shibing@dlut.edu.cn, E-mail: wangxy@dlut.edu.cn [Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024 (China); Wang, Xiu-you [School of Computer and Information Engineering, Fuyang Normal University, Fuyang 236041 (China); Zhou, Yu-fei [College of Electrical Engineering and Automation, Anhui University, Hefei 230601 (China)
2016-04-15
With comprehensive consideration of generalized synchronization, combination synchronization and adaptive control, this paper investigates a novel adaptive generalized combination complex synchronization (AGCCS) scheme for different real and complex nonlinear systems with unknown parameters. On the basis of Lyapunov stability theory and adaptive control, an AGCCS controller and parameter update laws are derived to achieve synchronization and parameter identification of two real drive systems and a complex response system, as well as two complex drive systems and a real response system. Two simulation examples, namely, AGCCS for chaotic real Lorenz and Chen systems driving a hyperchaotic complex Lü system, and hyperchaotic complex Lorenz and Chen systems driving a real chaotic Lü system, are presented to verify the feasibility and effectiveness of the proposed scheme.
International Nuclear Information System (INIS)
Coste, Ph.; Aubert, J.; Lejeune, C.
1991-01-01
The extensive development of ion beam technologies in the last years, in particular for thin film deposition and etching, poses the problem of predicting the behaviour of the ion beam from convenient models. One of the existing models, the 'perfect linear model', is easy to use and provides information about the geometrical parameters of the ion beam envelope. In this model, however, the plasma potential must be close to the plasma electrode potential. Now, ion sources with electrostatic containment of the ionizing electrons (very attractive because of their improved ionization efficiency) have a plasma potential higher than the plasma electrode potential. Thus, a space-charge sheath with a non-negligible thickness exists, which modifies the equilibrium conditions of the plasma meniscus and, therefore, the initial divergence of the ion beam. In this paper an adaptation of the perfect linear model for ion beam formation to the case of plasma sources with electron electrostatic containment is presented. (author)
Robust Adaptive Dynamic Programming of Two-Player Zero-Sum Games for Continuous-Time Linear Systems.
Fu, Yue; Fu, Jun; Chai, Tianyou
2015-12-01
In this brief, an online robust adaptive dynamic programming algorithm is proposed for two-player zero-sum games of continuous-time unknown linear systems with matched uncertainties, which are functions of system outputs and states of a completely unknown exosystem. The online algorithm is developed using the policy iteration (PI) scheme with only one iteration loop. A new analytical method is proposed for convergence proof of the PI scheme. The sufficient conditions are given to guarantee globally asymptotic stability and suboptimal property of the closed-loop system. Simulation studies are conducted to illustrate the effectiveness of the proposed method.
DEFF Research Database (Denmark)
Bergami, Leonardo; Poulsen, Niels Kjølstad
2015-01-01
The paper proposes a smart rotor configuration where adaptive trailing edge flaps (ATEFs) are employed for active alleviation of the aerodynamic loads on the blades of the NREL 5 MW reference turbine. The flaps extend for 20% of the blade length and are controlled by a linear quadratic (LQ....... The effects of active flap control are assessed with aeroelastic simulations of the turbine in normal operation conditions, as prescribed by the International Electrotechnical Commission standard. The turbine lifetime fatigue damage equivalent loads provide a convenient summary of the results achieved...
International Nuclear Information System (INIS)
Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.
2014-01-01
Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes were compared with those of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography.
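The simplest pair of predictors, a constant-rate linear fit and a log-domain fit capturing multiplicative (power/exponential) shrinkage, can be compared on synthetic volumes; all values below are illustrative, not the clinical data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily tumor volumes: exponential shrinkage plus
# multiplicative measurement noise (assumed values).
days = np.arange(35)
true_vol = 40.0 * np.exp(-0.04 * days)             # cm^3, shrinking
obs = true_vol * np.exp(rng.normal(scale=0.05, size=days.size))

lin = np.polyfit(days, obs, 1)                     # volume ~ a + b*day
logfit = np.polyfit(days, np.log(obs), 1)          # log-volume ~ a + b*day

pred_lin = np.polyval(lin, days)
pred_log = np.exp(np.polyval(logfit, days))

rmse = lambda p: np.sqrt(np.mean((p - true_vol) ** 2))
print("RMSE linear:", rmse(pred_lin), "RMSE log-linear:", rmse(pred_log))
```

The log-domain fit tracks the curvature of multiplicative shrinkage that a straight line misses, which parallels the paper's finding that the power-fit general linear model outperforms the constant-rate reference model.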
Davis, Laurie Laughlin
2004-01-01
Choosing a strategy for controlling item exposure has become an integral part of test development for computerized adaptive testing (CAT). This study investigated the performance of six procedures for controlling item exposure in a series of simulated CATs under the generalized partial credit model. In addition to a no-exposure control baseline…
Directory of Open Access Journals (Sweden)
Wanfang Shen
2012-01-01
The mathematical formulation for a quadratic optimal control problem governed by a linear quasiparabolic integrodifferential equation is studied. The control constraints are given in an integral sense: U_ad = {u ∈ X; ∫_{Ω_U} u ⩾ 0, t ∈ [0,T]}. Then the a posteriori error estimates in the L∞(0,T;H¹(Ω))-norm and the L²(0,T;L²(Ω))-norm for both the state and the control approximation are given.
Predicting stem borer density in maize using RapidEye data and generalized linear models
Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le
2017-05-01
Average maize yield in eastern Africa is 2.03 t ha⁻¹, compared to the global average of 6.06 t ha⁻¹, due to biotic and abiotic constraints. Among the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on 9 December 2014 and 27 January 2015, and for Machakos (eastern Kenya) on 3 January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season, in December and early January at the two study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
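The motivation for NB and zero-inflated NB models over Poisson can be shown with synthetic counts; the covariate, link, and parameter values below are assumptions for the sketch, not the field data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Counts per field driven by one hypothetical vegetation index (SVI)
# through a log link; NB counts are overdispersed relative to Poisson,
# and a zero-inflation mechanism adds structural zeros on top.
n = 4000
svi = rng.uniform(0.2, 0.8, size=n)
mu = np.exp(-1.0 + 3.0 * svi)                   # log-link mean
r = 1.5                                         # NB dispersion (size)
nb = rng.negative_binomial(r, r / (r + mu))     # NB counts with mean mu
zinb = np.where(rng.random(n) < 0.3, 0, nb)     # 30% structural zeros

for name, y in (("NB", nb), ("ZINB", zinb)):
    print(name, "mean:", round(y.mean(), 2), "var:", round(y.var(), 2),
          "zero frac:", round((y == 0).mean(), 2))
```

A Poisson GLM forces variance = mean, so it understates uncertainty for data like these; the NB variance mu + mu²/r and the extra zeros are exactly what the 'NB' and 'ZINB' models in the abstract accommodate.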
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the difference in effects between two treatments. A Bayesian node-splitting model was proposed first, and a similar frequentist side-splitting model has been put forward recently. Yet assigning the inconsistency parameter to one or the other of the two treatments, or splitting the parameter symmetrically between them, can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of the design-by-treatment interaction model, with different parameterizations corresponding to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then choose how to implement the side-splitting method in their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Generalized functional linear models for gene-based case-control association studies.
Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao
2014-11-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative, since they generate lower type I errors than nominal levels, while global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related; all we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defect and Hirschsprung's disease datasets. Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.
The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.
Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun
2017-01-01
Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, the limited number of samples, and the small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that induces weak shrinkage on large coefficients and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates attractive features of two popular methods, i.e., the penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach provides not only more accurate estimates of the parameters but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set with 295 tumors and expression data for 4919 genes, and an ovarian cancer data set from TCGA with 362 tumors and expression data for 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M
2017-06-01
Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
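The finite-mixture idea above can be sketched in a deliberately minimal form (not the paper's implementation: here the outcome model is intercept-only within each exposure class, so the EM M-step reduces to closed-form weighted means; all probabilities, rates, and sample sizes are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate: true exposure E (unobserved in practice), a noisy predicted
# probability pi of E=1 from an EHR prediction model, and a binary
# outcome Y whose rate depends on the true exposure class.
n = 5000
E = rng.random(n) < 0.4
pi = np.clip(0.4 + 0.4 * (E - 0.4) + rng.normal(0, 0.1, n), 0.02, 0.98)
p0_true, p1_true = 0.10, 0.30
Y = rng.random(n) < np.where(E, p1_true, p0_true)

# EM for the two-component mixture P(Y) = pi*Bern(p1) + (1-pi)*Bern(p0).
p0, p1 = 0.2, 0.2
for _ in range(200):
    f1 = np.where(Y, p1, 1 - p1)
    f0 = np.where(Y, p0, 1 - p0)
    w = pi * f1 / (pi * f1 + (1 - pi) * f0)   # E-step: P(E=1 | Y, pi)
    p1 = np.sum(w * Y) / np.sum(w)            # M-step: weighted means
    p0 = np.sum((1 - w) * Y) / np.sum(1 - w)

# Naive alternative criticized in the paper: dichotomize pi at 0.5.
p1_naive = Y[pi > 0.5].mean()
print(f"EM: p0={p0:.3f} p1={p1:.3f}  truth: {p0_true}/{p1_true}  naive p1={p1_naive:.3f}")
```

With covariates in the outcome model, the M-step becomes a weighted GLM fit, but the E-step posterior weight has the same form.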
Use of generalized linear models and digital data in a forest inventory of Northern Utah
Moisen, Gretchen G.; Edwards, Thomas C.
1999-01-01
Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
Robinson, Tyler D.; Crisp, David
2018-05-01
Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state-the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach, and then apply this approach to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza
2017-03-01
Linear-array-based photoacoustic computed tomography is a popular methodology for deep, high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by suppressing the side lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
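As a hedged illustration of the coherence factor mentioned above (simulated delay-aligned channel data; the array size and noise level are arbitrary), CF down-weights samples where the channel signals are incoherent:

```python
import numpy as np

def coherence_factor(channel_data):
    """Coherence factor across array channels at each time sample:
    CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2), in [0, 1]."""
    num = np.abs(channel_data.sum(axis=0)) ** 2
    den = channel_data.shape[0] * (np.abs(channel_data) ** 2).sum(axis=0)
    return np.where(den > 0, num / den, 0.0)

rng = np.random.default_rng(1)
n_ch, n_samp = 64, 200

# A coherent echo (identical across channels after delay alignment)
# embedded in incoherent per-channel noise.
sig = np.zeros(n_samp)
sig[100] = 1.0
data = sig + 0.3 * rng.normal(size=(n_ch, n_samp))

cf = coherence_factor(data)
das = data.mean(axis=0)   # conventional delay-and-sum output
weighted = das * cf       # CF weighting suppresses incoherent clutter
print(f"CF at echo sample: {cf[100]:.2f}, median elsewhere: {np.median(cf):.3f}")
```

For pure noise the expected CF is roughly 1/N, so multiplying the delay-and-sum output by CF sharply attenuates side lobes while leaving coherent echoes nearly untouched.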
Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission
Directory of Open Access Journals (Sweden)
Tarek Chehade
2015-01-01
In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and reliability of the transmission system. This paper investigates how to properly combine precoded closed-loop MIMO systems with nonbinary low-density parity-check (NB-LDPC) codes. The q elements of the Galois field GF(q) are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to fit perfectly with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to joint use with precoding schemes based on maximization of the minimum Euclidean distance (max-dmin criterion). These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.
Directory of Open Access Journals (Sweden)
Ana Milstein
1979-01-01
The vertical distribution of each developmental stage of Paracalanus crassirostris was studied at a shallow-water station at Ubatuba, SP, Brazil (23º30'S-45º07'W). Samples were collected monthly at the surface, at 2 m, and near the bottom. Salinity, temperature, dissolved oxygen, tide height, light penetration and solar radiation were also recorded. Data were analysed with the general linear model. The analysis showed that the number of individuals at any developmental stage is affected diversely by hour, depth, hour-depth interaction and environmental factors throughout the year, and that these effects are stronger in summer. All developmental stages were spread through the water column, showing no regular vertical migrations. On the other hand, the number of organisms caught at a particular hour seemed to depend more on the tide than on the animals' behaviour. The results of the present paper showed, as observed by other authors, the lack of vertical migration in a coastal copepod which grazes on fine particles throughout its life. The vertical distribution of the developmental stages of P. crassirostris was studied over one year (June 1976 - May 1977) at a shallow (5 m) station in Ubatuba. Samples were collected monthly, at three depths, every four hours, with a 9-l van Dorn bottle, and environmental data were recorded. The data were processed by least squares, in the form of a regression analysis of a linear model including covariates. The model was constructed a priori, considering organism density per sample, environmental factors, differences among samples from different depths and hours, as well as hour-depth interactions. For each stage of P. crassirostris, the model was run nine times, with two months of data each time, in order to obtain the variation of the responses over the year. The results of the model indicated that the number of indiv
Directory of Open Access Journals (Sweden)
Bahita Mohamed
2011-01-01
In this work, we introduce an adaptive neural network controller for a class of nonlinear systems. The approach uses two radial basis function (RBF) networks. The first RBF network is used to approximate the ideal control law, which cannot be implemented since the dynamics of the system are unknown. The second RBF network is used to estimate on-line the control gain, which is a nonlinear and unknown function of the states. The updating laws for the combined estimator and controller are derived through Lyapunov analysis. Asymptotic stability is established, with the tracking errors converging to a neighborhood of the origin. Finally, the proposed method is applied to control and stabilize the inverted pendulum system.
Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Caillet, Vincent; Hewson, Emily; Poulsen, Per Rugaard; Bromley, Regina; Bell, Linda; Eade, Thomas; Kneebone, Andrew; Martin, Jarad; Booth, Jeremy T
2018-04-01
Until now, real-time image guided adaptive radiation therapy (IGART) has been the domain of dedicated cancer radiotherapy systems. The purpose of this study was to clinically implement and investigate real-time IGART using a standard linear accelerator. We developed and implemented two real-time technologies for standard linear accelerators: (1) Kilovoltage Intrafraction Monitoring (KIM) that finds the target and (2) multileaf collimator (MLC) tracking that aligns the radiation beam to the target. Eight prostate SABR patients were treated with this real-time IGART technology. The feasibility, geometric accuracy and the dosimetric fidelity were measured. Thirty-nine out of forty fractions with real-time IGART were successful (95% confidence interval 87-100%). The geometric accuracy of the KIM system was -0.1 ± 0.4, 0.2 ± 0.2 and -0.1 ± 0.6 mm in the LR, SI and AP directions, respectively. The dose reconstruction showed that real-time IGART more closely reproduced the planned dose than that without IGART. For the largest motion fraction, with real-time IGART 100% of the CTV received the prescribed dose; without real-time IGART only 95% of the CTV would have received the prescribed dose. The clinical implementation of real-time image-guided adaptive radiotherapy on a standard linear accelerator using KIM and MLC tracking is feasible. This achievement paves the way for real-time IGART to be a mainstream treatment option. Copyright © 2018 Elsevier B.V. All rights reserved.
General methods for determining the linear stability of coronal magnetic fields
Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.
1988-01-01
A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.
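A stripped-down version of the power method referenced above can be sketched as follows (a small symmetric toy operator, not a discretized MHD force operator, and without the cyclic-shift acceleration the paper describes):

```python
import numpy as np

def dominant_growth_rate(A, iters=500, seed=0):
    """Power method: the largest-magnitude eigenvalue of the linearized
    operator A approximates the fastest growth (or decay) rate."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        lam = v @ w               # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
    return lam, v

# Toy symmetric 'force operator': a positive eigenvalue signals an
# unstable mode in this sign convention.
A = np.array([[0.2, 1.0, 0.0],
              [1.0, 0.1, 0.5],
              [0.0, 0.5, -0.3]])
lam, v = dominant_growth_rate(A)
print(f"dominant eigenvalue ~ {lam:.4f}")
```

Implicit variants apply the same iteration to a shifted inverse of the operator, which is what makes spectral shifts useful for accelerating convergence.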
Adaptive vision-based control of an unmanned aerial vehicle without linear velocity measurements.
Jabbari Asl, Hamed; Yoon, Jungwon
2016-11-01
In this paper, an image-based visual servo controller is designed for an unmanned aerial vehicle. The main objective is to use the flow of image features as the velocity cue to compensate for the low quality of linear velocity information obtained from accelerometers. Nonlinear observers are designed to estimate this flow. The proposed controller is bounded, which can help to keep the target points in the field of view of the camera. The main advantages over previous full-dynamics observer-based methods are that the controller is robust with respect to unknown image depth and that no yaw information is required. The complete stability analysis is presented and asymptotic convergence of the error signals is guaranteed. Simulation results show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Yoo, Yun Joo; Sun, Lei; Poirier, Julia G; Paterson, Andrew D; Bull, Shelley B
2017-02-01
By jointly analyzing multiple variants within a gene, instead of one at a time, gene-based multiple regression can improve power, robustness, and interpretation in genetic association analysis. We investigate multiple linear combination (MLC) test statistics for analysis of common variants under realistic trait models with linkage disequilibrium (LD) based on HapMap Asian haplotypes. MLC is a directional test that exploits LD structure in a gene to construct clusters of closely correlated variants recoded such that the majority of pairwise correlations are positive. It combines variant effects within the same cluster linearly, and aggregates cluster-specific effects in a quadratic sum of squares and cross-products, producing a test statistic with reduced degrees of freedom (df) equal to the number of clusters. By simulation studies of 1000 genes from across the genome, we demonstrate that MLC is a well-powered and robust choice among existing methods across a broad range of gene structures. Compared to minimum P-value, variance-component, and principal-component methods, the mean power of MLC is never much lower than that of other methods, and can be higher, particularly with multiple causal variants. Moreover, the variation in gene-specific MLC test size and power across 1000 genes is less than that of other methods, suggesting it is a complementary approach for discovery in genome-wide analysis. The cluster construction of the MLC test statistics helps reveal within-gene LD structure, allowing interpretation of clustered variants as haplotypic effects, while multiple regression helps to distinguish direct and indirect associations. © 2016 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Dauda GuliburYAKUBU
2012-12-01
Accurate solutions to initial value systems of ordinary differential equations may be approximated efficiently by Runge-Kutta methods or linear multistep methods, each of which has limitations of one sort or another. In this paper we consider, as a middle ground, the derivation of continuous general linear methods for the solution of stiff systems of initial value problems in ordinary differential equations. These methods are designed to combine the advantages of both Runge-Kutta and linear multistep methods. In particular, methods possessing the property of A-stability are identified as promising methods within this large class of general linear methods. We show that the continuous general linear methods are self-starting and better able to solve stiff systems of ordinary differential equations than the discrete ones: the initial value systems are solved without recourse to any other method to start the integration process. This desirable feature of the proposed approach leads to very high accuracy in the solution of the given problem. Illustrative examples are given to demonstrate the novelty and reliability of the methods.
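Why A-stability matters for stiff systems can be illustrated with the simplest A-stable method, backward Euler, against explicit Euler (this illustrates the stability property only, not the continuous general linear methods themselves):

```python
import numpy as np

# Stiff linear test system y' = A y with widely separated eigenvalues.
A = np.array([[-1000.0, 0.0],
              [0.0, -1.0]])
y0 = np.array([1.0, 1.0])
h, steps = 0.1, 50   # step far above the explicit stability limit 2/1000

# Explicit Euler: y_{n+1} = (I + hA) y_n        -- unstable at this step.
# Backward (implicit) Euler: y_{n+1} = (I - hA)^{-1} y_n
#   -- A-stable: stable for any h > 0 when Re(lambda) < 0.
I = np.eye(2)
ye = y0.copy()
yi = y0.copy()
M = np.linalg.inv(I - h * A)
for _ in range(steps):
    ye = (I + h * A) @ ye
    yi = M @ yi

print(f"explicit Euler |y| = {np.linalg.norm(ye):.2e}  (blows up)")
print(f"implicit Euler |y| = {np.linalg.norm(yi):.2e}  (decays, like the true solution)")
```

The true solution decays monotonically; only the A-stable method reproduces that behaviour at a step size chosen for the slow component alone.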
Ferreira-Ferreira, J.; Francisco, M. S.; Silva, T. S. F.
2017-12-01
Amazon floodplains play an important role in biodiversity maintenance and provide important ecosystem services. Flood duration is the prime factor modulating biogeochemical cycling in Amazonian floodplain systems, as well as influencing ecosystem structure and function. However, in the absence of accurate terrain information, fine-scale hydrological modeling is still not possible for most Amazon floodplains, and little is known about the spatio-temporal behavior of flooding in these environments. Our study presents a new approach to spatial modeling of flood duration, using synthetic aperture radar (SAR) and generalized linear modeling. Our focal study site was the Mamirauá Sustainable Development Reserve, in the Central Amazon. We acquired a series of L-band ALOS-1/PALSAR Fine-Beam mosaics, chosen to capture the widest possible range of river stage heights at regular intervals. We then mapped the flooded area on each image and used the resulting binary maps as the response variable (flooded/non-flooded) for multiple logistic regression. Explanatory variables were the precipitation accumulated over the 15 days before each image acquisition; the water stage height recorded at the Mamirauá lake gauging station on each acquisition date; Euclidean distance from the nearest drainage; and slope, terrain curvature, profile curvature, planform curvature and height above the nearest drainage (HAND) derived from the 30-m SRTM DEM. Model results were validated against water levels recorded by ten pressure transducers installed within the floodplains from 2014 to 2016. The most accurate model included water stage height and HAND as explanatory variables, yielding an RMSE of ±38.73 days of flooding per year when compared with the ground validation sites. The largest disagreements were 57 and 83 days at two validation sites, while the remaining locations had absolute errors lower than 38 days. In five out of nine validation sites, the model predicted flood durations with
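A hedged sketch of the regression step (synthetic pixels; the variables, coefficients, and the one-site flood duration calculation below are invented stand-ins for the paper's SAR-derived maps and gauge records):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: HAND (height above nearest drainage, m) and river
# stage height (m) per observation; pixels flood when stage exceeds HAND.
n = 2000
hand = rng.uniform(0.0, 10.0, n)
stage = rng.uniform(2.0, 12.0, n)
y = ((stage - hand + rng.normal(0, 1.0, n)) > 0).astype(float)
X = np.column_stack([np.ones(n), hand, stage])

# Logistic GLM fitted by Newton-Raphson (IRLS).
beta = np.zeros(3)
for _ in range(30):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

# Flood duration: sum of daily flood probabilities over a year of
# simulated stage heights at one hypothetical location (HAND = 4 m).
stage_year = 7.0 + 3.0 * np.sin(2 * np.pi * np.arange(365) / 365)
Xy = np.column_stack([np.ones(365), np.full(365, 4.0), stage_year])
duration = (1.0 / (1.0 + np.exp(-Xy @ beta))).sum()
print(f"coefficients: {beta.round(2)}  predicted flood duration: {duration:.0f} days")
```

The signs of the fitted coefficients match intuition: higher HAND lowers the odds of flooding, higher stage raises them, and summing daily probabilities converts the fitted model into an annual flood duration.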
DEFF Research Database (Denmark)
Brooks, Mollie Elizabeth; Kristensen, Kasper; van Benthem, Koen J.
2017-01-01
Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmm...
Raymond L. Czaplewski
1973-01-01
A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...
A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...
Impact of co-channel interference on the performance of adaptive generalized transmit beamforming
Radaydeh, Redha Mahmoud Mesleh
2011-08-01
The impact of co-channel interference on the performance of adaptive generalized transmit beamforming for a low-complexity multiple-input single-output (MISO) configuration is investigated. The transmit channels are assumed to be sufficiently separated and to undergo Rayleigh fading. Owing to limited space, a single receive antenna is employed to capture the desired user's transmission. The number of active transmit channels is adjusted adaptively based on statistically unordered and/or ordered instantaneous signal-to-noise ratios (SNRs), where the transmitter has no information about the statistics of the undesired signals. The adaptation thresholds are identified to guarantee a target performance level, and adaptation schemes with enhanced spectral efficiency or power efficiency are studied and their performance compared under various channel conditions. To facilitate comparison studies, results for the statistics of the instantaneous combined signal-to-interference-plus-noise ratio (SINR) are derived, which can be applied to different fading conditions of the interfering signals. The statistics of the combined SNR and combined SINR are then used to quantify various performance measures, considering the impact of non-ideal estimation of the desired user's channel state information (CSI) and the randomness in the number of active interferers. Numerical and simulation comparisons of the achieved performance of the adaptation schemes are presented. © 2006 IEEE.
The General Adaptation Syndrome: A Foundation for the Concept of Periodization.
Cunanan, Aaron J; DeWeese, Brad H; Wagle, John P; Carroll, Kevin M; Sausaman, Robert; Hornsby, W Guy; Haff, G Gregory; Triplett, N Travis; Pierce, Kyle C; Stone, Michael H
2018-04-01
Recent reviews have attempted to refute the efficacy of applying Selye's general adaptation syndrome (GAS) as a conceptual framework for the training process. Furthermore, the criticisms involved are regularly used as the basis for arguments against the periodization of training. However, these perspectives fail to consider the entirety of Selye's work, the evolution of his model, and the broad applications he proposed. While it is reasonable to critically evaluate any paradigm, critics of the GAS have yet to dismantle the link between stress and adaptation. Disturbance to the state of an organism is the driving force for biological adaptation, which is the central thesis of the GAS model and the primary basis for its application to the athlete's training process. Despite its imprecisions, the GAS has proven to be an instructive framework for understanding the mechanistic process of providing a training stimulus to induce specific adaptations that result in functional enhancements. Pioneers of modern periodization have used the GAS as a framework for the management of stress and fatigue to direct adaptation during sports training. Updates to the periodization concept have retained its founding constructs while explicitly calling for scientifically based, evidence-driven practice suited to the individual. Thus, the purpose of this review is to provide greater clarity on how the GAS serves as an appropriate mechanistic model to conceptualize the periodization of training.
Directory of Open Access Journals (Sweden)
Shibing Wang
2016-02-01
This paper introduces a new memristor-based hyperchaotic complex Lü system (MHCLS) and investigates its adaptive complex generalized synchronization (ACGS). First, the complex system is constructed based on a memristor-based hyperchaotic real Lü system, and its properties are analyzed theoretically. Second, its dynamical behaviors, including hyperchaos, chaos, transient phenomena, and periodic behaviors, are explored numerically by means of bifurcation diagrams, Lyapunov exponents, phase portraits, and time history diagrams. Third, an adaptive controller and a parameter estimator are proposed to realize complex generalized synchronization and parameter identification of two identical MHCLSs with unknown parameters, based on Lyapunov stability theory. Finally, numerical simulation results for ACGS and its application to secure communication are presented to verify the feasibility and effectiveness of the proposed method.
Wang, Bing; Ninomiya, Yasuharu; Tanaka, Kaoru; Maruyama, Kouichi; Varès, Guillaume; Eguchi-Kasai, Kiyomi; Nenoi, Mitsuru
2012-12-01
Adaptive response (AR) of low linear energy transfer (LET) irradiations for protection against teratogenesis induced by high LET irradiations is not well documented. In this study, induction of AR by X-rays against teratogenesis induced by accelerated heavy ions was examined in fetal mice. Irradiations of pregnant C57BL/6J mice were performed by delivering a priming low dose from X-rays at 0.05 or 0.30 Gy on gestation day 11 followed one day later by a challenge high dose from either X-rays or accelerated heavy ions. Monoenergetic beams of carbon, neon, silicon, and iron with the LET values of about 15, 30, 55, and 200 keV/μm, respectively, were examined. Significant suppression of teratogenic effects (fetal death, malformation of live fetuses, or low body weight) was used as the endpoint for judgment of a successful AR induction. Existence of AR induced by low-LET X-rays against teratogenic effect induced by high-LET accelerated heavy ions was demonstrated. The priming low dose of X-rays significantly reduced the occurrence of prenatal fetal death, malformation, and/or low body weight induced by the challenge high dose from either X-rays or accelerated heavy ions of carbon, neon or silicon but not iron particles. Successful AR induction appears to be a radiation quality event, depending on the LET value and/or the particle species of the challenge irradiations. These findings would provide a new insight into the study on radiation-induced AR in utero. © 2012 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Sambou, Soussou
2004-01-01
In flood forecasting modelling, large basins are often treated as hydrological systems with multiple inputs and one output. The inputs are hydrological variables such as rainfall and runoff, together with physical characteristics of the basin; the output is runoff. Relating inputs to output can be achieved using deterministic, conceptual, or stochastic models. Rainfall-runoff models generally lack accuracy, while models based on physical hydrological processes, whether deterministic or conceptual, demand large amounts of data and are consequently very complex. Stochastic multiple-input, single-output models, which use only historical records of hydrological variables (particularly runoff), are therefore popular among hydrologists for flood forecasting on large river basins. The method is applied to the Senegal River upstream of Bakel, where the river is formed by the main branch, the Bafing, and two tributaries, the Bakoye and the Faleme, with the Bafing regulated by the Manantali Dam. A three-input, one-output model has been used for flood forecasting at Bakel. The influence of the forecasting lead time and of the three inputs, taken separately, then two at a time, then all together, has been assessed using a dimensionless variance as the quality criterion. Discrepancies generally occur between model output and observations; to bring the model into better agreement with current observations, we compared four parameter-updating procedures, namely recursive least squares, Kalman filtering, the stochastic gradient method, and an iterative method, together with an AR error-forecasting model. A combination of these updating procedures has been used in real-time flood forecasting.(Author)
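Of the parameter-updating procedures compared above, recursive least squares (RLS) is the most compact to state. The sketch below is a generic RLS step with exponential forgetting applied to synthetic data, not the authors' Senegal River implementation; the dimensions, noise level, and forgetting factor are all illustrative.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam:
    update the parameter vector theta and the matrix P from a new
    regressor x and observed output y."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + float(x.T @ P @ x))   # gain vector
    e = y - float(x.T @ theta)               # one-step forecast error
    theta = theta + k.ravel() * e            # correct the parameters
    P = (P - k @ x.T @ P) / lam              # update with exponential forgetting
    return theta, P

# Toy usage: identify a 3-parameter linear input-output map from noisy data
np.random.seed(0)
true = np.array([2.0, -1.0, 0.5])
theta, P = np.zeros(3), 100.0 * np.eye(3)
for _ in range(500):
    x = np.random.randn(3)
    y = float(true @ x) + 0.01 * np.random.randn()
    theta, P = rls_update(theta, P, x, y)
```

After a few hundred updates theta tracks the true parameters; the forgetting factor lam < 1 is what lets the scheme follow slowly drifting parameters in real-time operation.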
Recent advances toward a general purpose linear-scaling quantum force field.
Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M
2014-09-16
Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to
Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer
2013-01-01
Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as "the pre-image problem". Since the feature space mapping is typi...
Huppert, Theodore J
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
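The point about structured (colored) noise can be made concrete: under a simple AR(1) noise assumption, the general linear model can be fit by iterative prewhitening (a Cochrane-Orcutt-style loop). The sketch below uses a synthetic alternating task regressor rather than real fNIRS data, and plain AR(1) prewhitening rather than the full noise treatment discussed in the paper; all signal values are hypothetical.

```python
import numpy as np

def cochrane_orcutt(Y, X, n_iter=5):
    """GLM fit under AR(1) ("colored") noise: alternately estimate the
    lag-1 residual autocorrelation rho and refit beta on the
    prewhitened model (each series filtered by 1 - rho*L)."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rho = 0.0
    for _ in range(n_iter):
        r = Y - X @ beta
        rho = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])      # lag-1 autocorrelation
        Yw, Xw = Y[1:] - rho * Y[:-1], X[1:] - rho * X[:-1]
        beta, *_ = np.linalg.lstsq(Xw, Yw, rcond=None)
    return beta, rho

# Synthetic example: intercept + boxcar "task" regressor, AR(1) noise
rng = np.random.default_rng(1)
n = 2000
task = ((np.arange(n) // 100) % 2).astype(float)   # alternating rest/task blocks
X = np.column_stack([np.ones(n), task])
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + 0.2 * rng.standard_normal()
Y = X @ np.array([1.0, 2.0]) + e
beta, rho = cochrane_orcutt(Y, X)
```

Ignoring the serial correlation would not bias beta much here, but it would badly understate its standard error; prewhitening restores valid inference, which is the practical motivation in the text.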
Energy Technology Data Exchange (ETDEWEB)
Hasanien, Hany M., E-mail: Hanyhasanien@ieee.or [Dept. of Elec. Power and Machines, Faculty of Eng., Ain-shams Univ. Cairo (Egypt); Muyeen, S.M. [Department of Electrical Engineering, Petroleum Institute, Abu Dhabi (United Arab Emirates); Tamura, Junji [Department of EEE, Kitami Institute of Technology, 165 Koen Cho, Kitami 090-8507, Hokkaido (Japan)
2010-12-15
This paper presents a novel adaptive neuro-fuzzy controller applied to a transverse flux linear motor (TFLM) for speed control. The proposed controller combines a fuzzy logic controller with self-tuning scaling factors based on an artificial neural network structure. It has two input variables and one control output variable. First the fuzzy logic control rules are described; then the NN architecture used to self-tune the output scaling factors of the controller is presented. The application of this control technique represents the novelty of this work, as the algorithm has not previously been reported for this type of drive. The methodology addresses the nonlinearities and load changes of TFLM drives. The dynamic response of the motor is studied under the rated load condition as well as under load disturbances. The proposed controller ensures a fast and accurate dynamic response with excellent steady-state performance. The dynamic response of the motor with the proposed controller is compared with PI and adaptive NN controllers; the proposed controller gives a better and faster response in terms of overshoot and settling time. The Matlab/Simulink tool is used for this dynamic simulation study.
Linear-algebraic approach to electron-molecule collisions: General formulation
International Nuclear Information System (INIS)
Collins, L.A.; Schneider, B.I.
1981-01-01
We present a linear-algebraic approach to electron-molecule collisions based on an integral equations form with either logarithmic or asymptotic boundary conditions. The introduction of exchange effects does not alter the basic form or order of the linear-algebraic equations for a local potential. In addition to the standard procedure of directly evaluating the exchange integrals by numerical quadrature, we also incorporate exchange effects through a separable-potential approximation. Efficient schemes are developed for reducing the number of points and channels that must be included. The method is applied at the static-exchange level to a number of molecular systems including H₂, N₂, LiH, and CO₂.
Generalization of the Wide-Sense Markov Concept to a Widely Linear Processing
International Nuclear Information System (INIS)
Espinosa-Pulido, Juan Antonio; Navarro-Moreno, Jesús; Fernández-Alcalá, Rosa María; Ruiz-Molina, Juan Carlos; Oya-Lechuga, Antonia; Ruiz-Fuentes, Nuria
2014-01-01
In this paper we show that the classical definition and the associated characterizations of wide-sense Markov (WSM) signals are not valid for improper complex signals. For that, we propose an extension of the concept of WSM to a widely linear (WL) setting and the study of new characterizations. Specifically, we introduce a new class of signals, called widely linear Markov (WLM) signals, and we analyze some of their properties based either on second-order properties or on state-space models from a WL processing standpoint. The study is performed in both the forwards and backwards directions of time. Thus, we provide two forwards and backwards Markovian representations for WLM signals. Finally, different estimation recursive algorithms are obtained for these models
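The payoff of widely linear (WL) processing for improper signals can be shown in a few lines: for an improper complex observation, an estimator that uses both x and conj(x) beats the best strictly linear one. The sketch below is a generic illustration, not the paper's WLM state-space machinery; rather than solving the full augmented normal equations, it uses the known optimal WL pair (1/2, 1/2) for recovering the real part.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
u = rng.standard_normal(n)
v = rng.standard_normal(n)
x = u + 1j * 0.2 * v          # improper: E[x^2] != 0 (pseudo-covariance nonzero)
s = u                          # target signal: the real part of x

# Best strictly linear estimate s_hat = a*x, with a = E[s x*] / E[|x|^2]
a = np.mean(s * np.conj(x)) / np.mean(np.abs(x) ** 2)
mse_linear = np.mean(np.abs(s - a * x) ** 2)

# Widely linear estimate uses x and conj(x): 0.5*x + 0.5*conj(x) = Re(x) = s
s_hat_wl = 0.5 * x + 0.5 * np.conj(x)
mse_wl = np.mean(np.abs(s - s_hat_wl) ** 2)
```

The strictly linear MSE stays near 1 - 1/1.04 ≈ 0.038, while the WL estimate is exact; this gap is what motivates extending Markov characterizations to the WL setting.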
International Nuclear Information System (INIS)
VanMeter, N. M.; Lougovski, P.; Dowling, Jonathan P.; Uskov, D. B.; Kieling, K.; Eisert, J.
2007-01-01
We introduce schemes for linear-optical quantum state generation. A quantum state generator is a device that prepares a desired quantum state using product inputs from photon sources, linear-optical networks, and postselection using photon counters. We show that this device can be concisely described in terms of polynomial equations and unitary constraints. We illustrate the power of this language by applying the Groebner-basis technique along with the notion of vacuum extensions to solve the problem of how to construct a quantum state generator analytically for any desired state, and use methods of convex optimization to identify bounds to success probabilities. In particular, we disprove a conjecture concerning the preparation of the maximally path-entangled |n,0>+|0,n> (NOON) state by providing a counterexample using these methods, and we derive a new upper bound on the resources required for NOON-state generation
Aldao, Amelia; Mennin, Douglas S
2012-02-01
Recent models of generalized anxiety disorder (GAD) have expanded on Borkovec's avoidance theory by delineating emotion regulation deficits associated with the excessive worry characteristic of this disorder (see Behar, DiMarco, Hekler, Mohlman, & Staples, 2009). However, it has been difficult to determine whether emotion regulation is simply a useful heuristic for the avoidant properties of worry or an important extension to conceptualizations of GAD. Some of this difficulty may arise from a focus on purported maladaptive regulation strategies, which may be confounded with symptomatic distress components of the disorder (such as worry). We examined the implementation of adaptive regulation strategies by participants with and without a diagnosis of GAD while watching emotion-eliciting film clips. In a between-subjects design, participants were randomly assigned to accept, reappraise, or were not given specific regulation instructions. Implementation of adaptive regulation strategies produced differential effects in the physiological (but not subjective) domain across diagnostic groups. Whereas participants with GAD demonstrated lower cardiac flexibility when implementing adaptive regulation strategies than when not given specific instructions on how to regulate, healthy controls showed the opposite pattern, suggesting they benefited from the use of adaptive regulation strategies. We discuss the implications of these findings for the delineation of emotion regulation deficits in psychopathology. Copyright © 2011 Elsevier Ltd. All rights reserved.
Non-linear partial differential equations an algebraic view of generalized solutions
Rosinger, Elemer E
1990-01-01
A massive transition of interest from solving linear partial differential equations to solving nonlinear ones has taken place during the last two or three decades. The availability of better computers has often made numerical experimentations progress faster than the theoretical understanding of nonlinear partial differential equations. The three most important nonlinear phenomena observed so far both experimentally and numerically, and studied theoretically in connection with such equations have been the solitons, shock waves and turbulence or chaotical processes. In many ways, these phenomen
Continuity and general perturbation of the Drazin inverse for closed linear operators
Directory of Open Access Journals (Sweden)
N. Castro González
2002-01-01
Full Text Available We study perturbations and continuity of the Drazin inverse of a closed linear operator A and obtain explicit error estimates in terms of the gap between closed operators and the gap between ranges and nullspaces of operators. The results are used to derive a theorem on the continuity of the Drazin inverse for closed operators and to describe the asymptotic behavior of operator semigroups.
Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models
DEFF Research Database (Denmark)
Lanne, Markku; Nyberg, Henri
We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use of t...
Generalized linear differential equations in a Banach space : continuous dependence on a parameter
Czech Academy of Sciences Publication Activity Database
Monteiro, G.A.; Tvrdý, Milan
2013-01-01
Roč. 33, č. 1 (2013), s. 283-303 ISSN 1078-0947 Institutional research plan: CEZ:AV0Z10190503 Keywords : generalized differential equations * continuous dependence * Kurzweil-Stieltjes integral Subject RIV: BA - General Mathematics Impact factor: 0.923, year: 2013 http://aimsciences.org/journals/displayArticlesnew.jsp?paperID=7615
International Nuclear Information System (INIS)
Wu Xiangjun; Lu Hongtao
2011-01-01
Highlights: → Adaptive generalized function projective lag synchronization (AGFPLS) is proposed. → Two uncertain chaos systems are lag synchronized up to a scaling function matrix. → The synchronization speed is sensitively influenced by the control gains. → The AGFPLS scheme is robust against noise perturbation. - Abstract: In this paper, a novel projective synchronization scheme called adaptive generalized function projective lag synchronization (AGFPLS) is proposed. In the AGFPLS method, the states of two different chaotic systems with fully uncertain parameters are asymptotically lag synchronized up to a desired scaling function matrix. By means of the Lyapunov stability theory, an adaptive controller with corresponding parameter update rule is designed for achieving AGFPLS between two diverse chaotic systems and estimating the unknown parameters. This technique is employed to realize AGFPLS between the uncertain Lü chaotic system and the uncertain Liu chaotic system, and between the Chen hyperchaotic system and the Lorenz hyperchaotic system with fully uncertain parameters, respectively. Furthermore, AGFPLS between two different uncertain chaotic systems can still be achieved effectively with the existence of noise perturbation. The corresponding numerical simulations are performed to demonstrate the validity and robustness of the presented synchronization method.
Yang, S; Wang, D
2000-01-01
This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of the NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its connection weights and unit biases based on the sequence and resource constraints of the job-shop scheduling problem during its processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, while the heuristic algorithms improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed.
MRSA model of learning and adaptation: a qualitative study among the general public
2012-01-01
Background More people in the US now die from Methicillin Resistant Staphylococcus aureus (MRSA) infections than from HIV/AIDS. Often acquired in healthcare facilities or during healthcare procedures, the extremely high incidence of MRSA infections and the dangerously low levels of literacy regarding antibiotic resistance in the general public are on a collision course. Traditional medical approaches to infection control and the conventional attitude healthcare practitioners adopt toward public education are no longer adequate to avoid this collision. This study helps us understand how people acquire and process new information and then adapt behaviours based on learning. Methods Using constructivist theory, semi-structured face-to-face and phone interviews were conducted to gather pertinent data. This allowed participants to tell their stories so their experiences could deepen our understanding of this crucial health issue. Interview transcripts were analysed using grounded theory and sensitizing concepts. Results Our findings were classified into two main categories, each of which in turn included three subthemes. First, in the category of Learning, we identified how individuals used their Experiences with MRSA, to answer the questions: What was learned? and, How did learning occur? The second category, Adaptation gave us insights into Self-reliance, Reliance on others, and Reflections on the MRSA journey. Conclusions This study underscores the critical importance of educational programs for patients, and improved continuing education for healthcare providers. Five specific results of this study can reduce the vacuum that currently exists between the knowledge and information available to healthcare professionals, and how that information is conveyed to the public. These points include: 1) a common model of MRSA learning and adaptation; 2) the self-directed nature of adult learning; 3) the focus on general MRSA information, care and prevention, and antibiotic
International Nuclear Information System (INIS)
Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.
2001-01-01
This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper
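The reformulation idea can be illustrated on the standard LCP, a special case of the GLCP: a feasible point solves the LCP exactly when it drives the complementarity gap to zero, so the problem becomes a smooth minimization. The sketch below uses a generic constrained QP solver on a tiny hand-picked instance, not the authors' bound-constrained stationary-point formulation.

```python
import numpy as np
from scipy.optimize import minimize

def solve_lcp(M, q):
    """Standard LCP: find x >= 0 with w = Mx + q >= 0 and x.w = 0,
    by minimizing the complementarity gap x.(Mx + q) over the
    feasible set; a zero minimum certifies an LCP solution."""
    n = len(q)
    res = minimize(lambda x: x @ (M @ x + q),
                   np.ones(n),
                   bounds=[(0.0, None)] * n,
                   constraints=[{'type': 'ineq', 'fun': lambda x: M @ x + q}],
                   method='SLSQP')
    return res.x, res.fun

# Tiny instance: M = I, q = (-1, 2); solution x = (1, 0) gives zero gap
x, gap = solve_lcp(np.eye(2), np.array([-1.0, 2.0]))
```

For nonconvex M a local minimizer may stop at a nonzero gap, which is why the paper's stationary-point characterization over simple bounds, rather than generic constrained optimization, is the interesting contribution.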
International Nuclear Information System (INIS)
Penuela, G; Ordonez R, A; Bejarano, A
1998-01-01
A generalized material balance equation for coal seam gas reservoirs was presented at the Escuela de Petroleos de la Universidad Industrial de Santander, based on the method of Walsh, who developed an analogous approach for conventional oil and gas reservoirs (Walsh, 1995). Our equation rests on twelve assumptions similar to those itemized by Walsh for his generalized expression for conventional reservoirs; it starts from the same volume balance consideration and is finally reorganized as in Walsh (1994). Because it is not expressed in terms of traditional (P/Z) plots, as proposed by King (1990), it allows one to perform many quantitative and qualitative analyses. It was also demonstrated that the existing equations are only particular cases of the generalized expression evaluated under certain restrictions. This equation is applicable to coal seam gas reservoirs in saturated, equilibrium, and undersaturated conditions, and to any type of coal bed without restriction to special values of the diffusion constant.
Nemeth, Michael P.; Schultz, Marc R.
2012-01-01
A detailed exact solution is presented for laminated-composite circular cylinders with general wall construction and that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.
Examining secular trend and seasonality in count data using dynamic generalized linear modelling
DEFF Research Database (Denmark)
Lundbye-Christensen, Søren; Dethlefsen, Claus; Gorst-Rasmussen, Anders
Aims: Time series of incidence counts often show secular trends and seasonal patterns. We present a model for incidence counts capable of handling a possible gradual change in growth rates and seasonal patterns, serial correlation and overdispersion. Methods: The model resembles an ordinary time series regression model for Poisson counts. It differs in allowing the regression coefficients to vary gradually over time in a random fashion. Data: In the period January 1980 to 1999, 17,989 incidents of acute myocardial infarction were recorded in the county of Northern Jutland, Denmark. Records were updated daily. Results: The model with a seasonal pattern and an approximately linear trend was fitted to the data, and diagnostic plots indicate a good model fit. The analysis with the dynamic model revealed peaks coinciding with influenza epidemics. On average the peak-to-trough ratio is estimated…
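A static analogue of the model described above is a log-linear Poisson regression with a trend term and a harmonic seasonal pair, fitted by iteratively reweighted least squares (IRLS). The sketch below uses fixed coefficients rather than the gradually varying ones of the dynamic model, and entirely synthetic monthly counts; it is a minimal illustration, not the authors' method.

```python
import numpy as np

def poisson_irls(X, y, n_iter=30):
    """Fit E[y] = exp(X beta) by iteratively reweighted least squares;
    the Poisson variance equals the mean, which supplies the weights."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 1.0)          # start near the data mean
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        XtW = X.T * mu                        # X' W with W = diag(mu)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Synthetic monthly counts: intercept, linear trend, annual harmonic pair
rng = np.random.default_rng(7)
t = np.arange(240.0)                          # 20 years of monthly data
X = np.column_stack([np.ones_like(t), t / 240.0,
                     np.sin(2 * np.pi * t / 12.0),
                     np.cos(2 * np.pi * t / 12.0)])
truth = np.array([3.0, 0.5, 0.3, 0.1])
y = rng.poisson(np.exp(X @ truth)).astype(float)
beta = poisson_irls(X, y)
```

The fitted exp(beta) for the trend term gives the growth over the study period, and the amplitude of the harmonic pair gives the peak-to-trough ratio of the seasonal pattern, the quantity the abstract reports.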
General rigid motion correction for computed tomography imaging based on locally linear embedding
Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge
2018-02-01
Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method, based on the principle of locally linear embedding (LLE), from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based All Scale Tomographic Reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
Iterative solution of general sparse linear systems on clusters of workstations
Energy Technology Data Exchange (ETDEWEB)
Lo, Gen-Ching; Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)
1996-12-31
Solving sparse, irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques that have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels, such as inner products, could erase any gains obtained from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an ongoing project for building a library of parallel iterative sparse matrix solvers.
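A minimal serial analogue of the preconditioned-iterative workflow described above, using SciPy's sparse toolbox: build a sparse nonsymmetric system, form an incomplete-LU preconditioner, and hand it to GMRES as a linear operator. The matrix, its size, and its entries are illustrative only.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric sparse test system (a 1-D convection-diffusion-like stencil)
n = 200
A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Incomplete-LU factorization wrapped as a preconditioner M ~ A^{-1}
ilu = spla.spilu(A)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

# Preconditioned GMRES; info == 0 signals convergence
x, info = spla.gmres(A, b, M=M)
```

For this banded matrix the incomplete factorization is nearly exact, so GMRES converges almost immediately; for truly irregular systems the trade-off the abstract describes, parallelism of the preconditioner versus its convergence rate, is exactly what makes the problem hard.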
International Nuclear Information System (INIS)
Kagramanov, E.D.; Nagiyev, Sh.M.; Mir-Kasimov, R.M.
1989-03-01
An exactly soluble problem for the finite-difference Schroedinger equation in the relativistic configurational space is considered. The appropriate finite-difference generalization of the factorization method is developed. The theory of new special functions ''the relativistic Hermite polynomials'', in which the solutions are expressed, is constructed. (author). 14 refs
Equilibrium arrival times to queues with general service times and non-linear utility functions
DEFF Research Database (Denmark)
Breinbjerg, Jesper
2017-01-01
by a general utility function which is decreasing in the waiting time and service completion time of each customer. Applications of such queueing games range from people choosing when to arrive at a grand opening sale to travellers choosing when to line up at the gate when boarding an airplane. We develop...
The energy and the linear momentum of space-times in general relativity
International Nuclear Information System (INIS)
Schoen, R.; Yau, S.T.
1981-01-01
We extend our previous proof of the positive mass conjecture to allow a more general asymptotic condition proposed by York. Hence we are able to prove that for an isolated physical system, the energy momentum four vector is a future timelike vector unless the system is trivial. Furthermore, we allow singularities of the type of black holes. (orig.)
Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M.
2016-01-01
Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data. PMID:28090605
Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application
Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin
2007-12-01
We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.
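The problem being solved above is the generalized eigendecomposition A x = λ B x. As a simple batch stand-in for the adaptive modified Newton recursion (not the authors' algorithm), the principal generalized eigenpair can be found by a fixed-point iteration normalized in the B-norm; the 2×2 matrices below are illustrative.

```python
import numpy as np

def generalized_power_iteration(A, B, n_iter=200, seed=0):
    """Principal solution of A x = lam B x for symmetric A and
    positive-definite B, via the fixed point x <- B^{-1} A x with
    B-norm normalization."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    Binv = np.linalg.inv(B)
    for _ in range(n_iter):
        x = Binv @ (A @ x)
        x /= np.sqrt(x @ B @ x)    # keep x on the B-unit sphere
    lam = x @ A @ x                # generalized Rayleigh quotient (x'Bx = 1)
    return lam, x

A = np.array([[4.0, 1.0], [1.0, 2.0]])
B = np.diag([2.0, 1.0])
lam, x = generalized_power_iteration(A, B)
```

In signal processing terms, A and B would be sample covariance matrices updated online; replacing the batch inverse with a stochastic update is precisely what the adaptive Newton scheme in the abstract provides.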
Generalized W^{1,1}-Young Measures and Relaxation of Problems with Linear Growth
Czech Academy of Sciences Publication Activity Database
Baia, M.; Krömer, Stefan; Kružík, Martin
2018-01-01
Roč. 50, č. 1 (2018), s. 1076-1119 ISSN 0036-1410 R&D Projects: GA ČR GA14-15264S; GA ČR(CZ) GF16-34894L Institutional support: RVO:67985556 Keywords : lower semicontinuity * quasiconvexity * Young measures Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.648, year: 2016 http://library.utia.cas.cz/2018/MTR/kruzik-0487019.pdf
Lee, Dongyul; Lee, Chaewoo
2014-01-01
Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem as an integer linear program (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm.
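The layer-to-MCS assignment problem described above can be sketched in miniature. The instance below is a fabricated toy (rates, coverage fractions, and layer sizes are all assumptions, and it is solved by brute force rather than an ILP solver), but the structure — one MCS per SVC layer, an airtime budget, and utility weighted by the fraction of users that can decode each layer — follows the abstract:

```python
from itertools import product

# Toy layer-to-MCS assignment: pick one MCS per SVC layer so total
# airtime fits in a frame and utility is maximized. Illustrative numbers.
layers = ["base", "enh1", "enh2"]
mcs_rate = {0: 1.0, 1: 2.0, 2: 4.0}          # Mbps per unit airtime
mcs_coverage = {0: 1.0, 1: 0.7, 2: 0.4}      # fraction of users decoding this MCS
layer_bits = {"base": 1.0, "enh1": 1.5, "enh2": 2.0}   # Mbit per frame
frame_time = 2.0

best, best_util = None, -1.0
for assign in product(mcs_rate, repeat=len(layers)):
    airtime = sum(layer_bits[l] / mcs_rate[m] for l, m in zip(layers, assign))
    if airtime > frame_time:
        continue  # infeasible: does not fit in the frame
    # a layer helps only users who can decode it and all lower layers
    cov, util = 1.0, 0.0
    for l, m in zip(layers, assign):
        cov = min(cov, mcs_coverage[m])
        util += cov * layer_bits[l]
    if util > best_util:
        best, best_util = assign, util

print(best, round(best_util, 3))
```

A real instance replaces the enumeration with an ILP formulation over binary assignment variables, as the paper does.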
Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori
2017-12-01
It is very important, from the point of view of saving energy, to design electrical machinery with high efficiency. Topology optimization (TO) is therefore sometimes used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO allows a much higher degree of freedom in structure, it can derive novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding model, which has many local minima, is first employed as a benchmark for performance evaluation among several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
International Nuclear Information System (INIS)
Chiou, J-S; Liu, M-T
2008-01-01
As a powerful machine-learning approach to pattern recognition problems, the support vector machine (SVM) is known to generalize well and, more importantly, to work very well in high-dimensional feature spaces. This paper presents a nonlinear active suspension controller which achieves a high level of performance by compensating for actuator dynamics. We use a linear quadratic regulator (LQR) to ensure optimal control of the nonlinear system: the LQR solves the state-feedback problem, while an SVM estimates and examines the state. The two are then combined into an output-feedback controller. A real-time simulation demonstrates that an active suspension using the combined SVM-LQR controller provides passengers with a much more comfortable ride and better road handling.
General, database-driven fast-feedback system for the Stanford Linear Collider
International Nuclear Information System (INIS)
Rouse, F.; Allison, S.; Castillo, S.; Gromme, T.; Hall, B.; Hendrickson, L.; Himel, T.; Krauter, K.; Sass, B.; Shoaee, H.
1991-05-01
A new feedback system has been developed for stabilizing the SLC beams at many locations. The feedback loops are designed to sample and correct at the 60 Hz repetition rate of the accelerator. Each loop can be distributed across several of the standard 80386 microprocessors which control the SLC hardware. A new communications system, KISNet, has been implemented to pass signals between the microprocessors at this rate. The software is written in a general fashion using the state space formalism of digital control theory. This allows a new loop to be implemented by just setting up the online database and perhaps installing a communications link. 3 refs., 4 figs
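The sampled-data feedback idea described above can be sketched in a few lines. This is a scalar proportional loop with invented numbers, not SLAC's distributed state-space implementation — just the pattern of measure, correct, and repeat at the machine repetition rate:

```python
import random

# Minimal sampled-data feedback sketch: a beam offset x is measured and
# corrected on every pulse of a 60 Hz machine. Gains and noise levels
# are illustrative assumptions, not SLC parameters.
random.seed(1)
gain = 0.5           # proportional feedback gain on the measured offset
x = 2.0              # initial beam offset (arbitrary units)
setpoint = 0.0
history = []
for pulse in range(60):                 # one second of 60 Hz pulses
    noise = random.gauss(0.0, 0.01)     # measurement noise
    measured = x + noise
    correction = -gain * (measured - setpoint)
    x += correction                     # actuator applies the correction
    history.append(x)

print(abs(history[-1]) < 0.1)
```

The database-driven design in the abstract generalizes this: the loop's state-space matrices and signal routing live in the online database, so a new loop is configuration rather than code.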
International Nuclear Information System (INIS)
Khattab, K.M.
1998-01-01
The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed that converges (in clock time) faster than the MDSA method. The method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented. (author). 9 refs., 2 tabs., 5 figs
International Nuclear Information System (INIS)
Khattab, K.M.
1997-01-01
The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed that converges (clock time) faster than the MDSA method. This method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented
Szadkowski, Zbigniew; Fraenkel, E. D.; van den Berg, Ad M.
2013-01-01
We present the FPGA/NIOS implementation of an adaptive finite impulse response (FIR) filter based on linear prediction to suppress radio frequency interference (RFI). This technique will be used for experiments that observe coherent radio emission from extensive air showers induced by
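The linear-prediction approach to RFI suppression described above can be sketched with a software LMS predictor. The FPGA implementation is a fixed-point pipeline; the floating-point version below, with assumed tap count, step size, and a pure-sinusoid stand-in for narrowband RFI, only illustrates the principle that predictable interference is cancelled by subtracting the prediction:

```python
import math

# LMS adaptive FIR linear predictor: predict the next sample from the
# last N samples and output the prediction error. Narrowband RFI is
# predictable, so it is suppressed; broadband pulses are not.
N, mu = 8, 0.02                       # filter taps, LMS step size (assumed)
w = [0.0] * N
signal = [math.sin(2 * math.pi * 0.05 * n) for n in range(2000)]  # "RFI"

errors = []
for n in range(N, len(signal)):
    past = signal[n - N:n][::-1]      # most recent sample first
    pred = sum(wi * xi for wi, xi in zip(w, past))
    e = signal[n] - pred              # prediction error = filter output
    w = [wi + mu * e * xi for wi, xi in zip(w, past)]
    errors.append(e)

early = sum(e * e for e in errors[:200]) / 200
late = sum(e * e for e in errors[-200:]) / 200
print(late < early / 10)              # predictor has locked onto the RFI
```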
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
A simulation-based goodness-of-fit test for random effects in generalized linear mixed models
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects given the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...
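The key fact behind this test — random effects simulated from their conditional distribution given the data are marginally distributed as the assumed random effects distribution when the model is correct — can be checked numerically. The sketch below uses a one-way normal random effects model with known parameters (the paper's correction for estimated parameters is not included):

```python
import math, random

# One-way random effects model y_ij = b_i + e_ij, b_i ~ N(0, tau2),
# e_ij ~ N(0, sig2). Simulate data, then draw b_i from its conditional
# distribution given the group mean; marginally the draws are N(0, tau2).
random.seed(7)
tau2, sig2, n_groups, n_per = 4.0, 1.0, 4000, 5

sims = []
for _ in range(n_groups):
    b = random.gauss(0.0, math.sqrt(tau2))
    ybar = b + random.gauss(0.0, math.sqrt(sig2 / n_per))  # group mean of y
    # conditional distribution of b given the data (known parameters):
    post_var = 1.0 / (1.0 / tau2 + n_per / sig2)
    post_mean = post_var * (n_per / sig2) * ybar
    sims.append(random.gauss(post_mean, math.sqrt(post_var)))

marginal_var = sum(s * s for s in sims) / len(sims)
print(round(marginal_var, 2))  # should be close to tau2 = 4.0
```

A misspecified random effects distribution would make the simulated draws deviate from N(0, tau2), which is exactly what the proposed test detects via the empirical distribution function.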
The general linear thermoelastic end problem for solid and hollow cylinders
International Nuclear Information System (INIS)
Thompson, J.J.; Chen, P.Y.P.
1977-01-01
This paper reports on three topics arising from work in progress on theoretical and computational aspects of the utilization of self equilibrating and load stress systems, to solve thermoelastic problems of finite, or semi-infinite, solid or hollow circular cylinders, with particular reference to the pellets, rods, tubes and shells with arbitrary internal heat generation encountered in Nuclear Reactor Technology. Specifically the work is aimed at the evaluation of stress intensification factors in the end elastic boundary layer region, due to various thermal and mechanical end load conditions, in relation to the external, exact stress solutions, which satisfy conditions on the curved surfaces only and are valid over the remainder of the cylindrical body. More generally, it is possible, at least for symmetric thermoelastic problems, to derive exact external solutions, using self equilibrating end load systems, which describe the stress/displacement state completely as a combination of a simple local plane strain solution and a correction dependent on the magnitude of axial thermal gradients. Thus plane strain, and self equilibrating end load systems are sufficient for the complete external and boundary layer solution of a finite cylindrical body. This formulation is capable of further extension, e.g., to concentric multi-region problems, and provides a useful approach to the study of local stress intensification factors due to thermal perturbations
International Nuclear Information System (INIS)
Rath, J.; Freeman, A.J.
1975-01-01
A detailed study of the generalized susceptibility χ(q) of Sc metal determined from an accurate augmented-plane-wave calculation of its energy-band structure is presented. The calculations were done by means of a computational scheme for χ(q) derived as an extension of the work of Jepsen and Andersen and of Lehmann and Taut on the density-of-states problem. The procedure yields simple analytic expressions for the χ(q) integral inside a tetrahedral microzone of the Brillouin zone which depend only on the volume of the tetrahedron and the differences of the energies at its corners. Constant-matrix-element results have been obtained for Sc which show very good agreement with the results of Liu, Gupta, and Sinha (but with one less peak) and exhibit a first maximum in χ(q) at (0, 0, 0.31)2π/c [vs (0, 0, 0.35)2π/c obtained by Liu et al.], which relates very well to dilute rare-earth alloy magnetic ordering at q_m = (0, 0, 0.28)2π/c and to the kink in the LA-phonon dispersion curve at (0, 0, 0.27)2π/c. (U.S.)
Appukuttan, D P; Vinayagavel, M; Balasundaram, A; Damodaran, L K; Shivaraman, P; Gunasshegaran, K
2015-01-01
Oral health has an impact on quality of life; validating a Tamil version of the General Oral Health Assessment Index would therefore enable its use as a research tool among the Tamil speaking population. In this study, we aimed to assess the psychometric properties of the translated Tamil version of the General Oral Health Assessment Index (GOHAI-Tml). Linguistic adaptation involved a forward and backward blind translation process. Reliability was analyzed using test-retest, Cronbach's alpha, and split-half reliability. Inter-item and item-total correlations were evaluated using Spearman rank correlation. Convenience sampling was done, and 265 consecutive patients aged 20-70 years attending the outpatient department were recruited. Subjects were requested to fill in a self-reporting questionnaire along with the Tamil GOHAI version. Clinical examination was done on the same visit. Concurrent validity was measured by assessing the relationship between GOHAI scores and self-perceived oral health and general health status, satisfaction with oral health, need for dental treatment, and esthetic satisfaction. Discriminant validity was evaluated by comparing the GOHAI scores with objectively assessed clinical parameters. Exploratory factor analysis was done to examine the factor structure. Mean GOHAI-Tml was 52.7 (6.8, range 22-60, median 54). The mean number of negative impacts was 2 (2.4, range 0-11, median 1). The Spearman rank correlation for test-retest ranged from 0.8 to 0.9. The findings support the GOHAI-Tml as a reliable and valid instrument for the Tamil speaking population.
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Energy Technology Data Exchange (ETDEWEB)
Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
Molenaar, Dylan; Bolsinova, Maria
2017-05-01
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei
2014-01-01
The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of the different models was compared. The frequency of traffic conflicts follows a negative binomial distribution, and the linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions; accordingly, the effects of conflicting traffic volumes on conflict frequency vary across traffic conditions. The occurrence of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has the potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
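The overdispersion that favors a negative binomial model over Poisson or ordinary linear regression can be demonstrated on synthetic counts. The parameters below are fabricated, not the study's field data; the point is only that negative binomial counts have variance well above their mean:

```python
import math, random

# Negative binomial counts via the Gamma-Poisson mixture:
# lambda ~ Gamma(r, p/(1-p)), y ~ Poisson(lambda).
# Mean = r p/(1-p); variance = mean + mean^2/r > mean (overdispersion).
random.seed(3)

def neg_binomial(r, p):
    lam = random.gammavariate(r, p / (1.0 - p))
    # Poisson draw by Knuth's inversion (fine for small lambda)
    L, k, prob = math.exp(-lam), 0, 1.0
    while True:
        prob *= random.random()
        if prob < L:
            return k
        k += 1

counts = [neg_binomial(r=2.0, p=0.6) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(round(mean, 2), round(var, 2))  # variance exceeds mean => overdispersion
```

A Poisson model forces variance = mean, so fitting it to data like these understates uncertainty — one reason the study's conflict counts are better served by the negative binomial family.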
Directory of Open Access Journals (Sweden)
Anas Altaleb
2017-03-01
The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we apply the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In step 2 we use permutations of the symmetric group S_256 to construct a new kind of S-box. To explain the proposed extension scheme, we give an example and construct one new S-box. The strength of the extended S-box is computed, and an insight is given into calculating the confusion-creating potency. To analyze the security of the S-box, some popular algebraic and statistical attacks are performed as well. The proposed S-box has been analyzed by the bit independence criterion, linear approximation probability test, nonlinearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
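The algebraic core of such constructions is a bijective map on GF(2^8). The sketch below builds the classic inversion S-box x → x⁻¹ and checks that it is a permutation; it uses the AES reduction polynomial as an illustrative choice (the paper's PGL(2, GF(2^8)) action and S_256 permutation step are not reproduced here):

```python
# Minimal GF(2^8) arithmetic with the AES polynomial x^8+x^4+x^3+x+1
# (an assumed choice), used to build a bijective 8x8 S-box from the
# inversion map x -> x^(-1), with 0 conventionally mapped to 0.

def gf_mul(a, b, poly=0x11B):
    # carry-less "Russian peasant" multiplication with modular reduction
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    # brute-force inverse search; fine for a one-off 256-entry table
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

sbox = [gf_inv(x) for x in range(256)]
print(len(set(sbox)) == 256)   # bijective: a valid 8x8 S-box
```

Criteria such as nonlinearity and strict avalanche, mentioned in the abstract, are then evaluated on tables like this one.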
Milquez-Sanabria, Harvey; Blanco-Cocom, Luis; Alzate-Gaviria, Liliana
2016-10-03
Agro-industrial wastes are an energy source for different industries; however, their application has not reached small industries. Previous and current research on the acidogenic phase of two-phase anaerobic digestion deals particularly with process optimization of acid-phase reactors operating on a wide variety of substrates, both soluble and complex in nature. Mathematical models for anaerobic digestion have been developed to understand and improve the efficient operation of the process. Linear models have the advantages of requiring less data, predicting future behavior, and updating when a new set of data becomes available. The aim of this research was to contribute to the reduction of organic solid waste, generate biogas, and develop a simple but accurate mathematical model to predict the behavior of the UASB reactor. The two stages were kept separate for 14 days, during which hydrolytic and acetogenic bacteria broke down onion waste and produced and accumulated volatile fatty acids. The two reactors were then coupled and the system ran for a further 16 days. The biogas and methane yields and volatile solids reduction were 0.6 ± 0.05 m^3 (kg VS removed)^-1, 0.43 ± 0.06 m^3 (kg VS removed)^-1, and 83.5 ± 9.8%, respectively. The model showed good prediction of all process parameters considered; the maximum error between experimental and predicted values was 1.84%, for the alkalinity profile. A linear predictive adaptive model for anaerobic digestion of onion waste in a two-stage process was determined under batch-fed conditions. The organic loading rate (OLR) was kept constant for the entire operation by modifying the hydrolysis reactor effluent fed to the UASB reactor. This condition avoids intoxication of the UASB reactor and also limits external buffer addition.
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems with fuzzy parameters and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resource objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level, and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency, compared with the credibility level. These results can support reasonable irrigation water resources management and agricultural production.
Cross-Cultural adaptation of the General Functioning Scale of the Family
Directory of Open Access Journals (Sweden)
Thiago Pires
2016-01-01
ABSTRACT OBJECTIVE To describe the process of cross-cultural adaptation of the General Functioning Scale of the Family, a subscale of the McMaster Family Assessment Device, for the Brazilian population. METHODS The General Functioning Scale of the Family was translated into Portuguese and administered to 500 guardians of children in the second grade of elementary school in public schools of São Gonçalo, Rio de Janeiro, Southeastern Brazil. The types of equivalence investigated were: conceptual and of items, semantic, operational, and measurement. The study involved discussions with experts, translations and back-translations of the instrument, and psychometric assessment. Reliability and validity studies were carried out by internal consistency testing (Cronbach's alpha, Guttman split-half correlation model, Pearson correlation coefficient, and confirmatory factor analysis). Associations between general functioning of the family and variables theoretically associated with the theme (father's or mother's drunkenness and violence between parents) were estimated by odds ratios. RESULTS Semantic equivalence was between 90.0% and 100%. Cronbach's alpha ranged from 0.79 to 0.81, indicating good internal consistency of the instrument. The Pearson correlation coefficient ranged between 0.303 and 0.549. A statistical association was found between the general functioning of the family score and the theoretically related variables, as well as good fit of the confirmatory analysis model. CONCLUSIONS The results indicate the feasibility of administering the instrument to the Brazilian population, as it is easy to understand and a good measurement of the construct of interest.
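The internal-consistency statistic used above, Cronbach's alpha, is straightforward to compute. The item-response matrix below is fabricated for illustration (five respondents, three items), not the study's data:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
def cronbach_alpha(items):
    # items: list of columns, one per questionnaire item
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# five respondents answering three items consistently -> high alpha
items = [[3, 4, 2, 5, 1],
         [3, 5, 2, 4, 1],
         [2, 4, 3, 5, 1]]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # prints 0.95
```

Values in the study's observed 0.79-0.81 range are conventionally read as good internal consistency.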
Hanglberger, Dominik; Merz, Joachim
2015-01-01
Empirical analyses using cross-sectional and panel data have found significantly higher levels of job satisfaction for the self-employed than for employees. We argue that, by neglecting anticipation and adaptation effects, estimates in previous studies might be misleading. To test this, we specify models accounting for anticipation of and adaptation to self-employment and general job changes. In contrast to the recent literature, we find no specific long-term effect of self-employment on job satisfaction. A...
Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke
2018-02-01
In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to converge in finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., a lower upper bound), and thus accurate solutions of general time-varying LMEs can be obtained in less time. At last, various situations have been considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
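The recurrent-network idea behind such solvers can be sketched for a time-varying linear equation A(t)x(t) = b(t). The sketch below uses a zeroing-network design with a *linear* activation, which gives exponential (not finite-time) convergence — the paper's contribution is precisely the nonlinear activations that achieve finite time. A(t), b(t), the gain, and the step size are all illustrative assumptions:

```python
import numpy as np

# Zeroing-network dynamics: with error e = A(t)x - b(t), impose
# de/dt = -gamma * phi(e), i.e. A xdot = -A' x + b' - gamma * phi(e).
def A(t):
    return np.array([[3.0 + np.sin(t), 0.5],
                     [0.5, 3.0 + np.cos(t)]])

def b(t):
    return np.array([np.cos(t), np.sin(t)])

def dA(t):  # analytic time derivatives
    return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])

def db(t):
    return np.array([-np.sin(t), np.cos(t)])

gamma, dt = 50.0, 1e-4
x = np.zeros(2)                       # arbitrary initial state
for k in range(int(2.0 / dt)):        # Euler integration from t=0 to t=2
    t = k * dt
    err = A(t) @ x - b(t)
    xdot = np.linalg.solve(A(t), -dA(t) @ x + db(t) - gamma * err)  # phi = identity
    x += dt * xdot

residual = np.linalg.norm(A(2.0) @ x - b(2.0))
print(residual < 1e-3)
```

Replacing the identity activation with, e.g., a sign-power function is what yields the finite-time bounds derived in the paper.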
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-05-18
This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.
Directory of Open Access Journals (Sweden)
Xiuchun Li
2013-01-01
When the parameters of both drive and response systems are all unknown, an adaptive sliding mode controller, strongly robust to exotic perturbations, is designed to realize generalized function projective synchronization. The sliding mode surface is given, and the controlled system is asymptotically stable on this surface with the passage of time. Based on the adaptation laws and Lyapunov stability theory, an adaptive sliding controller is designed to ensure the occurrence of the sliding motion. Finally, numerical simulations are presented to verify the effectiveness and robustness of the proposed method even when both drive and response systems are perturbed with external disturbances.
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data File (CDF) format and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs, and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.
International Nuclear Information System (INIS)
Fiorenza, Alberto; Vincenzi, Giovanni
2011-01-01
Research highlights:
→ We prove a result true for all linear homogeneous recurrences with constant coefficients.
→ As a corollary of our results we immediately obtain the celebrated Poincaré theorem.
→ The limit of the ratio of adjacent terms is characterized as the unique leading root of the characteristic polynomial.
→ The golden ratio, the Kepler limit of the classical Fibonacci sequence, is the unique leading root.
→ The Kepler limit may differ from the unique root of maximum modulus and multiplicity.
Abstract: For complex linear homogeneous recursive sequences with constant coefficients, we find a necessary and sufficient condition for the existence of the limit of the ratio of consecutive terms. The result can be applied even if the characteristic polynomial does not have roots with pairwise distinct moduli, as in the celebrated Poincaré theorem. When the limit exists, we characterize it as a particular root of the characteristic polynomial that depends on the initial conditions and is not necessarily the unique root of maximum modulus and multiplicity. The result extends to a quite general context the way the golden mean is obtained as the limit of the ratio of consecutive terms of the classical Fibonacci sequence.
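The limit described in the abstract can be checked numerically for the classical Fibonacci case; the short Python sketch below (function name hypothetical) iterates the recurrence and compares the ratio of consecutive terms with the golden ratio, the leading root of the characteristic polynomial t^2 - t - 1:

```python
# Minimal numerical illustration (hypothetical helper name): for the
# Fibonacci recurrence x_{k+1} = x_k + x_{k-1}, the ratio of consecutive
# terms converges to the golden ratio.
import math

def ratio_limit(a0, a1, n_steps=80):
    """Iterate the recurrence n_steps times and return the final ratio."""
    prev, curr = a0, a1
    for _ in range(n_steps):
        prev, curr = curr, prev + curr
    return curr / prev

golden = (1 + math.sqrt(5)) / 2
print(ratio_limit(1, 1), golden)  # the two values agree to double precision
```

Different initial conditions (e.g. the Lucas numbers 2, 1) yield the same limit, consistent with the characterization in terms of the characteristic polynomial rather than the starting values.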
Diaz, Francisco J
2016-10-15
We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Shimizu, Yoshiaki
1991-01-01
In recent complicated nuclear systems, there are increasing demands for developing highly advanced procedures for solving various problems. Among them, keen interest has been paid to man-machine communication to improve both safety and economy. Many optimization methods are well suited to these goals. In this preliminary note, we are concerned with the application of linear programming (LP) for this purpose. First, we present a new, superior version of the generalized PAPA method (GEPAPA) for solving LP problems. We then examine its effectiveness when applied to derive dynamic matrix control (DMC) as the LP solution. The approach aims at the above goal through quality control of processes appearing in the system. (author)
International Nuclear Information System (INIS)
Dai Hao; Jia Li-Xin; Zhang Yan-Bin
2012-01-01
The adaptive generalized matrix projective lag synchronization between two different complex networks with non-identical nodes and different dimensions is investigated in this paper. Based on Lyapunov stability theory and Barbalat's lemma, generalized matrix projective lag synchronization criteria are derived by using the adaptive control method. Furthermore, each network can be undirected or directed, connected or disconnected, and nodes in either network may have identical or different dynamics. The proposed strategy is applicable to almost all kinds of complex networks. In addition, numerical simulation results are presented to illustrate the effectiveness of this method, showing that the synchronization speed is sensitively influenced by the adaptive law strength, the network size, and the network topological structure.
Wang, Xulong; Philip, Vivek M; Ananda, Guruprasad; White, Charles C; Malhotra, Ankit; Michalski, Paul J; Karuturi, Krishna R Murthy; Chintalapudi, Sumana R; Acklin, Casey; Sasner, Michael; Bennett, David A; De Jager, Philip L; Howell, Gareth R; Carter, Gregory W
2018-03-05
Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized linear mixed model (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary, and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo (MCMC) sampling and maximum likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer's Disease Sequencing Project (ADSP). This study contains 570 individuals from 111 families, each with Alzheimer's disease diagnosed at one of four confidence levels. With Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer's disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The encoded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with AD-related neuropathology. In summary, this work provides an implementation of a flexible, generalized mixed-model approach in a Bayesian framework for association studies. Copyright © 2018, Genetics.
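As a rough illustration of the fixed-effects GLM core that a GLMM such as Bayes-GLMM builds on (the random effects for relatedness, the MCMC sampling, and the prior integration are all omitted here), a minimal Python sketch of logistic regression for a binary case-control trait, fit by iteratively reweighted least squares on simulated data with hypothetical names:

```python
# Illustrative sketch only: logistic regression fit by iteratively
# reweighted least squares (IRLS), the fixed-effects core of a GLMM.
# All names and data are hypothetical.
import numpy as np

def logistic_irls(X, y, n_iter=25):
    """Return logistic-regression coefficients via Newton/IRLS steps."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # mean response
        w = mu * (1.0 - mu)                      # IRLS weights
        # Newton update: beta += (X' W X)^{-1} X' (y - mu)
        beta = beta + np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - mu))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([-0.5, 1.2])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)
print(logistic_irls(X, y))  # roughly recovers the simulated coefficients
```

A GLMM extends this update with random-effect terms whose covariance encodes family structure, which is what corrects the relatedness-driven false positives mentioned above.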
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Using a Split-belt Treadmill to Evaluate Generalization of Human Locomotor Adaptation.
Vasudevan, Erin V L; Hamzey, Rami J; Kirk, Eileen M
2017-08-23
Understanding the mechanisms underlying locomotor learning helps researchers and clinicians optimize gait retraining as part of motor rehabilitation. However, studying human locomotor learning can be challenging. During infancy and childhood, the neuromuscular system is quite immature, and it is unlikely that locomotor learning during early stages of development is governed by the same mechanisms as in adulthood. By the time humans reach maturity, they are so proficient at walking that it is difficult to come up with a sufficiently novel task to study de novo locomotor learning. The split-belt treadmill, which has two belts that can drive each leg at a different speed, enables the study of both short- (i.e., immediate) and long-term (i.e., over minutes-days; a form of motor learning) gait modifications in response to a novel change in the walking environment. Individuals can easily be screened for previous exposure to the split-belt treadmill, thus ensuring that all experimental participants have no (or equivalent) prior experience. This paper describes a typical split-belt treadmill adaptation protocol that incorporates testing methods to quantify locomotor learning and generalization of this learning to other walking contexts. A discussion of important considerations for designing split-belt treadmill experiments follows, including factors like treadmill belt speeds, rest breaks, and distractors. Additionally, potential but understudied confounding variables (e.g., arm movements, prior experience) are considered in the discussion.
Directory of Open Access Journals (Sweden)
Al-Jumah KA
2014-03-01
Khalaf Ali Al-Jumah,1 Mohamed Azmi Hassali,2 Ibrahem Al-Zaagi3
1Al Amal Psychiatric Hospital, Riyadh, Saudi Arabia; 2School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang, Malaysia; 3King Saud Medical City, Riyadh, Saudi Arabia
Objective: The aim of this study was to cross-culturally adapt the Armando Patient Satisfaction Questionnaire into Arabic and validate its use in the general population.
Methods: The translation was conducted based on the principles of the most widely used model in questionnaire translation, namely Brislin's back-translation model. A written authorization allowing translation into Arabic was obtained from the original author. The Arabic version of the questionnaire was distributed to 480 participants to evaluate construct validity. Statistical Package for the Social Sciences version 17.0 for Windows was used for the statistical analysis.
Results: The response rate of this study was 96%; most of the respondents (52.5%) were female. Internal consistency was assessed using Cronbach's α, which showed that this questionnaire provides a high reliability coefficient (reaching 0.9299) and a high degree of consistency, and thus can be relied upon in future patient satisfaction research.
Keywords: cross-cultural, Arabic, survey
How applicable is the general adaptation syndrome to the unicellular Tetrahymena?
Csaba, György; Pállinger, Eva
2009-01-01
Hormone receptors, hormones, and signal transduction pathways characteristic of higher vertebrates can also be observed in the unicellular Tetrahymena. Previous work showed that stress conditions (starvation, high temperature, high salt concentration, formaldehyde or alcohol treatment) elevated the intracellular level of four hormones (ACTH, endorphin, serotonin, and T(3)). Here, the effect of other stressors (CuSO4 poisoning and treatment with the tryptophan hydroxylase inhibitor parachlorophenylalanine (PCPA)) on the same and other hormones (epinephrine, insulin, histamine) was studied using immunocytochemistry and flow cytometric analysis. It was found that each treatment increased the intracellular hormone contents, but some hormones (histamine, T(3)) were less reactive. Insulin, which is a life-saving factor for Tetrahymena, itself provoked an elevation of hormone amounts and, in association with a stressor, further increased hormone levels. It was concluded that the ancestor of Selye's General Adaptation Syndrome (GAS) can be found already at the unicellular level, and this possibly has a life-saving function. Copyright 2008 John Wiley & Sons, Ltd.
Fink, George
2017-03-01
Hans Selye, in a note to Nature in 1936, initiated the field of stress research by showing that rats exposed to nocuous stimuli responded by way of a 'general adaptation syndrome' (GAS). One of the main features of the GAS was the 'formation of acute erosions in the digestive tract, particularly in the stomach, small intestine and appendix'. This provided experimental evidence for the view, based on clinical data, that gastro-duodenal (peptic) ulcers could be caused by stress. This hypothesis was challenged by Marshall and Warren's Nobel Prize (2005)-winning discovery of a causal association between Helicobacter pylori and peptic ulcers. However, clinical and experimental studies suggest that stress can cause peptic ulceration in the absence of H. pylori. Predictably, the etiological pendulum of gastric and duodenal ulceration has swung from 'all stress' to 'all bacteria', followed by a sober realization that both factors play a role, separately as well as together. This raises the question of whether stress and H. pylori interact, and if so, how? Stress has also been implicated in inflammatory bowel disease (IBD) and related disorders; however, there is no proof yet that stress is the primary etiological trigger for IBD. Central dopamine mechanisms seem to be involved in the stress induction of peptic ulceration, whereas activation of the sympathetic nervous system and of central and peripheral corticotrophin-releasing factor appears to mediate stress-induced IBD. © 2017 Society for Endocrinology.
Directory of Open Access Journals (Sweden)
Jalalifar Mehran
2007-01-01
In this paper, using an adaptive backstepping approach, an adaptive rotor flux observer that provides simultaneous stator and rotor resistance estimation for an induction motor used in a series hybrid electric vehicle is proposed. The controller of the induction motor (IM) is designed based on the input-output feedback linearization technique. Combining this controller with the adaptive backstepping observer makes the system robust against rotor and stator resistance uncertainties. In addition, the mechanical components of a hybrid electric vehicle are called from the Advanced Vehicle Simulator Software Library and then linked with the electric motor. Finally, a typical series hybrid electric vehicle is modeled and investigated. Various tests, such as acceleration, ramp traversal, and fuel consumption and emission, are performed on the proposed model of a series hybrid vehicle. The computer simulation results obtained confirm the validity and performance of the proposed IM control approach for a series hybrid electric vehicle.
Lundström, T; Jonas, T; Volkwein, A
2008-01-01
Thirteen Norway spruce [Picea abies (L.) Karst.] trees of different size, age, and social status, and grown under varying conditions, were investigated to see how they react to complex natural static loading under summer and winter conditions, and how they have adapted their growth to such combinations of load and tree state. For this purpose a non-linear finite-element model and an extensive experimental data set were used, as well as a new formulation describing the degree to which the exploitation of the bending stress capacity is uniform. The three main findings were: material and geometric non-linearities play important roles when analysing tree deflections and critical loads; the strengths of the stem and the anchorage mutually adapt to the local wind acting on the tree crown in the forest canopy; and the radial stem growth follows a mechanically high-performance path because it adapts to prevailing as well as acute seasonal combinations of the tree state (e.g. frozen or unfrozen stem and anchorage) and load (e.g. wind and vertical and lateral snow pressure). Young trees appeared to adapt to such combinations in a more differentiated way than older trees. In conclusion, the mechanical performance of the Norway spruce studied was mostly very high, indicating that their overall growth had been clearly influenced by the external site- and tree-specific mechanical stress.
Directory of Open Access Journals (Sweden)
Jie Wang
2017-03-01
Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representations in various computer vision tasks. However, in the field of remote sensing, there are not sufficient images to train a useful deep CNN. Instead, we tend to transfer successful pre-trained deep CNNs to remote sensing tasks. In the transfer process, the generalization power of the features in pre-trained deep CNNs plays the key role. In this paper, we propose two promising architectures to extract general features from pre-trained deep CNNs for remote scene classification. These two architectures suggest two directions for improvement. First, before the pre-trained deep CNNs, we design a linear PCA network (LPCANet) to synthesize spatial information of remote sensing images in each spectral channel. This design shortens the spatial “distance” between the target and source datasets for pre-trained deep CNNs. Second, we introduce quaternion algebra to the LPCANet, which further shortens the spectral “distance” between remote sensing images and the images used to pre-train deep CNNs. With five well-known pre-trained deep CNNs, experimental results on three independent remote sensing datasets demonstrate that our proposed framework obtains state-of-the-art results without fine-tuning and feature fusing. This paper also provides a baseline for transferring fresh pre-trained deep CNNs to other remote sensing tasks.
Energy Technology Data Exchange (ETDEWEB)
Treuer, Harald; Hoevels, Moritz; Luyken, Klaus; Visser-Vandewalle, Veerle; Wirths, Jochen; Ruge, Maximilian [University Hospital Cologne, Department of Stereotaxy and Functional Neurosurgery, Cologne (Germany); Kocher, Martin [University Hospital Cologne, Department of Radiotherapy, Cologne (Germany)
2014-11-22
Stereotactic radiosurgery with an adapted linear accelerator (linac-SRS) is an established therapy option for brain metastases, benign brain tumors, and arteriovenous malformations. We intended to investigate whether the dosimetric quality of treatment plans achieved with a CyberKnife (CK) is at least equivalent to that for linac-SRS with circular or micromultileaf collimators (microMLC). A random sample of 16 patients with 23 target volumes, previously treated with linac-SRS, was replanned with CK. Planning constraints were identical dose prescription and clinical applicability. In all cases, uniform optimization scripts and inverse planning objectives were used. Plans were compared with respect to coverage, minimal dose within the target volume, conformity index, and volume of brain tissue irradiated with ≥ 10 Gy. Generating the CK plan was unproblematic with simple optimization scripts in all cases. With the CK plans, coverage, minimal target volume dose, and conformity index were significantly better, while no significant improvement could be shown regarding the 10 Gy volume. Multiobjective comparison of the irradiated target volumes favored the CK plan in 20 out of 23 cases and was equivalent in 3 out of 23 cases; multiobjective comparison per treated patient favored the CK plan in all 16 cases. The results clearly demonstrate the superiority of the CK irradiation plan compared to classical linac-SRS with circular collimators and microMLC. In particular, the average minimal target volume dose per patient, increased by 1.9 Gy, together with a 14% better conformity index, appears to be a clinically relevant improvement. (orig.)
Radaydeh, Redha Mahmoud Mesleh; Alouini, Mohamed-Slim
2010-01-01
The impact of co-channel interference and nonideal estimation of the desired user channel state information (CSI) on the performance of an adaptive threshold-based generalized transmit diversity for low-complexity multiple-input single-output configuration is investigated. The adaptation to channel conditions is assumed to be based on the desired user CSI, and the number of active transmit antennas is adjusted accordingly to guarantee predetermined target performance. To facilitate comparisons between different adaptation schemes, new analytical results for the statistics of combined signal-to-interference-plus-noise ratio (SINR) are derived, which can be applied for different fading conditions of interfering signals. Selected numerical results are presented to validate the analytical development and to compare the outage performance of the considered adaptation schemes. © 2010 IEEE.
Radaydeh, Redha Mahmoud Mesleh
2010-09-01
The impact of co-channel interference and nonideal estimation of the desired user channel state information (CSI) on the performance of an adaptive threshold-based generalized transmit diversity for low-complexity multiple-input single-output configuration is investigated. The adaptation to channel conditions is assumed to be based on the desired user CSI, and the number of active transmit antennas is adjusted accordingly to guarantee predetermined target performance. To facilitate comparisons between different adaptation schemes, new analytical results for the statistics of combined signal-to-interference-plus-noise ratio (SINR) are derived, which can be applied for different fading conditions of interfering signals. Selected numerical results are presented to validate the analytical development and to compare the outage performance of the considered adaptation schemes. © 2010 IEEE.
Hughes, Vanessa K; Langlois, Neil E I
2010-12-01
Bruises can have medicolegal significance, such that the age of a bruise may be an important issue. This study sought to determine whether colorimetry or reflectance spectrophotometry could be employed to objectively estimate the age of bruises. Based on a previously described method, reflectance spectrophotometric scans were obtained from bruises using a Cary 100 Bio spectrophotometer fitted with a fibre-optic reflectance probe. Measurements were taken from the bruise and a control area. Software was used to calculate the first derivative at 490 and 480 nm; the proportion of oxygenated hemoglobin was calculated using an isobestic point method, and a software application converted the scan data into colorimetry data. In addition, data on factors that might be associated with the determination of the age of a bruise were recorded: subject age, subject sex, degree of trauma, bruise size, skin color, body build, and depth of bruise. From 147 subjects, 233 reflectance spectrophotometry scans were obtained for analysis. The age of the bruises ranged from 0.5 to 231.5 h. A General Linear Model analysis was used. This revealed that colorimetric measurement of the yellowness of a bruise accounted for 13% of the bruise age. By incorporating the other recorded data (as above), yellowness could predict up to 32% of the age of a bruise, implying that 68% of the variation depended on other factors. However, critical appraisal of the model revealed that the colorimetry method of determining the age of a bruise was affected by skin tone and required a measure of the proportion of oxygenated hemoglobin, which is obtained by spectrophotometric methods. Using spectrophotometry, the first derivative at 490 nm alone accounted for 18% of the bruise age estimate. When additional factors (subject sex, bruise depth, and oxygenation of hemoglobin) were included in the General Linear Model, this increased to 31%, implying that 69% of the variation depended on other factors.
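The percentages above ("accounted for 13% of the bruise age") are proportions of variance explained, i.e. the R squared of a linear model. A minimal single-predictor Python sketch (function name and data hypothetical, not the study's actual model, which included several covariates):

```python
import numpy as np

def r_squared(x, y):
    """Proportion of the variance of y explained by a linear fit on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

# Simulated example: a noisy linear relationship explains only part of
# the variance, analogous to yellowness vs. bruise age.
rng = np.random.default_rng(0)
x = rng.uniform(0, 230, 200)             # e.g. bruise age in hours
y = 0.02 * x + rng.normal(0, 1.5, 200)   # e.g. a colour measurement
print(r_squared(x, y))  # strictly between 0 and 1
```

A full General Linear Model analysis generalizes this by regressing on several predictors at once and reading off the change in explained variance as covariates are added.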
Cross-Cultural adaptation of the General Functioning Scale of the Family.
Pires, Thiago; Assis, Simone Gonçalves de; Avanci, Joviana Quintes; Pesce, Renata Pires
2016-06-27
To describe the process of cross-cultural adaptation of the General Functioning Scale of the Family, a subscale of the McMaster Family Assessment Device, for the Brazilian population. The General Functioning Scale of the Family was translated into Portuguese and administered to 500 guardians of children in the second grade of elementary school in public schools of São Gonçalo, Rio de Janeiro, Southeastern Brazil. The types of equivalence investigated were: conceptual and item, semantic, operational, and measurement equivalence. The study involved discussions with experts, translations and back-translations of the instrument, and psychometric assessment. Reliability and validity studies were carried out by internal consistency testing (Cronbach's alpha), the Guttman split-half correlation model, the Pearson correlation coefficient, and confirmatory factor analysis. Associations between general functioning of the family and variables theoretically associated with the theme (father's or mother's drunkenness and violence between parents) were estimated by odds ratios. Semantic equivalence was between 90.0% and 100%. Cronbach's alpha ranged from 0.79 to 0.81, indicating good internal consistency of the instrument. The Pearson correlation coefficient ranged between 0.303 and 0.549. A statistical association was found between the general functioning of the family score and the theoretically related variables, as well as a good fit of the confirmatory analysis model. The results indicate the feasibility of administering the instrument to the Brazilian population, as it is easy to understand and provides a good measurement of the construct of interest.
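The internal-consistency measure reported above, Cronbach's alpha, is straightforward to compute; a minimal Python sketch (function name hypothetical) using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score):

```python
# Minimal sketch of Cronbach's alpha (function name hypothetical).
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, k_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# Perfectly redundant items give alpha = 1; values around 0.8, as in the
# study above, indicate good internal consistency.
x = np.arange(10.0)
print(cronbach_alpha(np.column_stack([x, x])))  # -> 1.0
```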
Directory of Open Access Journals (Sweden)
Maryam Montazeri
2013-01-01
Full Text Available This paper presents a control approach to the fuzzy-adaptive control scheme for rigid manipulators with unknown parameters. Lagrange’s method is employed for computing robot motion dynamics. Stability analysis guaranteed through Lyapunov’s theory using some suitable adaptive rules that make sure all signals in the closed-loop system are bounded and tracking error ones asymptotically reaches to zero. Compared with other controllers, there are some numerical simulations that verify effectiveness of the proposed method. Also, simulation results verify that the proposed controller can deal with uncertainties in the system.
Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei
2017-09-25
It is challenging for current statistical models to predict the clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes at a single moment. A multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. Dynamic prediction was performed for both internal and external subjects using samples from the posterior distributions of the parameter estimates and random effects, and predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB), and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained in particular for non-motor scores (RMSE and AB: 2.89 and 2.20) compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, individual-level predictions of longitudinal trajectories for the testing data were performed, with approximately 80% of observed values falling within the 95% credible intervals. Multivariate generalized linear mixed models hold promise for predicting the clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available ( https://pdbp.ninds.nih.gov/data-management ).
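The accuracy measures used above can be sketched directly; in the Python sketch below (hypothetical helper names) AB is taken as the mean absolute deviation between observed and predicted values, though the paper's exact definition may differ:

```python
# Hypothetical helper names; AB here is the mean absolute deviation.
import numpy as np

def rmse(observed, predicted):
    """Root mean square error between observed and predicted values."""
    d = np.asarray(observed, float) - np.asarray(predicted, float)
    return float(np.sqrt(np.mean(d ** 2)))

def absolute_bias(observed, predicted):
    """Mean absolute deviation between observed and predicted values."""
    d = np.asarray(observed, float) - np.asarray(predicted, float)
    return float(np.mean(np.abs(d)))

print(rmse([1, 2, 3], [1, 2, 5]), absolute_bias([1, 2, 3], [1, 2, 5]))
```

RMSE penalizes large individual errors more heavily than AB, which is why the two measures are reported together when comparing models.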
Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models
Directory of Open Access Journals (Sweden)
Xiao Guo
2018-03-01
An important issue for robust inference is to examine the stability of the asymptotic level and power of a test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called “robust-BD”, for the class of “general linear models”. Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.
Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D
2018-01-15
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods, including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and its computation is constrained by the number of quadrature points; the ML method also suffers from this constraint, whereas the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
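The ad hoc HDL approach that the proposed MCNR method is compared against, replacing left-censored values with half the detection limit, is simple to express; a minimal Python sketch (function name hypothetical):

```python
# Minimal sketch of the ad hoc HDL imputation used as a comparator above:
# left-censored readings (below the assay's detection limit) are replaced
# with half the detection limit. Function name hypothetical.
import numpy as np

def impute_half_detection_limit(values, detection_limit):
    values = np.asarray(values, dtype=float)
    return np.where(values < detection_limit, detection_limit / 2.0, values)

print(impute_half_detection_limit([5.0, 0.2, 3.1, 0.0], 1.0))
# readings below the limit of 1.0 are replaced with 0.5
```

The simulation results above show why this shortcut is risky: it treats an imputed constant as an exact observation, understating the uncertainty that a likelihood-based method models explicitly.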
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
2013-01-01
Background: In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods: For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results: The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion: The algorithms were implemented using R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
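A toy version of the resampling correction described above can be sketched as a permutation max-test: the reference distribution of the largest statistic over all candidate codings is estimated by permuting the outcome, so the selection step is built into the adjusted p-value. This is a hedged illustration with made-up data, not the CPMCGLM implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, B = 200, 500
x = rng.normal(size=n)
y = rng.normal(size=n)                  # null scenario: y unrelated to x
cutpoints = np.quantile(x, [0.25, 0.4, 0.5, 0.6, 0.75])

def max_abs_t(yy):
    # largest |t| over all candidate dichotomous codings of x
    ts = []
    for c in cutpoints:
        g = x > c
        n1, n0 = g.sum(), (~g).sum()
        diff = yy[g].mean() - yy[~g].mean()
        s2 = (yy[g].var(ddof=1) * (n1 - 1) + yy[~g].var(ddof=1) * (n0 - 1)) / (n1 + n0 - 2)
        ts.append(abs(diff) / np.sqrt(s2 * (1 / n1 + 1 / n0)))
    return max(ts)

t_obs = max_abs_t(y)
# permutation reference distribution of the maximum statistic
t_perm = np.array([max_abs_t(rng.permutation(y)) for _ in range(B)])
p_adj = (t_perm >= t_obs).mean()
```

Because the maximum over codings is recomputed in every permutation, `p_adj` is automatically corrected for having searched over the cutpoints.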
Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L
2012-12-01
The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0.52 for IRWLS and -0.62 in Sorensen & Waagepetersen (2003).
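The core idea of jointly modeling a mean and a log-linear residual variance can be sketched with a stripped-down alternating least-squares loop (fixed effects only, so far simpler than the DHGLM with random effects; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
x = rng.normal(size=n)
z = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])     # mean-model design
Z = np.column_stack([np.ones(n), z])     # log-variance-model design
beta_true, gamma_true = np.array([1.0, 2.0]), np.array([0.0, 0.8])
y = X @ beta_true + rng.normal(size=n) * np.exp(0.5 * Z @ gamma_true)

gamma = np.zeros(2)
for _ in range(20):
    w = np.exp(-Z @ gamma)                           # weights 1 / sigma_i^2
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    d = (y - X @ beta) ** 2                          # squared residuals ~ sigma_i^2 * chi2_1
    # regress log d on Z; E[log chi2_1] = -1.2704, so correct the intercept
    gamma = np.linalg.lstsq(Z, np.log(d + 1e-12), rcond=None)[0]
    gamma[0] += 1.2704
```

Alternating a weighted fit for the mean with a log-scale fit for the dispersion is the same structural idea the IRWLS approximation exploits, although the real algorithm works on the h-likelihood with random effects in both parts.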
Chen, Vivian Yi-Ju; Yang, Tse-Chuan
2012-08-01
An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, our code allows users to specify the bandwidth selection process more fully than existing programs do. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provide three empirical examples to illustrate the use of the SAS macro programs and demonstrate the advantages explained above. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
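The two ingredients the abstract emphasizes, kernel weighting functions and a local fit at each location, can be sketched in a few lines (a generic geographically weighted least-squares fit, not the authors' SAS macros; data and bandwidth are invented):

```python
import numpy as np

def kernel(d, bw, kind="gaussian"):
    # two common geographic weighting kernels
    if kind == "gaussian":
        return np.exp(-0.5 * (d / bw) ** 2)
    if kind == "bisquare":
        return np.where(d < bw, (1 - (d / bw) ** 2) ** 2, 0.0)
    raise ValueError(kind)

def gw_fit(coords, X, y, at, bw, kind="gaussian"):
    # local weighted least squares at location `at`
    d = np.linalg.norm(coords - at, axis=1)
    w = kernel(d, bw, kind)
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(300, 2))
X = np.column_stack([np.ones(300), rng.normal(size=300)])
# spatially varying slope: increases with the x-coordinate
y = X[:, 1] * (0.2 * coords[:, 0]) + rng.normal(scale=0.1, size=300)
b_west = gw_fit(coords, X, y, at=np.array([1.0, 5.0]), bw=2.0)
b_east = gw_fit(coords, X, y, at=np.array([9.0, 5.0]), bw=2.0)
```

Repeating `gw_fit` over a grid of locations recovers the spatially varying coefficient surface; the bandwidth `bw` plays the role selected by cross-validation in GWGLM software.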
Gur'ianov, V A; Shepetovskaia, N L; Pivovarova, G M; Tolmachev, G N; Volodin, A V
2007-01-01
Considering that the autonomic nervous and cardiovascular systems (ANS and CVS) are the major links in the development of the general adaptation syndrome in pregnancy and are affected by all the processes involved in the development of the syndrome, the authors analyzed the state of these systems in healthy non-pregnant and pregnant women (HNPW and HPW) and in pregnant women with gestosis. HNPW were already found to have a prerequisite for impaired pregnancy adaptation in the form of ANS and CVS dysfunction. In HPW, these impairments were more pronounced. In the pregnant women, impaired adaptive processes manifested themselves as excess sympathicotonia in 72% and parasympathicotonia in 23% of cases despite the treatment performed, accompanied by hypokinetic hemodynamics in 53% and 50% of cases, respectively. In hyper- and eukinetic hemodynamics there was no physiologically required decrease in total peripheral vascular resistance, while in hypokinetic hemodynamics there was a pathological increase. Such disorders enhance the significance of abdominal compartment syndrome, aortocaval compression, ischemia-reperfusion, and hydrodynamic and membranogenic (capillary leakage) factors of impaired water metabolism, which contributes to adaptation derangement. Based on these findings, the authors have created a developmental modulation algorithm for the general adaptation syndrome in completed pregnancy and surgical delivery.
Watanabe, Yurina; Yoshizaki, Kazuhito
2014-10-01
This study aimed to investigate the generality of conflict adaptation associated with block-wise conflict frequency between two types of stimulus scripts (Kanji and Hiragana). To this end, we examined whether the modulation of the compatibility effect with one type of script depending on block-wise conflict frequency (75% versus 25%) generalized to the other type of script, whose block-wise conflict frequency was kept constant (50%), using the Spatial Stroop task. In Experiment 1, 16 participants were required to identify the target orientation (up or down) presented in the upper or lower visual field. The results showed that block-wise conflict adaptation with one type of stimulus script generalized to the other. The procedure in Experiment 2 was the same as that in Experiment 1, except that the presentation location differed between the two types of stimulus scripts. We did not find a generalization from one script to the other. These results suggest that presentation location is a critical factor contributing to the generality of block-wise conflict adaptation.
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Orra, Kashfull; Choudhury, Sounak K.
2016-12-01
The purpose of this paper is to build an adaptive feedback linear control system that monitors variation in the cutting force signal in order to improve tool life. The paper discusses the use of a transfer function approach to improve the mathematical modelling and to adaptively control the process dynamics of the turning operation. The experimental results agree with the simulation model, with an error of less than 3%. The state-space approach used in this paper successfully checks the adequacy of the control system through the controllability and observability test matrices, and the system can be transferred from one state to another by appropriate input control in finite time. The proposed system can be implemented for other machining processes under a varying range of cutting conditions to improve the efficiency and observability of the system.
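The controllability and observability rank tests mentioned above are standard and easy to reproduce; the sketch below uses an illustrative second-order plant (the matrices are invented, not the paper's turning-process model):

```python
import numpy as np

# State-space model of a simple second-order plant (illustrative values)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
# Controllability matrix [B, AB, ..., A^(n-1)B] and observability matrix
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

controllable = np.linalg.matrix_rank(ctrb) == n
observable = np.linalg.matrix_rank(obsv) == n
```

Full rank of both matrices is exactly the condition under which the state can be driven between arbitrary states in finite time and reconstructed from the output.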
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2017-01-01
In photoacoustic (PA) imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a novel beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called Minimum Variance-Based D...
Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin
2017-09-27
Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated urban-rural differences in the social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups in the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed model (WGLIMM) was used to deal with this hierarchical data structure. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalence in the four residence groups - urban, second city, suburban, and town/rural - was 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect. Regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data. Social and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions.
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes
Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.
2012-05-01
The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main tasks in this paper: the parameter estimation procedure, simulation, and implementation of the model for real data. In the parameter estimation procedure, the concepts of the threshold, the nested random effect, and the computational algorithm are described. The simulation data are built for 3 conditions to examine the effect of different parameter values of the random effect distributions. The last task is the implementation of the model for data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan) nested in the district, and districts (kabupaten) are nested in the province. For the results of the simulation, the ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) scales are used. They show that the province parameters have the highest bias, but more stable RRMSE in all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, as the result of the model implementation for the data, only the number of farmer families and the number of health personnel have significant contributions to the level of poverty in the Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
Camilo, Daniela Castro
2017-08-30
Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, slope-unit (SU) based susceptibility models allow investigating the same areas while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and the quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of elevation and the 5% quantile of plan curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performance. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
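The LASSO-penalized GLM used for predictor shrinkage can be sketched with a small proximal-gradient (ISTA) solver for L1-penalized logistic regression; this is a generic numpy stand-in for the R implementation, with simulated data in which only the first two predictors are informative:

```python
import numpy as np

def lasso_logistic(X, y, lam, steps=2000):
    # proximal gradient (ISTA) for L1-penalized logistic regression
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / (4 * n)   # Lipschitz constant of the gradient
    for _ in range(steps):
        mu = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (mu - y) / n
        b = beta - grad / L
        beta = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(4)
n, p = 800, 10
X = rng.normal(size=(n, p))
eta = 2.0 * X[:, 0] - 2.0 * X[:, 1]           # only two informative predictors
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-eta))).astype(float)
beta = lasso_logistic(X, y, lam=0.02)
```

The L1 penalty drives the coefficients of uninformative predictors toward exactly zero, which is the "shrink back to a sensible and interpretable number" step described in the abstract.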
Camilo, Daniela Castro; Lombardo, Luigi; Mai, Paul Martin; Dou, Jie; Huser, Raphaël
2017-01-01
Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, slope-unit (SU) based susceptibility models allow investigating the same areas while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and the quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of elevation and the 5% quantile of plan curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performance. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
Salihu, Hamisu M; Salemi, Jason L; Nash, Michelle C; Chandler, Kristen; Mbah, Alfred K; Alio, Amina P
2014-08-01
Lack of paternal involvement has been shown to be associated with adverse pregnancy outcomes, including infant morbidity and mortality, but the impact on health care costs is unknown. Various methodological approaches have been used in cost minimization and cost effectiveness analyses and it remains unclear how cost estimates vary according to the analytic strategy adopted. We illustrate a methodological comparison of decision analysis modeling and generalized linear modeling (GLM) techniques using a case study that assesses the cost-effectiveness of potential father involvement interventions. We conducted a 12-year retrospective cohort study using a statewide enhanced maternal-infant database that contains both clinical and nonclinical information. A missing name for the father on the infant's birth certificate was used as a proxy for lack of paternal involvement, the main exposure of this study. Using decision analysis modeling and GLM, we compared all infant inpatient hospitalization costs over the first year of life. Costs were calculated from hospital charges using department-level cost-to-charge ratios and were adjusted for inflation. In our cohort of 2,243,891 infants, 9.2% had a father uninvolved during pregnancy. Lack of paternal involvement was associated with higher rates of preterm birth, small-for-gestational age, and infant morbidity and mortality. Both analytic approaches estimate significantly higher per-infant costs for father uninvolved pregnancies (decision analysis model: $1,827, GLM: $1,139). This paper provides sufficient evidence that healthcare costs could be significantly reduced through enhanced father involvement during pregnancy, and buttresses the call for a national program to involve fathers in antenatal care.
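Cost outcomes of the kind analyzed above are typically right-skewed, which is why a Gamma GLM with a log link is a common choice; a minimal Fisher-scoring (IRLS) fit on simulated cost-like data is sketched below (illustrative data, not the study's cohort):

```python
import numpy as np

def gamma_glm_log(X, y, iters=25):
    # Fisher scoring for a Gamma GLM with log link; for this link the
    # IRLS working weights are constant, so each step is an OLS fit
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu           # working response
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta

rng = np.random.default_rng(5)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = np.exp(X @ np.array([0.5, 0.3]))     # e.g. expected cost given an exposure
y = rng.gamma(shape=5.0, scale=mu / 5.0)  # Gamma-distributed costs with mean mu
beta = gamma_glm_log(X, y)
```

Exponentiating a fitted coefficient gives the multiplicative effect of the covariate on expected cost, which is how GLM-based per-infant cost differences are usually reported.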
Dorren, H.J.S.
1998-01-01
It is shown that the Korteweg–de Vries (KdV) equation can be transformed into an ordinary linear partial differential equation in the wave number domain. Explicit solutions of the KdV equation can be obtained by subsequently solving this linear differential equation and by applying a cascade of
Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin
2014-03-01
In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated by using the adaptive control method. Different from some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate for the effect of the unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of LMIs, which can be easily solved by the LMI Toolbox in Matlab. A numerical example is given to illustrate the effectiveness of the method. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra, and it is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS improves the full-width-half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS yields about 20 dB sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
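The DAS and DMAS combining rules themselves are short enough to sketch on a toy one-way (photoacoustic-style) geometry; the array layout, pulse shape, and source position below are all invented, and the MV weighting of MVB-DMAS is omitted:

```python
import numpy as np

fs, c = 40e6, 1540.0                        # sampling rate [Hz], speed of sound [m/s]
elems = np.arange(8) * 1e-3                 # 8-element linear array, 1 mm pitch
src = np.array([3.5e-3, 10e-3])             # point source (x, z)

t = np.arange(1024) / fs
tof = np.hypot(elems - src[0], src[1]) / c  # one-way time of flight per element
data = np.exp(-((t[None, :] - tof[:, None]) * fs / 3) ** 2)  # Gaussian pulses

def beamform(point, mode):
    d = np.hypot(elems - point[0], point[1]) / c
    # delayed sample per channel (linear interpolation)
    s = np.array([np.interp(d[i], t, data[i]) for i in range(len(elems))])
    if mode == "DAS":
        return s.sum()
    # DMAS: signed square root of all pairwise channel products
    out = 0.0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            out += np.sign(s[i] * s[j]) * np.sqrt(abs(s[i] * s[j]))
    return out

xs = np.linspace(0, 7e-3, 141)
das = np.array([beamform((x, 10e-3), "DAS") for x in xs])
dmas = np.array([beamform((x, 10e-3), "DMAS") for x in xs])
```

Both lateral profiles peak at the true source position; the pairwise multiplications in DMAS act as a correlation step that suppresses off-peak (incoherent) contributions, which is the sidelobe-reduction mechanism the abstract builds on.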
Directory of Open Access Journals (Sweden)
Kyle A McQuisten
2009-10-01
Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement on which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address the effect that choosing the learning technique, feature set, or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs), and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both the learning technique and the feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework for comparing among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are
Chen, Haiwen
2012-01-01
In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…
Directory of Open Access Journals (Sweden)
Reza Ezzati
2014-08-01
In this paper, we propose the least squares method for computing the positive solution of a non-square fully fuzzy linear system. To this end, we use Kaufmann's arithmetic operations on fuzzy numbers \cite{17}. We first consider the existence of an exact solution using the pseudoinverse; if it does not satisfy the positivity condition, we compute the core of the fuzzy vector and then obtain the right and left spreads of the positive fuzzy vector by introducing a constrained least squares problem. Using our proposed method, a non-square fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.
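A heavily simplified numeric sketch of the core-plus-spreads idea: triangular fuzzy right-hand side (core, left spread, right spread), crisp nonnegative coefficient matrix (a simplification of "fully fuzzy"), pseudoinverse for the core, and nonnegative least squares for the spreads. The matrix and vectors are invented toy values.

```python
import numpy as np
from scipy.optimize import nnls

# Non-square system: 3 equations, 2 unknowns, nonnegative crisp matrix
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, 1.0]])
b_core = np.array([5.0, 10.0, 4.1])
b_left = np.array([1.0, 2.0, 0.9])
b_right = np.array([1.5, 2.5, 1.2])

# Core of the solution via the Moore-Penrose pseudoinverse (least squares)
x_core = np.linalg.pinv(A) @ b_core

# Left and right spreads via nonnegative least squares, so the
# result remains a valid (nonnegative-spread) fuzzy vector
x_left, _ = nnls(A, b_left)
x_right, _ = nnls(A, b_right)
```

Because the least-squares machinery never fails on an overdetermined system, a (approximate) solution always exists, which mirrors the paper's claim for non-square systems.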
Lundahl, P. Johan; Kitts, Catherine C.; Nordén, Bengt
2011-01-01
This article presents a new design of flow-orientation device for the study of bio-macromolecules, including DNA and protein complexes, as well as aggregates such as amyloid fibrils and liposome membranes, using Linear Dichroism (LD) spectroscopy. The design provides a number of technical advantages that should make the device inexpensive to manufacture, easier to use, and more reliable than existing techniques. The degree of orientation achieved is of the same order of magnitude as that of the commonly used concentric-cylinder Couette flow cell; however, since the device exploits a set of flat strain-free quartz plates, a number of problems associated with refraction and birefringence of light are eliminated, increasing the sensitivity and accuracy of measurement. The device provides shear rates similar to those of the Couette cell but is superior in that the shear rate is constant across the gap. Other major advantages of the design are the possibility to change parts and to vary sample volume and path length easily and at low cost. © 2011 The Royal Society of Chemistry.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method. We compare the relevant conditions and the necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from the pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n-bit stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; therefore, the linear complexity is generally given as an estimate. The linearization method, by contrast, calculates from the algorithm of the PRNG, so it can determine the lower bound of the linear complexity.
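For reference, the Berlekamp-Massey algorithm that the linearization method is compared against can be stated compactly; the sketch below computes the linear complexity of a binary sequence over GF(2) and checks it on a maximal-length LFSR with the (standard, primitive) polynomial x^4 + x + 1:

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence s (list of 0/1) over GF(2)."""
    n = len(s)
    c, b = [0] * n, [0] * n      # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between s[i] and the current LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# 20 bits from the maximal-length LFSR with polynomial x^4 + x + 1
s = [0, 0, 0, 1]
for i in range(4, 20):
    s.append(s[i - 1] ^ s[i - 4])
complexity = berlekamp_massey(s)   # 4, the register length
```

Note how the algorithm consumes the output sequence itself, which is exactly the dependence on the initial value that the linearization method avoids.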
Generalized Net Model of the Cognitive and Neural Algorithm for Adaptive Resonance Theory 1
Directory of Open Access Journals (Sweden)
Todor Petkov
2013-12-01
Full Text Available The artificial neural networks are inspired by biological properties of human and animal brains. One type of neural network is called ART [4]. The abbreviation ART stands for Adaptive Resonance Theory, invented by Stephen Grossberg in 1976 [5]. ART represents a family of neural networks. It is a cognitive and neural theory that describes how the brain autonomously learns to categorize, recognize and predict objects and events in a changing world. In this paper we introduce a Generalized Net (GN) model that represents the ART1 neural network learning algorithm [1]. The purpose of this model is to explain when an input vector will be clustered into one of the network's nodes or rejected by all of them. It can also be used for explanation and optimization of the ART1 learning algorithm.
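The cluster-or-reject behaviour the GN model describes can be sketched in a few lines. This is a simplified ART1 (binary inputs, fast learning, a simplified choice function), not the GN model itself; the vigilance value and the example patterns are illustrative:

```python
import numpy as np

def art1_cluster(inputs, rho=0.7, beta=1.0):
    """Simplified ART1 sketch: prototypes are binary templates.
    Categories are tried in order of a choice function; an input joins
    the first category passing the vigilance test
    |I AND P| / |I| >= rho, otherwise a new category is created."""
    prototypes, labels = [], []
    for raw in inputs:
        x = np.asarray(raw, dtype=int)
        order = sorted(range(len(prototypes)),
                       key=lambda k: -np.sum(prototypes[k] & x)
                                     / (beta + prototypes[k].sum()))
        for k in order:
            if np.sum(prototypes[k] & x) / max(x.sum(), 1) >= rho:
                prototypes[k] = prototypes[k] & x   # fast learning: intersection
                labels.append(k)
                break
        else:                                       # rejected everywhere: new node
            prototypes.append(x.copy())
            labels.append(len(prototypes) - 1)
    return labels, prototypes

labels, protos = art1_cluster([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
print(labels)   # [0, 0, 1]: similar patterns share a cluster, the third is rejected
```

Raising `rho` makes the vigilance test stricter, so more inputs are rejected by existing categories and the network creates more clusters.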
Hals, Ingrid K; Bruerberg, Simon Gustafson; Ma, Zuheng; Scholz, Hanne; Björklund, Anneli; Grill, Valdemar
2015-01-01
To provide novel insights on mitochondrial respiration in β-cells and the adaptive effects of hypoxia. Insulin-producing INS-1 832/13 cells were exposed to 18 hours of hypoxia followed by 20-22 hours of re-oxygenation. Mitochondrial respiration was measured by high-resolution respirometry in both intact and permeabilized cells, in the latter after establishing three functional substrate-uncoupler-inhibitor titration (SUIT) protocols. Concomitant measurements included proteins of mitochondrial complexes (Western blotting), ATP and insulin secretion. Intact cells exhibited a high degree of intrinsic uncoupling, comprising about 50% of oxygen consumption in the basal respiratory state. Hypoxia followed by re-oxygenation increased maximal overall respiration. Exploratory experiments in permeabilized cells could not show induction of respiration by malate or pyruvate as reducing substrates, thus glutamate and succinate were used as mitochondrial substrates in SUIT protocols. Permeabilized cells displayed a high capacity for oxidative phosphorylation for both complex I- and II-linked substrates in relation to maximum capacity of electron transfer. Previous hypoxia decreased phosphorylation control of complex I-linked respiration, but not of complex II-linked respiration. Coupling control ratios showed increased coupling efficiency for both complex I- and II-linked substrates in hypoxia-exposed cells. Respiratory rates overall were increased. Previous hypoxia also increased proteins of mitochondrial complexes I and II (Western blotting) in INS-1 cells as well as in rat and human islets. Mitochondrial effects were accompanied by unchanged levels of ATP, increased basal and preserved glucose-induced insulin secretion. Exposure of INS-1 832/13 cells to hypoxia, followed by a re-oxygenation period, increases substrate-stimulated respiratory capacity and coupling efficiency. Such effects are accompanied by up-regulation of mitochondrial complexes, also in pancreatic islets.
Auer, K; Carson, D
2010-01-01
Retention of GPs in the more remote parts of Australia remains an important issue in workforce planning. The Northern Territory of Australia experiences very high rates of staff turnover. This research examined how the process of forming 'place attachment' between GP and practice location might influence prospects for retention. It examines whether GPs use 'adjustment' (short term trade-offs between work and lifestyle ambitions) or 'adaptation' (attempts to change themselves and their environment to fulfil lifestyle ambitions) strategies to cope with the move to new locations. 19 semi-structured interviews were conducted mostly with GPs who had been in the Northern Territory for less than 3 years. Participants were asked about the strategies they used in an attempt to establish place attachment. Strategies could be structural (work related), personal, social or environmental. There were strong structural motivators for GPs to move to the Northern Territory. These factors were seen as sufficiently attractive to permit the setting aside of other lifestyle ambitions for a short period of time. Respondents found the environmental aspects of life in remote areas to be the most satisfying outside work. Social networks were temporary and the need to re-establish previous networks was the primary driver of out migration. GPs primarily use adjustment strategies to temporarily secure their position within their practice community. There were few examples of adaptation strategies that would facilitate a longer term match between the GPs' overall life ambitions and the characteristics of the community. While this suggests that lengths of stay will continue to be short, better adjustment skills might increase the potential for repeat service and limit the volume of unplanned early exits.
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
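The special case is easy to reproduce numerically. The sketch below is an illustrative simulation, not the paper's data: it fits a deliberately misspecified main-terms Poisson working model by Newton-Raphson and compares the treatment coefficient with the marginal log rate ratio. The data-generating model and all constants are assumptions chosen so the true marginal log rate ratio is 0.7:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
A = rng.integers(0, 2, n)                  # randomized binary treatment
Z = rng.normal(size=n)                     # baseline covariate
# the true model has an A*Z interaction, so the main-terms working
# model fitted below is deliberately misspecified
y = rng.poisson(np.exp(0.2 + 0.5 * A + 0.3 * Z + 0.4 * A * Z))

X = np.column_stack([np.ones(n), A, Z])    # main-terms Poisson working model
beta = np.zeros(3)
for _ in range(50):                        # Newton-Raphson for the Poisson MLE
    mu = np.exp(X @ beta)
    step = np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

# treatment coefficient vs. the marginal log rate ratio: per the result
# above, both estimate the same quantity (0.7 here by construction)
marginal_lrr = np.log(y[A == 1].mean() / y[A == 0].mean())
print(beta[1], marginal_lrr)
```

Despite the omitted interaction, the fitted treatment coefficient tracks the marginal log rate ratio, which is the point of the asymptotic unbiasedness result.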
Diffractive generalized phase contrast for adaptive phase imaging and optical security
DEFF Research Database (Denmark)
Palima, Darwin; Glückstad, Jesper
2012-01-01
We analyze the properties of Generalized Phase Contrast (GPC) when the input phase modulation is implemented using diffractive gratings. In GPC applications for patterned illumination, the use of a dynamic diffractive optical element for encoding the GPC input phase allows for on-the-fly optimiza... security applications and can be used to create phase-based information channels for enhanced information security...
Directory of Open Access Journals (Sweden)
Eusebio Eduardo Hernández Martinez
2013-01-01
Full Text Available In robotics, solving the direct kinematics problem (DKP) for parallel robots is very often more difficult and time consuming than for their serial counterparts. The problem is stated as follows: given the joint variables, the Cartesian variables, namely the pose of the mobile platform, should be computed. Most of the time, the DKP requires solving a non-linear system of equations. In addition, given that the system could be non-convex, Newton or Quasi-Newton (Dogleg) based solvers get trapped in local minima. The ability of such solvers to find an adequate solution strongly depends on the starting point. A well-known problem is the selection of such a starting point, which requires a priori information about the neighbouring region of the solution. In order to circumvent this issue, this article proposes an efficient method to select and to generate the starting point based on probabilistic learning. Experiments and discussion are presented to show the method's performance. The method successfully avoids getting trapped in local minima without the need for human intervention, which increases its robustness when compared with a single Dogleg approach. This proposal can be extended to other structures, to any non-linear system of equations, and of course, to non-linear optimization problems.
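The starting-point sensitivity is easy to demonstrate on a toy nonlinear system. Here a circle-parabola intersection stands in for a DKP, and a fixed candidate-start list plays the role of the learned starting-point generator (which is not reproduced here); plain Newton fails outright from the first start because the Jacobian is singular there:

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, maxit=50):
    """Plain Newton iteration; returns (point, converged flag)."""
    x = np.asarray(x0, float)
    for _ in range(maxit):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x, True
        try:
            x = x - np.linalg.solve(jac(x), fx)
        except np.linalg.LinAlgError:   # singular Jacobian: give up on this start
            return x, False
    return x, bool(np.linalg.norm(f(x)) < tol)

# toy stand-in for a DKP: intersect the circle x^2 + y^2 = 4 with y = x^2
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]**2])
jac = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [-2.0*v[0], 1.0]])

# naive multi-start: (0, 0) fails (singular Jacobian), the next start converges
starts = [(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)]
for s in starts:
    sol, ok = newton(f, jac, s)
    if ok:
        break
print(sol, ok)
```

A learned start generator, as in the article, replaces the blind candidate list with starts sampled near the expected solution region.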
Bagherpoor, H M; Salmasi, Farzad R
2015-07-01
In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed for both the SISO and MIMO cases, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters is ceased inside the dead-zone region, which preserves system stability but leaves a residual tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering suitable Lyapunov function candidates. It is shown that the proposed control approach can assure that all the signals of the closed-loop system are bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
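The conventional dead-zone behaviour the paper improves on can be sketched for a scalar plant. Everything here (the plant, gains, reference model and dead-zone width) is an illustrative assumption, and the adaptation law shown is the classical Lyapunov rule with adaptation frozen inside the dead zone, not the paper's reinforced scheme:

```python
import math

# Minimal model-reference adaptive control (MRAC) sketch for a scalar
# plant xdot = a*x + u with unknown a, reference model xmdot = -am*xm + r,
# control u = -khat*x + r, and the Lyapunov adaptation law
# khat_dot = gamma*e*x frozen inside the dead zone |e| < delta.
a, am, gamma, delta = 1.0, 2.0, 5.0, 0.05
dt, T = 1e-3, 20.0
x = xm = khat = 0.0
for k in range(int(T / dt)):
    t = k * dt
    r = math.sin(t)                 # persistently exciting reference
    e = x - xm                      # tracking error
    u = -khat * x + r
    if abs(e) > delta:              # adaptation ceases inside the dead zone
        khat += dt * gamma * e * x
    x += dt * (a * x + u)           # explicit Euler integration
    xm += dt * (-am * xm + r)
print(khat, abs(x - xm))            # khat should approach a + am = 3
```

Because adaptation freezes inside the dead zone, the error settles near the dead-zone level rather than at zero, which is exactly the residual the paper's additive correction term removes.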
Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M.
2016-01-01
Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection...
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
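For context, the generalized Born energy itself is a simple pairwise expression (Still's interpolation formula). The sketch below takes the effective Born radii as given; in the OBC model they would come from the integral prescription with the tuned parameters discussed in the abstract. Units are simplified with Coulomb's constant set to 1:

```python
import numpy as np

def gb_energy(q, pos, born_radii, eps_solvent=78.5):
    """Generalized Born solvation energy with Still's interpolation
    formula, f_GB = sqrt(r^2 + Ri*Rj*exp(-r^2 / (4*Ri*Rj))).
    Born radii are taken as given inputs; Coulomb's constant is 1.
    The double loop includes i == j, i.e. the Born self-energies."""
    pref = -0.5 * (1.0 - 1.0 / eps_solvent)
    E = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            r2 = float(np.sum((pos[i] - pos[j]) ** 2))
            RiRj = born_radii[i] * born_radii[j]
            fgb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
            E += pref * q[i] * q[j] / fgb
    return E

# a single unit charge reduces to the Born-ion formula -0.5*(1 - 1/eps)*q^2/R
print(gb_energy(np.array([1.0]), np.zeros((1, 3)), np.array([2.0])))
```

Forces follow by differentiating this expression with respect to the positions (and, in a full implementation, the position dependence of the Born radii), which is the quantity whose sensitivity to linearization the paper examines.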
Wang, Cheng; Guan, Wei; Wang, J. Y.; Zhong, Bineng; Lai, Xiongming; Chen, Yewang; Xiang, Liang
2018-02-01
To adaptively identify the transient modal parameters for linear weakly damped structures with slow time-varying characteristics under unmeasured stationary random ambient loads, this paper proposes a novel operational modal analysis (OMA) method based on the frozen-in coefficient method and limited memory recursive principal component analysis (LMRPCA). In the modal coordinate, the random vibration response signals of mechanical weakly damped structures can be decomposed into the inner product of modal shapes and modal responses, from which the natural frequencies and damping ratios can be acquired by a single-degree-of-freedom (SDOF) identification approach such as the FFT. Hence, for the OMA method based on principal component analysis (PCA), it becomes crucial to examine the relation between the transformation matrix and the modal shape matrix, to find the association between the principal components (PCs) matrix and the modal response matrix, and to turn the operational modal parameter identification problem into PCA of the stationary random vibration response signals of weakly damped mechanical structures. Based on the theory of "time-freezing", the frozen-in coefficient method, and the assumptions of "short-time invariance" and "quasi-stationarity", the non-stationary random response signals of weakly damped and slowly linear time-varying (LTV) structures can approximately be seen as stationary random response time series of weakly damped linear time-invariant (LTI) structures over a short interval. Thus, the adaptive identification of time-varying operational modal parameters is turned into decomposing the PCs of stationary random vibration response signal subsections of weakly damped mechanical structures after choosing an appropriate limited memory window. Finally, a three-degree-of-freedom (DOF) structure with weakly damped and slowly time-varying mass is presented to illustrate this method of identification. Results show that the LMRPCA
Langenbucher, Frieder
2005-01-01
A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and the initial condition specifying in which compartment(s) the drug is present at the beginning. The generalized solution is the time profile of drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients, and finally the full time profiles for a specified range of time values.
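The polyexponential solution is the matrix-exponential solution of dx/dt = Kx, and the worksheet's matrix operations can be mirrored in a few lines (a sketch using eigendecomposition; the two-compartment rate constants below are illustrative):

```python
import numpy as np

def profiles(K, x0, times):
    """Amounts in every compartment for dx/dt = K @ x via
    eigendecomposition: x(t) = V diag(exp(lam*t)) V^-1 x0.
    Assumes K is diagonalizable (distinct rate constants)."""
    lam, V = np.linalg.eig(K)
    c = np.linalg.solve(V, x0)          # coefficients of the exponential terms
    return np.real(np.array([V @ (c * np.exp(lam * t)) for t in times]))

# one-way chain: gut --ka--> plasma --ke--> eliminated (illustrative constants)
ka, ke = 1.2, 0.3
K = np.array([[-ka, 0.0],
              [ ka, -ke]])
t = np.linspace(0.0, 10.0, 6)
x = profiles(K, np.array([1.0, 0.0]), t)

# the plasma column reproduces the analytic Bateman solution
bateman = ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
print(np.max(np.abs(x[:, 1] - bateman)))
```

The eigenvalues are the exponents and the columns of V weighted by V^-1 x0 give the polyexponential coefficients, exactly the quantities the worksheet tabulates.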
Directory of Open Access Journals (Sweden)
J.D. Quintana
2016-12-01
Full Text Available Over the years, there has been an evolution in the manner in which we perform traditional tasks. Nowadays, almost every simple action that we can think of involves a connection among two or more devices. It is desirable to have a high-quality connection among devices, by using electronic or optical signals, and it is therefore important to have a reliable connection among terminals in the network. However, the transmission of the signal deteriorates as the distance among devices increases. There exists a special piece of equipment that can be deployed in a network, called a regenerator, which is able to restore the signal transmitted through it in order to maintain its quality. Deploying a regenerator in a network is generally expensive, so it is important to minimize the number of regenerators used. In this paper we focus on the Generalized Regenerator Location Problem (GRLP), which seeks the minimum number of regenerators that must be deployed in a network in order to have reliable communication without loss of quality. We present a GRASP metaheuristic to find good solutions for the GRLP. The results obtained by the proposal are compared with the best previous methods for this problem. We conduct an extensive computational experiment with 60 large and challenging instances, with the proposed method emerging as the best-performing one. This is finally supported by non-parametric statistical tests.
Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen
2010-03-01
The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.
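The geometric part of the generalized analysis can be sketched as an object-plane MTF cascade: the detector MTF is demagnified by the magnification m while the focal-spot MTF is scaled by the geometric unsharpness factor (m-1)/m. The Gaussian curves below are assumed stand-ins for the measured 2D MTFs, not the FPD or MAF data:

```python
import numpy as np

def gmtf(f, m, det_sigma=0.5, fs_sigma=0.3):
    """Object-plane generalized MTF sketch for magnification m:
    detector blur referred back by 1/m, focal-spot blur by (m-1)/m.
    Gaussian MTF shapes and sigmas (mm) are illustrative assumptions."""
    mtf_det = np.exp(-2.0 * (np.pi * det_sigma * f / m) ** 2)
    mtf_fs = np.exp(-2.0 * (np.pi * fs_sigma * f * (m - 1.0) / m) ** 2)
    return mtf_det * mtf_fs

f = np.linspace(0.0, 5.0, 11)   # spatial frequency at the object plane
for m in (1.05, 1.5):
    print(m, gmtf(f, m)[:3])
```

The trade-off the paper quantifies is visible here: increasing m relaxes the detector term but inflates the focal-spot term, so the optimal magnification depends on which blur dominates for a given detector.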
Directory of Open Access Journals (Sweden)
Somayyeh Lotfi Noghabi
2012-07-01
Full Text Available Introduction: Epilepsy is a clinical syndrome in which seizures have a tendency to recur. Sodium valproate is the most effective drug in the treatment of all types of generalized seizures. Finding the optimal dosage (the lowest effective dose) of sodium valproate is a real challenge for all neurologists. In this study, a new approach based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) was presented for estimating the optimal dosage of sodium valproate in IGE (Idiopathic Generalized Epilepsy) patients. Methods: 40 patients with Idiopathic Generalized Epilepsy, who were referred to the neurology department of Mashhad University of Medical Sciences between the years 2006-2011, were included in this study. The ANFIS function constructs a Fuzzy Inference System (FIS) whose membership function parameters are tuned (adjusted) using either a back-propagation algorithm alone, or in combination with a least-squares method (hybrid algorithm). In this study, we used the hybrid method for adjusting the parameters. Results: The R-square of the proposed system was 59.8% and the Pearson correlation coefficient was significant (P < 0.05). Although the accuracy of the model was not high, it was good enough to be applied for treating IGE patients with sodium valproate. Discussion: This paper presented a new application of ANFIS for estimating the optimal dosage of sodium valproate in IGE patients. Fuzzy set theory plays an important role in dealing with uncertainty when making decisions in medical applications. Collectively, it seems that ANFIS has a high capacity to be applied in medical sciences, especially neurology.
International Nuclear Information System (INIS)
Steinbrecher, Gyoergy; Weyssow, B.
2004-01-01
The extreme heavy tail and the power-law decay of the turbulent flux correlation observed in hot magnetically confined plasmas are modeled by a system of coupled Langevin equations describing a continuous time linear randomly amplified stochastic process where the amplification factor is driven by a superposition of colored noises which, in a suitable limit, generate a fractional Brownian motion. An exact analytical formula for the power-law tail exponent β is derived. The extremely small value of the heavy tail exponent and the power-law distribution of laminar times also found experimentally are obtained, in a robust manner, for a wide range of input values, as a consequence of the (asymptotic) self-similarity property of the noise spectrum. As a by-product, a new representation of the persistent fractional Brownian motion is obtained
Czech Academy of Sciences Publication Activity Database
Blaheta, Radim
2002-01-01
Roč. 9, 6/7 (2002), s. 525-550 ISSN 1070-5325 Grant - others:INCO Copernicus(XE) KIT977006 Institutional research plan: CEZ:AV0Z3086906 Keywords : elasticity * displacement decomposition Subject RIV: BA - General Mathematics Impact factor: 0.706, year: 2002
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
Directory of Open Access Journals (Sweden)
Pérez-Páramo María
2010-01-01
Full Text Available Abstract Background Generalized anxiety disorder (GAD) is a prevalent mental health condition which is underestimated worldwide. This study carried out the cultural adaptation into Spanish of the 7-item self-administered GAD-7 scale, which is used to identify probable patients with GAD. Methods The adaptation was performed by an expert panel using a conceptual equivalence process, including forward and backward translations in duplicate. Content validity was assessed by interrater agreement. Criterion validity was explored using ROC curve analysis, and sensitivity, specificity, positive predictive value and negative predictive value for different cut-off values were determined. Concurrent validity was also explored using the HAM-A, HADS, and WHO-DAS-II scales. Results The study sample consisted of 212 subjects (106 patients with GAD) with a mean age of 50.38 years (SD = 16.76). Average completion time was 2'30''. No items of the scale were left blank. Floor and ceiling effects were negligible. No patients with GAD had to be assisted to fill in the questionnaire. The scale was shown to be one-dimensional through factor analysis (explained variance = 72%). A cut-off point of 10 showed adequate values of sensitivity (86.8%) and specificity (93.4%), with the AUC being statistically significant (AUC = 0.957-0.985; p < 0.001). Limitations Elderly people, particularly the very old, may need some help to complete the scale. Conclusion After the cultural adaptation process, a Spanish version of the GAD-7 scale was obtained. The validity of its content and the relevance and adequacy of items in the Spanish cultural context were confirmed.
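The cut-off evaluation reported above follows from elementary definitions, sketched here on toy scores (not the study data); sensitivity and specificity use the rule score >= cutoff, and the AUC is the Mann-Whitney probability that a random case outscores a random control:

```python
import numpy as np

def sens_spec(scores, is_case, cutoff):
    """Sensitivity and specificity of the rule `score >= cutoff`."""
    s, c = np.asarray(scores, float), np.asarray(is_case, bool)
    return float(np.mean(s[c] >= cutoff)), float(np.mean(s[~c] < cutoff))

def auc(scores, is_case):
    """AUC as the Mann-Whitney probability that a random case
    outscores a random control (ties count one half)."""
    s, c = np.asarray(scores, float), np.asarray(is_case, bool)
    pos, neg = s[c], s[~c]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

scores = [12, 15, 9, 11, 3, 8, 10, 5]   # toy GAD-7-like scores, not study data
case = [1, 1, 1, 1, 0, 0, 0, 0]
print(sens_spec(scores, case, 10), auc(scores, case))
```

Sweeping the cutoff over all observed scores traces the ROC curve, and the cut-off of 10 in the study is simply the point on that curve balancing sensitivity against specificity.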
Energy Technology Data Exchange (ETDEWEB)
Amini, Nina H. [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); CNRS, Laboratoire des Signaux et Systemes (L2S) CentraleSupelec, Gif-sur-Yvette (France); Miao, Zibo; Pan, Yu; James, Matthew R. [Australian National University, ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Canberra, ACT (Australia); Mabuchi, Hideo [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States)
2015-12-15
The purpose of this paper is to study the problem of generalizing the Belavkin-Kalman filter to the case where the classical measurement signal is replaced by a fully quantum non-commutative output signal. We formulate a least mean squares estimation problem that involves a non-commutative system as the filter processing the non-commutative output signal. We solve this estimation problem within the framework of non-commutative probability. Also, we find the necessary and sufficient conditions which make these non-commutative estimators physically realizable. These conditions are restrictive in practice. (orig.)
International Nuclear Information System (INIS)
Farivar, Faezeh; Aliyari Shoorehdeli, Mahdi; Nekoui, Mohammad Ali; Teshnehlab, Mohammad
2012-01-01
Highlights: ► A systematic procedure for GPS of unknown heavy chaotic gyroscope systems. ► Proposed methods are based on Lyapunov stability theory. ► Without calculating Lyapunov exponents and Eigen values of the Jacobian matrix. ► Capable to extend for a variety of chaotic systems. ► Useful for practical applications in the future. - Abstract: This paper proposes the chaos control and the generalized projective synchronization methods for heavy symmetric gyroscope systems via Gaussian radial basis adaptive variable structure control. Because of the nonlinear terms of the gyroscope system, the system exhibits chaotic motions. Occasionally, the extreme sensitivity to initial states in a system operating in chaotic mode can be very destructive to the system because of unpredictable behavior. In order to improve the performance of a dynamic system or avoid the chaotic phenomena, it is necessary to control a chaotic system with a periodic motion beneficial for working with a particular condition. As chaotic signals are usually broadband and noise like, synchronized chaotic systems can be used as cipher generators for secure communication. This paper presents chaos synchronization of two identical chaotic motions of symmetric gyroscopes. In this paper, the switching surfaces are adopted to ensure the stability of the error dynamics in variable structure control. Using the neural variable structure control technique, control laws are established which guarantees the chaos control and the generalized projective synchronization of unknown gyroscope systems. In the neural variable structure control, Gaussian radial basis functions are utilized to on-line estimate the system dynamic functions. Also, the adaptation laws of the on-line estimator are derived in the sense of Lyapunov function. Thus, the unknown gyro systems can be guaranteed to be asymptotically stable. Also, the proposed method can achieve the control objectives. Numerical simulations are presented to
DEFF Research Database (Denmark)
Jacobsen, Martin; Martinussen, Torben
2016-01-01
Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...
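In the survival-probability setting the paper studies, pseudo-values are leave-one-out jackknife quantities built on the Kaplan-Meier estimator. A minimal sketch (illustrative data; with no censoring the pseudo-values reduce to the indicators 1{T_i > t0}):

```python
import numpy as np

def km_surv(time, event, t0):
    """Kaplan-Meier estimate of S(t0)."""
    s = 1.0
    for t in np.sort(np.unique(time[event & (time <= t0)])):
        at_risk = np.sum(time >= t)
        d = np.sum((time == t) & event)
        s *= 1.0 - d / at_risk
    return s

def pseudo_values(time, event, t0):
    """Jackknife pseudo-observations for S(t0):
    theta_i = n * S_hat - (n - 1) * S_hat^(-i)."""
    time, event = np.asarray(time, float), np.asarray(event, bool)
    n = len(time)
    full = km_surv(time, event, t0)
    mask = np.ones(n, bool)
    out = np.empty(n)
    for i in range(n):
        mask[i] = False
        out[i] = n * full - (n - 1) * km_surv(time[mask], event[mask], t0)
        mask[i] = True
    return out

# uncensored toy data: pseudo-values become the indicators 1{T_i > 2.5}
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(pseudo_values(t, np.ones(5, bool), 2.5))   # [0, 0, 1, 1, 1]
```

These pseudo-values can then be plugged into a standard regression or GEE fit as if they were uncensored responses, which is exactly the step whose large-sample behaviour the paper analyses.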
Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza
2018-03-01
In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
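The regression part of the estimator can be sketched without the AR error model: for fixed degrees of freedom, the EM-type iteration reduces to iteratively reweighted least squares with t-based weights. The data-generating model, nu, and all constants below are illustrative assumptions; the paper additionally estimates nu, the scale, and the AR coefficients:

```python
import numpy as np

def t_irls(X, y, nu=4.0, iters=100):
    """Regression with scaled t-distributed errors and fixed degrees of
    freedom nu, fitted by the EM/IRLS iteration: E-step weights
    w_i = (nu + 1) / (nu + r_i^2 / s2), M-step = weighted least squares.
    A sketch of the flavour in the paper, without the AR error model
    and without estimating nu."""
    n = X.shape[0]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    s2 = float(np.mean((y - X @ beta) ** 2))
    for _ in range(iters):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r ** 2 / s2)          # downweights outliers
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # weighted LS update
        s2 = float(np.sum(w * (y - X @ beta) ** 2) / n)
    return beta, s2

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(3, size=n)  # heavy-tailed noise
beta_t, _ = t_irls(X, y)
print(beta_t)   # close to the true (1, 2) despite the heavy tails
```

Large residuals get weights approaching nu / r^2 instead of 1, which is the mechanism by which the t error model absorbs multiple outliers without a separate detection step.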
Directory of Open Access Journals (Sweden)
Anthony C. Akpanta
2017-11-01
Full Text Available The act of violence against wives is condemnable and attracts various legal penalties globally. This article attempts to find a link between spousal age difference and violence (emotional, physical and sexual) against wives in Nigeria. The results show that wives who are older than their partners are more likely to experience sexual and emotional violence; wives who are the same age as their husbands are more likely to experience sexual violence; wives who are 1-4 years younger than their husbands are more likely to experience physical violence; while wives 5 or more years younger than their husbands are generally less likely to experience any form of violence.
International Nuclear Information System (INIS)
Lipparini, Filippo; Scalmani, Giovanni; Frisch, Michael J.; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Mennucci, Benedetta
2014-01-01
We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute
Energy Technology Data Exchange (ETDEWEB)
Lipparini, Filippo, E-mail: flippari@uni-mainz.de [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Scalmani, Giovanni; Frisch, Michael J. [Gaussian, Inc., 340 Quinnipiac St. Bldg. 40, Wallingford, Connecticut 06492 (United States); Lagardère, Louis [Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Stamm, Benjamin [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Cancès, Eric [Université Paris-Est, CERMICS, Ecole des Ponts and INRIA, 6 and 8 avenue Blaise Pascal, 77455 Marne-la-Vallée Cedex 2 (France); Maday, Yvon [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Institut Universitaire de France, Paris, France and Division of Applied Maths, Brown University, Providence, Rhode Island 02912 (United States); Piquemal, Jean-Philip [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Mennucci, Benedetta [Dipartimento di Chimica e Chimica Industriale, Università di Pisa, Via Risorgimento 35, 56126 Pisa (Italy)
2014-11-14
We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structures, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y
2008-02-18
The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structurally and pharmacologically diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that combining a linear with a non-linear modeling technique yields some improved properties compared to the individual linear and non-linear models and that, when such a combination is appropriate, combinations using MARS as the non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.
Fischer, Sophia; Soyez, Katja; Gurtner, Sebastian
2015-05-01
Research testing the concept of decision-making styles in specific contexts such as health care-related choices is missing. Therefore, we examine the contextuality of Scott and Bruce's (1995) General Decision-Making Style Inventory with respect to patient choice situations. Scott and Bruce's scale was adapted for use as a patient decision-making style inventory. In total, 388 German patients who underwent elective joint surgery responded to a questionnaire about their provider choice. Confirmatory factor analyses within 2 independent samples assessed factorial structure, reliability, and validity of the scale. The final 4-dimensional, 13-item patient decision-making style inventory showed satisfactory psychometric properties. Data analyses supported reliability and construct validity. Besides the intuitive, dependent, and avoidant style, a new subdimension, called "comparative" decision-making style, emerged that originated from the rational dimension of the general model. This research provides evidence for the contextuality of decision-making style to specific choice situations. Using a limited set of indicators, this report proposes the patient decision-making style inventory as a valid and feasible tool to assess patients' decision propensities. © The Author(s) 2015.
Directory of Open Access Journals (Sweden)
Vivek Singh Bawa
2017-06-01
Full Text Available Advanced driver assistance systems (ADAS) have been developed to automate and modify vehicles for safety and a better driving experience. Among all computer vision modules in ADAS, 360-degree surround view generation of the immediate surroundings of the vehicle is very important, due to applications in on-road traffic assistance, parking assistance, etc. This paper presents a novel algorithm for fast and computationally efficient transformation of input fisheye images into the required top-down view. It also presents a generalized framework for generating a top-down view of images captured by cameras with fisheye lenses mounted on vehicles, irrespective of pitch or tilt angle. The proposed approach comprises two major steps: correcting the fisheye lens images to rectilinear images, and generating a top-view perspective of the corrected images. The images captured by the fisheye lens possess barrel distortion, for which a nonlinear and non-iterative correction method is used. Thereafter, homography is used to obtain the top-down view of the corrected images. The paper also aims to produce a distortion-free, wide field of view of the vehicle's surroundings and a camera-perspective-independent top-down view, at minimum computational cost, which is essential given the limited computational power available on vehicles.
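The second step of the pipeline, mapping corrected image coordinates to the top-down view, is a projective transform by a 3×3 homography. A minimal sketch with a hypothetical homography matrix (in practice H would come from ground-plane calibration correspondences after the fisheye correction, e.g. via cv2.getPerspectiveTransform):

```python
import numpy as np

def warp_points(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography H
    (the perspective transform that produces the top-down view)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # back to Cartesian

# Hypothetical homography, for illustration only
H = np.array([[1.0, 0.2,   10.0],
              [0.0, 1.5,    5.0],
              [0.0, 0.001,  1.0]])
corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])
top_view = warp_points(H, corners)
```

Warping a whole image rather than corner points uses the same transform per pixel, typically via an inverse mapping with interpolation.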
Hapugoda, J. C.; Sooriyarachchi, M. R.
2017-09-01
Survival time of patients with a disease and the incidence of that particular disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that diseases which occur rarely could have shorter survival times, or vice versa. Because of this, joint modelling of these two variables can provide more informative and improved results than modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model, to jointly model survival and count data. As the Artificial Neural Network (ANN) has become a powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients of Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. These measures indicate that the GRNN model fits the data better than the GLMM model.
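A GRNN in the sense of Specht is essentially a Gaussian-kernel-weighted average of the training targets, governed by a single smoothing parameter. A toy sketch with synthetic 1-D data (not the Dengue dataset), including the RMSE fit measure the study uses:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Synthetic 1-D data, purely illustrative
X = np.linspace(0.0, 3.0, 20)[:, None]
y = np.sin(X[:, 0])
pred = grnn_predict(X, y, np.array([[1.5]]), sigma=0.2)

# In-sample RMSE, one of the fit measures compared in the study
rmse = np.sqrt(np.mean((grnn_predict(X, y, X, sigma=0.2) - y) ** 2))
```

Unlike iteratively trained networks, the GRNN has no weights to fit; only the smoothing width sigma needs tuning, usually by cross-validation.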
Directory of Open Access Journals (Sweden)
William Alvarez Gaviria
2004-09-01
Full Text Available The origin of climacteric has been a subject of debate. Most opinions agree that it arises exclusively from natural selection. In this paper the author argues that, besides this reason, there is another, even more important one; for him, climacteric is the final response to fatigue, or the third stage of the general adaptation syndrome, just as in elderly people there is a loss of the proliferative capacity of fibroblasts and of the response to insulin. From a genetic point of view, this corresponds to an antagonistic pleiotropy: the genetic program that has made the human adrenergic and corticotropic systems hyperactive has also caused them not to reach senescence intact. High concentrations of stress hormones during youth and adulthood in humans, as compared to chimpanzees, gorillas and orangutans, and the hormonal cascade reactions they elicit are meaningfully related to our most conspicuous illnesses, our genotype/phenotype and, in the long term, to climacteric.
Gabrielli, Alessandro; Loddo, Flavio; Ranieri, Antonio; De Robertis, Giuseppe
2008-10-01
This work is aimed at defining the architecture of a new digital ASIC, namely Slow-Control Adapter (SCA), which will be designed in a commercial 130-nm CMOS technology. This chip will be embedded within a high-speed data acquisition optical link (GBT) to control and monitor the front-end electronics in future high-energy physics experiments. The GBT link provides a transparent transport layer between the SCA and control electronics in the counting room. The proposed SCA supports a variety of common bus protocols to interface with end-user general-purpose electronics. Between the GBT and the SCA a standard 100 Mb/s IEEE-802.3 compatible protocol will be implemented. This standard protocol allows off-line tests of the prototypes using commercial components that support the same standard. The project is justified because embedded applications in modern large HEP experiments require particular care to assure the lowest possible power consumption, still offering the highest reliability demanded by very large particle detectors.
International Nuclear Information System (INIS)
Gabrielli, Alessandro; Loddo, Flavio; Ranieri, Antonio; De Robertis, Giuseppe
2008-01-01
This work is aimed at defining the architecture of a new digital ASIC, namely Slow-Control Adapter (SCA), which will be designed in a commercial 130-nm CMOS technology. This chip will be embedded within a high-speed data acquisition optical link (GBT) to control and monitor the front-end electronics in future high-energy physics experiments. The GBT link provides a transparent transport layer between the SCA and control electronics in the counting room. The proposed SCA supports a variety of common bus protocols to interface with end-user general-purpose electronics. Between the GBT and the SCA a standard 100 Mb/s IEEE-802.3 compatible protocol will be implemented. This standard protocol allows off-line tests of the prototypes using commercial components that support the same standard. The project is justified because embedded applications in modern large HEP experiments require particular care to assure the lowest possible power consumption, still offering the highest reliability demanded by very large particle detectors.
Anikhovskaya, I A; Dvoenosov, V G; Zhdanov, R I; Koubatiev, A A; Mayskiy, I A; Markelova, M M; Meshkov, M V; Oparina, O N; Salakhov, I M; Yakovlev, M Yu
2015-01-01
General adaptation syndrome (GAS), which develops on the basis of the stress phenomenon, is an essential component of the pathogenesis of many diseases and syndromes. However, the pathogenesis of GAS has hitherto been considered exclusively from the endocrinological viewpoint. This relates primarily to the initial phase of GAS, for which psycho-emotional stress (PES) may serve as a clinical model; we studied it using three groups of volunteers. The first consists of 25 students who were awaiting unaccustomed physical activity (17 men) or a stage debut (8 women). The second group consists of 48 children (2-14 years) who were awaiting "planned" surgery. The third group is made up of 80 students (41 women and 39 men) during their first exam. The concentrations of cortisol and endotoxin (ET), the activity of antiendotoxin immunity (AEI) and haemostatic system parameters were determined in the blood serum of the volunteers in various combinations. We found laboratory evidence of PES in 92% of students of the first group, 58% of children of the second and 21% of students of the third group (mostly women). The concentration of ET increased in 13 (52%) volunteers of the first group, with a significant increase of the average values for the whole group (from 0.84 ± 0.06 to 1.19 ± 0.04 EU/ml). In children of the second group, the average concentration of ET increased even more markedly (from 0.42 ± 0.02 to 1.63 ± 0.11 EU/ml), which was accompanied by activation of the hemostasis system. The degree of activation was directly dependent on the level of ET in the general circulation and on the activity of AEI. Examination stress in the third group was accompanied by activation of plasma hemostasis (an increased initial thrombosis rate and a reduced lag period before its onset) in 26% of female students and 15% of male students. We suggest that PES can be used as a clinical model
Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger
2017-09-01
The coefficient of determination R² quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R² for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R² that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of the variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
Simon, Patrick; Schneider, Peter
2017-08-01
In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We have employed general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. The GLAM ellipticity has useful properties for any chosen weight profile: the weighted ellipticity is identical to that of isophotes of elliptical images, and in the absence of noise and pixellation it is always an unbiased estimator of reduced shear. We show that moment-based techniques, adaptive or unweighted, are similar to a model-based approach in the sense that they can be seen as an imperfect fit of an elliptical profile to the image. Due to residuals in the fit, moment-based estimates of ellipticities are prone to underfitting bias when inferred from observed images. The estimation is fundamentally limited mainly by pixellation, which destroys information on the original, pre-seeing image. We give an optimised estimator for the pre-seeing GLAM ellipticity and quantify its bias for noise-free images. To deal with images where pixel noise is prominent, we consider a Bayesian approach to infer GLAM ellipticity where, similar to the noise-free case, the ellipticity posterior can be inconsistent with the true ellipticity if we do not properly account for our ignorance about fit residuals. This underfitting bias, quantified in the paper, does not vary with the overall noise level but changes with the pre-seeing brightness profile and the correlation or heterogeneity of pixel noise over the image. Furthermore, when inferring a constant ellipticity or, more relevantly, constant shear from a source sample with a distribution of intrinsic properties (sizes, centroid positions, intrinsic shapes), an additional, now noise-dependent bias arises towards low signal-to-noise if incorrect prior densities for the intrinsic properties are used. We discuss the origin of this
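A weighted quadrupole-moment ellipticity of the kind discussed can be sketched as follows; the fixed circular weight and single centroid pass are simplifications, since adaptive-moments pipelines iterate both:

```python
import numpy as np

def weighted_ellipticity(img, sigma_w=5.0):
    """Complex ellipticity from weighted quadrupole moments of an image.

    e = (Qxx - Qyy + 2i*Qxy) / (Qxx + Qyy), with a circular Gaussian
    weight centred on the flux-weighted centroid (one pass only).
    """
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    xc = (img * xx).sum() / img.sum()
    yc = (img * yy).sum() / img.sum()
    w = np.exp(-((xx - xc) ** 2 + (yy - yc) ** 2) / (2.0 * sigma_w ** 2))
    wi = w * img
    norm = wi.sum()
    qxx = (wi * (xx - xc) ** 2).sum() / norm
    qyy = (wi * (yy - yc) ** 2).sum() / norm
    qxy = (wi * (xx - xc) * (yy - yc)).sum() / norm
    return (qxx - qyy + 2j * qxy) / (qxx + qyy)

# A circular Gaussian blob has vanishing ellipticity; an elongated one
# yields a real, positive first component
yy, xx = np.mgrid[0:65, 0:65].astype(float)
round_blob = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 4.0 ** 2))
flat_blob = np.exp(-(xx - 32) ** 2 / (2 * 6.0 ** 2)
                   - (yy - 32) ** 2 / (2 * 3.0 ** 2))
e_round = weighted_ellipticity(round_blob)
e_flat = weighted_ellipticity(flat_blob)
```

The choice of weight width is exactly where the underfitting bias described in the abstract enters: the weighted moments fit an elliptical profile imperfectly to a non-elliptical or pixellated image.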
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Energy Technology Data Exchange (ETDEWEB)
Manrique, John Peter O.; Costa, Alessandro M., E-mail: johnp067@usp.br, E-mail: amcosta@usp.br [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil)
2016-07-01
The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to patients undergoing radiation therapy, treatment planning systems (TPS) are used; these make use of convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum to perform the calculation of three-dimensional doses, thereby ensuring better accuracy in the tumor control probabilities while keeping the normal tissue complication probabilities low. In this work we obtained the photon fluence spectrum of the 6 MV X-ray beam of a SIEMENS ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the reconstruction method is a stochastic technique known as generalized simulated annealing (GSA), based on the quasi-equilibrium statistics of Tsallis. To validate the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV energy, using Monte Carlo simulation with the PENELOPE code, and from the PDD we then calculated the beam quality index TPR_{20/10}. (author)
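The flavor of the stochastic reconstruction can be conveyed with a minimal classical simulated-annealing loop on a toy objective standing in for the transmission-curve misfit; the Tsallis-based GSA used in the paper generalizes the visiting distribution and acceptance rule (SciPy's scipy.optimize.dual_annealing implements such a generalized variant):

```python
import numpy as np

def anneal(f, x0, step=0.5, t0=1.0, n_iter=5000, seed=1):
    """Minimal classical simulated annealing: Gaussian proposals with
    Metropolis acceptance under a slowly decaying temperature."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), f(x0)
    best_x, best_f = x.copy(), fx
    for k in range(1, n_iter + 1):
        t = t0 / np.log(k + 1)                 # slow cooling schedule
        cand = x + step * rng.normal(size=x.shape)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# Toy quadratic objective standing in for the misfit between measured and
# modelled transmission curves
obj = lambda v: float(np.sum((v - np.array([1.0, -2.0])) ** 2))
x_best, f_best = anneal(obj, np.zeros(2))
```

In the real application the parameter vector would be the binned photon fluence and the objective the discrepancy with the measured aluminum transmission data.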
Lieberman, Lauren; Lucas, Mark; Jones, Jeffery; Humphreys, Dan; Cody, Ann; Vaughn, Bev; Storms, Tommie
2013-01-01
"Helping General Physical Educators and Adapted Physical Educators Address the Office of Civil Rights Dear Colleague Guidance Letter: Part IV--Sport Groups" provides the following articles: (1) "Sport Programming Offered by Camp Abilities and the United States Association for Blind Athletes" (Lauren Lieberman and Mark…
International Nuclear Information System (INIS)
Sanchez, Richard.
1975-04-01
For one-dimensional geometries, the transport equation with linearly anisotropic scattering can be reduced to a single integral equation; this is a singular-kernel FREDHOLM equation of the second kind. Applying a conventional projective method, that of GALERKIN, to the solution of this equation yields the well-known collision probability algorithm. Piecewise polynomial expansions are used to represent the flux. In the ANILINE code, the flux is assumed to be linear in plane geometry and parabolic in both cylindrical and spherical geometries. An integral relationship was found between the one-dimensional isotropic and anisotropic kernels; this makes it possible to reduce the new matrix elements (arising from the anisotropic kernel) to classic collision probabilities of the isotropic scattering equation. For cylindrical and spherical geometries, an approximate representation of the current was used to avoid an additional numerical integration. Reflective boundary conditions were considered; in plane geometry the reflection is assumed to be specular, while for the other geometries the isotropic reflection hypothesis has been adopted. Further, the ANILINE code can handle an incoming isotropic current. Numerous checks were performed in monokinetic theory. Critical radii and albedos were calculated for homogeneous slabs, cylinders and spheres. For heterogeneous media, the thermal utilization factor obtained by this method was compared with the theoretical result based upon a formula by BENOIST. Finally, ANILINE was incorporated into the multigroup APOLLO code, which made it possible to analyse the MINERVA experimental reactor in transport theory with 99 groups. The ANILINE method is particularly suited to the treatment of strongly anisotropic media with considerable flux gradients. It is also well adapted to the calculation of reflectors and, in general, to the exact analysis of anisotropic effects in large-sized media [fr
Reesman, Jennifer; Gray, Robert; Suskauer, Stacy J; Ferenc, Lisa M; Kossoff, Eric H; Lin, Doris D M; Turin, Elizabeth; Comi, Anne M; Brice, Patrick J; Zabel, T Andrew
2009-06-01
This study sought to identify neurologic correlates of adaptive functioning in individuals with Sturge-Weber syndrome. A total of 18 children, adolescents, and young adults with Sturge-Weber syndrome with brain involvement were recruited from our Sturge-Weber center. All underwent neurologic examination (including review of clinical brain magnetic resonance imaging) and neuropsychological assessment, which included measures of intellectual ability and standardized parent report of adaptive functioning. Overall, Full Scale IQ and ratings of global adaptive functioning were both lower than the population-based norms. Adaptive functioning ratings were related to clinician ratings of cortical abnormality and ratings of neurologic status. Hemiparesis (minimal versus prominent) was the only individual component of the rating scales that differentiated between individuals with nonimpaired and impaired adaptive functioning scores. Information obtained during neurologic examination of children and adolescents with Sturge-Weber syndrome, particularly hemiparetic status, is useful for identifying children who may need additional intervention.
Sidorova, Iu S; Seliaskin, K E; Zorin, S N; Abramova, L S; Mazo, V K
2014-01-01
The impact of 15-day consumption of an enzymatic hydrolyzate of mussel meat as part of a semi-synthetic diet on some stress biomarkers and apoptosis activity in various organs of growing male Wistar rats has been studied. The enzymatic hydrolyzate of mussel meat (EMM) was obtained under pilot conditions using the enzyme preparation "Protozim". The animals of control group 1 (n = 8, initial body weight 179.4 ± 5.9 g) and experimental group 2 (n = 8, 176.3 ± 4.5 g) received a semi-synthetic diet; the animals of experimental group 3 (n = 8, 177.6 ± 4.0 g) received the same semi-synthetic diet in which 50% of the casein was replaced by EMM peptides. On the penultimate day of the experiment, animals of groups 2 and 3 were subjected to stress exposure by electric current applied to their paws (0.4 mA for 8 seconds) and were placed in metabolic cages for the collection of daily urine. On the 15th day of the study, all control and test animals were killed by decapitation under ether anesthesia and necropsied. The content of prostaglandin E2 and β-endorphin in blood plasma was determined by ELISA. The concentration of urine corticosterone was measured by HPLC. DNA damage and the percentage of apoptotic cells (apoptotic index) were calculated in the thymus by single-cell gel electrophoresis (Comet assay). The relative body weight gain of animals treated with EMM differed significantly from that of controls; the results are discussed in relation to the general adaptation syndrome.
Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng
2017-05-30
Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimating equations are used to estimate the unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values, and can capture dynamic effects of time or other variables of interest on both the mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.
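The Cholesky device works because factoring the within-subject covariance as Σ = LLᵀ lets L⁻¹ whiten the correlated residuals. A small numpy sketch with a hypothetical AR(1)-type working covariance:

```python
import numpy as np

# Hypothetical AR(1)-type working covariance for one subject's 4 visits
rho, dim = 0.6, 4
Sigma = rho ** np.abs(np.subtract.outer(np.arange(dim), np.arange(dim)))

# Cholesky factorisation Sigma = L @ L.T
L = np.linalg.cholesky(Sigma)

# Applying inv(L) whitens a within-subject residual vector:
# Cov[inv(L) r] = inv(L) Sigma inv(L).T = I
r = np.array([0.3, -0.1, 0.2, 0.05])   # hypothetical residuals
r_white = np.linalg.solve(L, r)
whitened_cov = np.linalg.solve(L, np.linalg.solve(L, Sigma).T)
```

In the paper's setting the entries of L are themselves modelled (the modified Cholesky parametrisation), so that both mean and covariance can depend on covariates.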
Odille, Fabrice G J; Jónsson, Stefán; Stjernqvist, Susann; Rydén, Tobias; Wärnmark, Kenneth
2007-01-01
A general mathematical model for the characterization of the dynamic (kinetically labile) association of supramolecular assemblies in solution is presented. It is an extension of the equal K (EK) model by the stringent use of linear algebra to allow for the simultaneous presence of an unlimited number of different units in the resulting assemblies. It allows for the analysis of highly complex dynamic equilibrium systems in solution, including both supramolecular homo- and copolymers, without the recourse to extensive approximations, in a field in which other analytical methods are difficult. The derived mathematical methodology makes it possible to analyze dynamic systems such as supramolecular copolymers regarding for instance the degree of polymerization, the distribution of a given monomer in different copolymers as well as its position in an aggregate. It is to date the only general means to characterize weak supramolecular systems. The model was fitted to NMR dilution titration data by using the program Matlab, and a detailed algorithm for the optimization of the different parameters has been developed. The methodology is applied to a case study, a hydrogen-bonded supramolecular system, salen 4 + porphyrin 5. The system is formally a two-component system but in reality a three-component system. This results in a complex dynamic system in which all monomers are associated to each other by hydrogen bonding with different association constants, resulting in homo- and copolymers 4n5m as well as cyclic structures 6 and 7, in addition to free 4 and 5. The system was analyzed by extensive NMR dilution titrations at variable temperatures. All chemical shifts observed at different temperatures were used in the fitting to obtain the ΔH° and ΔS° values producing the best global fit. From the derived general mathematical expressions, system 4 + 5 could be characterized with respect to the above-mentioned parameters.
Energy Technology Data Exchange (ETDEWEB)
Peterson, David; Stofleth, Jerome H.; Saul, Venner W.
2017-07-11
Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.
Directory of Open Access Journals (Sweden)
A. A. Zarei
2016-03-01
Full Text Available Winter dens are one of the important components of the brown bear's (Ursus arctos syriacus) habitat, affecting reproduction and survival. Identification of the factors affecting habitat selection and of suitable denning areas is therefore necessary for the conservation of our largest carnivore. We used Geographically Weighted Logistic Regression (GWLR) and a Generalized Linear Model (GLM) for modeling the suitability of denning habitat in the Kouhkhom region in Fars province. In the present research, 20 dens (presence locations) and 20 caves where signs of bears were not found (absence locations) were used as dependent variables, and six environmental factors were used for each location as independent variables. The results of the GLM showed that distance to settlements, altitude, and distance to water were the most important parameters affecting the suitability of the brown bear's denning habitat. The results of GWLR showed significant local variations in the relationship between the occurrence of brown bear dens and distance to settlements. Based on the results of both models, suitable habitats for denning of the species are impassable areas in the mountains that are inaccessible to humans.
Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W
2013-01-01
A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.
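The nonparametric Fourier-domain idea is that, for a linear time-invariant response, the transfer function is the ratio of output to input spectra, with no parametric HRF model. A noise-free toy sketch (circular convolution, synthetic stimulus and kernel; real analyses average cross-spectra across segments to cope with noise):

```python
import numpy as np

# Synthetic stimulus and a toy hemodynamic-like impulse response
rng = np.random.default_rng(3)
n = 256
x = rng.standard_normal(n)                      # stimulus time series
h = np.exp(-np.arange(n) / 4.0)
h[16:] = 0.0                                    # short causal kernel

# Response by circular convolution: y = h * x
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Nonparametric transfer function estimate H(f) = Y(f)/X(f),
# i.e. the HRF in the Fourier domain, voxel by voxel
H_est = np.fft.fft(y) / np.fft.fft(x)
h_rec = np.fft.ifft(H_est).real                 # recovered impulse response
```

With noise present, one would instead use the ratio of the cross-spectrum of y and x to the power spectrum of x, which reduces to the same expression in the noise-free case.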
Oktem, Figen S; Ozaktas, Haldun M
2010-08-01
Linear canonical transforms (LCTs) form a three-parameter family of integral transforms with wide application in optics. We show that LCT domains correspond to scaled fractional Fourier domains and thus to scaled oblique axes in the space-frequency plane. This allows LCT domains to be labeled and ordered by the corresponding fractional order parameter and provides insight into the evolution of light through an optical system modeled by LCTs. If a set of signals is highly confined to finite intervals in two arbitrary LCT domains, the space-frequency (phase space) support is a parallelogram. The number of degrees of freedom of this set of signals is given by the area of this parallelogram, which is equal to the bicanonical width product but usually smaller than the conventional space-bandwidth product. The bicanonical width product, which is a generalization of the space-bandwidth product, can provide a tighter measure of the actual number of degrees of freedom, and allows us to represent and process signals with fewer samples.
International Nuclear Information System (INIS)
Chan, C.T.; Vanderbilt, D.; Louie, S.G.; Materials and Molecular Research Division, Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720
1986-01-01
We present a general self-consistency procedure formulated in momentum space for electronic structure and total-energy calculations of crystalline solids. It is shown that both the charge density and the change in the Hamiltonian matrix elements in each iteration can be calculated in a straightforward fashion once a set of overlap matrices is computed. The present formulation has the merit of putting the self-consistency problem for different basis sets on the same footing. The scheme is used to extend a first-principles pseudopotential linear-combination-of-Gaussian-orbitals method to full point-by-point self-consistency, without refitting of potentials. It is shown that the set of overlap matrices can be calculated very efficiently if we exploit the translational and space-group symmetries of the system under consideration. This scheme has been applied to study the structural and electronic properties of Si and W, prototypical systems with very different bonding properties. The results agree well with experiment and with other calculations. The fully self-consistent results are compared with those obtained by a variational procedure [J. R. Chelikowsky and S. G. Louie, Phys. Rev. B 29, 3470 (1984)]. We find that the structural properties of bulk Si and W (both systems have no interatomic charge transfer) can be treated accurately by the variational procedure. However, full self-consistency is needed for an accurate description of the band energies.
Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko
2018-04-01
Brain-computer interfaces (BCIs) are a challenge for the development of robotic, prosthetic and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP)-based algorithm to detect event-related desynchronization patterns. Building on well-known previous work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of kernel linear discriminant analysis (KLDA) to the weighted features transfers the data into a higher dimension, where the data are scattered in a more discriminable way. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. BCI Competition III data set IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that combining KLDA with the SVM-GRBF classifier yields improvements of 8.9% in accuracy and 14.19% in robustness, respectively. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and using GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.
Lobos, G; Schnettler, B; Grunert, K G; Adasme, C
2017-01-01
The main objective of this study is to show why perceived resources are a strong predictor of satisfaction with food-related life in Chilean older adults. Design, sampling and participants: A survey was conducted in rural and urban areas in 30 communes of the Maule Region with 785 participants over 60 years of age who live in their own homes. The Satisfaction with Food-related Life (SWFL) scale was used. Generalized linear models (GLM) were used for the regression analysis. The results led to different considerations: First, older adults' perceived levels of resources are a good reflection of their actual levels of resources. Second, the individuals rated the sum of the perceived resources as 'highly important' to explain older adults' satisfaction with food-related life. Third, SWFL was predicted by satisfaction with economic situation, family importance, quantity of domestic household goods and a relative health indicator. Fourth, older adults who believe they have more resources compared to others are more satisfied with their food-related life. Finally, Poisson and binomial logistic models showed that the sum of perceived resources significantly increased the prediction of SWFL. The main conclusion is that perceived personal resources are a strong predictor of SWFL in Chilean older adults.
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
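The central claim — that within-subject task contrasts are tested against the intrasubject variance component only, so intersubject spread does not inflate their standard error — can be checked with a small simulation. All parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_trials = 2000, 20
sigma_between, sigma_within = 5.0, 1.0         # large intersubject spread
effect = 0.3                                   # true task-related difference

subj = rng.normal(0.0, sigma_between, n_subj)  # random subject offsets
a = subj[:, None] + rng.normal(0.0, sigma_within, (n_subj, n_trials))           # condition A
b = subj[:, None] + effect + rng.normal(0.0, sigma_within, (n_subj, n_trials))  # condition B

# per-subject contrast: the subject offset cancels exactly,
# leaving only intrasubject (trial-level) noise
d = b.mean(axis=1) - a.mean(axis=1)
print(d.mean(), d.var(ddof=1), 2 * sigma_within**2 / n_trials)
```

The variance of the contrast matches 2·sigma_within²/n_trials and contains no sigma_between term, which is why separating the variance components yields the power gain the abstract describes.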
Stochl, Jan; Böhnke, Jan R; Pickett, Kate E; Croudace, Tim J
2016-05-20
Recent developments in psychometric modeling and technology allow pooling well-validated items from existing instruments into larger item banks and their deployment through methods of computerized adaptive testing (CAT). Use of item response theory-based bifactor methods and integrative data analysis overcomes barriers in cross-instrument comparison. This paper presents the joint calibration of an item bank for researchers keen to investigate population variations in general psychological distress (GPD). Multidimensional item response theory was used on existing health survey data from the Scottish Health Education Population Survey (n = 766) to calibrate an item bank consisting of pooled items from the short common mental disorder screen (GHQ-12) and the Affectometer-2 (a measure of "general happiness"). Computer simulation was used to evaluate usefulness and efficacy of its adaptive administration. A bifactor model capturing variation across a continuum of population distress (while controlling for artefacts due to item wording) was supported. The numbers of items for different required reliabilities in adaptive administration demonstrated promising efficacy of the proposed item bank. Psychometric modeling of the common dimension captured by more than one instrument offers the potential of adaptive testing for GPD using individually sequenced combinations of existing survey items. The potential for linking other item sets with alternative candidate measures of positive mental health is discussed since an optimal item bank may require even more items than these.
Quaternion Linear Canonical Transform Application
Bahri, Mawardi
2015-01-01
The quaternion linear canonical transform (QLCT) is a generalization of the classical linear canonical transform (LCT) using quaternion algebra. The focus of this paper is to introduce an application of the QLCT to the study of generalized swept-frequency filters.
Solution of linear ill-posed problems using overcomplete dictionaries
Pensky, Marianna
2016-01-01
In the present paper we consider the application of overcomplete dictionaries to the solution of general ill-posed linear inverse problems. Construction of an adaptive optimal solution for such problems usually relies either on a singular value decomposition or on representation of the solution via an orthonormal basis. The shortcoming of both approaches lies in the fact that, in many situations, neither the eigenbasis of the linear operator nor a standard orthonormal basis constitutes an appropriate co...
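One common route to dictionary-based solutions is to seek a sparse coefficient vector over a redundant dictionary. The sketch below uses plain iterative soft-thresholding (ISTA) on an invented random dictionary — a generic sparse-recovery baseline, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 64, 128, 4                            # overcomplete: more atoms than data
D = rng.standard_normal((n, p)) / np.sqrt(n)    # invented random dictionary
theta = np.zeros(p)
theta[rng.choice(p, size=k, replace=False)] = 3.0
y = D @ theta                                   # noiseless observations

# ISTA for min_t 0.5*||y - D t||^2 + lam*||t||_1
lam = 0.01
L = np.linalg.norm(D, 2) ** 2                   # Lipschitz constant of the gradient
t = np.zeros(p)
for _ in range(10000):
    g = t + D.T @ (y - D @ t) / L               # gradient step on the quadratic term
    t = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft-threshold

print(np.linalg.norm(D @ t - y))                # small residual from a sparse t
```

Even though D has more columns than rows (so the linear system is underdetermined), the l1 penalty selects a sparse representation that reproduces the data.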
International Nuclear Information System (INIS)
Dai, Hao; Si, Gangquan; Jia, Lixin; Zhang, Yanbin
2013-01-01
This paper investigates generalized function matrix projective lag synchronization between fractional-order and integer-order complex networks with delayed coupling, non-identical topological structures and different dimensions. Based on Lyapunov stability theory, generalized function matrix projective lag synchronization criteria are derived by using the adaptive control method. In addition, the three-dimensional fractional-order chaotic system and the four-dimensional integer-order hyperchaotic system as the nodes of the drive and the response networks, respectively, are analyzed in detail, and numerical simulation results are presented to illustrate the effectiveness of the theoretical results. (paper)
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Schüle, Steffen Andreas; Gabriel, Katharina M A; Bolte, Gabriele
2017-06-01
The environmental justice framework holds that, besides environmental burdens, resources too may be socially unequally distributed, both at the individual and at the neighbourhood level. This ecological study investigated whether neighbourhood socioeconomic position (SEP) was associated with neighbourhood public green space availability in a large German city with more than 1 million inhabitants. Two measures of green space availability were defined. First, the percentage of green space within neighbourhoods was calculated, additionally considering various buffers around the boundaries. Second, the percentage of green space was calculated based on various radii around the neighbourhood centroid. An index of neighbourhood SEP was calculated with principal component analysis. Log-gamma regression, from the family of generalized linear models, was applied in order to account for the non-normal distribution of the response variable. All models were adjusted for population density. Low neighbourhood SEP was associated with decreasing neighbourhood green space availability, including 200 m up to 1000 m buffers around the neighbourhood boundaries. Low neighbourhood SEP was also associated with decreasing green space availability based on catchment areas measured from neighbourhood centroids with different radii (1000 m up to 3000 m). With increasing radius, the strength of the associations decreased. Socially unequally distributed green space may amplify environmental health inequalities in an urban context. Thus, the identification of vulnerable neighbourhoods and population groups plays an important role for epidemiological research and healthy city planning. As a methodological aspect, log-gamma regression offers an adequate parametric modelling strategy for positively distributed environmental variables. Copyright © 2017 Elsevier GmbH. All rights reserved.
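Log-gamma regression — a GLM with Gamma errors and a log link — suits positive, right-skewed outcomes such as percentage green space. A minimal numpy sketch with invented data follows; for the Gamma family with log link the IRLS working weights are constant, so each iteration reduces to ordinary least squares on the working response:

```python
import numpy as np

def fit_gamma_log_glm(x, y, n_iter=50):
    """Gamma GLM with log link by IRLS; for this link the working weights are 1."""
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())                 # start at the intercept-only fit
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu                # working response
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta

# invented data: positive, right-skewed "green-space share" vs. an SEP index
rng = np.random.default_rng(3)
sep = rng.uniform(-2.0, 2.0, 500)
mu = np.exp(1.0 + 0.5 * sep)                   # true log-linear mean
y = rng.gamma(shape=5.0, scale=mu / 5.0)       # Gamma noise with mean mu

beta = fit_gamma_log_glm(sep, y)
print(beta)                                    # close to the true [1.0, 0.5]
```

Exponentiated coefficients are then interpretable as multiplicative effects on the mean of the positive response, which is what makes this family attractive for the kind of variable the study analyses.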
Lin, Zi-Jing; Li, Lin; Cazzell, Marry; Liu, Hanli
2013-03-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive imaging technique that measures hemodynamic changes reflecting brain activity. Diffuse optical tomography (DOT), a variant of fNIRS with multi-channel NIRS measurements, has demonstrated the capability of three-dimensional (3D) reconstruction of hemodynamic changes due to brain activity. The conventional method of DOT image analysis defines brain activation by a paired t-test between two states, such as resting state versus task state. However, this has limitations because the selection of the activation and post-activation periods is relatively subjective. General linear model (GLM)-based analysis can overcome this limitation. In this study, we combine 3D DOT image reconstruction with GLM-based (i.e., voxel-wise GLM) analysis to investigate the brain activity associated with risk decision-making. Risk decision-making is an important cognitive process and thus an essential topic in neuroscience. The balloon analogue risk task (BART) is a valid experimental model and has been commonly used in behavioral measures to assess human risk-taking actions and tendencies when facing risk. We utilized the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making. Voxel-wise GLM analysis was performed on 18 human participants (10 males and 8 females). In this work, we wish to demonstrate the feasibility of using voxel-wise GLM analysis to image and study cognitive functions in response to risk decision-making by DOT. Results show significant changes in the dorsolateral prefrontal cortex (DLPFC) during the active choice mode and a different hemodynamic pattern between genders, in good agreement with published functional magnetic resonance imaging (fMRI) and fNIRS studies.
Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy
2016-06-01
Crash data can often be characterized by over-dispersion, heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers a better performance than the NB model once data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail, but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large amount of zeros. In addition to a greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C
2008-01-01
As functional magnetic resonance imaging (fMRI) becomes widely used, the demand for evaluation of fMRI processing pipelines and validation of fMRI analysis results is increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability, and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data, based on prediction (classification) accuracy and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, such as FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M Pilar
2016-01-01
Socio-economic factors are of key importance during all phases of wildfire management, including prevention, suppression and restoration. However, modeling these factors at the spatial and temporal scales appropriate for understanding fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site is a good example of how human activities play a key role in wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine-learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and in the 2000s, in order to identify changes between the two periods in the socio-economic drivers affecting wildfire occurrence. GLM bases its estimation on wildfire presence-absence observations, whereas Maxent uses wildfire presence only. According to indicators such as sensitivity and commission error, Maxent outperformed GLM in both periods: it achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s, whereas GLM obtained 23.33%, 64.97%, 9.41% and 18.34%, respectively. However, GLM performed more steadily than Maxent in terms of overall fit. Both models explained wildfires from predictors such as population density and the Wildland-Urban Interface (WUI), but differed in their relative contributions. As a result of urban sprawl and the abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas the influence of the Forest-Grassland Interface (FGI) decreased. This study demonstrates that the human component can be modelled with a spatio-temporal dimension so as to integrate it into wildfire risk assessment.
Willis, Thomas A; Hartley, Suzanne; Glidewell, Liz; Farrin, Amanda J; Lawton, Rebecca; McEachan, Rosemary R C; Ingleson, Emma; Heudtlass, Peter; Collinson, Michelle; Clamp, Susan; Hunter, Cheryl; Ward, Vicky; Hulme, Claire; Meads, David; Bregantini, Daniele; Carder, Paul; Foy, Robbie
2016-02-29
There are recognised gaps between evidence and practice in general practice, a setting which provides particular challenges for implementation. We earlier screened clinical guideline recommendations to derive a set of 'high impact' indicators based upon criteria including potential for significant patient benefit, scope for improved practice and amenability to measurement using routinely collected data. We aim to evaluate the effectiveness and cost-effectiveness of a multifaceted, adaptable intervention package to implement four targeted, high impact recommendations in general practice. The research programme Action to Support Practice Implement Research Evidence (ASPIRE) includes a pair of pragmatic cluster-randomised trials which use a balanced incomplete block design. Clusters are general practices in West Yorkshire, United Kingdom (UK), recruited using an 'opt-out' recruitment process. The intervention package adapted to each recommendation includes combinations of audit and feedback, educational outreach visits and computerised prompts with embedded behaviour change techniques selected on the basis of identified needs and barriers to change. In trial 1, practices are randomised to adapted interventions targeting either diabetes control or risky prescribing and those in trial 2 to adapted interventions targeting either blood pressure control in patients at risk of cardiovascular events or anticoagulation in atrial fibrillation. The respective primary endpoints comprise achievement of all recommended target levels of haemoglobin A1c (HbA1c), blood pressure and cholesterol in patients with type 2 diabetes, a composite indicator of risky prescribing, achievement of recommended blood pressure targets for specific patient groups and anticoagulation prescribing in patients with atrial fibrillation. We are also randomising practices to a fifth, non-intervention control group to further assess Hawthorne effects. Outcomes will be assessed using routinely collected data
Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin; Zhdanov, Michael S.
2017-12-01
The induced polarization (IP) method has been widely used in geophysical exploration to identify chargeable targets such as mineral deposits. Inversion of IP data requires modeling the IP response of 3D dispersive conductive structures. We have developed an edge-based finite-element time-domain (FETD) modeling method to simulate the electromagnetic (EM) fields in a 3D dispersive medium. We solve the vector Helmholtz equation for the total electric field using the edge-based finite-element method with an unstructured tetrahedral mesh. We adopt the backward Euler method, which is unconditionally stable, with semi-adaptive time stepping for the time-domain discretization. We use a direct solver based on sparse LU decomposition to solve the system of equations. We consider the Cole-Cole model in order to take into account frequency-dependent conductivity dispersion. The Cole-Cole conductivity model in the frequency domain is expanded using a truncated Padé series, with adaptive selection of the center frequency of the series for early and late times. This approach can significantly increase the accuracy of FETD modeling.
Energy Technology Data Exchange (ETDEWEB)
Szadkowski, Zbigniew [University of Lodz, Department of Physics and Applied Informatics, 90-236 Lodz, (Poland)
2015-07-01
We present a new approach to the filtering of radio-frequency interference (RFI) in the Auger Engineering Radio Array (AERA), which studies the electromagnetic part of extensive air showers. The radio stations can observe radio signals caused by coherent emission due to geomagnetic radiation and charge-excess processes. AERA observes the frequency band from 30 to 80 MHz. This range is highly contaminated by human-made RFI. In order to improve the signal-to-noise ratio, RFI filters are used in AERA to suppress this contamination. The first filter used by AERA was a median filter based on the fast Fourier transform (FFT) technique. The second, currently in use, is an infinite impulse response (IIR) notch filter. The proposed new filter is a finite impulse response (FIR) filter based on linear prediction (LP). A periodic contamination hidden in the registered (ADC-digitized) signal can be extracted and then subtracted to clean the signal. The FIR filter requires the calculation of n = 32, 64 or even 128 coefficients (depending on the required speed or accuracy) by solving n linear equations whose coefficients form the Toeplitz covariance matrix. This system can be solved by the Levinson recursion, which is much faster than Gaussian elimination. The filter has already been tested in real AERA radio stations on the Argentinean pampas with very successful results. The linear equations were solved either in the virtual soft-core NIOS processor (implemented in the FPGA chip as a net of logic elements) or in an external Voipac PXA270M ARM processor. The NIOS processor is relatively slow (50 MHz internal clock), and calculations performed in an external processor consume a significant amount of time for data exchange between the FPGA and the processor. Tests showed very good efficiency of RFI suppression for stationary (long-term) contamination. However, we observed short-time contaminations, which could not be suppressed either by the
Burgin, G. H.; Fogel, L. J.; Phelps, J. P.
1975-01-01
A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate logic for selecting and executing air combat maneuvers together with performance models of specific fighter aircraft. In the batch-processing version, the flight paths of two aircraft engaged in interactive aerial combat, both controlled by the same logic, are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.
Energy Technology Data Exchange (ETDEWEB)
Szadkowski, Zbigniew, E-mail: zszadkow@kfd2.phys.uni.lodz.pl [University of Lodz, Department of Physics and Applied Informatics (Poland); Fraenkel, E.D. [Kernfysisch Versneller Instituut of the University of Groningen, Groningen (Netherlands); Glas, Dariusz; Legumina, Remigiusz [University of Lodz, Department of Physics and Applied Informatics (Poland)
2013-12-21
The electromagnetic part of an extensive air shower developing in the atmosphere provides significant information complementary to that obtained by water Cherenkov detectors which are predominantly sensitive to the muonic content of an air shower at ground. The emissions can be observed in the frequency band between 10 and 100 MHz. However, this frequency range is significantly contaminated by narrow-band RFI and other human-made distortions. The Auger Engineering Radio Array currently suppresses the RFI by multiple time-to-frequency domain conversions using an FFT procedure as well as by a set of manually chosen IIR notch filters in the time-domain. An alternative approach developed in this paper is an adaptive FIR filter based on linear prediction (LP). The coefficients for the linear predictor are dynamically refreshed and calculated in the virtual NIOS processor. The radio detector is an autonomous system installed on the Argentinean pampas and supplied from a solar panel. Powerful calculation capacity inside the FPGA is a factor. Power consumption versus the degree of effectiveness of the calculation inside the FPGA is a figure of merit to be minimized. Results show that the RFI contamination can be significantly suppressed by the LP FIR filter for 64 or less stages. -- Highlights: • We propose an adaptive method using linear prediction for periodic RFI suppression. • Requirements are the detection of short transient signals powered by solar panels. • The RFI is significantly suppressed by ∼70%, even in a very contaminated environment. • This method consumes less energy than the current method based on FFT used in AERA. • Distortion of the short transient signals is negligible.
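The LP coefficients in this approach come from Toeplitz autocorrelation normal equations, which the Levinson(-Durbin) recursion solves in O(n²) rather than the O(n³) of general elimination. A minimal numpy sketch on an invented trace (not AERA data): two strong sinusoidal RFI tones plus a short transient; predicting each sample from the preceding ones and subtracting the prediction removes the periodic contamination while the unpredictable transient survives:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for LP coefficients a[1..order]."""
    a = np.zeros(order)
    e = r[0]                                            # prediction error power
    for i in range(order):
        k = (r[i + 1] - a[:i] @ r[1:i + 1][::-1]) / e   # reflection coefficient
        a[:i] = a[:i] - k * a[:i][::-1]
        a[i] = k
        e *= 1.0 - k * k
    return a, e

# invented trace: two strong RFI tones + weak noise + a short transient "pulse"
n = 2048
ts = np.arange(n)
rng = np.random.default_rng(4)
rfi = 5.0 * np.sin(2 * np.pi * 0.07 * ts) + 3.0 * np.sin(2 * np.pi * 0.13 * ts)
x = rfi + 0.1 * rng.standard_normal(n)
x[1000:1004] += [2.0, 4.0, -3.0, 1.0]                   # the signal we want to keep

order = 32
r = np.array([x[: n - m] @ x[m:] for m in range(order + 1)]) / n  # autocorrelation
a, _ = levinson_durbin(r, order)

# predict each sample from the 'order' previous ones; subtract the prediction
pred = np.zeros(n)
for m in range(order, n):
    pred[m] = a @ x[m - order : m][::-1]                # x[m-1], ..., x[m-order]
clean = x - pred
print(x[200:1000].var(), clean[200:1000].var())         # periodic RFI largely removed
```

The per-iteration update cost and the fixed coefficient count are what make this tractable in a power-constrained FPGA soft-core, as the abstract emphasizes.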
Directory of Open Access Journals (Sweden)
Qiang Fu
2018-05-01
The potential influence of natural variations in the climate system on global warming can change the hydrological cycle and threaten current water management strategies. A simulation-based linear fractional programming (SLFP) model, which integrates a runoff simulation model (RSM) into a linear fractional programming (LFP) framework, is developed for optimal water resource planning. The SLFP model has multiple objectives, such as benefit maximization and water supply minimization, balancing water conflicts among the various water demand sectors and addressing the complexities of the water resource allocation system. Lingo and Excel programming solutions were used to solve the model. Water resources in the main-stream basin of the Songhua River are allocated to 4 water demand sectors in 8 regions over two planning periods under different scenarios. Results show that the increase or decrease of water supply to the domestic sector is related to the change in population density in different regions in different target years. In 2030, the water allocation to the industrial sector decreased by 1.03–3.52% compared with that in 2020, while the water allocation to the environmental sector increased by 0.12–1.29%. Agricultural water supply accounts for 54.79–77.68% of the total water supply in different regions. These changes in water resource allocation for the various sectors were affected by the different scenarios in 2020; however, water resource allocation for each sector was relatively stable across scenarios in 2030. These results suggest that the developed SLFP model can help to improve the adjustment of the water use structure and water utilization efficiency.
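A linear fractional program — a ratio objective over linear constraints, as in the benefit-per-unit-supply objectives here — can be reduced to an ordinary LP by the Charnes-Cooper substitution. A toy sketch (this tiny problem is invented, not the SLFP water model), solved with scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative LFP:
#   maximize (3*x1 + 2*x2 + 1) / (x1 + x2 + 2)
#   subject to x1 + x2 <= 4,  x1, x2 >= 0
# Charnes-Cooper: with y = t*x and t = 1/(x1 + x2 + 2) > 0 this becomes an LP:
#   maximize 3*y1 + 2*y2 + t
#   s.t.  y1 + y2 - 4*t <= 0,   y1 + y2 + 2*t = 1,   y, t >= 0
res = linprog(
    c=[-3.0, -2.0, -1.0],                  # linprog minimizes, so negate
    A_ub=[[1.0, 1.0, -4.0]], b_ub=[0.0],
    A_eq=[[1.0, 1.0, 2.0]], b_eq=[1.0],
    bounds=[(0, None)] * 3,
)
y1, y2, t = res.x
x = np.array([y1, y2]) / t                 # recover the original variables
print(x, (3 * x[0] + 2 * x[1] + 1) / (x[0] + x[1] + 2))   # optimum at x = (4, 0)
```

The same transformation underlies spreadsheet/Lingo solutions of LFP models: the fractional objective is handled once, after which any LP solver applies.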
Directory of Open Access Journals (Sweden)
Irina-Alina Preda
2008-11-01
In this article, we analyze the causes that have led to the improvement of the Romanian general accounting plan according to the Activity-Based Costing (ABC) method. We explain the advantages presented by the dissociated organization of management accounting, in contrast with the tabular-statistical form. The article also describes the methodological steps to be taken in the process of recording book entries according to the Activity-Based Costing (ABC) method in Romania.
Adaptive H∞ Chaos Anti-synchronization
International Nuclear Information System (INIS)
Ahn, Choon Ki
2010-01-01
A new adaptive H∞ anti-synchronization (AHAS) method is proposed for chaotic systems in the presence of unknown parameters and external disturbances. Based on Lyapunov theory and a linear matrix inequality formulation, an AHAS controller with adaptive laws for the unknown parameters is derived that not only guarantees adaptive anti-synchronization but also reduces the effect of external disturbances to an H∞ norm constraint. As an application of the proposed AHAS method, the H∞ anti-synchronization problem for Genesio–Tesi chaotic systems is investigated.
Carlson, James E.
2014-01-01
Many aspects of the geometry of linear statistical models and least squares estimation are well known. Discussions of the geometry may be found in many sources. Some aspects of the geometry relating to the partitioning of variation that can be explained using a little-known theorem of Pappus and have not been discussed previously are the topic of…
Carr, Joseph
1996-01-01
The linear IC market is large and growing, as is the demand for well-trained technicians and engineers who understand how these devices work and how to apply them. Linear Integrated Circuits provides in-depth coverage of the devices and their operation, but not at the expense of practical applications in which linear devices figure prominently. This book is written for a wide readership, from FE and first-degree students to hobbyists and professionals. Chapter 1 offers a general introduction that will provide students with the foundations of linear IC technology. From chapter 2 onwards…
International Nuclear Information System (INIS)
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.4 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
International Nuclear Information System (INIS)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main aspects of linac beam dynamics.
Eied, A. A.
2018-05-01
In this paper, the linear entropy and the collapse-revival phenomenon through the relation ( -{\bar{n}}) are investigated in a system of an N-configuration four-level atom interacting with a single-mode field, with additional nonlinearities in both the field and an intensity-dependent atom-field coupling functional. A factorization of the initial density operator is assumed, with the field initially in a squeezed coherent state and the atom initially in its uppermost excited state. The dynamical behavior of the linear entropy and the time evolution of ( -{\bar{n}}) are analyzed. In particular, the effects of the mean photon number, detuning, a Kerr-like medium, and the intensity-dependent coupling functional on the entropy and on the evolution of ( -{\bar{n}}) are examined.
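The quantity being tracked can be illustrated with a deliberately simpler model than the paper's: a two-level atom in the plain resonant Jaynes-Cummings model, with a coherent (not squeezed-coherent) initial field and no Kerr or intensity-dependent coupling terms. All parameter values are invented. The linear entropy S = 1 - Tr(ρ²) of the reduced atomic state measures the atom-field entanglement:

```python
import numpy as np
from scipy.special import gammaln

# Two-level atom coupled to one field mode; resonant Jaynes-Cummings
# interaction H = g (sigma^- a^dag + sigma^+ a), truncated at N photons.
N, g, nbar = 30, 1.0, 4.0

a_dag = np.diag(np.sqrt(np.arange(1, N)), -1)   # creation operator (N x N)
sm = np.array([[0.0, 0.0], [1.0, 0.0]])         # |g><e| in the basis (|e>, |g>)
H = g * (np.kron(sm, a_dag) + np.kron(sm.T, a_dag.T))

# Initial state: atom excited, field in a coherent state |alpha>, alpha = sqrt(nbar)
n = np.arange(N)
alpha = np.sqrt(nbar)
field = np.exp(-alpha ** 2 / 2 + n * np.log(alpha) - gammaln(n + 1) / 2)
psi0 = np.kron(np.array([1.0, 0.0]), field)

w, V = np.linalg.eigh(H)                        # exact evolution via the eigenbasis

def linear_entropy(t):
    psi = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))
    rho = np.outer(psi, psi.conj())
    rho_atom = rho.reshape(2, N, 2, N).trace(axis1=1, axis2=3)  # trace out the field
    return 1.0 - np.real(np.trace(rho_atom @ rho_atom))

S0, S1 = linear_entropy(0.0), linear_entropy(2.0)
print(S0, S1)
```

The initial product state has zero linear entropy; once the interaction entangles atom and field, S rises toward its two-level maximum of 1/2.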
DEFF Research Database (Denmark)
Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide
The overall shape of a molecular energy surface can be very different for different molecules and different vibrational coordinates. This means that the fit-basis functions used to generate an analytic representation of a potential will be met with different requirements. It is therefore worthwhile… single point calculations when constructing the molecular potential. We therefore present a uniform framework that can handle general fit-basis functions of any type, which are specified on input. This framework is implemented to suit the black-box nature of the ADGA, in order to avoid arbitrary choices… This results in a decreased number of single point calculations required during the potential construction. Especially the Morse-like fit-basis functions are of interest when combined with rectilinear hybrid optimized and localized coordinates (HOLCs), which can be generated as orthogonal transformations…
Linear ubiquitination in immunity.
Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning
2015-07-01
Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms by which linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types. © 2015 The Authors. Immunological Reviews Published by John Wiley & Sons Ltd.
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be useful both to students of mathematics and to those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understanding…
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exercises.
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
International Nuclear Information System (INIS)
Kachelriess, Marc; Watzke, Oliver; Kalender, Willi A.
2001-01-01
In modern computed tomography (CT) there is a strong desire to reduce patient dose and/or to improve image quality by increasing spatial resolution and decreasing image noise. These are conflicting demands, since increasing resolution at a constant noise level, or decreasing noise at a constant resolution level, implies a higher demand on x-ray power and an increase in patient dose. X-ray tube power is limited for technical reasons. We therefore developed a generalized multi-dimensional adaptive filtering approach that applies nonlinear filters in up to three dimensions in the raw data domain. This new method differs from approaches in the literature since our nonlinear filters are applied not only in the detector row direction but also in the view and in the z-direction. This true three-dimensional filtering improves the quantum statistics of a measured projection value in proportion to the third power of the filter size. Resolution tradeoffs are shared among these three dimensions and thus are considerably smaller than for one-dimensional smoothing approaches. Patient data from spiral and sequential single- and multi-slice CT scans, as well as simulated spiral cone-beam data, were processed to evaluate the new approach. Image quality was assessed by evaluating difference images, by measuring the image noise and the noise reduction, and by calculating the image resolution using point spread functions. The use of generalized adaptive filters helps to reduce image noise or, alternatively, patient dose. Image noise structures, typically along the direction of the highest attenuation, are effectively reduced. Noise reduction values of typically 30%-60% can be achieved in noncylindrical body regions like the shoulder. The loss in image resolution remains below 5% for all cases. In addition, the new method has great potential to reduce metal artifacts, e.g., in the hip region.
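The core idea (smooth the raw projection data only where the attenuation, and hence the quantum noise, is high) can be sketched in one dimension. This is not the authors' method, which works in up to three dimensions with different nonlinear weights; the profile, photon counts, threshold and kernel below are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic attenuation profile and Poisson photon counts for one projection.
n = 400
clean = 2.0 + 2.0 * np.exp(-((np.arange(n) - 200) / 40.0) ** 2)
counts = rng.poisson(1e4 * np.exp(-clean))          # transmitted photons
proj = -np.log(np.maximum(counts, 1) / 1e4)         # measured projection values

def adaptive_filter(p, threshold=3.0, width=2.0):
    """Smooth only samples above an attenuation threshold (noisy samples);
    low-attenuation samples are left untouched to preserve resolution."""
    kernel = np.exp(-0.5 * (np.arange(-4, 5) / width) ** 2)
    kernel /= kernel.sum()
    smooth = np.convolve(p, kernel, mode="same")
    out = p.copy()
    mask = p > threshold
    out[mask] = smooth[mask]
    return out

filtered = adaptive_filter(proj)
high = clean > 3.0                                  # high-attenuation region
noise_before = np.std((proj - clean)[high])
noise_after = np.std((filtered - clean)[high])
print(noise_before, noise_after)
```

Samples below the threshold are returned bit-for-bit unchanged, while the noise in the high-attenuation region drops substantially, mirroring the paper's "reduce noise where the photon statistics are worst" strategy.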
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Nestler, Eric J
2016-08-15
In 1991 we demonstrated that chronic morphine exposure increased levels of adenylyl cyclase and protein kinase A (PKA) in several regions of the rat central nervous system as inferred from measures of enzyme activity in crude extracts (Terwilliger et al., 1991). These findings led us to hypothesize that a concerted upregulation of the cAMP pathway is a general mechanism of opiate tolerance and dependence. Moreover, in the same study we showed similar induction of adenylyl cyclase and PKA activity in nucleus accumbens (NAc) in response to chronic administration of cocaine, but not of several non-abused psychoactive drugs. Morphine and cocaine also induced equivalent changes in inhibitory G protein subunits in this brain region. We thus extended our hypothesis to suggest that, particularly within brain reward regions such as NAc, cAMP pathway upregulation represents a common mechanism of reward tolerance and dependence shared by several classes of drugs of abuse. Research since that time, by many laboratories, has provided substantial support for these hypotheses. Specifically, opiates in several CNS regions including NAc, and cocaine more selectively in NAc, induce expression of certain adenylyl cyclase isoforms and PKA subunits via the transcription factor, CREB, and these transcriptional adaptations serve a homeostatic function to oppose drug action. In certain brain regions, such as locus coeruleus, these adaptations mediate aspects of physical opiate dependence and withdrawal, whereas in NAc they mediate reward tolerance and dependence that drives increased drug self-administration. This work has had important implications for understanding the molecular basis of addiction. "A general role for adaptations in G-proteins and the cyclic AMP system in mediating the chronic actions of morphine and cocaine on neuronal function". Previous studies have shown that chronic morphine increases levels of the G-protein subunits Giα and Goα, adenylate cyclase, cyclic AMP
International Nuclear Information System (INIS)
Petit, Andrew S.; Subotnik, Joseph E.
2014-01-01
In this paper, we develop a surface hopping approach for calculating linear absorption spectra using ensembles of classical trajectories propagated on both the ground and excited potential energy surfaces. We demonstrate that our method allows the dipole-dipole correlation function to be determined exactly for the model problem of two shifted, uncoupled harmonic potentials with the same harmonic frequency. For systems where nonadiabatic dynamics and electronic relaxation are present, preliminary results show that our method produces spectra in better agreement with the results of exact quantum dynamics calculations than spectra obtained using the standard ground-state Kubo formalism. As such, our proposed surface hopping approach should find immediate use for modeling condensed phase spectra, especially for expensive calculations using ab initio potential energy surfaces.
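For the benchmark problem mentioned above (two shifted, uncoupled harmonic potentials with the same frequency), the zero-temperature dipole correlation function has a closed form, and Fourier transforming it yields a Poisson progression of vibronic lines. The sketch below assumes that textbook limit with invented parameters (frequency w, Huang-Rhys factor S, vertical gap E0, and a small phenomenological damping gamma added for a clean FFT):

```python
import numpy as np

w, S, E0, gamma = 1.0, 2.0, 10.0, 0.05
dt, nt = 0.01, 2 ** 14
t = np.arange(nt) * dt

# Zero-temperature correlation function for two shifted harmonic surfaces:
#   C(t) = exp(-i*E0*t) * exp(S * (exp(-i*w*t) - 1))
C = np.exp(-1j * E0 * t + S * (np.exp(-1j * w * t) - 1.0) - gamma * t)

# Absorption spectrum: sigma(omega) proportional to Re of int e^{i omega t} C(t) dt
spec = (np.fft.ifft(C) * nt * dt).real
freq = 2 * np.pi * np.fft.fftfreq(nt, d=dt)

def peak(target):
    """Highest spectral value within +-0.4 of a vibronic line position."""
    m = np.abs(freq - target) < 0.4
    return spec[m].max()

I0, I1, I2 = peak(E0), peak(E0 + w), peak(E0 + 2 * w)
print(I1 / I0, I2 / I1)   # Poisson weights e^-S S^n / n!: ratios near S and S/2
```

The vibronic peaks sit at E0 + n*w with intensities following the Poisson weights, so with S = 2 the first two intensity ratios come out near 2 and 1.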
International Nuclear Information System (INIS)
Alcaraz, J.
2001-01-01
After several years of study, e+e− linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e− linear collider in the 500 GeV-1 TeV range will be able to complement it with an unprecedented level of precision in all relevant areas: Higgs physics, signals beyond the SM, and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Lundbye-Christensen, S; Dethlefsen, C; Gorst-Rasmussen, A; Fischer, T; Schønheyder, H C; Rothman, K J; Sørensen, H T
2009-01-01
Time series of incidence counts often show secular trends and seasonal patterns. We present a model for incidence counts capable of handling a possible gradual change in growth rates and seasonal patterns, serial correlation, and overdispersion. The model resembles an ordinary time series regression model for Poisson counts. It differs in allowing the regression coefficients to vary gradually over time in a random fashion. During the 1983-1999 period, 17,989 incidents of acute myocardial infarction were recorded in the Hospital Discharge Registry for the county of North Jutland, Denmark. Records were updated daily. A dynamic model with a seasonal pattern and an approximately linear trend was fitted to the data, and diagnostic plots indicated a good model fit. The analysis conducted with the dynamic model revealed peaks coinciding with above-average influenza A activity. On average the dynamic model estimated a higher peak-to-trough ratio than traditional models, and showed gradual changes in seasonal patterns. Analyses conducted with this model provide insights not available from more traditional approaches.
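A static (non-dynamic) cousin of this model can be sketched as a Poisson regression of daily counts on a linear trend plus one annual harmonic, fitted by iteratively reweighted least squares (IRLS); the paper's dynamic model additionally lets these coefficients drift over time. All data and coefficients below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

days = np.arange(3 * 365)
X = np.column_stack([
    np.ones_like(days, dtype=float),
    days / 365.0,                          # secular trend (per year)
    np.cos(2 * np.pi * days / 365.0),      # annual seasonal pattern
    np.sin(2 * np.pi * days / 365.0),
])
beta_true = np.array([1.0, 0.1, 0.3, 0.1])
y = rng.poisson(np.exp(X @ beta_true))     # simulated daily incidence counts

beta = np.zeros(4)
for _ in range(25):                        # IRLS: Newton steps on the Poisson log-likelihood
    mu = np.exp(X @ beta)
    W = mu                                 # Poisson working weights
    z = X @ beta + (y - mu) / mu           # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

peak_to_trough = np.exp(2 * np.hypot(beta[2], beta[3]))
print(beta, peak_to_trough)
```

The fitted coefficients recover the simulated trend and seasonal amplitude, and the peak-to-trough ratio of the seasonal pattern follows from the harmonic's amplitude.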
International Nuclear Information System (INIS)
Mikhailovskii, A.B.
1986-01-01
Some general problems of the theory of Alfven instabilities in a tokamak with high-energy ions are considered. It is assumed that such ions arise either from the ionization of fast neutral atoms injected into the tokamak or from production under thermonuclear conditions. Small-oscillation equations are derived for the Alfven-type waves, which allow for both the destabilizing effects associated with the high-energy particles and stabilizing ones, such as the effects of shear and bulk-plasma dissipation. The high-energy ion contribution to the growth rate of the Alfven waves is calculated. The author also considers the role of trapped-electron collisional dissipation.
Adaptive regularization of noisy linear inverse problems
DEFF Research Database (Denmark)
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and in the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
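For the special case of a Gaussian (ridge) prior, a closely related classic is MacKay's evidence-approximation fixed point, which iterates the regularization strength until prior and posterior agree about the expected size of the weights. The sketch below uses that classic update, not the paper's general exponential-family result, on simulated data with the noise variance assumed known:

```python
import numpy as np

rng = np.random.default_rng(2)

n, p, sigma2 = 200, 10, 0.5 ** 2
X = rng.normal(size=(n, p))
w_true = rng.normal(scale=0.7, size=p)     # true prior scale 0.7, i.e. alpha ~ 2
y = X @ w_true + rng.normal(scale=0.5, size=n)

alpha = 1.0
for _ in range(100):
    A = X.T @ X / sigma2 + alpha * np.eye(p)        # posterior precision
    m = np.linalg.solve(A, X.T @ y / sigma2)        # posterior mean
    gamma = p - alpha * np.trace(np.linalg.inv(A))  # effective number of parameters
    alpha = gamma / (m @ m)                         # MacKay fixed-point update
print("alpha =", alpha, " implied prior scale =", (1 / alpha) ** 0.5)
```

At the fixed point, the posterior-expected squared weight norm balances the prior's, which is the regularization-function balance the abstract describes, specialized to a quadratic regularizer.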
Monahan, John F
2008-01-01
Preface. Examples of the General Linear Model: Introduction; One-Sample Problem; Simple Linear Regression; Multiple Regression; One-Way ANOVA; First Discussion; The Two-Way Nested Model; Two-Way Crossed Model; Analysis of Covariance; Autoregression; Discussion. The Linear Least Squares Problem: The Normal Equations; The Geometry of Least Squares; Reparameterization; Gram-Schmidt Orthonormalization. Estimability and Least Squares Estimators: Assumptions for the Linear Mean Model; Confounding, Identifiability, and Estimability; Estimability and Least Squares Estimators; F…
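The contents above center on the normal equations and Gram-Schmidt orthonormalization. In miniature (with made-up data), the least-squares estimate solves X'X b = X'y, and a QR factorization of X, i.e. orthonormalized columns, gives the same answer with better numerical behavior:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)

b_normal = np.linalg.solve(X.T @ X, X.T @ y)   # normal equations X'X b = X'y
Q, R = np.linalg.qr(X)                         # orthonormalized columns of X
b_qr = np.linalg.solve(R, Q.T @ y)             # back-solve R b = Q'y
resid = y - X @ b_qr
print(b_normal, b_qr)
```

The two estimates coincide, and the residual is orthogonal to the column space of X, which is the geometric picture of least squares the book develops.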
Coupé, Christophe
2018-01-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we…
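The overdispersion problem motivating GAMLSS here can be sketched with simulated data: counts drawn from a negative binomial have variance well above their mean, and the Pearson dispersion statistic of a plain Poisson fit flags this immediately. The parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Negative binomial counts with mean mu and shape "size":
# variance = mu + mu^2/size, far above the Poisson variance (= mu).
mu, size = 20.0, 2
y = rng.negative_binomial(size, size / (size + mu), 5000)

mu_hat = y.mean()                    # Poisson MLE for an intercept-only model
# Pearson X^2 / df; approximately 1 if the Poisson assumption held
dispersion = np.sum((y - mu_hat) ** 2 / mu_hat) / (len(y) - 1)
print(mu_hat, dispersion)
```

A dispersion far above 1 (here around var/mean = 11) indicates that a Poisson model understates the variance, which is exactly the situation where a negative binomial GLMM or a GAMLSS-style model of the variance becomes appropriate.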