WorldWideScience

Sample records for hierarchical generalized linear

  1. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables the use of all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  2. A distributed-memory hierarchical solver for general sparse linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering

    2017-12-20

    We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
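
    The key idea the abstract relies on, compressing dense fill-in blocks that are numerically low-rank, can be illustrated with a small base-R sketch (hypothetical sizes and kernel; this is not the authors' distributed solver):

      set.seed(1)
      n <- 200
      # A smooth kernel evaluated between two well-separated point clusters
      x <- seq(0, 1, length.out = n)
      y <- seq(10, 11, length.out = n)
      B <- 1 / outer(x, y, function(a, b) abs(a - b))    # dense n x n off-diagonal block

      s <- svd(B)
      tol <- 1e-8
      k <- sum(s$d > tol * s$d[1])                        # numerical rank
      U <- s$u[, 1:k, drop = FALSE] %*% diag(s$d[1:k], k)
      V <- s$v[, 1:k, drop = FALSE]

      v <- rnorm(n)
      max(abs(B %*% v - U %*% (t(V) %*% v)))              # low-rank product matches to ~tol
      c(dense = n * n, lowrank = 2 * n * k)               # storage comparison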

  3. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    Science.gov (United States)

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.
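
    A much-simplified, non-Bayesian analogue of the delete-one diagnostics described above can be sketched with a Poisson GLM in R (hypothetical data; the actual NMA models are arm- or contrast-based and hierarchical):

      # Case-deletion influence check: refit without each observation and measure
      # the shift in the coefficient vector.
      set.seed(42)
      dat <- data.frame(trt = gl(3, 30), x = rnorm(90))
      dat$y <- rpois(90, exp(0.2 + 0.5 * (dat$trt == "2") + 0.3 * dat$x))

      full <- glm(y ~ trt + x, family = poisson, data = dat)
      beta_full <- coef(full)

      influence_i <- sapply(seq_len(nrow(dat)), function(i) {
        fit_i <- update(full, data = dat[-i, ])
        sqrt(sum((coef(fit_i) - beta_full)^2))    # shift in coefficient vector
      })
      head(order(influence_i, decreasing = TRUE)) # most influential observations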

  4. Determination of a Differential Item Functioning Procedure Using the Hierarchical Generalized Linear Model

    Directory of Open Access Journals (Sweden)

    Tülin Acar

    2012-01-01

    The aim of this research is to compare the results of differential item functioning (DIF) detection using the hierarchical generalized linear model (HGLM) technique with the results of DIF detection using the logistic regression (LR) and item response theory–likelihood ratio (IRT-LR) techniques on the same test items. To this end, it is first determined whether items show DIF according to socioeconomic status (SES) with the HGLM, LR, and IRT-LR techniques in the Turkish, Social Sciences, and Science subtests of the Secondary School Institutions Examination. When the agreement among the techniques in identifying items with DIF was inspected, a significant correlation was found between the results of the IRT-LR and LR techniques in all subtests; only in the Science subtest was the correlation between the HGLM and IRT-LR results significant. DIF analyses can be carried out on test items with other DIF techniques that were outside the scope of this research. The results obtained by using the DIF techniques in different sample sizes can also be compared.
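
    Of the three techniques compared, the logistic regression (LR) approach is the simplest to sketch; the following R fragment uses hypothetical item data and is not a reproduction of the study's analyses:

      # Logistic-regression DIF test for a single item: compare no-DIF, uniform-DIF,
      # and non-uniform-DIF models by likelihood ratio.
      set.seed(7)
      n     <- 500
      ses   <- rbinom(n, 1, 0.5)                 # grouping variable (e.g., SES)
      total <- rnorm(n)                          # matching criterion (total score)
      item  <- rbinom(n, 1, plogis(-0.2 + 1.1 * total + 0.4 * ses))

      m0 <- glm(item ~ total,       family = binomial)   # no DIF
      m1 <- glm(item ~ total + ses, family = binomial)   # uniform DIF
      m2 <- glm(item ~ total * ses, family = binomial)   # non-uniform DIF
      anova(m0, m1, m2, test = "LRT")            # likelihood-ratio comparison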

  5. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.

    Science.gov (United States)

    Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L

    2012-12-01

    The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. IRWLS is applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).
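
    A toy, fixed-effects-only version of jointly modelling the mean and the residual variance can be written as two alternating GLMs in base R; the genetic random effects and the h-likelihood machinery of the DHGLM above are deliberately omitted:

      # Alternate between a weighted mean model and a Gamma GLM for the dispersion.
      set.seed(1)
      n <- 1000
      x <- rnorm(n); z <- rnorm(n)
      y <- rnorm(n, mean = 1 + 2 * x, sd = exp(0.2 + 0.4 * z))

      w <- rep(1, n)
      for (it in 1:20) {
        mfit <- lm(y ~ x, weights = w)                      # mean model
        d    <- residuals(mfit)^2                           # squared residuals
        dfit <- glm(d ~ z, family = Gamma(link = "log"))    # dispersion model
        w    <- 1 / fitted(dfit)                            # new working weights
      }
      coef(mfit); coef(dfit)   # approach (1, 2) and roughly (0.4, 0.8) on the log-variance scale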

  6. Multicollinearity in hierarchical linear models.

    Science.gov (United States)

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
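
    A basic single-level diagnostic for the problem the paper studies is the variance inflation factor; the sketch below (toy data) is only a starting point and does not reproduce the paper's top-down, multilevel procedure:

      # Variance inflation factors: regress each predictor on the others.
      set.seed(3)
      n  <- 300
      x1 <- rnorm(n)
      x2 <- 0.9 * x1 + 0.1 * rnorm(n)        # strongly collinear with x1
      x3 <- rnorm(n)

      vif <- function(X) {
        sapply(seq_len(ncol(X)), function(j) {
          r2 <- summary(lm(X[, j] ~ X[, -j]))$r.squared
          1 / (1 - r2)
        })
      }
      vif(cbind(x1 = x1, x2 = x2, x3 = x3))   # x1 and x2 should show large VIFs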

  7. Generalized Linear Covariance Analysis

    Science.gov (United States)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic", and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  8. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Science.gov (United States)

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  9. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  10. Integrating Linear Programming and Analytical Hierarchical ...

    African Journals Online (AJOL)

    Study area is about 28000 ha of Keleibar- Chai Watershed, located in eastern Azerbaijan, Iran. Socio-economic information collected through a two-stage survey of 19 villages, including 300 samples. Thematic maps also have summarized Ecological factors, including physical and economic data. A comprehensive Linear ...

  11. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Contents: Introduction: Background; Scope; Notation; Distributions Related to the Normal Distribution; Quadratic Forms; Estimation. Model Fitting: Introduction; Examples; Some Principles of Statistical Modeling; Notation and Coding for Explanatory Variables. Exponential Family and Generalized Linear Models: Introduction; Exponential Family of Distributions; Properties of Distributions in the Exponential Family; Generalized Linear Models; Examples. Estimation: Introduction; Example: Failure Times for Pressure Vessels; Maximum Likelihood Estimation; Poisson Regression Example. Inference: Introduction; Sampling Distribution for Score Statistics; Taylor Series Approximations; Sampling Distribution for MLEs; Log-Likelihood Ratio Statistic; Sampling Distribution for the Deviance; Hypothesis Testing. Normal Linear Models: Introduction; Basic Results; Multiple Linear Regression; Analysis of Variance; Analysis of Covariance; General Linear Models. Binary Variables and Logistic Regression: Probability Distributions ...

  12. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compares the data analytic results from three regression…
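
    The contrast the paper discusses can be illustrated by fitting an ordinary regression and a two-level random-intercept model to the same simulated school data (lme4; not the paper's datasets):

      # Ordinary regression ignores clustering; the mixed model separates the
      # between-school and within-school variation.
      library(lme4)
      set.seed(11)
      nschool <- 30; npupil <- 25
      school  <- factor(rep(1:nschool, each = npupil))
      u       <- rnorm(nschool, sd = 0.8)             # school effects
      ses     <- rnorm(nschool * npupil)
      score   <- 50 + 3 * ses + u[school] + rnorm(nschool * npupil, sd = 2)
      dat     <- data.frame(score, ses, school)

      ols <- lm(score ~ ses, data = dat)                    # ignores clustering
      hlm <- lmer(score ~ ses + (1 | school), data = dat)   # random intercepts
      summary(ols)$coefficients["ses", ]
      fixef(hlm); VarCorr(hlm)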

  13. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models-now in a modernized new editionGeneralized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects.A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  14. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions... The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...

  15. Hierarchical and Non-Hierarchical Linear and Non-Linear Clustering Methods to “Shakespeare Authorship Question”

    Directory of Open Access Journals (Sweden)

    Refat Aljumily

    2015-09-01

    A few literary scholars have long claimed that Shakespeare did not write some of his best plays (history plays and tragedies) and proposed at one time or another various suspect authorship candidates. Most modern-day scholars of Shakespeare have rejected this claim, arguing that there is strong evidence that Shakespeare wrote the plays and poems, his name appearing on them as the author. This has led to a long-running scholarly debate. Stylometry is a fast-growing field often used to attribute authorship to anonymous or disputed texts. Stylometric attempts to resolve this literary puzzle have raised interesting questions over the past few years. The following paper contributes to "the Shakespeare authorship question" by using a mathematically based methodology to examine the hypothesis that Shakespeare wrote all the disputed plays traditionally attributed to him. More specifically, the methodology used here is based on Mean Proximity, as a linear hierarchical clustering method, and on Principal Components Analysis, as a non-hierarchical linear clustering method. It is also based, for the first time in the domain, on Self-Organizing Map U-Matrix and Voronoi Map, as non-linear clustering methods, to cover the possibility that our data contain significant non-linearities. The Vector Space Model (VSM) is used to convert texts into vectors in a high-dimensional space. The aim is to compare the degrees of similarity within and between limited samples of text (the disputed plays). The texts compared are the various works and plays assumed to have been written by Shakespeare and by possible alternative authors, notably Sir Francis Bacon, Christopher Marlowe, John Fletcher, and Thomas Kyd, where "similarity" is defined in terms of a correlation/distance coefficient measure based on the frequency-of-usage profiles of function words, word bi-grams, and character triple-grams. The claim that Shakespeare authored all the disputed
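
    The general pipeline described, frequency profiles in a vector space followed by hierarchical and non-hierarchical clustering, can be sketched in R with toy samples; the actual corpus, feature sets, and the SOM/U-matrix and Voronoi steps are not reproduced:

      # Toy stylometric clustering on function-word frequency profiles.
      texts <- list(
        A1 = "the and of to in that it he for was the and of",
        A2 = "the and of to in it he that was for of the and",
        B1 = "a with by on from a with by as at on from with",
        B2 = "a by with on from at as a on by with from from"
      )
      vocab <- sort(unique(unlist(strsplit(unlist(texts), " "))))
      tf <- t(sapply(texts, function(txt) {
        w <- strsplit(txt, " ")[[1]]
        table(factor(w, levels = vocab)) / length(w)        # relative frequencies
      }))

      hc <- hclust(dist(tf), method = "average")             # hierarchical clustering
      plot(hc)
      pc <- prcomp(tf)                                       # linear projection (PCA)
      round(pc$x[, 1:2], 3)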

  16. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  17. Linear versus non-linear supersymmetry, in general

    Energy Technology Data Exchange (ETDEWEB)

    Ferrara, Sergio [Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati,Via Enrico Fermi 40, I-00044 Frascati (Italy); Department of Physics and Astronomy, UniversityC.L.A.,Los Angeles, CA 90095-1547 (United States); Kallosh, Renata [SITP and Department of Physics, Stanford University,Stanford, California 94305 (United States); Proeyen, Antoine Van [Institute for Theoretical Physics, Katholieke Universiteit Leuven,Celestijnenlaan 200D, B-3001 Leuven (Belgium); Wrase, Timm [Institute for Theoretical Physics, Technische Universität Wien,Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)

    2016-04-12

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models with all unconstrained superfields and linearly realized supersymmetry is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized; its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM's: chiral superfields, linear superfields, general complex superfields; some of them are multiplets with a spin.

  18. Linear versus non-linear supersymmetry, in general

    International Nuclear Information System (INIS)

    Ferrara, Sergio; Kallosh, Renata; Proeyen, Antoine Van; Wrase, Timm

    2016-01-01

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models with all unconstrained superfields and linearly realized supersymmetry is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized; its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM's: chiral superfields, linear superfields, general complex superfields; some of them are multiplets with a spin.

  19. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  20. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  1. Perfect observables for the hierarchical non-linear O(N)-invariant σ-model

    International Nuclear Information System (INIS)

    Wieczerkowski, C.; Xylander, Y.

    1995-05-01

    We compute moving eigenvalues and the eigenvectors of the linear renormalization group transformation for observables along the renormalized trajectory of the hierarchical non-linear O(N)-invariant σ-model by means of perturbation theory in the running coupling constant. Moving eigenvectors are defined as solutions to a Callan-Symanzik type equation. (orig.)

  2. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  3. Using hierarchical linear growth models to evaluate protective mechanisms that mediate science achievement

    Science.gov (United States)

    von Secker, Clare Elaine

    The study of students at risk is a major topic of science education policy and discussion. Much research has focused on describing conditions and problems associated with the statistical risk of low science achievement among individuals who are members of groups characterized by problems such as poverty and social disadvantage. But outcomes attributed to these factors do not explain the nature and extent of mechanisms that account for differences in performance among individuals at risk. There is ample theoretical and empirical evidence that demographic differences should be conceptualized as social contexts, or collections of variables, that alter the psychological significance and social demands of life events, and affect subsequent relationships between risk and resilience. The hierarchical linear growth models used in this dissertation provide greater specification of the role of social context and the protective effects of attitude, expectations, parenting practices, peer influences, and learning opportunities on science achievement. While the individual influences of these protective factors on science achievement were small, their cumulative effect was substantial. Meta-analysis conducted on the effects associated with psychological and environmental processes that mediate risk mechanisms in sixteen social contexts revealed twenty-two significant differences between groups of students. Positive attitudes, high expectations, and more intense science course-taking had positive effects on achievement of all students, although these factors were not equally protective in all social contexts. In general, effects associated with authoritative parenting and peer influences were negative, regardless of social context. An evaluation comparing the performance and stability of hierarchical linear growth models with traditional repeated measures models is included as well.
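
    A minimal hierarchical linear growth model of the kind described can be written with lme4 on simulated longitudinal data (the dissertation's models include many additional protective factors):

      # Random-intercept, random-slope growth model: person-specific initial status
      # and growth trajectories.
      library(lme4)
      set.seed(5)
      nid <- 100; nocc <- 4
      id   <- factor(rep(1:nid, each = nocc))
      time <- rep(0:(nocc - 1), nid)
      b0   <- rnorm(nid, 50, 5)                 # person-specific intercepts
      b1   <- rnorm(nid, 2, 1)                  # person-specific growth rates
      ach  <- b0[id] + b1[id] * time + rnorm(nid * nocc, sd = 3)

      growth <- lmer(ach ~ time + (time | id))
      fixef(growth)      # average initial status and growth rate
      VarCorr(growth)    # between-person variation in intercepts and slopes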

  4. Generalized Cross-Gramian for Linear Systems

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza

    2012-01-01

    The cross-gramian is a well-known matrix with embedded controllability and observability information. The cross-gramian is related to the Hankel operator and the Hankel singular values of a linear square system and it has several interesting properties. These properties make the cross...... square symmetric systems, the ordinary cross-gramian does not exist. To cope with this problem, a new generalized cross-gramian is introduced in this paper. In contrast to the ordinary cross-gramian, the generalized cross-gramian can be easily obtained for general linear systems and therefore can be used...

  5. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

    Science.gov (United States)

    Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

    2002-01-01

    Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory.

  6. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors

    Science.gov (United States)

    Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen

    2012-01-01

    Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…
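
    The boundary-estimate problem motivating the paper is easy to provoke with few clusters; the sketch below uses plain maximum likelihood in lme4 and does not implement the weakly informative priors proposed by the authors:

      # With very few clusters, the between-group variance is often estimated at the
      # boundary (exactly zero), which is the problem the paper addresses.
      library(lme4)
      set.seed(8)
      ngroups <- 5; nper <- 4
      g <- factor(rep(1:ngroups, each = nper))
      y <- rnorm(ngroups, sd = 0.3)[g] + rnorm(ngroups * nper, sd = 1)

      fit <- lmer(y ~ 1 + (1 | g))
      VarCorr(fit)   # the group variance may be estimated as (near) zero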

  7. Measuring Teacher Effectiveness through Hierarchical Linear Models: Exploring Predictors of Student Achievement and Truancy

    Science.gov (United States)

    Subedi, Bidya Raj; Reese, Nancy; Powell, Randy

    2015-01-01

    This study explored significant predictors of students' Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on the proportion of variance explained at the teacher level. We employed a two-level hierarchical linear model (HLM) with student and teacher data at level-1 and level-2, respectively.…

  8. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement

    Science.gov (United States)

    Fung, Karen; ElAtia, Samira

    2015-01-01

    Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…

  9. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    Science.gov (United States)

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  10. Generalized non-linear Schroedinger hierarchy

    International Nuclear Information System (INIS)

    Aratyn, H.; Gomes, J.F.; Zimerman, A.H.

    1994-01-01

    The importance of studying completely integrable models has become evident in recent years because those models possess an extremely rich algebraic structure, providing the natural setting for the description of solitons. Such models can be described through non-linear differential equations, pseudo-linear operators (Lax formulation), or a matrix formulation. Integrability implies the existence of a conservation law associated with each degree of freedom. Each conserved charge Q_i can be associated with a Hamiltonian, defining a time evolution related to a time t_i through the Hamilton equation ∂A/∂t_i = [A, Q_i]. In particular, for a two-dimensional field theory, infinitely many degrees of freedom exist, and consequently infinitely many conservation laws describing the evolution in an infinite set of times. The Hamilton equations define a hierarchy of models which present an infinite set of conservation laws. This paper studies the generalized non-linear Schroedinger hierarchy

  11. Assessing exposure to violence using multiple informants: application of hierarchical linear model.

    Science.gov (United States)

    Kuo, M; Mohler, B; Raudenbush, S L; Earls, F J

    2000-11-01

    The present study assesses the effects of demographic risk factors on children's exposure to violence (ETV) and how these effects vary by informants. Data on exposure to violence of 9-, 12-, and 15-year-olds were collected from both child participants (N = 1880) and parents (N = 1776), as part of the assessment of the Project on Human Development in Chicago Neighborhoods (PHDCN). A two-level hierarchical linear model (HLM) with multivariate outcomes was employed to analyze information obtained from these two different groups of informants. The findings indicate that parents generally report less ETV than do their children and that associations of age, gender, and parent education with ETV are stronger in the self-reports than in the parent reports. The findings support a multivariate approach when information obtained from different sources is being integrated. The application of HLM allows an assessment of interactions between risk factors and informants and uses all available data, including data from one informant when data from the other informant is missing.

  12. General solution of linear vector supersymmetry

    International Nuclear Information System (INIS)

    Blasi, Alberto; Maggiore, Nicola

    2007-01-01

    We give the general solution of the Ward identity for the linear vector supersymmetry which characterizes all topological models. Such a solution, whose expression is quite compact and simple, greatly simplifies the study of theories displaying a supersymmetric algebraic structure, reducing to a few lines the proof of their possible finiteness. In particular, the cohomology technology, usually involved for the quantum extension of these theories, is completely bypassed. The case of Chern-Simons theory is taken as an example

  13. Identification of general linear mechanical systems

    Science.gov (United States)

    Sirlin, S. W.; Longman, R. W.; Juang, J. N.

    1983-01-01

    Previous work in identification theory has been concerned with the general first order time derivative form. Linear mechanical systems, a large and important class, naturally have a second order form. This paper utilizes this additional structural information for the purpose of identification. A realization is obtained from input-output data, and then knowledge of the system input, output, and inertia matrices is used to determine a set of linear equations whereby we identify the remaining unknown system matrices. Necessary and sufficient conditions on the number, type and placement of sensors and actuators are given which guarantee identifiability, and less stringent conditions are given which guarantee generic identifiability. Both a priori identifiability and a posteriori identifiability are considered, i.e., identifiability being ensured prior to obtaining data, and identifiability being assured with a given data set.
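
    The second-order structure referred to above, M q'' + C q' + K q = B u, and its standard first-order realization can be illustrated with hypothetical matrices (this is not the paper's identification algorithm):

      # Second-order mechanical form written as x' = A x + G u with x = (q, q').
      # Hypothetical 2-DOF mass, stiffness, and damping matrices.
      M  <- diag(c(2, 1))
      K  <- matrix(c(300, -100, -100, 100), 2, 2)
      Cd <- 0.01 * K                              # proportional damping (assumed)
      B  <- matrix(c(1, 0), 2, 1)

      Minv <- solve(M)
      A <- rbind(cbind(matrix(0, 2, 2), diag(2)),
                 cbind(-Minv %*% K,     -Minv %*% Cd))
      G <- rbind(matrix(0, 2, 1), Minv %*% B)
      eigen(A)$values    # lightly damped complex pairs (structural modes)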

  14. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    Science.gov (United States)

    Atas, Dogu; Karadag, Özge

    2017-01-01

    In the field of education, most of the data collected are multi-level structured. Cities, city-based schools, school-based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results than standard models when the data set has a structure going as far as individuals,…

  15. Gravitational Wave in Linear General Relativity

    Science.gov (United States)

    Cubillos, D. J.

    2017-07-01

    General relativity is the best theory currently available to describe the interaction due to gravity. Within Albert Einstein's field equations this interaction is described by means of the spatiotemporal curvature generated by the matter-energy content in the universe. Weyl worked on the existence of perturbations of the curvature of space-time that propagate at the speed of light, which are known as gravitational waves, obtained to a first approximation through the linearization of Einstein's field equations. Weyl's solution consists of taking the field equations in a vacuum and perturbing the metric, using the Minkowski metric slightly perturbed by a factor ɛ greater than zero but much smaller than one. If the feedback effect of the field is neglected, it can be considered a weak-field solution. After introducing the perturbed metric and ignoring terms of order greater than one in ɛ, we can find the linearized field equations in terms of the perturbation, which can then be expressed as the d'Alembertian operator of the perturbation set equal to zero. This is analogous to the linear wave equation in classical mechanics and can be interpreted by saying that gravitational effects propagate as waves at the speed of light. In addition, studying the motion of a particle affected by this perturbation through the geodesic equation shows the transverse character of the gravitational wave and its two possible polarization states. It can be shown that the energy carried by the wave is of the order of 1/c^5, where c is the speed of light, which explains why its effects on matter are very small and very difficult to detect.
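
    In compact form, the weak-field expansion and linearized vacuum equations sketched above read as follows (standard textbook conventions, which may differ in detail from the author's):

      % Weak-field expansion and linearized vacuum field equations
      % (standard conventions; details may differ from the author's derivation).
      g_{\mu\nu} = \eta_{\mu\nu} + \epsilon\, h_{\mu\nu}, \qquad |\epsilon\, h_{\mu\nu}| \ll 1,
      \qquad \bar{h}_{\mu\nu} \equiv h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\, h,
      \qquad \partial^{\mu}\bar{h}_{\mu\nu} = 0 \;\;\text{(Lorenz gauge)}
      \;\;\Longrightarrow\;\; \Box\, \bar{h}_{\mu\nu} = 0 .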

  16. Aspects of general linear modelling of migration.

    Science.gov (United States)

    Congdon, P

    1992-01-01

    "This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt

  17. Generalized Linear Models in Vehicle Insurance

    Directory of Open Access Journals (Sweden)

    Silvie Kafková

    2014-01-01

    Actuaries in insurance companies try to find the best model for the estimation of the insurance premium. It depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of the portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that the GLMs are not limited by inflexible preconditions. Our aim is to predict the dependence of annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposed a classification analysis approach that addresses the selection of predictor variables. The models with different predictor variables are compared by analysis of deviance and the Akaike information criterion (AIC). Based on this comparison, the model for the best estimate of annual claim frequency is chosen. All statistical calculations are computed in the R environment, whose stats package contains functions for the estimation of GLM parameters and for the analysis of deviance.

  18. Evaluating the double Poisson generalized linear model.

    Science.gov (United States)

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
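
    Assuming Efron's (1986) parameterization of the double Poisson density, the normalizing constant can be checked numerically by truncated summation; this brute-force check is not the approximation method proposed in the paper:

      # Unnormalized double-Poisson pmf (Efron's parameterization, assumed) and a
      # numerical normalizing constant obtained by summing over a long grid of y.
      ddpois_unnorm <- function(y, mu, phi) {
        yy <- pmax(y, 1)                        # avoid log(0); the y = 0 term is handled below
        logf <- 0.5 * log(phi) - phi * mu +
          ifelse(y == 0, 0, -y + y * log(yy) - lfactorial(y) +
                              phi * y * (1 + log(mu) - log(yy)))
        exp(logf)
      }
      norm_const <- function(mu, phi, ymax = 1000) {
        1 / sum(ddpois_unnorm(0:ymax, mu, phi))
      }
      norm_const(mu = 4, phi = 0.5)   # phi < 1: over-dispersion; phi > 1: under-dispersion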

  19. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with a dispersion parameter are identifiable when equipped with the standard parametrization...

  20. LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data.

    Science.gov (United States)

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses.

  1. A Hierarchical Bayesian Setting for an Inverse Problem in Linear Parabolic PDEs with Noisy Boundary Conditions

    KAUST Repository

    Ruggeri, Fabrizio

    2016-05-12

    In this work we develop a Bayesian setting to infer unknown parameters in initial-boundary value problems related to linear parabolic partial differential equations. We realistically assume that the boundary data are noisy, for a given prescribed initial condition. We show how to derive the joint likelihood function for the forward problem, given some measurements of the solution field subject to Gaussian noise. Given Gaussian priors for the time-dependent Dirichlet boundary values, we analytically marginalize the joint likelihood using the linearity of the equation. Our hierarchical Bayesian approach is fully implemented in an example that involves the heat equation. In this example, the thermal diffusivity is the unknown parameter. We assume that the thermal diffusivity parameter can be modeled a priori through a lognormal random variable or by means of a space-dependent stationary lognormal random field. Synthetic data are used to test the inference. We exploit the behavior of the non-normalized log posterior distribution of the thermal diffusivity. Then, we use the Laplace method to obtain an approximated Gaussian posterior and therefore avoid costly Markov Chain Monte Carlo computations. Expected information gains and predictive posterior densities for observable quantities are numerically estimated using Laplace approximation for different experimental setups.
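
    The Laplace step described above, locate the posterior mode and use the curvature there to form a Gaussian approximation, can be illustrated on a generic one-parameter problem in base R (synthetic data; not the paper's PDE setting):

      # Laplace approximation for a log-diffusivity-like parameter: MAP plus
      # curvature at the mode gives a Gaussian posterior approximation.
      set.seed(2)
      theta_true <- log(0.7)
      yobs <- rnorm(20, mean = exp(theta_true), sd = 0.1)      # synthetic data

      neg_log_post <- function(theta) {
        -sum(dnorm(yobs, mean = exp(theta), sd = 0.1, log = TRUE)) -
          dnorm(theta, mean = 0, sd = 1, log = TRUE)            # lognormal prior on the diffusivity
      }
      opt  <- optim(0, neg_log_post, method = "BFGS", hessian = TRUE)
      map  <- opt$par
      sdev <- sqrt(1 / opt$hessian[1, 1])
      c(MAP = map, sd = sdev)     # Gaussian (Laplace) approximation N(map, sdev^2)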

  2. Generalized 2-vector spaces and general linear 2-groups

    OpenAIRE

    Elgueta, Josep

    2008-01-01

    In this paper a notion of generalized 2-vector space is introduced which includes Kapranov and Voevodsky 2-vector spaces. Various kinds of generalized 2-vector spaces are considered and examples are given. The existence of non-free generalized 2-vector spaces and of generalized 2-vector spaces which are non-Karoubian (hence, non-abelian) categories is discussed, and it is shown how any generalized 2-vector space can be identified with a full subcategory of an (abelian) functor category ...

  3. Exploring the Effects of Congruence and Holland's Personality Codes on Job Satisfaction: An Application of Hierarchical Linear Modeling Techniques

    Science.gov (United States)

    Ishitani, Terry T.

    2010-01-01

    This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included a nationally representative group of 1462 female and 1280 male college graduates who…

  4. Motivation, Classroom Environment, and Learning in Introductory Geology: A Hierarchical Linear Model

    Science.gov (United States)

    Gilbert, L. A.; Hilpert, J. C.; Van Der Hoeven Kraft, K.; Budd, D.; Jones, M. H.; Matheney, R.; Mcconnell, D. A.; Perkins, D.; Stempien, J. A.; Wirth, K. R.

    2013-12-01

    Prior research has indicated that highly motivated students perform better and that learning increases in innovative, reformed classrooms, but untangling the student effects from the instructor effects is essential to understanding how to best support student learning. Using a hierarchical linear model, we examine these effects separately and jointly. We use data from nearly 2,000 undergraduate students surveyed by the NSF-funded GARNET (Geoscience Affective Research NETwork) project in 65 different introductory geology classes at research universities, public masters-granting universities, liberal arts colleges and community colleges across the US. Student level effects were measured as increases in expectancy and self-regulation using the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al., 1991). Instructor level effects were measured using the Reformed Teaching Observation Protocol, (RTOP; Sawada et al., 2000), with higher RTOP scores indicating a more reformed, student-centered classroom environment. Learning was measured by learning gains on a Geology Concept Inventory (GCI; Libarkin and Anderson, 2005) and normalized final course grade. The hierarchical linear model yielded significant results at several levels. At the student level, increases in expectancy and self-regulation are significantly and positively related to higher grades regardless of instructor; the higher the increase, the higher the grade. At the instructor level, RTOP scores are positively related to normalized average GCI learning gains. The higher the RTOP score, the higher the average class GCI learning gains. Across both levels, average class GCI learning gains are significantly and positively related to student grades; the higher the GCI learning gain, the higher the grade. Further, the RTOP scores are significantly and negatively related to the relationship between expectancy and course grade. The lower the RTOP score, the higher the correlation between change in

  5. Smooth generalized linear models for aggregated data

    OpenAIRE

    Ayma Anza, Diego Armando

    2016-01-01

    Aggregated data commonly appear in areas such as epidemiology, demography, and public health. Generally, the aggregation process is done to protect the privacy of patients, to facilitate compact presentation, or to make the data comparable with other, coarser datasets. However, this process may hinder the visualization of the underlying distribution that the data follow. Also, it prohibits the direct analysis of relationships between ag...

  6. Generalized local homology and cohomology for linearly compact modules

    International Nuclear Information System (INIS)

    Tran Tuan Nam

    2006-07-01

    We study generalized local homology for linearly compact modules. By duality, we get some properties of generalized local cohomology modules and extend well-known properties of local cohomology of A. Grothendieck. (author)

  7. Teacher characteristics and student performance: An analysis using hierarchical linear modelling

    Directory of Open Access Journals (Sweden)

    Paula Armstrong

    2015-12-01

    This research makes use of hierarchical linear modelling to investigate which teacher characteristics are significantly associated with student performance. Using data from the SACMEQ III study of 2007, an interesting and potentially important finding is that younger teachers are better able to improve the mean mathematics performance of their students. Furthermore, younger teachers themselves perform better on subject tests than do their older counterparts. Identical models are run for Sub-Saharan countries bordering on South Africa, as well as for Kenya, and the strong relationship between teacher age and student performance is not observed. Similarly, the model is run for South Africa using data from SACMEQ II (conducted in 2002), and the relationship between teacher age and student performance is also not observed. It must be noted that South African teachers were not tested in SACMEQ II, so it was not possible to observe differences in subject knowledge amongst teachers in different cohorts and it was not possible to control for teachers' level of subject knowledge when observing the relationship between teacher age and student performance. Changes in teacher education in the late 1990s and early 2000s may explain the differences in the performance of younger teachers relative to their older counterparts observed in the later dataset.

  8. Factors influencing the occupational injuries of physical therapists in Taiwan: A hierarchical linear model approach.

    Science.gov (United States)

    Tao, Yu-Hui; Wu, Yu-Lung; Huang, Wan-Yun

    2017-01-01

    The evidence literature suggests that physical therapy practitioners are subjected to a high probability of acquiring work-related injuries, but only a few studies have specifically investigated Taiwanese physical therapy practitioners. This study was conducted to determine the relationships among individual and group hospital-level factors that contribute to the medical expenses for the occupational injuries of physical therapy practitioners in Taiwan. Physical therapy practitioners in Taiwan with occupational injuries were selected from the 2013 National Health Insurance Research Databases (NHIRD). The age, gender, job title, hospitals attributes, and outpatient data of physical therapy practitioners who sustained an occupational injury in 2013 were obtained with SAS 9.3. SPSS 20.0 and HLM 7.01 were used to conduct descriptive and hierarchical linear model analyses, respectively. The job title of physical therapy practitioners at the individual level and the hospital type at the group level exert positive effects on per person medical expenses. Hospital hierarchy moderates the individual-level relationships of age and job title with the per person medical expenses. Considering that age, job title, and hospital hierarchy affect medical expenses for the occupational injuries of physical therapy practitioners, we suggest strengthening related safety education and training and elevating the self-awareness of the risk of occupational injuries of physical therapy practitioners to reduce and prevent the occurrence of such injuries.

  9. Merging K-means with hierarchical clustering for identifying general-shaped groups.

    Science.gov (United States)

    Peterson, Anna D; Ghosh, Arka P; Maitra, Ranjan

    2018-01-01

    Clustering partitions a dataset such that observations placed together in a group are similar but different from those in other groups. Hierarchical and K -means clustering are two approaches but have different strengths and weaknesses. For instance, hierarchical clustering identifies groups in a tree-like structure but suffers from computational complexity in large datasets while K -means clustering is efficient but designed to identify homogeneous spherically-shaped clusters. We present a hybrid non-parametric clustering approach that amalgamates the two methods to identify general-shaped clusters and that can be applied to larger datasets. Specifically, we first partition the dataset into spherical groups using K -means. We next merge these groups using hierarchical methods with a data-driven distance measure as a stopping criterion. Our proposal has the potential to reveal groups with general shapes and structure in a dataset. We demonstrate good performance on several simulated and real datasets.
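
    A compact sketch of the two-stage idea, over-segment with K-means and then merge the centres hierarchically, is given below with a fixed two-group cut in place of the authors' data-driven stopping criterion:

      # Two half-moon shaped groups: plain K-means with k = 2 would split them
      # poorly, but K-means over-segmentation followed by single-linkage merging
      # of the centres recovers the general-shaped clusters.
      set.seed(9)
      t1 <- runif(300, 0, pi);  t2 <- runif(300, 0, pi)
      X  <- rbind(cbind(cos(t1), sin(t1)),
                  cbind(1 + cos(t2), -sin(t2) + 0.3)) + matrix(rnorm(1200, sd = 0.05), ncol = 2)

      km  <- kmeans(X, centers = 20, nstart = 10)           # many small spherical groups
      hc  <- hclust(dist(km$centers), method = "single")    # merge nearby groups
      grp <- cutree(hc, k = 2)[km$cluster]                  # map merged labels back to points
      table(grp)
      plot(X, col = grp, pch = 19, cex = 0.5)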

  10. Double generalized linear compound poisson models to insurance claims data

    DEFF Research Database (Denmark)

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed of a degenerate distribution...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set on car insurance.

  11. Generalized Multicarrier CDMA: Unification and Linear Equalization

    Directory of Open Access Journals (Sweden)

    Wang Zhengdao

    2005-01-01

    Relying on block-symbol spreading and judicious design of user codes, this paper builds on the generalized multicarrier (GMC) quasisynchronous CDMA system that is capable of multiuser interference (MUI) elimination and intersymbol interference (ISI) suppression with guaranteed symbol recovery, regardless of the wireless frequency-selective channels. GMC-CDMA affords an all-digital unifying framework, which encompasses single-carrier and several multicarrier (MC) CDMA systems. Besides the unifying framework, it is shown that GMC-CDMA offers flexibility both in full load (the maximum number of users allowed by the available bandwidth) and in reduced-load settings. A novel blind channel estimation algorithm is also derived. Analytical evaluation and simulations illustrate the superior error performance and flexibility of uncoded GMC-CDMA over competing MC-CDMA alternatives, especially in the presence of uplink multipath channels.

  12. Hierarchical models for informing general biomass equations with felled tree data

    Science.gov (United States)

    Brian J. Clough; Matthew B. Russell; Christopher W. Woodall; Grant M. Domke; Philip J. Radtke

    2015-01-01

    We present a hierarchical framework that uses a large multispecies felled tree database to inform a set of general models for predicting tree foliage biomass, with accompanying uncertainty, within the FIA database. Results suggest significant prediction uncertainty for individual trees and reveal higher errors when predicting foliage biomass for larger trees and for...

  13. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  14. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.
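
    An informal, practical version of the parametric-versus-semiparametric comparison can be made by fitting both forms of m(T) and comparing them by AIC (mgcv; this is not the paper's test statistic):

      # GLM with linear T versus a partially linear fit with a smooth in T.
      library(mgcv)
      set.seed(4)
      n <- 400
      x <- rnorm(n); t <- runif(n, -2, 2)
      y <- rbinom(n, 1, plogis(0.8 * x + sin(2 * t)))       # m(T) is non-linear here

      par_fit  <- glm(y ~ x + t, family = binomial)         # m linear
      semi_fit <- gam(y ~ x + s(t), family = binomial)      # m an unspecified smooth
      AIC(par_fit, semi_fit)
      summary(semi_fit)$s.table                             # effective df of s(t)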

  15. Minimal solution of general dual fuzzy linear systems

    International Nuclear Information System (INIS)

    Abbasbandy, S.; Otadi, M.; Mosleh, M.

    2008-01-01

    Fuzzy linear systems of equations play a major role in several applications in various areas such as engineering, physics and economics. In this paper, we investigate the existence of a minimal solution of general dual fuzzy linear equation systems. Two necessary and sufficient conditions for the existence of the minimal solution are given. Also, some examples in engineering and economics are considered

  16. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  17. Role of Prefrontal Cortex in Learning and Generalizing Hierarchical Rules in 8-Month-Old Infants.

    Science.gov (United States)

    Werchan, Denise M; Collins, Anne G E; Frank, Michael J; Amso, Dima

    2016-10-05

    Recent research indicates that adults and infants spontaneously create and generalize hierarchical rule sets during incidental learning. Computational models and empirical data suggest that, in adults, this process is supported by circuits linking prefrontal cortex (PFC) with striatum and their modulation by dopamine, but the neural circuits supporting this form of learning in infants are largely unknown. We used near-infrared spectroscopy to record PFC activity in 8-month-old human infants during a simple audiovisual hierarchical-rule-learning task. Behavioral results confirmed that infants adopted hierarchical rule sets to learn and generalize spoken object-label mappings across different speaker contexts. Infants had increased activity over right dorsal lateral PFC when rule sets switched from one trial to the next, a neural marker related to updating rule sets into working memory in the adult literature. Infants' eye blink rate, a possible physiological correlate of striatal dopamine activity, also increased when rule sets switched from one trial to the next. Moreover, the increase in right dorsolateral PFC activity in conjunction with eye blink rate also predicted infants' generalization ability, providing exploratory evidence for frontostriatal involvement during learning. These findings provide evidence that PFC is involved in rudimentary hierarchical rule learning in 8-month-old infants, an ability that was previously thought to emerge later in life in concert with PFC maturation. Hierarchical rule learning is a powerful learning mechanism that allows rules to be selected in a context-appropriate fashion and transferred or reused in novel contexts. Data from computational models and adults suggests that this learning mechanism is supported by dopamine-innervated interactions between prefrontal cortex (PFC) and striatum. Here, we provide evidence that PFC also supports hierarchical rule learning during infancy, challenging the current dogma that PFC is an

  18. About one non linear generalization of the compression reflection ...

    African Journals Online (AJOL)

    Both the stage and spiral iteration cases are considered. A geometrical interpretation of the convergence of the generalized iteration method is given. The formula for the non-linear generalized compression reflection operator as a function of one variable is obtained.

  19. McDonald Generalized Linear Failure Rate Distribution

    Directory of Open Access Journals (Sweden)

    Ibrahim Elbatal

    2014-10-01

    Full Text Available We introduce in this paper a new six-parameter generalized version of the generalized linear failure rate (GLFR) distribution, called the McDonald generalized linear failure rate (McGLFR) distribution. The new distribution is quite flexible and can be used effectively in modeling survival data and reliability problems. Its failure rate function can be constant, decreasing, increasing, upside-down bathtub shaped, or bathtub shaped depending on its parameters. It includes some well-known lifetime distributions as special sub-models. Some structural properties of the new distribution are studied. Moreover, we discuss maximum likelihood estimation of the unknown parameters of the new model.

  20. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  1. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
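
    The sketch below illustrates, under stated assumptions, the spline idea behind such models: the nonparametric component is expanded in a B-spline basis so that the whole model can be fitted as an ordinary GLM while the linear part keeps interpretable coefficients. The data, formula and basis dimension are hypothetical, not taken from the paper.

```python
# Sketch of the spline idea behind generalized additive partial linear models:
# the nonparametric function of t is replaced by a B-spline basis expansion,
# so the whole model can be fitted as an ordinary GLM (names are illustrative).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.binomial(1, 0.4, size=n).astype(float),
    "t": rng.uniform(0, 1, size=n),
})
eta = 0.8 * df.x1 - 1.2 * df.x2 + np.cos(2 * np.pi * df.t)
df["y"] = rng.poisson(np.exp(0.3 * eta))

# bs(t, df=6) builds a cubic B-spline basis for the nonparametric part;
# x1 and x2 keep their interpretable linear coefficients.
fit = smf.glm("y ~ x1 + x2 + bs(t, df=6)",
              data=df, family=sm.families.Poisson()).fit()
print(fit.params[["x1", "x2"]])   # estimates of the parametric components
```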

  2. Human capital, social capital and scientific research in Europe: an application of linear hierarchical models

    OpenAIRE

    Mathieu Goudard; Michel Lubrano

    2011-01-01

    The theory of human capital is one way to explain individual decisions to produce scientific research. However, this theory, even though it acknowledges the importance of time in science, falls short of explaining the existing diversity of scientific output. The present paper introduces the social capital of Bourdieu (1980), Coleman (1988) and Putnam (1995) as a necessary complement to explain the creation of scientific human capital. This paper connects these two concepts by means of a hierarchical...

  3. An implicit spectral formula for generalized linear Schroedinger equations

    International Nuclear Information System (INIS)

    Schulze-Halberg, A.; Garcia-Ravelo, J.; Pena Gil, Jose Juan

    2009-01-01

    We generalize the semiclassical Bohr–Sommerfeld quantization rule to an exact, implicit spectral formula for linear, generalized Schroedinger equations admitting a discrete spectrum. Special cases include the position-dependent mass Schroedinger equation or the Schroedinger equation for weighted energy. Requiring knowledge of the potential and the solution associated with the lowest spectral value, our formula predicts the complete spectrum in its exact form. (author)

  4. Solution of generalized shifted linear systems with complex symmetric matrices

    International Nuclear Information System (INIS)

    Sogabe, Tomohiro; Hoshi, Takeo; Zhang, Shao-Liang; Fujiwara, Takeo

    2012-01-01

    We develop the shifted COCG method [R. Takayama, T. Hoshi, T. Sogabe, S.-L. Zhang, T. Fujiwara, Linear algebraic calculation of Green’s function for large-scale electronic structure theory, Phys. Rev. B 73 (165108) (2006) 1–9] and the shifted WQMR method [T. Sogabe, T. Hoshi, S.-L. Zhang, T. Fujiwara, On a weighted quasi-residual minimization strategy of the QMR method for solving complex symmetric shifted linear systems, Electron. Trans. Numer. Anal. 31 (2008) 126–140] for solving generalized shifted linear systems with complex symmetric matrices that arise from the electronic structure theory. The complex symmetric Lanczos process with a suitable bilinear form plays an important role in the development of the methods. The numerical examples indicate that the methods are highly attractive when the inner linear systems can efficiently be solved.
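
    For orientation only, the sketch below sets up generalized shifted systems of the form (A + σB)x = b with a complex symmetric A and solves each shift with a dense direct solver as a reference point; shifted Krylov methods such as the shifted COCG instead reuse a single Krylov subspace across all shifts. The matrix sizes and shift values are arbitrary illustrations, not taken from the paper.

```python
# Illustrative sketch only: the "generalized shifted" systems have the form
# (A + sigma_k * B) x_k = b for many shifts sigma_k, with A complex symmetric.
# Here each shift is solved directly with a dense solver; shifted Krylov methods
# such as the shifted COCG reuse one Krylov subspace across all shifts instead.
import numpy as np

rng = np.random.default_rng(2)
n = 200
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M + M.T                                  # complex symmetric (not Hermitian)
B = np.eye(n) + 0.01 * (M + M.T).real        # symmetric "metric" matrix
b = rng.normal(size=n) + 0j

shifts = [0.1 + 0.05j, 0.5 + 0.05j, 1.0 + 0.05j]   # e.g. energies plus broadening
solutions = {s: np.linalg.solve(A + s * B, b) for s in shifts}

for s, x in solutions.items():
    residual = np.linalg.norm((A + s * B) @ x - b)
    print(f"shift {s}: residual {residual:.2e}")
```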

  5. Non-Linear Behaviour Of Gelatin Networks Reveals A Hierarchical Structure

    KAUST Repository

    Yang, Zhi; Hemar, Yacine; Hilliou, loic; Gilbert, Elliot P.; McGillivray, Duncan James; Williams, Martin A. K.; Chaieb, Saharoui

    2015-01-01

    We investigate the strain hardening behaviour of various gelatin networks - namely physically-crosslinked gelatin gels, chemically-crosslinked gelatin gels, and hybrid gels made of a combination of the former two - under large shear deformations using the pre-stress, strain ramp, and large amplitude oscillation shear protocols. Further, the internal structures of the physically-crosslinked and chemically-crosslinked gelatin gels were characterized by small angle neutron scattering (SANS) to enable their internal structures to be correlated with their nonlinear rheology. The Kratky plots of SANS data demonstrate the presence of small cross-linked aggregates within the chemically-crosslinked network, whereas in the physically-crosslinked gels a relatively homogeneous structure is observed. Through model fitting to the scattering data, we were able to obtain structural parameters, such as the correlation length (ξ), the cross-sectional polymer chain radius (Rc), and the fractal dimension (df) of the gel networks. The fractal dimensions df obtained from the SANS data of the physically-crosslinked and chemically-crosslinked gels are 1.31 and 1.53, respectively. These values are in excellent agreement with the ones obtained from a generalized non-linear elastic theory we used to fit our stress-strain curves. The chemical crosslinking that generates coils and aggregates hinders the free stretching of the triple-helix bundles in the physically-crosslinked gels.

  6. Non-Linear Behaviour Of Gelatin Networks Reveals A Hierarchical Structure

    KAUST Repository

    Yang, Zhi

    2015-12-14

    We investigate the strain hardening behaviour of various gelatin networks - namely physically-crosslinked gelatin gels, chemically-crosslinked gelatin gels, and hybrid gels made of a combination of the former two - under large shear deformations using the pre-stress, strain ramp, and large amplitude oscillation shear protocols. Further, the internal structures of the physically-crosslinked and chemically-crosslinked gelatin gels were characterized by small angle neutron scattering (SANS) to enable their internal structures to be correlated with their nonlinear rheology. The Kratky plots of SANS data demonstrate the presence of small cross-linked aggregates within the chemically-crosslinked network, whereas in the physically-crosslinked gels a relatively homogeneous structure is observed. Through model fitting to the scattering data, we were able to obtain structural parameters, such as the correlation length (ξ), the cross-sectional polymer chain radius (Rc), and the fractal dimension (df) of the gel networks. The fractal dimensions df obtained from the SANS data of the physically-crosslinked and chemically-crosslinked gels are 1.31 and 1.53, respectively. These values are in excellent agreement with the ones obtained from a generalized non-linear elastic theory we used to fit our stress-strain curves. The chemical crosslinking that generates coils and aggregates hinders the free stretching of the triple-helix bundles in the physically-crosslinked gels.

  7. New Implicit General Linear Method | Ibrahim | Journal of the ...

    African Journals Online (AJOL)

    A new implicit general linear method is designed for the numerical solution of stiff differential equations. The coefficient matrix is derived from the stability function. The method combines single-implicitness or diagonal implicitness with the property that the first two rows are implicit and the third and fourth rows are explicit.

  8. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  9. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  10. Greater expectations: using hierarchical linear modeling to examine expectancy for treatment outcome as a predictor of treatment response.

    Science.gov (United States)

    Price, Matthew; Anderson, Page; Henrich, Christopher C; Rothbaum, Barbara Olasov

    2008-12-01

    A client's expectation that therapy will be beneficial has long been considered an important factor contributing to therapeutic outcomes, but recent empirical work examining this hypothesis has primarily yielded null findings. The present study examined the contribution of expectancies for treatment outcome to actual treatment outcome from the start of therapy through 12-month follow-up in a clinical sample of individuals (n=72) treated for fear of flying with either in vivo exposure or virtual reality exposure therapy. Using a piecewise hierarchical linear model, outcome expectancy predicted treatment gains made during therapy but not during follow-up. Compared to lower levels, higher expectations for treatment outcome yielded stronger rates of symptom reduction from the beginning to the end of treatment on 2 standardized self-report questionnaires on fear of flying. The analytic approach of the current study is one potential reason that findings contrast with prior literature. The advantages of using hierarchical linear modeling to assess interindividual differences in longitudinal data are discussed.
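
    A minimal sketch of a piecewise growth model of the kind mentioned above, assuming entirely hypothetical data and variable names: time is split into a treatment-phase slope and a follow-up slope, and a baseline expectancy score is allowed to moderate each phase.

```python
# Illustrative piecewise growth model (names hypothetical, not the study's data):
# time is split into a "during treatment" slope and a "follow-up" slope so that
# a predictor such as outcome expectancy can moderate each phase separately.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
subjects, waves = 72, 5
rows = []
for sid in range(subjects):
    expectancy = rng.normal()
    intercept = rng.normal(scale=0.5)
    for t in range(waves):                      # waves 0-2 treatment, 3-4 follow-up
        t_treat = min(t, 2)                     # piecewise time codes
        t_follow = max(t - 2, 0)
        y = 5 + intercept - (0.8 + 0.3 * expectancy) * t_treat - 0.1 * t_follow \
            + rng.normal(scale=0.4)
        rows.append(dict(sid=sid, expectancy=expectancy,
                         t_treat=t_treat, t_follow=t_follow, y=y))
df = pd.DataFrame(rows)

model = smf.mixedlm("y ~ t_treat * expectancy + t_follow * expectancy",
                    data=df, groups="sid", re_formula="~t_treat")
print(model.fit().summary())
```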

  11. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  12. Neural Generalized Predictive Control of a non-linear Process

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller based on a neural network model which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.

  13. Genetic parameters for racing records in trotters using linear and generalized linear models.

    Science.gov (United States)

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  14. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  15. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  16. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap datasets are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
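
    As a toy illustration of the Laplace approximation at the heart of this approach (not the authors' implementation), the sketch below approximates the marginal likelihood of binomial data with a single Gaussian random effect and compares it with numerical quadrature; all parameter values are made up.

```python
# Toy sketch of the Laplace approximation used for mixed-model likelihoods:
# approximate the integral over a single Gaussian random effect u for binomial
# data, and compare with adaptive quadrature as the "exact" reference.
import numpy as np
from scipy import optimize, integrate, stats

y, n_trials = 7, 20            # successes out of trials in one cluster
sigma_u = 1.0                  # random-effect standard deviation
beta0 = -0.3                   # fixed intercept on the logit scale

def log_joint(u):
    p = 1 / (1 + np.exp(-(beta0 + u)))
    return stats.binom.logpmf(y, n_trials, p) + stats.norm.logpdf(u, 0, sigma_u)

# Laplace: expand log_joint around its mode u_hat to second order.
u_hat = optimize.minimize_scalar(lambda u: -log_joint(u)).x
h = 1e-4
curvature = -(log_joint(u_hat + h) - 2 * log_joint(u_hat) + log_joint(u_hat - h)) / h**2
laplace = np.exp(log_joint(u_hat)) * np.sqrt(2 * np.pi / curvature)

exact, _ = integrate.quad(lambda u: np.exp(log_joint(u)), -10, 10)
print(f"Laplace {laplace:.6f}  vs  quadrature {exact:.6f}")
```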

  17. Testing for one Generalized Linear Single Order Parameter

    DEFF Research Database (Denmark)

    Ellegaard, Niels Langager; Christensen, Tage Emil; Dyre, Jeppe

    We examine a linear single order parameter model for thermoviscoelastic relaxation in viscous liquids, allowing for a distribution of relaxation times. In this model the relaxation of volume and enthalpy is completely described by the relaxation of one internal order parameter. In contrast to prior work, the order parameter may be chosen to have a non-exponential relaxation. The model predictions contradict the general consensus of the properties of viscous liquids in two ways: (i) the model predicts that following a linear isobaric temperature step, the normalized volume and enthalpy relaxation … responses or extrapolate from measurements of a glassy state away from equilibrium. Starting from a master equation description of inherent dynamics, we calculate the complex thermodynamic response functions. We devise a way of testing for the generalized single order parameter model by measuring 3 complex...

  18. Dynamic generalized linear models for monitoring endemic diseases

    DEFF Research Database (Denmark)

    Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq

    2016-01-01

    The objective was to use a Dynamic Generalized Linear Model (DGLM) based on a binomial distribution with a linear trend for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control and eradication programmes based on changes in PRRS sero-prevalence was explored. Results showed a declining trend in PRRS sero-prevalence between 2007 and 2014, suggesting that Danish herds are slowly eradicating PRRS. The simulation study demonstrated the flexibility of DGLMs in adapting to changes in trends in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance...

  19. Linear relativistic gyrokinetic equation in general magnetically confined plasmas

    International Nuclear Information System (INIS)

    Tsai, S.T.; Van Dam, J.W.; Chen, L.

    1983-08-01

    The gyrokinetic formalism for linear electromagnetic waves of arbitrary frequency in general magnetic-field configurations is extended to include full relativistic effects. The derivation employs the small adiabaticity parameter ρ/L₀, where ρ is the Larmor radius and L₀ is the equilibrium scale length. The effects of plasma and magnetic-field inhomogeneities and finite Larmor radius are also included

  20. A general method for enclosing solutions of interval linear equations

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří

    2012-01-01

    Roč. 6, č. 4 (2012), s. 709-717 ISSN 1862-4472 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval linear equations * solution set * enclosure * absolute value inequality Subject RIV: BA - General Mathematics Impact factor: 1.654, year: 2012

  1. General treatment of a non-linear gauge condition

    International Nuclear Information System (INIS)

    Malleville, C.

    1982-06-01

    A non-linear gauge condition is presented in the framework of a non-abelian gauge theory broken via the Higgs mechanism. It is shown that this condition, already introduced for the standard SU(2) x U(1) model, can be generalized to any gauge model with the same type of simplification, namely the suppression of any coupling of the form: massless gauge boson, massive gauge boson, unphysical Higgs [fr

  2. Canonical perturbation theory in linearized general relativity theory

    International Nuclear Information System (INIS)

    Gonzales, R.; Pavlenko, Yu.G.

    1986-01-01

    Canonical perturbation theory in linearized general relativity theory is developed. It is shown that the evolution of an arbitrary dynamical quantity, governed by the interaction of particles with gravitational and electromagnetic fields, can be presented in the form of a series, each term of which corresponds to the contribution of a certain spontaneous or induced process. The main concepts of the approach are presented in the approximation of a weak gravitational field

  3. Electromagnetic axial anomaly in a generalized linear sigma model

    Science.gov (United States)

    Fariborz, Amir H.; Jora, Renata

    2017-06-01

    We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η(958), η(1295) and η(1760). Our results agree well with the available experimental data.

  4. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de; Kühn, Oliver [Institute of Physics, Rostock University, Universitätsplatz 3, 18055 Rostock (Germany)

    2015-06-28

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for a particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.

  5. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    International Nuclear Information System (INIS)

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver

    2015-01-01

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for a particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom

  6. Hierarchical linear modeling (HLM) of longitudinal brain structural and cognitive changes in alcohol-dependent individuals during sobriety

    DEFF Research Database (Denmark)

    Yeh, P.H.; Gazdzinski, S.; Durazzo, T.C.

    2007-01-01

    … (MRI)-derived brain volume changes and cognitive changes in abstinent alcohol-dependent individuals as a function of smoking status, smoking severity, and drinking quantities. Methods: Twenty non-smoking recovering alcoholics (nsALC) and 30 age-matched smoking recovering alcoholics (sALC) underwent quantitative MRI … time points. Using HLM, we modeled volumetric and cognitive outcome measures as a function of cigarette and alcohol use variables. Results: Different hierarchical linear models with unique model structures are presented and discussed. The results show that smaller brain volumes at baseline predict faster brain volume gains, which were also related to greater smoking and drinking severities. Over 7 months of abstinence from alcohol, sALC compared to nsALC showed less improvement in visuospatial learning and memory despite larger brain volume gains and ventricular shrinkage. Conclusions: Different …

  7. Stochastic lumping analysis for linear kinetics and its application to the fluctuation relations between hierarchical kinetic networks

    Energy Technology Data Exchange (ETDEWEB)

    Deng, De-Ming; Chang, Cheng-Hung [Institute of Physics, National Chiao Tung University, Hsinchu 300, Taiwan (China)

    2015-05-14

    Conventional studies of biomolecular behaviors rely largely on the construction of kinetic schemes. Since the selection of these networks is not unique, a concern is raised whether and under which conditions hierarchical schemes can reveal the same experimentally measured fluctuating behaviors and unique fluctuation related physical properties. To clarify these questions, we introduce stochasticity into the traditional lumping analysis, generalize it from rate equations to chemical master equations and stochastic differential equations, and extract the fluctuation relations between kinetically and thermodynamically equivalent networks under intrinsic and extrinsic noises. The results provide a theoretical basis for the legitimate use of low-dimensional models in the studies of macromolecular fluctuations and, more generally, for exploring stochastic features in different levels of contracted networks in chemical and biological kinetic systems.

  8. Stochastic lumping analysis for linear kinetics and its application to the fluctuation relations between hierarchical kinetic networks.

    Science.gov (United States)

    Deng, De-Ming; Chang, Cheng-Hung

    2015-05-14

    Conventional studies of biomolecular behaviors rely largely on the construction of kinetic schemes. Since the selection of these networks is not unique, a concern is raised whether and under which conditions hierarchical schemes can reveal the same experimentally measured fluctuating behaviors and unique fluctuation related physical properties. To clarify these questions, we introduce stochasticity into the traditional lumping analysis, generalize it from rate equations to chemical master equations and stochastic differential equations, and extract the fluctuation relations between kinetically and thermodynamically equivalent networks under intrinsic and extrinsic noises. The results provide a theoretical basis for the legitimate use of low-dimensional models in the studies of macromolecular fluctuations and, more generally, for exploring stochastic features in different levels of contracted networks in chemical and biological kinetic systems.

  9. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard …
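
    A hedged illustration of the Thurstonian link for the 2-AFC case only: proportion correct equals Φ(d'/√2), so d' and a delta-method standard error follow directly from the observed number of correct responses. The counts below are invented.

```python
# Illustration (2-AFC case only): the Thurstonian decision rule gives
# P(correct) = Phi(d' / sqrt(2)), so d' and its standard error can be
# recovered from the observed number of correct responses.
import numpy as np
from scipy import stats

correct, trials = 78, 100
pc = correct / trials

d_prime = np.sqrt(2) * stats.norm.ppf(pc)          # invert pc = Phi(d'/sqrt(2))

# Delta method: Var(pc) = pc(1-pc)/n and d(d')/d(pc) = sqrt(2)/phi(Phi^{-1}(pc))
se_pc = np.sqrt(pc * (1 - pc) / trials)
se_d = np.sqrt(2) / stats.norm.pdf(stats.norm.ppf(pc)) * se_pc

print(f"d' = {d_prime:.3f} (SE {se_d:.3f})")
```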

  10. A Graphical User Interface to Generalized Linear Models in MATLAB

    Directory of Open Access Journals (Sweden)

    Peter Dunn

    1999-07-01

    Full Text Available Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB, software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights and user-defined distributions and link functions. MATLAB's graphical capacities are also utilized in providing a number of simple residual diagnostic plots.

  11. General mirror pairs for gauged linear sigma models

    Energy Technology Data Exchange (ETDEWEB)

    Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University,Box 90320, Durham, NC 27708-0320 (United States)

    2015-11-05

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  12. General mirror pairs for gauged linear sigma models

    International Nuclear Information System (INIS)

    Aspinwall, Paul S.; Plesser, M. Ronen

    2015-01-01

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  13. Polymorphic Uncertain Linear Programming for Generalized Production Planning Problems

    Directory of Open Access Journals (Sweden)

    Xinbo Zhang

    2014-01-01

    Full Text Available A polymorphic uncertain linear programming (PULP) model is constructed to formulate a class of generalized production planning problems. In accordance with the practical environment, factors such as the consumption of raw material, resource limitations and product demand are incorporated into the model as parameters of interval and fuzzy subsets, respectively. Based on the theory of fuzzy interval programming and the modified possibility degree for the order of interval numbers, a deterministic equivalent formulation for this model is derived such that a robust solution for the uncertain optimization problem is obtained. A case study indicates that the constructed model and the proposed solution are useful for finding an optimal production plan for polymorphic uncertain generalized production planning problems.

  14. Generalized space and linear momentum operators in quantum mechanics

    International Nuclear Information System (INIS)

    Costa, Bruno G. da; Borges, Ernesto P.

    2014-01-01

    We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which implies the definition of a Hermitian deformed linear momentum operator p̂_q and its canonically conjugate deformed position operator x̂_q. A canonical transformation leads the Hamiltonian of a position-dependent mass particle to another Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is shown as an instance. Uncertainty and correspondence principles are analyzed

  15. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  16. The Effects Of Gender, Engineering Identification, and Engineering Program Expectancy On Engineering Career Intentions: Applying Hierarchical Linear Modeling (HLM) In Engineering Education Research

    Science.gov (United States)

    Tendhar, Chosang; Paretti, Marie C.; Jones, Brett D.

    2017-01-01

    This study had three purposes, and four hypotheses were tested. The three purposes were: (1) to use hierarchical linear modeling (HLM) to investigate whether students' perceptions of their engineering career intentions changed over time; (2) to use HLM to test the effects of gender, engineering identification (the degree to which an individual values a…

  17. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  18. Using Hierarchical Linear Modeling to Examine How Individual SLPs Differentially Contribute to Children's Language and Literacy Gains in Public Schools.

    Science.gov (United States)

    Farquharson, Kelly; Tambyraja, Sherine R; Logan, Jessica; Justice, Laura M; Schmitt, Mary Beth

    2015-08-01

    The purpose of this study was twofold: (a) to determine the unique contributions in children's language and literacy gains, over 1 academic year, that are attributable to the individual speech-language pathologist (SLP) and (b) to explore possible child- and SLP-level factors that may further explain SLPs' contributions to children's language and literacy gains. Participants were 288 kindergarten and 1st-grade children with language impairment who were currently receiving school-based language intervention from SLPs. Using hierarchical linear modeling, we partitioned the variance in children's gains in language (i.e., grammar, vocabulary) and literacy (i.e., word decoding) that could be attributed to their individual SLP. Results revealed a significant contribution of individual SLPs to children's gains in grammar, vocabulary, and word decoding. Children's fall language scores and grade were significant predictors of SLPs' contributions, although no SLP-level predictors were significant. The present study makes a first step toward incorporating implementation science and suggests that, for children receiving school-based language intervention, variance in child language and literacy gains in an academic year is at least partially attributable to SLPs. Continued work in this area should examine the possible SLP-level characteristics that may further explicate the relative contributions of SLPs.
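
    As a sketch of the variance-partitioning idea, with entirely hypothetical data and effect sizes, a random-intercept model splits gain-score variance into an SLP-level and a child-level component, and the intraclass correlation gives the share attributable to SLPs.

```python
# Sketch (hypothetical data): partition variance in children's gain scores into
# an SLP-level component and a child-level component with a random-intercept
# model; the intraclass correlation is the share attributable to the SLP.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_slps, children_per_slp = 40, 7
rows = []
for slp in range(n_slps):
    slp_effect = rng.normal(scale=2.0)            # SLP-level contribution
    for _ in range(children_per_slp):
        gain = 10 + slp_effect + rng.normal(scale=5.0)
        rows.append(dict(slp=slp, gain=gain))
df = pd.DataFrame(rows)

fit = smf.mixedlm("gain ~ 1", data=df, groups="slp").fit()
var_slp = float(fit.cov_re.iloc[0, 0])            # between-SLP variance
var_child = fit.scale                             # residual (child-level) variance
icc = var_slp / (var_slp + var_child)
print(f"ICC (share of variance attributable to SLPs): {icc:.2f}")
```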

  19. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    DEFF Research Database (Denmark)

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.

  20. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
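
    The sketch below illustrates only the prior at the core of this approach, under assumed scale and mixing values: a mixture of a narrow and a wide double-exponential (Laplace) density whose induced penalty is steep near zero but nearly flat for large coefficients, which is what produces the adaptive shrinkage described above.

```python
# Conceptual sketch of the spike-and-slab double-exponential prior: a mixture of
# a narrow Laplace "spike" and a wide Laplace "slab". The induced penalty
# -log p(beta) is steep near zero (strong shrinkage of irrelevant coefficients)
# but nearly flat for large beta (weak shrinkage of important ones).
# Scale values s0, s1 and the mixing weight w are illustrative choices.
import numpy as np

def laplace_pdf(beta, scale):
    return np.exp(-np.abs(beta) / scale) / (2 * scale)

def spike_slab_penalty(beta, s0=0.05, s1=1.0, w=0.1):
    """-log of the mixture prior (up to a constant)."""
    density = (1 - w) * laplace_pdf(beta, s0) + w * laplace_pdf(beta, s1)
    return -np.log(density)

for b in (0.01, 0.1, 0.5, 2.0, 5.0):
    # The penalty slope flattens as |beta| grows: adaptive shrinkage.
    slope = (spike_slab_penalty(b + 1e-4) - spike_slab_penalty(b)) / 1e-4
    print(f"beta={b:>4}: penalty slope ~ {slope:6.2f}")
```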

  1. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  2. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    KAUST Repository

    Li, Yehua

    2010-06-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  3. The linearized inversion of the generalized interferometric multiple imaging

    KAUST Repository

    Aldawood, Ali

    2016-09-06

    The generalized interferometric multiple imaging (GIMI) procedure can be used to image duplex waves and other higher order internal multiples. Imaging duplex waves could help illuminate subsurface zones that are not easily illuminated by primaries, such as vertical and nearly vertical fault planes, and salt flanks. To image first-order internal multiples, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI procedure yields migrated images that suffer from low spatial resolution, migration artifacts, and cross-talk noise. To alleviate these problems, we propose a least-squares GIMI framework in which we formulate the first two steps as a linearized inversion problem when imaging first-order internal multiples. Tests on synthetic datasets demonstrate the ability to localize subsurface scatterers in their true positions, and delineate a vertical fault plane using the proposed method. We also demonstrate the robustness of the proposed framework when imaging the scatterers or the vertical fault plane with erroneous migration velocities.

  4. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    KAUST Repository

    Li, Yehua; Wang, Naisyin; Carroll, Raymond J.

    2010-01-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  5. Solving Fully Fuzzy Linear System of Equations in General Form

    Directory of Open Access Journals (Sweden)

    A. Yousefzadeh

    2012-06-01

    Full Text Available In this work, we propose an approach for computing the positive solution of a fully fuzzy linear system where the coefficient matrix is a fuzzy n×n matrix. To do this, we use arithmetic operations on fuzzy numbers introduced by Kaffman and convert the fully fuzzy linear system into two crisp linear systems, of sizes n×n and 2n×2n. If the solutions of these linear systems do not satisfy the positive fuzzy solution condition, we introduce a constrained least squares problem to obtain an optimal fuzzy vector solution by applying the ranking function to the given fully fuzzy linear system. Using our proposed method, the fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.
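
    A simplified sketch of the general strategy of reducing a fuzzy system to crisp ones, assuming nonnegative triangular fuzzy numbers and the usual approximate product rather than the paper's exact two-system construction; all numbers are illustrative.

```python
# Simplified sketch (not the paper's exact construction): for nonnegative
# triangular fuzzy numbers written as (center, left spread, right spread) and the
# usual approximate product, a fully fuzzy system A x = b reduces to crisp
# linear systems: one for the centers and one for each spread vector.
import numpy as np

A_m = np.array([[4.0, 1.0], [1.0, 3.0]])     # centers of the coefficient matrix
A_l = np.array([[0.2, 0.1], [0.1, 0.2]])     # left spreads
A_r = np.array([[0.3, 0.1], [0.1, 0.3]])     # right spreads
b_m = np.array([6.0, 5.0])
b_l = np.array([1.0, 0.9])
b_r = np.array([1.2, 1.1])

x_m = np.linalg.solve(A_m, b_m)                     # crisp system for centers
x_l = np.linalg.solve(A_m, b_l - A_l @ x_m)         # crisp system for left spreads
x_r = np.linalg.solve(A_m, b_r - A_r @ x_m)         # crisp system for right spreads

print("centers:", x_m, "left spreads:", x_l, "right spreads:", x_r)
# The result is a valid fuzzy vector only if the spreads come out nonnegative;
# otherwise a constrained least-squares step is needed, as the paper discusses.
```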

  6. New insights into the nature of cerebellar-dependent eyeblink conditioning deficits in schizophrenia: A hierarchical linear modeling approach

    Directory of Open Access Journals (Sweden)

    Amanda R Bolbecker

    2016-01-01

    Full Text Available Evidence of cerebellar dysfunction in schizophrenia has mounted over the past several decades, emerging from neuroimaging, neuropathological, and behavioral studies. Consistent with these findings, cerebellar-dependent delay eyeblink conditioning (dEBC) deficits have been identified in schizophrenia. While repeated measures analysis of variance (ANOVA) is traditionally used to analyze dEBC data, hierarchical linear modeling (HLM) more reliably describes change over time by accounting for the dependence in repeated measures data. This analysis approach is well suited to dEBC data analysis because it has less restrictive assumptions and allows unequal variances. The current study examined dEBC measured with electromyography in a single-cue tone paradigm in an age-matched sample of schizophrenia participants and healthy controls (N=56 per group) using HLM. Subjects participated in 90 trials (10 blocks) of dEBC, during which a 400 ms tone co-terminated with a 50 ms air puff delivered to the left eye. Each block also contained 1 tone-alone trial. The resulting block averages of dEBC data were fitted to a 3-parameter logistic model in HLM, revealing significant differences between schizophrenia and control groups on asymptote and inflection point, but not slope. These findings suggest that while the learning rate is not significantly different compared to controls, associative learning begins to level off later and a lower ultimate level of associative learning is achieved in schizophrenia. Given the large sample size in the present study, HLM may provide a more nuanced and definitive analysis of differences between schizophrenia and controls on dEBC.
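
    The sketch below fits the 3-parameter logistic curve referred to above (asymptote, inflection point, slope) to simulated block averages with a generic least-squares routine; it is an illustration of the functional form only, not the study's HLM fit.

```python
# Sketch with simulated block averages: fit the 3-parameter logistic learning
# curve y = asymptote / (1 + exp(-slope * (block - inflection))) that underlies
# the comparison of asymptote, inflection point, and slope between groups.
import numpy as np
from scipy.optimize import curve_fit

def logistic3(block, asymptote, inflection, slope):
    return asymptote / (1 + np.exp(-slope * (block - inflection)))

rng = np.random.default_rng(5)
blocks = np.arange(1, 11)                                   # 10 blocks of trials
true_curve = logistic3(blocks, asymptote=0.8, inflection=4.0, slope=1.2)
observed = np.clip(true_curve + rng.normal(scale=0.05, size=blocks.size), 0, 1)

params, _ = curve_fit(logistic3, blocks, observed, p0=[0.7, 5.0, 1.0])
print("asymptote=%.2f  inflection=%.2f  slope=%.2f" % tuple(params))
```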

  7. Cavity characterization for general use in linear electron accelerators

    International Nuclear Information System (INIS)

    Souza Neto, M.V. de.

    1985-01-01

    The main objective of this work is to develop measurement techniques for the characterization of microwave cavities used in linear electron accelerators. Methods are developed for the measurement of parameters that are essential to the design of an accelerator structure using conventional techniques of resonant cavities at low power. Disk-loaded cavities were designed and built, similar to those in most existing linear electron accelerators. As a result, the methods developed and the estimated accuracy were compared with those from other investigators. The results of this work are relevant for the design of cavities with the objective of developing linear electron accelerators. (author) [pt

  8. Setting a generalized functional linear model (GFLM) for the classification of different types of cancer

    Directory of Open Access Journals (Sweden)

    Miguel Flores

    2016-11-01

    Full Text Available This work aims to classify the DNA sequences of healthy and malignant cancer samples. For this, supervised and unsupervised classification methods from a functional context are used; i.e., each strand of DNA is an observation. The observations are discretized; for that reason, different ways of representing these observations with functions are evaluated. In addition, an exploratory study is done, estimating the functional mean and variance for each type of cancer. For the unsupervised classification method, hierarchical clustering with different measures of functional distance is used. On the other hand, for the supervised classification method, a functional generalized linear model is used. For this model the first and second derivatives are used, which are included as discriminating variables. It has been verified that one of the advantages of working in the functional context is obtaining a model that correctly classifies cancers by 100%. For the implementation of the methods, the fda.usc R package has been used, which includes all the techniques of functional data analysis used in this work, as well as others developed in recent decades. For more details on these techniques, see Ramsay, J. O. and Silverman (2005) and Ferraty et al. (2006).
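
    The record above uses the R package fda.usc; the sketch below is a rough Python analogue of the same pipeline under stated assumptions: discretized curves as functional observations, first differences as a stand-in for derivatives, and a logistic (generalized linear) classifier on functional principal component scores, all on simulated data.

```python
# Rough Python analogue of a functional-classification pipeline (the record
# itself uses R's fda.usc). Discretized curves are treated as functional
# observations, first differences stand in for derivatives, and a logistic
# classifier is applied to functional principal component scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n_curves, n_points = 120, 60
t = np.linspace(0, 1, n_points)
labels = rng.binomial(1, 0.5, size=n_curves)
curves = np.array([np.sin(2 * np.pi * (t + 0.05 * y))
                   + rng.normal(scale=0.2, size=n_points) for y in labels])

features = np.hstack([curves, np.diff(curves, axis=1)])   # curve + first "derivative"
clf = make_pipeline(PCA(n_components=8), LogisticRegression(max_iter=1000))
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```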

  9. Hierarchical Control of Droop-Controlled DC and AC Microgrids - A General Approach Towards Standardization

    DEFF Research Database (Denmark)

    Guerrero, Josep M.; Vásquez, Juan V.; Teodorescu, Remus

    2009-01-01

    DC and AC Microgrids are key elements to integrate renewable and distributed energy resources as well as distributed energy storage systems. In recent years, efforts toward the standardization of these Microgrids have been made. In this sense, this paper presents the hierarchical control derived...

  10. A note on hierarchical hubbing for a generalization of the VPN problem

    NARCIS (Netherlands)

    N.K. Olver (Neil)

    2014-01-01

    Robust network design refers to a class of optimization problems that occur when designing networks to efficiently handle variable demands. The notion of "hierarchical hubbing" was introduced (in the narrow context of a specific robust network design question) by Olver and Shepherd

  11. A note on hierarchical hubbing for a generalization of the VPN problem

    NARCIS (Netherlands)

    N.K. Olver (Neil)

    2016-01-01

    Robust network design refers to a class of optimization problems that occur when designing networks to efficiently handle variable demands. In this context, Fréchette et al. (2013) recently explored hierarchical hubbing: a routing strategy involving a multiplicity of "hubs" connected to

  12. A note on hierarchical hubbing for a generalization of the VPN problem

    NARCIS (Netherlands)

    Olver, N.K.

    2014-01-01

    Robust network design refers to a class of optimization problems that occur when designing networks to efficiently handle variable demands. The notion of "hierarchical hubbing" was introduced (in the narrow context of a specific robust network design question) by Olver and Shepherd [2010].

  13. A note on hierarchical hubbing for a generalization of the VPN problem

    NARCIS (Netherlands)

    Olver, Neil

    2016-01-01

    Robust network design refers to a class of optimization problems that occur when designing networks to efficiently handle variable demands. In this context, Fréchette et al. (2013) recently explored hierarchical hubbing: a routing strategy involving a multiplicity of "hubs" connected to terminals

  14. Generalization of the linear algebraic method to three dimensions

    International Nuclear Information System (INIS)

    Lynch, D.L.; Schneider, B.I.

    1991-01-01

    We present a numerical method for the solution of the Lippmann-Schwinger equation for electron-molecule collisions. By performing a three-dimensional numerical quadrature, this approach avoids both a basis-set representation of the wave function and a partial-wave expansion of the scattering potential. The resulting linear equations, analogous in form to the one-dimensional linear algebraic method, are solved with the direct iteration-variation method. Several numerical examples are presented. The prospect for using this numerical quadrature scheme for electron-polyatomic molecules is discussed

  15. Admissible Estimators in the General Multivariate Linear Model with Respect to Inequality Restricted Parameter Set

    Directory of Open Access Journals (Sweden)

    Shangli Zhang

    2009-01-01

    By using the methods of linear algebra and matrix inequality theory, we obtain the characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, the necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible are established.

  16. Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs

    Science.gov (United States)

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-01-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…

  17. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas; Richtarik, Peter

    2017-01-01

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss

  18. General guidelines solution for linear programming with fuzzy coefficients

    Directory of Open Access Journals (Sweden)

    Sergio Gerardo de los Cobos Silva

    2013-08-01

    This work introduces Possibilistic Programming and Fuzzy Programming as paradigms for solving linear programming problems in which the coefficients of the model or of the constraints are given as fuzzy numbers rather than exact (crisp) numbers. Some examples based on [1] are presented.

  19. A General Linear Method for Equating with Small Samples

    Science.gov (United States)

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  20. General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles.

    Science.gov (United States)

    Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J

    2017-09-29

    The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.

  1. A General Synthesis Strategy for Hierarchical Porous Metal Oxide Hollow Spheres

    Directory of Open Access Journals (Sweden)

    Huadong Fu

    2015-01-01

    Hierarchical porous TiO2 hollow spheres were successfully prepared by using hydrothermally synthesized colloidal carbon spheres as templates and tetrabutyl titanate as the inorganic precursor. The diameter and wall thickness of the hollow TiO2 spheres were determined by the hard templates and the concentration of tetrabutyl titanate. The particle size, dispersity, homogeneity, and surface state of the carbon spheres can be easily controlled by adjusting the hydrothermal conditions and adding a certain amount of surfactant. The prepared hollow spheres possessed a perfect spherical shape, monodispersity, and hierarchical pore structures, and further experiments verified that the present approach can be used to prepare other metal oxide hollow spheres, which could find use in catalysis, fuel cells, lithium-air batteries, gas sensors, and so on.

  2. Dark energy cosmology with generalized linear equation of state

    International Nuclear Information System (INIS)

    Babichev, E; Dokuchaev, V; Eroshenko, Yu

    2005-01-01

    Dark energy with the usually used equation of state p = wρ, where w = const < 0, is hydrodynamically unstable. To overcome this difficulty we consider the cosmology of a perfect fluid with a linear equation of state of the more general form p = α(ρ - ρ0), where the constants α and ρ0 are free parameters. This non-homogeneous linear equation of state provides the description of both hydrodynamically stable (α > 0) and unstable (α < 0) fluids. In particular, the considered cosmological model describes the hydrodynamically stable dark (and phantom) energy. The possible types of cosmological scenarios in this model are determined and classified in terms of attractors and unstable points by using phase trajectory analysis. For the dark energy case, some distinctive types of cosmological scenarios are possible: (i) the universe with the de Sitter attractor at late times, (ii) the bouncing universe, (iii) the universe with the big rip and with the anti-big rip. In the framework of a linear equation of state the universe filled with a phantom energy, w < -1, may have either the de Sitter attractor or the big rip

  3. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas

    2017-10-30

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
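
    The update analysed in this record combines an SGD step with a momentum term: x_{k+1} = x_k - alpha * grad f_i(x_k) + beta * (x_k - x_{k-1}). The Python sketch below runs it on a consistent synthetic least-squares problem (the setting in which linear convergence is possible); the step size, momentum value, and data are assumptions, not the paper's choices.

```python
# Minimal sketch of the stochastic heavy ball method on a consistent
# least-squares problem: an SGD step on one sampled row plus momentum.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                                        # consistent system

alpha, beta = 0.01, 0.5                               # step size and momentum (illustrative)
x = np.zeros(10)
x_prev = x.copy()

for _ in range(20000):
    i = rng.integers(A.shape[0])                      # sample one data point
    grad_i = (A[i] @ x - b[i]) * A[i]                 # stochastic gradient
    x_next = x - alpha * grad_i + beta * (x - x_prev) # SGD step + heavy ball momentum
    x_prev, x = x, x_next

print("distance to solution:", np.linalg.norm(x - x_true))
```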

  4. Generalizing a categorization of students' interpretations of linear kinematics graphs

    OpenAIRE

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-01-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and ...

  5. Generalizing a categorization of students’ interpretations of linear kinematics graphs

    Directory of Open Access Journals (Sweden)

    Laurens Bollen

    2016-02-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.

  6. Generalizing a categorization of students' interpretations of linear kinematics graphs

    Science.gov (United States)

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-06-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.

  7. General solutions of second-order linear difference equations of Euler type

    Directory of Open Access Journals (Sweden)

    Akane Hongyo

    2017-01-01

    The purpose of this paper is to give general solutions of linear difference equations which are related to the Euler-Cauchy differential equation \(y^{\prime\prime}+(\lambda/t^2)y=0\) or more general linear differential equations. We also show that the asymptotic behavior of solutions of the linear difference equations is similar to that of solutions of the linear differential equations.

  8. Mental and physical health correlates among family caregivers of patients with newly-diagnosed incurable cancer: a hierarchical linear regression analysis.

    Science.gov (United States)

    Shaffer, Kelly M; Jacobs, Jamie M; Nipp, Ryan D; Carr, Alaina; Jackson, Vicki A; Park, Elyse R; Pirl, William F; El-Jawahri, Areej; Gallagher, Emily R; Greer, Joseph A; Temel, Jennifer S

    2017-03-01

    Caregiver, relational, and patient factors have been associated with the health of family members and friends providing care to patients with early-stage cancer. Little research has examined whether findings extend to family caregivers of patients with incurable cancer, who experience unique and substantial caregiving burdens. We examined correlates of mental and physical health among caregivers of patients with newly-diagnosed incurable lung or non-colorectal gastrointestinal cancer. At baseline for a trial of early palliative care, caregivers of participating patients (N = 275) reported their mental and physical health (Medical Outcome Survey-Short Form-36); patients reported their quality of life (Functional Assessment of Cancer Therapy-General). Analyses used hierarchical linear regression with two-tailed significance tests. Caregivers' mental health was worse than the U.S. national population (M = 44.31, p caregiver, relational, and patient factors simultaneously revealed that younger (B = 0.31, p = .001), spousal caregivers (B = -8.70, p = .003), who cared for patients reporting low emotional well-being (B = 0.51, p = .01) reported worse mental health; older (B = -0.17, p = .01) caregivers with low educational attainment (B = 4.36, p family caregivers of patients with incurable cancer, caregiver demographics, relational factors, and patient-specific factors were all related to caregiver mental health, while caregiver demographics were primarily associated with caregiver physical health. These findings help identify characteristics of family caregivers at highest risk of poor mental and physical health who may benefit from greater supportive care.
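
    "Hierarchical linear regression" in this record means entering blocks of predictors sequentially (caregiver, then relational, then patient factors) and examining how the explained variance changes. The Python sketch below illustrates that blockwise strategy on simulated data; the variable names, effect sizes, and data are assumptions made only to show the mechanics, not quantities from the study.

```python
# Minimal sketch of hierarchical (blockwise) linear regression: fit nested OLS
# models, adding one block of predictors at a time, and compare R-squared.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 275
df = pd.DataFrame({
    "caregiver_age": rng.normal(55, 10, n),          # caregiver factor
    "spouse": rng.integers(0, 2, n),                 # relational factor
    "patient_wellbeing": rng.normal(0, 1, n),        # patient factor
})
df["mental_health"] = (0.2 * df.caregiver_age - 5 * df.spouse
                       + 3 * df.patient_wellbeing + rng.normal(0, 5, n))

blocks = ["caregiver_age",
          "caregiver_age + spouse",
          "caregiver_age + spouse + patient_wellbeing"]
for rhs in blocks:
    fit = smf.ols("mental_health ~ " + rhs, data=df).fit()
    print(f"predictors [{rhs}]  R^2 = {fit.rsquared:.3f}")
```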

  9. Generalized linear mixed models modern concepts, methods and applications

    CERN Document Server

    Stroup, Walter W

    2012-01-01

    Contents (excerpt): PART I, The Big Picture; Modeling Basics; What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model; Design Matters; Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication; Setting the Stage; Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data

  10. A general algorithm for computing distance transforms in linear time

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS

    2000-01-01

    A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the
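
    For the special case of the city-block (L1) metric, the two-phase, two-scan structure described above can be written in a few lines: a column-wise downward/upward pass followed by a row-wise left/right pass. The Python sketch below shows only this simple case, not the general algorithm of the record (which also covers other metrics), and the test image is an assumption.

```python
# Minimal sketch of a two-phase, two-scan city-block (L1) distance transform.
# Phase 1 scans each column down and up; phase 2 scans each row left and right.
import numpy as np

def l1_distance_transform(foreground):
    """Distance of every pixel to the nearest True pixel, in the L1 metric."""
    h, w = foreground.shape
    inf = h + w                                   # larger than any possible L1 distance
    d = np.where(foreground, 0, inf)

    # Phase 1: column-wise forward (top-down) and backward (bottom-up) scans.
    for y in range(1, h):
        d[y] = np.minimum(d[y], d[y - 1] + 1)
    for y in range(h - 2, -1, -1):
        d[y] = np.minimum(d[y], d[y + 1] + 1)

    # Phase 2: row-wise forward (left-right) and backward (right-left) scans.
    for x in range(1, w):
        d[:, x] = np.minimum(d[:, x], d[:, x - 1] + 1)
    for x in range(w - 2, -1, -1):
        d[:, x] = np.minimum(d[:, x], d[:, x + 1] + 1)
    return d

image = np.zeros((5, 7), dtype=bool)
image[2, 3] = True                                # single foreground pixel
print(l1_distance_transform(image))
```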

  11. Generalized Heisenberg algebra and (non linear) pseudo-bosons

    Science.gov (United States)

    Bagarello, F.; Curado, E. M. F.; Gazeau, J. P.

    2018-04-01

    We propose a deformed version of the generalized Heisenberg algebra by using techniques borrowed from the theory of pseudo-bosons. In particular, this analysis is relevant when non self-adjoint Hamiltonians are needed to describe a given physical system. We also discuss relations with nonlinear pseudo-bosons. Several examples are discussed.

  12. A study of the linear free energy model for DNA structures using the generalized Hamiltonian formalism

    Energy Technology Data Exchange (ETDEWEB)

    Yavari, M., E-mail: yavari@iaukashan.ac.ir [Islamic Azad University, Kashan Branch (Iran, Islamic Republic of)

    2016-06-15

    We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.

  13. Linear-time general decoding algorithm for the surface code

    Science.gov (United States)

    Darmawan, Andrew S.; Poulin, David

    2018-05-01

    A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.

  14. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

    Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive amount of shrinkage on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.
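
    In the spike-and-slab lasso family on which this model builds, each coefficient receives a mixture double-exponential (Laplace) prior whose scale is controlled by a group-level indicator. A schematic form (the notation is a standard presentation of this prior family and may differ in detail from the paper's) is

    \[ \beta_j \mid \gamma_{g(j)} \sim \mathrm{DE}\bigl(0,\; (1-\gamma_{g(j)})\, s_0 + \gamma_{g(j)}\, s_1 \bigr), \qquad \gamma_{g} \sim \mathrm{Bernoulli}(\theta), \qquad s_0 \ll s_1, \]

    where \(g(j)\) is the group (pathway) containing gene \(j\), the small scale \(s_0\) acts as the "spike" that shrinks null coefficients strongly toward zero, and the larger scale \(s_1\) acts as the "slab" that leaves non-null coefficients nearly unpenalized, which is what produces the self-adaptive, group-aware shrinkage described above.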

  15. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.

  16. The Morava E-theories of finite general linear groups

    Science.gov (United States)

    Mattafirri, Sara

    block detector few centimeters in size is used. The resolution significantly improves with increasing energy of the photons and it degrades roughly linearly with increasing distance from the detector; Larger detection efficiency can be obtained at the expenses of resolution or via targeted configurations of the detector. Results pave the way for image reconstruction of practical gamma-ray emitting sources.

  17. Spatio-temporal map generalizations with the hierarchical Voronoi data structure

    DEFF Research Database (Denmark)

    Mioc, Darka; Anton, François; Gold, Christopher M.

    implemented in commercial GIS systems. In this research, we used the Voronoi spatial data model for map generalizations. We were able to demonstrate that map generalization affects not only spatial objects (points, lines or polygons), but also the events corresponding to the creation and modification ... their spatio-temporal characteristics and their dynamic behaviour...

  18. On-line validation of linear process models using generalized likelihood ratios

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator

  19. Sensitivity theory for general non-linear algebraic equations with constraints

    International Nuclear Information System (INIS)

    Oblow, E.M.

    1977-04-01

    Sensitivity theory has been developed to a high state of sophistication for applications involving solutions of the linear Boltzmann equation or approximations to it. The success of this theory in the field of radiation transport has prompted study of possible extensions of the method to more general systems of non-linear equations. Initial work in the U.S. and in Europe on the reactor fuel cycle shows that the sensitivity methodology works equally well for those non-linear problems studied to date. The general non-linear theory for algebraic equations is summarized and applied to a class of problems whose solutions are characterized by constrained extrema. Such equations form the basis of much work on energy systems modelling and the econometrics of power production and distribution. It is valuable to have a sensitivity theory available for these problem areas since it is difficult to repeatedly solve complex non-linear equations to find out the effects of alternative input assumptions or the uncertainties associated with predictions of system behavior. The sensitivity theory for a linear system of algebraic equations with constraints which can be solved using linear programming techniques is discussed. The role of the constraints in simplifying the problem so that sensitivity methodology can be applied is highlighted. The general non-linear method is summarized and applied to a non-linear programming problem in particular. Conclusions are drawn about the applicability of the method for practical problems

  20. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.

    2014-01-01

    We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based

  1. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    Science.gov (United States)

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  2. Contact symmetries of general linear second-order ordinary differential equations: letter to the editor

    NARCIS (Netherlands)

    Martini, Ruud; Kersten, P.H.M.

    1983-01-01

    Using 1-1 mappings, the complete symmetry groups of contact transformations of general linear second-order ordinary differential equations are determined from two independent solutions of those equations, and applied to the harmonic oscillator with and without damping.

  3. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    Science.gov (United States)

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  4. Using hierarchical linear models to test differences in Swedish results from OECD’s PISA 2003: Integrated and subject-specific science education

    Directory of Open Access Journals (Sweden)

    Maria Åström

    2012-06-01

    The possible effects of different organisations of the science curriculum in schools participating in PISA 2003 are tested with a two-level hierarchical linear model (HLM). The analysis is based on science results. Swedish schools are free to choose how they organise the science curriculum. They may choose to work subject-specifically (with Biology, Chemistry and Physics), integrated (with Science), or to mix these two. In this study, all three ways of organising science classes in compulsory school are present to some degree. None of the different ways of organising science education displayed statistically significantly better student results in scientific literacy as measured in PISA 2003. The HLM model used variables of gender, country of birth, home language, preschool attendance, an economic, social and cultural index, as well as the teaching organisation.
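
    A two-level model of the kind described here (students nested in schools, with a school-level random intercept and the curriculum organisation entered as a school-level predictor) can be sketched with statsmodels as below; the data frame, variable names, and formula are assumptions for illustration and are not the PISA 2003 variables.

```python
# Minimal sketch of a two-level hierarchical linear model: student scores nested
# within schools, a random intercept per school, and a school-level predictor
# for how the science curriculum is organised. Simulated data, not PISA 2003.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_schools, n_students = 50, 30
school = np.repeat(np.arange(n_schools), n_students)
organisation = np.repeat(rng.integers(0, 3, n_schools), n_students)  # 0=subject-specific, 1=integrated, 2=mixed
school_effect = np.repeat(rng.normal(0, 20, n_schools), n_students)
girl = rng.integers(0, 2, school.size)

# No true organisation effect is built in, mirroring the null result reported above.
score = 500 + 5 * girl + school_effect + rng.normal(0, 60, school.size)
df = pd.DataFrame({"score": score, "girl": girl,
                   "organisation": organisation, "school": school})

model = smf.mixedlm("score ~ girl + C(organisation)", data=df, groups=df["school"])
print(model.fit().summary())
```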

  5. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
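
    The distinction drawn above is easy to see numerically: in a logistic model, evaluating the fitted curve at the mean covariate is not the same as averaging the fitted probabilities over the covariate distribution. The Python sketch below only demonstrates that gap on simulated data; it is not the consistent group-mean estimator proposed in the paper, and all data and values are assumptions.

```python
# Minimal sketch: in a logistic (generalized linear) model, the response at the
# mean covariate differs from the mean of the predicted responses, so a
# "model-based group mean" computed at the mean covariate can be misleading.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(0, 2, n)                          # baseline covariate
p = 1 / (1 + np.exp(-(-1.0 + 1.5 * x)))          # true event probabilities
y = rng.binomial(1, p)

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

prob_at_mean_x = fit.predict(np.array([[1.0, x.mean()]]))[0]   # response at mean covariate
mean_of_probs = fit.predict(X).mean()                          # average of predicted responses

print(f"response at mean covariate: {prob_at_mean_x:.3f}")
print(f"mean predicted response:    {mean_of_probs:.3f}")
print(f"observed event rate:        {y.mean():.3f}")
```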

  6. A hierarchical generalization of the acoustic reciprocity theorem involving higher-order derivatives and interaction quantities.

    Science.gov (United States)

    Lin, Ju; Li, Jie; Li, Xiaolei; Wang, Ning

    2016-10-01

    An acoustic reciprocity theorem is generalized, for a smoothly varying perturbed medium, to a hierarchy of reciprocity theorems including higher-order derivatives of acoustic fields. The standard reciprocity theorem is the first member of the hierarchy. It is shown that the conservation of higher-order interaction quantities is related closely to higher-order derivative distributions of perturbed media. Then integral reciprocity theorems are obtained by applying Gauss's divergence theorem, which give explicit integral representations connecting higher-order interactions and higher-order derivative distributions of perturbed media. Some possible applications to an inverse problem are also discussed.

  7. Hierarchical multiscale modeling for flows in fractured media using generalized multiscale finite element method

    KAUST Repository

    Efendiev, Yalchin R.

    2015-06-05

    In this paper, we develop a multiscale finite element method for solving flows in fractured media. Our approach is based on generalized multiscale finite element method (GMsFEM), where we represent the fracture effects on a coarse grid via multiscale basis functions. These multiscale basis functions are constructed in the offline stage via local spectral problems following GMsFEM. To represent the fractures on the fine grid, we consider two approaches (1) discrete fracture model (DFM) (2) embedded fracture model (EFM) and their combination. In DFM, the fractures are resolved via the fine grid, while in EFM the fracture and the fine grid block interaction is represented as a source term. In the proposed multiscale method, additional multiscale basis functions are used to represent the long fractures, while short-size fractures are collectively represented by a single basis functions. The procedure is automatically done via local spectral problems. In this regard, our approach shares common concepts with several approaches proposed in the literature as we discuss. We would like to emphasize that our goal is not to compare DFM with EFM, but rather to develop GMsFEM framework which uses these (DFM or EFM) fine-grid discretization techniques. Numerical results are presented, where we demonstrate how one can adaptively add basis functions in the regions of interest based on error indicators. We also discuss the use of randomized snapshots (Calo et al. Randomized oversampling for generalized multiscale finite element methods, 2014), which reduces the offline computational cost.

  8. Introduction into Hierarchical Matrices

    KAUST Repository

    Litvinenko, Alexander

    2013-12-05

    Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.

  9. Introduction into Hierarchical Matrices

    KAUST Repository

    Litvinenko, Alexander

    2013-01-01

    Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
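
    The storage reduction mentioned in these two records comes from approximating admissible off-diagonal blocks by low-rank factors. The Python sketch below compresses one off-diagonal block of a matrix generated by a smooth kernel with a truncated SVD; it illustrates the low-rank idea behind hierarchical matrices rather than a full H-matrix implementation, and the kernel, block choice, and tolerance are assumptions.

```python
# Minimal sketch of the idea behind hierarchical (H-) matrices: an off-diagonal
# block of a matrix generated by a smooth kernel is numerically low-rank, so it
# can be stored as two thin factors instead of as a dense block.
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
kernel = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))    # smooth, decaying kernel matrix

block = kernel[: n // 2, n // 2 :]                         # off-diagonal block
U, s, Vt = np.linalg.svd(block, full_matrices=False)

tol = 1e-8
rank = int(np.sum(s > tol * s[0]))                         # numerical rank at this tolerance
A, B = U[:, :rank] * s[:rank], Vt[:rank, :]                # thin factors: block ~ A @ B

print(f"numerical rank: {rank} (block is {block.shape[0]} x {block.shape[1]})")
print(f"storage: {block.size} entries -> {A.size + B.size} entries")
print(f"relative error: {np.linalg.norm(block - A @ B) / np.linalg.norm(block):.2e}")
```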

  10. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    Science.gov (United States)

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear programming is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  11. Hierarchically organized architecture of potassium hydrogen phthalate and poly(acrylic acid): toward a general strategy for biomimetic crystal design.

    Science.gov (United States)

    Oaki, Yuya; Imai, Hiroaki

    2005-12-28

    A hierarchically organized architecture in multiple scales was generated from potassium hydrogen phthalate crystals and poly(acrylic acid) based on our novel biomimetic approach with an exquisite association of polymers on crystallization.

  12. Generalized linear elastic fracture mechanics: an application to a crack touching the bimaterial interface

    Czech Academy of Sciences Publication Activity Database

    Náhlík, Luboš; Šestáková, L.; Hutař, Pavel; Knésl, Zdeněk

    2011-01-01

    Roč. 452-453, - (2011), s. 445-448 ISSN 1013-9826 R&D Projects: GA AV ČR(CZ) KJB200410803; GA ČR GA101/09/1821 Institutional research plan: CEZ:AV0Z20410507 Keywords : generalized stress intensity factor * bimaterial interface * composite materials * strain energy density factor * fracture criterion * generalized linear elastic fracture mechanics Subject RIV: JL - Materials Fatigue, Friction Mechanics

  13. Generalized multivariate Fokker-Planck equations derived from kinetic transport theory and linear nonequilibrium thermodynamics

    International Nuclear Information System (INIS)

    Frank, T.D.

    2002-01-01

    We study many particle systems in the context of mean field forces, concentration-dependent diffusion coefficients, generalized equilibrium distributions, and quantum statistics. Using kinetic transport theory and linear nonequilibrium thermodynamics we derive for these systems a generalized multivariate Fokker-Planck equation. It is shown that this Fokker-Planck equation describes relaxation processes, has stationary maximum entropy distributions, can have multiple stationary solutions and stationary solutions that differ from Boltzmann distributions

  14. The theory of a general quantum system interacting with a linear dissipative system

    International Nuclear Information System (INIS)

    Feynman, R.P.; Vernon, F.L.

    2000-01-01

    A formalism has been developed, using Feynman's space-time formulation of nonrelativistic quantum mechanics whereby the behavior of a system of interest, which is coupled to other external quantum systems, may be calculated in terms of its own variables only. It is shown that the effect of the external systems in such a formalism can always be included in a general class of functionals (influence functionals) of the coordinates of the system only. The properties of influence functionals for general systems are examined. Then, specific forms of influence functionals representing the effect of definite and random classical forces, linear dissipative systems at finite temperatures, and combinations of these are analyzed in detail. The linear system analysis is first done for perfectly linear systems composed of combinations of harmonic oscillators, loss being introduced by continuous distributions of oscillators. Then approximately linear systems and restrictions necessary for the linear behavior are considered. Influence functionals for all linear systems are shown to have the same form in terms of their classical response functions. In addition, a fluctuation-dissipation theorem is derived relating temperature and dissipation of the linear system to a fluctuating classical potential acting on the system of interest which reduces to the Nyquist-Johnson relation for noise in the case of electric circuits. Sample calculations of transition probabilities for the spontaneous emission of an atom in free space and in a cavity are made. Finally, a theorem is proved showing that within the requirements of linearity all sources of noise or quantum fluctuation introduced by maser-type amplification devices are accounted for by a classical calculation of the characteristics of the maser

  15. On Extended Exponential General Linear Methods PSQ with S>Q ...

    African Journals Online (AJOL)

    This paper is concerned with the construction and numerical analysis of Extended Exponential General Linear Methods. These methods, in contrast to other methods in the literature, consider methods whose step number is greater than the stage order (S>Q). Numerical experiments in this study indicate that Extended Exponential ...

  16. More on Generalizations and Modifications of Iterative Methods for Solving Large Sparse Indefinite Linear Systems

    Directory of Open Access Journals (Sweden)

    Jen-Yuan Chen

    2014-01-01

    Continuing from the works of Li et al. (2014), Li (2007), and Kincaid et al. (2000), we present more generalizations and modifications of iterative methods for solving large sparse symmetric and nonsymmetric indefinite systems of linear equations. We discuss a variety of iterative methods such as GMRES, MGMRES, MINRES, LQ-MINRES, QR MINRES, MMINRES, MGRES, and others.

  17. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  18. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...

  19. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  20. A generalized variational algebra and conserved densities for linear evolution equations

    International Nuclear Information System (INIS)

    Abellanas, L.; Galindo, A.

    1978-01-01

    The symbolic algebra of Gel'fand and Dikii is generalized to the case of n variables. Using this algebraic approach a rigorous characterization of the polynomial kernel of the variational derivative is given. This is applied to classify all the conservation laws for linear polynomial evolution equations of arbitrary order. (Auth.)

  1. A differential-geometric approach to generalized linear models with grouped predictors

    NARCIS (Netherlands)

    Augugliaro, Luigi; Mineo, Angelo M.; Wit, Ernst C.

    We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important

  2. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    Science.gov (United States)

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  3. Analyzing Economic Attainment Patterns of Foreign Born Latin American Male Immigrants to The United States: an Example Using Hierarchical Linear Modeling

    Directory of Open Access Journals (Sweden)

    David J. Gotcher

    2001-09-01

    This paper examines and endeavors to account for variation in the economic attainments of immigrants to the United States from Latin America, through the use of Hierarchical Linear Modeling. When analyzing this variation, researchers typically choose between two competing explanations. Human capital theory contends that variation in economic attainment is a product of different characteristics of individuals. Social capital theory contends that variation in economic attainment is a product of differences in the characteristics of the societies from which the workers come. The author's central thesis is that we need not choose between human and social capital theories, that we can rely on both theoretical approaches, and that it is an empirical and not a theoretical question how much variation can be explained by one set of factors versus the other. The real problem then is to build an appropriate methodology that allows us to partition the variation in economic attainments, identifying how much is explained by individual and how much by group characteristics. Using a multi-level modeling technique, this research presents such a methodology.

  4. Preoperative factors affecting cost and length of stay for isolated off-pump coronary artery bypass grafting: hierarchical linear model analysis.

    Science.gov (United States)

    Shinjo, Daisuke; Fushimi, Kiyohide

    2015-11-17

    To determine the effect of preoperative patient and hospital factors on resource use, cost and length of stay (LOS) among patients undergoing off-pump coronary artery bypass grafting (OPCAB). Observational retrospective study. Data from the Japanese Administrative Database. Patients who underwent isolated, elective OPCAB between April 2011 and March 2012. The primary outcomes of this study were inpatient cost and LOS associated with OPCAB. A two-level hierarchical linear model was used to examine the effects of patient and hospital characteristics on inpatient costs and LOS. The independent variables were patient and hospital factors. We identified 2491 patients who underwent OPCAB at 268 hospitals. The mean cost of OPCAB was $40 665 ±7774, and the mean LOS was 23.4±8.2 days. The study found that select patient factors and certain comorbidities were associated with a high cost and long LOS. A high hospital OPCAB volume was associated with a low cost (-6.6%; p=0.024) as well as a short LOS (-17.6%, pcost and LOS. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  5. Subject-Verb Agreement in Children and Adults: Serial or Hierarchical Processing?

    Science.gov (United States)

    Negro, Isabelle; Chanquoy, Lucile; Fayol, Michel; Louis-Sidney, Maryse

    2005-01-01

    Two processes, serial and hierarchical, are generally opposed to account for grammatical encoding in language production. In a developmental perspective, the question addressed here is whether the subject-verb agreement during writing is computed serially, once the words are linearly ordered in the sentence, or hierarchically, as soon as the…

  6. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  7. Modulated molecular beam mass spectrometry: A generalized expression for the ''reaction product vector'' for linear systems

    International Nuclear Information System (INIS)

    Chang, H.; Weinberg, W.H.

    1977-01-01

    A generalized expression is developed that relates the ''reaction product vector'', epsilon exp(-iphi), to the kinetic parameters of a linear system. The formalism is appropriate for the analysis of modulated molecular beam mass spectrometry data and facilitates the correlation of experimental results to (proposed) linear models. A study of stability criteria appropriate for modulated molecular beam mass spectrometry experiments is also presented. This investigation has led to interesting inherent limitations which have not heretofore been emphasized, as well as a delineation of the conditions under which stable chemical oscillations may occur in the reacting system

  8. An analogue of Morse theory for planar linear networks and the generalized Steiner problem

    International Nuclear Information System (INIS)

    Karpunin, G A

    2000-01-01

    A study is made of the generalized Steiner problem: the problem of finding all the locally minimal networks spanning a given boundary set (terminal set). It is proposed to solve this problem by using an analogue of Morse theory developed here for planar linear networks. The space K of all planar linear networks spanning a given boundary set is constructed. The concept of a critical point and its index is defined for the length function l of a planar linear network. It is shown that locally minimal networks are local minima of l on K and are critical points of index 1. The theorem is proved that the sum of the indices of all the critical points is equal to χ(K)=1. This theorem is used to find estimates for the number of locally minimal networks spanning a given boundary set

  9. Linear electrical circuits. Definitions - General theorems; Circuits electriques lineaires. Definitions - Theoremes generaux

    Energy Technology Data Exchange (ETDEWEB)

    Escane, J.M. [Ecole Superieure d' Electricite, 91 - Gif-sur-Yvette (France)

    2005-04-01

    The first part of this article defines the different elements of an electrical network and the models used to represent them. Each model involves the current and the voltage as functions of time. Models involving time functions are simple but their use is not always easy. The Laplace transformation leads to a more convenient form in which the variable is no longer directly the time. This transformation also leads to the notion of transfer function, which is the object of the second part. The third part aims at defining the fundamental operating rules of linear networks, commonly named 'general theorems': the linearity principle and superposition theorem, the duality principle, Thevenin's theorem, Norton's theorem, Millman's theorem, and the triangle-star and star-triangle transformations. These theorems make it possible to study complex power networks and to simplify the calculations. They are based on hypotheses, the first of which is that all networks considered in this article are linear. (J.S.)

  10. A general digital computer procedure for synthesizing linear automatic control systems

    International Nuclear Information System (INIS)

    Cummins, J.D.

    1961-10-01

    The fundamental concepts required for synthesizing a linear automatic control system are considered. A generalized procedure for synthesizing automatic control systems is demonstrated. This procedure has been programmed for the Ferranti Mercury and the IBM 7090 computers. Details of the programmes are given. The procedure uses the linearized set of equations which describe the plant to be controlled as the starting point. Subsequent computations determine the transfer functions between any desired variables. The programmes also compute the root and phase loci for any linear (and some non-linear) configurations in the complex plane, the open loop and closed loop frequency responses of a system, the residues of a function of the complex variable 's' and the time response corresponding to these residues. With these general programmes available the design of 'one point' automatic control systems becomes a routine scientific procedure. Also dynamic assessments of plant may be carried out. Certain classes of multipoint automatic control problems may also be solved with these procedures. Autonomous systems, invariant systems and orthogonal systems may also be studied. (author)

  11. A cautionary note on generalized linear models for covariance of unbalanced longitudinal data

    KAUST Repository

    Huang, Jianhua Z.

    2012-03-01

    Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
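
    The decomposition referred to here, Pourahmadi's modified Cholesky parametrization, can be written compactly; the notation below is a standard presentation of it and may differ in detail from the paper's. For a response vector \(y = (y_1, \ldots, y_T)^{\top}\) with covariance \(\Sigma\),

    \[ T \Sigma T^{\top} = D, \qquad y_t - \mu_t = \sum_{j<t} \phi_{tj}\,(y_j - \mu_j) + \varepsilon_t, \qquad \operatorname{var}(\varepsilon_t) = \sigma_t^2, \]

    where \(T\) is unit lower triangular with \(-\phi_{tj}\) in position \((t, j)\) and \(D = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_T^2)\). Because the \(\phi_{tj}\) and \(\log \sigma_t^2\) are unconstrained, both can be modelled as linear functions of covariates in a generalized-linear-model fashion without risking a non-positive-definite \(\Sigma\); the embedding and EM device described above is what extends this to unbalanced observation times.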

  12. Generalized prolate spheroidal wave functions for optical finite fractional Fourier and linear canonical transforms.

    Science.gov (United States)

    Pei, Soo-Chang; Ding, Jian-Jiun

    2005-03-01

    Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.

  13. Linear relations in microbial reaction systems: a general overview of their origin, form, and use.

    Science.gov (United States)

    Noorman, H J; Heijnen, J J; Ch A M Luyben, K

    1991-09-01

    In microbial reaction systems, there are a number of linear relations among net conversion rates. These can be very useful in the analysis of experimental data. This article provides a general approach for the formation and application of the linear relations. Two types of system descriptions, one considering the biomass as a black box and the other based on metabolic pathways, are encountered. These are defined in a linear vector and matrix algebra framework. A correct a priori description can be obtained by three useful tests: the independency, consistency, and observability tests. The linear relations obtained from the two descriptions are different. The black box approach provides only conservation relations. They are derived from element, electrical charge, energy, and Gibbs energy balances. The metabolic approach provides, in addition to the conservation relations, metabolic and reaction relations. These result from component, energy, and Gibbs energy balances. Thus it is more attractive to use the metabolic description than the black box approach. A number of different types of linear relations given in the literature are reviewed. They are classified according to the different categories that result from the black box or the metabolic system description. Validation of hypotheses related to metabolic pathways can be supported by experimental validation of the linear metabolic relations. However, definite proof from biochemical evidence remains indispensable.

  14. Use of multivariate extensions of generalized linear models in the analysis of data from clinical trials

    OpenAIRE

    ALONSO ABAD, Ariel; Rodriguez, O.; TIBALDI, Fabian; CORTINAS ABRAHANTES, Jose

    2002-01-01

    In medical studies, categorical endpoints are quite common. Even though models for handling such multicategorical variables have been developed, their use is not yet widespread. This work shows an application of multivariate generalized linear models to the analysis of clinical trials data. After a theoretical introduction, models for ordinal and nominal responses are applied and the main results are discussed. multivariate analysis; multivariate logistic regression; multicategor...

  15. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
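
    The core data-"explosion" step and the Poisson fit with an offset can be sketched as follows; this is a plain Poisson GLM without the frailty/random effect and is not the %PCFrailty macro itself, and the data, cut points, and column names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def explode(df, cuts):
    """One row per (subject, piece): exposure time and event indicator."""
    rows = []
    for _, r in df.iterrows():
        for j in range(len(cuts) - 1):
            lo, hi = cuts[j], cuts[j + 1]
            if r["time"] <= lo:
                break
            rows.append({"id": r["id"], "piece": j, "x": r["x"],
                         "exposure": min(r["time"], hi) - lo,
                         "event": int(r["status"] == 1 and r["time"] <= hi)})
    return pd.DataFrame(rows)

# hypothetical right-censored survival data with one covariate
data = pd.DataFrame({"id": [1, 2, 3, 4],
                     "time": [2.3, 5.0, 1.2, 4.1],
                     "status": [1, 0, 1, 1],
                     "x": [0.5, -1.0, 1.2, 0.0]})

long = explode(data, cuts=[0.0, 1.5, 3.0, 6.0])
X = pd.get_dummies(long["piece"], prefix="piece", dtype=float)  # piecewise baseline
X["x"] = long["x"]
fit = sm.GLM(long["event"], X, family=sm.families.Poisson(),
             offset=np.log(long["exposure"])).fit()
print(fit.params)
```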

  16. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
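
    A hedged sketch of regression calibration with replicate error-prone proxies, assuming classical additive measurement error; all quantities are simulated, and this follows the general idea rather than the exact estimators of the paper:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                                   # true covariate (unobserved)
w = x[:, None] + 0.8 * rng.normal(size=(n, 2))           # two error-prone replicates
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 1.0 * x))))  # binary outcome

w_bar = w.mean(axis=1)
sigma_u2 = np.mean(np.var(w, axis=1, ddof=1))            # measurement-error variance
lam = (w_bar.var(ddof=1) - sigma_u2 / 2) / w_bar.var(ddof=1)   # reliability ratio
x_hat = w_bar.mean() + lam * (w_bar - w_bar.mean())      # calibrated value E[X | W]

naive = sm.GLM(y, sm.add_constant(w_bar), family=sm.families.Binomial()).fit()
calib = sm.GLM(y, sm.add_constant(x_hat), family=sm.families.Binomial()).fit()
print(naive.params[1], calib.params[1])  # calibrated slope should sit closer to 1.0
```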

  17. Synthesis of general linear networks using causal and J-isometric dilations

    International Nuclear Information System (INIS)

    D'Attellis, C.E.

    1977-06-01

    The problem of the synthesis of linear systems characterized by their scattering operator is studied. This problem is considered solved once an adequate dilation of the operator is obtained, from which the synthesis is performed following the method of Saeks (35) and Levan (19). Known results are systematized and generalized in this paper, obtaining a unique method of synthesis for different categories of operators. (Author) [es

  18. A General Construction of Linear Differential Equations with Solutions of Prescribed Properties

    Czech Academy of Sciences Publication Activity Database

    Neuman, František

    2004-01-01

    Roč. 17, č. 1 (2004), s. 71-76 ISSN 0893-9659 R&D Projects: GA AV ČR IAA1019902; GA ČR GA201/99/0295 Institutional research plan: CEZ:AV0Z1019905 Keywords : construction of linear differential equations * prescribed qualitative properties of solutions Subject RIV: BA - General Mathematics Impact factor: 0.414, year: 2004

  19. Interactions in Generalized Linear Models: Theoretical Issues and an Application to Personal Vote-Earning Attributes

    Directory of Open Access Journals (Sweden)

    Tsung-han Tsai

    2013-05-01

    There is some confusion in political science, and the social sciences in general, about the meaning and interpretation of interaction effects in models with non-interval, non-normal outcome variables. Often these terms are casually thrown into a model specification without observing that their presence fundamentally changes the interpretation of the resulting coefficients. This article explains the conditional nature of reported coefficients in models with interactions, defining the necessarily different interpretation required by generalized linear models. Methodological issues are illustrated with an application to voter information structured by electoral systems and resulting legislative behavior and democratic representation in comparative politics.
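
    To make the conditional-interpretation point concrete, the sketch below fits a logistic model with an interaction and shows that probability-scale effects must be evaluated at specific covariate values rather than read off the interaction coefficient; the variable names (`info`, `smd`) and simulated data are purely illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({"info": rng.normal(size=n),          # illustrative covariates
                   "smd": rng.integers(0, 2, size=n)})
eta = -0.5 + 0.8 * df["info"] + 0.4 * df["smd"] + 0.6 * df["info"] * df["smd"]
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

fit = smf.logit("y ~ info * smd", data=df).fit(disp=False)
print(fit.params)

# the probability-scale effect of `info` is conditional on `smd` and the baseline:
grid = pd.DataFrame({"info": [0.0, 1.0, 0.0, 1.0], "smd": [0, 0, 1, 1]})
print(fit.predict(grid))  # contrasts differ across smd, unlike the raw coefficient
```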

  20. Path integral solution of linear second order partial differential equations I: the general construction

    International Nuclear Information System (INIS)

    LaChapelle, J.

    2004-01-01

    A path integral is presented that solves a general class of linear second order partial differential equations with Dirichlet/Neumann boundary conditions. Elementary kernels are constructed for both Dirichlet and Neumann boundary conditions. The general solution can be specialized to solve elliptic, parabolic, and hyperbolic partial differential equations with boundary conditions. This extends the well-known path integral solution of the Schroedinger/diffusion equation in unbounded space. The construction is based on a framework for functional integration introduced by Cartier/DeWitt-Morette

  1. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    Directory of Open Access Journals (Sweden)

    Mingwu Jin

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.

  2. Unpacking the Gender Gap in Postsecondary Participation among African Americans and Caucasians Using Hierarchical Generalized Linear Modeling

    Science.gov (United States)

    Tekleselassie, Abebayehu; Mallery, Coretta; Choi, Jaehwa

    2013-01-01

    National reports recognize a growing gender gap in postsecondary enrollment as a major challenge impacting the lives of young men, particularly African Americans. Previous gender and race specific research is largely inconclusive. It is, for example, unclear from previous research how persistent the gender gap is across various school contexts,…

  3. Assessing Validity of Measurement in Learning Disabilities Using Hierarchical Generalized Linear Modeling: The Roles of Anxiety and Motivation

    Science.gov (United States)

    Sideridis, Georgios D.

    2016-01-01

    The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…

  4. Generalized Linear Mixed Model Analysis of Urban-Rural Differences in Social and Behavioral Factors for Colorectal Cancer Screening

    Science.gov (United States)

    Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin

    2017-09-27

    Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed-model (WGLIMM) was used to deal with these hierarchically structured data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalences in the four residence groups - urban, second city, suburban, and town/rural - were 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect (p<0.05). Regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence regions, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data. Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions. Creative Commons Attribution License

  5. Study on sampling of continuous linear system based on generalized Fourier transform

    Science.gov (United States)

    Li, Huiguang

    2003-09-01

    In the study of signals and systems, a signal's spectrum and a system's frequency characteristic can be discussed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals such as the impulse function and the signum signal are neither Riemann nor Lebesgue integrable; in mathematics they are called generalized functions. This paper introduces a new definition -- the Generalized Fourier Transform (GFT) -- and discusses generalized functions, the Fourier Transform and the Laplace Transform under a unified frame. When a continuous linear system is sampled, this paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform (GFT). Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results can be used for ordinary sampling and non-Nyquist sampling. The results also have practical relevance for research on the discretization of continuous linear systems and the non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is just an applicable example of this paper.

  6. Multifunctional substrate of Al alloy based on general hierarchical micro/nanostructures: superamphiphobicity and enhanced corrosion resistance

    OpenAIRE

    Li, Xuewu; Shi, Tian; Liu, Cong; Zhang, Qiaoxin; Huang, Xingjiu

    2016-01-01

    Aluminum alloys are vulnerable to penetrating and peeling failures in seawater, and preparing a barrier coating to isolate the substrate from the corrosive medium is an effective anticorrosion method. Inspired by the lotus leaf effect, a wetting alloy surface with enhanced anticorrosion behavior has been prepared via etching, deposition, and low-surface-energy modification. Results indicate that excellent superamphiphobicity has been achieved after the modification of the constructed hierarchical l...

  7. Normality of raw data in general linear models: The most widespread myth in statistics

    Science.gov (United States)

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
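
    A small simulation illustrating the point (an assumed setup, not taken from the paper): the raw response from a two-group general linear model is clearly non-normal, while the model residuals are well behaved.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# two-group ANOVA-type data: group means far apart, normal errors within groups
group = np.repeat([0, 1], 200)
y = np.where(group == 0, 0.0, 10.0) + rng.normal(scale=1.0, size=400)

# the raw response is strongly bimodal, hence non-normal ...
print(stats.shapiro(y).pvalue)          # tiny p-value: normality rejected

# ... but the residuals of the general linear model (here, group means) behave well
residuals = y - np.where(group == 0, y[group == 0].mean(), y[group == 1].mean())
print(stats.shapiro(residuals).pvalue)  # large p-value: no evidence against normality
```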

  8. The potential in general linear electrodynamics. Causal structure, propagators and quantization

    Energy Technology Data Exchange (ETDEWEB)

    Siemssen, Daniel [Department of Mathematical Methods in Physics, Faculty of Physics, University of Warsaw (Poland); Pfeifer, Christian [Institute for Theoretical Physics, Leibniz Universitaet Hannover (Germany); Center of Applied Space Technology and Microgravity (ZARM), Universitaet Bremen (Germany)

    2016-07-01

    From an axiomatic point of view, the fundamental inputs for a theory of electrodynamics are Maxwell's equations dF=0 (or F=dA) and dH=J, together with a constitutive law relating the field strength 2-form F to the excitation 2-form H. In this talk we consider general linear electrodynamics, the theory of electrodynamics defined by a linear constitutive law. The best-known application of this theory is the effective description of electrodynamics inside (linear) media (e.g. birefringence). We analyze the classical theory of the electromagnetic potential A before we use methods familiar from mathematical quantum field theory in curved spacetimes to quantize it. Our analysis of the classical theory contains the derivation of retarded and advanced propagators, the analysis of the causal structure on the basis of the constitutive law (instead of a metric), and a discussion of the classical phase space. This classical analysis sets the stage for the construction of the quantum field algebra and quantum states, including a (generalized) microlocal spectrum condition.

  9. Non-cooperative stochastic differential game theory of generalized Markov jump linear systems

    CERN Document Server

    Zhang, Cheng-ke; Zhou, Hai-ying; Bin, Ning

    2017-01-01

    This book systematically studies the stochastic non-cooperative differential game theory of generalized linear Markov jump systems and its application in the field of finance and insurance. The book is an in-depth study of continuous-time and discrete-time linear quadratic stochastic differential games, with the aim of establishing a relatively complete framework for dynamic non-cooperative differential game theory. Using the dynamic programming principle and the Riccati equation, it derives existence conditions and calculation methods for the equilibrium strategies of dynamic non-cooperative differential games. Based on the game theory method, this book studies the corresponding robust control problem, especially the existence condition and design method of the optimal robust control strategy. The book discusses the theoretical results and their applications in risk control, option pricing, and the optimal investment problem in the field of finance and insurance, enriching the...

  10. Coupling of linearized gravity to nonrelativistic test particles: Dynamics in the general laboratory frame

    International Nuclear Information System (INIS)

    Speliotopoulos, A.D.; Chiao, Raymond Y.

    2004-01-01

    The coupling of gravity to matter is explored in the linearized gravity limit. The usual derivation of gravity-matter couplings within the quantum-field-theoretic framework is reviewed. A number of inconsistencies between this derivation of the couplings and the known results of tidal effects on test particles according to classical general relativity are pointed out. As a step towards resolving these inconsistencies, a general laboratory frame fixed on the worldline of an observer is constructed. In this frame, the dynamics of nonrelativistic test particles in the linearized gravity limit is studied, and their Hamiltonian dynamics is derived. It is shown that for stationary metrics this Hamiltonian reduces to the usual Hamiltonian for nonrelativistic particles undergoing geodesic motion. For nonstationary metrics with long-wavelength gravitational waves (GWs) present, it reduces to the Hamiltonian for a nonrelativistic particle undergoing geodesic deviation motion. Arbitrary-wavelength GWs couple to the test particle through a vector-potential-like field N_a, the net result of the tidal forces that the GW induces in the system, namely, a local velocity field on the system induced by tidal effects, as seen by an observer in the general laboratory frame. Effective electric and magnetic fields, which are related to the electric and magnetic parts of the Weyl tensor, are constructed from N_a and obey equations of the same form as Maxwell's equations. A gedanken gravitational Aharonov-Bohm-type experiment using N_a to measure the interference of quantum test particles is presented.

  11. Analysis of dental caries using generalized linear and count regression models

    Directory of Open Access Journals (Sweden)

    Javali M. Phil

    2013-11-01

    Generalized linear models (GLMs) are generalizations of linear regression models that allow regression models to be fitted to response data in all the sciences, especially the medical and dental sciences, that follow a general exponential family. They are a flexible and widely used class of models that can accommodate many types of response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data generation processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present an evaluation framework for assessing the suitability of applying the GLM, Poisson, NB, ZIP and ZINB models to a dental caries data set where the count data may exhibit evidence of many zeros and over-dispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measure for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
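
    A sketch of the kind of comparison described above, assuming a statsmodels version that ships `ZeroInflatedPoisson`; the simulated data and covariate are illustrative and are not the dental caries data set:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
X = sm.add_constant(x)

# simulate zero-inflated counts: structural zeros with probability 0.3, else Poisson
mu = np.exp(0.2 + 0.5 * x)
y = np.where(rng.random(n) < 0.3, 0, rng.poisson(mu))

poisson_fit = sm.Poisson(y, X).fit(disp=False)
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False)

print("Poisson AIC:", poisson_fit.aic)
print("ZIP AIC:    ", zip_fit.aic)   # usually smaller when excess zeros are present
```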

  12. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    Science.gov (United States)

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.

  13. A generalization of Dirac non-linear electrodynamics, and spinning charged particles

    International Nuclear Information System (INIS)

    Rodrigues Junior, W.A.; Vaz Junior, J.; Recami, E.

    1992-08-01

    The Dirac non-linear electrodynamics is generalized by introducing two potentials (namely, the vector potential A and the pseudo-vector potential γ⁵B of the electromagnetic theory with charges and magnetic monopoles), and by imposing that the pseudoscalar part of the product WW* be zero, with W = A + γ⁵B. It is also demonstrated that the field equations of such a theory possess a soliton-like solution which can a priori represent a charged particle. (L.C.J.A.)

  14. Analysis of positron lifetime spectra using quantified maximum entropy and a general linear filter

    International Nuclear Information System (INIS)

    Shukla, A.; Peter, M.; Hoffmann, L.

    1993-01-01

    Two new approaches are used to analyze positron annihilation lifetime spectra. A general linear filter is designed to filter the noise from lifetime data. The quantified maximum entropy method is used to solve the inverse problem of finding the lifetimes and intensities present in data. We determine optimal values of parameters needed for fitting using Bayesian methods. Estimates of errors are provided. We present results on simulated and experimental data with extensive tests to show the utility of this method and compare it with other existing methods. (orig.)

  15. General formulae for polarization observables in deuteron electrodisintegration and linear relations

    International Nuclear Information System (INIS)

    Arenhoevel, H.; Leidemann, W.; Tomusiak, E.L.

    1993-01-01

    Formal expressions are derived for all possible polarization observables in deuteron electrodisintegration with longitudinally polarized incoming electrons, oriented deuteron targets and polarization analysis of outgoing nucleons. They are given in terms of general structure functions which can be determined experimentally. These structure functions are Hermitean forms of the T-matrix elements which, in principle, allow the determination of all T-matrix elements up to an arbitrary common phase. Since the set of structure functions is overcomplete, linear relations among various structure functions exist which are derived explicitly

  16. A guide to developing resource selection functions from telemetry data using generalized estimating equations and generalized linear mixed models

    Directory of Open Access Journals (Sweden)

    Nicola Koper

    2012-03-01

    Resource selection functions (RSFs) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMMs) and generalized estimating equations (GEEs) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess the fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
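
    For the GEE side of this comparison, a minimal sketch of a population-averaged RSF fit with an exchangeable working correlation and empirical standard errors; the used/available data are simulated, and a binary-response GLMM would require a different routine and is omitted here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_animals, n_per = 20, 150
df = pd.DataFrame({
    "animal": np.repeat(np.arange(n_animals), n_per),
    "cover": rng.normal(size=n_animals * n_per),   # illustrative habitat covariate
})
animal_eff = rng.normal(scale=0.5, size=n_animals)[df["animal"]]
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * df["cover"] + animal_eff)))
df["used"] = rng.binomial(1, p)                    # 1 = used point, 0 = available point

# population-averaged RSF via GEE with an exchangeable working correlation
gee = smf.gee("used ~ cover", groups="animal", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary().tables[1])   # empirical ("robust") standard errors by default
```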

  17. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    Science.gov (United States)

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  18. Generalization of Asaoka method to linearly anisotropic scattering: benchmark data in cylindrical geometry

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1975-11-01

    The Integral Transform Method for the neutron transport equation has been developed in recent years by Asaoka and others. The method uses Fourier transform techniques to solve isotropic one-dimensional transport problems in homogeneous media. The method has been extended to linearly anisotropic transport in one-dimensional homogeneous media. Series expansions were also obtained using Hembd techniques for the new anisotropic matrix elements in cylindrical geometry. Carlvik's spatial-spherical harmonics method was generalized to solve the same problem. By applying a relation between the isotropic and anisotropic one-dimensional kernels, it was demonstrated that anisotropic matrix elements can be calculated as a linear combination of a few isotropic matrix elements. In practice this means that the anisotropic problem of order N can be solved with the N+2 isotropic matrices for plane and spherical geometries, and with the N+1 isotropic matrices for cylindrical geometry. A method of solving linearly anisotropic one-dimensional transport problems in homogeneous media was defined by applying the observations of Mika and Stankiewicz: isotropic matrix elements were computed by Hembd series and anisotropic matrix elements were then calculated from recursive relations. The method has been applied to albedo and critical problems in cylindrical geometries. Finally, a number of results were computed with 12-digit accuracy for use as benchmarks [fr

  19. Vector generalized linear and additive models with an implementation in R

    CERN Document Server

    Yee, Thomas W

    2015-01-01

    This book presents a statistical framework that expands generalized linear models (GLMs) for regression modelling. The framework shared in this book allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole. This is possible through the approximately half-a-dozen major classes of statistical models included in the book and the software infrastructure component, which makes the models easily operable.    The book’s methodology and accompanying software (the extensive VGAM R package) are directed at these limitations, and this is the first time the methodology and software are covered comprehensively in one volume. Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. The demands of practical data analysis, however, require a flexibility that GLMs do not have. Data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. This book ...

  20. Prospects of measuring general Higgs couplings at e⁺e⁻ linear colliders

    Energy Technology Data Exchange (ETDEWEB)

    Hagiwara, K. [KEK, Ibaraki (Japan). Theory Group; Ishihara, S. [KEK, Ibaraki (Japan). Theory Group; Department of Physics, Hyogo University of Education, 941-1 Shimokume, Yashiro, Kato, Hyogo 673-1494 (Japan); Kamoshita, J. [Department of Physics, Ochanomizu University, 2-1-1 Otsuka, Bunkyo, Tokyo 112-8610 (Japan); Kniehl, B.A. [II. Institut fuer Theoretische Physik, Universitaet Hamburg, Luruper Chaussee 149, 22761 Hamburg (Germany)

    2000-06-01

    We examine how accurately the general HZV couplings, with V = Z, γ, may be determined by studying e⁺e⁻ → Hf f̄ processes at future e⁺e⁻ linear colliders. By using the optimal-observable method, which makes use of all available experimental information, we find out which combinations of the various HZV coupling terms may be constrained most efficiently with high luminosity. We also assess the benefits of measuring the tau-lepton helicities, identifying the bottom-hadron charges, polarizing the electron beam, and running at two different collider energies. The HZZ couplings are generally found to be well constrained, even without these options, while the HZγ couplings are not. The constraints on the latter may be significantly improved by beam polarization. (orig.)

  1. Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)

    2014-04-25

    In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy

  2. Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models

    Directory of Open Access Journals (Sweden)

    A. Cabezas

    2010-08-01

    Sediment, Total Organic Carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15 y return interval) were examined at one reach of the Middle Ebro River (NE Spain) for elucidating spatial patterns. To achieve this goal, four areas with different geomorphological features and located within the study reach were examined by using artificial grass mats. Within each area, 1 m² study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. TOC, TN and particle-size composition of deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, specific-site effects and spatial self-correlation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although the importance of other factors such as vegetation pattern should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework to deal with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.

  3. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    Science.gov (United States)

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Building on structural mean models, there has been considerable recent work on consistent estimation of the causal relative risk and the causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158

  4. Robust-BD Estimation and Inference for General Partially Linear Models

    Directory of Open Access Journals (Sweden)

    Chunming Zhang

    2017-11-01

    The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of “robust-Bregman divergence (BD)” estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally efficient procedure for obtaining “robust-BD” estimators and establish the consistency and asymptotic normality of the “robust-BD” estimator of the parametric component β₀. For inference procedures on β₀ in the GPLM, we show that the Wald-type test statistic W_n constructed from the “robust-BD” estimators is asymptotically distribution free under the null, whereas the likelihood ratio-type test statistic Λ_n is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang, 2005) between W_n and Λ_n in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed “robust-BD” estimators and the robust Wald-type test in the presence of outlying observations.

  5. On the Generalization of the Timoshenko Beam Model Based on the Micropolar Linear Theory: Static Case

    Directory of Open Access Journals (Sweden)

    Andrea Nobili

    2015-01-01

    Three generalizations of the Timoshenko beam model according to the linear theory of micropolar elasticity or its special cases, that is, the couple stress theory or the modified couple stress theory, recently developed in the literature, are investigated and compared. The analysis is carried out in a variational setting, making use of Hamilton’s principle. It is shown that both the Timoshenko and the (possibly modified) couple stress models are based on a microstructural kinematics which is governed by kinosthenic (ignorable) terms in the Lagrangian. Despite their differences, all models bring in only one microstructural material parameter in a beam-plane theory. Besides, the micropolar model formally reduces to the couple stress model upon introducing the proper constraint on the microstructure kinematics, although the material parameter is generally different. Line loading on the microstructure results in a nonconservative force potential. Finally, the Hamiltonian form of the micropolar beam model is derived and the canonical equations are presented along with their general solution. The latter exhibits a general oscillatory pattern for the microstructure rotation and stress, whose behavior matches the numerical findings.

  6. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    Science.gov (United States)

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6% to 23.8%) and 14.6% (range: -7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8% to 40.3%) and 13.1% (range: -1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
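
    The "power fit" relationship between daily and initial tumor volumes can be sketched as an ordinary least-squares fit on the log scale; the volumes below are synthetic and only mimic the form of model (2), not the authors' fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(5)
n_tumors, n_days = 35, 30
v0 = rng.lognormal(mean=2.0, sigma=0.6, size=n_tumors)       # initial volumes
days = np.arange(n_days)
# synthetic daily volumes with a power-law dependence on the initial volume
true = (v0[:, None] ** 0.9) * np.exp(-0.02 * days[None, :])
vols = true * rng.lognormal(sigma=0.05, size=(n_tumors, n_days))

# general linear model on the log scale: log V_d = a_d + b_d * log V_0
for d in (0, 15, 29):
    b, a = np.polyfit(np.log(v0), np.log(vols[:, d]), 1)
    print(f"day {d:2d}: log V_d ~ {a:.2f} + {b:.2f} * log V_0")
```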

  7. Galaxy bias and non-linear structure formation in general relativity

    International Nuclear Information System (INIS)

    Baldauf, Tobias; Seljak, Uroš; Senatore, Leonardo; Zaldarriaga, Matias

    2011-01-01

    Length scales probed by the large scale structure surveys are becoming closer and closer to the horizon scale. Further, it has been recently understood that non-Gaussianity in the initial conditions could show up in a scale dependence of the bias of galaxies at the largest possible distances. It is therefore important to take General Relativistic effects into account. Here we provide a General Relativistic generalization of the bias that is valid both for Gaussian and for non-Gaussian initial conditions. The collapse of objects happens on very small scales, while long-wavelength modes are always in the quasi linear regime. Around every small collapsing region, it is therefore possible to find a reference frame that is valid for arbitrary times and where the space time is almost flat: the Fermi frame. Here the Newtonian approximation is applicable and the equations of motion are the ones of the standard N-body codes. The effects of long-wavelength modes are encoded in the mapping from the cosmological frame to the local Fermi frame. At the level of the linear bias, the effect of the long-wavelength modes on the dynamics of the short scales is all encoded in the local curvature of the Universe, which allows us to define a General Relativistic generalization of the bias in the standard Newtonian setting. We show that the bias due to this effect goes to zero as the square of the ratio between the physical wavenumber and the Hubble scale for modes longer than the horizon, confirming the intuitive picture that modes longer than the horizon do not have any dynamical effect. On the other hand, the bias due to non-Gaussianities does not need to vanish for modes longer than the Hubble scale, and for non-Gaussianities of the local kind it goes to a constant. As a further application of our setup, we show that it is not necessary to perform large N-body simulations to extract information about long-wavelength modes: N-body simulations can be done on small scales and long

  8. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang

    2014-02-01

    We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.

  9. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    Science.gov (United States)

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
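
    The claimed BLUP-ridge equivalence can be made explicit in the linear-model case (a standard mixed-model identity, shown here only for intuition; the paper's setting is a GLMM, where the analogous identity holds for the working penalized criterion):

```latex
% Linear mixed model: y = X*beta + Z*b + eps, with b ~ N(0, sigma_b^2 I) for the
% SNP coefficients and eps ~ N(0, sigma^2 I). For fixed beta, the BLUP of b
% minimizes a penalized least-squares criterion:
\[
  \hat{b}
  = \arg\min_{b}\;
    \frac{1}{\sigma^{2}}\,\lVert y - X\beta - Zb \rVert^{2}
    + \frac{1}{\sigma_{b}^{2}}\,\lVert b \rVert^{2}
  = \bigl(Z^{\top}Z + \lambda I\bigr)^{-1} Z^{\top}\bigl(y - X\beta\bigr),
  \qquad \lambda = \sigma^{2}/\sigma_{b}^{2},
\]
% i.e. exactly a ridge-regression estimator whose penalty is fixed by the
% variance components rather than chosen by cross-validation.
```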

  10. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.

    Science.gov (United States)

    Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang

    2015-11-17

    Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for the physicians' decision on office allocation, covering demand-side factors and a consumption time function. To test the propositions following the theoretical model, generalized linear models were estimated to explain differences in 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association to higher populated districts than GPs. Although indicators for regional preferences are significantly correlated with physician density, their coefficients are not as high as population density. If regional disparities should be addressed by political actions, the focus should be to counteract those parameters representing physicians' preferences in over- and undersupplied regions.

  11. Optimal Stochastic Control Problem for General Linear Dynamical Systems in Neuroscience

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2017-01-01

    This paper considers a d-dimensional stochastic optimization problem in neuroscience. Supposing the arm’s movement trajectory is modeled by a high-order linear stochastic differential dynamic system in d-dimensional space, the optimal trajectory, velocity, and variance are explicitly obtained by using the stochastic control method, which allows us to analytically establish exact relationships between various quantities. Moreover, the optimal trajectory is almost a straight line for a reaching movement; the optimal velocity is bell-shaped, and the optimal variance is consistent with the experimental Fitts law; that is, the longer the time of a reaching movement, the higher the accuracy of arriving at the target position. The results can be directly applied to designing a reaching movement performed by a robotic arm in a more general environment.

  12. Compact tunable silicon photonic differential-equation solver for general linear time-invariant systems.

    Science.gov (United States)

    Wu, Jiayang; Cao, Pan; Hu, Xiaofeng; Jiang, Xinhong; Pan, Ting; Yang, Yuxing; Qiu, Ciyuan; Tremblay, Christine; Su, Yikai

    2014-10-20

    We propose and experimentally demonstrate an all-optical temporal differential-equation solver that can be used to solve ordinary differential equations (ODEs) characterizing general linear time-invariant (LTI) systems. The photonic device implemented by an add-drop microring resonator (MRR) with two tunable interferometric couplers is monolithically integrated on a silicon-on-insulator (SOI) wafer with a compact footprint of ~60 μm × 120 μm. By thermally tuning the phase shifts along the bus arms of the two interferometric couplers, the proposed device is capable of solving first-order ODEs with two variable coefficients. The operation principle is theoretically analyzed, and system testing of solving ODE with tunable coefficients is carried out for 10-Gb/s optical Gaussian-like pulses. The experimental results verify the effectiveness of the fabricated device as a tunable photonic ODE solver.

  13. Capturing spike variability in noisy Izhikevich neurons using point process generalized linear models

    DEFF Research Database (Denmark)

    Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.

    2018-01-01

    … are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
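
    A minimal sketch of the discrete-time point-process GLM described above, with a stimulus term and spike-history (past-spiking) covariates; the simulated Bernoulli spike train stands in for the Izhikevich output, and all parameters are illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 20000                                  # 20 s of 1 ms bins
stim = rng.normal(size=T)                  # noisy input (stand-in for the drive)

# simulate a Bernoulli spike train whose rate depends on the stimulus only
p = 1 / (1 + np.exp(-(-4.0 + 1.5 * stim)))
spikes = rng.binomial(1, p)

# GLM design: stimulus plus multiplicative influence of past spiking (history lags)
n_lags = 5
hist = np.column_stack([np.concatenate([np.zeros(k + 1), spikes[:-(k + 1)]])
                        for k in range(n_lags)])
X = sm.add_constant(np.column_stack([stim, hist]))

glm = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
print(glm.params[:3])   # intercept, stimulus gain, first spike-history weight
```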

  14. Generalized Partially Linear Regression with Misclassified Data and an Application to Labour Market Transitions

    DEFF Research Database (Denmark)

    Dlugosz, Stephan; Mammen, Enno; Wilke, Ralf

    2017-01-01

    Large data sets that originate from administrative or operational activity are increasingly used for statistical analysis as they often contain very precise information and a large number of observations. But there is evidence that some variables can be subject to severe misclassification … or contain missing values. Given the size of the data, a flexible semiparametric misclassification model would be a good choice, but the use of such models in practice is scarce. To close this gap, a semiparametric model for the probability of observing labour market transitions is estimated using a sample of 20 m… observations from Germany. It is shown that estimated marginal effects of a number of covariates are sizeably affected by misclassification and missing values in the analysis data. The proposed generalized partially linear regression extends existing models by allowing a misclassified discrete covariate

  15. A Fuzzy Approach Using Generalized Dinkelbach’s Algorithm for Multiobjective Linear Fractional Transportation Problem

    Directory of Open Access Journals (Sweden)

    Nurdan Cetin

    2014-01-01

    We consider a multiobjective linear fractional transportation problem (MLFTP) with several fractional criteria, such as the maximization of transport profitability measures like profit/cost or profit/time, defined between sources and destinations. Our aim is to introduce the MLFTP, which has not been studied in the literature before, and to provide a fuzzy approach which obtains a compromise Pareto-optimal solution for this problem. To do this, we first present a theorem which shows that the MLFTP is always solvable. Then, reducing the MLFTP to Zimmermann’s “min”-operator model, which is a max-min problem, we construct the Generalized Dinkelbach's Algorithm for solving the resulting problem. Furthermore, we provide an illustrative numerical example to explain this fuzzy approach.

  16. Solving a class of generalized fractional programming problems using the feasibility of linear programs.

    Science.gov (United States)

    Shen, Peiping; Zhang, Tongli; Wang, Chunfeng

    2017-01-01

    This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. Based on the computational complexity result, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank on the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.

  17. Dynamic Average Consensus and Consensusability of General Linear Multiagent Systems with Random Packet Dropout

    Directory of Open Access Journals (Sweden)

    Wen-Min Zhou

    2013-01-01

    Full Text Available This paper is concerned with the consensus problem of general linear discrete-time multiagent systems (MASs) with random packet dropout that happens during information exchange between agents. The packet dropout phenomenon is characterized as a Bernoulli random process. A distributed consensus protocol with weighted graph is proposed to address the packet dropout phenomenon. Through introducing a new disagreement vector, a new framework is established to solve the consensus problem. Based on control theory, the perturbation argument, and matrix theory, the necessary and sufficient condition for MASs to reach mean-square consensus is derived in terms of the stability of an array of low-dimensional matrices. Moreover, mean-square consensusability conditions with regard to network topology and agent dynamic structure are also provided. Finally, the effectiveness of the theoretical results is demonstrated through an illustrative example.
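
    As a rough illustration of the setting (not the paper's weighted protocol or its mean-square analysis), the sketch below simulates a discrete-time consensus iteration in which each link independently drops packets according to a Bernoulli process. The ring topology, gain, and dropout probability are assumed for the example.

      import numpy as np

      rng = np.random.default_rng(1)

      n, steps, eps, p_drop = 5, 200, 0.2, 0.3
      edges = [(i, (i + 1) % n) for i in range(n)]    # ring topology (assumed)

      x = rng.normal(size=n)                          # initial agent states
      for _ in range(steps):
          update = np.zeros(n)
          for i, j in edges:
              if rng.random() > p_drop:               # packet delivered with prob. 1 - p_drop
                  update[i] += x[j] - x[i]
                  update[j] += x[i] - x[j]
          x = x + eps * update

      print(x)   # after enough steps the states should be close to a common value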

  18. Generalizing, optimizing, and inventing numerical algorithms for the fractional Fourier, Fresnel, and linear canonical transforms

    Science.gov (United States)

    Hennelly, Bryan M.; Sheridan, John T.

    2005-05-01

    By use of matrix-based techniques it is shown how the space-bandwidth product (SBP) of a signal, as indicated by the location of the signal energy in the Wigner distribution function, can be tracked through any quadratic-phase optical system whose operation is described by the linear canonical transform. Then, applying the regular uniform sampling criteria imposed by the SBP and linking the criteria explicitly to a decomposition of the optical matrix of the system, it is shown how numerical algorithms (employing interpolation and decimation), which exhibit both invertibility and additivity, can be implemented. Algorithms appearing in the literature for a variety of transforms (Fresnel, fractional Fourier) are shown to be special cases of our general approach. The method is shown to allow the existing algorithms to be optimized and is also shown to permit the invention of many new algorithms.

  19. Nested and Hierarchical Archimax copulas

    KAUST Repository

    Hofert, Marius; Huser, Raphaël; Prasad, Avinash

    2017-01-01

    The class of Archimax copulas is generalized to nested and hierarchical Archimax copulas in several ways. First, nested extreme-value copulas or nested stable tail dependence functions are introduced to construct nested Archimax copulas based on a single frailty variable. Second, a hierarchical construction of d-norm generators is presented to construct hierarchical stable tail dependence functions and thus hierarchical extreme-value copulas. Moreover, one can, by itself or additionally, introduce nested frailties to extend Archimax copulas to nested Archimax copulas in a similar way as nested Archimedean copulas extend Archimedean copulas. Further results include a general formula for the density of Archimax copulas.

  1. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed Double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

  2. Non-linear general instability of ring-stiffened conical shells under external hydrostatic pressure

    International Nuclear Information System (INIS)

    Ross, C T F; Kubelt, C; McLaughlin, I; Etheridge, A; Turner, K; Paraskevaides, D; Little, A P F

    2011-01-01

    The paper presents the experimental results for 15 ring-stiffened circular steel conical shells, which failed by non-linear general instability. The results of these investigations were compared with various theoretical analyses, including an ANSYS eigen buckling analysis and a second ANSYS analysis that involved a step-by-step method until collapse, in which both material and geometrical nonlinearity were considered. The investigation also involved an analysis using BS5500 (PD 5500), together with the method of Ross of the University of Portsmouth. The ANSYS eigen buckling analysis tended to overestimate the predicted buckling pressures, whereas the ANSYS nonlinear results compared favourably with the experimental results. The PD 5500 analysis was very time consuming, tended to grossly underestimate the experimental buckling pressures and, in some cases, overestimated them. In contrast to PD 5500 and ANSYS, the design charts of Ross of the University of Portsmouth were the easiest of all these methods to use and generally only slightly underestimated the experimental collapse pressures. The ANSYS analyses gave some excellent graphical displays.

  3. Multifunctional substrate of Al alloy based on general hierarchical micro/nanostructures: superamphiphobicity and enhanced corrosion resistance

    Science.gov (United States)

    Li, Xuewu; Shi, Tian; Liu, Cong; Zhang, Qiaoxin; Huang, Xingjiu

    2016-10-01

    Aluminum alloys are vulnerable to penetrating and peeling failures in seawater, and preparing a barrier coating to isolate the substrate from the corrosive medium is an effective anticorrosion method. Inspired by the lotus leaf effect, a wetting alloy surface with enhanced anticorrosion behavior has been prepared via etching, deposition, and low-surface-energy modification. Results indicate that excellent superamphiphobicity is achieved after low-surface-energy modification of the constructed hierarchical labyrinth-like microstructures and dendritic nanostructures. The as-prepared surface also shows good chemical stability and mechanical durability. Furthermore, the superior anticorrosion behavior of the resultant samples in seawater is investigated by electrochemical measurements. Due to the air trapped in the micro/nanostructures, the newly presented solid-air-liquid contacting interface helps to resist seawater penetration by greatly reducing the interfacial interaction between corrosive ions and the superamphiphobic surface. Finally, an optimized two-layer perceptron artificial neural network is set up to model and predict the cause-and-effect relationship between the preparation conditions and the anticorrosion parameters. This work offers great potential to extend the applications of aluminum alloys, especially in marine engineering fields.

  4. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    Science.gov (United States)

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J2) statistics can be applied directly. In a simulation study, TG, HL, and J2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J2. © 2015 John Wiley & Sons Ltd/London School of Economics.
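
    As a concrete reference for one of the statistics discussed, the sketch below implements the standard Hosmer-Lemeshow grouping computation in Python with decile-of-risk groups; the data are simulated, and the TG and J2 statistics are not reproduced here.

      import numpy as np
      from scipy.stats import chi2

      def hosmer_lemeshow(y, p, n_groups=10):
          """HL statistic: group observations by fitted probability and compare
          observed with expected event counts within each group."""
          order = np.argsort(p)
          stat = 0.0
          for g in np.array_split(order, n_groups):
              obs = y[g].sum()
              exp = p[g].sum()
              n_g = len(g)
              p_bar = exp / n_g
              stat += (obs - exp) ** 2 / (n_g * p_bar * (1.0 - p_bar))
          return stat, chi2.sf(stat, n_groups - 2)

      # Illustrative data: y is a binary outcome, p the model's fitted probabilities.
      rng = np.random.default_rng(2)
      p = rng.uniform(0.05, 0.95, size=500)
      y = (rng.random(500) < p).astype(float)
      print(hosmer_lemeshow(y, p))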

  5. Computing Low-Rank Approximation of a Dense Matrix on Multicore CPUs with a GPU and Its Application to Solving a Hierarchically Semiseparable Linear System of Equations

    Directory of Open Access Journals (Sweden)

    Ichitaro Yamazaki

    2015-01-01

    of their low-rank properties. To compute a low-rank approximation of a dense matrix, in this paper, we study the performance of QR factorization with column pivoting or with restricted pivoting on multicore CPUs with a GPU. We first propose several techniques to reduce the postprocessing time, which is required for restricted pivoting, on a modern CPU. We then examine the potential of using a GPU to accelerate the factorization process with both column and restricted pivoting. Our performance results on two eight-core Intel Sandy Bridge CPUs with one NVIDIA Kepler GPU demonstrate that using the GPU, the factorization time can be reduced by a factor of more than two. In addition, to study the performance of our implementations in practice, we integrate them into a recently developed software StruMF which algebraically exploits such low-rank structures for solving a general sparse linear system of equations. Our performance results for solving Poisson's equations demonstrate that the proposed techniques can significantly reduce the preconditioner construction time of StruMF on the CPUs, and the construction time can be further reduced by 10%–50% using the GPU.

  6. A generalized linear model for estimating spectrotemporal receptive fields from responses to natural sounds.

    Directory of Open Access Journals (Sweden)

    Ana Calabrese

    2011-01-01

    Full Text Available In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: (1) a stimulus filter (STRF); and (2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation-limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.

  7. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  8. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)]

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  9. Hierarchical random additive process and logarithmic scaling of generalized high order, two-point correlations in turbulent boundary layer flow

    Science.gov (United States)

    Yang, X. I. A.; Marusic, I.; Meneveau, C.

    2016-06-01

    Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = ∑_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall-normal distance, and the a_i are independent, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ~ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing the known logarithmic scalings of moments, structure functions, and the two-point correlation of u_z(x) and u_z(x+r), new logarithmic laws in higher-order two-point statistics of u_z can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found from the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the
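
    A small numerical illustration of the additive process itself (not of the boundary-layer data): if u_z^+ is a sum of N_z ≈ ln(δ/z) i.i.d. additives, its variance grows logarithmically as the wall is approached. The Gaussian additives and the chosen values of δ and z are assumptions made only for this sketch.

      import numpy as np

      rng = np.random.default_rng(3)

      delta = 1.0                                  # boundary layer thickness (arbitrary units)
      n_samples = 200_000

      for z in [0.01, 0.05, 0.1, 0.2, 0.5]:
          n_z = max(int(np.log(delta / z)), 1)     # number of attached-eddy additives
          u = rng.normal(size=(n_samples, n_z)).sum(axis=1)
          print(f"z/delta = {z:>4}: N_z = {n_z}, var(u+) = {u.var():.2f}")
      # The variance tracks N_z ~ ln(delta/z), i.e. Townsend's logarithmic scaling.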

  10. A generalized linear-quadratic model incorporating reciprocal time pattern of radiation damage repair

    International Nuclear Information System (INIS)

    Huang, Zhibin; Mayr, Nina A.; Lo, Simon S.; Wang, Jian Z.; Jia Guang; Yuh, William T. C.; Johnke, Roberta

    2012-01-01

    Purpose: It has been conventionally assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggests that this may not be the case. Rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown in repair would suggest that an exponential repair pattern does not necessarily predict the repair process accurately. The purpose of this study was therefore to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. Methods: The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage, with T_r as the repair half-time. When a reciprocal pattern of the repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. Published animal data were adopted to test the reciprocal formulas. Results: A generalized LQ model describing the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced χ²_min of 2.0 vs 4.3, p = 0.11 vs 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, T_r = 3.2-3.9 h for rat foot skin, and α/β = 0.9 Gy, T_r = 1.1 h for rat spinal cord. Conclusions: These results suggest that the generalized LQ model incorporating a reciprocal-time pattern of sublethal damage repair fits the data better than the exponential repair model. These formulas can be used to analyze experimental and clinical data where a slowing-down repair process appears during the course of radiation therapy.
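
    For readers unfamiliar with the notation, the conventional LQ surviving fraction and the role of G (the Lea-Catcheside dose-protraction factor) can be written as below. The reciprocal-time kernel shown is only a schematic of the kind of replacement the abstract describes, not the authors' exact closed form.

      \[
        S(D) = \exp\!\left(-\alpha D - \beta\, G\, D^{2}\right),\qquad
        G = \frac{2}{D^{2}}\int_{-\infty}^{\infty} \dot D(t)\,\mathrm{d}t
            \int_{-\infty}^{t} \dot D(t')\, h(t-t')\,\mathrm{d}t',
      \]
      where $h(\tau) = e^{-\tau \ln 2 / T_r}$ in the conventional exponential repair model,
      and a reciprocal-time model replaces it with a kernel of the form
      $h(\tau) = 1/(1 + \tau/T_r)$ (schematic form assumed here).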

  11. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    Science.gov (United States)

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
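
    To make the distribution concrete, the sketch below evaluates the COM-Poisson probability mass function by truncating its normalizing series; the truncation length and parameter values are illustrative choices.

      import numpy as np
      from scipy.special import gammaln

      def com_poisson_pmf(y, lam, nu, max_terms=200):
          """COM-Poisson pmf P(Y = y) = lam**y / (y!)**nu / Z(lam, nu),
          with the normalizing constant Z approximated by a truncated series."""
          j = np.arange(max_terms)
          log_z = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
          return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_z)

      y = np.arange(10)
      print(com_poisson_pmf(y, lam=2.0, nu=0.7))   # nu < 1: overdispersed relative to Poisson
      print(com_poisson_pmf(y, lam=2.0, nu=1.0))   # nu = 1 recovers the ordinary Poisson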

  12. Population decoding of motor cortical activity using a generalized linear model with hidden states.

    Science.gov (United States)

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam

    2010-06-15

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  13. A new approach in simulating RF linacs using a general, linear real-time signal processor

    International Nuclear Information System (INIS)

    Young, A.; Jachim, S.P.

    1991-01-01

    Strict requirements on the tolerances of the amplitude and phase of the radio frequency (RF) cavity field are necessary to advance the field of accelerator technology. Due to these stringent requirements upon modern accelerators, a new approach to modeling and simulation is essential in developing and understanding their characteristics. This paper describes the implementation of a general, linear model of an RF cavity which is used to develop a real-time signal processor. This device fully emulates the response of an RF cavity upon receiving characteristic parameters (Q_0, ω_0, Δω, R_S, Z_0). Simulating an RF cavity with a real-time signal processor is beneficial to an accelerator designer because the device allows one to answer fundamental questions on the response of the cavity to a particular stimulus without operating the accelerator. In particular, the complex interactions between the RF power and control systems, the beam, and the cavity fields can simply be observed in the real-time domain. The signal processor can also be used upon initialization of the accelerator as a diagnostic device and as a dummy load for determining the closed-loop error of the control system. In essence, the signal processor is capable of providing information that allows an operator to determine whether the control systems and peripheral devices are operating properly without going through the tedious procedure of running the beam through a cavity

  14. Population Decoding of Motor Cortical Activity using a Generalized Linear Model with Hidden States

    Science.gov (United States)

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas G.; Paninski, Liam

    2010-01-01

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (lowering the Mean Square Error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. PMID:20359500

  15. Spatial generalized linear mixed models of electric power outages due to hurricanes and ice storms

    International Nuclear Information System (INIS)

    Liu Haibin; Davidson, Rachel A.; Apanasovich, Tatiyana V.

    2008-01-01

    This paper presents new statistical models that predict the number of hurricane- and ice storm-related electric power outages likely to occur in each 3 km × 3 km grid cell in a region. The models are based on a large database of recent outages experienced by three major East Coast power companies in six hurricanes and eight ice storms. A spatial generalized linear mixed modeling (GLMM) approach was used in which spatial correlation is incorporated through random effects. Models were fitted using a composite likelihood approach and the covariance matrix was estimated empirically. A simulation study was conducted to test the model estimation procedure, and model training, validation, and testing were done to select the best models and assess their predictive power. The final hurricane model includes number of protective devices, maximum gust wind speed, hurricane indicator, and company indicator covariates. The final ice storm model includes number of protective devices, ice thickness, and ice storm indicator covariates. The models should be useful for power companies as they plan for future storms. The statistical modeling approach offers a new way to assess the reliability of electric power and other infrastructure systems in extreme events

  16. Reconstruction of X-rays spectra of clinical linear accelerators using the generalized simulated annealing method

    International Nuclear Information System (INIS)

    Manrique, John Peter O.; Costa, Alessandro M.

    2016-01-01

    The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to patients undergoing radiation therapy, treatment planning systems (TPS) are used, which make use of convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum in order to perform three-dimensional dose calculations and thus ensure better accuracy in the tumor control probabilities while keeping the normal tissue complication probabilities low. In this work we have obtained the photon fluence spectrum of the 6 MV X-ray beam of a SIEMENS ONCOR linear accelerator, using an inverse method for the reconstruction of the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for the reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the quasi-equilibrium statistics of Tsallis. For the validation of the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV beam, using Monte Carlo simulation with the PENELOPE code, and from the PDD we then calculated the beam quality index TPR20/10. (author)
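
    The inverse problem has the generic form "find non-negative spectrum weights whose predicted transmission matches the measured curve". The sketch below poses a toy version of that problem with SciPy's dual_annealing, which implements a Tsallis-based generalized simulated annealing; the attenuation coefficients, thicknesses, and "measured" transmissions are placeholder values, not data from the paper.

      import numpy as np
      from scipy.optimize import dual_annealing

      rng = np.random.default_rng(4)

      # Placeholder forward model: transmission through aluminium of thickness t for a
      # spectrum discretized into a few energy bins with attenuation coefficients mu.
      mu = np.array([0.6, 0.35, 0.2, 0.12])          # cm^-1, illustrative values
      thickness = np.linspace(0.0, 10.0, 15)         # cm of aluminium
      A = np.exp(-np.outer(thickness, mu))           # transmission of each energy bin

      true_w = np.array([0.1, 0.3, 0.4, 0.2])        # "unknown" spectral weights
      measured = A @ true_w + rng.normal(0, 1e-3, size=len(thickness))

      def misfit(w):
          return np.sum((A @ w - measured) ** 2)

      result = dual_annealing(misfit, bounds=[(0.0, 1.0)] * len(mu), seed=4)
      print(result.x)                                # should be close to true_w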

  17. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    International Nuclear Information System (INIS)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-01-01

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes were compared with those of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography
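
    One plausible reading of the "power fit" component of the second model is a regression of log relative volume on log time, so that V_t ≈ a·V_0·t^b; the sketch below fits that relationship to synthetic volumes. The data, coefficients, and this particular parameterization are assumptions for illustration, not the authors' fitted model.

      import numpy as np

      rng = np.random.default_rng(5)

      # Illustrative daily tumor volumes (cm^3) over a 30-day treatment course.
      days = np.arange(1, 31)
      v0 = 20.0
      volumes = v0 * days ** -0.25 * np.exp(rng.normal(0, 0.03, size=30))

      # Fit log(V_t / V_0) = log(a) + b * log(t): a linear model in the logs,
      # i.e. a power-law relationship V_t ≈ a * V_0 * t**b.
      b, log_a = np.polyfit(np.log(days), np.log(volumes / v0), 1)
      predicted = v0 * np.exp(log_a) * days ** b
      print(np.exp(log_a), b)
      print(np.max(np.abs(predicted - volumes) / volumes))   # worst relative error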

  18. Predicting stem borer density in maize using RapidEye data and generalized linear models

    Science.gov (United States)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha⁻¹, compared with the global average of 6.06 t ha⁻¹, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated between 12% and 21% of the total production. The objective of the present study was to explore the possibility of using RapidEye spectral data to assess stem borer larva densities in maize fields in two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) test site on the 9th of December 2014 and on the 27th of January 2015, and for Machakos (eastern Kenya) a RapidEye image was acquired on the 3rd of January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess the models' performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season in December and early January in both study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly when all the 30 SVIs (non-nested) and only the significant (nested) SVIs were used. The models developed could improve decision making regarding controlling maize stem borers within integrated pest management (IPM) interventions.
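
    A hedged sketch of the kind of count regression used here: a negative binomial GLM relating larval counts to a single spectral vegetation index. The simulated data, the single predictor, and the fixed dispersion parameter are illustrative; the study's zero-inflated variant and the full set of RapidEye predictors are not reproduced.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)

      # Illustrative predictor (a vegetation index) and negative binomial larval counts.
      ndvi = rng.uniform(0.2, 0.8, size=120)
      mu = np.exp(0.5 + 2.0 * ndvi)
      counts = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))

      X = sm.add_constant(ndvi)
      fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
      print(fit.params)          # intercept and vegetation-index coefficient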

  19. Node-Splitting Generalized Linear Mixed Models for Evaluation of Inconsistency in Network Meta-Analysis.

    Science.gov (United States)

    Yu-Kang, Tu

    2016-12-01

    Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  20. Generalized functional linear models for gene-based case-control association studies.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  1. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.

  2. Use of generalized linear models and digital data in a forest inventory of Northern Utah

    Science.gov (United States)

    Moisen, Gretchen G.; Edwards, Thomas C.

    1999-01-01

    Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.

  3. Neutrosophic Hierarchical Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    Rıdvan Şahin

    2014-03-01

    Full Text Available The interval neutrosophic set (INS) is a generalization of the interval valued intuitionistic fuzzy set (IVIFS), whose membership and non-membership values of elements consist of fuzzy ranges, while the single valued neutrosophic set (SVNS) is regarded as an extension of the intuitionistic fuzzy set (IFS). In this paper, we extend the hierarchical clustering techniques proposed for IFSs and IVIFSs to SVNSs and INSs, respectively. Based on the traditional hierarchical clustering procedure, the single valued neutrosophic aggregation operator, and the basic distance measures between SVNSs, we define a single valued neutrosophic hierarchical clustering algorithm for clustering SVNSs. Then we extend the algorithm to classify interval neutrosophic data. Finally, we present some numerical examples in order to show the effectiveness and availability of the developed clustering algorithms.
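
    As a rough Python analogue (not the authors' aggregation-operator algorithm), one can hierarchically cluster single valued neutrosophic elements by defining a distance on their (truth, indeterminacy, falsity) triples and feeding it to an agglomerative routine; the Hamming-style distance and the toy data below are assumptions for illustration.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist

      # Each row is an SVNS element: (truth, indeterminacy, falsity) membership values.
      data = np.array([
          [0.9, 0.1, 0.1],
          [0.8, 0.2, 0.1],
          [0.2, 0.3, 0.8],
          [0.1, 0.2, 0.9],
          [0.5, 0.5, 0.5],
      ])

      # Normalized Hamming-style distance between neutrosophic triples (assumed metric).
      dist = pdist(data, metric=lambda a, b: np.abs(a - b).mean())

      tree = linkage(dist, method="average")     # agglomerative (hierarchical) clustering
      print(fcluster(tree, t=2, criterion="maxclust"))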

  4. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    Science.gov (United States)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.

  5. Vertical distribution of Paracalanus crassirostris (Copepoda, Calanoidea): analysis by the general linear model

    Directory of Open Access Journals (Sweden)

    Ana Milstein

    1979-01-01

    Full Text Available The vertical distribution of each developmental stage of Paracalanus crassirostris was studied at a shallow-water station at Ubatuba, SP, Brazil (23°30'S-45°07'W). Samples were collected monthly at the surface, 2 m, and near-bottom levels. Salinity, temperature, dissolved oxygen, tide height, light penetration and solar radiation were also recorded. Data were analysed by the general linear model. It showed that the amount of individuals at any developmental stage is affected diversely by hour, depth, hour-depth interaction and environmental factors throughout the year and that these effects are stronger in summer. All developmental stages were spread in the water column, showing no regular vertical migrations. On the other hand, the number of organisms caught at a particular hour seemed to depend more on the tide than on the animals' behaviour. The results of the present paper showed, as observed by some other authors, the lack of vertical migration of a coastal copepod which is a grazer of fine particles throughout its life. The vertical distribution of the different developmental stages of P. crassirostris was studied for one year (June 1976 - May 1977) at a shallow station (5 m) in Ubatuba. Samples were collected monthly at three depths, every four hours, with a 9-litre van Dorn bottle, and environmental data were recorded. The data were processed with the least-squares technique, in the form of a regression analysis of a linear model including covariates. The model was built a priori, considering the density of organisms per sample, environmental factors, and differences between samples from different depths and hours, as well as hour-depth interactions. For each developmental stage of P. crassirostris, the model was run 9 times, with data from two months each time, in order to obtain the variation of the responses over the year. The model results indicated that the quantity of indiv

  6. The Hierarchical Perspective

    Directory of Open Access Journals (Sweden)

    Daniel Sofron

    2015-05-01

    Full Text Available This paper is focused on the hierarchical perspective, one of the methods for representing space that was used before the discovery of the Renaissance linear perspective. The hierarchical perspective has a more or less pronounced scientific character and its study offers us a clear image of the way the representatives of the cultures that developed it used to perceive the sensitive reality. This type of perspective is an original method of representing three-dimensional space on a flat surface, which characterises the art of Ancient Egypt and much of the art of the Middle Ages, being identified in the Eastern European Byzantine art, as well as in the Western European Pre-Romanesque and Romanesque art. At the same time, the hierarchical perspective is also present in naive painting and infantile drawing. Reminiscences of this method can be recognised also in the works of some precursors of the Italian Renaissance. The hierarchical perspective can be viewed as a subjective ranking criterion, according to which the elements are visually represented by taking into account their relevance within the image while perception is ignored. This paper aims to show how the main objective of the artists of those times was not to faithfully represent the objective reality, but rather to emphasize the essence of the world and its perennial aspects. This may represent a possible explanation for the refusal of perspective in the Egyptian, Romanesque and Byzantine painting, characterised by a marked two-dimensionality.

  7. General methods for determining the linear stability of coronal magnetic fields

    Science.gov (United States)

    Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.

    1988-01-01

    A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.
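
    The power-method idea mentioned there can be illustrated in a few lines: repeatedly applying the (discretized) linear operator to a trial vector isolates the dominant mode, whose Rayleigh quotient gives the growth rate. The random symmetric matrix below is only a stand-in for the discretized force operator of an equilibrium.

      import numpy as np

      rng = np.random.default_rng(7)

      # Stand-in for the discretized, self-adjoint linear force operator.
      A = rng.normal(size=(50, 50))
      A = (A + A.T) / 2.0

      v = rng.normal(size=50)
      for _ in range(500):                       # explicit power iteration
          v = A @ v
          v /= np.linalg.norm(v)

      rayleigh = v @ A @ v                       # converges to the dominant eigenvalue
      eig = np.linalg.eigvalsh(A)
      print(rayleigh, eig[np.argmax(np.abs(eig))])   # the two should agree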

  8. High Order A-stable Continuous General Linear Methods for Solution of Systems of Initial Value Problems in ODEs

    Directory of Open Access Journals (Sweden)

    Dauda GuliburYAKUBU

    2012-12-01

    Full Text Available Accurate solutions to initial value systems of ordinary differential equations may be approximated efficiently by Runge-Kutta methods or linear multistep methods. Each of these has limitations of one sort or another. In this paper we consider, as a middle ground, the derivation of continuous general linear methods for solution of stiff systems of initial value problems in ordinary differential equations. These methods are designed to combine the advantages of both Runge-Kutta and linear multistep methods. Particularly, methods possessing the property of A-stability are identified as promising methods within this large class of general linear methods. We show that the continuous general linear methods are self-starting and have more ability to solve the stiff systems of ordinary differential equations, than the discrete ones. The initial value systems of ordinary differential equations are solved, for instance, without looking for any other method to start the integration process. This desirable feature of the proposed approach leads to obtaining very high accuracy of the solution of the given problem. Illustrative examples are given to demonstrate the novelty and reliability of the methods.

  9. Spatial Modeling of Flood Duration in Amazonian Floodplains Through Radar Remote Sensing and Generalized Linear Models

    Science.gov (United States)

    Ferreira-Ferreira, J.; Francisco, M. S.; Silva, T. S. F.

    2017-12-01

    Amazon floodplains play an important role in biodiversity maintenance and provide important ecosystem services. Flood duration is the prime factor modulating biogeochemical cycling in Amazonian floodplain systems, as well as influencing ecosystem structure and function. However, due to the absence of accurate terrain information, fine-scale hydrological modeling is still not possible for most of the Amazon floodplains, and little is known regarding the spatio-temporal behavior of flooding in these environments. Our study presents a new approach for spatial modeling of flood duration, using Synthetic Aperture Radar (SAR) and generalized linear modeling. Our focal study site was the Mamirauá Sustainable Development Reserve, in the Central Amazon. We acquired a series of L-band ALOS-1/PALSAR Fine-Beam mosaics, chosen to capture the widest possible range of river stage heights at regular intervals. We then mapped the flooded area on each image, and used the resulting binary maps as the response variable (flooded/non-flooded) for multiple logistic regression. Explanatory variables were the accumulated precipitation 15 days prior to each image acquisition date, the water stage height recorded at the Mamirauá lake gauging station on each acquisition date, Euclidean distance from the nearest drainage, and slope, terrain curvature, profile curvature, planform curvature, and Height Above the Nearest Drainage (HAND) derived from the 30-m SRTM DEM. Model results were validated with water levels recorded by ten pressure transducers installed within the floodplains from 2014 to 2016. The most accurate model included water stage height and HAND as explanatory variables, yielding an RMSE of ±38.73 days of flooding per year when compared to the ground validation sites. The largest disagreements were 57 days and 83 days for two validation sites, while the remaining locations achieved absolute errors lower than 38 days. In five out of nine validation sites, the model predicted flood durations with

  10. Understanding uncertainties in non-linear population trajectories: a Bayesian semi-parametric hierarchical approach to large-scale surveys of coral cover.

    Directory of Open Access Journals (Sweden)

    Julie Vercelloni

    Full Text Available Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.

  11. glmmTMB balances speed and flexibility among packages for Zero-inflated Generalized Linear Mixed Modeling

    DEFF Research Database (Denmark)

    Brooks, Mollie Elizabeth; Kristensen, Kasper; van Benthem, Koen J.

    2017-01-01

    Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmmTMB...
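
    glmmTMB itself is an R package; as a loose Python analogue of the zero-inflation part only (without the random effects that motivate glmmTMB), statsmodels offers a zero-inflated Poisson regression. The simulated data and the constant-only inflation model below are assumptions for illustration.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(8)

      x = rng.normal(size=300)
      mu = np.exp(0.3 + 0.8 * x)
      structural_zero = rng.random(300) < 0.3          # extra zeros beyond the Poisson
      y = np.where(structural_zero, 0, rng.poisson(mu))

      X = sm.add_constant(x)
      model = ZeroInflatedPoisson(y, X, exog_infl=np.ones((len(y), 1)), inflation="logit")
      fit = model.fit(maxiter=200, disp=False)
      print(fit.params)      # zero-inflation intercept plus count-model coefficients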

  12. A methodology for evaluation of parent-mutant competition using a generalized non-linear ecosystem model

    Science.gov (United States)

    Raymond L. Czaplewski

    1973-01-01

    A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...

  13. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    Science.gov (United States)

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  14. How hierarchical is language use?

    Science.gov (United States)

    Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.

    2012-01-01

    It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157

  15. Recent advances toward a general purpose linear-scaling quantum force field.

    Science.gov (United States)

    Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M

    2014-09-16

    Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states is challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to

  16. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    Science.gov (United States)

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  17. Sparse non-linear denoising: Generalization performance and pattern reproducibility in functional MRI

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as “the pre-image problem”. Since the feature space mapping is typi...

  18. Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy.

    Science.gov (United States)

    Huppert, Theodore J

    2016-01-01

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
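
    The noise issue described above can be made concrete with a generic AR(1) prewhitening sketch: estimate the residual autocorrelation, filter both the data and the design matrix with (1 − ρL), and refit. This is a textbook remedy applied to synthetic signals, not the specific modifications the author describes; the regressor and noise parameters below are invented for illustration.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 600
      design = np.zeros(n)
      design[100:150] = 1.0          # boxcar task regressor (two blocks)
      design[300:350] = 1.0
      X = sm.add_constant(design)

      # Synthetic signal: true task response plus AR(1) ("colored") noise.
      noise = np.zeros(n)
      for t in range(1, n):
          noise[t] = 0.8 * noise[t - 1] + rng.normal(scale=0.5)
      y = 1.0 * design + noise

      ols = sm.OLS(y, X).fit()
      rho = sm.OLS(ols.resid[1:], ols.resid[:-1]).fit().params[0]   # AR(1) estimate

      # Prewhiten both sides with the filter (1 - rho*L) and refit.
      yw = y[1:] - rho * y[:-1]
      Xw = X[1:] - rho * X[:-1]
      pw = sm.OLS(yw, Xw).fit()
      print("naive OLS t:", round(float(ols.tvalues[1]), 2),
            "| prewhitened t:", round(float(pw.tvalues[1]), 2))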

  19. Linear-algebraic approach to electron-molecule collisions: General formulation

    International Nuclear Information System (INIS)

    Collins, L.A.; Schneider, B.I.

    1981-01-01

    We present a linear-algebraic approach to electron-molecule collisions based on an integral equations form with either logarithmic or asymptotic boundary conditions. The introduction of exchange effects does not alter the basic form or order of the linear-algebraic equations for a local potential. In addition to the standard procedure of directly evaluating the exchange integrals by numerical quadrature, we also incorporate exchange effects through a separable-potential approximation. Efficient schemes are developed for reducing the number of points and channels that must be included. The method is applied at the static-exchange level to a number of molecular systems including H2, N2, LiH, and CO2

  20. Generalization of the Wide-Sense Markov Concept to a Widely Linear Processing

    International Nuclear Information System (INIS)

    Espinosa-Pulido, Juan Antonio; Navarro-Moreno, Jesús; Fernández-Alcalá, Rosa María; Ruiz-Molina, Juan Carlos; Oya-Lechuga, Antonia; Ruiz-Fuentes, Nuria

    2014-01-01

    In this paper we show that the classical definition and the associated characterizations of wide-sense Markov (WSM) signals are not valid for improper complex signals. For that, we propose an extension of the concept of WSM to a widely linear (WL) setting and the study of new characterizations. Specifically, we introduce a new class of signals, called widely linear Markov (WLM) signals, and we analyze some of their properties based either on second-order properties or on state-space models from a WL processing standpoint. The study is performed in both the forwards and backwards directions of time. Thus, we provide two forwards and backwards Markovian representations for WLM signals. Finally, different estimation recursive algorithms are obtained for these models

  1. General linear-optical quantum state generation scheme: Applications to maximally path-entangled states

    International Nuclear Information System (INIS)

    VanMeter, N. M.; Lougovski, P.; Dowling, Jonathan P.; Uskov, D. B.; Kieling, K.; Eisert, J.

    2007-01-01

    We introduce schemes for linear-optical quantum state generation. A quantum state generator is a device that prepares a desired quantum state using product inputs from photon sources, linear-optical networks, and postselection using photon counters. We show that this device can be concisely described in terms of polynomial equations and unitary constraints. We illustrate the power of this language by applying the Groebner-basis technique along with the notion of vacuum extensions to solve the problem of how to construct a quantum state generator analytically for any desired state, and use methods of convex optimization to identify bounds to success probabilities. In particular, we disprove a conjecture concerning the preparation of the maximally path-entangled |n,0>+|0,n> (NOON) state by providing a counterexample using these methods, and we derive a new upper bound on the resources required for NOON-state generation

  2. Non-linear partial differential equations an algebraic view of generalized solutions

    CERN Document Server

    Rosinger, Elemer E

    1990-01-01

    A massive transition of interest from solving linear partial differential equations to solving nonlinear ones has taken place during the last two or three decades. The availability of better computers has often made numerical experimentations progress faster than the theoretical understanding of nonlinear partial differential equations. The three most important nonlinear phenomena observed so far both experimentally and numerically, and studied theoretically in connection with such equations have been the solitons, shock waves and turbulence or chaotical processes. In many ways, these phenomen

  3. Continuity and general perturbation of the Drazin inverse for closed linear operators

    Directory of Open Access Journals (Sweden)

    N. Castro González

    2002-01-01

    Full Text Available We study perturbations and continuity of the Drazin inverse of a closed linear operator A and obtain explicit error estimates in terms of the gap between closed operators and the gap between ranges and nullspaces of operators. The results are used to derive a theorem on the continuity of the Drazin inverse for closed operators and to describe the asymptotic behavior of operator semigroups.
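
    For orientation, the finite-dimensional version of the object being perturbed is standard: the Drazin inverse A^D of a matrix A with index k = ind(A) is the unique solution of

      A^{D} A A^{D} = A^{D}, \qquad A A^{D} = A^{D} A, \qquad A^{k+1} A^{D} = A^{k}.

    The paper's setting replaces the matrix by a closed linear operator, with the corresponding identities required on appropriate domains.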

  4. Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models

    DEFF Research Database (Denmark)

    Lanne, Markku; Nyberg, Henri

    We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use of t...

  5. Generalized linear differential equations in a Banach space : continuous dependence on a parameter

    Czech Academy of Sciences Publication Activity Database

    Monteiro, G.A.; Tvrdý, Milan

    2013-01-01

    Roč. 33, č. 1 (2013), s. 283-303 ISSN 1078-0947 Institutional research plan: CEZ:AV0Z10190503 Keywords : generalized differential equations * continuous dependence * Kurzweil-Stieltjes integral Subject RIV: BA - General Mathematics Impact factor: 0.923, year: 2013 http://aimsciences.org/journals/displayArticlesnew.jsp?paperID=7615

  6. Solution of a General Linear Complementarity Problem Using Smooth Optimization and Its Application to Bilinear Programming and LCP

    International Nuclear Information System (INIS)

    Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.

    2001-01-01

    This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper

  7. Development and linearization of generalized material balance equation for coal bed methane reservoirs

    International Nuclear Information System (INIS)

    Penuela, G; Ordonez R, A; Bejarano, A

    1998-01-01

    A generalized material balance equation was presented at the Escuela de Petroleos de la Universidad Industrial de Santander for coal seam gas reservoirs based on the method of Walsh, who developed an analogous approach for conventional oil and gas reservoirs (Walsh, 1995). Our equation was based on twelve similar assumptions itemized by Walsh for his generalized expression for conventional reservoirs; it started from the same volume balance consideration and was finally reorganized as Walsh (1994) did. Because it is not expressed in terms of traditional (P/Z) plots, as proposed by King (1990), it allows a wide range of quantitative and qualitative analyses to be performed. It was also demonstrated that the existing equations are only particular cases of the generalized expression evaluated under certain restrictions. This equation is applicable to coal seam gas reservoirs in saturated, equilibrium and undersaturated conditions, and to any type of coal beds without restriction on special values of the constant diffusion

  8. The Exact Solution for Linear Thermoelastic Axisymmetric Deformations of Generally Laminated Circular Cylindrical Shells

    Science.gov (United States)

    Nemeth, Michael P.; Schultz, Marc R.

    2012-01-01

    A detailed exact solution is presented for laminated-composite circular cylinders with general wall construction and that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.

  9. Examining secular trend and seasonality in count data using dynamic generalized linear modelling

    DEFF Research Database (Denmark)

    Lundbye-Christensen, Søren; Dethlefsen, Claus; Gorst-Rasmussen, Anders

    Aims: Time series of incidence counts often show secular trends and seasonal patterns. We present a model for incidence counts capable of handling a possible gradual change in growth rates and seasonal patterns, serial correlation and overdispersion. Methods: The model resembles an ordinary time series regression model for Poisson counts. It differs in allowing the regression coefficients to vary gradually over time in a random fashion. Data: In the period January 1980 to 1999, 17,989 incidents of acute myocardial infarction were recorded in the county of Northern Jutland, Denmark. Records were updated daily. Results: The model with a seasonal pattern and an approximately linear trend was fitted to the data, and diagnostic plots indicate a good model fit. The analysis with the dynamic model revealed peaks coinciding with influenza epidemics. On average the peak-to-trough ratio is estimated...

  10. General rigid motion correction for computed tomography imaging based on locally linear embedding

    Science.gov (United States)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can damage the quality of computed tomography images, which are typically acquired in cone-beam geometry. The rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.

  11. Iterative solution of general sparse linear systems on clusters of workstations

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Gen-Ching; Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels such as inner products could erode any gains obtained from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.
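
    PSPARSLIB itself is not sketched here, but the basic ingredients the abstract discusses, a Krylov iteration on a general sparse system combined with a preconditioner, can be illustrated on a single node with SciPy; the test matrix and factorization settings below are arbitrary choices made for the example.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 1000
      # A nonsymmetric sparse test matrix (1-D convection-diffusion-like stencil).
      A = sp.diags([-1.3 * np.ones(n - 1), 2.0 * np.ones(n), -0.7 * np.ones(n - 1)],
                   offsets=[-1, 0, 1], format="csc")
      b = np.ones(n)

      # Incomplete-LU factorization used as a preconditioner M ~ A^{-1}.
      ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
      M = spla.LinearOperator((n, n), matvec=ilu.solve)

      x, info = spla.gmres(A, b, M=M)
      print("converged" if info == 0 else f"gmres info = {info}",
            "| residual norm =", np.linalg.norm(b - A @ x))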

  12. The covariant linear oscillator and generalized realization of the dynamical SU(1,1) symmetry algebra

    International Nuclear Information System (INIS)

    Kagramanov, E.D.; Nagiyev, Sh.M.; Mir-Kasimov, R.M.

    1989-03-01

    An exactly soluble problem for the finite-difference Schroedinger equation in the relativistic configurational space is considered. The appropriate finite-difference generalization of the factorization method is developed. The theory of new special functions ''the relativistic Hermite polynomials'', in which the solutions are expressed, is constructed. (author). 14 refs

  13. Equilibrium arrival times to queues with general service times and non-linear utility functions

    DEFF Research Database (Denmark)

    Breinbjerg, Jesper

    2017-01-01

    by a general utility function which is decreasing in the waiting time and service completion time of each customer. Applications of such queueing games range from people choosing when to arrive at a grand opening sale to travellers choosing when to line up at the gate when boarding an airplane. We develop...

  14. The energy and the linear momentum of space-times in general relativity

    International Nuclear Information System (INIS)

    Schoen, R.; Yau, S.T.

    1981-01-01

    We extend our previous proof of the positive mass conjecture to allow a more general asymptotic condition proposed by York. Hence we are able to prove that for an isolated physical system, the energy momentum four vector is a future timelike vector unless the system is trivial. Furthermore, we allow singularities of the type of black holes. (orig.)

  15. Generalized W1;1-Young Measures and Relaxation of Problems with Linear Growth

    Czech Academy of Sciences Publication Activity Database

    Baia, M.; Krömer, Stefan; Kružík, Martin

    2018-01-01

    Roč. 50, č. 1 (2018), s. 1076-1119 ISSN 0036-1410 R&D Projects: GA ČR GA14-15264S; GA ČR(CZ) GF16-34894L Institutional support: RVO:67985556 Keywords : lower semicontinuity * quasiconvexity * Young measures Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.648, year: 2016 http://library.utia.cas.cz/2018/MTR/kruzik-0487019.pdf

  16. General, database-driven fast-feedback system for the Stanford Linear Collider

    International Nuclear Information System (INIS)

    Rouse, F.; Allison, S.; Castillo, S.; Gromme, T.; Hall, B.; Hendrickson, L.; Himel, T.; Krauter, K.; Sass, B.; Shoaee, H.

    1991-05-01

    A new feedback system has been developed for stabilizing the SLC beams at many locations. The feedback loops are designed to sample and correct at the 60 Hz repetition rate of the accelerator. Each loop can be distributed across several of the standard 80386 microprocessors which control the SLC hardware. A new communications system, KISNet, has been implemented to pass signals between the microprocessors at this rate. The software is written in a general fashion using the state space formalism of digital control theory. This allows a new loop to be implemented by just setting up the online database and perhaps installing a communications link. 3 refs., 4 figs

  17. The generalized PN synthetic acceleration method for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.

    1998-01-01

    The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed. This method converges (clock time) faster than the MDSA method. The method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented. (author). 9 refs., 2 tabs., 5 figs

  18. The generalized PN synthetic acceleration method for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.

    1997-01-01

    The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed that converges (clock time) faster than the MDSA method. This method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented

  19. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...

  20. Hierarchically Structured Electrospun Fibers

    Directory of Open Access Journals (Sweden)

    Nicole E. Zander

    2013-01-01

    Full Text Available Traditional electrospun nanofibers have a myriad of applications ranging from scaffolds for tissue engineering to components of biosensors and energy harvesting devices. The generally smooth one-dimensional structure of the fibers has stood as a limitation to several interesting novel applications. Control of fiber diameter, porosity and collector geometry will be briefly discussed, as will more traditional methods for controlling fiber morphology and fiber mat architecture. The remainder of the review will focus on new techniques to prepare hierarchically structured fibers. Fibers with hierarchical primary structures—including helical, buckled, and beads-on-a-string fibers, as well as fibers with secondary structures, such as nanopores, nanopillars, nanorods, and internally structured fibers and their applications—will be discussed. These new materials with helical/buckled morphology are expected to possess unique optical and mechanical properties with possible applications for negative refractive index materials, highly stretchable/high-tensile-strength materials, and components in microelectromechanical devices. Core-shell type fibers enable a much wider variety of materials to be electrospun and are expected to be widely applied in the sensing, drug delivery/controlled release fields, and in the encapsulation of live cells for biological applications. Materials with a hierarchical secondary structure are expected to provide new superhydrophobic and self-cleaning materials.

  1. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...

  2. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function...

  3. The general linear thermoelastic end problem for solid and hollow cylinders

    International Nuclear Information System (INIS)

    Thompson, J.J.; Chen, P.Y.P.

    1977-01-01

    This paper reports on three topics arising from work in progress on theoretical and computational aspects of the utilization of self equilibrating and load stress systems, to solve thermoelastic problems of finite, or semi-infinite, solid or hollow circular cylinders, with particular reference to the pellets, rods, tubes and shells with arbitrary internal heat generation encountered in Nuclear Reactor Technology. Specifically the work is aimed at the evaluation of stress intensification factors in the end elastic boundary layer region, due to various thermal and mechanical end load conditions, in relation to the external, exact stress solutions, which satisfy conditions on the curved surfaces only and are valid over the remainder of the cylindrical body. More generally, it is possible, at least for symmetric thermoelastic problems, to derive exact external solutions, using self equilibrating end load systems, which describe the stress/displacement state completely as a combination of a simple local plane strain solution and a correction dependent on the magnitude of axial thermal gradients. Thus plane strain, and self equilibrating end load systems are sufficient for the complete external and boundary layer solution of a finite cylindrical body. This formulation is capable of further extension, e.g., to concentric multi-region problems, and provides a useful approach to the study of local stress intensification factors due to thermal perturbations

  4. Generalized magnetic susceptibilities in metals: application of the analytic tetrahedron linear energy method to Sc

    International Nuclear Information System (INIS)

    Rath, J.; Freeman, A.J.

    1975-01-01

    A detailed study of the generalized susceptibility χ(q) of Sc metal determined from an accurate augmented-plane-wave method calculation of its energy-band structure is presented. The calculations were done by means of a computational scheme for χ(q) derived as an extension of the work of Jepsen and Andersen and Lehmann and Taut on the density-of-states problem. The procedure yields simple analytic expressions for the χ(q) integral inside a tetrahedral microzone of the Brillouin zone which depends only on the volume of the tetrahedron and the differences of the energies at its corners. Constant-matrix-element results have been obtained for Sc which show very good agreement with the results of Liu, Gupta, and Sinha (but with one less peak) and exhibit a first maximum in χ(q) at (0, 0, 0.31) 2π/c [vs (0, 0, 0.35) 2π/c obtained by Liu et al.] which relates very well to dilute rare-earth alloy magnetic ordering at q_m = (0, 0, 0.28) 2π/c and to the kink in the LA-phonon dispersion curve at (0, 0, 0.27) 2π/c. (U.S.)

  5. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  6. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.

    Science.gov (United States)

    Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei

    2014-01-01

    The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of different models was compared. The frequency of traffic conflicts follows a negative binomial distribution. The linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions. Accordingly, the effects of conflicting traffic volumes on conflict frequency vary across different traffic conditions. The occurrences of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has the potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
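
    As a hedged sketch of the modelling conclusion above (conflict counts are overdispersed, so a negative binomial regression is preferred to ordinary linear regression), the following fits a negative binomial GLM to synthetic conflict counts; the variable names and coefficients are invented, not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n = 200
      df = pd.DataFrame({
          "left_turn_vol": rng.uniform(50, 400, n),    # veh/h
          "opposing_vol": rng.uniform(200, 1200, n),   # veh/h
      })
      mu = np.exp(-2.0 + 0.004 * df["left_turn_vol"] + 0.002 * df["opposing_vol"])
      # Poisson-gamma mixture => overdispersed (negative binomial-like) counts.
      df["conflicts"] = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

      nb = smf.glm("conflicts ~ left_turn_vol + opposing_vol", data=df,
                   family=sm.families.NegativeBinomial(alpha=0.5)).fit()
      print(nb.summary())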

  7. An algorithm for the construction of substitution box for block ciphers based on projective general linear group

    Directory of Open Access Journals (Sweden)

    Anas Altaleb

    2017-03-01

    Full Text Available The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we have applied the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In step 2 we have used the permutations of the symmetric group S_256 to construct a new kind of S-box. To explain the proposed extension scheme, we have given an example and constructed one new S-box. The strength of the extended S-box is computed, and an insight is given to calculate the confusion-creating potency. To analyze the security of the S-box some popular algebraic and statistical attacks are performed as well. The proposed S-box has been analyzed by bit independent criterion, linear approximation probability test, non-linearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.

  8. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

    Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived from integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of Heihe River Basin, northwest China. Therefore, optimal irrigation water allocation solutions from the GFCCFP model can be obtained. Moreover, factorial analysis on the two parameters (i.e. λ and γ) indicates that the weight coefficient is a main factor compared with credibility level for system efficiency. These results can effectively support reasonable irrigation water resources management and agricultural production.

  9. Integrating Linear Programming and Analytical Hierarchical ...

    African Journals Online (AJOL)

    -GIS to Optimize Land Use Pattern at Watershed Level.

  10. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    Science.gov (United States)

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, these two nonlinear recurrent neural networks are proved to be convergent within finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus the accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of general time-varying LMEs and a great variety of computer simulations (including the application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Limit of ratio of consecutive terms for general order-k linear homogeneous recurrences with constant coefficients

    International Nuclear Information System (INIS)

    Fiorenza, Alberto; Vincenzi, Giovanni

    2011-01-01

    Research highlights: → We prove a result true for all linear homogeneous recurrences with constant coefficients. → As a corollary of our results we immediately get the celebrated Poincaré theorem. → The limit of the ratio of adjacent terms is characterized as the unique leading root of the characteristic polynomial. → The Golden Ratio, the Kepler limit of the classical Fibonacci sequence, is the unique leading root. → The Kepler limit may differ from the unique root of maximum modulus and multiplicity. - Abstract: For complex linear homogeneous recursive sequences with constant coefficients we find a necessary and sufficient condition for the existence of the limit of the ratio of consecutive terms. The result can be applied even if the characteristic polynomial does not necessarily have roots with pairwise distinct moduli, as in the celebrated Poincaré theorem. In case of existence, we characterize the limit as a particular root of the characteristic polynomial, which depends on the initial conditions and which is not necessarily the unique root with maximum modulus and multiplicity. The result extends to a quite general context the way used to find the Golden mean as the limit of the ratio of consecutive terms of the classical Fibonacci sequence.
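
    As a concrete special case of the result described above (stated here for orientation; the theorem itself also covers repeated roots and roots of equal modulus), the classical Fibonacci recurrence has characteristic polynomial x^2 - x - 1, and the ratio of consecutive terms converges to its leading root, the Golden Ratio:

      a_{n+1} = a_n + a_{n-1}, \qquad
      \lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \varphi = \frac{1 + \sqrt{5}}{2}.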

  12. PWR in-core nuclear fuel management optimization utilizing nodal (non-linear NEM) generalized perturbation theory

    International Nuclear Information System (INIS)

    Maldonado, G.I.; Turinsky, P.J.; Kropaczek, D.J.

    1993-01-01

    The computational capability to efficiently and accurately evaluate reactor core attributes (i.e., k_eff and power distributions as a function of cycle burnup) utilizing a second-order accurate advanced nodal Generalized Perturbation Theory (GPT) model has been developed. The GPT model is derived from the forward non-linear iterative Nodal Expansion Method (NEM) strategy, thereby extending its inherent savings in memory storage and high computational efficiency to also encompass GPT via the preservation of the finite-difference matrix structure. The above development was easily implemented into the existing coarse-mesh finite-difference GPT-based in-core fuel management optimization code FORMOSA-P, thus combining the proven robustness of its adaptive Simulated Annealing (SA) multiple-objective optimization algorithm with a high-fidelity NEM GPT neutronics model to produce a powerful computational tool used to generate families of near-optimum loading patterns for PWRs. (orig.)

  13. Measuring the individual benefit of a medical or behavioral treatment using generalized linear mixed-effects models.

    Science.gov (United States)

    Diaz, Francisco J

    2016-10-15

    We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Two-step algorithm of generalized PAPA method applied to linear programming solution of dynamic matrix control

    International Nuclear Information System (INIS)

    Shimizu, Yoshiaki

    1991-01-01

    In recent complicated nuclear systems, there are increasing demands for highly advanced procedures for various problem-solving tasks. Among them, keen interest has been paid to man-machine communication to improve both safety and economy. Many optimization methods have proved adequate for addressing these points. In this preliminary note, we are concerned with the application of linear programming (LP) for this purpose. First we present a new, superior version of the generalized PAPA method (GEPAPA) to solve LP problems. We then examine its effectiveness when applied to derive dynamic matrix control (DMC) as the LP solution. The approach aims at the above goal through quality control of the processes that appear in the system. (author)

  15. A Bayesian Framework for Generalized Linear Mixed Modeling Identifies New Candidate Loci for Late-Onset Alzheimer's Disease.

    Science.gov (United States)

    Wang, Xulong; Philip, Vivek M; Ananda, Guruprasad; White, Charles C; Malhotra, Ankit; Michalski, Paul J; Karuturi, Krishna R Murthy; Chintalapudi, Sumana R; Acklin, Casey; Sasner, Michael; Bennett, David A; De Jager, Philip L; Howell, Gareth R; Carter, Gregory W

    2018-03-05

    Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized linear mixed model (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo (MCMC) sampling and maximum likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer's Disease Sequencing Project (ADSP). This study contains 570 individuals from 111 families, each with Alzheimer's disease diagnosed at one of four confidence levels. With Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer's disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The coded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with AD-related neuropathology. In summary, this work provides an implementation of a flexible, generalized mixed model approach in a Bayesian framework for association studies. Copyright © 2018, Genetics.
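
    The abstract's central idea, a mixed logistic model with a relatedness (here, family-level) random effect estimated in a Bayesian framework, can be illustrated with the generic sketch below. This is not the authors' Bayes-GLMM implementation; the data are simulated and the single-variant, family-intercept structure is an assumption made for brevity.

      import numpy as np
      import pymc as pm

      rng = np.random.default_rng(4)
      n_fam, per_fam = 30, 5
      family = np.repeat(np.arange(n_fam), per_fam)           # family membership index
      genotype = rng.binomial(2, 0.3, size=n_fam * per_fam)   # minor-allele count (0/1/2)
      u_true = rng.normal(0.0, 1.0, n_fam)                    # family-level random effect
      p = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * genotype + u_true[family])))
      case = rng.binomial(1, p)                               # binary case/control status

      with pm.Model():
          beta0 = pm.Normal("beta0", 0.0, 2.0)
          beta_g = pm.Normal("beta_g", 0.0, 2.0)              # variant effect of interest
          sigma_u = pm.HalfNormal("sigma_u", 1.0)
          u = pm.Normal("u", 0.0, sigma_u, shape=n_fam)       # family random intercepts
          pm.Bernoulli("obs", logit_p=beta0 + beta_g * genotype + u[family],
                       observed=case)
          idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

      print("posterior mean of variant effect:", float(idata.posterior["beta_g"].mean()))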

  16. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    Science.gov (United States)

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  17. Hierarchical Rhetorical Sentence Categorization for Scientific Papers

    Science.gov (United States)

    Rachman, G. H.; Khodra, M. L.; Widyantoro, D. H.

    2018-03-01

    Important information in scientific papers is often conveyed by rhetorical sentences drawn from certain categories. To extract this information, text categorization must be conducted. Previous work on this task has employed word frequency, word semantic similarity, hierarchical classification, and other techniques. This paper therefore presents rhetorical sentence categorization for scientific papers, employing TF-IDF and Word2Vec to capture word frequency and word semantic similarity, together with hierarchical classification. Every experiment is tested with two classifiers, namely Naïve Bayes and linear SVM. The paper shows that the hierarchical classifier outperforms the flat classifier with either TF-IDF or Word2Vec features, although the improvement is only about 2%, from 27.82% with the flat classifier to 29.61% with the hierarchical classifier. It also shows that the hierarchical classifier allows a different learning model to be built for each child category.
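
    The two-level scheme the abstract evaluates can be sketched generically: a parent-category classifier routes each sentence to a per-parent child classifier. The sketch below uses TF-IDF features with a linear SVM on invented toy sentences and labels; it omits the Word2Vec features and does not reproduce the paper's corpus or label set.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # (parent category, child category, sentence) -- toy rhetorical labels.
      data = [
          ("background", "motivation", "prior work has not addressed this problem"),
          ("background", "related_work", "smith et al proposed a similar model"),
          ("own_work", "method", "we train a linear classifier on tf-idf features"),
          ("own_work", "result", "the hierarchical model improves accuracy"),
      ] * 10  # repeated so every class has several examples

      sentences = [s for _, _, s in data]
      parents = [p for p, _, _ in data]

      # Level 1: predict the parent category.
      parent_clf = make_pipeline(TfidfVectorizer(), LinearSVC())
      parent_clf.fit(sentences, parents)

      # Level 2: one child-category classifier per parent category.
      child_clf = {}
      for p in set(parents):
          subset = [(c, s) for pp, c, s in data if pp == p]
          child_clf[p] = make_pipeline(TfidfVectorizer(), LinearSVC())
          child_clf[p].fit([s for _, s in subset], [c for c, _ in subset])

      def predict(sentence):
          parent = parent_clf.predict([sentence])[0]
          return parent, child_clf[parent].predict([sentence])[0]

      print(predict("we fit tf-idf features with a linear svm"))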

  18. Transferring Pre-Trained Deep CNNs for Remote Scene Classification with General Features Learned from Linear PCA Network

    Directory of Open Access Journals (Sweden)

    Jie Wang

    2017-03-01

    Full Text Available Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representations in various computer vision tasks. However, in the field of remote sensing, there are not sufficient images to train a useful deep CNN. Instead, we tend to transfer successful pre-trained deep CNNs to remote sensing tasks. In the transferring process, the generalization power of features in pre-trained deep CNNs plays the key role. In this paper, we propose two promising architectures to extract general features from pre-trained deep CNNs for remote scene classification. These two architectures suggest two directions for improvement. First, before the pre-trained deep CNNs, we design a linear PCA network (LPCANet) to synthesize spatial information of remote sensing images in each spectral channel. This design shortens the spatial “distance” of target and source datasets for pre-trained deep CNNs. Second, we introduce quaternion algebra to LPCANet, which further shortens the spectral “distance” between remote sensing images and images used to pre-train deep CNNs. With five well-known pre-trained deep CNNs, experimental results on three independent remote sensing datasets demonstrate that our proposed framework obtains state-of-the-art results without fine-tuning and feature fusing. This paper also provides a baseline for transferring fresh pre-trained deep CNNs to other remote sensing tasks.

  19. Hierarchical quark mass matrices

    International Nuclear Information System (INIS)

    Rasin, A.

    1998-02-01

    I define a set of conditions that the most general hierarchical Yukawa mass matrices have to satisfy so that the leading rotations in the diagonalization matrix are a pair of (2,3) and (1,2) rotations. In addition to Fritzsch structures, examples of such hierarchical structures also include matrices with (1,3) elements of the same order or even much larger than the (1,2) elements. Such matrices can be obtained in the framework of a flavor theory. To leading order, the values of the angle in the (2,3) plane (s23) and the angle in the (1,2) plane (s12) do not depend on the order in which they are taken when diagonalizing. We find that any of the Cabibbo-Kobayashi-Maskawa matrix parametrizations that consist of at least one (1,2) and one (2,3) rotation may be suitable. In the particular case when the s13 diagonalization angles are sufficiently small compared to the product s12 s23, two special CKM parametrizations emerge: the R12 R23 R12 parametrization follows with s23 taken before the s12 rotation, and vice versa for the R23 R12 R23 parametrization. (author)

  20. Use of reflectance spectrophotometry and colorimetry in a general linear model for the determination of the age of bruises.

    Science.gov (United States)

    Hughes, Vanessa K; Langlois, Neil E I

    2010-12-01

    Bruises can have medicolegal significance such that the age of a bruise may be an important issue. This study sought to determine if colorimetry or reflectance spectrophotometry could be employed to objectively estimate the age of bruises. Based on a previously described method, reflectance spectrophotometric scans were obtained from bruises using a Cary 100 Bio spectrophotometer fitted with a fibre-optic reflectance probe. Measurements were taken from the bruise and a control area. Software was used to calculate the first derivative at 490 and 480 nm; the proportion of oxygenated hemoglobin was calculated using an isosbestic point method and a software application converted the scan data into colorimetry data. In addition, data were recorded on factors that might be associated with the determination of the age of a bruise: subject age, subject sex, degree of trauma, bruise size, skin color, body build, and depth of bruise. From 147 subjects, 233 reflectance spectrophotometry scans were obtained for analysis. The age of the bruises ranged from 0.5 to 231.5 h. A General Linear Model analysis method was used. This revealed that colorimetric measurement of the yellowness of a bruise accounted for 13% of the bruise age. By incorporation of the other recorded data (as above), yellowness could predict up to 32% of the age of a bruise, implying that 68% of the variation was dependent on other factors. However, critical appraisal of the model revealed that the colorimetry method of determining the age of a bruise was affected by skin tone and required a measure of the proportion of oxygenated hemoglobin, which is obtained by spectrophotometric methods. Using spectrophotometry, the first derivative at 490 nm alone accounted for 18% of the bruise age estimate. When additional factors (subject sex, bruise depth and oxygenation of hemoglobin) were included in the General Linear Model this increased to 31%, implying that 69% of the variation was dependent on other factors. This

  1. Predicting the multi-domain progression of Parkinson's disease: a Bayesian multivariate generalized linear mixed-effect model.

    Science.gov (United States)

    Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei

    2017-09-25

    It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using the samples from the posterior distributions of the parameter estimates and random effects, and the predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained particularly for non-motor scores (RMSE and AB: 2.89 and 2.20) compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, the individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate general mixed models hold promise to predict clinical progression of individual outcomes in PD. The data was obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data was entered within 24 h of collection to the Data Management Repository (DMR), which is publicly available (https://pdbp.ninds.nih.gov/data-management).

  2. Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models

    Directory of Open Access Journals (Sweden)

    Xiao Guo

    2018-03-01

    Full Text Available An important issue for robust inference is to examine the stability of the asymptotic level and power of the test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with some particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called “robust-BD”, for the class of “general linear models”. Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys the robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.

  3. Generalized linear mixed model for binary outcomes when covariates are subject to measurement errors and detection limits.

    Science.gov (United States)

    Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D

    2018-01-15

    Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from the constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    Science.gov (United States)

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process raises a multiple testing problem and requires a correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding is the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the generalized linear model, Liquet and Commenges (Stat Probab Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, was developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented in R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
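
    The correction described here amounts to taking the most significant test across candidate codings and calibrating that maximum by resampling. A rough permutation-based sketch (using a Wald statistic rather than the score test, and not the CPMCGLM implementation) could be:

```python
import numpy as np
import statsmodels.api as sm

def max_wald_stat(y, x, cutpoints):
    """Largest Wald chi-square across candidate categorical codings of x."""
    best = 0.0
    for cuts in cutpoints:
        coded = np.digitize(x, cuts)                   # categorical transformation of x
        dummies = np.eye(len(cuts) + 1)[coded][:, 1:]  # reference-coded dummy variables
        X = sm.add_constant(dummies)
        fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
        # Wald statistic for the joint nullity of the coded-variable coefficients.
        stat = fit.params[1:] @ np.linalg.inv(fit.cov_params()[1:, 1:]) @ fit.params[1:]
        best = max(best, stat)
    return best

rng = np.random.default_rng(2)
x = rng.normal(size=300)
y = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x)))
cutpoints = [np.quantile(x, [0.5]),
             np.quantile(x, [1 / 3, 2 / 3]),
             np.quantile(x, [0.25, 0.5, 0.75])]

observed = max_wald_stat(y, x, cutpoints)
# Permutation resampling: break the x-y link to estimate the corrected significance level.
null = [max_wald_stat(rng.permutation(y), x, cutpoints) for _ in range(200)]
p_corrected = np.mean([s >= observed for s in null])
print(f"max Wald stat = {observed:.2f}, corrected p = {p_corrected:.3f}")
```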

  6. SAS macro programs for geographically weighted generalized linear modeling with spatial point data: applications to health research.

    Science.gov (United States)

    Chen, Vivian Yi-Ju; Yang, Tse-Chuan

    2012-08-01

    An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
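
    The SAS macros themselves are not reproduced here, but the core of geographically weighted generalized linear modeling, a kernel weight that decays with distance from a focal location combined with a locally weighted GLM fit, can be illustrated with a minimal Python sketch on simulated point data:

```python
import numpy as np
import statsmodels.api as sm

def gaussian_kernel(d, bandwidth):
    """Gaussian distance-decay weights used in geographically weighted regression."""
    return np.exp(-0.5 * (d / bandwidth) ** 2)

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(200, 2))   # hypothetical point locations
x = rng.normal(size=200)
y = rng.poisson(np.exp(0.2 + 0.5 * x))       # count outcome for a Poisson GLM

focal = np.array([5.0, 5.0])                 # location at which to fit a local model
d = np.linalg.norm(coords - focal, axis=1)
w = gaussian_kernel(d, bandwidth=2.0)

# Locally weighted Poisson GLM: observations far from the focal point get little weight.
X = sm.add_constant(x)
local_fit = sm.GLM(y, X, family=sm.families.Poisson(), freq_weights=w).fit()
print(local_fit.params)
```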

  7. Optimisation by hierarchical search

    Science.gov (United States)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  8. Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-09-03

    We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C, C++ codes and data are freely available.
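
    The objective being maximized is the joint Gaussian log-likelihood under a parametric covariance function; the H-matrix machinery replaces the dense Cholesky factorization in the sketch below with a log-linear approximation. A dense reference illustration with an exponential covariance (the Matérn with smoothness 1/2) on simulated data:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import cho_factor, cho_solve

def gaussian_loglik(params, coords, z):
    """Joint Gaussian log-likelihood with an exponential covariance function."""
    variance, length = params
    C = variance * np.exp(-cdist(coords, coords) / length)
    C[np.diag_indices_from(C)] += 1e-8          # small nugget for numerical stability
    L, lower = cho_factor(C, lower=True)        # dense O(n^3) step replaced by H-matrices
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    alpha = cho_solve((L, lower), z)
    n = len(z)
    return -0.5 * (logdet + z @ alpha + n * np.log(2 * np.pi))

rng = np.random.default_rng(4)
coords = rng.uniform(0, 1, size=(300, 2))       # hypothetical measurement locations
z = rng.normal(size=300)                        # hypothetical de-trended observations

# Crude grid search over (variance, covariance length) in place of a full optimizer.
grid = [(v, l) for v in (0.5, 1.0, 2.0) for l in (0.1, 0.3, 0.5)]
best = max(grid, key=lambda p: gaussian_loglik(p, coords, z))
print("best (variance, length):", best)
```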

  9. Linear algebra

    CERN Document Server

    Said-Houari, Belkacem

    2017-01-01

    This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...

  10. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by an MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes
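
    The "separate models" comparator described above fits each outcome with its own mixed model, extracts the per-subject empirical Bayes (random-effect) predictors, and associates them in a second stage. A minimal sketch with linear mixed models standing in for the GLMMs (simulated data, not the authors' code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_subj, n_obs = 60, 5
subj = np.repeat(np.arange(n_subj), n_obs)
u1 = rng.normal(size=n_subj)
u2 = 0.6 * u1 + 0.8 * rng.normal(size=n_subj)   # correlated subject-level effects
time = np.tile(np.arange(n_obs), n_subj)
df = pd.DataFrame({
    "subj": subj, "time": time,
    "y1": 1.0 + u1[subj] + 0.2 * time + rng.normal(scale=0.5, size=n_subj * n_obs),
    "y2": 2.0 + u2[subj] - 0.1 * time + rng.normal(scale=0.5, size=n_subj * n_obs),
})

# First stage: separate linear mixed models (random intercepts) for each outcome.
eb = {}
for outcome in ("y1", "y2"):
    fit = smf.mixedlm(f"{outcome} ~ time", df, groups=df["subj"]).fit()
    eb[outcome] = np.array([fit.random_effects[g].iloc[0] for g in sorted(fit.random_effects)])

# Second stage: associate the empirical Bayes predictors across outcomes.
r, p = pearsonr(eb["y1"], eb["y2"])
print(f"correlation of EB predictors: r={r:.2f}, p={p:.3f}")
```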

  11. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    Science.gov (United States)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main tasks in this paper: the parameter estimation procedure, a simulation study, and application of the model to real data. In the parameter estimation procedure, the concepts of thresholds and nested random effects and the computational algorithm are described. Simulation data are generated under three conditions to examine the effect of different parameter values of the random effect distributions. The last task is the application of the model to data on poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation is the sub-district (kecamatan), nested in district (kabupaten), and districts are nested in province. For the simulation results, the ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) scales are used. They show that the province parameters have the highest bias but the most stable RRMSE across all conditions. The simulation design needs to be improved by adding other conditions, such as a higher correlation between covariates. Furthermore, in the application of the model to the data, only the number of farmer families and the number of health personnel contribute significantly to the level of poverty in Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).

  12. Handling high predictor dimensionality in slope-unit-based landslide susceptibility models through LASSO-penalized Generalized Linear Model

    KAUST Repository

    Camilo, Daniela Castro

    2017-08-30

    Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 in steps of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of Elevation and the 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performance. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
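
    The shrinkage step described here, a binomial GLM with an L1 penalty so that only the most informative slope-unit predictors keep non-zero coefficients, can be sketched with scikit-learn; the feature matrix below is simulated and this is not the LUDARA script:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_units, n_feat = 500, 120                  # slope units x quantile/shape predictors
X = rng.normal(size=(n_units, n_feat))
true_beta = np.zeros(n_feat)
true_beta[:5] = [1.5, -1.0, 0.8, 0.6, -0.5]
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta))))   # 1 = landslide presence

# LASSO-penalized binomial GLM: the L1 penalty shrinks most coefficients exactly to zero.
Xs = StandardScaler().fit_transform(X)
lasso_glm = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xs, y)
kept = np.flatnonzero(lasso_glm.coef_[0])
print(f"{kept.size} of {n_feat} predictors retained:", kept)
```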

  13. Handling high predictor dimensionality in slope-unit-based landslide susceptibility models through LASSO-penalized Generalized Linear Model

    KAUST Repository

    Camilo, Daniela Castro; Lombardo, Luigi; Mai, Paul Martin; Dou, Jie; Huser, Raphaël

    2017-01-01

    Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 in steps of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of Elevation and the 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performance. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.

  14. Assessing the economic impact of paternal involvement: a comparison of the generalized linear model versus decision analysis trees.

    Science.gov (United States)

    Salihu, Hamisu M; Salemi, Jason L; Nash, Michelle C; Chandler, Kristen; Mbah, Alfred K; Alio, Amina P

    2014-08-01

    Lack of paternal involvement has been shown to be associated with adverse pregnancy outcomes, including infant morbidity and mortality, but the impact on health care costs is unknown. Various methodological approaches have been used in cost-minimization and cost-effectiveness analyses, and it remains unclear how cost estimates vary according to the analytic strategy adopted. We illustrate a methodological comparison of decision analysis modeling and generalized linear modeling (GLM) techniques using a case study that assesses the cost-effectiveness of potential father involvement interventions. We conducted a 12-year retrospective cohort study using a statewide enhanced maternal-infant database that contains both clinical and nonclinical information. A missing name for the father on the infant's birth certificate was used as a proxy for lack of paternal involvement, the main exposure of this study. Using decision analysis modeling and GLM, we compared all infant inpatient hospitalization costs over the first year of life. Costs were calculated from hospital charges using department-level cost-to-charge ratios and were adjusted for inflation. In our cohort of 2,243,891 infants, 9.2% had a father uninvolved during pregnancy. Lack of paternal involvement was associated with higher rates of preterm birth, small-for-gestational age, and infant morbidity and mortality. Both analytic approaches estimate significantly higher per-infant costs for pregnancies with an uninvolved father (decision analysis model: $1,827, GLM: $1,139). This paper provides sufficient evidence that healthcare costs could be significantly reduced through enhanced father involvement during pregnancy, and buttresses the call for a national program to involve fathers in antenatal care.
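
    For the GLM arm of such a cost comparison, right-skewed inpatient costs are often modeled with a Gamma family and log link; the record does not state the family used, so the following sketch on simulated data is only illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
father_uninvolved = rng.binomial(1, 0.09, size=n)       # proxy exposure, ~9% as in the record
preterm = rng.binomial(1, 0.08 + 0.04 * father_uninvolved)
# Simulated right-skewed first-year inpatient costs.
mu = np.exp(7.0 + 0.3 * father_uninvolved + 1.2 * preterm)
cost = rng.gamma(shape=2.0, scale=mu / 2.0)

df = pd.DataFrame({"cost": cost, "uninvolved": father_uninvolved, "preterm": preterm})
glm = smf.glm("cost ~ uninvolved + preterm", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Marginal per-infant cost difference attributable to lack of paternal involvement.
pred1 = glm.predict(df.assign(uninvolved=1)).mean()
pred0 = glm.predict(df.assign(uninvolved=0)).mean()
print(f"estimated incremental cost per infant: ${pred1 - pred0:,.0f}")
```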

  15. A linearizing transformation for the Korteweg-de Vries equation; generalizations to higher-dimensional nonlinear partial differential equations

    NARCIS (Netherlands)

    Dorren, H.J.S.

    1998-01-01

    It is shown that the Korteweg–de Vries (KdV) equation can be transformed into an ordinary linear partial differential equation in the wave number domain. Explicit solutions of the KdV equation can be obtained by subsequently solving this linear differential equation and by applying a cascade of

  16. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

    Full Text Available Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement as to which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address what effect the choice of learning technique, feature set, or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs), and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are
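
    A compact scikit-learn analogue of the three-way comparison (neural network, GLM-style logistic regression, SVM) under cross-validation, with synthetic features standing in for the siRNA feature mappings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for siRNA feature mappings (e.g., position-specific nucleotide encodings).
X, y = make_classification(n_samples=600, n_features=80, n_informative=10, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    "GLM": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {scores.mean():.3f} (sd {scores.std():.3f})")
```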

  17. A Comparison between Linear IRT Observed-Score Equating and Levine Observed-Score Equating under the Generalized Kernel Equating Framework

    Science.gov (United States)

    Chen, Haiwen

    2012-01-01

    In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…

  18. Positive solution of non-square fully Fuzzy linear system of equation in general form using least square method

    Directory of Open Access Journals (Sweden)

    Reza Ezzati

    2014-08-01

    Full Text Available In this paper, we propose the least squares method for computing the positive solution of a non-square fully fuzzy linear system. To this end, we use Kaffman's arithmetic operations on fuzzy numbers \cite{17}. We first consider the existence of an exact solution using the pseudoinverse; if it does not satisfy the positivity condition, we compute the core of the fuzzy vector and then obtain the right and left spreads of the positive fuzzy vector by introducing a constrained least squares problem. Using our proposed method, a non-square fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.

  19. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; therefore, the linear complexity is generally given as an estimated value. Because the linearization method instead calculates from the algorithm of the PRNG, it can determine the lower bound of the linear complexity.

  20. Scalable Hierarchical Algorithms for stochastic PDEs and UQ

    KAUST Repository

    Litvinenko, Alexander; Chávez, Gustavo; Keyes, David; Ltaief, Hatem; Yokota, Rio

    2015-01-01

    number of degrees of freedom in the discretization. The storage is reduced to the log-linear as well. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered

  1. Zeolitic materials with hierarchical porous structures.

    Science.gov (United States)

    Lopez-Orozco, Sofia; Inayat, Amer; Schwab, Andreas; Selvam, Thangaraj; Schwieger, Wilhelm

    2011-06-17

    During the past several years, different kinds of hierarchical structured zeolitic materials have been synthesized due to their highly attractive properties, such as superior mass/heat transfer characteristics, lower restriction of the diffusion of reactants in the mesopores, and low pressure drop. Our contribution provides general information regarding types and preparation methods of hierarchical zeolitic materials and their relative advantages and disadvantages. Thereafter, recent advances in the preparation and characterization of hierarchical zeolitic structures within the crystallites by post-synthetic treatment methods, such as dealumination or desilication; and structured devices by in situ and ex situ zeolite coatings on open-cellular ceramic foams as (non-reactive as well as reactive) supports are highlighted. Specific advantages of using hierarchical zeolitic catalysts/structures in selected catalytic reactions, such as benzene to phenol (BTOP) and methanol to olefins (MTO) are presented. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Evaluating Hierarchical Structure in Music Annotations.

    Science.gov (United States)

    McFee, Brian; Nieto, Oriol; Farbood, Morwaread M; Bello, Juan Pablo

    2017-01-01

    Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.

  3. Evaluating Hierarchical Structure in Music Annotations

    Directory of Open Access Journals (Sweden)

    Brian McFee

    2017-08-01

    Full Text Available Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.

  4. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  5. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.

  6. Towards a Robust Solution of the Non-Linear Kinematics for the General Stewart Platform with Estimation of Distribution Algorithms

    Directory of Open Access Journals (Sweden)

    Eusebio Eduardo Hernández Martinez

    2013-01-01

    Full Text Available In robotics, solving the direct kinematics problem (DKP) for parallel robots is very often more difficult and time-consuming than for their serial counterparts. The problem is stated as follows: given the joint variables, the Cartesian variables should be computed, namely the pose of the mobile platform. Most of the time, the DKP requires solving a non-linear system of equations. In addition, given that the system could be non-convex, Newton or Quasi-Newton (Dogleg) based solvers get trapped on local minima. The capacity of such kinds of solvers to find an adequate solution strongly depends on the starting point. A well-known problem is the selection of such a starting point, which requires a priori information about the neighbouring region of the solution. In order to circumvent this issue, this article proposes an efficient method to select and to generate the starting point based on probabilistic learning. Experiments and discussion are presented to show the method's performance. The method successfully avoids getting trapped on local minima without the need for human intervention, which increases its robustness when compared with a single Dogleg approach. This proposal can be extended to other structures, to any non-linear system of equations, and of course, to non-linear optimization problems.

  7. Catalysis with hierarchical zeolites

    DEFF Research Database (Denmark)

    Holm, Martin Spangsberg; Taarning, Esben; Egeblad, Kresten

    2011-01-01

    Hierarchical (or mesoporous) zeolites have attracted significant attention during the first decade of the 21st century, and so far this interest continues to increase. There have already been several reviews giving detailed accounts of the developments emphasizing different aspects of this research topic. Until now, the main reason for developing hierarchical zeolites has been to achieve heterogeneous catalysts with improved performance, but this particular facet has not yet been reviewed in detail. Thus, the present paper summarizes and categorizes the catalytic studies utilizing hierarchical zeolites that have been reported hitherto. Prototypical examples from some of the different categories of catalytic reactions that have been studied using hierarchical zeolite catalysts are highlighted. This clearly illustrates the different ways that improved performance can be achieved with this family...

  8. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping with changing and increasing demands. Two-layer networks consist of one backbone network, which interconnects cluster networks. The clusters consist of nodes and links, which connect the nodes. One node in each cluster is a hub node, and the backbone interconnects the hub nodes of each cluster and thus the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e., selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks...

  9. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them, 3D FE model of hybrid composites with nanoengineered matrix, fiber bundle model of UD composites with hierarchically clustered fibers and 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them, the investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels and using the nanoreinforcement effects. The main future directions...

  10. Programming with Hierarchical Maps

    DEFF Research Database (Denmark)

    Ørbæk, Peter

    This report describes the hierarchical maps used as a central data structure in the Corundum framework. We describe its most prominent features, argue for its usefulness and briefly describe some of the software prototypes implemented using the technology....

  11. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    Science.gov (United States)

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.

  12. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: IV. Generalized matrix analysis of linear compartment systems.

    Science.gov (United States)

    Langenbucher, Frieder

    2005-01-01

    A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and by the initial condition, i.e., in which compartment(s) the drug is present at the beginning. The generalized solution consists of the time profiles of the drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients and, finally, the full time profiles for a specified range of time values.
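
    The generalized solution described here, time profiles of drug amount in every compartment from the rate-constant matrix and the initial condition, is the matrix exponential of the rate matrix; a Python equivalent of the worksheet's matrix-based approach, with hypothetical two-compartment rate constants, is:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-compartment model: k12/k21 exchange and k10 elimination from compartment 1.
k12, k21, k10 = 0.3, 0.1, 0.2
A = np.array([[-(k12 + k10), k21],
              [k12,          -k21]])   # rate-constant matrix (column = source compartment)
x0 = np.array([100.0, 0.0])            # initial condition: full dose in compartment 1

times = np.linspace(0, 24, 13)
# Amount in each compartment over time: x(t) = expm(A t) x0, a polyexponential profile.
profiles = np.array([expm(A * t) @ x0 for t in times])
for t, (c1, c2) in zip(times, profiles):
    print(f"t={t:5.1f} h  compartment1={c1:7.2f}  compartment2={c2:7.2f}")
```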

  13. Generalized two-dimensional (2D) linear system analysis metrics (GMTF, GDQE) for digital radiography systems including the effect of focal spot, magnification, scatter, and detector characteristics.

    Science.gov (United States)

    Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen

    2010-03-01

    The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.

  14. Generalized randomly amplified linear system driven by Gaussian noises: Extreme heavy tail and algebraic correlation decay in plasma turbulence

    International Nuclear Information System (INIS)

    Steinbrecher, Gyoergy; Weyssow, B.

    2004-01-01

    The extreme heavy tail and the power-law decay of the turbulent flux correlation observed in hot magnetically confined plasmas are modeled by a system of coupled Langevin equations describing a continuous time linear randomly amplified stochastic process where the amplification factor is driven by a superposition of colored noises which, in a suitable limit, generate a fractional Brownian motion. An exact analytical formula for the power-law tail exponent β is derived. The extremely small value of the heavy tail exponent and the power-law distribution of laminar times also found experimentally are obtained, in a robust manner, for a wide range of input values, as a consequence of the (asymptotic) self-similarity property of the noise spectrum. As a by-product, a new representation of the persistent fractional Brownian motion is obtained

  15. GPCG - generalized preconditioned CG method and its use with non-linear and non-symmetric displacement decomposition preconditioners

    Czech Academy of Sciences Publication Activity Database

    Blaheta, Radim

    2002-01-01

    Roč. 9, 6/7 (2002), s. 525-550 ISSN 1070-5325 Grant - others:INCO Copernicus(XE) KIT977006 Institutional research plan: CEZ:AV0Z3086906 Keywords : elasticity * displacement decomposition Subject RIV: BA - General Mathematics Impact factor: 0.706, year: 2002

  16. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  17. Distributed Time-Varying Formation Robust Tracking for General Linear Multiagent Systems With Parameter Uncertainties and External Disturbances.

    Science.gov (United States)

    Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang

    2017-05-18

    This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.

  18. Observer-based distributed adaptive fault-tolerant containment control of multi-agent systems with general linear dynamics.

    Science.gov (United States)

    Ye, Dan; Chen, Mengmeng; Li, Kui

    2017-11-01

    In this paper, we consider the distributed containment control problem of multi-agent systems with actuator bias faults based on observer method. The objective is to drive the followers into the convex hull spanned by the dynamic leaders, where the input is unknown but bounded. By constructing an observer to estimate the states and bias faults, an effective distributed adaptive fault-tolerant controller is developed. Different from the traditional method, an auxiliary controller gain is designed to deal with the unknown inputs and bias faults together. Moreover, the coupling gain can be adjusted online through the adaptive mechanism without using the global information. Furthermore, the proposed control protocol can guarantee that all the signals of the closed-loop systems are bounded and all the followers converge to the convex hull with bounded residual errors formed by the dynamic leaders. Finally, a decoupled linearized longitudinal motion model of the F-18 aircraft is used to demonstrate the effectiveness. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. On the generalization of linear least mean squares estimation to quantum systems with non-commutative outputs

    Energy Technology Data Exchange (ETDEWEB)

    Amini, Nina H. [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); CNRS, Laboratoire des Signaux et Systemes (L2S) CentraleSupelec, Gif-sur-Yvette (France); Miao, Zibo; Pan, Yu; James, Matthew R. [Australian National University, ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Canberra, ACT (Australia); Mabuchi, Hideo [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States)

    2015-12-15

    The purpose of this paper is to study the problem of generalizing the Belavkin-Kalman filter to the case where the classical measurement signal is replaced by a fully quantum non-commutative output signal. We formulate a least mean squares estimation problem that involves a non-commutative system as the filter processing the non-commutative output signal. We solve this estimation problem within the framework of non-commutative probability. Also, we find the necessary and sufficient conditions which make these non-commutative estimators physically realizable. These conditions are restrictive in practice. (orig.)

  20. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. The approach was originally suggested by Andersen et al. (Biometrika, 90, 2003, 335), who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al. (Lifetime Data Anal., 15, 2009, 241), which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error

  1. The Relationship between Spousal Age Difference and Violence against Wife in Nigeria: A Generalized Linear Modelling Approach

    Directory of Open Access Journals (Sweden)

    Anthony C. Akpanta

    2017-11-01

    Full Text Available The act of violence against a wife is condemnable and attracts various legal penalties globally. This article attempts to find a link between spousal age difference and violence (emotional, physical, and sexual) against wives in Nigeria. The results show that wives who are older than their partners are more likely to experience sexual and emotional violence; wives who are the same age as their husbands are more likely to experience sexual violence; wives who are 1-4 years younger than their husbands are more likely to experience physical violence; while wives 5 or more years younger than their husbands are generally less likely to experience any form of violence.

  2. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    International Nuclear Information System (INIS)

    Lipparini, Filippo; Scalmani, Giovanni; Frisch, Michael J.; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Mennucci, Benedetta

    2014-01-01

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute

  3. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    Energy Technology Data Exchange (ETDEWEB)

    Lipparini, Filippo, E-mail: flippari@uni-mainz.de [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Scalmani, Giovanni; Frisch, Michael J. [Gaussian, Inc., 340 Quinnipiac St. Bldg. 40, Wallingford, Connecticut 06492 (United States); Lagardère, Louis [Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Stamm, Benjamin [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Cancès, Eric [Université Paris-Est, CERMICS, Ecole des Ponts and INRIA, 6 and 8 avenue Blaise Pascal, 77455 Marne-la-Vallée Cedex 2 (France); Maday, Yvon [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Institut Universitaire de France, Paris, France and Division of Applied Maths, Brown University, Providence, Rhode Island 02912 (United States); Piquemal, Jean-Philip [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Mennucci, Benedetta [Dipartimento di Chimica e Chimica Industriale, Università di Pisa, Via Risorgimento 35, 56126 Pisa (Italy)

    2014-11-14

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.

  4. Hierarchical Bayesian Models of Subtask Learning

    Science.gov (United States)

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…

  5. Bias factors associated with assessing the validity of university teaching: a hierarchical linear model.

    Directory of Open Access Journals (Sweden)

    Raciel Acevedo Alvarez

    2006-12-01

    Full Text Available The present study analyzes the student, professor, and class-environment variables associated with bias in the questionnaires completed by students for university teaching evaluation. The participants were 374 students (mean age 19.9) and 29 professors (mean age 36) from three departments at the Universidad de Costa Rica (UCR), Guanacaste campus. Hierarchical linear models, a quantitative methodology that allows the determinants affecting the results to be assessed, were used for the data analysis. Of all the determinants included in the study, only four (number of students in the course, number of years at the institution, type of department, and expected grade) were slightly associated with this type of evaluation. The results demonstrate that these evaluations are valid and are only slightly affected by factors external to teaching competence.

  6. Modeling Learning in Doubly Multilevel Binary Longitudinal Data Using Generalized Linear Mixed Models: An Application to Measuring and Explaining Word Learning.

    Science.gov (United States)

    Cho, Sun-Joo; Goodwin, Amanda P

    2016-04-01

    When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected in a complex data structure, involving multiple aspects of word knowledge, multilevel reader data, multilevel item data, a longitudinal design, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data with such complexity. Results from this application provide a deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.

  7. Hierarchical wave functions revisited

    International Nuclear Information System (INIS)

    Li Dingping.

    1997-11-01

    We study the hierarchical wave functions on a sphere and on a torus. We simplify some wave functions on a sphere or a torus using the analytic properties of wave functions. The open question of constructing the wave function for quasi-electron excitations on a torus is also solved in this paper. (author)

  8. Hierarchical Porous Structures

    Energy Technology Data Exchange (ETDEWEB)

    Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-07

    Materials design is often at the forefront of technological innovation. While there has always been a push to generate increasingly low density materials, such as aerogels or hydrogels, more recently the idea of bicontinuous structures has come into play. This review covers some of the methods and applications for generating both porous and hierarchically porous structures.

  9. A GENERALIZED NON-LINEAR METHOD FOR DISTORTION CORRECTION AND TOP-DOWN VIEW CONVERSION OF FISH EYE IMAGES

    Directory of Open Access Journals (Sweden)

    Vivek Singh Bawa

    2017-06-01

    Full Text Available Advanced driver assistance systems (ADAS) have been developed to automate and modify vehicles for safety and a better driving experience. Among all computer vision modules in ADAS, 360-degree surround view generation of the vehicle's immediate surroundings is very important, due to applications in on-road traffic assistance, parking assistance, etc. This paper presents a novel algorithm for fast and computationally efficient transformation of input fisheye images into the required top-down view. It also presents a generalized framework for generating the top-down view of images captured by fish-eye lens cameras mounted on vehicles, irrespective of pitch or tilt angle. The proposed approach comprises two major steps: correcting the fish-eye lens images to rectilinear images, and generating the top-view perspective of the corrected images. The images captured by the fish-eye lens exhibit barrel distortion, for which a nonlinear and non-iterative correction method is used. Thereafter, homography is used to obtain the top-down view of the corrected images. The approach aims to provide a wide, distortion-free view of the vehicle's surroundings and a camera-perspective-independent top-down view at minimal computational cost, which is essential given the limited computing power available on vehicles.
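    The top-view (homography) step described above can be illustrated with a short, hedged sketch using OpenCV; this is not the authors' implementation, and the point correspondences, image size and output size below are hypothetical placeholders chosen only to show how a ground-plane homography maps a corrected image to a bird's-eye view.

      # Minimal sketch of the perspective (top-down) conversion step with OpenCV.
      # Point correspondences, image size and output size are hypothetical placeholders.
      import cv2
      import numpy as np

      img = np.zeros((800, 1280, 3), dtype=np.uint8)   # stand-in for an already-undistorted camera frame

      # Four points on the ground plane in the corrected image (pixels) ...
      src = np.float32([[420, 560], [860, 560], [1100, 720], [180, 720]])
      # ... and where they should land in the top-down (bird's-eye) view
      dst = np.float32([[300, 0], [500, 0], [500, 400], [300, 400]])

      H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography
      top_view = cv2.warpPerspective(img, H, (800, 400))
      print(top_view.shape)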

  10. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete event networks for robotic systems. Following the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for modelling and control of complex robotic systems. Such a system is structured, controlled and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use and produces representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local control and Petri model of one subsystem of the global robotic system. Since Petri models are simple enough to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets; discrete event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels and its sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets offers the opportunity to view the timing of the robotic process; from transport and transmission times measured on the spot, graphics are obtained showing the average time for the transport activity, using the parameter sets of finished products individually.

  11. A Hierarchical Dispatch Structure for Distribution Network Pricing

    OpenAIRE

    Yuan, Zhao; Hesamzadeh, Mohammad Reza

    2015-01-01

    This paper presents a hierarchical dispatch structure for efficient distribution network pricing. The dispatch coordination problem in the context of hierarchical network operators is addressed. We formulate decentralized generation dispatch as a bilevel optimization problem in which the main network operator and the connected distribution network operator optimize their costs at two levels. By using Karush-Kuhn-Tucker conditions and Fortuny-Amat McCarl linearization, the bilevel optimization ...

  12. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    Science.gov (United States)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    Survival time of patients with a disease and the incidence of that particular disease (a count) are frequently observed in medical studies with data of a clustered nature. In many cases, however, the survival times and the counts can be correlated, in the sense that diseases that occur rarely may have shorter survival times, or vice versa. For this reason, joint modelling of these two variables provides more informative and improved results compared with modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson regression model to jointly model survival and counts. As the Artificial Neural Network (ANN) has become a powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and counts of Dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, a Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare the model fits, measures such as root mean square error (RMSE), absolute mean error (AME) and the correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
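    The three fit measures named above can be computed directly from observed and predicted values; the short sketch below is only illustrative, and the observation and prediction arrays are made-up placeholders rather than data from the study.

      # Sketch of the model-comparison measures used above: RMSE, absolute mean
      # error (AME) and correlation coefficient (R).  All values are placeholders.
      import numpy as np

      def fit_metrics(y_obs, y_pred):
          resid = y_obs - y_pred
          rmse = np.sqrt(np.mean(resid ** 2))
          ame = np.mean(np.abs(resid))
          r = np.corrcoef(y_obs, y_pred)[0, 1]
          return rmse, ame, r

      y_obs = np.array([12.0, 30.0, 7.5, 22.0, 15.0])
      predictions = {"GLMM": np.array([14.0, 27.0, 9.0, 20.0, 16.5]),
                     "GRNN": np.array([12.5, 29.0, 8.0, 21.5, 15.2])}
      for name, y_pred in predictions.items():
          print(name, fit_metrics(y_obs, y_pred))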

  13. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-11-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arithmetic on large, dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.

  14. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander; Sun, Ying; Genton, Marc G.; Keyes, David E.

    2017-01-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arithmetic on large, dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
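    To make the bottleneck concrete, the sketch below evaluates the joint Gaussian log-likelihood with a Matérn covariance using a plain dense Cholesky factorization; this is the $\mathcal{O}(n^3)$ computation that the hierarchical ($\mathcal{H}$-) matrix approximation replaces. The locations and parameter values are synthetic placeholders, and the HLIBpro/HLIBCov machinery itself is not shown.

      # Dense (non-hierarchical) sketch of the Gaussian log-likelihood with a
      # Matern covariance; parameter values and locations are synthetic.
      import numpy as np
      from scipy.spatial.distance import cdist
      from scipy.special import gamma, kv
      from scipy.linalg import cho_factor, cho_solve

      def matern_cov(locs, variance=1.0, length=0.3, nu=1.5, nugget=1e-8):
          d = cdist(locs, locs)
          scaled = np.sqrt(2.0 * nu) * d / length
          cov = np.full_like(d, variance)                      # diagonal (d = 0) stays at the variance
          mask = scaled > 0
          cov[mask] = (variance * (2.0 ** (1.0 - nu) / gamma(nu))
                       * scaled[mask] ** nu * kv(nu, scaled[mask]))
          return cov + nugget * np.eye(len(locs))

      def gaussian_loglik(z, cov):
          c, low = cho_factor(cov, lower=True)                 # dense Cholesky, O(n^3)
          alpha = cho_solve((c, low), z)
          logdet = 2.0 * np.sum(np.log(np.diag(c)))
          n = len(z)
          return -0.5 * (z @ alpha + logdet + n * np.log(2.0 * np.pi))

      rng = np.random.default_rng(0)
      locs = rng.uniform(size=(500, 2))                        # synthetic locations
      cov = matern_cov(locs)
      z = np.linalg.cholesky(cov) @ rng.standard_normal(500)   # synthetic observations
      print(gaussian_loglik(z, cov))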

  15. Hierarchical species distribution models

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.

  16. Hierarchically Structured Electrospun Fibers

    Science.gov (United States)

    2013-01-07

    Figure 4 (caption fragment): Examples of electrospun bio-mimics of natural hierarchical structures: (A) lotus leaf; (B) pillared poly(methyl methacrylate) (PMMA) electrospun fiber mimic; (C) silver ragwort leaf; (D) electrospun fiber mimic made from nylon 6. Further fragments mention protein-containing domains in the surrounding EVA fibers [115] and the wide variety of core-shell fibers that have been generated, including PCL/gelatin.

  17. Hierarchical video summarization

    Science.gov (United States)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing, where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream, and propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.

  18. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  19. Distributed hierarchical radiation monitoring system

    International Nuclear Information System (INIS)

    Barak, D.

    1985-01-01

    A solution to the problem of monitoring the radiation levels in and around a nuclear facility is presented in this paper. This is a special case of a large-scale general-purpose data acquisition system with high reliability, high availability and short maintenance time. The physical layout of the detectors in the plant and the strict control demands dictated a distributed and hierarchical system. The system comprises three levels, each containing modules. Level one contains the Control modules, which collect data from groups of detectors and execute emergency local control tasks. Level two contains the Group controllers, which concentrate data from the Control modules and enable local display and communication. The system computer is in level three, enabling the plant operator to receive information from the detectors and execute control tasks. The described system was built and has been operating successfully for about two years. (author)

  20. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

    Science.gov (United States)

    Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

    2017-09-01

    The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
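    As a rough illustration of the quantities involved (not the authors' R implementation), marginal and conditional R2 for a GLMM can be assembled from variance components on the link scale; the numbers below are placeholders, and the observation-level variance must be obtained in a distribution-specific way, for example via the delta method mentioned above.

      # Sketch of marginal and conditional R2 from GLMM variance components on
      # the link scale.  All numeric values are placeholders; var_obs is the
      # distribution-specific observation-level variance (e.g. via the delta method).
      def r2_glmm(var_fixed, var_random, var_obs):
          total = var_fixed + var_random + var_obs
          r2_marginal = var_fixed / total                      # fixed effects only
          r2_conditional = (var_fixed + var_random) / total    # fixed plus random effects
          return r2_marginal, r2_conditional

      print(r2_glmm(var_fixed=0.85, var_random=0.40, var_obs=1.20))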

  1. Using a generalized linear mixed model approach to explore the role of age, motor proficiency, and cognitive styles in children's reach estimation accuracy.

    Science.gov (United States)

    Caçola, Priscila M; Pant, Mohan D

    2014-10-01

    The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.

  2. Reconstruction of X-ray spectra of clinical linear accelerators using the generalized simulated annealing method; Reconstrucao de espectros de raios-X de aceleradores lineares clinicos usando o metodo de recozimento simulado generalizado

    Energy Technology Data Exchange (ETDEWEB)

    Manrique, John Peter O.; Costa, Alessandro M., E-mail: johnp067@usp.br, E-mail: amcosta@usp.br [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil)

    2016-07-01

    The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to patients undergoing radiation therapy, treatment planning systems (TPS) are used; these make use of convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum to perform three-dimensional dose calculations, thus ensuring better accuracy in tumour control probabilities while keeping normal tissue complication probabilities low. In this work we obtained the photon fluence spectrum of the 6 MV X-ray beam of the SIEMENS ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the quasi-equilibrium statistics of Tsallis. For validation of the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV beam using Monte Carlo simulation with the Penelope code, and from the PDD we then calculated the beam quality index TPR{sub 20/10}. (author)
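    A hedged sketch of this kind of inverse problem is given below: discrete spectral weights are recovered from aluminum transmission data by minimizing a misfit with SciPy's dual_annealing optimizer, which implements a generalized simulated annealing scheme in the Tsallis spirit. This is not the authors' code; the attenuation coefficients, bin structure and "measured" transmissions are fabricated placeholders.

      # Sketch: recover discrete spectral weights w_i from transmission data
      # T(t) ~ sum_i w_i * exp(-mu_i * t) by minimizing the misfit with a
      # generalized-simulated-annealing-style optimizer.  All numbers are fabricated.
      import numpy as np
      from scipy.optimize import dual_annealing

      mu = np.array([0.50, 0.30, 0.20, 0.15])          # cm^-1 per energy bin (assumed)
      thickness = np.linspace(0.0, 10.0, 15)           # cm of aluminum
      true_w = np.array([0.1, 0.3, 0.4, 0.2])
      measured = np.exp(-np.outer(thickness, mu)) @ true_w   # synthetic "measurement"

      def misfit(w):
          w = np.asarray(w)
          w = w / (w.sum() + 1e-12)                    # spectrum normalized to unit area
          model = np.exp(-np.outer(thickness, mu)) @ w
          return np.sum((model - measured) ** 2)

      result = dual_annealing(misfit, bounds=[(0.0, 1.0)] * len(mu), seed=1)
      print(result.x / result.x.sum())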

  3. Context updates are hierarchical

    Directory of Open Access Journals (Sweden)

    Anton Karl Ingason

    2016-10-01

    Full Text Available This squib studies the order in which elements are added to the shared context of interlocutors in a conversation. It focuses on context updates within one hierarchical structure and argues that structurally higher elements are entered into the context before lower elements, even if the structurally higher elements are pronounced after the lower elements. The crucial data are drawn from a comparison of relative clauses in two head-initial languages, English and Icelandic, and two head-final languages, Korean and Japanese. The findings have consequences for any theory of a dynamic semantics.

  4. Detecting Hierarchical Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2012-01-01

    Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link prediction than competing models. Our model can be used to detect whether a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.

  5. A generalized partially linear mean-covariance regression model for longitudinal proportional data, with applications to the analysis of quality of life data from cancer clinical trials.

    Science.gov (United States)

    Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng

    2017-05-30

    Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimating equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values, and it can capture dynamic effects of time or other variables of interest on both the mean and the covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Hierarchical partial order ranking

    International Nuclear Information System (INIS)

    Carlsen, Lars

    2008-01-01

    Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters are to be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR), which remedies this problem. In HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived, representing the rankings corresponding to the single groups of descriptors. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritisation of polluted sites is given. - Hierarchical partial order ranking of polluted sites has been developed for prioritisation based on a large number of parameters

  7. On the characterization of dynamic supramolecular systems: a general mathematical association model for linear supramolecular copolymers and application on a complex two-component hydrogen-bonding system.

    Science.gov (United States)

    Odille, Fabrice G J; Jónsson, Stefán; Stjernqvist, Susann; Rydén, Tobias; Wärnmark, Kenneth

    2007-01-01

    A general mathematical model for the characterization of the dynamic (kinetically labile) association of supramolecular assemblies in solution is presented. It is an extension of the equal K (EK) model, through the stringent use of linear algebra, to allow for the simultaneous presence of an unlimited number of different units in the resulting assemblies. It allows for the analysis of highly complex dynamic equilibrium systems in solution, including both supramolecular homo- and copolymers, without recourse to extensive approximations, in a field in which other analytical methods are difficult. The derived mathematical methodology makes it possible to analyze dynamic systems such as supramolecular copolymers with respect to, for instance, the degree of polymerization, the distribution of a given monomer over different copolymers, and its position in an aggregate. It is to date the only general means to characterize weak supramolecular systems. The model was fitted to NMR dilution titration data using the program Matlab, and a detailed algorithm for the optimization of the different parameters has been developed. The methodology is applied to a case study, a hydrogen-bonded supramolecular system, salen 4 + porphyrin 5. The system is formally a two-component system but in reality a three-component system. This results in a complex dynamic system in which all monomers are associated with each other by hydrogen bonding with different association constants, resulting in homo- and copolymers 4n5m as well as cyclic structures 6 and 7, in addition to free 4 and 5. The system was analyzed by extensive NMR dilution titrations at variable temperatures. All chemical shifts observed at different temperatures were used in the fitting to obtain the ΔH° and ΔS° values producing the best global fit. From the derived general mathematical expressions, system 4+5 could be characterized with respect to the above-mentioned parameters.

  8. Flow and transport in hierarchically fractured systems

    International Nuclear Information System (INIS)

    Karasaki, K.

    1993-01-01

    Preliminary results indicate that flow in the saturated zone at Yucca Mountain is controlled by fractures. A current conceptual model assumes that the flow in the fracture system can be approximated by a three-dimensionally interconnected network of linear conduits. The overall flow system of rocks at Yucca Mountain is considered to consist of hierarchically structured heterogeneous fracture systems of multiple scales. A case study suggests that it is more appropriate to use the flow parameters of the large fracture system for predicting the first arrival time, rather than using the bulk average parameters of the total system

  9. Transmutations across hierarchical levels

    International Nuclear Information System (INIS)

    O'Neill, R.V.

    1977-01-01

    The development of large-scale ecological models depends implicitly on a concept known as hierarchy theory which views biological systems in a series of hierarchical levels (i.e., organism, population, trophic level, ecosystem). The theory states that an explanation of a biological phenomenon is provided when it is shown to be the consequence of the activities of the system's components, which are themselves systems in the next lower level of the hierarchy. Thus, the behavior of a population is explained by the behavior of the organisms in the population. The initial step in any modeling project is, therefore, to identify the system components and the interactions between them. A series of examples of transmutations in aquatic and terrestrial ecosystems are presented to show how and why changes occur. The types of changes are summarized and possible implications of transmutation for hierarchy theory, for the modeler, and for the ecological theoretician are discussed

  10. Trees and Hierarchical Structures

    CERN Document Server

    Haeseler, Arndt

    1990-01-01

    The "raison d'etre" of hierarchical dustering theory stems from one basic phe­ nomenon: This is the notorious non-transitivity of similarity relations. In spite of the fact that very often two objects may be quite similar to a third without being that similar to each other, one still wants to dassify objects according to their similarity. This should be achieved by grouping them into a hierarchy of non-overlapping dusters such that any two objects in ~ne duster appear to be more related to each other than they are to objects outside this duster. In everyday life, as well as in essentially every field of scientific investigation, there is an urge to reduce complexity by recognizing and establishing reasonable das­ sification schemes. Unfortunately, this is counterbalanced by the experience of seemingly unavoidable deadlocks caused by the existence of sequences of objects, each comparatively similar to the next, but the last rather different from the first.

  11. Statistical dynamics of ultradiffusion in hierarchical systems

    International Nuclear Information System (INIS)

    Gardner, S.

    1987-01-01

    In many types of disordered systems which exhibit frustration and competition, an ultrametric topology is found to exist in the space of allowable states. This ultrametric topology of states is associated with a hierarchical relaxation process called ultradiffusion. Ultradiffusion occurs in hierarchical non-linear (HNL) dynamical systems when constraints cause large scale, slow modes of motion to be subordinated to small scale, fast modes. Examples of ultradiffusion are found throughout condensed matter physics and critical phenomena (e.g. the states of spin glasses), in biophysics (e.g. the states of Hopfield networks) and in many other fields including layered computing based upon nonlinear dynamics. The statistical dynamics of ultradiffusion can be treated as a random walk on an ultrametric space. For reversible bifurcating ultrametric spaces the evolution equation governing the probability of a particle being found at site i at time t has a highly degenerate transition matrix. This transition matrix has a fractal geometry similar to the replica form proposed for spin glasses. The authors invert this fractal matrix using a recursive quad-tree (QT) method. Possible applications of hierarchical systems to communications and symbolic computing are discussed briefly

  12. Linear shaped charge

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, David; Stofleth, Jerome H.; Saul, Venner W.

    2017-07-11

    Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.

  13. Assessment of Brown Bear's (Ursus arctos syriacus) Winter Habitat Using Geographically Weighted Regression and Generalized Linear Model in South of Iran

    Directory of Open Access Journals (Sweden)

    A. A. Zarei

    2016-03-01

    Full Text Available Winter dens are one of the important components of the brown bear's (Ursus arctos syriacus) habitat, affecting reproduction and survival. Therefore, identifying the factors affecting habitat selection and suitable denning areas is necessary for the conservation of our largest carnivore. We used Geographically Weighted Logistic Regression (GWLR) and a Generalized Linear Model (GLM) for modelling the suitability of denning habitat in the Kouhkhom region in Fars province. In the present research, 20 dens (presence locations) and 20 caves where signs of bear were not found (absence locations) were used as the dependent variable, and six environmental factors were used for each location as independent variables. The results of the GLM showed that distance to settlements, altitude, and distance to water were the most important parameters affecting the suitability of the brown bear's denning habitat. The results of the GWLR showed significant local variation in the relationship between the occurrence of brown bear dens and distance to settlements. Based on the results of both models, suitable habitats for denning of the species are impassable areas in the mountains that are inaccessible to humans.
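    The GLM component of such a presence/absence analysis can be sketched as a binomial logistic regression on covariates like those named above; the sketch below is illustrative only (it is not the authors' model or data), and all covariates are standardized synthetic placeholders.

      # Sketch of the GLM part only: logistic regression of den presence/absence
      # on covariates similar to those in the study.  Data are synthetic placeholders.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n = 80
      dist_settlement = rng.normal(size=n)     # standardized distance to settlements (placeholder)
      altitude = rng.normal(size=n)            # standardized altitude (placeholder)
      dist_water = rng.normal(size=n)          # standardized distance to water (placeholder)
      p = 1.0 / (1.0 + np.exp(-(0.9 * dist_settlement + 0.6 * altitude - 0.4 * dist_water)))
      dens = pd.DataFrame({"present": rng.binomial(1, p),
                           "dist_settlement": dist_settlement,
                           "altitude": altitude,
                           "dist_water": dist_water})

      model = smf.glm("present ~ dist_settlement + altitude + dist_water",
                      data=dens, family=sm.families.Binomial()).fit()
      print(model.summary())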

  14. Development of the complex general linear model in the Fourier domain: application to fMRI multiple input-output evoked responses for single subjects.

    Science.gov (United States)

    Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.

  15. Equivalence of linear canonical transform domains to fractional Fourier domains and the bicanonical width product: a generalization of the space-bandwidth product.

    Science.gov (United States)

    Oktem, Figen S; Ozaktas, Haldun M

    2010-08-01

    Linear canonical transforms (LCTs) form a three-parameter family of integral transforms with wide application in optics. We show that LCT domains correspond to scaled fractional Fourier domains and thus to scaled oblique axes in the space-frequency plane. This allows LCT domains to be labeled and ordered by the corresponding fractional order parameter and provides insight into the evolution of light through an optical system modeled by LCTs. If a set of signals is highly confined to finite intervals in two arbitrary LCT domains, the space-frequency (phase space) support is a parallelogram. The number of degrees of freedom of this set of signals is given by the area of this parallelogram, which is equal to the bicanonical width product but usually smaller than the conventional space-bandwidth product. The bicanonical width product, which is a generalization of the space-bandwidth product, can provide a tighter measure of the actual number of degrees of freedom, and allows us to represent and process signals with fewer samples.

  16. Application of a general self-consistency scheme in the linear combination of atomic orbitals formalism to the electronic and structural properties of Si and W

    International Nuclear Information System (INIS)

    Chan, C.T.; Vanderbilt, D.; Louie, S.G.; Materials and Molecular Research Division, Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720)

    1986-01-01

    We present a general self-consistency procedure formulated in momentum space for electronic structure and total-energy calculations of crystalline solids. It is shown that both the charge density and the change in the Hamiltonian matrix elements in each iteration can be calculated in a straight-forward fashion once a set of overlap matrices is computed. The present formulation has the merit of bringing the self-consistency problem for different basis sets to the same footing. The scheme is used to extend a first-principles pseudopotential linear combination of Gaussian orbitals method to full point-by-point self-consistency, without refitting of potentials. It is shown that the set of overlap matrices can be calculated very efficiently if we exploit the translational and space-group symmetries of the system under consideration. This scheme has been applied to study the structural and electronic properties of Si and W, prototypical systems of very different bonding properties. The results agree well with experiment and other calculations. The fully self-consistent results are compared with those obtained by a variational procedure [J. R. Chelikowsky and S. G. Louie, Phys. Rev. B 29, 3470 (1984)]. We find that the structural properties for bulk Si and W (both systems have no interatomic charge transfer) can be treated accurately by the variational procedure. However, full self-consistency is needed for an accurate description of the band energies

  17. Enhancing Classification Performance of Functional Near-Infrared Spectroscopy- Brain–Computer Interface Using Adaptive Estimation of General Linear Model Coefficients

    Directory of Open Access Journals (Sweden)

    Nauman Khalid Qureshi

    2017-07-01

    Full Text Available In this paper, a novel methodology for enhanced classification of functional near-infrared spectroscopy (fNIRS) signals utilizable in a two-class [motor imagery (MI) versus rest; mental rotation (MR) versus rest] brain–computer interface (BCI) is presented. First, fNIRS signals corresponding to MI and MR are acquired from the motor and prefrontal cortex, respectively, and then filtered to remove physiological noise. The signals are then modeled using the general linear model, the coefficients of which are adaptively estimated using the least squares technique. Subsequently, multiple feature combinations of the estimated coefficients were used for classification. The best classification accuracies achieved for five subjects for MI versus rest are 79.5, 83.7, 82.6, 81.4, and 84.1%, whereas those for MR versus rest are 85.5, 85.2, 87.8, 83.7, and 84.8%, respectively, using a support vector machine. These results are compared with the best classification accuracies obtained using the conventional hemodynamic response. By means of the proposed methodology, the average classification accuracy obtained was significantly higher (p < 0.05). These results demonstrate the feasibility of developing a high-classification-performance fNIRS-BCI.
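    The least-squares GLM step can be illustrated with a minimal sketch: a design matrix containing a task regressor and baseline terms is regressed onto a channel's signal to obtain the coefficients. This is not the authors' adaptive (recursive) estimator; the sampling rate, block timing and signal below are synthetic assumptions.

      # Sketch of estimating GLM coefficients beta for one fNIRS channel by
      # ordinary least squares, y = X beta + e.  Signal and regressors are synthetic.
      import numpy as np

      fs, duration = 10.0, 60.0                        # Hz, seconds (assumed)
      t = np.arange(0.0, duration, 1.0 / fs)
      task = ((t % 20.0) < 10.0).astype(float)         # toy 10 s on / 10 s off boxcar

      # Design matrix: task regressor, linear drift, constant baseline
      X = np.column_stack([task, t, np.ones_like(t)])
      rng = np.random.default_rng(0)
      y = 0.5 * task + 0.01 * t + 0.2 + 0.1 * rng.standard_normal(t.size)

      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("estimated coefficients:", beta)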

  18. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    Science.gov (United States)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    A Brain Computer Interface (BCI) can be a challenge for the development of robotic, prosthetic and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Following well-known previous work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of a kernel linear discriminant analysis (KLDA) method to the weighted features allows the transfer of data into a higher dimension for more discriminative data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III dataset IVa is used to evaluate the algorithm for detecting right hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields 8.9% and 14.19% improvements in accuracy and robustness, respectively. For all the subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as the kernel of the SVM improve the accuracy and reliability of the proposed method.
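    Only the final classification stage lends itself to a compact sketch; below, scikit-learn's standard RBF-kernel SVM stands in for the paper's GRBF kernel, and the CSP-style feature matrix and labels are random placeholders (the FBCSP, SLVQ and KLDA steps are not reproduced).

      # Sketch of the final classification stage only: an SVM with a standard RBF
      # kernel (a stand-in for the paper's generalized RBF kernel) applied to
      # precomputed CSP-style features.  Features and labels are random placeholders.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(42)
      X = rng.standard_normal((200, 6))      # e.g. log-variance CSP features per trial
      y = rng.integers(0, 2, size=200)       # right hand vs foot imagery labels

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      print(cross_val_score(clf, X, y, cv=5).mean())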

  19. Perceived Resources as a Predictor of Satisfaction with Food-Related Life among Chilean Elderly: An Approach with Generalized Linear Models.

    Science.gov (United States)

    Lobos, G; Schnettler, B; Grunert, K G; Adasme, C

    2017-01-01

    The main objective of this study is to show why perceived resources are a strong predictor of satisfaction with food-related life in Chilean older adults. Design, sampling and participants: A survey was conducted in rural and urban areas in 30 communes of the Maule Region with 785 participants over 60 years of age who live in their own homes. The Satisfaction with Food-related Life (SWFL) scale was used. Generalized linear models (GLM) were used for the regression analysis. The results led to different considerations: First, older adults' perceived levels of resources are a good reflection of their actual levels of resources. Second, the individuals rated the sum of the perceived resources as 'highly important' to explain older adults' satisfaction with food-related life. Third, SWFL was predicted by satisfaction with economic situation, family importance, quantity of domestic household goods and a relative health indicator. Fourth, older adults who believe they have more resources compared to others are more satisfied with their food-related life. Finally, Poisson and binomial logistic models showed that the sum of perceived resources significantly increased the prediction of SWFL. The main conclusion is that perceived personal resources are a strong predictor of SWFL in Chilean older adults.

  20. Removing an intersubject variance component in a general linear model improves multiway factoring of event-related spectral perturbations in group EEG studies.

    Science.gov (United States)

    Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C

    2013-03-01

    Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
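    A hedged sketch of the kind of mixed model involved is shown below: trial-level responses from several subjects are modelled with a per-subject random intercept, so that inference on the task effect accounts for both intra- and inter-subject variance components. The data are synthetic placeholders and the model is a generic statsmodels MixedLM, not the authors' EEG analysis.

      # Sketch of a linear mixed model with a per-subject random intercept, so
      # that the task (condition) effect is assessed with both intra- and
      # inter-subject variance components.  Data are synthetic placeholders.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_subj, n_trials = 12, 40
      subj = np.repeat(np.arange(n_subj), n_trials)
      cond = np.tile(np.r_[np.zeros(n_trials // 2), np.ones(n_trials // 2)], n_subj)
      subj_offset = rng.normal(0.0, 1.0, n_subj)[subj]       # inter-subject variability
      power = 10.0 + 0.6 * cond + subj_offset + rng.normal(0.0, 0.8, subj.size)

      df = pd.DataFrame({"power": power, "condition": cond, "subject": subj})
      model = smf.mixedlm("power ~ condition", df, groups=df["subject"]).fit()
      print(model.summary())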

  1. Scalable Hierarchical Algorithms for stochastic PDEs and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander; Chavez, Gustavo; Keyes, David E.; Ltaief, Hatem; Yokota, Rio

    2015-01-01

    number of degrees of freedom in the discretization. The storage is reduced to the log-linear as well. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered by R

  2. Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander; Sun, Ying; Genton, Marc G.; Keyes, David E.

    2017-01-01

    algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H

  3. Quaternion Linear Canonical Transform Application

    OpenAIRE

    Bahri, Mawardi

    2015-01-01

    The quaternion linear canonical transform (QLCT) is a generalization of the classical linear canonical transform (LCT) using quaternion algebra. The focus of this paper is to introduce an application of the QLCT to the study of generalized swept-frequency filters

  4. Biominerals- hierarchical nanocomposites: the example of bone

    Science.gov (United States)

    Beniash, Elia

    2010-01-01

    Many organisms incorporate inorganic solids in their tissues to enhance their functional, primarily mechanical, properties. These mineralized tissues, also called biominerals, are unique organo-mineral nanocomposites, organized at several hierarchical levels, from the nano- to the macroscale. Unlike man-made composite materials, which often are simple physical blends of their components, the organic and inorganic phases in biominerals interface at the molecular level. Although these tissues are made of relatively weak components at ambient conditions, their hierarchical structural organization and the intimate interactions between different elements lead to superior mechanical properties. Understanding the basic principles of formation, structure and functional properties of these tissues might lead to novel bioinspired strategies for material design and better treatments for diseases of the mineralized tissues. This review focuses on general principles of the structural organization, formation and functional properties of biominerals, using bone as an example. PMID:20827739

  5. Linear algebra

    CERN Document Server

    Shilov, Georgi E

    1977-01-01

    Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.

  6. Hierarchical Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Di Lu

    2018-01-01

    Full Text Available The Internet of Things (IoT) generates large amounts of high-dimensional sensor data. The processing of high-dimensional data (e.g., data visualization and data classification) is very difficult, so it requires excellent subspace learning algorithms that learn a latent subspace preserving the intrinsic structure of the high-dimensional data and discarding the least useful information for subsequent processing. In this context, many subspace learning algorithms have been presented. However, in the process of transforming the high-dimensional data into the low-dimensional space, the huge difference between the sum of inter-class distances and the sum of intra-class distances for distinct data may cause a bias problem, meaning that the impact of the intra-class distance is overwhelmed. To address this problem, we propose a novel algorithm called Hierarchical Discriminant Analysis (HDA). It minimizes the sum of intra-class distances first, and then maximizes the sum of inter-class distances. This proposed method balances the bias from the inter-class and that from the intra-class to achieve better performance. Extensive experiments are conducted on several benchmark face datasets. The results reveal that HDA obtains better performance than other dimensionality reduction algorithms.

  7. Hierarchical Linked Views

    Energy Technology Data Exchange (ETDEWEB)

    Erbacher, Robert; Frincke, Deb

    2007-07-02

    Coordinated views have proven critical to the development of effective visualization environments. This results from the fact that a single view or representation of the data cannot show all of the intricacies of a given data set. Additionally, users will often need to correlate more data parameters than can effectively be integrated into a single visual display. Typically, development of multiple linked views results in an ad hoc configuration of views and associated interactions. The hierarchical model we are proposing is geared towards more effective organization of such environments and the views they encompass. At the same time, this model can effectively integrate much of the prior work on interactive and visual frameworks. Additionally, we expand the concept of views to incorporate perceptual views. This relates to the fact that visual displays can have information encoded at various levels of focus: a global view of the display provides overall trends in the data, while focusing in on individual elements provides detailed specifics. By integrating interaction and perception into a single model, we show how one impacts the other. Typically, interaction and perception are considered separately; however, when interaction is considered at a fundamental level and allowed to direct or modify the visualization directly, we must consider them simultaneously and how they impact one another.

  8. Relationship between neighbourhood socioeconomic position and neighbourhood public green space availability: An environmental inequality analysis in a large German city applying generalized linear models.

    Science.gov (United States)

    Schüle, Steffen Andreas; Gabriel, Katharina M A; Bolte, Gabriele

    2017-06-01

    The environmental justice framework states that, besides environmental burdens, resources may also be socially unequally distributed, both on the individual and on the neighbourhood level. This ecological study investigated whether neighbourhood socioeconomic position (SEP) was associated with neighbourhood public green space availability in a large German city with more than 1 million inhabitants. Two different measures were defined for green space availability. Firstly, the percentage of green space within neighbourhoods was calculated, additionally considering various buffers around the neighbourhood boundaries. Secondly, the percentage of green space was calculated based on various radii around the neighbourhood centroid. An index of neighbourhood SEP was calculated with principal component analysis. Log-gamma regression, from the family of generalized linear models, was applied in order to account for the non-normal distribution of the response variable. All models were adjusted for population density. Low neighbourhood SEP was associated with decreasing neighbourhood green space availability for 200 m up to 1000 m buffers around the neighbourhood boundaries. Low neighbourhood SEP was also associated with decreasing green space availability based on catchment areas measured from neighbourhood centroids with different radii (1000 m up to 3000 m); with an increasing radius the strength of the associations decreased. Socially unequally distributed green space may amplify environmental health inequalities in an urban context. Thus, the identification of vulnerable neighbourhoods and population groups plays an important role for epidemiological research and healthy city planning. As a methodological aspect, log-gamma regression offers an adequate parametric modelling strategy for positively distributed environmental variables. Copyright © 2017 Elsevier GmbH. All rights reserved.
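    As a hedged illustration of the modelling strategy named above (not the study's actual data or covariate set), a log-link Gamma GLM for a positive, right-skewed response can be fit as follows; the data frame values are placeholders.

      # Sketch of a log-link Gamma GLM ("log-gamma regression") for a positive,
      # right-skewed response such as percent green space, adjusted for population
      # density.  The data frame and values are hypothetical placeholders.
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      hoods = pd.DataFrame({
          "green_pct": [12.5, 8.0, 25.3, 5.1, 18.7, 30.2, 9.9, 14.4],
          "sep_index": [-1.2, -0.8, 0.9, -1.5, 0.3, 1.4, -0.4, 0.1],   # neighbourhood SEP index
          "pop_density": [95, 120, 40, 150, 60, 35, 110, 80],          # persons per hectare
      })

      model = smf.glm("green_pct ~ sep_index + pop_density", data=hoods,
                      family=sm.families.Gamma(link=sm.families.links.Log())).fit()
      print(model.summary())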

  9. Monitoring of human brain functions in risk decision-making task by diffuse optical tomography using voxel-wise general linear model

    Science.gov (United States)

    Lin, Zi-Jing; Li, Lin; Cazzell, Marry; Liu, Hanli

    2013-03-01

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive imaging technique which measures the hemodynamic changes that reflect brain activity. Diffuse optical tomography (DOT), a variant of fNIRS with multi-channel NIRS measurements, has demonstrated the capability of three-dimensional (3D) reconstruction of hemodynamic changes due to brain activity. The conventional method of DOT image analysis for defining brain activation is based upon the paired t-test between two different states, such as resting state versus task state. However, it has limitations because the selection of the activation and post-activation periods is relatively subjective. General linear model (GLM) based analysis can overcome this limitation. In this study, we combine 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with the risk decision-making process. Risk decision-making is an important cognitive process and thus is an essential topic in the field of neuroscience. The balloon analogue risk task (BART) is a valid experimental model and has been commonly used in behavioral measures to assess human risk-taking actions and tendencies while facing risks. We have utilized the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making. Voxel-wise GLM analysis was performed on 18 human participants (10 males and 8 females). In this work, we wish to demonstrate the feasibility of using voxel-wise GLM analysis to image and study cognitive functions in response to risk decision-making by DOT. Results show significant changes in the dorsolateral prefrontal cortex (DLPFC) during the active choice mode and a different hemodynamic pattern between genders, which are in good agreement with published fMRI and fNIRS studies.

  10. A semiparametric negative binomial generalized linear model for modeling over-dispersed count data with a heavy tail: Characteristics and applications to crash data.

    Science.gov (United States)

    Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy

    2016-06-01

    Crash data can often be characterized by over-dispersion, a heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers a better performance than the NB model when data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large number of zeros. In addition to greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion. Copyright © 2016 Elsevier Ltd. All rights reserved.
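    The Dirichlet-process mixed model itself requires specialized Bayesian sampling, but the traditional NB baseline that it is compared against can be sketched as a standard negative binomial GLM; the crash counts and covariates below are simulated placeholders, not data from the paper.

      # Sketch of the baseline negative binomial GLM that the NB-DP model is
      # compared against (the Dirichlet-process mixture itself is not shown).
      # Crash counts and covariates are simulated placeholders.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      n = 200
      segments = pd.DataFrame({
          "aadt": rng.uniform(2000, 40000, n),       # annual average daily traffic (assumed)
          "length_km": rng.uniform(0.5, 5.0, n),
      })
      mu = np.exp(-6.0 + 0.8 * np.log(segments["aadt"]) + np.log(segments["length_km"]))
      segments["crashes"] = rng.poisson(rng.gamma(shape=1.5, scale=mu / 1.5))   # over-dispersed counts

      model = smf.glm("crashes ~ np.log(aadt) + np.log(length_km)", data=segments,
                      family=sm.families.NegativeBinomial(alpha=1.0)).fit()
      print(model.summary())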

  11. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    Science.gov (United States)

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results is increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it can not fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (a fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed where four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  12. Multitemporal Modelling of Socio-Economic Wildfire Drivers in Central Spain between the 1980s and the 2000s: Comparing Generalized Linear Models to Machine Learning Algorithms.

    Science.gov (United States)

    Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M Pilar

    2016-01-01

    The socio-economic factors are of key importance during all phases of wildfire management that include prevention, suppression and restoration. However, modeling these factors, at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site represents a good example of how human activities play a key role over wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and also in the 2000s to identify changes between each period in the socio-economic drivers affecting wildfire occurrence. GLM base their estimation on wildfire presence-absence observations whereas Maxent on wildfire presence-only. According to indicators like sensitivity or commission error Maxent outperformed GLM in both periods. It achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s. Instead, GLM obtained 23.33, 64.97, 9.41 and 18.34%, respectively. However GLM performed steadier than Maxent in terms of the overall fit. Both models explained wildfires from predictors such as population density and Wildland Urban Interface (WUI), but differed in their relative contribution. As a result of the urban sprawl and an abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas Forest-Grassland Interface (FGI) influence decreased. This study demonstrates that human component can be modelled with a spatio-temporal dimension to integrate it into wildfire risk assessment.

  13. What are hierarchical models and how do we analyze them?

    Science.gov (United States)

    Royle, Andy

    2016-01-01

    In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10 Chapter 5 Chapter 6 Chapter 7 Chapter 10)

  14. Extension of mixture-of-experts networks for binary classification of hierarchical data.

    Science.gov (United States)

    Ng, Shu-Kay; McLachlan, Geoffrey J

    2007-09-01

    For many applied problems in the context of medically relevant artificial intelligence, the data collected exhibit a hierarchical or clustered structure. Ignoring the interdependence between hierarchical data can result in misleading classification. In this paper, we extend the mechanism for mixture-of-experts (ME) networks for binary classification of hierarchical data. Another extension is to quantify cluster-specific information on data hierarchy by random effects via the generalized linear mixed-effects model (GLMM). The extension of ME networks is implemented by allowing for correlation in the hierarchical data in both the gating and expert networks via the GLMM. The proposed model is illustrated using a real thyroid disease data set. In our study, we consider 7652 thyroid diagnosis records from 1984 to early 1987 with complete information on 20 attribute values. We obtain 10 independent random splits of the data into a training set and a test set in the proportions 85% and 15%. The test sets are used to assess the generalization performance of the proposed model, based on the percentage of misclassifications. For comparison, the results obtained from the ME network with independence assumption are also included. With the thyroid disease data, the misclassification rate on test sets for the extended ME network is 8.9%, compared to 13.9% for the ME network. In addition, based on model selection methods described in Section 2, a network with two experts is selected. These two expert networks can be considered as modeling two groups of patients with high and low incidence rates. Significant variation among the predicted cluster-specific random effects is detected in the patient group with low incidence rate. It is shown that the extended ME network outperforms the ME network for binary classification of hierarchical data. With the thyroid disease data, useful information on the relative log odds of patients with diagnosed conditions at different periods can be

  15. Direct hierarchical assembly of nanoparticles

    Science.gov (United States)

    Xu, Ting; Zhao, Yue; Thorkelsson, Kari

    2014-07-22

    The present invention provides hierarchical assemblies of a block copolymer, a bifunctional linking compound and a nanoparticle. The block copolymers form one micro-domain and the nanoparticles another micro-domain.

  16. Hierarchical materials: Background and perspectives

    DEFF Research Database (Denmark)

    2016-01-01

    Hierarchical design draws inspiration from analysis of biological materials and has opened new possibilities for enhancing performance and enabling new functionalities and extraordinary properties. With the development of nanotechnology, the necessary technological requirements for the manufactur...

  17. Functional annotation of hierarchical modularity.

    Directory of Open Access Journals (Sweden)

    Kanchana Padmanabhan

    Full Text Available In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function-hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods would often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology and the association of individual genes or proteins with these concepts (e.g., GO terms, our method will assign a Hierarchical Modularity Score (HMS to each node in the hierarchy of functional modules; the HMS score and its p-value measure functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides the hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often comes as a bi-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our

  18. Hierarchical architecture of active knits

    International Nuclear Information System (INIS)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2013-01-01

    Nature eloquently utilizes hierarchical structures to form the world around us. Applying the hierarchical architecture paradigm to smart materials can provide a basis for a new genre of actuators which produce complex actuation motions. One promising example of cellular architecture—active knits—provides complex three-dimensional distributed actuation motions with expanded operational performance through a hierarchically organized structure. The hierarchical structure arranges a single fiber of active material, such as shape memory alloys (SMAs), into a cellular network of interlacing adjacent loops according to a knitting grid. This paper defines a four-level hierarchical classification of knit structures: the basic knit loop, knit patterns, grid patterns, and restructured grids. Each level of the hierarchy provides increased architectural complexity, resulting in expanded kinematic actuation motions of active knits. The range of kinematic actuation motions are displayed through experimental examples of different SMA active knits. The results from this paper illustrate and classify the ways in which each level of the hierarchical knit architecture leverages the performance of the base smart material to generate unique actuation motions, providing necessary insight to best exploit this new actuation paradigm. (paper)

  19. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantages of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.

  20. A Generalization of Pythagoras's Theorem and Application to Explanations of Variance Contributions in Linear Models. Research Report. ETS RR-14-18

    Science.gov (United States)

    Carlson, James E.

    2014-01-01

    Many aspects of the geometry of linear statistical models and least squares estimation are well known. Discussions of the geometry may be found in many sources. Some aspects of the geometry relating to the partitioning of variation that can be explained using a little-known theorem of Pappus and have not been discussed previously are the topic of…

  1. Linear integrated circuits

    CERN Document Server

    Carr, Joseph

    1996-01-01

    The linear IC market is large and growing, as is the demand for well trained technicians and engineers who understand how these devices work and how to apply them. Linear Integrated Circuits provides in-depth coverage of the devices and their operation, but not at the expense of practical applications in which linear devices figure prominently. This book is written for a wide readership from FE and first degree students, to hobbyists and professionals.Chapter 1 offers a general introduction that will provide students with the foundations of linear IC technology. From chapter 2 onwa

  2. Linear gate

    International Nuclear Information System (INIS)

    Suwono.

    1978-01-01

    A linear gate providing a variable gate duration from 0,40μsec to 4μsec was developed. The electronic circuity consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar. If the input signal is bipolar, the negative portion will be filtered. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)

  3. Linear Accelerators

    International Nuclear Information System (INIS)

    Vretenar, M

    2014-01-01

    The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristics aspects of linac beam dynamics

  4. Deliberate change without hierarchical influence?

    DEFF Research Database (Denmark)

    Nørskov, Sladjana; Kesting, Peter; Ulhøi, John Parm

    2017-01-01

    reveals that deliberate change is indeed achievable in a non-hierarchical collaborative OSS community context. However, it presupposes the presence and active involvement of informal change agents. The paper identifies and specifies four key drivers for change agents’ influence. Originality....../value The findings contribute to organisational analysis by providing a deeper understanding of the importance of leadership in making deliberate change possible in non-hierarchical settings. It points to the importance of “change-by-conviction”, essentially based on voluntary behaviour. This can open the door...

  5. Linear Text vs. Non-Linear Hypertext in Handheld Computers: Effects on Declarative and Structural Knowledge, and Learner Motivation

    Science.gov (United States)

    Son, Chanhee; Park, Sanghoon; Kim, Minjeong

    2011-01-01

    This study compared linear text-based and non-linear hypertext-based instruction in a handheld computer regarding effects on two different levels of knowledge (declarative and structural knowledge) and learner motivation. Forty four participants were randomly assigned to one of three experimental conditions: linear text, hierarchical hypertext,…

  6. Hierarchical tone mapping for high dynamic range image visualization

    Science.gov (United States)

    Qiu, Guoping; Duan, Jiang

    2005-07-01

    In this paper, we present a computationally efficient, practically easy to use tone mapping techniques for the visualization of high dynamic range (HDR) images in low dynamic range (LDR) reproduction devices. The new method, termed hierarchical nonlinear linear (HNL) tone-mapping operator maps the pixels in two hierarchical steps. The first step allocates appropriate numbers of LDR display levels to different HDR intensity intervals according to the pixel densities of the intervals. The second step linearly maps the HDR intensity intervals to theirs allocated LDR display levels. In the developed HNL scheme, the assignment of LDR display levels to HDR intensity intervals is controlled by a very simple and flexible formula with a single adjustable parameter. We also show that our new operators can be used for the effective enhancement of ordinary images.

  7. Linear entropy and collapse–revival phenomenon for a general formalism N-type four-level atom interacting with a single-mode field

    Science.gov (United States)

    Eied, A. A.

    2018-05-01

    In this paper, the linear entropy and collapse-revival phenomenon through the relation ( -{\\bar{n}}) in a system of N-configuration four-level atom interacting with a single-mode field with additional forms of nonlinearities of both the field and the intensity-dependent atom-field coupling functional are investigated. A factorization of the initial density operator is assumed, considering the field to be initially in a squeezed coherent states and the atom initially in its most upper excited state. The dynamical behavior of the linear entropy and the time evolution of ( -{\\bar{n}}) are analyzed. In particular, the effects of the mean photon number, detuning, Kerr-like medium and the intensity-dependent coupling functional on the entropy and the evolution of ( -{\\bar{n}}) are examined.

  8. Hierarchical graphs for rule-based modeling of biochemical systems

    Directory of Open Access Journals (Sweden)

    Hu Bin

    2011-02-01

    Full Text Available Abstract Background In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal of an edge represents a class of association (dissociation reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for

  9. Modular networks with hierarchical organization

    Indian Academy of Sciences (India)

    Several networks occurring in real life have modular structures that are arranged in a hierarchical fashion. In this paper, we have proposed a model for such networks, using a stochastic generation method. Using this model we show that, the scaling relation between the clustering and degree of the nodes is not a necessary ...

  10. Hierarchical Microaggressions in Higher Education

    Science.gov (United States)

    Young, Kathryn; Anderson, Myron; Stewart, Saran

    2015-01-01

    Although there has been substantial research examining the effects of microaggressions in the public sphere, there has been little research that examines microaggressions in the workplace. This study explores the types of microaggressions that affect employees at universities. We coin the term "hierarchical microaggression" to represent…

  11. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicates improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream

  12. Molecular simulation of adsorption and transport in hierarchical porous materials.

    Science.gov (United States)

    Coasne, Benoit; Galarneau, Anne; Gerardin, Corine; Fajula, François; Villemot, François

    2013-06-25

    Adsorption and transport in hierarchical porous solids with micro- (~1 nm) and mesoporosities (>2 nm) are investigated by molecular simulation. Two models of hierarchical solids are considered: microporous materials in which mesopores are carved out (model A) and mesoporous materials in which microporous nanoparticles are inserted (model B). Adsorption isotherms for model A can be described as a linear combination of the adsorption isotherms for pure mesoporous and microporous solids. In contrast, adsorption in model B departs from adsorption in pure microporous and mesoporous solids; the inserted microporous particles act as defects, which help nucleate the liquid phase within the mesopore and shift capillary condensation toward lower pressures. As far as transport under a pressure gradient is concerned, the flux in hierarchical materials consisting of microporous solids in which mesopores are carved out obeys the Navier-Stokes equation so that Darcy's law is verified within the mesopore. Moreover, the flow in such materials is larger than in a single mesopore, due to the transfer between micropores and mesopores. This nonzero velocity at the mesopore surface implies that transport in such hierarchical materials involves slippage at the mesopore surface, although the adsorbate has a strong affinity for the surface. In contrast to model A, flux in model B is smaller than in a single mesopore, as the nanoparticles act as constrictions that hinder transport. By a subtle effect arising from fast transport in the mesopores, the presence of mesopores increases the number of molecules in the microporosity in hierarchical materials and, hence, decreases the flow in the micropores (due to mass conservation). As a result, we do not observe faster diffusion in the micropores of hierarchical materials upon flow but slower diffusion, which increases the contact time between the adsorbate and the surface of the microporosity.

  13. Minimax terminal approach problem in two-level hierarchical nonlinear discrete-time dynamical system

    Energy Technology Data Exchange (ETDEWEB)

    Shorikov, A. F., E-mail: afshorikov@mail.ru [Ural Federal University, 19 S. Mira, Ekaterinburg, 620002, Russia Institute of Mathematics and Mechanics, Ural Branch of Russian Academy of Sciences, 16 S. Kovalevskaya, Ekaterinburg, 620990 (Russian Federation)

    2015-11-30

    We consider a discrete–time dynamical system consisting of three controllable objects. The motions of all objects are given by the corresponding vector nonlinear or linear discrete–time recurrent vector relations, and control system for its has two levels: basic (first or I level) that is dominating and subordinate level (second or II level) and both have different criterions of functioning and united a priori by determined informational and control connections defined in advance. For the dynamical system in question, we propose a mathematical formalization in the form of solving a multistep problem of two-level hierarchical minimax program control over the terminal approach process with incomplete information and give a general scheme for its solving.

  14. Linear algebra

    CERN Document Server

    Stoll, R R

    1968-01-01

    Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand

  15. Hierarchically structured, nitrogen-doped carbon membranes

    KAUST Repository

    Wang, Hong; Wu, Tao

    2017-01-01

    The present invention is a structure, method of making and method of use for a novel macroscopic hierarchically structured, nitrogen-doped, nano-porous carbon membrane (HNDCMs) with asymmetric and hierarchical pore architecture that can be produced

  16. Linear programming

    CERN Document Server

    Solow, Daniel

    2014-01-01

    This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.

  17. Linear algebra

    CERN Document Server

    Liesen, Jörg

    2015-01-01

    This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...

  18. Linear algebra

    CERN Document Server

    Berberian, Sterling K

    2014-01-01

    Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.

  19. Linear Models

    CERN Document Server

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  20. Hierarchical structure of stock price fluctuations in financial markets

    International Nuclear Information System (INIS)

    Gao, Ya-Chun; Cai, Shi-Min; Wang, Bing-Hong

    2012-01-01

    The financial market and turbulence have been broadly compared on account of the same quantitative methods and several common stylized facts they share. In this paper, the She–Leveque (SL) hierarchy, proposed to explain the anomalous scaling exponents deviating from Kolmogorov monofractal scaling of the velocity fluctuation in fluid turbulence, is applied to study and quantify the hierarchical structure of stock price fluctuations in financial markets. We therefore observed certain interesting results: (i) the hierarchical structure related to multifractal scaling generally presents in all the stock price fluctuations we investigated. (ii) The quantitatively statistical parameters that describe SL hierarchy are different between developed financial markets and emerging ones, distinctively. (iii) For the high-frequency stock price fluctuation, the hierarchical structure varies with different time periods. All these results provide a novel analogy in turbulence and financial market dynamics and an insight to deeply understand multifractality in financial markets. (paper)

  1. Ways of looking ahead: hierarchical planning in language production.

    Science.gov (United States)

    Lee, Eun-Kyung; Brown-Schmidt, Sarah; Watson, Duane G

    2013-12-01

    It is generally assumed that language production proceeds incrementally, with chunks of linguistic structure planned ahead of speech. Extensive research has examined the scope of language production and suggests that the size of planned chunks varies across contexts (Ferreira & Swets, 2002; Wagner & Jescheniak, 2010). By contrast, relatively little is known about the structure of advance planning, specifically whether planning proceeds incrementally according to the surface structure of the utterance, or whether speakers plan according to the hierarchical relationships between utterance elements. In two experiments, we examine the structure and scope of lexical planning in language production using a picture description task. Analyses of speech onset times and word durations show that speakers engage in hierarchical planning such that structurally dependent lexical items are planned together and that hierarchical planning occurs for both direct and indirect dependencies. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Linear response theory for quantum open systems

    OpenAIRE

    Wei, J. H.; Yan, YiJing

    2011-01-01

    Basing on the theory of Feynman's influence functional and its hierarchical equations of motion, we develop a linear response theory for quantum open systems. Our theory provides an effective way to calculate dynamical observables of a quantum open system at its steady-state, which can be applied to various fields of non-equilibrium condensed matter physics.

  3. Road network safety evaluation using Bayesian hierarchical joint model.

    Science.gov (United States)

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. LINEAR ACCELERATOR

    Science.gov (United States)

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external sunface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  5. Hierarchical models in the brain.

    Directory of Open Access Journals (Sweden)

    Karl Friston

    2008-11-01

    Full Text Available This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.

  6. Hierarchical modeling and its numerical implementation for layered thin elastic structures

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jin-Rae [Hongik University, Sejong (Korea, Republic of)

    2017-05-15

    Thin elastic structures such as beam- and plate-like structures and laminates are characterized by the small thickness, which lead to classical plate and laminate theories in which the displacement fields through the thickness are assumed linear or higher-order polynomials. These classical theories are either insufficient to represent the complex stress variation through the thickness or may encounter the accuracy-computational cost dilemma. In order to overcome the inherent problem of classical theories, the concept of hierarchical modeling has been emerged. In the hierarchical modeling, the hierarchical models with different model levels are selected and combined within a structure domain, in order to make the modeling error be distributed as uniformly as possible throughout the problem domain. The purpose of current study is to explore the potential of hierarchical modeling for the effective numerical analysis of layered structures such as laminated composite. For this goal, the hierarchical models are constructed and the hierarchical modeling is implemented by selectively adjusting the level of hierarchical models. As well, the major characteristics of hierarchical models are investigated through the numerical experiments.

  7. HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2017-09-26

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\\\\times 2M$ can be computed on a modern multi-core desktop in few minutes. Further, HLIBCov is used for estimating the unknown parameters such as the covariance length, variance and smoothness parameter of a Mat\\\\\\'ern covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is expensive linear algebra arithmetics due to large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\\\\H$-) matrix format with computational cost $\\\\mathcal{O}(k^2n \\\\log^2 n/p)$ and storage $\\\\mathcal{O}(kn \\\\log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ the number of cores and $n$ the number of locations on a fairly general mesh. We demonstrate a synthetic example, where the true values of known parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.

  8. Hierarchical Bayesian Markov switching models with application to predicting spawning success of shovelnose sturgeon

    Science.gov (United States)

    Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.

    2009-01-01

    The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data that were produced from this study consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed. ?? Journal compilation 2009 Royal Statistical Society.

  9. HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2017-09-24

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\\\\times 2M$ can be computed on a modern multi-core desktop in few minutes. Further, HLIBCov is used for estimating the unknown parameters such as the covariance length, variance and smoothness parameter of a Mat\\\\\\'ern covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is expensive linear algebra arithmetics due to large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\\\\mathcal{H}$-) matrix format with computational cost $\\\\mathcal{O}(k^2n \\\\log^2 n/p)$ and storage $\\\\mathcal{O}(kn \\\\log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ the number of cores and $n$ the number of locations on a fairly general mesh. We demonstrate a synthetic example, where the true values of known parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.

  10. How to calculate linear absorption spectra with lifetime broadening using fewest switches surface hopping trajectories: A simple generalization of ground-state Kubo theory

    International Nuclear Information System (INIS)

    Petit, Andrew S.; Subotnik, Joseph E.

    2014-01-01

    In this paper, we develop a surface hopping approach for calculating linear absorption spectra using ensembles of classical trajectories propagated on both the ground and excited potential energy surfaces. We demonstrate that our method allows the dipole-dipole correlation function to be determined exactly for the model problem of two shifted, uncoupled harmonic potentials with the same harmonic frequency. For systems where nonadiabatic dynamics and electronic relaxation are present, preliminary results show that our method produces spectra in better agreement with the results of exact quantum dynamics calculations than spectra obtained using the standard ground-state Kubo formalism. As such, our proposed surface hopping approach should find immediate use for modeling condensed phase spectra, especially for expensive calculations using ab initio potential energy surfaces

  11. A hierarchical procedure for calculation of risk importance measures

    International Nuclear Information System (INIS)

    Poern, K.; Dinsmore, S.C.

    1987-01-01

    Starting with a general importance definition based on conditional probabilities, a hierarchical process for calculating risk importance measures from a PSA's numerical results is developed. By the appropriate choice of events in the general definition, measures such as the risk achievement worth and the risk reduction worth can be calculated without requantifying the PSA's models. Required approximations are clearly defined and the subsequent constraints on the applicability of the process discussed. (orig.)

  12. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.

  13. Automatic Hierarchical Color Image Classification

    Directory of Open Access Journals (Sweden)

    Jing Huang

    2003-02-01

    Full Text Available Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.

  14. Hierarchical Semantic Model of Geovideo

    Directory of Open Access Journals (Sweden)

    XIE Xiao

    2015-05-01

    Full Text Available The public security incidents were getting increasingly challenging with regard to their new features, including multi-scale mobility, multistage dynamic evolution, as well as spatiotemporal concurrency and uncertainty in the complex urban environment. However, the existing video models, which were used/designed for independent archive or local analysis of surveillance video, have seriously inhibited emergency response to the urgent requirements.Aiming at the explicit representation of change mechanism in video, the paper proposed a novel hierarchical geovideo semantic model using UML. This model was characterized by the hierarchical representation of both data structure and semantics based on the change-oriented three domains (feature domain, process domain and event domain instead of overall semantic description of video streaming; combining both geographical semantics and video content semantics, in support of global semantic association between multiple geovideo data. The public security incidents by video surveillance are inspected as an example to illustrate the validity of this model.

  15. Hybrid and hierarchical composite materials

    CERN Document Server

    Kim, Chang-Soo; Sano, Tomoko

    2015-01-01

    This book addresses a broad spectrum of areas in both hybrid materials and hierarchical composites, including recent development of processing technologies, structural designs, modern computer simulation techniques, and the relationships between the processing-structure-property-performance. Each topic is introduced at length with numerous  and detailed examples and over 150 illustrations.   In addition, the authors present a method of categorizing these materials, so that representative examples of all material classes are discussed.

  16. Hierarchical analysis of urban space

    OpenAIRE

    Kataeva, Y.

    2014-01-01

    Multi-level structure of urban space, multitude of subjects of its transformation, which follow asymmetric interests, multilevel system of institutions which regulate interaction in the "population business government -public organizations" system, determine the use of hierarchic approach to the analysis of urban space. The article observes theoretical justification of using this approach to study correlations and peculiarities of interaction in urban space as in an intricately organized syst...

  17. Statistical Significance for Hierarchical Clustering

    Science.gov (United States)

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990

  18. Linear Colliders

    International Nuclear Information System (INIS)

    Alcaraz, J.

    2001-01-01

    After several years of study e''+ e''- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status form the main accelerator and detector features to their physics potential. The LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e''+ e''- linear collider in the 500 GeV-1 TeV will be able to complement it to an unprecedented level of precision in any possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs

  19. Linear algebra

    CERN Document Server

    Edwards, Harold M

    1995-01-01

    In his new undergraduate textbook, Harold M Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject Students at all levels will find much interactive instruction in this text while teachers will find stimulating examples and methods of approach to the subject

  20. Examining secular trends and seasonality in count data using dynamic generalized linear modelling: a new methodological approach illustrated with hospital discharge data on myocardial infarction.

    Science.gov (United States)

    Lundbye-Christensen, S; Dethlefsen, C; Gorst-Rasmussen, A; Fischer, T; Schønheyder, H C; Rothman, K J; Sørensen, H T

    2009-01-01

    Time series of incidence counts often show secular trends and seasonal patterns. We present a model for incidence counts capable of handling a possible gradual change in growth rates and seasonal patterns, serial correlation, and overdispersion. The model resembles an ordinary time series regression model for Poisson counts. It differs in allowing the regression coefficients to vary gradually over time in a random fashion. During the 1983-1999 period, 17,989 incidents of acute myocardial infarction were recorded in the Hospital Discharge Registry for the county of North Jutland, Denmark. Records were updated daily. A dynamic model with a seasonal pattern and an approximately linear trend was fitted to the data, and diagnostic plots indicated a good model fit. The analysis conducted with the dynamic model revealed peaks coinciding with above-average influenza A activity. On average the dynamic model estimated a higher peak-to-trough ratio than traditional models, and showed gradual changes in seasonal patterns. Analyses conducted with this model provide insights not available from more traditional approaches.

  1. Fluorocarbon Adsorption in Hierarchical Porous Frameworks

    Energy Technology Data Exchange (ETDEWEB)

    Motkuri, Radha K.; Annapureddy, Harsha V.; Vijayakumar, M.; Schaef, Herbert T.; Martin, P F.; McGrail, B. Peter; Dang, Liem X.; Krishna, Rajamani; Thallapally, Praveen K.

    2014-07-09

    The adsorption behavior of a series of fluorocarbon derivatives was examined on a set of microporous metal organic framework (MOF) sorbents and another set of hierarchical mesoporous MOFs. The microporous M-DOBDC (M = Ni, Co) showed a saturation uptake capacity for R12 of over 4 mmol/g at a very low relative saturation pressure (P/Po) of 0.02. In contrast, the mesoporous MOF MIL-101 showed an exceptionally high uptake capacity reaching over 14 mmol/g at P/Po of 0.4. Adsorption affinity in terms of mass loading and isosteric heats of adsorption were found to generally correlate with the polarizability of the refrigerant with R12 > R22 > R13 > R14 > methane. These results suggest the possibility of exploiting MOFs for separation of azeotropic mixtures of fluorocarbons and use in eco-friendly fluorocarbon-based adsorption cooling and refrigeration applications.

  2. Collective processes in a tokamak with high-energy particles: general problems of the linear theory of Alfven instabilities of a tokamak with high-energy ions

    International Nuclear Information System (INIS)

    Mikhailovskii, A.B.

    1986-01-01

    Some general problems of the theory of Alfven instabilities of a tokamak with high-energy ions are considered. It is assumed that such ions are due to either ionization of fast neutral atoms, injected into the tokamak, or production of them under thermo-nuclear conditions. Small-oscillation equations are derived for the Alfven-type waves, which allow for both destabilizing effects, associated with the high-energy particles, and stabilizing ones, such as effects of shear and bulk-plasm dissipation. A high-energy ion contribution is calculated into the growth rate of the Alfven waves. The author considers the role of trapped-electron collisional dissipation

  3. Hierarchical structure of biological systems: a bioengineering approach.

    Science.gov (United States)

    Alcocer-Cuarón, Carlos; Rivera, Ana L; Castaño, Victor M

    2014-01-01

    A general theory of biological systems, based on a few fundamental propositions, allows a generalization of both the Wiener and the Bertalanffy approaches to theoretical biology. Here, a biological system is defined as a set of self-organized, differentiated elements that interact pair-wise through various networks and media, isolated from other sets by boundaries. Their relation to other systems can be described as a closed loop in a steady state, which leads to a hierarchical structure and functioning of the biological system. This thermodynamic approach of hierarchical character can be applied to biological systems of varying sizes through some general principles, based on the exchange of energy, information, and/or mass from and within the systems.

  4. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes in Hong Kong are presented.

  5. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
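
    The abstracts above describe the SPSS MIXED procedure; the same kind of random-intercept, random-slope growth model can be sketched outside SPSS, for instance with statsmodels in Python (all variable names and data below are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format longitudinal data: repeated "score"
# measurements nested within subjects over six waves.
rng = np.random.default_rng(1)
n_subj, n_wave = 200, 6
subj = np.repeat(np.arange(n_subj), n_wave)
wave = np.tile(np.arange(n_wave), n_subj)
u = rng.normal(0, 1.0, n_subj)            # subject random intercepts
score = 10 + 0.5 * wave + u[subj] + rng.normal(0, 1.0, subj.size)
df = pd.DataFrame({"subject": subj, "wave": wave, "score": score})

# Random-intercept, random-slope growth model, analogous to the
# linear mixed model described for SPSS.
model = smf.mixedlm("score ~ wave", df, groups=df["subject"],
                    re_formula="~wave")
result = model.fit()
print(result.summary())
```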

  6. A primer on linear models

    CERN Document Server

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F
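
    The opening chapters listed here revolve around the linear least squares problem and the normal equations; a small numerical sketch of both (with made-up data) is:

```python
import numpy as np

# Hypothetical data for a simple linear regression y = b0 + b1 * x + e.
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 50)
y = 2.0 + 0.7 * x + rng.normal(0, 0.5, x.size)

X = np.column_stack([np.ones_like(x), x])   # design matrix

# Normal equations: (X'X) b = X'y.  Solving them directly illustrates
# the algebra, but lstsq (QR/SVD based) is numerically preferable.
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_normal, beta_lstsq)
```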

  7. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    Science.gov (United States)

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we

  8. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    Directory of Open Access Journals (Sweden)

    Christophe Coupé

    2018-04-01

    Full Text Available As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM, which address grouping of observations, and generalized linear mixed-effects models (GLMM, which offer a family of distributions for the dependent variable. Generalized additive models (GAM are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS. We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships
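
    GAMLSS itself models location, scale, and shape jointly and is most readily available in R; as a minimal sketch of the overdispersion issue that motivates it, the following compares a Poisson fit with a negative binomial fit on simulated count data (only the mean is modeled here, and all variable names are hypothetical stand-ins, not the phoneme-inventory data analysed in the article):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: inventory-like counts with a single predictor
# (log number of speakers), strongly overdispersed.
rng = np.random.default_rng(3)
log_speakers = rng.normal(10, 2, 1500)
mu = np.exp(2.5 + 0.08 * log_speakers)
counts = rng.negative_binomial(n=5, p=5 / (5 + mu))

X = sm.add_constant(log_speakers)

poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(counts, X,
                    family=sm.families.NegativeBinomial(alpha=1.0)).fit()

# The deviance/df ratio of the Poisson fit flags the overdispersion
# that motivates richer families such as those offered by GAMLSS.
print(poisson_fit.deviance / poisson_fit.df_resid)
print(negbin_fit.aic, poisson_fit.aic)
```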

  9. General theory for multiple input-output perturbations in complex molecular systems. 1. Linear QSPR electronegativity models in physical, organic, and medicinal chemistry.

    Science.gov (United States)

    González-Díaz, Humberto; Arrasate, Sonia; Gómez-SanJuan, Asier; Sotomayor, Nuria; Lete, Esther; Besada-Porto, Lina; Ruso, Juan M

    2013-01-01

    In general, perturbation methods start from a known exact solution of a problem and add "small" variation terms in order to approach a solution of a related problem for which no exact solution is known. Perturbation theory has been widely used in almost all areas of science. Bohr's quantum model, Heisenberg's matrix mechanics, Feynman diagrams, and Poincaré's chaos model or "butterfly effect" in complex systems are examples of perturbation theories. On the other hand, the study of Quantitative Structure-Property Relationships (QSPR) in molecular complex systems is an ideal area for the application of perturbation theory. There are several problems with exact experimental solutions (new chemical reactions, physicochemical properties, drug activity and distribution, metabolic networks, etc.) in public databases like CHEMBL. However, in all these cases, we have an even larger list of related problems without known solutions. We need to know the change in all these properties after a perturbation of the initial boundary conditions, that is, when we test large sets of similar, but different, compounds and/or chemical reactions under slightly different conditions (temperature, time, solvents, enzymes, assays, protein targets, tissues, partition systems, organisms, etc.). However, to the best of our knowledge, there is no QSPR general-purpose perturbation theory to solve this problem. In this work, we first review general aspects and applications of both perturbation theory and QSPR models. Secondly, we formulate a general-purpose perturbation theory for multiple-boundary QSPR problems. Last, we develop three new QSPR-Perturbation theory models. The first model correctly classifies >100,000 pairs of intra-molecular carbolithiations with 75-95% Accuracy (Ac), Sensitivity (Sn), and Specificity (Sp). The model predicts probabilities of variations in the yield and enantiomeric excess of reactions due to at least one perturbation in boundary conditions (solvent, temperature
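
    The abstract stays at a high level, but the general flavour of a perturbation-style QSPR classifier can be sketched: represent each case by the difference between the descriptors of a perturbed system and those of a reference system with a known outcome, then classify whether the property of interest increases. Everything below (features, labels, model choice) is a hypothetical stand-in rather than the authors' actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch: each sample pairs a reference case with known
# outcome and a "perturbed" case; features are the differences in their
# descriptors (random stand-ins for electronegativity indices,
# temperature, solvent descriptors, and so on).
rng = np.random.default_rng(6)
n, d = 500, 6
delta_descriptors = rng.normal(0, 1, (n, d))
# Label: does the property (e.g. yield) increase after the perturbation?
increases = (delta_descriptors @ rng.normal(0, 1, d)
             + rng.normal(0, 0.5, n)) > 0

clf = LogisticRegression().fit(delta_descriptors, increases)
print("training accuracy:", clf.score(delta_descriptors, increases))
```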

  10. Linear programming

    CERN Document Server

    Karloff, Howard

    1991-01-01

    To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...

  11. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
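
    The linear mixed model referred to in these records can be written in its standard general form (a textbook formulation, not specific to any one software implementation):

```latex
% General linear mixed model for subject i
y_i = X_i \beta + Z_i b_i + \varepsilon_i, \qquad
b_i \sim \mathcal{N}(0, G), \qquad
\varepsilon_i \sim \mathcal{N}(0, R_i)
```

    Here y_i collects the repeated measurements of subject i, X_i and Z_i are the fixed- and random-effects design matrices, beta contains the fixed effects, b_i the subject-specific random effects, and G and R_i the random-effects and residual covariance matrices.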

  12. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Clinical time series prediction: towards a hierarchical dynamical system framework

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patient in the training set, and then applying the model in order to predict future time series values on the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive
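
    The lower level of the framework described above fits Gaussian processes to irregularly sampled segments; a minimal sketch of that ingredient alone (not the full GP-plus-linear-dynamical-system hierarchy, and with simulated values in place of CBC data) is:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical irregularly sampled lab values for one patient.
rng = np.random.default_rng(4)
t_obs = np.sort(rng.uniform(0, 30, 15))            # days, irregular
y_obs = 10 + np.sin(t_obs / 5.0) + rng.normal(0, 0.2, t_obs.size)

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t_obs.reshape(-1, 1), y_obs)

# Predict on a regular grid; a higher-level linear dynamical system
# could then model transitions between such per-window GP summaries.
t_grid = np.linspace(0, 30, 100).reshape(-1, 1)
mean, std = gp.predict(t_grid, return_std=True)
print(mean[:5], std[:5])
```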

  14. Hierarchal scalar and vector tetrahedra

    International Nuclear Information System (INIS)

    Webb, J.P.; Forghani, B.

    1993-01-01

    A new set of scalar and vector tetrahedral finite elements are presented. The elements are hierarchal, allowing mixing of polynomial orders; scalar orders up to 3 and vector orders up to 2 are defined. The vector elements impose tangential continuity on the field but not normal continuity, making them suitable for representing the vector electric or magnetic field. Further, the scalar and vector elements are such that they can easily be used in the same mesh, a requirement of many quasi-static formulations. Results are presented for two 50 Hz problems: the Bath Cube, and TEAM Problem 7

  15. Theory of linear operations

    CERN Document Server

    Banach, S

    1987-01-01

    This classic work by the late Stefan Banach has been translated into English so as to reach a yet wider audience. It contains the basics of the algebra of operators, concentrating on the study of linear operators, which corresponds to that of the linear forms a1x1 + a2x2 + ... + anxn of algebra.The book gathers results concerning linear operators defined in general spaces of a certain kind, principally in Banach spaces, examples of which are: the space of continuous functions, that of the pth-power-summable functions, Hilbert space, etc. The general theorems are interpreted in various mathematical areas, such as group theory, differential equations, integral equations, equations with infinitely many unknowns, functions of a real variable, summation methods and orthogonal series.A new fifty-page section (``Some Aspects of the Present Theory of Banach Spaces'''') complements this important monograph.

  16. America, Linearly Cyclical

    Science.gov (United States)

    2013-05-10

    C2C Jessica Adams; Dr. Brissett ... his desires, his failings, and his aspirations follow the same general trend throughout history and throughout cultures. The founding fathers sought...

  17. Scalable Hierarchical Algorithms for stochastic PDEs and UQ

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    H-matrices and the Fast Multipole Method (FMM) are powerful methods for approximating linear operators arising from partial differential and integral equations, reducing the computational cost from quadratic or cubic to log-linear (O(n log n)), where n is the number of degrees of freedom in the discretization. Storage is likewise reduced to log-linear. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered by Kriemann [1,2]. Since 2005, the area of parallel architectures and software has been developing very fast. Progress in GPUs and many-core systems (e.g. Xeon Phi with 64 cores) motivated us to extend the work started in [1,2,7,8].

  18. Scalable Hierarchical Algorithms for stochastic PDEs and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander

    2015-01-05

    H-matrices and the Fast Multipole Method (FMM) are powerful methods for approximating linear operators arising from partial differential and integral equations, reducing the computational cost from quadratic or cubic to log-linear (O(n log n)), where n is the number of degrees of freedom in the discretization. Storage is likewise reduced to log-linear. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered by R. Kriemann, 2005. Since 2005, the area of parallel architectures and software has been developing very fast. Progress in GPUs and many-core systems (e.g. Xeon Phi with 64 cores) motivated us to extend the work started in [1,2,7,8].

  19. Loops in hierarchical channel networks

    Science.gov (United States)

    Katifori, Eleni; Magnasco, Marcelo

    2012-02-01

    Nature provides us with many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture. Although a number of methods have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated and natural graphs extracted from digitized images of dicotyledonous leaves and animal vasculature. We calculate various metrics on the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.

  20. Hierarchically nested river landform sequences

    Science.gov (United States)

    Pasternack, G. B.; Weber, M. D.; Brown, R. A.; Baig, D.

    2017-12-01

    River corridors exhibit landforms nested within landforms repeatedly down spatial scales. In this study we developed, tested, and implemented a new way to create river classifications by mapping domains of fluvial processes with respect to the hierarchical organization of topographic complexity that drives fluvial dynamism. We tested this approach on flow convergence routing, a morphodynamic mechanism with different states depending on the structure of nondimensional topographic variability. Five nondimensional landform types with unique functionality (nozzle, wide bar, normal channel, constricted pool, and oversized) represent this process at any flow. When this typology is nested at base flow, bankfull, and floodprone scales it creates a system with up to 125 functional types. This shows how a single mechanism produces complex dynamism via nesting. Given the classification, we answered nine specific scientific questions to investigate the abundance, sequencing, and hierarchical nesting of these new landform types using a 35-km gravel/cobble river segment of the Yuba River in California. The nested structure of flow convergence routing landforms found in this study revealed that bankfull landforms are nested within specific floodprone valley landform types, and these types control bankfull morphodynamics during moderate to large floods. As a result, this study calls into question the prevailing theory that the bankfull channel of a gravel/cobble river is controlled by in-channel, bankfull, and/or small flood flows. Such flows are too small to initiate widespread sediment transport in a gravel/cobble river with topographic complexity.

  1. Stability of glassy hierarchical networks

    Science.gov (United States)

    Zamani, M.; Camargo-Forero, L.; Vicsek, T.

    2018-02-01

    The structure of interactions in most animal and human societies can be best represented by complex hierarchical networks. In order to maintain close-to-optimal function both stability and adaptability are necessary. Here we investigate the stability of hierarchical networks that emerge from the simulations of an organization type with an efficiency function reminiscent of the Hamiltonian of spin glasses. Using this quantitative approach we find a number of expected (from everyday observations) and highly non-trivial results for the obtained locally optimal networks, including, for example: (i) stability increases with growing efficiency and level of hierarchy; (ii) the same perturbation results in a larger change for more efficient states; (iii) networks with a lower level of hierarchy become more efficient after perturbation; (iv) due to the huge number of possible optimal states only a small fraction of them exhibit resilience and, finally, (v) ‘attacks’ targeting the nodes selectively (regarding their position in the hierarchy) can result in paradoxical outcomes.

  2. Hierarchical modeling of active materials

    International Nuclear Information System (INIS)

    Taya, Minoru

    2003-01-01

    Intelligent (or smart) materials are increasingly becoming key materials for use in actuators and sensors. If an intelligent material is used as a sensor, it can be embedded in a variety of structures, functioning as a health monitoring system that extends their life with high reliability. If an intelligent material is used as an active material in an actuator, it plays a key role in producing dynamic movement of the actuator under a set of stimuli. This talk covers two different active materials for actuators: (1) a piezoelectric laminate with an FGM microstructure and (2) a ferromagnetic shape memory alloy (FSMA). The advantage of the FGM piezo laminate is enhanced fatigue life while maintaining large bending displacement, whereas the FSMA offers fast actuation together with a large force and stroke capability. Hierarchical modeling of these active materials is a key design step in optimizing their microstructures to enhance performance. Hierarchical modeling of the two active materials is discussed briefly: for the FGM piezo laminate, both a micromechanical model and laminate theory are used, while for the FSMA, the modeling interfaces the nano-structure, microstructure, and macro-behavior. (author)

  3. Hierarchical organisation of causal graphs

    International Nuclear Information System (INIS)

    Dziopa, P.

    1993-01-01

    This paper deals with the design of a supervision system using a hierarchy of models formed by graphs, in which the variables are the nodes and the causal relations between the variables are the arcs. To obtain a representation of the variables' evolutions that contains only the relevant features of their real evolutions, the causal relations are completed with qualitative transfer functions (QTFs), which roughly reproduce the behaviour of classical transfer functions. Major improvements have been made in the building of the hierarchical organization. First, the basic variables of the uppermost level and the causal relations between them are chosen. The next graph is built by adding intermediary variables to the upper graph. When the undermost graph has been built, the transfer function parameters corresponding to its causal relations are identified. The second task consists in the upwelling of the information from the undermost graph to the uppermost one. A fusion procedure for the causal relations has been designed to compute the QTFs relevant for each level. This procedure aims to reduce the number of parameters needed to represent an evolution at a high level of abstraction. These techniques have been applied to the hierarchical modelling of a nuclear process. (authors). 8 refs., 12 figs

  4. Hybrid Steepest-Descent Methods for Triple Hierarchical Variational Inequalities

    Directory of Open Access Journals (Sweden)

    L. C. Ceng

    2015-01-01

    Full Text Available We introduce and analyze a relaxed iterative algorithm by combining Korpelevich’s extragradient method, hybrid steepest-descent method, and Mann’s iteration method. We prove that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs, the solution set of finitely many variational inclusions, and the solution set of general system of variational inequalities (GSVI, which is just a unique solution of a triple hierarchical variational inequality (THVI in a real Hilbert space. In addition, we also consider the application of the proposed algorithm for solving a hierarchical variational inequality problem with constraints of finitely many GMEPs, finitely many variational inclusions, and the GSVI. The results obtained in this paper improve and extend the corresponding results announced by many others.

  5. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC) that incorporates the latent variables in the focus of the analysis and the marginalized DIC (mDIC) that integrates them out. Regardless of the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
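
    The mDIC computation described here can be sketched for a simple normal random-intercept model: for each MCMC draw of the hyperparameters, the latent group effects are integrated out by Monte Carlo over replicate draws. This is only an illustration under an assumed model structure and hypothetical names; for this particular model the marginal likelihood is actually available in closed form, but the replicate-sampling version mirrors the general recipe described in the abstract:

```python
import numpy as np
from scipy import stats

def marginal_loglik(y, groups, mu, sigma_b, sigma_e, n_rep=200, rng=None):
    """Monte Carlo estimate of log p(y | mu, sigma_b, sigma_e) for a
    normal random-intercept model, integrating out the group effects
    by simulating replicate latent intercepts."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for g in np.unique(groups):
        yg = y[groups == g]
        b = rng.normal(0.0, sigma_b, n_rep)          # replicate latent draws
        # log p(y_g | b_r) for each replicate, then a stable log-mean-exp.
        ll = stats.norm.logpdf(yg[None, :], mu + b[:, None], sigma_e).sum(axis=1)
        total += np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()
    return total

def mdic(y, groups, posterior_draws, rng=None):
    """Marginalized DIC from MCMC draws of (mu, sigma_b, sigma_e)."""
    devs = np.array([-2 * marginal_loglik(y, groups, *theta, rng=rng)
                     for theta in posterior_draws])
    dbar = devs.mean()                               # posterior mean deviance
    theta_bar = np.mean(posterior_draws, axis=0)     # plug-in estimate
    dhat = -2 * marginal_loglik(y, groups, *theta_bar, rng=rng)
    p_d = dbar - dhat                                # effective parameters
    return dbar + p_d
```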

  6. Hierarchical trigger of the ALICE calorimeters

    CERN Document Server

    Muller, Hans; Novitzky, Norbert; Kral, Jiri; Rak, Jan; Schambach, Joachim; Wang, Ya-Ping; Wang, Dong; Zhou, Daicui

    2010-01-01

    The trigger of the ALICE electromagnetic calorimeters is implemented in 2 hierarchically connected layers of electronics. In the lower layer, level-0 algorithms search shower energy above threshold in locally confined Trigger Region Units (TRU). The top layer is implemented as a single, global trigger unit that receives the trigger data from all TRUs as input to the level-1 algorithm. This architecture was first developed for the PHOS high pT photon trigger before it was adopted by EMCal also for the jet trigger. TRU units digitize up to 112 analogue input signals from the Front End Electronics (FEE) and concentrate their digital stream in a single FPGA. A charge and time summing algorithm is combined with a peakfinder that suppresses spurious noise and is precise to single LHC bunches. With a peak-to-peak noise level of 150 MeV the linear dynamic range above threshold spans from MIP energies at 215 up to 50 GeV. Local level-0 decisions take less than 600 ns after LHC collisions, upon which all TRUs transfer ...

  7. Survival of contact processes on the hierarchical group

    Czech Academy of Sciences Publication Activity Database

    Athreya, S.R.; Swart, Jan M.

    2010-01-01

    Roč. 147, č. 3 (2010), s. 529-563 ISSN 0178-8051 R&D Projects: GA ČR GA201/06/1323 Institutional research plan: CEZ:AV0Z10750506 Keywords : contact process * survival * hierarchical group * coupling * renormalization group Subject RIV: BA - General Mathematics Impact factor: 1.590, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/swart-0342729.pdf

  8. A Simple Hierarchical Pooling Data Structure for Loop Closure

    Science.gov (United States)

    2016-10-16

    performance empirically on the KITTI [9], Oxford [6] and TUM RGB-D [29] datasets, as well as demonstrate extensions to general image retrieval on the ... of a BoW where each word is an element of a dictionary of descriptors obtained off-line by hierarchical k-means clustering, with each word weighted by ... to the inverse document frequency. This standard pipeline, with different clustering procedures to generate the dictionary and different features...

  9. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.

  10. Dynamics and thermodynamics in hierarchically organized systems applications in physics, biology and economics

    CERN Document Server

    Auger, P

    2013-01-01

    One of the most fundamental and efficient ways of conceptualizing complex systems is to organize them hierarchically. A hierarchically organized system is represented by a network of interconnected subsystems, each of which has its own network of subsystems, and so on, until some elementary subsystems are reached that are not further decomposed. This original and important book proposes a general mathematical theory of a hierarchical system and shows how it can be applied to very different topics such as physics (Hamiltonian systems), biology (coupling the molecular and the cellular levels), and economics.

  11. Linear and non-linear optics of condensed matter

    International Nuclear Information System (INIS)

    McLean, T.P.

    1977-01-01

    Part I - Linear optics: 1. General introduction. 2. Frequency dependence of epsilon(ω, k vector). 3. Wave-vector dependence of epsilon(ω, k vector). 4. Tensor character of epsilon(ω, k vector). Part II - Non-linear optics: 5. Introduction. 6. A classical theory of non-linear response in one dimension. 7. The generalization to three dimensions. 8. General properties of the polarizability tensors. 9. The phase-matching condition. 10. Propagation in a non-linear dielectric. 11. Second harmonic generation. 12. Coupling of three waves. 13. Materials and their non-linearities. 14. Processes involving energy exchange with the medium. 15. Two-photon absorption. 16. Stimulated Raman effect. 17. Electro-optic effects. 18. Limitations of the approach presented here. (author)

  12. Application of adaptive hierarchical sparse grid collocation to the uncertainty quantification of nuclear reactor simulators

    Energy Technology Data Exchange (ETDEWEB)

    Yankov, A.; Downar, T. [University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States)

    2013-07-01

    Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective, they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity by using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant is converged to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
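
    A one-dimensional caricature of the hierarchical hat-function interpolation underlying such collocation schemes is sketched below. It refines level by level rather than locally per point, and the multi-dimensional Smolyak combination is omitted, so it is meant only to convey the idea of hierarchical surpluses and a user-defined convergence threshold:

```python
import numpy as np

def hat(x, center, h):
    """Piecewise-linear hat basis function of width h centred at `center`."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / h)

def hierarchical_surrogate(f, max_level=10, tol=1e-4):
    """1D hierarchical hat-function interpolant of f on [0, 1], refined
    level by level until all new surpluses fall below `tol`."""
    nodes, widths, surpluses = [0.5], [0.5], [f(0.5)]   # level-1 point

    def evaluate(x):
        return sum(w * hat(x, c, h)
                   for c, h, w in zip(nodes, widths, surpluses))

    for level in range(2, max_level + 1):
        h = 0.5 ** level
        new_pts = np.arange(1, 2 ** level, 2) * h       # odd multiples of h
        added = False
        for x in new_pts:
            surplus = f(x) - evaluate(x)                 # hierarchical surplus
            if abs(surplus) > tol:
                nodes.append(x)
                widths.append(h)
                surpluses.append(surplus)
                added = True
        if not added:                                    # converged level
            break
    return evaluate

# Usage: surrogate for a smooth "code output" as a function of one input.
g = hierarchical_surrogate(lambda x: np.exp(-3 * x) * np.sin(6 * x))
print(g(0.3), np.exp(-0.9) * np.sin(1.8))
```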

  13. Hierarchical Control for Smart Grids

    DEFF Research Database (Denmark)

    Trangbæk, K; Bendtsen, Jan Dimon; Stoustrup, Jakob

    2011-01-01

    This paper deals with hierarchical model predictive control (MPC) of smart grid systems. The design consists of a high level MPC controller, a second level of so-called aggregators, which reduces the computational and communication-related load on the high-level control, and a lower level of autonomous consumers. The control system is tasked with balancing electric power production and consumption within the smart grid, and makes active use of the flexibility of a large number of power producing and/or power consuming units. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption, and on the other hand from natural variations in power production, e.g. from wind turbines. The high-level MPC problem is solved using quadratic optimisation, while the aggregator level can either involve quadratic optimisation or simple sorting-based min-max solutions...

  14. Silver Films with Hierarchical Chirality.

    Science.gov (United States)

    Ma, Liguo; Cao, Yuanyuan; Duan, Yingying; Han, Lu; Che, Shunai

    2017-07-17

    Physical fabrication of chiral metallic films usually results in singular or large-sized chirality, restricting the optical asymmetric responses to long electromagnetic wavelengths. The chiral molecule-induced formation of silver films prepared chemically on a copper substrate through a redox reaction is presented. Three levels of chirality were identified: primary twisted nanoflakes with atomic crystal lattices, secondary helical stacking of these nanoflakes to form nanoplates, and tertiary micrometer-sized circinates consisting of chiral arranged nanoplates. The chiral Ag films exhibited multiple plasmonic absorption- and scattering-based optical activities at UV/Vis wavelengths based on their hierarchical chirality. The Ag films showed chiral selectivity for amino acids in catalytic electrochemical reactions, which originated from their primary atomic crystal lattices. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Hierarchical coarse-graining transform.

    Science.gov (United States)

    Pancaldi, Vera; King, Peter R; Christensen, Kim

    2009-03-01

    We present a hierarchical transform that can be applied to Laplace-like differential equations such as Darcy's equation for single-phase flow in a porous medium. A finite-difference discretization scheme is used to set the equation in the form of an eigenvalue problem. Within the formalism suggested, the pressure field is decomposed into an average value and fluctuations of different kinds and at different scales. The application of the transform to the equation allows us to calculate the unknown pressure with a varying level of detail. A procedure is suggested to localize important features in the pressure field based only on the fine-scale permeability, and hence we develop a form of adaptive coarse graining. The formalism and method are described and demonstrated using two synthetic toy problems.
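
    The operator that the transform acts on can be illustrated with a one-dimensional finite-difference discretization of single-phase Darcy flow (a hypothetical permeability field; the transform itself, which decomposes the pressure into an average plus multiscale fluctuations, is not reproduced here):

```python
import numpy as np

# Sketch: 1D single-phase Darcy flow, d/dx( k(x) dp/dx ) = 0, discretized
# with finite differences on n interior cells, with p(0) = 1 and p(1) = 0.
n = 50
rng = np.random.default_rng(5)
k = np.exp(rng.normal(0.0, 1.0, n + 1))      # log-normal face permeabilities

# Assemble the tridiagonal finite-difference operator A p = b.
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    kl, kr = k[i], k[i + 1]
    A[i, i] = -(kl + kr)
    if i > 0:
        A[i, i - 1] = kl
    if i < n - 1:
        A[i, i + 1] = kr
b[0] -= k[0] * 1.0                           # left Dirichlet value p = 1
b[-1] -= k[n] * 0.0                          # right Dirichlet value p = 0

p = np.linalg.solve(A, b)

# The transform works with the spectrum of this operator; its eigenvectors
# separate the average pressure from fluctuations at successively finer scales.
eigvals, eigvecs = np.linalg.eigh(A)
print(p[:5], eigvals[:3])
```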

  16. Adaptive hierarchical multi-agent organizations

    NARCIS (Netherlands)

    Ghijsen, M.; Jansweijer, W.N.H.; Wielinga, B.J.; Babuška, R.; Groen, F.C.A.

    2010-01-01

    In this chapter, we discuss the design of adaptive hierarchical organizations for multi-agent systems (MAS). Hierarchical organizations have a number of advantages such as their ability to handle complex problems and their scalability to large organizations. By introducing adaptivity in the

  17. The Case for a Hierarchical Cosmology

    Science.gov (United States)

    Vaucouleurs, G. de

    1970-01-01

    The development of modern theoretical cosmology is presented and some questionable assumptions of orthodox cosmology are pointed out. It is suggested that recent observations indicate that hierarchical clustering is a basic factor in cosmology. The implications of hierarchical models of the universe are considered. Bibliography. (LC)

  18. Discovering hierarchical structure in normal relational data

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Herlau, Tue; Mørup, Morten

    2014-01-01

    We propose a non-parametric generative model for hierarchical clustering of similarity based on multifurcating Gibbs fragmentation trees. This allows us to infer and display the posterior distribution of hierarchical structures that comply with the data. We demonstrate the utility of our method on synthetic data and data of functional...

  19. Biased trapping issue on weighted hierarchical networks

    Indian Academy of Sciences (India)

    ...hierarchical networks which are based on the classic scale-free hierarchical networks. ... Keywords: weighted hierarchical networks; weight-dependent walks; mean first passage ... The weighted networks can mimic some real-world natural and social systems to ...

  20. Plant functional group classifications and a generalized hierarchical ...

    African Journals Online (AJOL)

    Yomi

    2010-12-27

    ...functional traits ranging from the molecular to the biospherical level, and operating on ... Many researchers have discussed landscape dynamics ... The concept groups plant species into distinct clusters according to ...

  1. Numerically Optimized Uniformly Most Powerful Alphabets for Hierarchical-Decode-and-Forward Two-Way Relaying

    Directory of Open Access Journals (Sweden)

    M. Hekrdla

    2011-01-01

    Full Text Available We address the issue of the parametric performance of the Hierarchical-Decode-and-Forward (HDF) strategy in a wireless 2-way relay channel. HDF, a promising strategy representing the concept of wireless network coding, performs well with a pre-coding strategy that requires Channel State Information (CSI) on the transceiver side. Assuming the practical case in which CSI is available only on the receiver side and the channel conditions do not allow adaptive strategies, the parametrization causes significant HDF performance degradation for some modulation alphabets. Alphabets that are robust to the parametrization (denoted Uniformly Most Powerful, UMP) have already been proposed, but only within the class of non-linear multi-dimensional frequency modulations. In this work, we focus on the general design of unrestricted UMP alphabets. We formulate an optimization problem which is solved by standard non-linear convex constrained optimization algorithms, in particular by a Nelder-Mead global optimization search, which is further refined by the local interior-point method.

  2. Characterization of projective general linear groups

    Directory of Open Access Journals (Sweden)

    Alireza Khalili Asboei

    2016-03-01

    Full Text Available Let $G$ be a finite group and $\pi_{e}(G)$ be the set of element orders of $G$. Let $k \in \pi_{e}(G)$ and $s_{k}$ be the number of elements of order $k$ in $G$. Set nse$(G) := \{ s_{k} \mid k \in \pi_{e}(G) \}$. In this paper, it is proved that if $|G| = |\mathrm{PGL}_{2}(q)|$, where $q$ is an odd prime power, and nse$(G) = $ nse$(\mathrm{PGL}_{2}(q))$, then $G \cong \mathrm{PGL}_{2}(q)$.

  3. Using generalized linear (mixed) models in HCI

    NARCIS (Netherlands)

    Kaptein, M.C.; Robertson, J; Kaptein, M

    2016-01-01

    In HCI we often encounter dependent variables which are not (conditionally) normally distributed: we measure response-times, mouse-clicks, or the number of dialog steps it took a user to complete a task. Furthermore, we often encounter nested or grouped data; users are grouped within companies or

  4. Binar Sort: A Linear Generalized Sorting Algorithm

    OpenAIRE

    Gilreath, William F.

    2008-01-01

    Sorting is a common and ubiquitous activity for computers. It is not surprising that there exist a plethora of sorting algorithms. For all the sorting algorithms, it is an accepted performance limit that sorting algorithms are linearithmic or O(N lg N). The linearithmic lower bound in performance stems from the fact that the sorting algorithms use the ordering property of the data. The sorting algorithm uses comparison by the ordering property to arrange the data elements from an initial perm...
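
    The binar sort algorithm itself is not described in this excerpt, so no attempt is made to reproduce it; as a reminder of how sorting can beat the O(N lg N) comparison bound by exploiting the representation of keys rather than only their ordering, a standard counting sort (not the paper's algorithm) runs in linear time for bounded integer keys:

```python
def counting_sort(values, max_value):
    """Linear-time sort for integers in [0, max_value]; illustrates a
    non-comparison sort, not the binar sort described in the abstract."""
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)
    return out

print(counting_sort([5, 3, 8, 3, 0, 7], max_value=8))
```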

  5. Linear Programming Problems for Generalized Uncertainty

    Science.gov (United States)

    Thipwiwatpotjana, Phantipa

    2010-01-01

    Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…

  6. Hierarchically Nanostructured Materials for Sustainable Environmental Applications

    Directory of Open Access Journals (Sweden)

    Zheng eRen

    2013-11-01

    Full Text Available This article presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions and multiple functionalities towards water remediation, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology.

  7. Processing of hierarchical syntactic structure in music.

    Science.gov (United States)

    Koelsch, Stefan; Rohrmeier, Martin; Torrecuso, Renzo; Jentschke, Sebastian

    2013-09-17

    Hierarchical structure with nested nonlocal dependencies is a key feature of human language and can be identified theoretically in most pieces of tonal music. However, previous studies have argued against the perception of such structures in music. Here, we show processing of nonlocal dependencies in music. We presented chorales by J. S. Bach and modified versions in which the hierarchical structure was rendered irregular whereas the local structure was kept intact. Brain electric responses differed between regular and irregular hierarchical structures, in both musicians and nonmusicians. This finding indicates that, when listening to music, humans apply cognitive processes that are capable of dealing with long-distance dependencies resulting from hierarchically organized syntactic structures. Our results reveal that a brain mechanism fundamental for syntactic processing is engaged during the perception of music, indicating that processing of hierarchical structure with nested nonlocal dependencies is not just a key component of human language, but a multidomain capacity of human cognition.

  8. A Hierarchical Algorithm for Integrated Scheduling and Control With Applications to Power Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dinesen, Peter Juhler; Jørgensen, John Bagterp

    2016-01-01

    The contribution of this paper is a hierarchical algorithm for integrated scheduling and control via model predictive control of hybrid systems. The controlled system is a linear system composed of continuous control, state, and output variables. Binary variables occur as scheduling decisions in ...

  9. A Hierarchical FEM approach for Simulation of Geometrical and Material induced Instability of Composite Structures

    DEFF Research Database (Denmark)

    Hansen, Anders L.; Lund, Erik; Pinho, Silvestre T.

    2009-01-01

    In this paper a hierarchical FE approach is utilized to simulate delamination in a composite plate loaded in uni-axial compression. Progressive delamination is modelled by use of cohesive interface elements that are automatically embedded. The non-linear problem is solved quasi-statically in whic...

  10. linear-quadratic-linear model

    Directory of Open Access Journals (Sweden)

    Tanwiwat Jaikuna

    2017-02-01

    Full Text Available Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system. The equivalent dose in 2 Gy fractions (EQD2) was calculated using the biological effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in the EQD2 between the software calculation and the manual calculation was not significant (0.00%), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool for generating the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
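
    The EQD2 conversion mentioned here follows from the biological effective dose; under the plain LQ model (the LQL refinement, which adds a linear continuation above a transition dose, is omitted, and the alpha/beta values below are generic textbook choices rather than those used by Isobio) it reduces to a one-line formula:

```python
def eqd2_lq(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions under the plain LQ model.

    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), where D is the total
    physical dose and d the dose per fraction.  The LQL model used by the
    Isobio software additionally replaces the quadratic term by a linear
    continuation above a transition dose; that refinement is omitted here.
    """
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Example: 28 fractions of 2.5 Gy (70 Gy total) to a tumour with
# alpha/beta = 10 Gy, and the same plan for late-responding tissue (3 Gy).
print(eqd2_lq(70.0, 2.5, 10.0))   # ~72.9 Gy
print(eqd2_lq(70.0, 2.5, 3.0))    # 77.0 Gy
```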

  11. Minimax approach problem with incomplete information for the two-level hierarchical discrete-time dynamical system

    Energy Technology Data Exchange (ETDEWEB)

    Shorikov, A. F. [Ural Federal University, 19 S. Mira, Ekaterinburg, 620002, Russia and Institute of Mathematics and Mechanics, Ural Division of Russian Academy of Sciences, 16 S. Kovalevskaya, Ekaterinburg, 620990 (Russian Federation)

    2014-11-18

    We consider a discrete-time dynamical system consisting of three controllable objects. The motions of all objects are given by corresponding linear or convex discrete-time recurrent vector relations. The control system has two levels: a basic (first, or I) level, which is dominating, and a subordinate (second, or II) level; the two levels have different criteria of functioning and are united a priori by informational and control connections defined in advance. For the dynamical system in question, we propose a mathematical formalization in the form of a multistep problem of two-level hierarchical minimax program control over the terminal approach process with incomplete information, and we give a general scheme for its solution.

  12. Dynamic Linear Models with R

    CERN Document Server

    Campagnoli, Patrizia; Petris, Giovanni

    2009-01-01

    State space models have gained tremendous popularity in fields as disparate as engineering, economics, genetics and ecology. Introducing general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. It illustrates the fundamental steps needed to use dynamic linear models in practice, using an R package.

  13. Further linear algebra

    CERN Document Server

    Blyth, T S

    2002-01-01

    Most of the introductory courses on linear algebra develop the basic theory of finite-dimensional vector spaces, and in so doing relate the notion of a linear mapping to that of a matrix. Generally speaking, such courses culminate in the diagonalisation of certain matrices and the application of this process to various situations. Such is the case, for example, in our previous SUMS volume Basic Linear Algebra. The present text is a continuation of that volume, and has the objective of introducing the reader to more advanced properties of vector spaces and linear mappings, and consequently of matrices. For readers who are not familiar with the contents of Basic Linear Algebra we provide an introductory chapter that consists of a compact summary of the prerequisites for the present volume. In order to consolidate the student's understanding we have included a large number of illustrative and worked examples, as well as many exercises that are strategically placed throughout the text. Solutions to the ex...

  14. Nonlinear robust hierarchical control for nonlinear uncertain systems

    Directory of Open Access Journals (Sweden)

    Leonessa Alexander

    1999-01-01

    Full Text Available A nonlinear robust control-system design framework predicated on a hierarchical switching controller architecture parameterized over a set of moving nominal system equilibria is developed. Specifically, using equilibria-dependent Lyapunov functions, a hierarchical nonlinear robust control strategy is developed that robustly stabilizes a given nonlinear system over a prescribed range of system uncertainty by robustly stabilizing a collection of nonlinear controlled uncertain subsystems. The robust switching nonlinear controller architecture is designed based on a generalized (lower semicontinuous) Lyapunov function obtained by minimizing a potential function over a given switching set induced by the parameterized nominal system equilibria. The proposed framework robustly stabilizes a compact positively invariant set of a given nonlinear uncertain dynamical system with structured parametric uncertainty. Finally, the efficacy of the proposed approach is demonstrated on a jet engine propulsion control problem with uncertain pressure-flow map data.

  15. Poincaré Embeddings for Learning Hierarchical Representations

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically do not account for this property. In this talk, I will discuss a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space -- or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.
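
    The key quantity behind such embeddings is the hyperbolic distance in the Poincaré ball, which can be computed directly (a small self-contained sketch; the Riemannian optimization used to learn the embeddings is not shown, and the example points are arbitrary):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance in the Poincare ball model of hyperbolic space:
    d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))."""
    uu = np.clip(np.dot(u, u), 0, 1 - eps)
    vv = np.clip(np.dot(v, v), 0, 1 - eps)
    duv = np.dot(u - v, u - v)
    return np.arccosh(1 + 2 * duv / ((1 - uu) * (1 - vv)))

# Points near the boundary are exponentially "far" from each other,
# which is what lets tree-like hierarchies embed with low distortion.
root = np.array([0.0, 0.0])
leaf_a = np.array([0.70, 0.55])
leaf_b = np.array([0.70, -0.55])
print(poincare_distance(root, leaf_a), poincare_distance(leaf_a, leaf_b))
```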

  16. Dynamic control of quadruped robot with hierarchical control structure

    International Nuclear Information System (INIS)

    Wang, Yu-Zhang; Furusho, Junji; Okajima, Yosuke.

    1988-01-01

    For moving on irregular terrain, such as the inside of a nuclear power plant or outer space, multilegged walking robots are generally recognized as suitable. This paper proposes a hierarchical control structure for the dynamic control of quadruped walking robots. For this purpose, we present a reduced order model which approximates the original higher order model very well. Since this reduced order model does not require much computational time, it can be used in the real-time control of a quadruped walking robot. A hierarchical control experiment is shown in which the optimal control algorithm using the reduced order model is computed on one microprocessor and the remaining control algorithm on another. (author)
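    The abstract does not specify the reduced-order model, so the sketch below is only a generic illustration of the idea of running an optimal controller on a small linearized model at the top of the hierarchy; all matrices and weights are hypothetical placeholders, not values from the paper.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Hypothetical reduced-order linearized dynamics x_dot = A x + B u.
      A = np.array([[0.0, 1.0],
                    [0.5, -0.2]])
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])    # state weighting
      R = np.array([[0.1]])       # input weighting

      # Continuous-time LQR: solve the Riccati equation, then K = R^-1 B^T P.
      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)

      def high_level_command(x):
          """Upper layer of the hierarchy: optimal state feedback on the reduced model."""
          return -K @ x

      print(high_level_command(np.array([0.1, 0.0])))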

  17. TiO2 nanowire-templated hierarchical nanowire network as water-repelling coating

    Science.gov (United States)

    Hang, Tian; Chen, Hui-Jiuan; Xiao, Shuai; Yang, Chengduan; Chen, Meiwan; Tao, Jun; Shieh, Han-ping; Yang, Bo-ru; Liu, Chuan; Xie, Xi

    2017-12-01

    The extraordinary water-repelling properties of superhydrophobic surfaces make them novel candidates for a great variety of potential applications. A general approach to achieving superhydrophobicity requires a low-energy coating on the surface and roughness on the nano- and micrometre scale. However, the typical construction of superhydrophobic surfaces with micro-nano structure through top-down fabrication is restricted by sophisticated fabrication techniques and limited choices of substrate materials. Micro-nanoscale topographies templated by conventional microparticles through surface coating may produce large variations in roughness and uncontrollable defects, resulting in poorly controlled surface morphology and wettability. In this work, a micro-nanoscale hierarchical nanowire network was fabricated to construct a self-cleaning coating using one-dimensional TiO2 nanowires as microscale templates. A hierarchical structure with homogeneous morphology was achieved by branching ZnO nanowires on the TiO2 nanowire backbones through a hydrothermal reaction. The hierarchical nanowire network displayed homogeneous micro/nano-topography, in contrast to hierarchical structures templated by traditional microparticles. The hierarchical nanowire network film exhibited high repellency to both water and cell culture medium after functionalization with fluorinated organic molecules. The hierarchical structure templated by the TiO2 nanowire coating significantly increased the surface superhydrophobicity compared to vertical ZnO nanowires with nanotopography alone. Our results demonstrate a promising strategy of using nanowires as microscale templates for the rational design of hierarchical coatings with the desired superhydrophobicity, one that can also be applied to various substrate materials.

  18. Object recognition with hierarchical discriminant saliency networks

    Directory of Open Access Journals (Sweden)

    Sunhyoung eHan

    2014-09-01

    The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. The HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors, and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The resulting performance demonstrates benefits for all the functional enhancements of the HDSN.
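    The parametric rectifier mentioned above is not fully specified in the abstract, so the following is a hedged illustration only: a per-channel rectifier with a learnable threshold and slope, showing how such a non-linearity can be tuned for target classes.

      import numpy as np

      def parametric_relu(x, slope, threshold):
          """Hypothetical parametric rectifier: zero below a learnable threshold,
          linear with a learnable slope above it (one pair of parameters per channel)."""
          return slope * np.maximum(x - threshold, 0.0)

      # Feature maps of shape (channels, height, width); parameters broadcast per channel.
      features = np.random.randn(3, 4, 4)
      slope = np.array([1.0, 0.5, 2.0]).reshape(-1, 1, 1)
      threshold = np.array([0.0, 0.1, -0.1]).reshape(-1, 1, 1)
      print(parametric_relu(features, slope, threshold).shape)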

  19. Star Cluster Structure from Hierarchical Star Formation

    Science.gov (United States)

    Grudic, Michael; Hopkins, Philip; Murray, Norman; Lamberts, Astrid; Guszejnov, David; Schmitz, Denise; Boylan-Kolchin, Michael

    2018-01-01

    Young massive star clusters (YMCs) spanning 10⁴–10⁸ M⊙ in mass generally have similar radial surface density profiles, with an outer power-law index typically between -2 and -3. This similarity suggests that they are shaped by scale-free physics at formation. Recent multi-physics MHD simulations of YMC formation have also produced populations of YMCs with this type of surface density profile, allowing us to narrow down the physics necessary to form a YMC with properties as observed. We show that the shallow density profiles of YMCs are a natural result of phase-space mixing that occurs as they assemble from the clumpy, hierarchically-clustered configuration imprinted by the star formation process. We develop physical intuition for this process via analytic arguments and collisionless N-body experiments, elucidating the connection between star formation physics and star cluster structure. This has implications for the early-time structure and evolution of proto-globular clusters, and prospects for simulating their formation in the FIRE cosmological zoom-in simulations.
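    For concreteness, the outer power-law behaviour quoted above can be written schematically (a generic form, not the paper's exact fitting function) as

      \[
        \Sigma(r) \;\propto\; r^{-\gamma}, \qquad 2 \lesssim \gamma \lesssim 3
        \quad \text{for radii well outside the cluster core.}
      \]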

  20. GEODESIC RECONSTRUCTION, SADDLE ZONES & HIERARCHICAL SEGMENTATION

    Directory of Open Access Journals (Sweden)

    Serge Beucher

    2011-05-01

    Morphological reconstruction based on geodesic operators is a powerful tool in mathematical morphology. The general definition of this reconstruction supposes the use of a marker function f which is not necessarily related to the function g to be built. However, this paper deals with operations where the marker function is defined from given characteristic regions of the initial function f, as is the case, for instance, for the extrema (maxima or minima) but also for the saddle zones. Firstly, we show that the intuitive definition of a saddle zone is not easy to handle, especially when digitised images are involved. However, some of these saddle zones (regional ones, also called overflow zones) can be defined, and this definition provides a simple algorithm to extract them. The second part of the paper is devoted to the use of these overflow zones as markers in image reconstruction. This reconstruction provides a new function which exhibits a new hierarchy of extrema. This hierarchy is equivalent to the hierarchy produced by the so-called waterfall algorithm. We explain why the waterfall algorithm can be achieved by performing a watershed transform of the function reconstructed from its initial watershed lines. Finally, some examples of the use of this hierarchical segmentation are described.
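    As a rough illustration of the reconstruction-then-watershed idea described above (using scikit-image as an assumed toolkit, not the authors' implementation), one waterfall-style hierarchy step can be sketched as follows: segment the gradient, rebuild the gradient from its watershed lines by geodesic reconstruction, then segment the rebuilt function again to obtain the coarser level.

      import numpy as np
      from skimage.filters import sobel
      from skimage.morphology import reconstruction
      from skimage.segmentation import watershed

      def waterfall_step(image):
          """One hierarchy level: fine watershed, geodesic reconstruction (by erosion)
          of the gradient from its watershed lines, then a coarser watershed."""
          gradient = sobel(image)
          fine = watershed(gradient, watershed_line=True)    # label 0 marks the lines
          lines = fine == 0
          # Marker: the gradient values on the lines, the maximum value elsewhere.
          seed = np.where(lines, gradient, gradient.max())
          rebuilt = reconstruction(seed, gradient, method='erosion')
          return watershed(rebuilt)                          # next-level segmentation

      coarse = waterfall_step(np.random.rand(64, 64))        # hypothetical input image
      print(np.unique(coarse).size)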