DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.
Short communication: Alteration of priors for random effects in Gaussian linear mixed model
DEFF Research Database (Denmark)
Vandenplas, Jérémie; Christensen, Ole Fredslund; Gengler, Nicholas
2014-01-01
… multiple-trait predictions of lactation yields, and Bayesian approaches integrating external information into genetic evaluations need to alter both the mean and (co)variance of the prior distributions and, to our knowledge, most software packages available in the animal breeding community do not permit such alterations. Therefore, the aim of this study was to propose a method to alter both the mean and (co)variance of the prior multivariate normal distributions of random effects of linear mixed models while using currently available software packages. The proposed method was tested on simulated examples with 3 different software packages available in animal breeding. The examples showed that the proposed method can alter both the mean and (co)variance of the prior distributions with currently available software packages through the use of an extended data file and a user-supplied (co)variance matrix.
A simulation-based goodness-of-fit test for random effects in generalized linear mixed models
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function…
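The conditional-simulation idea can be illustrated in the simplest Gaussian random-intercept case, where the conditional law of the random effects given the data is available in closed form. The model, sample sizes, and variance values below are illustrative assumptions, and the true parameters are treated as known (the abstract's correction for estimated parameters is not implemented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, n = 2000, 5            # groups, observations per group
tau2, sigma2 = 1.0, 1.0   # random-effect and residual variances (assumed known)

u = rng.normal(0.0, np.sqrt(tau2), size=m)                 # true random effects
y = u[:, None] + rng.normal(0.0, np.sqrt(sigma2), size=(m, n))
ybar = y.mean(axis=1)

# Conditional law of u_i given its group mean (Gaussian conjugacy):
shrink = n * tau2 / (sigma2 + n * tau2)
cond_sd = np.sqrt(tau2 * sigma2 / (sigma2 + n * tau2))
u_sim = rng.normal(shrink * ybar, cond_sd)                 # conditional simulation

# Under the correct model, u_sim is marginally N(0, tau2); compare distributions.
ks = stats.kstest(u_sim, "norm", args=(0.0, np.sqrt(tau2)))
print(round(float(u_sim.var(ddof=1)), 2), round(float(ks.pvalue), 3))
```

If the random-effects distribution were misspecified, the empirical distribution of `u_sim` would deviate from the assumed normal, which is what the test detects.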
Generalized, Linear, and Mixed Models
McCulloch, Charles E; Neuhaus, John M
2011-01-01
An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed models…
Multivariate generalized linear mixed models using R
Berridge, Damon Mark
2011-01-01
Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...
Statistical Tests for Mixed Linear Models
Khuri, André I; Sinha, Bimal K
2011-01-01
An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models…
Directory of Open Access Journals (Sweden)
Crasmareanu Mircea
2017-12-01
We consider the paracomplex version of the notion of mixed linear spaces introduced by M. Jurchescu in [4] by replacing the complex unit i with the paracomplex unit j, j^2 = 1. The linear algebra of these spaces is studied with a special view towards their morphisms.
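The j^2 = 1 rule above defines split-complex (paracomplex) arithmetic, which differs from complex multiplication only in the sign of one product term. A minimal sketch; the class name and fields are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Paracomplex:
    """Number a + b*j with j*j = +1 (the split-complex/paracomplex unit)."""
    a: float  # real part
    b: float  # paracomplex part

    def __add__(self, other):
        return Paracomplex(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + b1 j)(a2 + b2 j) = (a1 a2 + b1 b2) + (a1 b2 + b1 a2) j, since j^2 = 1
        return Paracomplex(self.a * other.a + self.b * other.b,
                           self.a * other.b + self.b * other.a)

j = Paracomplex(0.0, 1.0)
print(j * j)  # equals 1, unlike the complex unit i
```

Note the plus sign in the real part of the product, where complex multiplication has a minus; this is the whole algebraic difference.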
Random linear codes in steganography
Directory of Open Access Journals (Sweden)
Kamil Kaczyński
2016-12-01
Syndrome coding using linear codes is a technique that allows improvement of the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code; in parallel, it offers easy generation of the parity-check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear code [8, 2] was used as the basis for the modification. The proposed algorithm was implemented, and its parameters were evaluated in practice on test images. Keywords: steganography, random linear codes, RLC, LSB
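A toy sketch of the syndrome-coding step with an [8, 2] binary code. For simplicity the parity-check matrix is taken in systematic form [I | A] with a random tail, which guarantees every 6-bit syndrome is reachable, and a brute-force coset-leader search is feasible at this tiny block length. All names and sizes are illustrative:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

n, k = 8, 2
# Parity-check matrix H = [I | A] of a (partly) random [8, 2] code over GF(2);
# full row rank guarantees every 6-bit syndrome can be embedded.
H = np.hstack([np.eye(n - k, dtype=int), rng.integers(0, 2, size=(n - k, k))])

def embed(cover_lsb, msg):
    """Flip as few cover LSBs as possible so that the stego syndrome equals msg."""
    target = (msg - H @ cover_lsb) % 2
    best = None
    for bits in product((0, 1), repeat=n):   # brute force over all 2^8 flip patterns
        e = np.array(bits)
        if np.array_equal(H @ e % 2, target) and (best is None or e.sum() < best.sum()):
            best = e
    return (cover_lsb + best) % 2

cover = rng.integers(0, 2, size=n)      # LSB plane of 8 cover pixels
msg = rng.integers(0, 2, size=n - k)    # 6 message bits
stego = embed(cover, msg)
print((H @ stego % 2 == msg).all(), int((stego != cover).sum()))
```

The receiver recovers the message as H times the stego LSBs modulo 2; the embedding efficiency comes from flipping only a coset leader rather than one pixel per message bit.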
Linear and Generalized Linear Mixed Models and Their Applications
Jiang, Jiming
2007-01-01
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and presents an up-to-date account of theory and methods for the analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it includes recently developed methods, such as mixed model diagnostics, mixed model selection, and the jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested…
Model Selection with the Linear Mixed Model for Longitudinal Data
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Squares of Random Linear Codes
DEFF Research Database (Denmark)
Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego
2015-01-01
Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code "typically" fill the whole space? We give a positive answer for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise…
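The construction can be checked numerically over GF(2) (the paper treats general fields; note that over GF(2) every codeword is its own componentwise square, so the square always contains the code). Parameters are illustrative, with length n = 40 below k(k+1)/2 = 55, the regime in which the square typically fills the whole space:

```python
import numpy as np

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

rng = np.random.default_rng(2)
k, n = 10, 40                                # n below k(k+1)/2 = 55
G = rng.integers(0, 2, size=(k, n))          # generator of a random [n, k] code
# Generators of the square: componentwise products of all pairs of rows.
sq = np.array([G[i] * G[j] for i in range(k) for j in range(i, k)])
print(gf2_rank(G), gf2_rank(sq))
```

The square's dimension is always between the code's dimension and min(n, k(k+1)/2); in this parameter range it is typically n, i.e. the square fills the whole space.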
Linear mixed models in sensometrics
DEFF Research Database (Denmark)
Kuznetsova, Alexandra
Today’s companies and researchers gather large amounts of data of different kinds. In consumer studies the objective is the collection of data to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products… Standard statistical software packages can be used for some of the purposes… The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers as well as the ANOVA modelling part of the Consumer… An open-source software tool, ConsumerCheck, was developed in this project and is now available for everyone. …will represent a major step forward concerning this important problem in modern consumer-driven product development, improving the quality of decision making in Danish as well as international food companies and other companies using the same methods.
Linear mixed models for longitudinal data
Molenberghs, Geert
2000-01-01
This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commercial…
Estimation and Inference for Very Large Linear Mixed Effects Models
Gao, K.; Owen, A. B.
2016-01-01
Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are $N$ observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications $N$ can be quite large. Methods that do not account for the correlation structure can…
The RANDOM computer program: A linear congruential random number generator
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
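A linear congruential generator is the one-line recurrence x_{n+1} = (a*x_n + c) mod m. A minimal Python sketch using ANSI-C-style constants (the original program is FORTRAN and also tests parameter choices, which this sketch does not):

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m,
    yielding values scaled to [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

g = lcg(42)
sample = [next(g) for _ in range(5)]
print(sample)
```

The quality of the stream depends entirely on the choice of a, c, and m, which is exactly the parameter-selection problem the RANCYCLE and ARITH programs assist with.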
Best linear decoding of random mask images
International Nuclear Information System (INIS)
Woods, J.W.; Ekstrom, M.P.; Palmieri, T.M.; Twogood, R.E.
1975-01-01
In 1968 Dicke proposed coded imaging of x and γ rays via random pinholes. Since then, many authors have agreed with him that this technique can offer significant image improvement. A best linear decoding of the coded image is presented, and its superiority over the conventional matched filter decoding is shown. Experimental results in the visible light region are presented. (U.S.)
Decoding Algorithms for Random Linear Network Codes
DEFF Research Database (Denmark)
Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank
2011-01-01
We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random and relatively sparse, and we use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially achieve a high coding throughput and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number of operations…
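The baseline Gauss-Jordan decoder can be sketched for the binary field: the receiver row-reduces the augmented matrix [coefficients | payloads] as random combinations arrive, and decoding completes when the coefficient part reaches full rank. Generation size and packet length are illustrative, and none of the paper's sparsity optimizations are included:

```python
import numpy as np

def gauss_jordan_gf2(coeffs, payloads):
    """Jointly row-reduce [coefficients | payloads] over GF(2); return the
    decoded source packets once the coefficient part reaches full rank."""
    A = np.hstack([coeffs, payloads]) % 2
    g = coeffs.shape[1]                      # generation size
    r = 0
    for c in range(g):
        pivot = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        r += 1
    return A[:g, g:] if r == g else None

rng = np.random.default_rng(3)
src = rng.integers(0, 2, size=(4, 16))       # 4 source packets of 16 bits
rows, pays, decoded = [], [], None
while decoded is None:                       # receive random combinations until decodable
    c = rng.integers(0, 2, size=4)
    rows.append(c)
    pays.append(c @ src % 2)
    decoded = gauss_jordan_gf2(np.array(rows), np.array(pays))
print(np.array_equal(decoded, src), len(rows))
```

Because each payload row is the same GF(2) combination of the sources as its coefficient row, reducing the coefficient part to the identity leaves exactly the source packets on the right.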
A Note on the Identifiability of Generalized Linear Mixed Models
DEFF Research Database (Denmark)
Labouriau, Rodrigo
2014-01-01
I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization.
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
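A minimal sketch of approach (c), a random-intercept linear mixed model with longitudinal measurements, using statsmodels. Relatedness is ignored here (the article models kinship through the random-effect covariance, which the default MixedLM does not do), and all effect sizes and sample sizes are simulated assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
m, t = 100, 4                                     # subjects, repeated measurements
subj = np.repeat(np.arange(m), t)
age = np.tile(np.arange(t), m).astype(float)      # visit index as a time covariate
snp = np.repeat(rng.integers(0, 3, size=m), t)    # genotype 0/1/2, constant per subject
u = np.repeat(rng.normal(0.0, 1.0, size=m), t)    # subject-level random intercepts
bp = 120 + 2.0 * snp + 0.5 * age + u + rng.normal(0.0, 1.0, size=m * t)

df = pd.DataFrame({"bp": bp, "snp": snp, "age": age, "subj": subj})
# Random intercept per subject; snp and age enter as fixed effects.
fit = smf.mixedlm("bp ~ snp + age", df, groups=df["subj"]).fit()
print(fit.params[["snp", "age"]])
```

The fitted `snp` coefficient recovers the simulated genetic effect while the random intercept absorbs the within-subject correlation across visits.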
Actuarial statistics with generalized linear mixed models
Antonio, K.; Beirlant, J.
2007-01-01
Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics
Linear mixed models a practical guide using statistical software
West, Brady T; Galecki, Andrzej T
2014-01-01
Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
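The paper's iteration-on-data and three-step multiplication details are beyond a short sketch, but the surrounding solver, preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner, can be written compactly; the test matrix below is a generic symmetric positive definite stand-in for mixed model equations:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the preconditioner: z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

rng = np.random.default_rng(5)
n = 200
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)     # symmetric positive definite "left-hand side"
b = rng.normal(size=n)
x, iters = pcg(A, b, 1.0 / np.diag(A))
print(iters, float(np.linalg.norm(A @ x - b)))
```

In breeding-value applications the point of iteration on data is that `A @ p` is computed by streaming over records instead of storing the mixed model equations, so only a few vectors of length n reside in memory.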
From linear to generalized linear mixed models: A case study in repeated measures
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
A property of assignment type mixed integer linear programming problems
Benders, J.F.; van Nunen, J.A.E.E.
1982-01-01
In this paper we prove that rather tight upper bounds can be given for the number of non-unique assignments that are achieved after solving the linear programming relaxation of some types of mixed integer linear assignment problems. Since in these cases the number of split assignments is…
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data analytic results from three regression…
Linear mixed models a practical guide using statistical software
West, Brady T; Galecki, Andrzej T
2006-01-01
Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo
Linear mixed-effects modeling approach to FMRI group analysis.
Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W
2013-06-01
Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. Intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. Simulations of one prototypical scenario indicate that LME modeling keeps a balance between the control of false positives and the sensitivity…
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Faraway, Julian J
2005-01-01
Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses…
A mixed integer linear program for an integrated fishery | Hasan ...
African Journals Online (AJOL)
... and labour allocation of quota based integrated fisheries. We demonstrate the workability of our model with a numerical example and sensitivity analysis based on data obtained from one of the major fisheries in New Zealand. Keywords: mixed integer linear program, fishing, trawler scheduling, processing, quotas ORiON: ...
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Random effect selection in generalised linear models
DEFF Research Database (Denmark)
Denwood, Matt; Houe, Hans; Forkman, Björn
We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest…
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap data are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides estimates similar to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and the modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning…
Application of laser speckle to randomized numerical linear algebra
Valley, George C.; Shaw, Thomas J.; Stapleton, Andrew D.; Scofield, Adam C.; Sefler, George A.; Johannson, Leif
2018-02-01
We propose and simulate integrated optical devices for accelerating numerical linear algebra (NLA) calculations. Data is modulated on chirped optical pulses and these propagate through a multimode waveguide where speckle provides the random projections needed for NLA dimensionality reduction.
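The role speckle plays here, supplying random projections for dimensionality reduction, can be sketched digitally: multiplying data by a random matrix approximately preserves pairwise distances (the Johnson-Lindenstrauss property), which is what many randomized NLA algorithms rely on. Dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)
n, d, k = 500, 1000, 200                   # samples, ambient dimension, sketch dimension
X = rng.normal(size=(n, d))
R = rng.normal(size=(d, k)) / np.sqrt(k)   # random projection, the digital analogue of speckle
Y = X @ R                                  # reduced-dimension sketch of the data

# Pairwise distances are approximately preserved by the projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(round(float(proj / orig), 2))
```

The optical device computes the analogue of `X @ R` in hardware, with the speckle pattern of the multimode waveguide supplying the random matrix.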
Janssen, Dirk P
2012-03-01
Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed models analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED: add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
Average subentropy, coherence and entanglement of random mixed quantum states
Energy Technology Data Exchange (ETDEWEB)
Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)
2017-02-15
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
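The induced-measure sampling used above has a compact numerical sketch: partial-trace a Haar-random bipartite pure state and compute the relative entropy of coherence C_r(rho) = S(rho_diag) - S(rho). Dimension and sample count are illustrative:

```python
import numpy as np

def random_mixed_state(d, d_env, rng):
    """Reduced state of a Haar-random pure state on C^d x C^{d_env} (induced measure)."""
    psi = rng.normal(size=(d, d_env)) + 1j * rng.normal(size=(d, d_env))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T          # partial trace over the environment

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

rng = np.random.default_rng(6)
d = 8
coh = []
for _ in range(200):
    rho = random_mixed_state(d, d, rng)
    p = np.real(np.diag(rho))                      # populations in the reference basis
    s_diag = float(-(p[p > 1e-12] * np.log2(p[p > 1e-12])).sum())
    coh.append(s_diag - von_neumann_entropy(rho))  # relative entropy of coherence
print(round(float(np.mean(coh)), 2), round(float(np.std(coh)), 3))
```

The small spread of the sampled coherences around their mean is a finite-size glimpse of the concentration-of-measure typicality the paper establishes.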
Linear mixing model applied to AVHRR LAC data
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region was used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
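The constrained least squares unmixing step can be sketched for a single pixel: given endmember spectra for vegetation, soil, and shade, solve for nonnegative fractions that reproduce the observed radiances and (approximately) sum to one. The endmember values below are hypothetical, and the sum-to-one constraint is enforced with a heavily weighted extra row, a common trick when only a nonnegative least squares solver is available.

```python
import numpy as np
from scipy.optimize import nnls

# Endmember spectra as columns: vegetation, soil, shade (hypothetical 3-band values)
E = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.30, 0.35, 0.02]])
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f                      # synthetic observed pixel

# Constrained least squares: nonnegative fractions via nnls, sum-to-one
# enforced softly by appending a heavily weighted row of ones
w = 100.0
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w * 1.0)
f, residual = nnls(A, b)
print(f)                                # recovers the true fractions
```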
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
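A fixed-knot regression spline inside a linear mixed model can be sketched in Python's statsmodels with a patsy B-spline basis (the authors' own implementation uses SAS or S-plus; the data and knot positions below are illustrative). The cubic basis with two interior knots gives five spline columns in the fixed effects, while each subject gets a random intercept.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_subj, n_obs = 30, 8
sid = np.repeat(np.arange(n_subj), n_obs)
t = np.tile(np.linspace(0, 1, n_obs), n_subj)
u = rng.normal(0, 0.5, n_subj)                          # subject random intercepts
y = np.sin(2 * np.pi * t) + u[sid] + rng.normal(0, 0.3, sid.size)
df = pd.DataFrame({"y": y, "t": t, "sid": sid})

# Cubic B-spline with fixed interior knots in the fixed effects,
# random intercept per subject
fit = smf.mixedlm("y ~ bs(t, knots=(0.33, 0.66), degree=3)", df, groups="sid").fit()
print(fit.params.filter(like="bs"))                     # the five spline coefficients
```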
Systematic analysis of the impact of mixing locality on Mixing-DAC linearity for multicarrier GSM
Bechthum, E.; Radulov, G.I.; Briaire, J.; Geelen, G.; Roermund, van A.H.M.
2012-01-01
In an RF transmitter, the function of the mixer and the DAC can be combined in a single block: the Mixing-DAC. For the generation of multicarrier GSM signals in a basestation, high dynamic linearity is required, i.e. SFDR > 85 dBc, at high output signal frequency, i.e. f_out ≈ 4 GHz. This represents a
Linear models for sound from supersonic reacting mixing layers
Chary, P. Shivakanth; Samanta, Arnab
2016-12-01
We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how these radiate to the far-field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized via biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to that in nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.
Mixed integer linear programming model for dynamic supplier selection problem considering discounts
Directory of Open Access Journals (Sweden)
Adi Wicaksono Purnawan
2018-01-01
Full Text Available Supplier selection is one of the most important elements in supply chain management. This function involves the evaluation of many factors such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity and others. Each of these factors varies with time; therefore, the supplier identified for one period is not necessarily the same one to supply the same product in the next period. So, mixed integer linear programming (MILP) was developed to overcome the dynamic supplier selection problem (DSSP). In this paper, a mixed integer linear programming model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, taking discounts into account. The MILP model is validated with randomly generated data and solved with Lingo 16.
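The core of such a supplier-selection MILP, continuous order quantities linked to binary "use this supplier" variables by capacity constraints, can be sketched with SciPy's MILP interface rather than Lingo. The numbers below are illustrative, and discounts and multiple periods are omitted for brevity.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Two candidate suppliers, one product, one period (illustrative data)
unit_cost = np.array([4.0, 3.0])      # per-unit price
fixed_cost = np.array([10.0, 25.0])   # setup/order cost if a supplier is used
capacity = np.array([60.0, 80.0])
demand = 100.0

# Decision vector: [q1, q2, y1, y2]; y_i = 1 if supplier i is used
c = np.concatenate([unit_cost, fixed_cost])
A = np.array([
    [1, 1, 0, 0],                     # q1 + q2 >= demand
    [1, 0, -capacity[0], 0],          # q1 <= cap1 * y1
    [0, 1, 0, -capacity[1]],          # q2 <= cap2 * y2
])
cons = LinearConstraint(A, lb=[demand, -np.inf, -np.inf], ub=[np.inf, 0, 0])
res = milp(c=c, constraints=cons,
           integrality=[0, 0, 1, 1],
           bounds=Bounds([0, 0, 0, 0], [np.inf, np.inf, 1, 1]))
print(res.x, res.fun)   # buy 80 from the cheap supplier, 20 from the other
```

Here the optimum fills the cheaper supplier's capacity first (80 units at 3.0) and tops up from the other (20 units at 4.0), paying both fixed costs: 240 + 80 + 25 + 10 = 355.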
Linear mixing model applied to coarse resolution satellite data
Holben, Brent N.; Shimabukuro, Yosio E.
1992-01-01
A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of the unmixing techniques when applied to coarse resolution data for global studies.
Random linear network coding for streams with unequally sized packets
DEFF Research Database (Denmark)
Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk
2016-01-01
State of the art Random Linear Network Coding (RLNC) schemes assume that data streams generate packets with equal sizes. This is an assumption that results in the highest efficiency gains for RLNC. A typical solution for managing unequal packet sizes is to zero-pad the smallest packets. However, ...
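The zero-padding scheme mentioned above can be sketched end to end over GF(2): pad unequal bit-packets to a common length, form random binary combinations, and decode by Gaussian elimination. This is an illustrative toy; practical RLNC typically works over larger fields such as GF(2^8) and with byte payloads.

```python
import numpy as np

def gf2_rank(M):
    # Rank of a binary matrix over GF(2)
    M = M.copy(); rank = 0
    for col in range(M.shape[1]):
        pivots = [r for r in range(rank, M.shape[0]) if M[r, col]]
        if not pivots:
            continue
        M[[rank, pivots[0]]] = M[[pivots[0], rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def gf2_solve(A, B):
    # Solve A X = B over GF(2); A must have full column rank
    A = A.copy(); B = B.copy()
    row = 0
    for col in range(A.shape[1]):
        piv = next(r for r in range(row, A.shape[0]) if A[r, col])
        A[[row, piv]] = A[[piv, row]]; B[[row, piv]] = B[[piv, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]; B[r] ^= B[row]
        row += 1
    return B[:A.shape[1]]

rng = np.random.default_rng(7)
packets = [[1, 0, 1, 1], [0, 1], [1, 1, 1]]        # bit-packets of unequal size
k = len(packets)
L = max(len(p) for p in packets)
padded = np.array([p + [0] * (L - len(p)) for p in packets], dtype=np.uint8)

# Draw random GF(2) coefficient vectors until the code matrix is decodable
coeffs = rng.integers(0, 2, size=(k + 2, k), dtype=np.uint8)
while gf2_rank(coeffs) < k:
    coeffs = rng.integers(0, 2, size=(k + 2, k), dtype=np.uint8)

coded = (coeffs @ padded % 2).astype(np.uint8)     # each coded packet XORs sources
decoded = gf2_solve(coeffs, coded)
print(decoded.tolist())                            # the zero-padded originals
```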
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
DEFF Research Database (Denmark)
Brooks, Mollie Elizabeth; Kristensen, Kasper; van Benthem, Koen J.
2017-01-01
Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmm...
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Learning oncogenetic networks by reducing to mixed integer linear programming.
Shahrabi Farahani, Hossein; Lagergren, Jens
2013-01-01
Cancer can be a result of accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred and the progression pathways is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks, that are tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning the Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a Non-deterministic Polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.
Reliability of Broadcast Communications Under Sparse Random Linear Network Coding
Brown, Suzie; Johnson, Oliver; Tassi, Andrea
2018-01-01
Ultra-reliable Point-to-Multipoint (PtM) communications are expected to become pivotal in networks offering future dependable services for smart cities. In this regard, sparse Random Linear Network Coding (RLNC) techniques have been widely employed to provide an efficient way to improve the reliability of broadcast and multicast data streams. This paper addresses the pressing concern of providing a tight approximation to the probability of a user recovering a data stream protected by this kin...
DEFF Research Database (Denmark)
Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik
2004-01-01
The standard software for non-linear mixed-effect analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package which combines the ordinary differential...... equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlme...
Linear minimax estimation for random vectors with parametric uncertainty
Bitar, E
2010-06-01
In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y, where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian whose covariance matrix Λ belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components, but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, the statistics of which are modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).
Robust linear registration of CT images using random regression forests
Konukoglu, Ender; Criminisi, Antonio; Pathak, Sayan; Robertson, Duncan; White, Steve; Haynor, David; Siddiqui, Khan
2011-03-01
Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies, cross-modality fusion, and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima, yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures, represented as axis-aligned bounding boxes. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state of the art Elastix toolbox. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%.
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
Mixed models, linear dependency, and identification in age-period-cohort models.
O'Brien, Robert M
2017-07-20
This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
Pseudo-random number generator based on mixing of three chaotic maps
François, M.; Grosges, T.; Barchiesi, D.; Erra, R.
2014-04-01
A secure pseudo-random number generator, three-mixer, is proposed. The principle of the method consists in mixing three chaotic maps produced from an input initial vector. The algorithm uses permutations whose positions are computed and indexed by a standard chaotic function and a linear congruence. The performance of the scheme is evaluated through statistical analysis, which shows that the cryptosystem exhibits the cryptographic qualities required for a high security level.
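The general idea, iterating several chaotic maps, mixing their discretized outputs, and permuting positions with a linear congruence, can be sketched as a toy (this is an illustration of the principle, not the authors' algorithm, and it is emphatically not cryptographically secure):

```python
import math

def logistic(x): return 4.0 * x * (1.0 - x)
def tent(x):     return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
def sine(x):     return abs(math.sin(math.pi * x))

def three_mixer(seed=(0.123, 0.456, 0.789), n=16):
    # Iterate three chaotic maps, discretize each state to a byte, and mix the
    # three streams with XOR; a full-period linear congruence (a=5, c=3 mod 16,
    # valid because a ≡ 1 mod 4 and c is odd) then permutes output positions.
    x, y, z = seed
    out = []
    for _ in range(n):
        x, y, z = logistic(x), tent(y), sine(z)
        out.append((int(x * 256) ^ int(y * 256) ^ int(z * 256)) & 0xFF)
    idx, perm = 0, []
    for _ in range(n):
        idx = (5 * idx + 3) % n
        perm.append(idx)
    return [out[i] for i in perm]

print(three_mixer())    # 16 deterministic pseudo-random bytes
```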
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of [Formula: see text] apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects; the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.
Padilla, Alberto
2009-01-01
Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...
Stability and complexity of small random linear systems
Hastings, Harold
2010-03-01
We explore the stability of small random linear systems, typically involving 10-20 variables, motivated by the dynamics of the world trade network and the US and Canadian power grids.
Random Linear Network Coding for 5G Mobile Video Delivery
Directory of Open Access Journals (Sweden)
Dejan Vukobratovic
2018-03-01
Full Text Available An exponential increase in mobile video delivery will continue with the demand for higher resolution, multi-view and large-scale multicast video services. The novel fifth generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both the 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing to promising avenues for future research.
DEFF Research Database (Denmark)
Köyluoglu, H.U.; Nielsen, Søren R.K.; Cakmak, A.S.
1994-01-01
The paper deals with the first and second order statistical moments of the response of linear systems with random parameters subject to random excitation modelled as white-noise multiplied by an envelope function with random parameters. The method of analysis is basically a second order perturbation method using stochastic differential equations. The joint statistical moments entering the perturbation solution are determined by considering an augmented dynamic system with state variables made up of the displacement and velocity vector and their first and second derivatives with respect to the random parameters of the problem. Equations for partial derivatives are obtained from the partial differentiation of the equations of motion. The zero time-lag joint statistical moment equations for the augmented state vector are derived from the Itô differential formula. General formulation is given......
Mixed-Integer Conic Linear Programming: Challenges and Perspectives
2013-10-01
The novel DCCs for MISOCO may be used in branch-and-cut algorithms when solving MISOCO problems. The experimental software CICLO was developed to perform limited but rigorous computational experiments. The CICLO solver utilizes the continuous SOCO solvers MOSEK, CPLEX, or SeDuMi, and builds on the open... submitted Fall 2013. Software: 1. CICLO: Integer conic linear optimization package. Authors: J.C. Góez, T.K. Ralphs, Y. Fu, and T. Terlaky
Delta-tilde interpretation of standard linear mixed model results
DEFF Research Database (Denmark)
Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra
2016-01-01
effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights, turning them into what can be seen as approximately the average pairwise...... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R package SensMixed. We discuss and clarify the bias mechanisms......
lmerTest Package: Tests in Linear Mixed Effects Models
DEFF Research Database (Denmark)
Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2017-01-01
One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions...... by providing p values for tests of fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I-III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using......
A Linear Mixed-Effects Model of Wireless Spectrum Occupancy
Directory of Open Access Journals (Sweden)
Pagadarai Srikanth
2010-01-01
Full Text Available We provide regression analysis-based statistical models to explain the usage of wireless spectrum across four mid-size US cities in four frequency bands. Specifically, the variations in spectrum occupancy across space, time, and frequency are investigated and compared between different sites within the city as well as with other cities. By applying the mixed-effects models, several conclusions are drawn that give the occupancy percentage and the ON time duration of the licensed signal transmission as a function of several predictor variables.
Generalized linear mixed models modern concepts, methods and applications
Stroup, Walter W
2012-01-01
PART I: The Big Picture. Modeling Basics. What Is a Model? Two Model Forms: Model Equation and Probability Distribution. Types of Model Effects. Writing Models in Matrix Form. Summary: Essential Elements for a Complete Statement of the Model. Design Matters. Introductory Ideas for Translating Design and Objectives into Models. Describing "Data Architecture" to Facilitate Model Specification. From Plot Plan to Linear Predictor. Distribution Matters. More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage. Goals for Inference with Models: Overview. Basic Tools of Inference. Issue I: Data
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
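The likelihood ratio test discussed above can be sketched in Python's statsmodels (the article itself works with lme4 in R): both the full and the reduced model must be fitted by maximum likelihood, not REML, before their log-likelihoods are compared against a chi-squared reference. The simulated data are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(3)
n_grp, per = 20, 10
g = np.repeat(np.arange(n_grp), per)
x = rng.normal(size=n_grp * per)
u = rng.normal(0, 1, n_grp)                     # group random intercepts
y = 1.0 + 0.8 * x + u[g] + rng.normal(0, 1, n_grp * per)
df = pd.DataFrame({"y": y, "x": x, "g": g})

# Likelihood ratio test for the fixed effect of x: refit both models with ML
full = smf.mixedlm("y ~ x", df, groups="g").fit(reml=False)
null = smf.mixedlm("y ~ 1", df, groups="g").fit(reml=False)
lr = 2 * (full.llf - null.llf)
p = stats.chi2.sf(lr, df=1)
print(lr, p)
```

As the article notes, this test can be anti-conservative in small samples, which is why the Kenward-Roger and Satterthwaite approximations are recommended there.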
Bamia, Christina; White, Ian R; Kenward, Michael G
2013-07-10
Linear mixed models are often used for the analysis of data from clinical trials with repeated quantitative outcomes. This paper considers linear mixed models where a particular form is assumed for the treatment effect, in particular constant over time or proportional to time. For simplicity, we assume no baseline covariates and complete post-baseline measures, and we model arbitrary mean responses for the control group at each time. For the variance-covariance matrix, we consider an unstructured model, a random intercepts model and a random intercepts and slopes model. We show that the treatment effect estimator can be expressed as a weighted average of the observed time-specific treatment effects, with weights depending on the covariance structure and the magnitude of the estimated variance components. For an assumed constant treatment effect, under the random intercepts model, all weights are equal, but in the random intercepts and slopes and the unstructured models, we show that some weights can be negative: thus, the estimated treatment effect can be negative, even if all time-specific treatment effects are positive. Our results suggest that particular models for the treatment effect combined with particular covariance structures may result in estimated treatment effects of unexpected magnitude and/or direction. Methods are illustrated using a Parkinson's disease trial. Copyright © 2012 John Wiley & Sons, Ltd.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
Directory of Open Access Journals (Sweden)
Hongchun Sun
2012-01-01
For the extended mixed linear complementarity problem (EMLCP), we first present a characterization of the solution set of the EMLCP. Based on this, its global error bound is also established under milder conditions. The results obtained in this paper can be taken as an extension of results for the classical linear complementarity problem.
Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel
2018-02-27
Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices, and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by most genome-wide association study (GWAS) software. To address the aforementioned limitations, we developed a new R package, lme4qtl, as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl.
DEFF Research Database (Denmark)
Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian
2014-01-01
In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation...... differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used...... as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables, increases operation...
Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel
2015-09-10
Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An Ordinary Linear Least Squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by the Akaike information criterion, the Bayesian information criterion and the −2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R²). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets
International Nuclear Information System (INIS)
Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo
2014-01-01
A simple linear mixing model of a heterogeneous soil-vegetation system, and the retrieval of component temperatures from directional remote sensing measurements by inverting this model, are evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as a pixel of a satellite image to evaluate the model with the two-component system, i.e. soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model and evaluation of the inversion of the model to retrieve component temperatures. For evaluation of the linear mixing model, the RMSE between the observed and modelled brightness temperatures is 0.2 K, which indicates that the linear mixing model works well under most conditions. For evaluation of the model inversion, the RMSE between the model-retrieved and the observed vegetation temperatures is 1.6 K; correspondingly, the RMSE between the observed and retrieved soil temperatures is 2.0 K. According to the evaluation of the sensitivity of retrieved component temperatures to fractional cover, the linear mixing model gives more accurate retrievals of both soil and vegetation temperatures under intermediate fractional cover conditions.
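Under the two-component assumption, the inversion step reduces to linear least squares in the two unknowns T_v and T_s, since each view angle contributes one equation T_b(θ) = f_v(θ)T_v + (1 − f_v(θ))T_s. A minimal sketch of that inversion (synthetic fractional covers and temperatures, not the thermal camera data):

```python
def retrieve_component_temps(frac_veg, t_brightness):
    """
    Least-squares inversion of the two-component linear mixing model
        T_b(theta) = f_v(theta) * T_v + (1 - f_v(theta)) * T_s
    for vegetation (T_v) and soil (T_s) temperatures from multi-angular
    observations, by solving the 2x2 normal equations directly.
    """
    a11 = sum(f * f for f in frac_veg)
    a12 = sum(f * (1 - f) for f in frac_veg)
    a22 = sum((1 - f) ** 2 for f in frac_veg)
    b1 = sum(f * t for f, t in zip(frac_veg, t_brightness))
    b2 = sum((1 - f) * t for f, t in zip(frac_veg, t_brightness))
    det = a11 * a22 - a12 * a12  # nonzero as long as f_v varies with angle
    t_veg = (a22 * b1 - a12 * b2) / det
    t_soil = (a11 * b2 - a12 * b1) / det
    return t_veg, t_soil

# Synthetic check: T_v = 295 K, T_s = 310 K, observed at three view angles
fv = [0.3, 0.5, 0.8]
tb = [f * 295.0 + (1 - f) * 310.0 for f in fv]
print(retrieve_component_temps(fv, tb))  # ~ (295.0, 310.0)
```

The determinant shrinking as the fractional covers cluster together is one way to see why retrieval accuracy degrades away from intermediate fractional cover.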
An R2 statistic for fixed effects in the linear mixed model.
Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver
2008-12-20
Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
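The one-to-one mapping between an F statistic and an R² statistic has the familiar fixed-model form R² = ν₁F/(ν₁F + ν₂) for numerator and denominator degrees of freedom ν₁ and ν₂. A small sketch of that mapping and its inverse (the degrees of freedom here are illustrative; the paper's mixed-model version pairs this with an appropriately chosen denominator df):

```python
def r2_from_f(f_stat, df_num, df_den):
    """Map an F statistic for the fixed effects to a model R^2 (one-to-one)."""
    return (df_num * f_stat) / (df_num * f_stat + df_den)

def f_from_r2(r2, df_num, df_den):
    """Inverse mapping: recover the F statistic from R^2."""
    return (r2 / (1.0 - r2)) * (df_den / df_num)

# Illustrative values: F = 4 on (3, 40) degrees of freedom
r2 = r2_from_f(4.0, 3, 40)
print(round(r2, 4))                    # 12 / 52 ~ 0.2308
print(round(f_from_r2(r2, 3, 40), 4)) # recovers 4.0
```

This also makes the abstract's BP example concrete: with a large denominator df, even a highly significant F can map to a very small R².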
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
N. Mielenz
2015-01-01
Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM). In GLMM repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming normal distribution and logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars in mixed-sex housing, we demonstrate the use of the Poisson »log-gamma intercept«, the Poisson »normal intercept« and the »normal intercept« model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Emanating from the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.
DEFF Research Database (Denmark)
Ritz, Christian; Laursen, Rikke Pilmann; Damsgaard, Camilla Trab
2017-01-01
of a school meal programme. We propose a novel and versatile framework for simultaneous inference on parameters estimated from linear mixed models that were fitted separately for several outcomes from the same study, but did not necessarily contain the same fixed or random effects. By combining asymptotic...... sizes of practical relevance we studied simultaneous coverage through simulation, which showed that the approach achieved acceptable coverage probabilities even for small sample sizes (10 clusters) and for 2–16 outcomes. The approach also compared favourably with a joint modelling approach. We also...
Spatial generalized linear mixed models of electric power outages due to hurricanes and ice storms
International Nuclear Information System (INIS)
Liu Haibin; Davidson, Rachel A.; Apanasovich, Tatiyana V.
2008-01-01
This paper presents new statistical models that predict the number of hurricane- and ice storm-related electric power outages likely to occur in each 3 km × 3 km grid cell in a region. The models are based on a large database of recent outages experienced by three major East Coast power companies in six hurricanes and eight ice storms. A spatial generalized linear mixed modeling (GLMM) approach was used in which spatial correlation is incorporated through random effects. Models were fitted using a composite likelihood approach and the covariance matrix was estimated empirically. A simulation study was conducted to test the model estimation procedure, and model training, validation, and testing were done to select the best models and assess their predictive power. The final hurricane model includes number of protective devices, maximum gust wind speed, hurricane indicator, and company indicator covariates. The final ice storm model includes number of protective devices, ice thickness, and ice storm indicator covariates. The models should be useful for power companies as they plan for future storms. The statistical modeling approach offers a new way to assess the reliability of electric power and other infrastructure systems in extreme events.
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
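The interval (box) uncertainty set is the simplest case surveyed above: a constraint a·x ≤ b with each coefficient aᵢ lying in [āᵢ − δᵢ, āᵢ + δᵢ] has the robust counterpart Σ āᵢxᵢ + Σ δᵢ|xᵢ| ≤ b, because the worst-case realization pushes each coefficient against the sign of xᵢ. A minimal sketch with made-up coefficients:

```python
def robust_lhs_interval(a_nominal, delta, x):
    """
    Worst-case left-hand side of  a . x <= b  under box uncertainty,
    a_i in [a_i - delta_i, a_i + delta_i]:
        sum a_i x_i + sum delta_i |x_i|
    """
    nominal = sum(a * xi for a, xi in zip(a_nominal, x))
    protection = sum(d * abs(xi) for d, xi in zip(delta, x))
    return nominal + protection

# x = (2, 3) is nominally feasible for 1*x1 + 1*x2 <= 6 ...
a, d, b = [1.0, 1.0], [0.3, 0.3], 6.0
x = [2.0, 3.0]
print(sum(ai * xi for ai, xi in zip(a, x)) <= b)  # True
# ... but infeasible once the interval uncertainty is protected against:
print(robust_lhs_interval(a, d, x) <= b)          # False
```

The other sets discussed in the paper (ellipsoidal, polyhedral, and their combinations) replace the Σ δᵢ|xᵢ| protection term with less conservative alternatives.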
Diffusion in the kicked quantum rotator by random corrections to a linear and sine field
International Nuclear Information System (INIS)
Hilke, M.; Flores, J.C.
1992-01-01
We discuss the diffusion in momentum space, of the kicked quantum rotator, by introducing random corrections to a linear and sine external field. For the linear field we obtain a linear diffusion behavior identical to the case with zero average in the external field. But for the sine field, accelerator modes with quadratic diffusion are found for particular values of the kicking period. (orig.)
A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation
Rajeswaran, Jeevanantham; Blackstone, Eugene H.
2014-01-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time varying coefficients. PMID:24919830
Speed-sensorless mixed-sensitivity linear parameter-variant H∞ control of the induction motor
Toth, R.; Fodor, D.
2004-01-01
The paper shows the design of a robust control structure for the speed-sensorless vector control of the IM, based on mixed-sensitivity (MS) linear parameter-variant (LPV) H∞ control theory. The controller makes possible the direct control of the flux and speed of the motor with torque adaptation.
Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets
Yang, J.; Jia, L.; Cui, Y.; Zhou, J.; Menenti, M.
2014-01-01
A simple linear mixing model of heterogeneous soil-vegetation system and retrieval of component temperatures from directional remote sensing measurements by inverting this model is evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR
Lemmen-Gerdessen, van J.C.; Souverein, O.W.; Veer, van 't P.; Vries, de J.H.M.
2015-01-01
Objective To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Design Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear
Bayesian prediction of spatial count data using generalized linear mixed models
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge
2002-01-01
Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
DEFF Research Database (Denmark)
Hernández, Adriana Carolina Luna; Aldana, Nelson Leonardo Diaz; Graells, Moises
2017-01-01
-side strategy, defined as a general mixed-integer linear programming by taking into account two stages for proper charging of the storage units. This model is considered as a deterministic problem that aims to minimize operating costs and promote self-consumption based on 24-hour ahead forecast data...
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
A novel mixed-synchronization phenomenon in coupled Chua's circuits via non-fragile linear control
International Nuclear Information System (INIS)
Wang Jun-Wei; Ma Qing-Hua; Zeng Li
2011-01-01
Dynamical variables of coupled nonlinear oscillators can exhibit different synchronization patterns depending on the designed coupling scheme. In this paper, a non-fragile linear feedback control strategy with multiplicative controller gain uncertainties is proposed for realizing the mixed-synchronization of Chua's circuits connected in a drive-response configuration. In particular, in the mixed-synchronization regime, different state variables of the response system can evolve into complete synchronization, anti-synchronization and even amplitude death simultaneously with the drive variables for an appropriate choice of scaling matrix. Using Lyapunov stability theory, we derive some sufficient criteria for achieving global mixed-synchronization. It is shown that the desired non-fragile state feedback controller can be constructed by solving a set of linear matrix inequalities (LMIs). Numerical simulations are also provided to demonstrate the effectiveness of the proposed control approach. (general)
Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach
Thomas, C.; Lark, R. M.
2013-12-01
Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. In the second
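The model structure described, a periodic fixed-effect mean plus a temporally correlated random component and an iid residual, can be sketched as a forward simulation (the parameter values here are invented for illustration, not fitted to the Waverider III data):

```python
import math
import random

random.seed(7)

def simulate_hs(n_months, mean=2.0, amplitude=0.8, phi=0.6,
                sd_ar=0.3, sd_iid=0.2):
    """
    Synthetic monthly significant wave height:
      periodic fixed-effect mean (12-month cycle)
      + AR(1) random component (correlation decays exponentially with lag)
      + iid Gaussian residual.
    """
    series, ar = [], 0.0
    for t in range(n_months):
        seasonal = mean + amplitude * math.cos(2.0 * math.pi * t / 12.0)
        ar = phi * ar + random.gauss(0.0, sd_ar)   # temporally correlated term
        series.append(seasonal + ar + random.gauss(0.0, sd_iid))
    return series

hs = simulate_hs(240)  # 20 years of monthly values
```

Drifting the fixed-effect parameters (mean, amplitude) through time would give the kind of trajectory from a current to a changed wave climate that the abstract motivates; a Box-Cox back-transform would be applied if the model were fitted on transformed data.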
Linear and Weakly Nonlinear Instability of Shallow Mixing Layers with Variable Friction
Directory of Open Access Journals (Sweden)
Irina Eglite
2018-01-01
Linear and weakly nonlinear instability of shallow mixing layers is analysed in the present paper. It is assumed that the resistance force varies in the transverse direction. Linear stability problem is solved numerically using collocation method. It is shown that the increase in the ratio of the friction coefficients in the main channel to that in the floodplain has a stabilizing influence on the flow. The amplitude evolution equation for the most unstable mode (the complex Ginzburg–Landau equation is derived from the shallow water equations under the rigid-lid assumption. Results of numerical calculations are presented.
Linear dose dependence of ion beam mixing of metals on Si
International Nuclear Information System (INIS)
Poker, D.B.; Appleton, B.R.
1985-01-01
These experiments were conducted to determine the dose dependences of ion beam mixing of various metal-silicon couples. V/Si and Cr/Si were included because these couples were previously suspected of exhibiting a linear dose dependence. Pd/Si was chosen because it had been reported as exhibiting only the square root dependence. Samples were cut from wafers of (100) n-type Si. The samples were cleaned in organic solvents, etched in hydrofluoric acid, and rinsed with methanol before mounting in an oil-free vacuum system for thin-film deposition. Films of Au, V, Cr, or Pd were evaporated onto the Si samples with a nominal deposition rate of 10 Å/s. The thicknesses were large compared with those usually used to measure ion beam mixing and were used to ensure that conditions of unlimited supply were met. Samples were mixed with Si ions ranging in energy from 300 to 375 keV, chosen to produce ion ranges that significantly exceeded the metal film depth. Si was used as the mixing ion to prevent impurity doping of the Si substrate and to exclude a background signal from the Rutherford backscattering (RBS) spectra. Samples were mixed at room temperature, with the exception of the Au/Si samples, which were mixed at liquid nitrogen temperature. The samples were alternately mixed and analyzed in situ without exposure to atmosphere between mixing doses. The compositional distributions after mixing were measured using RBS of 2.5-MeV ⁴He atoms
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes
Mixed problems for linear symmetric hyperbolic systems with characteristic boundary conditions
International Nuclear Information System (INIS)
Secchi, P.
1994-01-01
We consider the initial-boundary value problem for symmetric hyperbolic systems with characteristic boundary of constant multiplicity. In the linear case we give some results about the existence of regular solutions in suitable function spaces which take into account the loss of regularity in the normal direction to the characteristic boundary. We also consider the equations of ideal magneto-hydrodynamics under perfectly conducting wall boundary conditions and give some results about the solvability of such mixed problems. (author). 16 refs
Linear mixed-effects models for central statistical monitoring of multicenter clinical trials
Desmet, L.; Venet, D.; Doffagne, E.; Timmermans, C.; BURZYKOWSKI, Tomasz; LEGRAND, Catherine; BUYSE, Marc
2014-01-01
Multicenter studies are widely used to meet accrual targets in clinical trials. Clinical data monitoring is required to ensure the quality and validity of the data gathered across centers. One approach to this end is central statistical monitoring, which aims at detecting atypical patterns in the data by means of statistical methods. In this context, we consider the simple case of a continuous variable, and we propose a detection procedure based on a linear mixed-effects model to detect locat...
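A minimal numpy sketch of the central-monitoring idea (illustrative only; the paper's actual procedure is a linear mixed-effects model, not this simple z-score screen): simulate per-center measurements of a continuous variable, then flag centers whose means deviate atypically from the between-center distribution.

```python
import numpy as np

rng = np.random.default_rng(5)
n_centers, n_pat = 20, 30

# Simulated continuous endpoint: common mean, small center effects, patient noise
center_eff = rng.normal(0, 0.5, n_centers)
center_eff[7] += 5.0  # one atypical center (hypothetical, e.g. a miscalibrated device)
y = 120 + center_eff[:, None] + rng.normal(0, 5, (n_centers, n_pat))

# Crude stand-in for the mixed-model check: standardize each center mean
# against the between-center distribution and flag large deviations
means = y.mean(axis=1)
z = (means - means.mean()) / means.std(ddof=1)
flagged = np.where(np.abs(z) > 2.5)[0]
print(flagged)
```

A mixed-effects formulation would additionally borrow strength across centers and account for unequal center sizes; this screen only conveys the detection principle.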
A Mixed Integer Linear Programming Model for the North Atlantic Aircraft Trajectory Planning
Sbihi, Mohammed; Rodionova, Olga; Delahaye, Daniel; Mongeau, Marcel
2015-01-01
This paper discusses the trajectory planning problem for flights in the North Atlantic oceanic airspace (NAT). We develop a mathematical optimization framework in view of better utilizing available capacity by re-routing aircraft. The model is constructed by discretizing the problem parameters. A mixed integer linear program (MILP) is proposed. Based on the MILP, a heuristic to solve real-size instances is also introduced.
DEFF Research Database (Denmark)
Escudero, Laureano F.; Monge, Juan Francisco; Morales, Dolores Romero
2015-01-01
In this paper we consider multiperiod mixed 0–1 linear programming models under uncertainty. We propose a risk averse strategy using stochastic dominance constraints (SDC) induced by mixed-integer linear recourse as the risk measure. The SDC strategy extends the existing literature to the multist...
Terrano, Daniel; Tsuper, Ilona; Maraschky, Adam; Holland, Nolan; Streletzky, Kiril
Temperature sensitive nanoparticles were generated from a construct (H20F) of three chains of elastin-like polypeptides (ELP) linked to a negatively charged foldon domain. This ELP system was mixed at different ratios with linear chains of ELP (H40L), which lack the foldon domain. The mixed system is soluble at room temperature and at a transition temperature (Tt) will form swollen micelles with the hydrophobic linear chains hidden inside. This system was studied using depolarized dynamic light scattering (DDLS) and static light scattering (SLS) to determine the size, shape, and internal structure of the mixed micelles. The mixed micelles with equal parts of H20F and H40L show a constant apparent hydrodynamic radius of 40-45 nm over the concentration window from 25:25 to 60:60 μM (1:1 ratio). At a fixed 50 μM concentration of the H20F, varying the H40L concentration from 5 to 80 μM resulted in a linear growth in the hydrodynamic radius from about 11 to about 62 nm, along with a 1000-fold increase in the VH signal. A possible simple model explaining the growth of the swollen micelles is considered. Lastly, the VH signal can indicate elongation of the particle geometry or could result from anisotropic properties of the micelle core. SLS was used to study the molecular weight and the radius of gyration of the micelles to help identify the structure and morphology of mixed micelles and the tangible cause of the VH signal.
Mixed H∞ and passive control for linear switched systems via hybrid control approach
Zheng, Qunxian; Ling, Youzhu; Wei, Lisheng; Zhang, Hongbin
2018-03-01
This paper investigates the mixed H∞ and passive control problem for linear switched systems based on a hybrid control strategy. To solve this problem, first, a new performance index is proposed. This performance index can be viewed as the mixed weighted H∞ and passivity performance. Then, the hybrid controllers are used to stabilise the switched systems. The hybrid controllers consist of dynamic output-feedback controllers for every subsystem and state updating controllers at the switching instant. The design of state updating controllers not only depends on the pre-switching subsystem and the post-switching subsystem, but also depends on the measurable output signal. The hybrid controllers proposed in this paper can include some existing ones as special cases. Combining the multiple Lyapunov functions approach with the average dwell time technique, new sufficient conditions are obtained. Under the new conditions, the closed-loop linear switched systems are globally uniformly asymptotically stable with a mixed H∞ and passivity performance index. Moreover, the desired hybrid controllers can be constructed by solving a set of linear matrix inequalities. Finally, a numerical example and a practical example are given.
Morphology and linear-elastic moduli of random network solids.
Nachtrab, Susan; Kapfer, Sebastian C; Arns, Christoph H; Madadi, Mahyar; Mecke, Klaus; Schröder-Turk, Gerd E
2011-06-17
The effective linear-elastic moduli of disordered network solids are analyzed by voxel-based finite element calculations. We analyze network solids given by Poisson-Voronoi processes and by the structure of collagen fiber networks imaged by confocal microscopy. The solid volume fraction ϕ is varied by adjusting the fiber radius, while keeping the structural mesh or pore size of the underlying network fixed. For intermediate ϕ, the bulk and shear moduli are approximated by empirical power laws K(ϕ) ∝ ϕ^n and G(ϕ) ∝ ϕ^m with n≈1.4 and m≈1.7. The exponents for the collagen and the Poisson-Voronoi network solids are similar, and are close to the values n=1.22 and m=2.11 found in a previous voxel-based finite element study of Poisson-Voronoi systems with different boundary conditions. However, the exponents of these empirical power laws are at odds with the analytic values of n=1 and m=2, valid for low-density cellular structures in the limit of thin beams. We propose a functional form for K(ϕ) that models the cross-over from a power law at low densities to a porous solid at high densities; a fit of the data to this functional form yields the asymptotic exponent n≈1.00, as expected. Further, both the intensity of the Poisson-Voronoi process and the collagen concentration in the samples, both of which alter the typical pore or mesh size, affect the effective moduli only by the resulting change of the solid volume fraction. These findings suggest that a network solid with the structure of the collagen networks can be modeled in quantitative agreement by a Poisson-Voronoi process. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
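Empirical exponents like those above are typically obtained from log-log regression. A small numpy sketch with synthetic data (the prefactor 2.5 and the sample points are arbitrary assumptions):

```python
import numpy as np

# Hypothetical bulk-modulus data following K ∝ phi^n with n = 1.4,
# mimicking the empirical power law reported for intermediate phi
phi = np.linspace(0.05, 0.3, 12)
K = 2.5 * phi**1.4

# The exponent is the slope of log K versus log phi
n_est, log_prefactor = np.polyfit(np.log(phi), np.log(K), 1)
print(round(n_est, 2))  # recovers the exponent, here 1.4
```

With noisy measured moduli the same fit gives the least-squares exponent estimate rather than an exact recovery.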
Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations
Directory of Open Access Journals (Sweden)
Daniel T. L. Shek
2011-01-01
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
Shek, Daniel T L; Ma, Cecilia M S
2011-01-05
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
DEFF Research Database (Denmark)
Micaletti, R. C.; Cakmak, A. S.; Nielsen, Søren R. K.
A method for computing the lower-order moments of randomly-excited multi-degree-of-freedom (MDOF) systems with random structural properties is proposed. The method is grounded in the techniques of stochastic calculus, utilizing a Markov diffusion process to model the structural system with random structural properties. The resulting state-space formulation is a system of ordinary stochastic differential equations with random coefficient and deterministic initial conditions which are subsequently transformed into ordinary stochastic differential equations with deterministic coefficients and random initial conditions. This transformation facilitates the derivation of differential equations which govern the evolution of the unconditional statistical moments of response. Primary consideration is given to linear systems and systems with odd polynomial nonlinearities, for in these cases...
International Nuclear Information System (INIS)
Balasubramaniam, P.; Kalpana, M.; Rakkiyappan, R.
2012-01-01
Fuzzy cellular neural networks (FCNNs) are special kinds of cellular neural networks (CNNs). Each cell in an FCNN contains fuzzy operating abilities. The entire network is governed by cellular computing laws. The design of FCNNs is based on fuzzy local rules. In this paper, a linear matrix inequality (LMI) approach for synchronization control of FCNNs with mixed delays is investigated. Mixed delays include discrete time-varying delays and unbounded distributed delays. A dynamic control scheme is proposed to achieve the synchronization between a drive network and a response network. By constructing a Lyapunov-Krasovskii functional which contains a triple-integral term and applying the free-weighting matrices method, an improved delay-dependent stability criterion is derived in terms of LMIs. The controller can be easily obtained by solving the derived LMIs. A numerical example and its simulations are presented to illustrate the effectiveness of the proposed method. (interdisciplinary physics and related areas of science and technology)
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by asymmetric distributions to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.
Mixed Integer Linear Programming model for Crude Palm Oil Supply Chain Planning
Sembiring, Pasukat; Mawengkang, Herman; Sadyadharma, Hendaru; Bu'ulolo, F.; Fajriana
2018-01-01
The production process of crude palm oil (CPO) can be defined as the milling of raw materials, called fresh fruit bunches (FFB), into the end product palm oil. The process usually goes through a series of steps producing and consuming intermediate products. The CPO milling industry considered in this paper does not have an oil palm plantation; therefore the FFB are supplied by several public oil palm plantations. Due to the limited availability of FFB, it is necessary to choose which plantations would be appropriate. This paper proposes a mixed integer linear programming model for the integrated supply chain problem, which includes waste processing. The mathematical programming model is solved using a neighborhood search approach.
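For a problem this small, the plantation-selection part can be illustrated by brute-force enumeration over subsets (stdlib only). The data below are entirely hypothetical, and the cheapest-first purchasing rule within a chosen subset is an assumption for illustration, not the paper's model:

```python
from itertools import combinations

# Hypothetical plantations: (name, available FFB in tons, cost per ton incl. transport)
plantations = [("P1", 40, 55.0), ("P2", 60, 60.0), ("P3", 30, 50.0), ("P4", 50, 58.0)]
demand = 100  # tons of FFB required by the mill

best = None
for k in range(1, len(plantations) + 1):
    for subset in combinations(plantations, k):
        supply = sum(p[1] for p in subset)
        if supply >= demand:
            # assume FFB is bought cheapest-first within the chosen subset
            need, cost = demand, 0.0
            for name, qty, price in sorted(subset, key=lambda p: p[2]):
                take = min(qty, need)
                cost += take * price
                need -= take
            if best is None or cost < best[0]:
                best = (cost, tuple(p[0] for p in subset))

print(best)
```

A real MILP formulation would instead introduce binary selection variables and continuous purchase quantities, which scales far beyond the 2^n enumeration used here.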
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
An overview of solution methods for multi-objective mixed integer linear programming programs
DEFF Research Database (Denmark)
Andersen, Kim Allan; Stidsen, Thomas Riis
Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. finding the complete set of non-dominated solutions. We will give an overview of existing methods. Among those are interactive methods, the two phases method and enumeration...... methods. In particular we will discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...
Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
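A toy numpy sketch of the random-intercept growth model underlying such longitudinal analyses (simulated data; a method-of-moments recovery of the variance components for a balanced design, not SPSS's REML procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_wave = 150, 6          # subjects each measured at 6 waves
sigma_b, sigma_e = 2.0, 1.0      # assumed random-intercept and residual SDs
time = np.arange(n_wave)

# Growth model: y_ij = 10 + 0.5*time_j + b_i + e_ij
b = rng.normal(0, sigma_b, (n_subj, 1))
y = 10 + 0.5 * time + b + rng.normal(0, sigma_e, (n_subj, n_wave))

# Remove the fixed time trend, then estimate variance components by moments:
# within-subject variance -> sigma_e^2; between-subject variance of the
# subject means -> sigma_b^2 + sigma_e^2 / n_wave (balanced design)
resid = y - y.mean(axis=0)
s2_within = resid.var(axis=1, ddof=1).mean()  # approx sigma_e^2 = 1.0
s2_between = resid.mean(axis=1).var(ddof=1)
sigma_b2_hat = s2_between - s2_within / n_wave  # approx sigma_b^2 = 4.0
print(round(s2_within, 1), round(sigma_b2_hat, 1))
```

The same model would be fit in SPSS's MIXED procedure with a random intercept for subject and a fixed effect of time.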
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation w.r.t. the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to adaptive Gaussian quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
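The Monte Carlo idea can be illustrated in miniature for a model with a closed-form FIM (a one-parameter normal model rather than an NLMEM; the parameter values and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo evaluation of the expected Fisher information for the mean mu
# of a single N(mu, sigma^2) observation: FIM = E[(d logL / d mu)^2] = 1/sigma^2
mu, sigma, n_mc = 3.0, 2.0, 200_000
y = rng.normal(mu, sigma, n_mc)
score = (y - mu) / sigma**2      # d/dmu of log N(y; mu, sigma^2)
fim_mc = np.mean(score**2)       # MC expectation of the squared score
print(round(fim_mc, 2))          # ≈ 1 / sigma^2 = 0.25
```

In the NLMEM setting the score itself has no closed form, which is where the inner MCMC integration over the random effects comes in.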
International Nuclear Information System (INIS)
Hayes, Scott T
2005-01-01
A method is developed for producing deterministic chaotic motion from the linear superposition of a bi-infinite sequence of randomly polarized basis functions. The resultant waveform is also formally a random process in the usual sense. In the example given, a three-dimensional embedding produces an idealized version of Lorenz motion. The one-dimensional approximate return map is piecewise linear; a tent or shift, depending on the Poincaré section. The results are presented in an informal style so that they are accessible to a wide audience interested in both theory and applications of symbolic dynamics communication.
Throughput vs. Delay in Lossy Wireless Mesh Networks with Random Linear Network Coding
Hundebøll, Martin; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank
2014-01-01
This work proposes a new protocol applying on-the-fly random linear network coding in wireless mesh networks. The protocol provides increased reliability, low delay, and high throughput to the upper layers, while being oblivious to their specific requirements. These seemingly conflicting goals are achieved by design, using an on-the-fly network coding strategy. Our protocol also exploits relay nodes to increase the overall performance of individual links. Since our protocol naturally masks random p...
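The core mechanism, random linear combinations over a finite field that the receiver inverts, can be sketched over GF(2) with numpy (a toy encoder/decoder illustrating the coding step, not the protocol itself):

```python
import numpy as np

rng = np.random.default_rng(3)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M, rank = M.copy() % 2, 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def gf2_solve(A, B):
    """Solve A X = B over GF(2) (A invertible) by Gauss-Jordan elimination."""
    A, B, n = A.copy() % 2, B.copy() % 2, A.shape[0]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])
        A[[col, pivot]], B[[col, pivot]] = A[[pivot, col]], B[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                B[r] ^= B[col]
    return B

# Three source packets of four bits each
packets = np.array([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 1]], dtype=np.uint8)
n = len(packets)

# Sender emits random GF(2) combinations; receiver keeps an invertible set
while True:
    C = rng.integers(0, 2, (n, n), dtype=np.uint8)  # random coding coefficients
    if gf2_rank(C) == n:
        break
coded = (C @ packets) % 2      # the coded packets sent over the mesh
decoded = gf2_solve(C, coded)  # receiver inverts the coefficient matrix

print(np.array_equal(decoded, packets))
```

Practical implementations usually work over GF(2^8) and decode incrementally as coded packets arrive, but the rank/inversion logic is the same.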
Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.
2012-05-01
The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main tasks in this paper, i.e. the parameter estimation procedure, simulation, and implementation of the model for real data. In the parameter estimation procedure, concepts of threshold, nested random effects, and the computational algorithm are described. The simulation data are built for 3 conditions to study the effect of different parameter values of the random effect distributions. The last task is the implementation of the model for data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan) nested in district, and districts (kabupaten) are nested in province. For the simulation results, ARB (absolute relative bias) and RRMSE (relative root mean square error) are used. They show that the province parameters have the highest bias, but more stable RRMSE in all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, as a result of the model implementation for the data, only the number of farmer families and the number of health personnel have significant contributions to the level of poverty in the Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
International Nuclear Information System (INIS)
Kim, Jin Kyu; Kim, Dong Keon
2016-01-01
A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action. Thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systematically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to a time step for the underlying conservative system. For the forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.
Stability Criterion of Linear Stochastic Systems Subject to Mixed H2/Passivity Performance
Directory of Open Access Journals (Sweden)
Cheung-Chieh Ku
2015-01-01
The H2 control scheme and passivity theory are applied to investigate the stability criterion of a continuous-time linear stochastic system subject to mixed performance. Based on the stochastic differential equation, the stochastic behaviors can be described as multiplicative noise terms. For the considered system, the H2 control scheme is applied to deal with the problem of minimizing output energy, and the asymptotic stability of the system can be guaranteed under desired initial conditions. Besides, the passivity theory is employed to constrain the effect of external disturbance on the system. Moreover, the Itô formula and a Lyapunov function are used to derive the sufficient conditions, which are converted into linear matrix inequality (LMI) form for applying a convex optimization algorithm. By solving the sufficient conditions, a state feedback controller can be established such that the asymptotic stability and mixed performance of the system are achieved in the mean square. Finally, the synchronous generator system is used to verify the effectiveness and applicability of the proposed design method.
Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models
Directory of Open Access Journals (Sweden)
A. Cabezas
2010-08-01
Sediment, total organic carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15 y return interval) were examined at one reach of the Middle Ebro River (NE Spain) to elucidate spatial patterns. To achieve this goal, four areas with different geomorphological features and located within the study reach were examined by using artificial grass mats. Within each area, 1 m² study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. TOC, TN and particle-size composition of deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, site-specific effects and spatial self-correlation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although other factors such as vegetation pattern should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework for dealing with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with asymmetric distributions for model errors. To deal with missingness, we employ an informative missing data model. Joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
Canepa, Edward S.
2013-01-01
Traffic sensing systems rely more and more on user generated (insecure) data, which can pose a security risk whenever the data is used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first order scalar conservation law with concave flux function. Given a set of traffic flow data, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for some decision variable. We use this fact to pose the problem of detecting spoofing cyber-attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber-attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © 2013 IEEE.
Canepa, Edward S.
2013-09-01
Traffic sensing systems rely more and more on user generated (insecure) data, which can pose a security risk whenever the data is used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first order scalar conservation law with concave flux function. Given a set of traffic flow data generated by multiple sensors of different types, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for a specific decision variable. We use this fact to pose the problem of detecting spoofing cyber attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © American Institute of Mathematical Sciences.
Optimal placement of capacitors in a radial network using conic and mixed integer linear programming
Energy Technology Data Exchange (ETDEWEB)
Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box: 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)
2008-06-15
This paper considers the problem of optimally placing fixed and switched type capacitors in a radial distribution network. The aim of this problem is to minimize the costs associated with capacitor banks, peak power, and energy losses whilst satisfying a pre-specified set of physical and technical constraints. The proposed solution is obtained using a two-phase approach. In phase-I, the problem is formulated as a conic program in which all nodes are candidates for placement of capacitor banks whose sizes are considered as continuous variables. A global solution of the phase-I problem is obtained using an interior-point based conic programming solver. Phase-II seeks a practical optimal solution by considering capacitor sizes as discrete variables. The problem in this phase is formulated as a mixed integer linear program based on minimizing the L1-norm of deviations from the phase-I state variable values. The solution to the phase-II problem is obtained using a mixed integer linear programming solver. The proposed method is validated via extensive comparisons with previously published results. (author)
Energy Technology Data Exchange (ETDEWEB)
Kim, Jin Kyu [School of Architecture and Architectural Engineering, Hanyang University, Ansan (Korea, Republic of); Kim, Dong Keon [Dept. of Architectural Engineering, Dong A University, Busan (Korea, Republic of)
2016-09-15
A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action. Thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systematically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to a time step for the underlying conservative system. For the forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.
Model and measurements of linear mixing in thermal IR ground leaving radiance spectra
Balick, Lee; Clodius, William; Jeffery, Christopher; Theiler, James; McCabe, Matthew; Gillespie, Alan; Mushkin, Amit; Danilina, Iryna
2007-10-01
Hyperspectral thermal IR remote sensing is an effective tool for the detection and identification of gas plumes and solid materials. Virtually all remotely sensed thermal IR pixels are mixtures of different materials and temperatures. As sensors improve and hyperspectral thermal IR remote sensing becomes more quantitative, the concept of homogeneous pixels becomes inadequate. The contributions of the constituents to the pixel spectral ground leaving radiance are weighted by their spectral emissivities and their temperature, or more correctly, temperature distributions, because real pixels are rarely thermally homogeneous. Planck's Law defines a relationship between temperature and radiance that is strongly wavelength dependent, even for blackbodies. Spectral ground leaving radiance (GLR) from mixed pixels is temperature and wavelength dependent and the relationship between observed radiance spectra from mixed pixels and library emissivity spectra of mixtures of 'pure' materials is indirect. A simple model of linear mixing of subpixel radiance as a function of material type, the temperature distribution of each material and the abundance of the material within a pixel is presented. The model indicates that, qualitatively and given normal environmental temperature variability, spectral features remain observable in mixtures as long as the material occupies more than roughly 10% of the pixel. Field measurements of known targets made on the ground and by an airborne sensor are presented here and serve as a reality check on the model. Target spectral GLR from mixtures as a function of temperature distribution and abundance within the pixel at day and night are presented and compare well qualitatively with model output.
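The wavelength-dependent nonlinearity that makes mixed-pixel radiance differ from the radiance of a pixel at the mean temperature follows directly from Planck's law. A minimal sketch, in which the 10 μm band, temperatures, emissivity, and abundances are illustrative choices rather than values from the paper:

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance (W * m^-2 * sr^-1 * m^-1) from Planck's law."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / math.expm1(H * C / (wavelength_m * KB * temp_k))

def mixed_pixel_radiance(wavelength_m, components):
    """Linear mix of subpixel ground-leaving radiances.
    components: list of (abundance, emissivity, temperature_K) tuples."""
    return sum(f * eps * planck_radiance(wavelength_m, t)
               for f, eps, t in components)

wl = 10e-6  # 10 micrometres, in the thermal IR window
# 70% of the pixel at 300 K, 30% at 320 K, equal emissivity 0.95
mixed = mixed_pixel_radiance(wl, [(0.7, 0.95, 300.0), (0.3, 0.95, 320.0)])
# Homogeneous pixel at the abundance-weighted mean temperature
mean_t = mixed_pixel_radiance(wl, [(1.0, 0.95, 0.7 * 300 + 0.3 * 320)])
```

Because radiance is a convex function of temperature at these wavelengths, the linearly mixed radiance exceeds that of a homogeneous pixel at the mean temperature, which is why library emissivity spectra relate only indirectly to observed mixed-pixel spectra.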
Effect of correlation on covariate selection in linear and nonlinear mixed effect models.
Bonate, Peter L
2017-01-01
The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test (LRT) statistic or the AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as often as 1 in 5 times, when weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentrations and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as the better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as being a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why different covariates may be identified for the same drug in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
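The core effect of the first simulation, a covariate that merely correlates with the true one being selected by chance, can be reproduced with a toy Monte Carlo. All distributions and coefficients below are invented for illustration, and a simple univariate R² comparison stands in for the likelihood ratio test:

```python
import random

def r_squared(x, y):
    """R-squared of a univariate least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

random.seed(42)
n_sims, n_subj, wins_for_surrogate = 500, 50, 0
for _ in range(n_sims):
    weight = [random.gauss(75, 15) for _ in range(n_subj)]
    # Surrogate covariate (think body surface area) highly correlated with weight
    surrogate = [0.024 * w + random.gauss(0, 0.09) for w in weight]
    # Data-generating mechanism: only weight drives the response
    y = [0.5 * w + random.gauss(0, 8) for w in weight]
    if r_squared(surrogate, y) > r_squared(weight, y):
        wins_for_surrogate += 1
frac = wins_for_surrogate / n_sims
```

With a correlation near 0.97 the surrogate wins a substantial minority of replicates even though it plays no causal role, mirroring the roughly 1-in-5 selection rate reported for body surface area.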
Non-Linear Impact of the Marketing Mix on Brand Sales Performance
Directory of Open Access Journals (Sweden)
Rafael Barreiros Porto
2015-01-01
The pattern of impact that marketing activities exert on sales has not been made evident in the literature. Many studies adopt restrictive linear perspectives, disregarding the empirical evidence. This work investigated the non-linear impact of the marketing mix on sales volume, on the number of consumers, and on purchases per consumer. A longitudinal panel study of brands and their consumers was carried out simultaneously: 121 brands were analyzed over 13 months, with 793 purchases per month made by consumers, by means of three generalized estimating equations. The results show that the marketing mix, especially branding and pricing, strongly impacts all dependent variables in a non-linear fashion, with good parameter fits. The joint effect generates economies of scale for brands, while for each consumer the joint effect gradually encourages larger purchase quantities. The research demonstrates eight impact patterns of the marketing mix on the investigated indicators, with changes in their order and weight for brands and consumers.
Roerdink, J.B.T.M.
1981-01-01
The cumulant expansion for linear stochastic differential equations is extended to the general case in which the coefficient matrix, the inhomogeneous part and the initial condition are all random and, moreover, statistically interdependent. The expansion now involves not only the autocorrelation
Throughput vs. Delay in Lossy Wireless Mesh Networks with Random Linear Network Coding
DEFF Research Database (Denmark)
Hundebøll, Martin; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani
2014-01-01
This work proposes a new protocol applying on-the-fly random linear network coding in wireless mesh networks. The protocol provides increased reliability, low delay, and high throughput to the upper layers, while being oblivious to their specific requirements. These seemingly conflicting goals ...
Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images show the potential of the unmixing techniques when using coarse spatial resolution data for global studies.
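For the two-endmember case, the constrained least squares unmixing reduces to a closed form once the sum-to-one constraint is substituted in. The band reflectances below are made-up stand-ins for the AVHRR channels, chosen only to illustrate the computation:

```python
def unmix_two_endmembers(pixel, end_a, end_b):
    """Fraction f of endmember A in a pixel, from least squares with the
    sum-to-one constraint f_A + f_B = 1 built in (only f is estimated)."""
    num = sum((m - b) * (a - b) for m, a, b in zip(pixel, end_a, end_b))
    den = sum((a - b) ** 2 for a, b in zip(end_a, end_b))
    f = num / den
    return min(max(f, 0.0), 1.0)  # clip to the physical range [0, 1]

# Illustrative reflectances in three bands for vegetation and soil endmembers
veg, soil = [0.05, 0.04, 0.45], [0.12, 0.18, 0.25]
pixel = [0.3 * v + 0.7 * s for v, s in zip(veg, soil)]  # 30% vegetation
f_veg = unmix_two_endmembers(pixel, veg, soil)
```

With more endmembers than two, the same least-squares idea becomes the constrained matrix problem the paper solves to produce per-class fraction images.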
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
Warped linear mixed models for the genetic analysis of transformed phenotypes.
Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D; Stegle, Oliver
2014-09-19
Linear mixed models (LMMs) are a powerful and established tool for studying genotype-phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction.
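The idea of estimating an optimal transformation from the observed data can be illustrated with the classic Box-Cox family, fitted by maximizing its profile log-likelihood on a grid. The paper's warping functions are more general; the log-normal toy data below is purely illustrative:

```python
import math
import random

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 is the log transform."""
    return [math.log(v) if abs(lam) < 1e-12 else (v**lam - 1.0) / lam
            for v in y]

def boxcox_loglik(y, lam):
    """Profile log-likelihood of the Box-Cox parameter under Gaussian residuals."""
    z = boxcox(y, lam)
    mu = sum(z) / len(z)
    var = sum((v - mu) ** 2 for v in z) / len(z)
    return -0.5 * len(z) * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in y)

random.seed(1)
# Log-normal "phenotype": the appropriate transform is the log (lambda near 0)
y = [math.exp(random.gauss(0.0, 1.0)) for _ in range(300)]
grid = [i / 10.0 for i in range(-20, 21)]
best_lam = max(grid, key=lambda lam: boxcox_loglik(y, lam))
```

For log-normally distributed data the search should land near lambda = 0, recovering the usual manual choice of a log transform automatically, which is the spirit of the warped LMM.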
Non Linear Analyses for the Evaluation of Seismic Behavior of Mixed R.C.-Masonry Structures
International Nuclear Information System (INIS)
Liberatore, Laura; Tocci, Cesare; Masiani, Renato
2008-01-01
In this work the seismic behavior of masonry buildings with a mixed structural system, consisting of perimeter masonry walls and internal r.c. frames, is studied by means of non-linear static (pushover) analyses. Several aspects, such as the distribution of the seismic action between the masonry and r.c. elements, the local and global behavior of the structure, the failure of the connections, and the attainment of the ultimate strength of the whole structure, are examined. The influence of some parameters, such as the masonry compressive and tensile strength, on the structural behavior is investigated. The numerical analyses are also repeated on a building in which the internal r.c. frames are replaced with masonry walls.
Seol, Hyon-Woo; Heo, Seong-Joo; Koak, Jai-Young; Kim, Seong-Kyun; Kim, Shin-Koo
2015-01-01
To analyze the axial displacement of external and internal implant-abutment connection after cyclic loading. Three groups of external abutments (Ext group), an internal tapered one-piece-type abutment (Int-1 group), and an internal tapered two-piece-type abutment (Int-2 group) were prepared. Cyclic loading was applied to implant-abutment assemblies at 150 N with a frequency of 3 Hz. The amount of axial displacement, the Periotest values (PTVs), and the removal torque values (RTVs) were measured. Both a repeated measures analysis of variance and pattern analysis based on the linear mixed model were used for statistical analysis. Scanning electron microscopy (SEM) was used to evaluate the surface of the implant-abutment connection. The mean axial displacements after 1,000,000 cycles were 0.6 μm in the Ext group, 3.7 μm in the Int-1 group, and 9.0 μm in the Int-2 group. Pattern analysis revealed a breakpoint at 171 cycles. The Ext group showed no declining pattern, and the Int-1 group showed no declining pattern after the breakpoint (171 cycles). However, the Int-2 group experienced continuous axial displacement. After cyclic loading, the PTV decreased in the Int-2 group, and the RTV decreased in all groups. SEM imaging revealed surface wear in all groups. Axial displacement and surface wear occurred in all groups. The PTVs remained stable, but the RTVs decreased after cyclic loading. Based on linear mixed model analysis, the Ext and Int-1 groups' axial displacements plateaued after little cyclic loading. The Int-2 group's rate of axial displacement slowed after 100,000 cycles.
Sub-exponential mixing of random billiards driven by thermostats
International Nuclear Information System (INIS)
Yarmola, Tatiana
2013-01-01
We study the class of open continuous-time mechanical particle systems introduced in the paper by Khanin and Yarmola (2013 Commun. Math. Phys. 320 121–47). Using the discrete-time results from Khanin and Yarmola (2013 Commun. Math. Phys. 320 121–47) we demonstrate rigorously that, in continuous time, a unique steady state exists and is sub-exponentially mixing. Moreover, all initial distributions converge to the steady state and, for a large class of initial distributions, convergence to the steady state is sub-exponential. The main obstacle to exponential convergence is the existence of slow particles in the system. (paper)
Pang, Yu; Zhang, Kunning; Yang, Zhen; Jiang, Song; Ju, Zhenyi; Li, Yuxing; Wang, Xuefeng; Wang, Danyang; Jian, Muqiang; Zhang, Yingying; Liang, Renrong; Tian, He; Yang, Yi; Ren, Tian-Ling
2018-03-27
Recently, wearable pressure sensors have attracted tremendous attention because of their potential applications in monitoring physiological signals for human healthcare. Sensitivity and linearity are the two most essential parameters for pressure sensors. Although various designed micro/nanostructure morphologies have been introduced, the trade-off between sensitivity and linearity has not been well balanced. Human skin, which contains force receptors in a reticular layer, has a high sensitivity even for large external stimuli. Herein, inspired by the skin epidermis with high-performance force sensing, we have proposed a special surface morphology with spinosum microstructure of random distribution via the combination of an abrasive paper template and reduced graphene oxide. The sensitivity of the graphene pressure sensor with random distribution spinosum (RDS) microstructure is as high as 25.1 kPa⁻¹ in a wide linearity range of 0-2.6 kPa. Our pressure sensor exhibits superior comprehensive properties compared with previous surface-modified pressure sensors. According to simulation and mechanism analyses, the spinosum microstructure and random distribution contribute to the high sensitivity and large linearity range, respectively. In addition, the pressure sensor shows promising potential in detecting human physiological signals, such as heartbeat, respiration, phonation, and human motions of a pushup, arm bending, and walking. The wearable pressure sensor array was further used to detect gait states of supination, neutral, and pronation. The RDS microstructure provides an alternative strategy to improve the performance of pressure sensors and extend their potential applications in monitoring human activities.
Linear Optimization Techniques for Product-Mix of Paints Production in Nigeria
Directory of Open Access Journals (Sweden)
Sulaimon Olanrewaju Adebiyi
2014-02-01
Many paint producers in Nigeria do not lend themselves to a flexible production process, which is important for managing the use of resources for effective optimal production. These goals can be achieved through the application of optimization models in their resource allocation and utilisation. This research focuses on linear optimization for achieving product-mix optimization, in terms of identifying the products and the right quantities, in paint production in Nigeria for better profit and optimum firm performance. The computational experiments in this research contain data and information on the unit item costs, unit contribution margins, maximum resource capacities, individual products' absorption rates, and other constraints that are particular to each of the five products produced in the company employed as a case study. In the data analysis, a linear programming model was employed with the aid of the LINDO 11 software. The results showed that only two of the five products under consideration are profitable. The analysis also revealed the rate at which the company needs to reduce the costs incurred on the three other products before making them profitable for production.
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M+H]+ or [M-H]-) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
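The chemical formula calculation step can be emulated with a brute-force enumeration over element counts. RAMSI solves this jointly with the ion-relationship constraints as a single MILP; the element set, count limits, and 5 ppm tolerance below are illustrative assumptions:

```python
from itertools import product

# Monoisotopic masses (u) of a few common elements
MASS = {"C": 12.0, "H": 1.00782503, "N": 14.0030740, "O": 15.9949146}

def candidate_formulas(target_mass, tol_ppm=5.0, max_counts=(30, 60, 10, 20)):
    """Enumerate CxHyNzOw formulas whose monoisotopic mass matches the target
    within a ppm tolerance. Exhaustive search stands in for the single
    MILP optimization step used by RAMSI."""
    tol = target_mass * tol_ppm * 1e-6
    hits = []
    for c, h, n, o in product(*(range(m + 1) for m in max_counts)):
        m = c * MASS["C"] + h * MASS["H"] + n * MASS["N"] + o * MASS["O"]
        if abs(m - target_mass) <= tol:
            hits.append((c, h, n, o))
    return hits

# Neutral monoisotopic mass of glucose, C6H12O6
hits = candidate_formulas(180.063388)
```

The combinatorial blow-up with more elements and wider tolerances is exactly why expressing the chemical rules as linear constraints and delegating the search to a MILP solver pays off.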
Directory of Open Access Journals (Sweden)
Maoyuan Feng
2014-01-01
This study proposes a mixed integer linear programming (MILP) model to optimize the spillway scheduling for reservoir flood control. Unlike a conventional reservoir operation model, the proposed MILP model specifies the spillway status (including the number of spillways to be opened and the degree to which each spillway is opened) instead of the reservoir release, since the release is actually controlled by operating the spillways. A piecewise linear approximation is used to formulate the relationship between the reservoir storage and the water release for a spillway, which should be open/closed with a status depicted by a binary variable. The control order and symmetry rules of spillways are described and incorporated into the constraints to meet practical demands. Thus, a MILP model is set up to minimize the maximum reservoir storage. The General Algebraic Modeling System (GAMS) and IBM ILOG CPLEX Optimization Studio (CPLEX) software are used to find the optimal solution of the proposed MILP model. China's Three Gorges Reservoir, whose 80 spillways are of five types, is selected as the case study. It is shown that the proposed model decreases the flood risk compared with the conventional operation and makes the operation more practical by specifying the spillway status directly.
Identification of hydrometeor mixtures in polarimetric radar measurements and their linear de-mixing
Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis
2017-04-01
The issue of hydrometeor mixtures affects radar sampling volumes without a clear dominant hydrometeor type. Containing a number of different hydrometeor types which significantly contribute to the polarimetric variables, these volumes are likely to occur in the vicinity of the melting layer and mainly at large distances from the radar. Motivated by potential benefits for both quantitative and qualitative applications of dual-pol radar, we propose a method for the identification of hydrometeor mixtures and their subsequent linear de-mixing. This method is intrinsically related to our recently proposed semi-supervised approach for hydrometeor classification. The mentioned classification approach [1] performs labeling of radar sampling volumes by using as a criterion the Euclidean distance with respect to five-dimensional centroids, depicting nine hydrometeor classes. The positions of the centroids in the space formed by four radar moments and one external parameter (phase indicator) are derived through a technique of k-medoids clustering, applied on a selected representative set of radar observations, and coupled with statistical testing which introduces the assumed microphysical properties of the different hydrometeor types. Aside from a hydrometeor type label, each radar sampling volume is characterized by an entropy estimate, indicating the uncertainty of the classification. Here, we revisit the concept of entropy presented in [1], in order to emphasize its presumed potential for the identification of hydrometeor mixtures. The calculation of entropy is based on the estimate of the probability (p_i) that the observation corresponds to the hydrometeor type i (i = 1, …, 9). The probability is derived from the Euclidean distance (d_i) of the observation to the centroid characterizing the hydrometeor type i. The parametrization of the d → p transform is conducted in a controlled environment, using synthetic polarimetric radar datasets. It ensures balanced
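The entropy estimate attached to each radar sampling volume can be sketched as a distance-to-probability transform followed by a normalized Shannon entropy. The Gaussian-kernel transform and the distance values below are illustrative assumptions, not the paper's calibrated parametrization:

```python
import math

def membership_probs(distances, scale=1.0):
    """Map Euclidean distances to class centroids into probabilities
    (a simple Gaussian-kernel d -> p transform; the paper tunes its
    transform on synthetic polarimetric radar data)."""
    w = [math.exp(-(d / scale) ** 2) for d in distances]
    total = sum(w)
    return [v / total for v in w]

def shannon_entropy(probs):
    """Normalized Shannon entropy: 0 = confident label, 1 = full mixture."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))

# One centroid clearly closest -> low entropy (confident classification)
confident = shannon_entropy(membership_probs([0.2, 2.0, 2.1, 2.5, 3.0]))
# All centroids roughly equidistant -> high entropy (likely hydrometeor mixture)
ambiguous = shannon_entropy(membership_probs([1.9, 2.0, 2.1, 2.0, 1.95]))
```

High-entropy volumes are the ones flagged as candidate mixtures, and the same probabilities then provide the weights for the linear de-mixing step.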
MINIMUM ENTROPY DECONVOLUTION OF ONE- AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES
Institute of Scientific and Technical Information of China (English)
程乾生
1990-01-01
The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei
2017-09-25
It is challenging for current statistical models to predict the clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. A multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. Dynamic prediction was performed for both internal and external subjects using samples from the posterior distributions of the parameter estimates and random effects, and the predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20), compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized linear mixed models hold promise for predicting the clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available (https://pdbp.ninds.nih.gov/data-management).
International Nuclear Information System (INIS)
Shokair, I.R.
1991-01-01
Phase mixing of transverse oscillations changes the nature of the ion hose instability from an absolute to a convective instability. The stronger the phase mixing, the faster an electron beam reaches equilibrium with the guiding ion channel. This is important for long-distance propagation of relativistic electron beams, where it is desired that transverse oscillations phase mix within a few betatron wavelengths of injection so that an equilibrium is subsequently reached with no further beam emittance growth. In the linear regime phase mixing is well understood and results in asymptotic decay of transverse oscillations as 1/Z² for a Gaussian beam and channel system, Z being the axial distance measured in betatron wavelengths. In the nonlinear regime (which is the likely mode of propagation for long-pulse beams), results of the spread mass model indicate that phase mixing is considerably weaker than in the linear regime. In this paper we consider the problem of phase mixing in the nonlinear regime. Results of the spread mass model will be shown along with a simple analysis of phase mixing for multiple-oscillator models. Particle simulations also indicate that phase mixing is weaker in the nonlinear regime than in the linear regime. These results will also be shown. 3 refs., 4 figs
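The decay of the beam centroid through phase mixing can be illustrated by averaging a toy ensemble of oscillators over a spread of betatron frequencies. The 5% uniform frequency spread and the oscillator count are arbitrary illustrative choices, not the Gaussian beam-channel system analyzed in the paper:

```python
import math

def centroid_amplitude(z, spread=0.05, n_osc=2000):
    """Centroid displacement of an ensemble of unit-amplitude betatron
    oscillators with a uniform spread of frequencies (a toy multiple-
    oscillator model). z is the axial distance in betatron wavelengths."""
    total = 0.0
    for i in range(n_osc):
        # deterministic, evenly spaced frequencies around 1
        omega = 1.0 + spread * (2.0 * (i + 0.5) / n_osc - 1.0)
        total += math.cos(2.0 * math.pi * omega * z)
    return abs(total) / n_osc

early, late = centroid_amplitude(1.0), centroid_amplitude(40.0)
```

The centroid oscillation damps as the oscillators drift out of phase, even though each individual oscillator keeps its full amplitude; that decoherence of the ensemble average is the essence of phase mixing.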
Directory of Open Access Journals (Sweden)
Enrique López Calderón
2012-06-01
The aim of this study was to develop a mixed nectar with high acceptability and low cost. To obtain the mixed nectar, different amounts of passion fruit, sweet pepino and sucrose were considered, completing 100% with water, following a two-stage design: screening (using a 2³ design with 4 center points) and optimization (using a 2² design with 2·2 star points and 4 center points), stages that allow exploring a high-acceptability formulation. The technique of linear programming was then used to minimize the cost of the high-acceptability nectar. As a result of this process, a mixed nectar with optimal acceptability (score of 7) was obtained when the formulation contained between 9 and 14% passion fruit, 4 and 5% sucrose, and 73.5% sweet pepino juice, filled with water to 100%. Linear programming made it possible to reduce the cost of the mixed nectar with optimal acceptability to S/. 174 for a production of 1000 L/day.
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress
Directory of Open Access Journals (Sweden)
Hideki Katagiri
2017-10-01
This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.
Optimal Airport Surface Traffic Planning Using Mixed-Integer Linear Programming
Directory of Open Access Journals (Sweden)
P. C. Roling
2008-01-01
We describe an ongoing research effort pertaining to the development of a surface traffic automation system that will help controllers to better coordinate surface traffic movements related to arrival and departure traffic. More specifically, we describe the concept for a taxi-planning support tool that aims to optimize the routing and scheduling of airport surface traffic in such a way as to deconflict the taxi plans while optimizing delay, total taxi-time, or some other airport efficiency metric. Certain input parameters related to resource demand, such as the expected landing times and the expected pushback times, are rather difficult to predict accurately. Due to uncertainty in the input data driving the taxi-planning process, the taxi-planning tool is designed such that it produces solutions that are robust to uncertainty. The taxi-planning concept presented herein, which is based on mixed-integer linear programming, is designed such that it is able to adapt to perturbations in these input conditions, as well as to account for failure in the actual execution of surface trajectories. The capabilities of the tool are illustrated in a simple hypothetical airport.
Optimising the selection of food items for FFQs using Mixed Integer Linear Programming.
Gerdessen, Johanna C; Souverein, Olga W; van 't Veer, Pieter; de Vries, Jeanne Hm
2015-01-01
To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear Programming (MILP) model. The methodology was demonstrated for an FFQ with interest in energy, total protein, total fat, saturated fat, monounsaturated fat, polyunsaturated fat, total carbohydrates, mono- and disaccharides, dietary fibre and potassium. The food lists generated by the MILP model have good performance in terms of length, coverage and R² (explained variance) of all nutrients. MILP-generated food lists were 32-40 % shorter than a benchmark food list, whereas their quality in terms of R² was similar to that of the benchmark. The results suggest that the MILP model makes the selection process faster, more standardised and transparent, and is especially helpful in coping with multiple nutrients. The complexity of the method does not increase with increasing number of nutrients. The generated food lists appear either shorter or provide more information than a food list generated without the MILP model.
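As a toy illustration of the selection problem the MILP solves at scale, the sketch below brute-forces all two-item food lists and keeps the one maximising information on the least-covered nutrient. Items, nutrients and information values are hypothetical; a real instance would use a MILP solver rather than enumeration.

```python
from itertools import combinations

# Hypothetical information each food item contributes per nutrient.
items = {
    "bread":  {"energy": 5, "fibre": 4, "fat": 1},
    "cheese": {"energy": 4, "fibre": 0, "fat": 6},
    "apple":  {"energy": 1, "fibre": 5, "fat": 0},
    "butter": {"energy": 3, "fibre": 0, "fat": 7},
}
nutrients = ["energy", "fibre", "fat"]

def info(subset):
    # Information on the *least*-covered nutrient: maximising this forces a
    # short list to still cover all nutrients, akin to the MILP objective.
    return min(sum(items[i][n] for i in subset) for n in nutrients)

# Enumerate all food lists of length 2 and keep the most informative one.
best = max(combinations(items, 2), key=info)
```

For realistic food lists (hundreds of candidate items), the number of subsets explodes and the MILP formulation with binary selection variables becomes essential.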
Spatiotemporal chaos in mixed linear-nonlinear two-dimensional coupled logistic map lattice
Zhang, Ying-Qian; He, Yi; Wang, Xing-Yuan
2018-01-01
We investigate new spatiotemporal dynamics with mixed degrees of nonlinearity in the spatial coupling connections, based on the two-dimensional coupled map lattice (2DCML). Here the coupling combines linear neighborhood coupling with nonlinear chaotic-map coupling between lattices, and the former 2DCML system is a special case of the proposed system. In this paper, criteria such as the Kolmogorov-Sinai entropy density and universality, bifurcation diagrams, space-amplitude plots and snapshot pattern diagrams are used to investigate the chaotic behaviors of the proposed system. Furthermore, we investigate the parameter ranges over which the proposed system retains these features, in comparison with the 2DCML and MLNCML systems. Theoretical analysis and computer simulation indicate that the proposed system exhibits a higher percentage of lattices in chaotic behavior for most parameters, fewer periodic windows in bifurcation diagrams and a larger range of parameters yielding chaotic behavior, which makes it more suitable for cryptography.
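A minimal numerical sketch of such a lattice, assuming one plausible form of the mixing: each site combines its own logistic update with a linear average of its neighbours and a nonlinear (logistic-mapped) average of the same neighbours. The coupling strengths and map parameter are illustrative choices, not the paper's.

```python
import numpy as np

def logistic(x, mu=3.99):
    # Chaotic logistic map on [0, 1].
    return mu * x * (1.0 - x)

def step(lat, eps=0.3, eta=0.2):
    # One update of a 2-D coupled map lattice mixing linear neighbourhood
    # coupling (strength eps) with nonlinear chaotic-map coupling (eta).
    up, down = np.roll(lat, 1, 0), np.roll(lat, -1, 0)
    left, right = np.roll(lat, 1, 1), np.roll(lat, -1, 1)
    local = (1 - eps - eta) * logistic(lat)
    linear = eps * (up + down + left + right) / 4.0
    nonlin = eta * (logistic(up) + logistic(down)
                    + logistic(left) + logistic(right)) / 4.0
    return local + linear + nonlin

rng = np.random.default_rng(1)
lat = rng.uniform(0.1, 0.9, size=(16, 16))
for _ in range(100):
    lat = step(lat)  # states remain in [0, 1] because the weights sum to 1
```

Setting eta = 0 recovers a purely linearly coupled 2DCML, consistent with the claim that the former system is a special case.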
A turbulent mixing Reynolds stress model fitted to match linear interaction analysis predictions
International Nuclear Information System (INIS)
Griffond, J; Soulard, O; Souffland, D
2010-01-01
To predict the evolution of turbulent mixing zones developing in shock tube experiments with different gases, a turbulence model must be able to reliably evaluate the production due to the shock-turbulence interaction. In the limit of homogeneous weak turbulence, 'linear interaction analysis' (LIA) can be applied. This theory relies on Kovasznay's decomposition and allows the computation of waves transmitted or produced at the shock front. With assumptions about the composition of the upstream turbulent mixture, one can connect the second-order moments downstream from the shock front to those upstream through a transfer matrix, depending on shock strength. The purpose of this work is to provide a turbulence model that matches LIA results for the shock-turbulent mixture interaction. Reynolds stress models (RSMs) with additional equations for the density-velocity correlation and the density variance are considered here. The turbulent states upstream and downstream from the shock front calculated with these models can also be related through a transfer matrix, provided that the numerical implementation is based on a pseudo-pressure formulation. Then, the RSM should be modified in such a way that its transfer matrix matches the LIA one. Using the pseudo-pressure to introduce ad hoc production terms, we are able to obtain a close agreement between LIA and RSM matrices for any shock strength and thus improve the capabilities of the RSM.
A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.
Röhl, Annika; Bockmayr, Alexander
2017-01-03
Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.
Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer
2016-06-02
Understanding telomere length maintenance mechanisms is central in cancer biology as their dysregulation is one of the hallmarks for immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine-learning-based approach to identify regulatory interactions that best explain the discrepancy of telomerase transcript levels in yeast mutants with deleted regulators showing aberrant telomere length, compared to mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We packaged our machine-learning method into a user-friendly package for R that can be applied straightforwardly to similar problems integrating gene-regulator binding information and expression profiles of samples of, e.g., different phenotypes, diseases or treatments. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Application of mixed-integer linear programming in a car seats assembling process
Directory of Open Access Journals (Sweden)
Jorge Iván Perez Rave
2011-12-01
In this paper, a decision problem involving a car parts manufacturing company is modeled in order to prepare the company for an increase in demand. Mixed-integer linear programming was used with the following decision variables: creating a second shift, purchasing additional equipment, determining the required work force, and other alternatives involving new ways of distributing work that make it possible to separate certain operations from some workplaces and integrate them into others to minimize production costs. The model was solved using GAMS. The solution consisted of scheduling 19 workers under a configuration that merges two workplaces and separates some operations from others. The solution did not involve purchasing additional machinery or creating a second shift. As a result, the manufacturing paradigms that had been valid in the company for over 14 years were broken. This study allowed the company to increase its productivity and obtain significant savings. It also shows the benefits of joint work between academia and companies, and provides useful information for professors, students and engineers regarding production and continuous improvement.
Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
A multiple objective mixed integer linear programming model for power generation expansion planning
Energy Technology Data Exchange (ETDEWEB)
Antunes, C. Henggeler; Martins, A. Gomes [INESC-Coimbra, Coimbra (Portugal); Universidade de Coimbra, Dept. de Engenharia Electrotecnica, Coimbra (Portugal); Brito, Isabel Sofia [Instituto Politecnico de Beja, Escola Superior de Tecnologia e Gestao, Beja (Portugal)
2004-03-01
Power generation expansion planning inherently involves multiple, conflicting and incommensurate objectives. Therefore, mathematical models become more realistic if distinct evaluation aspects, such as cost and environmental concerns, are explicitly considered as objective functions rather than being encompassed by a single economic indicator. With the aid of multiple objective models, decision makers may grasp the conflicting nature and the trade-offs among the different objectives in order to select satisfactory compromise solutions. This paper presents a multiple objective mixed integer linear programming model for power generation expansion planning that allows the consideration of modular expansion capacity values of supply-side options. This characteristic of the model avoids the well-known problem associated with continuous capacity values that usually have to be discretized in a post-processing phase without feedback on the nature and importance of the changes in the attributes of the obtained solutions. Demand-side management (DSM) is also considered an option in the planning process, assuming there is a sufficiently large portion of the market under franchise conditions. As DSM full costs are accounted in the model, including lost revenues, it is possible to perform an evaluation of the rate impact in order to further inform the decision process.
Further Improvements to Linear Mixed Models for Genome-Wide Association Studies
Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David
2014-11-01
We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.
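The two-component GSM idea can be sketched as follows. The standardisation convention and the fixed mixing weight are assumptions for illustration: in practice the weight is fitted, and the second component would use SNPs pre-selected for phenotype prediction rather than an arbitrary subset.

```python
import numpy as np

rng = np.random.default_rng(2)
n_individuals, n_snps = 20, 500
# Hypothetical genotype matrix with allele counts 0/1/2.
X = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)

def gsm(G):
    # Genetic similarity matrix from standardized SNPs (a common
    # construction; exact standardization conventions vary between tools).
    Z = (G - G.mean(0)) / G.std(0)
    return Z @ Z.T / G.shape[1]

K_all = gsm(X)            # component built from all SNPs
K_sel = gsm(X[:, :50])    # component from SNPs that predict the phenotype
tau = 0.4                 # mixing weight (would be estimated in practice)
K_mix = (1 - tau) * K_all + tau * K_sel
```

The mixed GSM `K_mix` then replaces the single all-SNP GSM as the covariance structure of the random genetic effect in the LMM.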
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model best captured the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Linearization effect in multifractal analysis: Insights from the Random Energy Model
Angeletti, Florian; Mézard, Marc; Bertin, Eric; Abry, Patrice
2011-08-01
The analysis of the linearization effect in multifractal analysis, and hence of the estimation of moments for multifractal processes, is revisited borrowing concepts from the statistical physics of disordered systems, notably from the analysis of the so-called Random Energy Model. Considering a standard multifractal process (compound Poisson motion), chosen as a simple representative example, we show the following: (i) the existence of a critical order q∗ beyond which moments, though finite, cannot be estimated through empirical averages, irrespective of the sample size of the observation; (ii) multifractal exponents necessarily behave linearly in q, for q>q∗. Tailoring the analysis conducted for the Random Energy Model to that of compound Poisson motion, we provide explanatory and quantitative predictions for the values of q∗ and for the slope controlling the linear behavior of the multifractal exponents. These quantities are shown to be related only to the definition of the multifractal process and not to depend on the sample size of the observation. Monte Carlo simulations, conducted over a large number of large sample size realizations of compound Poisson motion, support and extend these analyses.
The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...
de Bruin, A.B.H.; Smits, N.; Rikers, R.M.J.P.; Schmidt, H.G.
2008-01-01
In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training,
Ma, Qiuyun; Jiao, Yan; Ren, Yiping
2017-01-01
In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total 3,275 individuals were collected during six years (2008, 2011-2015). One generalized linear model, two simple linear models and nine linear mixed effect models that applied the effects from regions and/or years to the coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed effect model with random effects from both regions and years fit the data best, with the lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178~0.0308, and the estimated exponent b was 2.917 with 95% confidence interval 2.731~2.945. Estimates for a and b with the random effects in intercept and coefficient from Region and Year ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on parameters a and b, while the effects from years were shown to be much larger than those from regions. Except for Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to reference years of 1960, 1986, 2005, 2007, 2008~2009 and 2010 revealed that the body shape of Yellow Croaker became thinner in recent years. Furthermore, relative condition factors varied among months, years, regions and length. The values of a and the relative condition factors decreased as environmental pollution worsened; length-weight relationships could therefore serve as an indicator of environmental quality.
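Stripped of the region and year random effects, the fixed-effects backbone of these models is the familiar log-log regression log W = log a + b log L. A sketch on synthetic data (the noise level and length range are assumptions, with a and b set near the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic length-weight data: W = a * L^b with multiplicative noise.
a_true, b_true = 0.0192, 2.917
L = rng.uniform(8, 25, 200)                       # lengths, cm (assumed)
W = a_true * L**b_true * np.exp(0.05 * rng.standard_normal(200))

# Ordinary least squares on log W = log a + b log L; the study layers
# region/year random effects on top of exactly these fixed effects.
A = np.column_stack([np.ones_like(L), np.log(L)])
(log_a, b), *_ = np.linalg.lstsq(A, np.log(W), rcond=None)
a = np.exp(log_a)

# Relative condition factor: observed weight over predicted weight.
Kn = W / (a * L**b)
```

A fish with Kn below 1 is thinner than the model predicts for its length, which is how the study tracks body condition across years and regions.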
Role of Statistical Random-Effects Linear Models in Personalized Medicine.
Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose
2012-03-01
Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.
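The empirical Bayesian dosage logic can be illustrated with the simplest random-intercept case: the individualized prediction shrinks the patient's observed mean toward the population mean, with less shrinkage as more blood samples accrue. All numbers here are hypothetical, and real dosage work would estimate the variance components by REML and include covariates.

```python
import numpy as np

# Hypothetical population parameters for a random-intercept linear model.
pop_mean = 50.0      # population mean response per unit dose
var_between = 16.0   # between-patient variance (sigma_b^2)
var_within = 25.0    # within-patient residual variance (sigma_e^2)

def eb_estimate(patient_obs):
    # Empirical Bayes (BLUP-style) prediction of a patient's own level:
    # a precision-weighted compromise between the patient and the population.
    n = len(patient_obs)
    shrink = var_between / (var_between + var_within / n)
    return pop_mean + shrink * (np.mean(patient_obs) - pop_mean)

# With one sample the estimate stays close to the population mean;
# with many samples it approaches the patient's own mean of 62.
few = eb_estimate([62.0])
many = eb_estimate([62.0] * 20)
```

This is the sense in which the model describes "populations as a whole and patients as individuals" at once, and it is also why a minimum number of samples can be computed for a target precision.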
Spillane, James P.; Pareja, Amber Stitziel; Dorner, Lisa; Barnes, Carol; May, Henry; Huff, Jason; Camburn, Eric
2010-01-01
In this paper we described how we mixed research approaches in a Randomized Control Trial (RCT) of a school principal professional development program. Using examples from our study we illustrate how combining qualitative and quantitative data can address some key challenges from validating instruments and measures of mediator variables to…
Examples of mixed-effects modeling with crossed random effects and with binomial data
Quené, H.; van den Bergh, H.
2008-01-01
Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Wen-Min Zhou
2013-01-01
This paper is concerned with the consensus problem of general linear discrete-time multiagent systems (MASs) with random packet dropout that happens during information exchange between agents. The packet dropout phenomenon is characterized as a Bernoulli random process. A distributed consensus protocol with weighted graph is proposed to address the packet dropout phenomenon. Through introducing a new disagreement vector, a new framework is established to solve the consensus problem. Based on control theory, the perturbation argument, and matrix theory, the necessary and sufficient condition for MASs to reach mean-square consensus is derived in terms of stability of an array of low-dimensional matrices. Moreover, conditions for mean-square consensusability with regard to network topology and agent dynamic structure are also provided. Finally, the effectiveness of the theoretical results is demonstrated through an illustrative example.
Directory of Open Access Journals (Sweden)
Chao Luo
A novel algebraic approach is proposed to study the dynamics of asynchronous random Boolean networks, where a random number of nodes can be updated at each time step (ARBNs). In this article, the logical equations of ARBNs are converted into a discrete-time linear representation and the dynamical behaviors of the systems are investigated. We provide a general formula for the network transition matrices of ARBNs, as well as a necessary and sufficient algebraic criterion to determine whether a group of given states composes an attractor of length [Formula: see text] in ARBNs. Consequently, algorithms are derived to find all of the attractors and basins in ARBNs. Examples are shown to demonstrate the feasibility of the proposed scheme.
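For intuition, the sketch below enumerates the attractor of a tiny three-node Boolean network under synchronous updating, the deterministic special case; the ARBN machinery above generalizes this to random subsets of updated nodes via the algebraic transition-matrix representation. The update rules are made up for illustration.

```python
from itertools import product

# Toy 3-node Boolean network; the update rules are hypothetical.
def update(state):
    a, b, c = state
    return (b and c, not a, a or b)

def attractor(start):
    # Follow synchronous updates until a state repeats; the tail cycle
    # from the first repeated state onward is the attractor.
    seen, s = [], start
    while s not in seen:
        seen.append(s)
        s = update(s)
    return tuple(seen[seen.index(s):])

# Exhaustively search the 2^3 state space for all attractors.
attractors = {frozenset(attractor(s)) for s in product([False, True], repeat=3)}
```

In an ARBN the successor of a state is no longer unique, so this cycle-following loop is replaced by reachability analysis on the (random) transition matrix.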
Typed Linear Chain Conditional Random Fields and Their Application to Intrusion Detection
Elfers, Carsten; Horstmann, Mirko; Sohr, Karsten; Herzog, Otthein
Intrusion detection in computer networks faces the problem of a large number of both false alarms and unrecognized attacks. To improve the precision of detection, various machine learning techniques have been proposed. However, one critical issue is that the amount of reference data that contains serious intrusions is very sparse. In this paper we present an inference process with linear chain conditional random fields that aims to solve this problem by using domain knowledge about the alerts of different intrusion sensors represented in an ontology.
Walker, Jeffrey A
2016-01-01
downward biased standard errors and inflated coefficients. The Monte Carlo simulation of error rates shows highly inflated Type I error from the GLS test and slightly inflated Type I error from the GEE test. By contrast, Type I error for all OLS tests are at the nominal level. The permutation F -tests have ∼1.9X the power of the other OLS tests. This increased power comes at a cost of high sign error (∼10%) if tested on small effects. The apparently replicated pattern of well-being effects on gene expression is most parsimoniously explained as "correlated noise" due to the geometry of multiple regression. The GLS for fixed effects with correlated error, or any linear mixed model for estimating fixed effects in designs with many repeated measures or outcomes, should be used cautiously because of the inflated Type I and M error. By contrast, all OLS tests perform well, and the permutation F -tests have superior performance, including moderate power for very small effects.
Directory of Open Access Journals (Sweden)
Jeffrey A. Walker
2016-10-01
distributions suggest that the GLS results in downward biased standard errors and inflated coefficients. The Monte Carlo simulation of error rates shows highly inflated Type I error from the GLS test and slightly inflated Type I error from the GEE test. By contrast, Type I error for all OLS tests are at the nominal level. The permutation F-tests have ∼1.9X the power of the other OLS tests. This increased power comes at a cost of high sign error (∼10%) if tested on small effects. Discussion: The apparently replicated pattern of well-being effects on gene expression is most parsimoniously explained as "correlated noise" due to the geometry of multiple regression. The GLS for fixed effects with correlated error, or any linear mixed model for estimating fixed effects in designs with many repeated measures or outcomes, should be used cautiously because of the inflated Type I and M error. By contrast, all OLS tests perform well, and the permutation F-tests have superior performance, including moderate power for very small effects.
Frömer, Romy; Maier, Martin; Abdel Rahman, Rasha
2018-01-01
Here we present an application of an EEG processing pipeline customizing EEGLAB and FieldTrip functions, specifically optimized to flexibly analyze EEG data based on single trial information. The key component of our approach is to create a comprehensive 3-D EEG data structure including all trials and all participants maintaining the original order of recording. This allows straightforward access to subsets of the data based on any information available in a behavioral data structure matched with the EEG data (experimental conditions, but also performance indicators, such as accuracy or RTs of single trials). In the present study we exploit this structure to compute linear mixed models (LMMs, using lmer in R) including random intercepts and slopes for items. This information can easily be read out from the matched behavioral data, whereas it might not be accessible in traditional ERP approaches without substantial effort. We further provide easily adaptable scripts for performing cluster-based permutation tests (as implemented in FieldTrip), as a more robust alternative to traditional omnibus ANOVAs. Our approach is particularly advantageous for data with parametric within-subject covariates (e.g., performance) and/or multiple complex stimuli (such as words, faces or objects) that vary in features affecting cognitive processes and ERPs (such as word frequency, salience or familiarity), which are sometimes hard to control experimentally or might themselves constitute variables of interest. The present dataset was recorded from 40 participants who performed a visual search task on previously unfamiliar objects, presented either visually intact or blurred. MATLAB as well as R scripts are provided that can be adapted to different datasets.
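The core data-structure idea, a trials × channels × time array kept in recording order and indexed by a matched behavioural table, is easy to sketch (here in Python/NumPy rather than the MATLAB/R pipeline of the paper; dimensions and column names are made up):

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, n_channels, n_times = 200, 32, 100

# 3-D EEG array: axis 0 keeps trials in their original recording order.
eeg = rng.standard_normal((n_trials, n_channels, n_times))

# Matched behavioural table, one row per trial, same order as EEG axis 0.
condition = rng.choice(["intact", "blurred"], n_trials)
rt = rng.uniform(0.3, 1.2, n_trials)

# Any behavioural column can index the EEG directly,
# e.g. single trials from fast "intact" presentations:
mask = (condition == "intact") & (rt < 0.6)
subset = eeg[mask]
```

Because the behavioural row order matches the EEG trial order, single-trial covariates like RT can be fed straight into trial-level models (the LMM step), with no re-matching of trials to conditions.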
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
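Several of the compared curves, e.g. Wood's model y(t) = a·t^b·exp(−c·t), can be fitted without PROC NLMIXED; a minimal single-curve sketch with scipy, where the parameter values are invented for illustration rather than taken from the buffalo data:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    # Wood's incomplete-gamma lactation curve: y(t) = a * t^b * exp(-c*t)
    return a * np.power(t, b) * np.exp(-c * t)

rng = np.random.default_rng(1)
t = np.arange(1.0, 11.0)                                     # 10 monthly test days
y = wood(t, 1.2, 0.25, 0.08) + rng.normal(0, 0.01, t.size)   # noisy records

popt, _ = curve_fit(wood, t, y, p0=[1.0, 0.2, 0.1])
a, b, c = popt
print("a, b, c:", a, b, c)
print("turning point at t = b/c:", b / c)                    # where dy/dt = 0
```

The mixed-model version fitted with PROC NLMIXED additionally gives each animal its own random deviations of these parameters, which is what allows test-day records from many buffaloes to be pooled.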
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentration of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
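The broken-line linear (BLL) ascending model that won the BIC comparison has a simple functional form: a linear rise up to a breakpoint, then a plateau. A small sketch of the fixed-effects part with scipy; the data are simulated (only the breakpoint is placed near the reported 16.5%), and the random block effects and heteroskedastic variances of the full mixed model are omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

def bll(x, plateau, slope, bkpt):
    # Broken-line linear ascending: rises with `slope` up to the
    # breakpoint `bkpt`, then stays flat at `plateau`
    return np.where(x < bkpt, plateau - slope * (bkpt - x), plateau)

rng = np.random.default_rng(2)
x = np.linspace(14.0, 24.0, 40)            # hypothetical SID Trp:Lys ratios, %
y = bll(x, 0.70, 0.02, 16.5) + rng.normal(0, 0.005, x.size)

popt, _ = curve_fit(bll, x, y, p0=[0.7, 0.01, 17.0])
print("plateau, slope, breakpoint:", popt)
```

As the abstract notes, a grid search over initial values (here `p0`) is often needed for such piecewise models to converge reliably.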
Image quality optimization and evaluation of linearly mixed images in dual-source, dual-energy CT
International Nuclear Information System (INIS)
Yu Lifeng; Primak, Andrew N.; Liu Xin; McCollough, Cynthia H.
2009-01-01
In dual-source dual-energy CT, the images reconstructed from the low- and high-energy scans (typically at 80 and 140 kV, respectively) can be mixed together to provide a single set of non-material-specific images for the purpose of routine diagnostic interpretation. Different from the material-specific information that may be obtained from the dual-energy scan data, the mixed images are created with the purpose of providing the interpreting physician a single set of images that have an appearance similar to that of single-energy images acquired at the same total radiation dose. In this work, the authors used a phantom study to evaluate the image quality of linearly mixed images in comparison to single-energy CT images, assuming the same total radiation dose and taking into account the effect of patient size and the dose partitioning between the low- and high-energy scans. The authors first developed a method to optimize the quality of the linearly mixed images such that the single-energy image quality was compared to the best-case image quality of the dual-energy mixed images. Compared to 80 kV single-energy images at the same radiation dose, the iodine CNR in dual-energy mixed images was worse for smaller phantom sizes. However, similar noise and similar or improved iodine CNR relative to 120 kV images could be achieved for dual-energy mixed images using the same total radiation dose over a wide range of patient sizes (up to 45 cm lateral thorax dimension). Thus, for adult CT practices, which primarily use 120 kV scanning, the use of dual-energy CT for the purpose of material-specific imaging can also produce a set of non-material-specific images for routine diagnostic interpretation that are of similar or improved quality relative to single-energy 120 kV scans.
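The optimization step can be illustrated numerically: for a linear mix w·I_low + (1−w)·I_high with independent noise in the two scans, the iodine CNR as a function of w has a single maximum, which defines the best-case mixed image the authors compare against. The contrast and noise values below are hypothetical stand-ins, not the phantom measurements from the paper:

```python
import numpy as np

def mixed_cnr(w, s_low, s_high, n_low, n_high):
    # CNR of the linear mix w*I_low + (1-w)*I_high, assuming
    # independent noise in the two scans
    signal = w * s_low + (1.0 - w) * s_high
    noise = np.sqrt(w**2 * n_low**2 + (1.0 - w)**2 * n_high**2)
    return signal / noise

# Hypothetical iodine contrast and noise (HU) at 80 and 140 kV
s80, s140 = 300.0, 120.0      # iodine enhances far more at low kV
n80, n140 = 25.0, 12.0

w = np.linspace(0.0, 1.0, 1001)
cnr = mixed_cnr(w, s80, s140, n80, n140)
w_opt = w[np.argmax(cnr)]
print("optimal 80 kV weight:", w_opt, "best CNR:", cnr.max())
```

With independent noise the optimum has the closed form w* = s_low·n_high² / (s_low·n_high² + s_high·n_low²), so the grid search above is only for transparency.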
Ferrimagnetic Properties of Bond Dilution Mixed Blume-Capel Model with Random Single-Ion Anisotropy
International Nuclear Information System (INIS)
Liu Lei; Yan Shilei
2005-01-01
We study the ferrimagnetic properties of spin-1/2 and spin-1 systems by means of the effective field theory. The system is considered in the framework of the bond dilution mixed Blume-Capel model (BCM) with random single-ion anisotropy. The investigation of phase diagrams and magnetization curves indicates the existence of induced magnetic ordering and single or multiple compensation points. Special emphasis is placed on the influence of bond dilution and random single-ion anisotropy on normal or induced magnetic ordering states and single or multiple compensation points. Normal magnetic ordering states take on new phase diagrams with increasing randomness (bond and anisotropy), while anisotropy-induced magnetic ordering states always occur, regardless of whether the concentration of anisotropy is large or small. The existence and disappearance of compensation points rely strongly on bond dilution and random single-ion anisotropy. Some of these results have not been revealed in previous papers, nor are they predicted by the Néel theory of ferrimagnetism.
Randomly Generating Four Mixed Bell-Diagonal States with a Concurrences Sum to Unity
International Nuclear Information System (INIS)
Toh, S. P.; Zainuddin Hishamuddin; Foo Kim Eng
2012-01-01
A two-qubit system in quantum information theory is the simplest bipartite quantum system, and its concurrence for pure and mixed states is well known. As a subset of two-qubit systems, Bell-diagonal states can be depicted by a very simple geometrical representation: a tetrahedron with sides of length 2√2. Based on this geometric representation, we propose a simple approach to randomly generate four mixed Bell-diagonal states whose concurrences sum to one.
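For a Bell-diagonal state with Bell-basis eigenvalues λ₁…λ₄ (summing to one), the concurrence reduces to the standard closed form C = max(0, 2·max λ − 1), which makes random generation and checking easy to sketch. The uniform simplex sampling below is illustrative and does not reproduce the paper's tetrahedron-based construction:

```python
import numpy as np

def concurrence_bell_diagonal(lam):
    # Bell-diagonal state with Bell-basis eigenvalues lam (summing to 1):
    # concurrence C = max(0, 2*max(lam) - 1)
    lam = np.asarray(lam, dtype=float)
    return max(0.0, 2.0 * lam.max() - 1.0)

rng = np.random.default_rng(3)
lam = rng.dirichlet(np.ones(4))    # a random point in the probability simplex
print("eigenvalues:", lam)
print("concurrence:", concurrence_bell_diagonal(lam))
```

A pure Bell state (one eigenvalue equal to 1) gives C = 1, while the maximally mixed state (all λ = 1/4) gives C = 0, matching the tetrahedron picture in which only states near a vertex are entangled.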
Energy Technology Data Exchange (ETDEWEB)
Quirós Segovia, M.; Condés Ruiz, S.; Drápela, K.
2016-07-01
Aim of the study: The main objective of this study was to test Geographically Weighted Regression (GWR) for developing height-diameter curves for forests on a large scale and to compare it with Linear Mixed Models (LMM). Area of study: Monospecific stands of Pinus halepensis Mill. located in the region of Murcia (Southeast Spain). Materials and Methods: The dataset consisted of 230 sample plots (2582 trees) from the Third Spanish National Forest Inventory (SNFI) randomly split into training data (152 plots) and validation data (78 plots). Two different methodologies were used for modelling local (Petterson) and generalized height-diameter relationships (Cañadas I): GWR, with different bandwidths, and linear mixed models. Finally, the quality of the estimated models was compared through statistical analysis. Main results: In general, both LMM and GWR provide better prediction capability when applied to a generalized height-diameter function than when applied to a local one, with R² values increasing from around 0.6 to 0.7 in the model validation. Bias and RMSE were also lower for the generalized function. However, error analysis showed that there were no large differences between these two methodologies, evidencing that GWR provides results which are as good as those of the more frequently used LMM methodology, at least when no additional measurements are available for calibration. Research highlights: GWR is a type of spatial analysis for exploring spatially heterogeneous processes. GWR can model spatial variation in the tree height-diameter relationship and its regression quality is comparable to that of LMM. The advantage of GWR over LMM is the possibility of determining the spatial location of every parameter without additional measurements. Abbreviations: GWR (Geographically Weighted Regression); LMM (Linear Mixed Model); SNFI (Spanish National Forest Inventory).
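The core of GWR is just locally weighted least squares: each target location gets its own height-diameter coefficients, estimated with a distance-decaying kernel over neighboring plots. A self-contained numpy sketch on simulated plots; the Gaussian kernel, bandwidth and data values are illustrative choices, not the fitted SNFI models:

```python
import numpy as np

def gwr_coef(X, y, coords, target, bandwidth):
    # GWR at one target location: weighted least squares with a Gaussian
    # kernel on the spatial distance to every observation
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)   # (X'WX)^-1 X'Wy

rng = np.random.default_rng(4)
n = 200
coords = rng.uniform(0, 10, (n, 2))              # simulated plot locations
dbh = rng.uniform(10, 40, n)                     # diameters at breast height, cm
local_slope = 0.5 + 0.05 * coords[:, 0]          # h-d slope drifts west to east
height = 5.0 + local_slope * dbh + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), dbh])
b_west = gwr_coef(X, height, coords, np.array([1.0, 5.0]), bandwidth=2.0)
b_east = gwr_coef(X, height, coords, np.array([9.0, 5.0]), bandwidth=2.0)
print("west slope:", b_west[1], "east slope:", b_east[1])
```

An LMM would instead pool all plots with random plot effects; as the abstract notes, GWR recovers the spatial drift in the coefficients without any calibration measurements.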
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood estimation, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we can easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
Edwards, Lloyd J.; Simpson, Sean L.
2010-01-01
The use of 24-hour ambulatory blood pressure monitoring (ABPM) in clinical practice and observational epidemiological studies has grown considerably in the past 25 years. ABPM is a very effective technique for assessing biological, environmental, and drug effects on blood pressure. In order to enhance the effectiveness of ABPM for clinical and observational research studies via analytical and graphical results, developing alternative data analysis approaches is important. The linear mixed mo...
Magezi, David A
2015-01-01
Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
Random walk, diffusion and mixing in simulations of scalar transport in fluid flows
International Nuclear Information System (INIS)
Klimenko, A Y
2008-01-01
Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
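The equivalence the abstract builds on can be checked in a few lines: Gaussian random-walk increments with variance 2DΔt per step reproduce the continuous-diffusion spread σ² = 2Dt. A minimal sketch of this baseline correspondence (no particle mixing model, and the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
D, dt, n_steps, n_particles = 0.5, 0.01, 1000, 20000

# Lagrangian particles performing a Gaussian random walk whose
# per-step variance 2*D*dt matches the diffusion coefficient D
x = np.zeros(n_particles)
for _ in range(n_steps):
    x += rng.normal(0.0, np.sqrt(2.0 * D * dt), n_particles)

t = n_steps * dt
print("empirical variance:", x.var())
print("continuous-diffusion prediction 2*D*t:", 2.0 * D * t)
```

The residual gap between the empirical and analytic variance shrinks like 1/√N with the particle count, which is the kind of statistical error the paper quantifies for mixing-based schemes.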
Linear C32H66 hydrocarbon in the mixed state with C10H22 ...
Indian Academy of Sciences (India)
Unknown
S R Research Laboratory for Studies in Crystallization Phenomena, 10-1-96, ... mixed state with certain shorter chain length homologues (SMOLLENCs), estimated ... Methods. Five hydrocarbons of even carbon numbers, C10, C12, C14, C16 ...
Memon, Sajid; Nataraj, Neela; Pani, Amiya Kumar
2012-01-01
In this article, a posteriori error estimates are derived for mixed finite element Galerkin approximations to second order linear parabolic initial and boundary value problems. Using mixed elliptic reconstructions, a posteriori error estimates in L∞(L2)- and L2(L2)-norms for the solution as well as its flux are proved for the semidiscrete scheme. Finally, based on a backward Euler method, a completely discrete scheme is analyzed and a posteriori error bounds are derived, which improves upon earlier results on a posteriori estimates of mixed finite element approximations to parabolic problems. Results of numerical experiments verifying the efficiency of the estimators have also been provided. © 2012 Society for Industrial and Applied Mathematics.
Stable Graphical Model Estimation with Random Forests for Discrete, Continuous, and Mixed Variables
Fellinghauer, Bernd; Bühlmann, Peter; Ryffel, Martin; von Rhein, Michael; Reinhardt, Jan D.
2011-01-01
A conditional independence graph is a concise representation of pairwise conditional independence among many variables. Graphical Random Forests (GRaFo) are a novel method for estimating pairwise conditional independence relationships among mixed-type, i.e. continuous and discrete, variables. The number of edges is a tuning parameter in any graphical model estimator and there is no obvious number that constitutes a good choice. Stability Selection helps choose this parameter with respect to...
DEFF Research Database (Denmark)
Fitzek, Frank; Toth, Tamas; Szabados, Áron
2014-01-01
This paper advocates the use of random linear network coding for storage in distributed clouds in order to reduce storage and traffic costs in dynamic settings, i.e. when adding and removing numerous storage devices/clouds on-the-fly and when the number of reachable clouds is limited. We introduce...... various network coding approaches that trade-off reliability, storage and traffic costs, and system complexity relying on probabilistic recoding for cloud regeneration. We compare these approaches with other approaches based on data replication and Reed-Solomon codes. A simulator has been developed...... to carry out a thorough performance evaluation of the various approaches when relying on different system settings, e.g., finite fields, and network/storage conditions, e.g., storage space used per cloud, limited network use, and limited recoding capabilities. In contrast to standard coding approaches, our...
Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks
DEFF Research Database (Denmark)
Heide, J; Zhang, Qi; Fitzek, F H P
2013-01-01
This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...... reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters...... present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...
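The RLNC mechanics underlying these parameter trade-offs are compact: each coded packet is a random linear combination of the k source packets, and a receiver can decode once it holds k linearly independent combinations. A toy sketch over GF(2), the cheapest field choice for constrained sensor nodes; packet sizes and counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)

def rlnc_encode(packets, n_coded):
    # RLNC over GF(2): each coded packet is a random XOR-combination
    # of the k source packets
    k = packets.shape[0]
    coeffs = rng.integers(0, 2, (n_coded, k), dtype=np.uint8)
    coded = (coeffs @ packets) % 2
    return coeffs, coded

def gf2_rank(M):
    # Gaussian elimination over GF(2): counts independent combinations,
    # i.e. whether a receiver holding these rows can decode
    M = (M.copy() % 2).astype(np.uint8)
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

k, length = 8, 16
packets = rng.integers(0, 2, (k, length), dtype=np.uint8)
coeffs, coded = rlnc_encode(packets, n_coded=12)   # some overhead beyond k
print("decodable:", gf2_rank(coeffs) == k)
```

Larger fields such as GF(2^8) make linearly dependent combinations far less likely at the price of costlier arithmetic, which is exactly the energy-per-bit trade-off studied above.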
Random Linear Network Coding is Key to Data Survival in Highly Dynamic Distributed Storage
DEFF Research Database (Denmark)
Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani
2015-01-01
Distributed storage solutions have become widespread due to their ability to store large amounts of data reliably across a network of unreliable nodes, by employing repair mechanisms to prevent data loss. Conventional systems rely on static designs with a central control entity to oversee...... and control the repair process. Given the large costs for maintaining and cooling large data centers, our work proposes and studies the feasibility of a fully decentralized systems that can store data even on unreliable and, sometimes, unavailable mobile devices. This imposes new challenges on the design...... as the number of available nodes varies greatly over time and keeping track of the system's state becomes unfeasible. As a consequence, conventional erasure correction approaches are ill-suited for maintaining data integrity. In this highly dynamic context, random linear network coding (RLNC) provides...
Analysis and Optimization of Sparse Random Linear Network Coding for Reliable Multicast Services
DEFF Research Database (Denmark)
Tassi, Andrea; Chatzigeorgiou, Ioannis; Roetter, Daniel Enrique Lucani
2016-01-01
Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper refers to a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different random linear network coding (RLNC......) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations is, the more the user's computational overhead grows and, consequently, the faster the battery of mobile devices drains. By referring to several sparse RLNC...... techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling allows to efficiently derive the average number of coded packet...
Linear-scaling implementation of the direct random-phase approximation
International Nuclear Information System (INIS)
Kállay, Mihály
2015-01-01
We report the linear-scaling implementation of the direct random-phase approximation (dRPA) for closed-shell molecular systems. As a bonus, linear-scaling algorithms are also presented for the second-order screened exchange extension of dRPA as well as for the second-order Møller–Plesset (MP2) method and its spin-scaled variants. Our approach is based on an incremental scheme which is an extension of our previous local correlation method [Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The approach extensively uses local natural orbitals to reduce the size of the molecular orbital basis of local correlation domains. In addition, we also demonstrate that using natural auxiliary functions [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], the size of the auxiliary basis of the domains and thus that of the three-center Coulomb integral lists can be reduced by an order of magnitude, which results in significant savings in computation time. The new approach is validated by extensive test calculations for energies and energy differences. Our benchmark calculations also demonstrate that the new method enables dRPA calculations for molecules with more than 1000 atoms and 10 000 basis functions on a single processor
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, there has been considerable recent work on consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
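The benchmark the abstract compares against, two-stage least squares, is easy to sketch for the linear case: regress the exposure on the genetic instruments, then regress the outcome on the predicted exposure. Simulated data with a deliberate unmeasured confounder; all effect sizes and instrument counts are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_snps = 5000, 10

G = rng.binomial(2, 0.3, (n, n_snps)).astype(float)   # SNP dosage instruments
u = rng.normal(0.0, 1.0, n)                           # unmeasured confounder
gamma = rng.uniform(0.1, 0.3, n_snps)                 # instrument strengths
exposure = G @ gamma + u + rng.normal(0.0, 1.0, n)
outcome = 0.5 * exposure + u + rng.normal(0.0, 1.0, n)   # true causal effect 0.5

# Stage 1: project the exposure onto the instruments
Z = np.column_stack([np.ones(n), G])
stage1, *_ = np.linalg.lstsq(Z, exposure, rcond=None)
exposure_hat = Z @ stage1

# Stage 2: regress the outcome on the predicted exposure
X2 = np.column_stack([np.ones(n), exposure_hat])
stage2, *_ = np.linalg.lstsq(X2, outcome, rcond=None)

# Naive OLS for comparison: biased upward by the confounder u
Xn = np.column_stack([np.ones(n), exposure])
naive, *_ = np.linalg.lstsq(Xn, outcome, rcond=None)
print("2SLS estimate:", stage2[1], "naive OLS:", naive[1])
```

Because the instruments are independent of u, the two-stage estimate stays near the true effect while the naive regression absorbs the confounding; the paper's generalized least squares estimators extend this logic to logistic outcome models.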
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Smith, Paul F; Ganesh, Siva; Liu, Ping
2013-10-30
Regression is a common statistical tool for prediction in neuroscience. However, linear regression is by far the most common form of regression used, with regression trees receiving comparatively little attention. In this study, the results of conventional multiple linear regression (MLR) were compared with those of random forest regression (RFR), in the prediction of the concentrations of 9 neurochemicals in the vestibular nucleus complex and cerebellum that are part of the l-arginine biochemical pathway (agmatine, putrescine, spermidine, spermine, l-arginine, l-ornithine, l-citrulline, glutamate and γ-aminobutyric acid (GABA)). The R² values for the MLRs were higher than the proportion of variance explained values for the RFRs: 6/9 of them were ≥ 0.70 compared to 4/9 for RFRs. Even the variables that had the lowest R² values for the MLRs, e.g. ornithine (0.50) and glutamate (0.61), had much lower proportion of variance explained values for the RFRs (0.27 and 0.49, respectively). The RSE values for the MLRs were lower than those for the RFRs in all but two cases. In general, MLRs seemed to be superior to the RFRs in terms of predictive value and error. In the case of this data set, MLR appeared to be superior to RFR in terms of its explanatory value and error. This result suggests that MLR may have advantages over RFR for prediction in neuroscience with this kind of data set, but that RFR can still have good predictive value in some cases. Copyright © 2013 Elsevier B.V. All rights reserved.
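The qualitative finding, linear regression outperforming random forests when the underlying relationship is essentially linear, is easy to reproduce on synthetic data. This sketch uses scikit-learn and invented predictors, not the neurochemical dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 400
X = rng.normal(size=(n, 5))                                # simulated predictors
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.5, n)  # linear ground truth

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
mlr = LinearRegression().fit(Xtr, ytr)
rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("MLR held-out R^2:", mlr.score(Xte, yte))
print("RFR held-out R^2:", rfr.score(Xte, yte))
```

On data with strong nonlinearities or interactions the ranking can flip, which is consistent with the abstract's caveat that RFR can still have good predictive value in some cases.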
Wang, Xulong; Philip, Vivek M; Ananda, Guruprasad; White, Charles C; Malhotra, Ankit; Michalski, Paul J; Karuturi, Krishna R Murthy; Chintalapudi, Sumana R; Acklin, Casey; Sasner, Michael; Bennett, David A; De Jager, Philip L; Howell, Gareth R; Carter, Gregory W
2018-03-05
Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized linear mixed model (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo (MCMC) sampling and maximal likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer's Disease Sequencing Project (ADSP). This study contains 570 individuals from 111 families, each with Alzheimer's disease diagnosed at one of four confidence levels. With Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer's disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The encoded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with AD-related neuropathology. In summary, this work provides implementation of a flexible, generalized mixed model approach in a Bayesian framework for association studies. Copyright © 2018, Genetics.
An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems
Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri
2018-01-01
The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbouring integer points. Successful implementation of these algorithms was achieved on various test problems.
A Mixed-Integer Linear Programming approach to wind farm layout and inter-array cable routing
DEFF Research Database (Denmark)
Fischetti, Martina; Leth, John-Josef; Borchersen, Anders Bech
2015-01-01
A Mixed-Integer Linear Programming (MILP) approach is proposed to optimize the turbine allocation and inter-array offshore cable routing. The two problems are considered with a two steps strategy, solving the layout problem first and then the cable problem. We give an introduction to both problems...... and present the MILP models we developed to solve them. To deal with interference in the onshore cases, we propose an adaptation of the standard Jensen’s model, suitable for 3D cases. A simple Stochastic Programming variant of our model allows us to consider different wind scenarios in the optimization...
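The layout step is a classic site-selection MILP: binary variables for candidate positions, linear spacing/interference constraints, and a linear power objective. A toy sketch with scipy.optimize.milp (available in SciPy ≥ 1.9); the six sites, their power values and the adjacency rule are invented, and the real model's Jensen-wake interference terms are omitted:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy instance: 6 candidate turbine sites on a line with expected power
# yields; interference forbids choosing two adjacent sites; at most 3
# turbines may be built.
power = np.array([5.0, 7.0, 6.0, 8.0, 4.0, 6.5])
n = power.size

A = np.zeros((n - 1, n))                      # x[i] + x[i+1] <= 1
for i in range(n - 1):
    A[i, i] = A[i, i + 1] = 1.0
cons = [LinearConstraint(A, -np.inf, 1.0),
        LinearConstraint(np.ones((1, n)), 0.0, 3.0)]   # turbine budget

res = milp(c=-power,                          # milp minimizes, so negate
           integrality=np.ones(n),            # all variables integer (binary)
           bounds=Bounds(0, 1),
           constraints=cons)
chosen = np.round(res.x).astype(int)
print("chosen sites:", np.flatnonzero(chosen), "total power:", power @ chosen)
```

The cable-routing step described above would be a second MILP over the chosen sites, with flow-conservation constraints along candidate cable arcs.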
A randomized controlled trial of group Stepping Stones Triple P: a mixed-disability trial.
Roux, Gemma; Sofronoff, Kate; Sanders, Matthew
2013-09-01
Stepping Stones Triple P (SSTP) is a parenting program designed for families of a child with a disability. The current study involved a randomized controlled trial of Group Stepping Stones Triple P (GSSTP) for a mixed-disability group. Participants were 52 families of children diagnosed with an Autism Spectrum Disorder, Down syndrome, Cerebral Palsy, or an intellectual disability. The results demonstrated significant improvements in parent-reported child behavior, parenting styles, parental satisfaction, and conflict about parenting. Results among participants were similar despite children's differing impairments. The intervention effect was maintained at 6-month follow-up. The results indicate that GSSTP is a promising intervention for a mixed-disability group. Limitations of the study, along with areas for future research, are also discussed. © FPI, Inc.
Thermodynamics versus Kinetics Dichotomy in the Linear Self-Assembly of Mixed Nanoblocks.
Ruiz, L; Keten, S
2014-06-05
We report classical and replica exchange molecular dynamics simulations that establish the mechanisms underpinning the growth kinetics of a binary mix of nanorings that form striped nanotubes via self-assembly. A step-growth coalescence model captures the growth process of the nanotubes, which suggests that high aspect ratio nanostructures can grow by obeying the universal laws of self-similar coarsening, contrary to systems that grow through nucleation and elongation. Notably, striped patterns do not depend on specific growth mechanisms, but are governed by tempering conditions that control the likelihood of depropagation and fragmentation.
Random crystal field effects on the integer and half-integer mixed-spin system
Yigit, Ali; Albayrak, Erhan
2018-05-01
In this work, we have focused on the random crystal field effects on the phase diagrams of the mixed spin-1 and spin-5/2 Ising system obtained by utilizing the exact recursion relations (ERR) on the Bethe lattice (BL). The distribution function P(D_i) = p δ[D_i − D(1 + α)] + (1 − p) δ[D_i − D(1 − α)] is used to randomize the crystal field. The phase diagrams are found to exhibit second- and first-order phase transitions depending on the values of α, D and p. It is also observed that the model displays a tricritical point, an isolated point, a critical end point and three compensation temperatures for suitable values of the system parameters.
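The quoted distribution is a two-point mixture: each site's crystal field takes the value D(1 + α) with probability p and D(1 − α) otherwise. Sampling it, as one would when averaging observables over disorder realizations, is a one-liner; the check below compares the empirical mean to pD(1 + α) + (1 − p)D(1 − α):

```python
import numpy as np

rng = np.random.default_rng(9)

def sample_crystal_fields(n, D, alpha, p):
    # Two-point random crystal field: D*(1+alpha) with probability p,
    # D*(1-alpha) with probability 1-p
    strong = rng.random(n) < p
    return np.where(strong, D * (1.0 + alpha), D * (1.0 - alpha))

D, alpha, p = 1.0, 0.5, 0.3    # illustrative parameter values
Di = sample_crystal_fields(100_000, D, alpha, p)
mean_theory = p * D * (1 + alpha) + (1 - p) * D * (1 - alpha)
print("empirical mean:", Di.mean(), "theoretical mean:", mean_theory)
```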
Assessing robustness of designs for random effects parameters for nonlinear mixed-effects models.
Duffull, Stephen B; Hooker, Andrew C
2017-12-01
Optimal designs for nonlinear models are dependent on the choice of parameter values. Various methods have been proposed to provide designs that are robust to uncertainty in the prior choice of parameter values. These methods are generally based on estimating the expectation of the determinant (or a transformation of the determinant) of the information matrix over the prior distribution of the parameter values. For high dimensional models this can be computationally challenging. For nonlinear mixed-effects models the question arises as to the importance of accounting for uncertainty in the prior value of the variances of the random effects parameters. In this work we explore the influence of the variance of the random effects parameters on the optimal design. We find that the method for approximating the expectation and variance of the likelihood is of potential importance for considering the influence of random effects. The most common approximation to the likelihood, based on a first-order Taylor series approximation, yields designs that are relatively insensitive to the prior value of the variance of the random effects parameters and under these conditions it appears to be sufficient to consider uncertainty on the fixed-effects parameters only.
Ossola, Giovanni; Sokal, Alan D
2004-08-01
We show that linear congruential pseudo-random-number generators can cause systematic errors in Monte Carlo simulations using the Swendsen-Wang algorithm, if the lattice size is a multiple of a very large power of 2 and one random number is used per bond. These systematic errors arise from correlations within a single bond-update half-sweep. The errors can be eliminated (or at least radically reduced) by updating the bonds in a random order or in an aperiodic manner. It also helps to use a generator of large modulus (e.g., 60 or more bits).
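The low-order-bit weakness behind these systematic errors can be illustrated directly. The sketch below (plain Python; the generator constants are the common "Numerical Recipes" parameters chosen for illustration only, not the generator studied in the paper) shows that for an LCG with power-of-2 modulus and odd multiplier and increment, the lowest output bit has period 2, so a simulation that consumes one number per bond on a lattice whose size is a power of 2 revisits the same low-bit pattern in lockstep with the lattice.

```python
# Illustrative LCG with modulus 2^32 (assumed parameters, not from the paper).
M = 2**32
A, C = 1664525, 1013904223

def lcg_stream(seed, n):
    """Return n successive states of the LCG x -> (A*x + C) mod M."""
    x, out = seed, []
    for _ in range(n):
        x = (A * x + C) % M
        out.append(x)
    return out

xs = lcg_stream(12345, 16)
# With A and C both odd, x_{k+1} = x_k + 1 (mod 2), so the lowest bit
# strictly alternates: period 2, regardless of the seed.
low_bits = [x & 1 for x in xs]
print(low_bits)
```

Bit k of such a generator has period at most 2^(k+1), which is why updating bonds in a random or aperiodic order, or using a larger modulus, weakens the correlation with power-of-2 lattice sizes.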
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
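The contrast drawn above can be sketched in a few lines (this is not the authors' analysis: the data, effect sizes, and variable names below are simulated for illustration). A naive Pearson correlation pools the repeated conditions as if they were independent observations, whereas a linear mixed-effects model adds a random intercept per subject to absorb between-subject baseline differences.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subj, n_cond = 20, 4
subj = np.repeat(np.arange(n_subj), n_cond)
cond = np.tile(np.arange(n_cond), n_subj)
baseline = rng.normal(0, 2.0, n_subj)[subj]          # between-subject offsets
neural = rng.normal(0, 1.0, subj.size) + 0.5 * cond  # hypothetical neural measure
behav = 0.8 * neural + baseline + rng.normal(0, 0.5, subj.size)
df = pd.DataFrame({"subject": subj, "cond": cond, "neural": neural, "behav": behav})

# Naive approach: Pearson r treating the 4 repeated conditions as independent
r, p = pearsonr(df["neural"], df["behav"])

# LME: fixed effect of the neural measure, random intercept per subject
m = smf.mixedlm("behav ~ neural", df, groups=df["subject"]).fit()
print(round(r, 3), round(float(m.params["neural"]), 3))
```

The fixed-effect slope from the mixed model is the within-subject relationship of interest; the pooled Pearson r mixes it with between-subject baseline variance.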
Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan
2017-08-28
The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
Li, Yanning; Canepa, Edward S.; Claudel, Christian G.
2013-10-01
This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using boundary flow control, as a Linear Program. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP (or MILP if the objective function depends on boolean variables). Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality. © 2013 IEEE.
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
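A stripped-down illustration of the quantile-regression idea (ignoring the mixed-effects, left-censoring, and measurement-error components of the full model; the data and variable names below are simulated): fitting the conditional quantiles directly lets the covariate effect differ across quantiles of the response, which matters when the response distribution is heavy-tailed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
cd4 = rng.normal(0, 1, n)
# heavy-tailed noise (Student t, 3 df) so quantile slopes need not coincide
viral = 2.0 - 0.5 * cd4 + rng.standard_t(df=3, size=n)
df = pd.DataFrame({"viral": viral, "cd4": cd4})

# One quantile-regression fit per quantile of interest
fits = {q: smf.quantreg("viral ~ cd4", df).fit(q=q) for q in (0.25, 0.5, 0.75)}
for q, res in fits.items():
    print(q, round(float(res.params["cd4"]), 2))
```

A mean-regression fit would give a single slope pulled around by the tails; the quantile fits describe the covariate effect at each part of the response distribution separately.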
Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.
2000-01-01
DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering, at large scales from >100 Mbp down to much smaller ones, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
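The random-breakage model mentioned in the closing sentence has a simple Monte Carlo analogue (a generic sketch, not the DNAbreak software): breakpoints placed uniformly at random on a chromosome of length L partition it into n + 1 fragments, and because the fragments of each chromosome always sum to L, the mean fragment size is exactly L/(n + 1).

```python
import random
random.seed(42)

def random_breakage(length, n_breaks):
    """Place n_breaks uniformly at random on [0, length]; return fragment sizes."""
    cuts = sorted(random.uniform(0, length) for _ in range(n_breaks))
    edges = [0.0] + cuts + [length]
    return [b - a for a, b in zip(edges, edges[1:])]

# Mean fragment size over many simulated chromosomes equals L / (n + 1)
L, n = 100.0, 9
sizes = [s for _ in range(2000) for s in random_breakage(L, n)]
mean_size = sum(sizes) / len(sizes)
print(round(mean_size, 1))
```

The clustered-break (RLC) formalism generalizes this by placing breaks in correlated groups along the chromosome rather than independently, which shifts the fragment-size distribution toward an excess of small fragments.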
Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model
Musekiwa, Alfred; Manda, Samuel O. M.; Mwambi, Henry G.; Chen, Ding-Geng
2016-01-01
Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and illustrate them with a practical example involving a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results. PMID:27798661
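The multivariate alternative to separate univariate meta-analyses can be sketched in a few lines (fixed-effect version only, with invented numbers; the paper's models additionally include between-study random effects and richer covariance structures). Given per-study vectors of effect sizes y_i at the time points, with within-study covariance matrices S_i encoding the serial correlation, the pooled estimate is the generalized-least-squares combination mu = (sum S_i^-1)^-1 sum S_i^-1 y_i.

```python
import numpy as np

# Hypothetical effect sizes (e.g. log hazard ratios) at 2 time points from
# 3 studies, with assumed within-study covariances carrying serial correlation.
y = [np.array([-0.3, -0.4]), np.array([-0.2, -0.5]), np.array([-0.4, -0.3])]
S = [np.array([[0.04, 0.02], [0.02, 0.05]]) for _ in y]

W = [np.linalg.inv(s) for s in S]            # per-study precision matrices
V = np.linalg.inv(sum(W))                    # covariance of the pooled estimate
mu = V @ sum(w @ yi for w, yi in zip(W, y))  # GLS pooled effect per time point
se = np.sqrt(np.diag(V))
print(mu.round(3), se.round(3))
```

With equal within-study covariances, as here, the pooled vector reduces to the elementwise mean of the study effects, but the standard errors still reflect the assumed off-diagonal serial correlation, which the univariate-per-time-point approach discards.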
Improvement of Characteristics of Clayey Soil Mixed with Randomly Distributed Natural Fibers
Maity, J.; Chattopadhyay, B. C.; Mukherjee, S. P.
2017-11-01
In subgrade construction for flexible road pavement, the properties of locally available clayey soils can be improved by providing randomly distributed fibers in the soil. The fibers added in subgrade construction are expected to provide a better compact interlocking system between the fiber and the soil grains, greater resistance to deformation and quicker dissipation of pore water pressure, thus helping consolidation and strengthening. Many natural fibers like jute, coir and sabai grass, which are economical and eco-friendly, are grown in abundance in India. If suitable, they can be used as additive materials in the subgrade soil, resulting in increased strength and decreased deformability. Such application will also reduce the cost of road construction by allowing a smaller thickness of the pavement layer. In this paper, the efficacy of using natural jute, coir or sabai grass fibers with locally available clayey soil has been studied. A series of Standard Proctor tests, soaked and unsoaked California Bearing Ratio (CBR) tests, and Unconfined Compressive Strength tests were performed on locally available clayey soil mixed with different types of natural fiber of various lengths and proportions, to study the improvement of the strength properties of fiber-soil composites placed at optimum moisture content. From the test results, it was observed that there was a substantial increase in CBR value for the clayey soil when mixed with increasing percentages of all three types of randomly distributed natural fibers, up to 2% of the dry weight of soil. The CBR attained its maximum value when the length of the fibers mixed with the clay, for all fiber types studied, was 10 mm.
A mixed integer linear programming model applied in barge planning for Omya
Directory of Open Access Journals (Sweden)
David Bredström
2015-12-01
Full Text Available This article presents a mathematical model for barge transport planning on the river Rhine, which is part of a decision support system (DSS recently taken into use by the Swiss company Omya. The system is operated by Omya’s regional office in Cologne, Germany, responsible for distribution planning at the regional distribution center (RDC in Moerdijk, the Netherlands. The distribution planning is a vital part of supply chain management of Omya’s production of Norwegian high quality calcium carbonate slurry, supplied to European paper manufacturers. The DSS operates within a vendor managed inventory (VMI setting, where the customer inventories are monitored by Omya, who decides upon the refilling days and quantities delivered by barges. The barge planning problem falls into the category of inventory routing problems (IRP and is further characterized with multiple products, heterogeneous fleet with availability restrictions (the fleet is owned by third party, vehicle compartments, dependency of barge capacity on water-level, multiple customer visits, bounded customer inventories and rolling planning horizon. There are additional modelling details which had to be considered to make it possible to employ the model in practice at a sufficient level of detail. To the best of our knowledge, we have not been able to find similar models covering all these aspects in barge planning. This article presents the developed mixed-integer programming model and discusses practical experience with its solution. Briefly, it also puts the model into the context of the entire business case of value chain optimization in Omya.
INVESTIGATION OF SECONDARY MIXED RADIATION FIELD AROUND A MEDICAL LINEAR ACCELERATOR.
Tulik, Piotr; Tulik, Monika; Maciak, Maciej; Golnik, Natalia; Kabat, Damian; Byrski, Tomasz; Lesiak, Jan
2017-09-29
The aim of this study is to investigate the secondary mixed radiation field around a linac, as the first part of an overall assessment of the out-of-field contribution of neutron dose for new advanced radiation dose delivery techniques. All measurements were performed around a Varian Clinac 2300 C/D accelerator at the Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Krakow Branch. Recombination chambers REM-2 and GW2 were used for determination of the recombination index of radiation quality Q4 (as an estimate of the quality factor Q), measurement of the total tissue dose Dt, and calculation of the gamma and neutron components of Dt. Estimation of Dt and Q4 allowed calculation of the ambient dose equivalent H*(10) per monitor unit (MU). Measurements around the linac were performed at the height of the middle of the linac's head (three positions) and at the height of the linac's isocentre (five positions). Estimation of the secondary radiation level was carried out for seven different configurations of the upper and lower jaw positions, with the multileaf collimator set open or closed in each position. The study included two photon beam modes: 6 and 18 MV. The spatial distribution of ambient dose equivalent H*(10) per MU at the height of the linac's head and at the standard couch height for patients during routine treatment, as well as the relative contributions of gamma and neutron secondary radiation inside the treatment room, were evaluated. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Ahmed, A Shafath; Charles, P David; Cholan, R; Russia, M; Surya, R; Jailance, L
2015-08-01
This study aimed to evaluate whether the extract of Morinda citrifolia L. mixed with irreversible hydrocolloid powder decreases microbial contamination during impression making without affecting the resulting casts. Twenty volunteers were randomly divided into two groups (n = 10). In Group A, 30 ml of extract of M. citrifolia L. diluted in 30 ml of water was mixed with the irreversible hydrocolloid material to make the impressions. In Group B, 30 ml of deionized water was mixed with the irreversible hydrocolloid material to make the impressions, following which the surface roughness and dimensional stability of the casts were evaluated. The extract of M. citrifolia L. mixed with irreversible hydrocolloid decreased the percentage of microorganisms when compared with water (P < 0.05) without affecting the impression quality.
Directory of Open Access Journals (Sweden)
Phil Diamond
2003-01-01
Full Text Available Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a≥0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
International Nuclear Information System (INIS)
Maeda, Atsushi; Sasayama, Tatsuo; Iwai, Takashi; Aizawa, Sakuei; Ohwada, Isao; Aizawa, Masao; Ohmichi, Toshihiko; Handa, Muneo
1988-11-01
Two pins containing uranium-plutonium carbide fuels which differ in stoichiometry, i.e. (U,Pu)C1.0 and (U,Pu)C1.1, were assembled into a capsule, ICF-37H, and were irradiated in JRR-2 up to 1.0 at% burnup at a linear heat rate of 420 W/cm. After being cooled for about one year, the irradiated capsule was transferred to the Reactor Fuel Examination Facility, where non-destructive examinations of the fuel pins were carried out in the β-γ cells and destructive ones in two α-γ inert gas atmosphere cells. The release rates of fission gas were low enough, 0.44% from the (U,Pu)C1.0 fuel pin and 0.09% from the (U,Pu)C1.1 fuel pin, which is reasonable because of the low central temperature of the fuel pellets, about 1000 deg C; it is estimated that the release is mainly governed by recoil and knock-out mechanisms. Volume swelling of the fuels was observed to be in the range of 1.3 ∼ 1.6% for carbide fuels below 1000 deg C. The respective open porosities of the (U,Pu)C1.0 and (U,Pu)C1.1 fuels were 1.3% and 0.45%, in accordance with the release behavior of the fission gas. Metallographic observation of the radial sections of the pellets showed an increase of pore size and crystal grain size in the center and middle regions of the (U,Pu)C1.0 pellets. The chemical interaction between fuel pellets and claddings in the carbide fuels is the penetration of carbon from the fuels into the stainless steel tubes. The depth of the corrosion layer on the inner sides of the cladding tubes ranged from 10 ∼ 15 μm in the (U,Pu)C1.0 fuel to 15 ∼ 25 μm in the (U,Pu)C1.1 fuel, which correlates with the carbon potential of the fuels, possibly affecting the amount of carbon penetration. (author)
Directory of Open Access Journals (Sweden)
H Kazemipoor
2012-04-01
Full Text Available A multi-skilled project scheduling problem (MSPSP) has generally been presented to schedule a project with staff members as resources. Each activity in the project network requires different skills, and the staff members likewise have different skills. This makes the MSPSP a special type of multi-mode resource-constrained project scheduling problem (MM-RCPSP) with a huge number of modes. Given the importance of this issue, in this paper a mixed integer linear programming formulation for the MSPSP is presented. Due to the complexity of the problem, a meta-heuristic algorithm is proposed in order to find near-optimal solutions. To validate the performance of the algorithm, results are compared against exact solutions obtained with the LINGO solver. The results are promising and show that optimal or near-optimal solutions are derived for small instances, and good solutions for larger instances, in reasonable time.
Directory of Open Access Journals (Sweden)
Mbarek Elbounjimi
2015-11-01
Full Text Available Closed-loop supply chain network design is a critical issue due to its impact on both the economic and environmental performance of the supply chain. In this paper, we address the problem of designing a multi-echelon, multi-product and capacitated closed-loop supply chain network. First, a mixed-integer linear programming formulation is developed to maximize the total profit. The main contribution of the proposed model is addressing two economic viability issues of closed-loop supply chains. The first is ensuring the collection of a sufficient quantity of end-of-life products by retailers against an acquisition price. The second is exploiting the benefits of co-locating forward and reverse facilities. The presented model is solved with LINGO for some test problems. Computational results and sensitivity analysis are conducted to show the performance of the proposed model.
Diaz, Francisco J
2016-10-15
We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.
Energy Technology Data Exchange (ETDEWEB)
Ramos, Daniel, E-mail: daniel.ramos@csic.es; Frank, Ian W.; Deotare, Parag B.; Bulu, Irfan; Lončar, Marko [School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138 (United States)
2014-11-03
We investigate the coupling between mechanical and optical modes supported by coupled, freestanding, photonic crystal nanobeam cavities. We show that localized cavity modes for a given gap between the nanobeams provide weak optomechanical coupling with out-of-plane mechanical modes. However, we show that the coupling can be significantly increased, more than an order of magnitude for the symmetric mechanical mode, due to optical resonances that arise from the interaction of the localized cavity modes with standing waves formed by the reflection from the substrate. Finally, amplification of motion for the symmetric mode has been observed and attributed to the strong optomechanical interaction of our hybrid system. The amplitude of these self-sustained oscillations is large enough to put the system into a non-linear oscillation regime where a mixing between the mechanical modes is experimentally observed and theoretically explained.
Pillai, Goonaseelan Colin; Mentré, France; Steimer, Jean-Louis
2005-04-01
Few scientific contributions have made significant impact unless there was a champion who had the vision to see the potential for its use in seemingly disparate areas-and who then drove active implementation. In this paper, we present a historical summary of the development of non-linear mixed effects (NLME) modeling up to the more recent extensions of this statistical methodology. The paper places strong emphasis on the pivotal role played by Lewis B. Sheiner (1940-2004), who used this statistical methodology to elucidate solutions to real problems identified in clinical practice and in medical research and on how he drove implementation of the proposed solutions. A succinct overview of the evolution of the NLME modeling methodology is presented as well as ideas on how its expansion helped to provide guidance for a more scientific view of (model-based) drug development that reduces empiricism in favor of critical quantitative thinking and decision making.
Furlotte, Nicholas A; Eskin, Eleazar
2015-05-01
Multiple-trait association mapping, in which multiple traits are used simultaneously in the identification of genetic variants affecting those traits, has recently attracted interest. One class of approaches for this problem builds on classical variance component methodology, utilizing a multitrait version of a linear mixed model. These approaches both increase power and provide insights into the genetic architecture of multiple traits. In particular, it is possible to estimate the genetic correlation, which is a measure of the portion of the total correlation between traits that is due to additive genetic effects. Unfortunately, the practical utility of these methods is limited since they are computationally intractable for large sample sizes. In this article, we introduce a reformulation of the multiple-trait association mapping approach by defining the matrix-variate linear mixed model. Our approach reduces the computational time necessary to perform maximum-likelihood inference in a multiple-trait model by utilizing a data transformation. By utilizing a well-studied human cohort, we show that our approach provides more than a 10-fold speedup, making multiple-trait association feasible in a large population cohort on the genome-wide scale. We take advantage of the efficiency of our approach to analyze gene expression data. By decomposing gene coexpression into a genetic and environmental component, we show that our method provides fundamental insights into the nature of coexpressed genes. An implementation of this method is available at http://genetics.cs.ucla.edu/mvLMM. Copyright © 2015 by the Genetics Society of America.
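The kind of data transformation that makes mixed-model inference tractable can be sketched as follows (a generic single-trait spectral-decomposition trick in the spirit of the paper's reformulation, not its actual matrix-variate algorithm; all data and names below are simulated). After rotating by the eigenvectors of the relatedness matrix K, the covariance sigma_g^2 K + sigma_e^2 I becomes diagonal, so after one O(n^3) decomposition each likelihood evaluation costs only O(n).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
G = rng.normal(size=(n, 500))
K = G @ G.T / 500                    # genetic relatedness matrix (simulated)
d, U = np.linalg.eigh(K)             # one-time O(n^3) spectral decomposition

X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.multivariate_normal(
    np.zeros(n), 0.5 * K + 0.5 * np.eye(n))

def neg_loglik(ratio):
    """Profile -2 log-likelihood in delta = sigma_e^2 / sigma_g^2 (up to a constant)."""
    Dy, DX = U.T @ y, U.T @ X        # rotate data once per evaluation
    w = d + ratio                    # diagonal covariance in rotated coordinates
    beta = np.linalg.solve((DX.T / w) @ DX, (DX.T / w) @ Dy)
    r = Dy - DX @ beta
    s2 = np.mean(r**2 / w)           # profiled-out genetic variance scale
    return n * np.log(s2) + np.sum(np.log(w))

# Cheap 1-D grid search over the variance ratio, reusing the decomposition
best = min(np.linspace(0.05, 5, 100), key=neg_loglik)
print(best > 0)
```

The multiple-trait case adds a trait-covariance factor, but the same idea, diagonalizing the sample-side covariance once, is what turns an intractable maximum-likelihood problem into repeated cheap evaluations.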
International Nuclear Information System (INIS)
Cummins, J.D.
1965-02-01
With several white noise sources, the various transmission paths of a linear multivariable system may be determined simultaneously. This memorandum considers the restrictions on pseudo-random two-state sequences needed to effect simultaneous identification of several transmission paths and the consequential rejection of cross-coupled signals in linear multivariable systems. The conditions for simultaneous identification are established by an example, which shows that the integration time required is large, i.e. it tends to infinity, as it does when white noise sources are used. (author)
Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin
2017-09-27
Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed-model (WGLIMM) was used to deal with this hierarchical structure data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalence in the four residence groups - urban, second city, suburban, and town/rural - was 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect (p < 0.05). Multiple mixed logistic regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p < 0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data. Social and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions. Creative Commons Attribution License
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMMs). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated-measures data by using LMMs to compare the performance difference between two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested using Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate when using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
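The equivalence stated above (the modified t-test equals the t-test on a dummy-coded predictor in linear regression) can be checked numerically. A minimal numpy/scipy sketch with invented control scores; this only illustrates the identity, not the authors' LMM extension:

```python
import numpy as np
from scipy import stats

def modified_t(case, controls):
    """Crawford-Howell modified t-test: compare one case with a small
    control sample (two-sided p, df = n - 1)."""
    n = len(controls)
    m, s = np.mean(controls), np.std(controls, ddof=1)
    t = (case - m) / (s * np.sqrt(1 + 1 / n))
    return t, 2 * stats.t.sf(abs(t), df=n - 1)

def dummy_regression_t(case, controls):
    """Same statistic via OLS with a dummy predictor that is 1 for the
    single case and 0 for every control."""
    y = np.append(controls, case)
    d = np.append(np.zeros(len(controls)), 1.0)
    X = np.column_stack([np.ones(len(y)), d])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)      # residual df = n - 1
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

controls = np.array([98.0, 103.0, 95.0, 101.0, 99.0, 104.0])  # invented data
t1, p = modified_t(90.0, controls)
t2 = dummy_regression_t(90.0, controls)
# t1 and t2 agree to numerical precision
```

The agreement follows because the dummy-coded case contributes a zero residual, leaving the same n - 1 residual degrees of freedom as the modified t-test.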
Abramov, R. V.
2011-12-01
Chaotic multiscale dynamical systems are common in many areas of science, one example being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos in the slow variables, and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate that linear slow-fast coupling with the total energy conservation property promotes the suppression of chaos in the slow variables through rapid mixing in the fast variables, both theoretically and through numerical simulations. A suitable mathematical framework is developed, connecting the slow dynamics on the tangent subspaces to the infinite-time linear response of the mean state to a constant external forcing at the fast variables. Additionally, it is shown that the uncoupled dynamics for the slow variables may remain chaotic while the complete multiscale system loses chaos and becomes completely predictable at the slow variables through increasing chaos and turbulence at the fast variables. This result contradicts common intuition: one would naturally expect that coupling a slow, weakly chaotic system with a much faster and more strongly chaotic system would generally increase chaos in the slow variables.
Specific heat of the Ising linear chain in a Random field
International Nuclear Information System (INIS)
Silva, P.R.; Sa Barreto, F.C. de
1984-01-01
Starting from correlation identities for the Ising model, the effect of a random field on the one-dimensional version of the model is studied. Explicit results for the magnetization, the two-particle correlation function, and the specific heat are obtained for an uncorrelated distribution of the random fields. (Author)
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
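The special case described above can be verified in its simplest form: with a working Poisson model containing only an intercept and the treatment indicator, the MLE of the treatment coefficient coincides exactly with the marginal log rate ratio log(ȳ₁/ȳ₀). A small numpy sketch on simulated data (the Newton-Raphson loop is a generic fitting routine, not the authors' targeted maximum likelihood machinery):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
a = rng.integers(0, 2, size=n)                   # randomized treatment arm
y = rng.poisson(lam=np.where(a == 1, 3.0, 2.0))  # count outcome

# Newton-Raphson for the Poisson working model  y ~ intercept + treatment
X = np.column_stack([np.ones(n), a.astype(float)])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)                 # score
    hess = X.T @ (X * mu[:, None])        # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

marginal_log_rr = np.log(y[a == 1].mean() / y[a == 0].mean())
# beta[1] matches marginal_log_rr to solver precision
```

With covariates added to the working model the identity is no longer exact in finite samples, which is where the paper's asymptotic unbiasedness result does the work.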
Large deviations and mixing for dissipative PDEs with unbounded random kicks
Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.
2018-02-01
We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroup, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.
Cubas, Glória; Valentini, Fernanda; Camacho, Guilherme Brião; Leite, Fábio; Cenci, Maximiliano Sérgio; Pereira-Cenci, Tatiana
2014-01-01
This study aimed to evaluate whether chlorhexidine mixed with irreversible hydrocolloid powder decreases microbial contamination during impression taking without affecting the resulting casts. Twenty volunteers were randomly divided into two groups (n = 10) according to the liquid used for impression taking in conjunction with irreversible hydrocolloid: 0.12% chlorhexidine or water. Surface roughness and dimensional stability of the casts were evaluated. Chlorhexidine mixed with irreversible hydrocolloid decreased the percentage of microorganisms when compared with water (P < 0.05) without affecting impression quality.
Directory of Open Access Journals (Sweden)
Pau Baya
2011-05-01
Remenat (Catalan; "revoltillo" in Spanish; Scrambled in English) is a dish which, in Catalunya, consists of a beaten egg cooked with vegetables or other ingredients, normally prawns or asparagus. It is delicious. Scrambled refers to the action of mixing the beaten egg with other ingredients in a pan, normally using a wooden spoon. Thought is frequently an amalgam of past ideas put through a spinner and rhythmically shaken around like a cocktail until a uniform and dense paste is made. This malleable product, rather like a cake mixture, can be deformed by pulling it out, rolling it around, adapting its shape to the commands of one's hands or the tool which is being used on it. In the piece Mixed, the contortion of the wood seeks to reproduce the plasticity of this slow, heavy movement. Each piece lays itself on the next consecutively, like a tongue of incandescent lava advancing slowly but with unstoppable inertia.
Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin
2011-01-01
Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
Directory of Open Access Journals (Sweden)
Yu-Pin Liao
2017-11-01
In the past few decades, demand forecasting has become relatively difficult due to rapid changes in the global environment. This research illustrates the use of the make-to-stock (MTS) production strategy in order to explain how forecasting plays an essential role in business management. The linear mixed-effect (LME) model has been extensively developed and is widely applied in various fields. However, no study has used the LME model for business forecasting. We suggest that the LME model be used as a tool for prediction and to overcome environmental complexity. The data analysis is based on real data from an international display company, where the company needs accurate demand forecasting before adopting a MTS strategy. The forecasting result from the LME model is compared to commonly used approaches, including the regression model, autoregressive model, time series model, and exponential smoothing model, with the results revealing that the prediction performance provided by the LME model is more stable than that of the other methods. Furthermore, product types in the data are regarded as a random effect in the LME model, hence the demands of all types can be predicted simultaneously using a single LME model. In contrast, some approaches require splitting the data into different type categories and then predicting the type demand by establishing a model for each type. This feature also demonstrates the practicability of the LME model in real business operations.
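The point that a single model with a type-level random effect covers all product types can be illustrated with a crude empirical-Bayes sketch: per-type means are shrunk toward the grand mean with BLUP-style weights. This is a hand-rolled stand-in for a fitted LME; the product names and the two variance components below are invented, not estimated:

```python
import numpy as np

# toy demand history per product type (all names and numbers are hypothetical)
demand = {
    "panel_A": [102, 98, 110, 105],
    "panel_B": [55, 60, 58],
    "panel_C": [210, 205, 220, 215, 212],
}

sigma2_e = 25.0   # assumed residual (within-type) variance
sigma2_u = 400.0  # assumed between-type (random-intercept) variance

grand = np.mean([x for xs in demand.values() for x in xs])
forecast = {}
for t, xs in demand.items():
    n = len(xs)
    # shrinkage weight grows toward 1 as the type accumulates observations
    w = sigma2_u / (sigma2_u + sigma2_e / n)
    forecast[t] = grand + w * (np.mean(xs) - grand)
```

Every forecast lies between its type mean and the grand mean; a real LME fit would estimate the variance components from the data instead of assuming them.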
Wrigley, Christopher James (Inventor); Hancock, Bruce R. (Inventor); Newton, Kenneth W. (Inventor); Cunningham, Thomas J. (Inventor)
2014-01-01
An analog-to-digital converter (ADC) converts pixel voltages from a CMOS image into a digital output. A voltage ramp generator generates a voltage ramp that has a linear first portion and a non-linear second portion. A digital output generator generates a digital output based on the voltage ramp, the pixel voltages, and comparator output from an array of comparators that compare the voltage ramp to the pixel voltages. A return lookup table linearizes the digital output values.
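A hedged sketch of the single-slope scheme described above: a ramp that is linear and then (here, arbitrarily) quadratic covers bright pixels with coarser steps, and a return lookup table maps the comparator's stop index back to an approximately linear digital code. All constants and the quadratic shape are invented for illustration, not taken from the patent:

```python
import numpy as np

STEPS = 1024
idx = np.arange(STEPS)
# ramp: linear over the first half, quadratic (coarser steps) near full scale
ramp = np.where(idx < 512, idx / 1024.0,
                0.5 + ((idx - 512) / 512.0) ** 2 * 0.5)

def convert(pixel_voltage):
    """Comparator model: report the first ramp step that reaches the pixel."""
    return int(np.searchsorted(ramp, pixel_voltage))

# return lookup table: invert the ramp so output codes are linear in voltage
lut = (ramp * (STEPS - 1)).round().astype(int)

code = convert(0.75)       # raw, non-linear comparator stop index
linearized = lut[code]     # approximately proportional to the input voltage
```

The raw code 875 for a 0.75 V input shows the compression of the non-linear portion; after the lookup table it lands near the ideal linear code 767.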
International Nuclear Information System (INIS)
Mashayekh, Salman; Stadler, Michael; Cardoso, Gonçalo; Heleno, Miguel
2017-01-01
Highlights: • This paper presents a MILP model for optimal design of multi-energy microgrids. • Our microgrid design includes optimal technology portfolio, placement, and operation. • Our model includes microgrid electrical power flow and heat transfer equations. • The case study shows advantages of our model over aggregate single-node approaches. • The case study shows the accuracy of the integrated linearized power flow model. - Abstract: Optimal microgrid design is a challenging problem, especially for multi-energy microgrids with electricity, heating, and cooling loads as well as sources, and multiple energy carriers. To address this problem, this paper presents an optimization model formulated as a mixed-integer linear program, which determines the optimal technology portfolio, the optimal technology placement, and the associated optimal dispatch, in a microgrid with multiple energy types. The developed model uses a multi-node modeling approach (as opposed to an aggregate single-node approach) that includes electrical power flow and heat flow equations, and hence, offers the ability to perform optimal siting considering physical and operational constraints of electrical and heating/cooling networks. The new model is founded on the existing optimization model DER-CAM, a state-of-the-art decision support tool for microgrid planning and design. The results of a case study that compares single-node vs. multi-node optimal design for an example microgrid show the importance of multi-node modeling. It has been shown that single-node approaches are not only incapable of optimal DER placement, but may also result in sub-optimal DER portfolio, as well as underestimation of investment costs.
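A deliberately tiny version of the technology-portfolio decision can make the mixed-integer structure concrete. The sketch below brute-forces the binary build/don't-build variables instead of calling a MILP solver, and it omits the power-flow and heat-transfer constraints entirely; the technology names and numbers are invented:

```python
from itertools import product

# candidate technologies: (capex, operating cost, capacity kW) -- invented
techs = {
    "pv":      (300, 10, 40),
    "chp":     (500, 30, 60),
    "battery": (200, 5,  25),
}
demand_kw = 70

best = None
for picks in product([0, 1], repeat=len(techs)):   # all 0-1 assignments
    names = [t for t, x in zip(techs, picks) if x]
    cap = sum(techs[t][2] for t in names)
    if cap < demand_kw:            # feasibility: installed capacity meets demand
        continue
    cost = sum(techs[t][0] + techs[t][1] for t in names)
    if best is None or cost < best[0]:
        best = (cost, names)
```

With these numbers the cheapest feasible portfolio is CHP plus battery; a real DER-CAM-style model would add node placement, dispatch over time, and linearized network equations to the same 0-1 skeleton.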
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or hydraulic concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
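The circle-mixing step can be sketched as follows: two zero-mean fields with the same covariance are combined with weights (cos θ, sin θ), which preserves the covariance for every θ because cos²θ + sin²θ = 1; the forward-model solutions at the observation point are computed for a few equally spaced angles and then interpolated around the circle. The "forward model" below is a trivial linear read-out and the least-squares fit is a stand-in for the authors' interpolation scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
z1 = rng.standard_normal(100)   # stand-ins for two conditional random fields
z2 = rng.standard_normal(100)
target = 0.6                    # toy observed value at the conditioning point

def forward(field):
    """Toy linear 'model run': read the field at the observation point."""
    return field[0]

# n equally spaced mixing angles on the unit circle
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
sols = np.array([forward(np.cos(t) * z1 + np.sin(t) * z2) for t in angles])

# interpolate the solutions (not the objective): for a linear forward model
# sol(t) = a cos t + b sin t, recoverable exactly by least squares
A = np.column_stack([np.cos(angles), np.sin(angles)])
a, b = np.linalg.lstsq(A, sols, rcond=None)[0]

fine = np.linspace(0, 2 * np.pi, 4000)
interp_sol = a * np.cos(fine) + b * np.sin(fine)
best_theta = fine[np.argmin((interp_sol - target) ** 2)]
best_field = np.cos(best_theta) * z1 + np.sin(best_theta) * z2
```

Only 8 "model runs" were needed; the 4000 candidate mixtures were scored from the interpolated solutions, which is the cost saving the modification above exploits.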
Pseudo-random properties of a linear congruential generator investigated by b-adic diaphony
Stoev, Peter; Stoilova, Stanislava
2017-12-01
In this paper we continue the study of the diaphony, defined in the b-adic number system, and extend it in different directions. We investigate this diaphony as a tool for estimating the pseudorandom properties of some of the most widely used random number generators. This is done by evaluating the distribution of specially constructed two-dimensional nets built from the generated random numbers. The aim is to assess how suitable the generated numbers are for calculations in numerical methods (Monte Carlo, etc.).
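A minimal linear congruential generator and the overlapping-pair construction behind such two-dimensional nets can be sketched as follows. The multiplier and increment are the common Numerical-Recipes-style constants, and the crude chi-square box count at the end is only a stand-in for the b-adic diaphony itself:

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator returning n floats in [0, 1)."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return out

u = lcg(seed=42, n=2001)
pairs = list(zip(u[:-1], u[1:]))     # overlapping points of a 2-D net

# crude uniformity check: count points per cell of a 4x4 grid and compare
# with the expected count per cell
counts = [0] * 16
for x, y in pairs:
    counts[int(x * 4) * 4 + int(y * 4)] += 1
expected = len(pairs) / 16
chi2 = sum((cnt - expected) ** 2 / expected for cnt in counts)
```

A generator with poor lattice structure shows up as an inflated statistic on such pair nets; the diaphony refines this idea with a b-adic weighting of all digit-aligned boxes at once.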
Bayraktar, Turgay
2017-01-01
In this note, we obtain the asymptotic expected number of real zeros for random polynomials of the form $$f_n(z)=\sum_{j=0}^na^n_jc^n_jz^j$$ where $a^n_j$ are independent and identically distributed real random variables with bounded $(2+\delta)$th absolute moment and the deterministic numbers $c^n_j$ are normalizing constants for the monomials $z^j$ within a weighted $L^2$-space induced by a radial weight function satisfying suitable smoothness and growth conditions.
Optimizing Linear Functions with Randomized Search Heuristics - The Robustness of Mutation
DEFF Research Database (Denmark)
Witt, Carsten
2012-01-01
The analysis of randomized search heuristics on classes of functions is fundamental for the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of the simple (1...
A differential calculus for random matrices with applications to (max,+)-linear stochastic systems
Heidergott, B.F.
2001-01-01
We introduce the concept of weak differentiability for random matrices and thereby obtain closed-form analytical expressions for derivatives of functions of random matrices. More specifically, we develop a calculus of weak differentiation for random matrices that resembles the standard calculus of ...
Fermi sea term in the relativistic linear muffin-tin-orbital transport theory for random alloys
Czech Academy of Sciences Publication Activity Database
Turek, Ilja; Kudrnovský, Josef; Drchal, Václav
2014-01-01
Roč. 89, č. 6 (2014), 064405 ISSN 1098-0121 R&D Projects: GA ČR(CZ) GAP204/11/1228 Institutional support: RVO:68081723 ; RVO:68378271 Keywords : electron transport * anomalous Hall effect * random alloys Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 3.736, year: 2014
de Bruin, Anique B H; Smits, Niels; Rikers, Remy M J P; Schmidt, Henk G
2008-11-01
In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training, were analysed since they had started playing chess seriously. The results revealed that deliberate practice (i.e. serious chess study alone and serious chess play) strongly contributed to chess performance. The influence of deliberate practice was not only observable in current performance, but also over chess players' careers. Moreover, although the drop-outs' chess ratings developed more slowly over time, both the persistent and drop-out chess players benefited to the same extent from investments in deliberate practice. Finally, the effect of gender on chess performance proved to be much smaller than the effect of deliberate practice. This study provides longitudinal support for the monotonic benefits assumption of deliberate practice, by showing that over chess players' careers, deliberate practice has a significant effect on performance, and to the same extent for chess players of different ultimate performance levels. The results of this study are not in line with critique raised against the deliberate practice theory that the factors deliberate practice and talent could be confounded.
Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D
2018-01-15
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods, including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and its computation is constrained by the number of quadrature points; while the ML method also suffers from this constraint, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
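The contrast between the ad hoc HDL imputation and a likelihood that treats censored values properly can be shown in a stripped-down setting: a plain left-censored normal sample rather than the paper's GLMM, fitted with a Tobit-style likelihood via scipy (the simulation parameters and detection limit are invented):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
mu_true, sigma_true, lod = 2.0, 1.0, 2.0
x = rng.normal(mu_true, sigma_true, size=4000)
censored = x < lod                        # below the limit of detection
obs = np.where(censored, lod, x)          # we only see the value or the LOD flag

# ad hoc HDL: replace censored values by half the detection limit
hdl_mean = np.where(censored, lod / 2, obs).mean()

def negloglik(params):
    """Censored-normal likelihood: density for observed values,
    CDF mass at the detection limit for censored ones."""
    mu, log_sigma = params
    s = np.exp(log_sigma)
    ll = stats.norm.logpdf(obs[~censored], mu, s).sum()
    ll += censored.sum() * stats.norm.logcdf(lod, mu, s)
    return -ll

res = optimize.minimize(negloglik, x0=[obs.mean(), 0.0], method="Nelder-Mead")
mu_ml = res.x[0]
```

The censored-likelihood estimate recovers the true mean, while the HDL mean is biased downward; the paper's MCNR method addresses the harder problem of doing this inside a mixed model without quadrature.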
Directory of Open Access Journals (Sweden)
Nicola Koper
2012-03-01
Resource selection functions (RSF) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM) and generalized estimating equations (GEE) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
Directory of Open Access Journals (Sweden)
Akyene Tetteh
2017-04-01
Background: Although the Internet boosts business profitability, without certain activities like efficient transportation and scheduling, products ordered via the Internet may reach their destination very late. The environmental problems (vehicle part disposal, carbon monoxide [CO], nitrogen oxides [NOx] and hydrocarbons [HC]) associated with transportation are mostly not accounted for by industries. Objectives: The main objective of this article is to minimise negative externality costs in e-commerce environments. Method: A 0-1 mixed integer linear programming (0-1 MILP) model was used to model the problem statement. The result was further analysed using the externality percentage impact factor (EPIF). Results: The simulation results suggest that (1) the mode of ordering refined petroleum products does not impact the cost of distribution, (2) an increase in private cost is directly proportional to the externality cost, (3) externality cost is largely controlled by the government and the number of vehicles used in the distribution, and this is in no way influenced by the mode of request (i.e. Internet or otherwise), and (4) externality cost may be reduced by using a more eco-friendly fuel system.
Catallo, Cristina; Jack, Susan M.; Ciliska, Donna; MacMillan, Harriet L.
2013-01-01
Little is known about how to systematically integrate complex qualitative studies within the context of randomized controlled trials. A two-phase sequential explanatory mixed methods study was conducted in Canada to understand how women decide to disclose intimate partner violence in emergency department settings. Mixing an RCT (with a subanalysis of data) with a grounded theory approach required methodological modifications to maintain the overall rigour of this mixed methods study. Modifications were made to the following areas of the grounded theory approach to support the overall integrity of the mixed methods study design: recruitment of participants, maximum variation and negative case sampling, data collection, and analysis methods. Recommendations for future studies include: (1) planning at the outset to incorporate a qualitative approach with a RCT and to determine logical points during the RCT to integrate the qualitative component and (2) consideration for the time needed to carry out a RCT and a grounded theory approach, especially to support recruitment, data collection, and analysis. Data mixing strategies should be considered during early stages of the study, so that appropriate measures can be developed and used in the RCT to support initial coding structures and data analysis needs of the grounded theory phase. PMID:23577245
Kron, Frederick W; Fetters, Michael D; Scerbo, Mark W; White, Casey B; Lypson, Monica L; Padilla, Miguel A; Gliva-McConvey, Gayle A; Belfore, Lee A; West, Temple; Wallace, Amelia M; Guetterman, Timothy C; Schleicher, Lauren S; Kennedy, Rebecca A; Mangrulkar, Rajesh S; Cleary, James F; Marsella, Stacy C; Becker, Daniel M
2017-04-01
To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group's experiences and learning preferences. A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR's intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. MPathic-VR's virtual human simulation offers an effective and engaging means of advanced communication training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A randomized controlled trial of a telehealth parenting intervention: A mixed-disability trial.
Hinton, Sharon; Sheffield, Jeanie; Sanders, Matthew R; Sofronoff, Kate
2017-06-01
The quality of parenting a child receives has a major impact on development, wellbeing and future life opportunities. This study examined the efficacy of Triple P Online - Disability (TPOL-D), a telehealth intervention for parents of children with a disability. Ninety-eight parents and carers of children aged 2-12 years diagnosed with a range of developmental, intellectual and physical disabilities were randomly assigned to either the intervention (n = 51) or treatment-as-usual (n = 47) control group. At post-intervention, parents receiving the TPOL-D intervention demonstrated significant improvements in parenting practices and parenting self-efficacy; however, a significant change in parent-reported child behavioral and emotional problems was not detected. At 3-month follow-up, intervention gains were maintained and/or enhanced. A significant decrease in parent-reported child behavioral and emotional problems was also detected at this time. The results indicate that TPOL-D is a promising telehealth intervention for a mixed-disability group. Copyright © 2017 Elsevier Ltd. All rights reserved.
Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2017-01-01
The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate of maternal age of onset of dementia. Prior literature has proposed methods to address the problem of censoring due to assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right censored covariates. In simulation studies, we compare these methods to the simple, but inefficient complete case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.
Faster Simulation Methods for the Non-Stationary Random Vibrations of Non-Linear MDOF Systems
DEFF Research Database (Denmark)
Askar, A.; Köylüoglu, H. U.; Nielsen, Søren R. K.
1996-01-01
subject to nonstationary Gaussian white noise excitation, as an alternative to conventional direct simulation methods. These alternative simulation procedures rely on an assumption of local Gaussianity during each time step. This assumption is tantamount to various linearizations of the equations....... Such a treatment offers higher rates of convergence, faster speed and higher accuracy. These procedures are compared to the direct Monte Carlo simulation procedure, which uses a fourth order Runge-Kutta scheme with the white noise process approximated by a broad band Ruiz-Penzien broken line process...
Linear bioconvection in a suspension of randomly swimming, gyrotactic micro-organisms
DEFF Research Database (Denmark)
Bees, Martin Alan; Hill, N.A.
1998-01-01
We have analyzed the initiation of pattern formation in a layer of finite depth for Pedley and Kessler's new model [J. Fluid Mech. 212, 155 (1990)] of bioconvection. This is the first analysis of bioconvection in a realistic geometry using a model that deals with random swimming in a rational...... manner. We have considered the effects of a distribution of swimming speeds, which has not previously received attention in theoretical papers and find that it is important in calculating the diffusivity. Our predictions of initial pattern wavelengths are reasonably close to the observed ones but better...
Mamhidir, Anna-Greta; Sjölund, Britt-Marie; Fläckman, Birgitta; Wimo, Anders; Sköldunger, Anders; Engström, Maria
2017-02-28
Chronic pain affects nursing home residents' daily life. Pain assessment is central to adequate pain management. The overall aim was to investigate the effects of a pain management intervention on nursing home residents and to describe staff's experiences of the intervention. A cluster-randomized trial and a mixed-methods approach were used, with nursing homes randomly assigned to an intervention or comparison group. After theoretical and practical training sessions, the intervention group performed systematic pain assessments using predominantly observational scales, with external and internal facilitators supporting the implementation. No measures were taken in the comparison group; pain management continued as before, but corresponding training was provided after the study. Resident data were collected at baseline and at two follow-ups using validated scales and record reviews. Nurse group interviews were carried out twice. Primary outcome measures were wellbeing and proxy-measured pain; secondary outcome measures were ADL-dependency and pain documentation. Using both non-parametric statistics on the residential level and generalized estimating equation (GEE) models to take clustering effects into account, the results revealed non-significant interaction effects for the primary outcome measures, while for ADL-dependency measured with the Katz-ADL there was a significant interaction effect. Comparison group (n = 66 residents) Katz-ADL values showed increased dependency over time, while the intervention group (n = 98) demonstrated no significant change over time. In the intervention group, 13/44 residents showed decreased pain scores over the period and 14/44 had no pain score changes ≥ 30% in either direction, measured with Doloplus-2. Furthermore, 17/44 residents showed increased pain scores ≥ 30% over time, indicating pain/risk for pain; 8 were identified at the first assessment and 9 were new, i.e. developed pain over time. No significant changes in the use of drugs were found in any of
Zoellner, Jamie; Cook, Emily; Chen, Yvonnes; You, Wen; Davy, Brenda; Estabrooks, Paul
2013-02-01
Excessive sugar-sweetened beverage (SSB) consumption and low health literacy skills have emerged as two public health concerns in the United States (US); however, there is limited research on how to effectively address these issues among adults. As guided by health literacy concepts and the Theory of Planned Behavior (TPB), this randomized controlled pilot trial applied the RE-AIM framework and a mixed-methods approach to examine an SSB intervention (SipSmartER), as compared to a matched-contact control intervention targeting physical activity (MoveMore). Both 5-week interventions included two interactive group sessions and three support telephone calls. Executing a patient-centered developmental process, the primary aim of this paper was to evaluate patient feedback on intervention content and structure. The secondary aim was to understand the potential reach (i.e., proportion enrolled, representativeness) and effectiveness (i.e., health behaviors, theorized mediating variables, quality of life) of SipSmartER. Twenty-five participants were randomized to SipSmartER (n=14) or MoveMore (n=11). Participants' intervention feedback was positive, ranging from 4.2-5.0 on a 5-point scale. Qualitative assessments revealed several opportunities to improve the clarity of learning materials, enhance instructions and communication, and refine research protocols. Although SSB consumption decreased more among the SipSmartER participants (-256.9 ± 622.6 kcals) than among control participants (-199.7 ± 404.6 kcals), there were no significant group differences. Across both groups, there were significant improvements for SSB attitudes, SSB behavioral intentions, and two media literacy constructs. The value of using a patient-centered approach in the developmental phases of this intervention was apparent, and pilot findings suggest decreased SSB consumption may be achieved through targeted health literacy and TPB strategies. Future efforts are needed to examine
Energy Technology Data Exchange (ETDEWEB)
Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.
2016-11-01
The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures, since an improperly chosen structure may lead to biased estimation of means and incorrect conclusions. In this work we focused on the adaptive response of cultivars to environments, modeled by LMMs with different variance-covariance structures, and identified possible limitations of inference when an inadequate structure is used. We used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations during three growing seasons (2008/2009-2010/2011), from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars' adaptation to environments, we calculated adjusted means for each cultivar-location combination under models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure; the restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We also found that the factor-analytic structure is a good tool to describe cultivars' reactions to environments, and it can be successfully used for METs data after determining the optimal number of components for each dataset. (Author)
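The variance-covariance structures compared in this abstract have simple closed forms. The sketch below (pure Python, with made-up parameter values, not values from the wheat dataset) writes down a compound-symmetry matrix and a rank-one factor-analytic matrix, whose off-diagonal entries are products of per-environment loadings:

```python
# Illustrative variance-covariance structures for an LMM over 4 environments.
# All parameter values are invented for demonstration.

def compound_symmetry(n, variance, covariance):
    """CS structure: one common variance, one shared covariance for all pairs."""
    return [[variance if i == j else covariance for j in range(n)]
            for i in range(n)]

def factor_analytic_1(loadings, psi):
    """FA(1) structure: sigma_ij = lambda_i * lambda_j, plus psi_i on the diagonal."""
    n = len(loadings)
    return [[loadings[i] * loadings[j] + (psi[i] if i == j else 0.0)
             for j in range(n)]
            for i in range(n)]

cs = compound_symmetry(4, variance=1.0, covariance=0.3)
fa = factor_analytic_1([0.9, 0.5, 0.7, 0.2], [0.2, 0.4, 0.3, 0.5])

# FA(1) allows each environment pair its own covariance (lambda_i * lambda_j),
# which CS cannot express; both matrices are symmetric.
assert all(fa[i][j] == fa[j][i] for i in range(4) for j in range(4))
print(cs[0][1], round(fa[0][1], 2))  # → 0.3 0.45
```

The contrast shows concretely why CS can mask adaptation patterns: it forces every pair of environments to share one covariance, while FA(1) lets covariances differ pair by pair.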
Directory of Open Access Journals (Sweden)
Nahed S. Hussein
2014-01-01
A numerical boundary integral scheme is proposed for the solution of the system of field equations of plane elasticity. The stresses are prescribed on one half of the circle, while the displacements are given on the other half. The considered problem with mixed boundary conditions on the circle is replaced by two problems with homogeneous boundary conditions, one of each type, having a common solution. The equations are reduced to a system of boundary integral equations, which is then discretized in the usual way, and the problem at this stage is reduced to the solution of a rectangular linear system of algebraic equations. The unknowns in this system of equations are the boundary values of four harmonic functions which define the full elastic solution, together with the unknown boundary values of stresses or displacements on the proper parts of the boundary. On the basis of the obtained results, it is inferred that a stress component has a singularity at each of the two separation points, thought to be of logarithmic type. The results are discussed and boundary plots are given. We have also calculated the unknown functions in the bulk directly from the given boundary conditions using the boundary collocation method. The obtained results in the bulk are discussed and three-dimensional plots are given. A tentative form for the singular solution is proposed, and the corresponding singular stresses and displacements are plotted in the bulk. The form of the singular tangential stress is seen to be compatible with the boundary values obtained earlier. The efficiency of the numerical schemes used is discussed.
Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A
2016-03-01
In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM, which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP, which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM proteins increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.
Lloyd-Jones, Luke R; Robinson, Matthew R; Yang, Jian; Visscher, Peter M
2018-04-01
Genome-wide association studies (GWAS) have identified thousands of loci that are robustly associated with complex diseases. The use of linear mixed model (LMM) methodology for GWAS is becoming more prevalent due to its ability to control for population structure and cryptic relatedness and to increase power. The odds ratio (OR) is a common measure of the association of a disease with an exposure (e.g., a genetic variant) and is readily available from logistic regression. However, when the LMM is applied to all-or-none traits, it provides estimates of genetic effects on the observed 0-1 scale, a different scale to that in logistic regression. This limits the comparability of results across studies, for example in a meta-analysis, and makes the interpretation of the magnitude of an effect from an LMM GWAS difficult. In this study, we derived transformations from the genetic effects estimated under the LMM to the OR that rely only on summary statistics. To test the proposed transformations, we used real genotypes from two large, publicly available data sets to simulate all-or-none phenotypes for a set of scenarios that differ in underlying model, disease prevalence, and heritability. Furthermore, we applied these transformations to GWAS summary statistics for type 2 diabetes generated from 108,042 individuals in the UK Biobank. In both simulation and real-data application, we observed very high concordance between the transformed OR from the LMM and either the simulated truth or estimates from logistic regression. The transformations derived and validated in this study improve the comparability of results from prospective and already performed LMM GWAS on complex diseases by providing a reliable transformation to a common comparative scale for the genetic effects. Copyright © 2018 by the Genetics Society of America.
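The observed-scale/odds-ratio gap can be illustrated with the standard first-order approximation log(OR) ≈ b / (K(1−K)), where b is the slope from a linear model of the 0-1 trait and K is the sample prevalence. This is a hedged sketch of the idea only, not the exact transformations derived in the paper; the allele frequency, baseline log-odds, and true OR below are invented:

```python
import math
import random

random.seed(1)

# Simulate a binary trait from a logistic model with a known odds ratio,
# then recover the OR from a linear (0-1 scale) regression slope.
TRUE_OR = 1.2
beta = math.log(TRUE_OR)
alpha = -1.0           # baseline log-odds (sets the prevalence; hypothetical)
n = 200_000
maf = 0.3              # allele frequency (hypothetical)

g = [sum(random.random() < maf for _ in range(2)) for _ in range(n)]
y = [1 if random.random() < 1 / (1 + math.exp(-(alpha + beta * gi))) else 0
     for gi in g]

# Ordinary least-squares slope of y on g: the effect on the observed 0-1 scale.
mg = sum(g) / n
my = sum(y) / n
b = (sum((gi - mg) * (yi - my) for gi, yi in zip(g, y))
     / sum((gi - mg) ** 2 for gi in g))

# First-order transformation back to the odds-ratio scale.
K = my                 # sample prevalence
or_hat = math.exp(b / (K * (1 - K)))
print(round(or_hat, 2))  # close to TRUE_OR for small effects
```

For small effects the recovered OR lands near the simulated truth, which is the comparability the paper's (more careful) transformations provide for LMM GWAS summary statistics.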
Guzzo, M. M.; Holanda, P. C.; Reggiani, N.
2003-08-01
The neutrino energy spectrum observed in KamLAND is compatible with the predictions based on the Large Mixing Angle realization of the MSW (Mikheyev-Smirnov-Wolfenstein) mechanism, which provides the best solution to the solar neutrino anomaly. From the agreement between solar neutrino data and KamLAND observations, we can obtain the best-fit values of the mixing angle and the mass-squared difference. When fitting the MSW predictions to the solar neutrino data, it is assumed that the solar matter does not have any kind of perturbations, that is, that the matter density decays monotonically from the center to the surface of the Sun. There are reasons to believe, nevertheless, that the solar matter density fluctuates around the equilibrium profile. In this work, we analyzed the effect on the Large Mixing Angle parameters when the matter density fluctuates randomly around the equilibrium profile, solving the evolution equation for this case. We find that, in the presence of these density perturbations, the best-fit values of the mixing angle and the mass-squared difference assume smaller values compared with those obtained for the standard Large Mixing Angle solution without noise. Considering this effect of the random perturbations, the lowest island of allowed region for the KamLAND spectral data in parameter space must be considered, and we call it the very-low region.
Kilpela, Lisa Smith; Blomquist, Kerstin; Verzijl, Christina; Wilfred, Salomé; Beyl, Robbie; Becker, Carolyn Black
2016-06-01
The Body Project is a cognitive dissonance-based body image improvement program with ample research support among female samples. More recently, researchers have highlighted the extent of male body dissatisfaction and disordered eating behaviors; however, boys/men have not been included in the majority of body image improvement programs. This study aims to explore the efficacy of a mixed-gender Body Project compared with the historically female-only body image intervention program. Participants included male and female college students (N = 185) across two sites. We randomly assigned women to a mixed-gender modification of the two-session, peer-led Body Project (MG), the two-session, peer-led, female-only (FO) Body Project, or a waitlist control (WL), and men to either MG or WL. Participants completed self-report measures assessing negative affect, appearance-ideal internalization, body satisfaction, and eating disorder pathology at baseline, post-test, and at 2- and 6-month follow-up. Linear mixed-effects modeling was used to estimate the change from baseline over time for each dependent variable across conditions. For women, results were mixed regarding post-intervention improvement compared with WL, and were largely non-significant compared with WL at 6-month follow-up. Alternatively, results indicated that men in MG consistently improved compared with WL through 6-month follow-up on all measures except negative affect and appearance-ideal internalization. Results differed markedly between female and male samples, and were more promising for men than for women. Various explanations are provided, and further research is warranted prior to drawing firm conclusions regarding mixed-gender programming of the Body Project. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:591-602).
He, Yong-Heng; Tang, Zhi-Jun; Xu, Xiang-Tong; Huang, De-Quan; Zhang, Li-Shun; Tang, Qing-Zhu; Fan, Zhi-Min; Zou, Xian-Jun; Zou, Guo-Jun; Zhang, Chong-Yang; Hu, Fan; Xie, Biao; Li, Yan-Hua; Tong, Yao; Liu, Hong-Chang; Li, Ke; Luo, Yu-Lian; Liu, Fei; Situ, Guang-Wei; Liu, Zuo-Long
2017-12-01
To explore the safety and efficacy of the Ruiyun procedure for hemorrhoids (RPH), alone or combined with the simplified Milligan-Morgan hemorrhoidectomy (sMMH), in the treatment of mixed hemorrhoids. This was a randomized, controlled, balanced, multicenter study of 3000 patients with mixed hemorrhoids. Outcomes and postoperative complications were compared between five types of surgeries. The efficacy rate was highest in patients who received RPH+sMMH and decreased in the following order: RPH alone, MMH alone, procedure for prolapse and hemorrhoids (PPH) alone, and PPH+sMMH. Two further rankings of the surgery types were reported for other outcomes: RPH+sMMH, PPH alone, MMH alone, and PPH+sMMH; and PPH alone, RPH+sMMH, PPH+sMMH, and MMH alone.
Jonrinaldi, Hadiguna, Rika Ampuh; Salastino, Rades
2017-11-01
Environmental consciousness has received much attention nowadays. It concerns not only how to recycle, remanufacture, or reuse used end products but also how to optimize the operations of the reverse system. A previous study proposed a design of a reverse supply chain network for biodiesel from used cooking oil. However, that research focused on supply chain strategy rather than supply chain operations: it decided only how to structure the supply chain over the next few years and, in general terms, the process each stage would carry out. It did not consider the operational policies to be conducted by the companies in the supply chain. Companies need a policy for each stage of supply chain operations so as to produce an optimal supply chain system, including how to use all the resources that have been designed in order to achieve the objectives of the system. Therefore, this paper proposes a model to optimize the operational planning of a biodiesel supply chain network from used cooking oil. A mixed integer linear programming model is developed for the operational planning of the biodiesel supply chain in order to minimize the total operational cost. Based on the implementation of the developed model, the total operational cost of the biodiesel supply chain is lower than that of the previous research by 2,743,470.00, or 0.186%, over seven days of operational planning. Production costs contributed 74.6% of the total operational cost and the cost of purchasing the used cooking oil contributed 24.1%. The system should therefore pay particular attention to these two aspects, as changes in their values will cause significant effects on the total operational cost of the supply chain.
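An operational-planning MILP of this kind can be made concrete on a toy instance. All numbers below are hypothetical, and brute-force enumeration stands in for a real MILP solver; the point is the model structure, namely binary open/close decisions plus integer shipment quantities, minimizing fixed plus variable cost subject to capacity and demand:

```python
from itertools import product

# Toy reverse-supply-chain instance (all numbers hypothetical):
# two used-cooking-oil collection depots shipping to one biodiesel plant.
depots = {            # name: (capacity, fixed opening cost, cost per unit shipped)
    "A": (5, 10, 3),
    "B": (7, 15, 2),
}
demand = 8            # units the plant must receive

best_cost, best_plan = None, None
# Binary decisions: open each depot or not; integer decisions: units shipped.
for open_a, open_b in product([0, 1], repeat=2):
    caps = {"A": depots["A"][0] * open_a, "B": depots["B"][0] * open_b}
    for qa in range(caps["A"] + 1):
        qb = demand - qa
        if not 0 <= qb <= caps["B"]:
            continue
        cost = (open_a * depots["A"][1] + open_b * depots["B"][1]
                + qa * depots["A"][2] + qb * depots["B"][2])
        if best_cost is None or cost < best_cost:
            best_cost, best_plan = cost, {"A": qa, "B": qb}

print(best_cost, best_plan)  # → 42 {'A': 1, 'B': 7}
```

Neither depot alone can cover the demand, so both must open, and the cheaper depot B is loaded to capacity first. A real instance would hand the same objective and constraints to an MILP solver instead of enumerating.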
Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie
2018-05-01
Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units, and topography have not yet been considered. Data used in the study included the annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM), and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and single-level models for soil units and sample points, respectively. Additionally, three variance functions, the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heteroscedasticity, and three correlation structures, the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. The nested two-level model accounting for both heteroscedasticity, via CPP, and spatiotemporal correlation, via ARMA(1,1), showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R² = 0.9593). Variations between soil units and sample points that may affect the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
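The two correlation structures in the best-fitting model have compact closed forms: AR(1) decays geometrically as ρ^|i−j|, while ARMA(1,1) has a free lag-1 correlation γ followed by geometric decay at rate φ. A minimal sketch with illustrative parameters (not values estimated from the NDVI data):

```python
def ar1(n, rho):
    """AR(1): correlation decays geometrically with lag |i - j|."""
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

def arma11(n, phi, gamma):
    """ARMA(1,1): lag-1 correlation gamma, then geometric decay at rate phi."""
    return [[1.0 if i == j else gamma * phi ** (abs(i - j) - 1)
             for j in range(n)]
            for i in range(n)]

R_ar = ar1(4, rho=0.5)
R_arma = arma11(4, phi=0.5, gamma=0.3)

# ARMA(1,1) decouples the lag-1 correlation from the decay rate,
# which AR(1) cannot do (there the lag-1 correlation is rho itself).
print(R_ar[0][2], R_arma[0][1], R_arma[0][2])  # → 0.25 0.3 0.15
```

This extra degree of freedom is why ARMA(1,1) can fit year-to-year NDVI correlation better than AR(1) when the lag-1 correlation does not match the longer-lag decay.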
International Nuclear Information System (INIS)
Steinbrecher, Gyoergy; Weyssow, B.
2004-01-01
The extreme heavy tail and the power-law decay of the turbulent flux correlation observed in hot magnetically confined plasmas are modeled by a system of coupled Langevin equations describing a continuous time linear randomly amplified stochastic process where the amplification factor is driven by a superposition of colored noises which, in a suitable limit, generate a fractional Brownian motion. An exact analytical formula for the power-law tail exponent β is derived. The extremely small value of the heavy tail exponent and the power-law distribution of laminar times also found experimentally are obtained, in a robust manner, for a wide range of input values, as a consequence of the (asymptotic) self-similarity property of the noise spectrum. As a by-product, a new representation of the persistent fractional Brownian motion is obtained
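A minimal scalar caricature of a randomly amplified linear stochastic process, a Kesten-type recursion rather than the authors' coupled Langevin system driven by colored noises, already develops a heavy tail whenever the random amplification factor occasionally exceeds one while contracting on average. All parameter values below are illustrative:

```python
import math
import random

random.seed(0)

# x_{t+1} = a_t * x_t + b with random a_t: when E[log a] < 0 (stable on
# average) but P(a > 1) > 0 (occasional amplification), the stationary
# distribution of x has a power-law tail.
def simulate(n_steps=100, n_paths=5_000, b=1.0):
    samples = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            a = math.exp(random.gauss(-0.2, 0.6))  # random amplification factor
            x = a * x + b
        samples.append(x)
    return samples

s = sorted(simulate())
# Heavy tail: the largest values dwarf the median.
print(round(s[len(s) // 2], 1), round(s[-1], 1))
```

The tail exponent of such a recursion solves E[a^β] = 1, which is the kind of exact power-law-tail formula the abstract refers to for the full Langevin system.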
Luenser, Arne; Schurkus, Henry F; Ochsenfeld, Christian
2017-04-11
A reformulation of the random phase approximation within the resolution-of-the-identity (RI) scheme is presented that is competitive with canonical molecular orbital RI-RPA already for small- to medium-sized molecules. For electronically sparse systems, drastic speedups due to the reduced scaling behavior compared to the molecular orbital formulation are demonstrated. Our reformulation is based on two ideas, each independently useful: first, a Cholesky decomposition of density matrices, which reduces the scaling with basis set size for a fixed-size molecule by one order, leading to massive performance improvements; second, replacement of the overlap RI metric used in the original AO-RPA by an attenuated Coulomb metric. Accuracy is significantly improved compared to the overlap metric, while locality and sparsity of the integrals are retained, as is the effective linear scaling behavior.
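The first of the two ideas, a Cholesky decomposition, can be illustrated with a toy dense implementation (pure Python, no pivoting; the matrix is an arbitrary symmetric positive-definite stand-in, not an actual density matrix, which in practice is only positive semi-definite and handled with pivoted/incomplete variants):

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T (A must be SPD)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

A = [[4.0, 2.0, 0.6],
     [2.0, 5.0, 1.0],
     [0.6, 1.0, 3.0]]
L = cholesky(A)

# Verify the factorization entry by entry: A == L L^T.
recon = [[sum(L[i][k] * L[j][k] for k in range(3)) for j in range(3)]
         for i in range(3)]
ok = all(abs(recon[i][j] - A[i][j]) < 1e-12 for i in range(3) for j in range(3))
print(ok)  # → True
```

Writing the matrix as L Lᵀ replaces one dense object by a (potentially thin or sparse) factor, which is the source of the one-order scaling reduction the abstract describes.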
Iannotti, Lora L; Dulience, Sherlie Jean Louis; Green, Jamie; Joseph, Saminetha; François, Judith; Anténor, Marie-Lucie; Lesorogol, Carolyn; Mounce, Jacqueline; Nickerson, Nathan M
2014-01-01
Haiti has experienced rapid urbanization that has exacerbated poverty and undernutrition in large slum areas. Stunting affects 1 in 5 young children. We aimed to test the efficacy of a daily lipid-based nutrient supplement (LNS) for increased linear growth in young children. Healthy, singleton infants aged 6-11 mo (n = 589) were recruited from an urban slum of Cap Haitien and randomly assigned to receive: 1) a control; 2) a 3-mo LNS; or 3) a 6-mo LNS. The LNS provided 108 kcal and other nutrients including vitamin A, vitamin B-12, iron, and zinc at ≥80% of the recommended amounts. Infants were followed monthly for growth, morbidity, and developmental outcomes over a 6-mo intervention period and at one additional time point 6 mo postintervention to assess sustained effects. The Bonferroni multiple comparisons test was applied, and generalized least-squares (GLS) regressions with mixed effects were used to examine impacts longitudinally. Baseline characteristics did not differ by trial arm except for a higher mean age in the 6-mo LNS group. GLS modeling showed LNS supplementation for 6 mo significantly increased the length-for-age z score (±SE) by 0.13 ± 0.05 and the weight-for-age z score by 0.12 ± 0.02 compared with the control group after adjustment for child age (P < 0.001). The effects were sustained 6 mo postintervention. Morbidity and developmental outcomes did not differ by trial arm. A low-energy, fortified product improved the linear growth of young children in this urban setting. The trial was registered at clinicaltrials.gov as NCT01552512.
Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J
2014-08-27
State-of-the-art protein-ligand docking methods are generally limited by the traditionally low accuracy of their scoring functions, which are used to predict binding affinity and thus vital for discriminating between active and inactive compounds. Despite intensive research over the years, classical scoring functions have reached a plateau in their predictive performance. These assume a predetermined additive functional form for some sophisticated numerical features, and use standard multivariate linear regression (MLR) on experimental data to derive the coefficients. In this study we show that such a simple functional form is detrimental to the prediction performance of a scoring function, and that replacing linear regression by machine learning techniques like random forest (RF) can improve prediction performance. We investigate the conditions of applying RF under various contexts and find that, given sufficient training samples, RF manages to comprehensively capture the non-linearity between structural features and measured binding affinities. Incorporating more structural features and training with more samples can both boost RF performance. In addition, we analyze the importance of structural features to binding affinity prediction using the RF variable importance tool. Lastly, we use Cyscore, a top performing empirical scoring function, as a baseline for a comparison study. Machine-learning scoring functions are fundamentally different from classical scoring functions because the former circumvent the fixed functional form relating structural features with binding affinities. RF, but not MLR, can effectively exploit more structural features and more training samples, leading to higher prediction performance. The future availability of more X-ray crystal structures will further widen the performance gap between RF-based and MLR-based scoring functions. This further stresses the importance of substituting RF for MLR in scoring function development.
De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas
2015-03-01
Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food. Furthermore, it would help in implementation and in ensuring the feasibility of the suggested recommendations. To extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints. In addition, to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
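The extension can be illustrated with a toy formulation: choose ingredient amounts minimizing cost subject to nutrient minimums and a consistency constraint capping the starch fraction of the mix. All ingredient data below are invented, and a coarse grid search stands in for the linear-programming solver:

```python
from itertools import product

# Hypothetical ingredients: cost per gram, energy (kcal/g), protein (g/g),
# and starch fraction (which drives porridge consistency after swelling).
ingredients = {
    "maize flour": (0.002, 3.6, 0.09, 0.70),
    "bean flour":  (0.004, 3.4, 0.22, 0.35),
    "oil":         (0.006, 8.8, 0.00, 0.00),
}
names = list(ingredients)

best = None
# Grams of each ingredient per 100 g dry mix, in steps of 5 g.
for grams in product(range(0, 105, 5), repeat=3):
    if sum(grams) != 100:
        continue
    cost = sum(g * ingredients[n][0] for g, n in zip(grams, names))
    kcal = sum(g * ingredients[n][1] for g, n in zip(grams, names))
    prot = sum(g * ingredients[n][2] for g, n in zip(grams, names))
    starch = sum(g * ingredients[n][3] for g, n in zip(grams, names)) / 100
    # Nutrient minimums plus the consistency cap on the starch fraction.
    if kcal >= 400 and prot >= 12 and starch <= 0.60:
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, grams)))

print(best is not None)  # → True
```

Without the starch cap, the optimizer would load the mix with the cheapest starchy flour; the consistency constraint is exactly what rules out such nutritionally valid but inedibly thick solutions.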
Directory of Open Access Journals (Sweden)
Yamaguchi David K
2006-03-01
Background: Xylitol is a naturally occurring sugar substitute that has been shown to reduce the level of mutans streptococci in plaque and saliva and to reduce tooth decay. It has been suggested that the degree of reduction is dependent on both the amount and the frequency of xylitol consumption. For xylitol to be successfully and cost-effectively used in public health prevention strategies, dosing and frequency guidelines should be established. This study determined the reduction in mutans streptococci levels in plaque and unstimulated saliva in response to increasing frequency of xylitol gum use at a fixed total daily dose of 10.32 g over five weeks. Methods: Participants (n = 132) were randomized to either active groups (10.32 g xylitol/day) or a placebo control (9.828 g sorbitol and 0.7 g maltitol/day). All groups chewed 12 pieces of gum per day. The control group chewed 4 times/day and the active groups chewed xylitol gum at a frequency of 2 times/day, 3 times/day, or 4 times/day. The 12 gum pieces were evenly divided across the chewing occasions assigned to each group. Plaque and unstimulated saliva samples were taken at baseline and five weeks and were cultured on modified Mitis Salivarius agar for mutans streptococci enumeration. Results: There were no significant differences in mutans streptococci levels among the groups at baseline. At five weeks, mutans streptococci levels in plaque and unstimulated saliva showed a linear reduction with increasing frequency of xylitol chewing gum use at the constant daily dose. Although the difference observed for the group that chewed xylitol 2 times/day was consistent with the linear model, the difference was not significant. Conclusion: There was a linear reduction in mutans streptococci levels in plaque and saliva with increasing frequency of xylitol gum use at a constant daily dose. Reduction at a consumption frequency of 2 times per day was small and consistent with the linear-response line but was not statistically
DEFF Research Database (Denmark)
Sahana, Goutam; Mailund, Thomas; Lund, Mogens Sandø
2011-01-01
Introduction: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can...... be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called 'GENMIX (genealogy based mixed model)', which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. Subjects and Methods: We validated......
Freitas, Maria Cristina Carvalho de Almendra; Fagundes, Ticiane Cestari; Modena, Karin Cristina da Silva; Cardia, Guilherme Saintive; Navarro, Maria Fidela de Lima
2018-01-18
This prospective, randomized, split-mouth clinical trial evaluated the clinical performance of conventional glass ionomer cement (GIC; Riva Self-Cure, SDI), supplied in capsules or in powder/liquid kits and placed in Class I cavities in permanent molars by the Atraumatic Restorative Treatment (ART) approach. A total of 80 restorations were randomly placed in 40 patients aged 11-15 years. Each patient received one restoration with each type of GIC. The restorations were evaluated after periods of 15 days (baseline), 6 months, and 1 year, according to ART criteria. Wilcoxon matched pairs, multivariate logistic regression, and Gehan-Wilcoxon tests were used for statistical analysis. Patients were evaluated after 15 days (n=40), 6 months (n=34), and 1 year (n=29). Encapsulated GICs showed significantly superior clinical performance compared with hand-mixed GICs at baseline (p=0.017), 6 months (p=0.001), and 1 year (p=0.026). For hand-mixed GIC, a statistically significant difference was only observed over the period of baseline to 1 year (p=0.001). Encapsulated GIC presented statistically significant differences for the following periods: 6 months to 1 year (p=0.028) and baseline to 1 year (p=0.002). Encapsulated GIC presented a superior cumulative survival rate compared with hand-mixed GIC over one year. Importantly, both GICs exhibited decreased survival over time. Encapsulated GIC promoted better ART performance, with an annual failure rate of 24%; in contrast, hand-mixed GIC demonstrated a failure rate of 42%.
Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin
2010-01-01
Nonlinear mixed-modeling methods were used to estimate parameters in an individual-tree basal area growth model for shortleaf pine (Pinus echinata Mill.). Shortleaf pine individual-tree growth data were available from over 200 permanently established 0.2-acre fixed-radius plots located in naturally-occurring even-aged shortleaf pine forests on the...
DEFF Research Database (Denmark)
Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.
Geometrically non-linear multi-degree-of-freedom (MDOF) systems subject to random excitation are considered. New semi-analytical approximate forward difference equations for the lower-order non-stationary statistical moments of the response are derived from the stochastic differential equations of motion, and the accuracy of these equations is numerically investigated. For stationary excitations, the proposed method computes the stationary statistical moments of the response from the solution of non-linear algebraic equations.
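The moment-equation idea above can be illustrated on the simplest possible case. The sketch below is my own illustration, not the authors' forward-difference scheme: it simulates a linear first-order system dX = −aX dt + σ dW by Euler–Maruyama and checks the time-averaged second moment against the closed-form stationary value σ²/(2a).

```python
import random

def stationary_variance_mc(a=1.0, sigma=0.5, dt=1e-3, steps=300_000, seed=1):
    """Euler-Maruyama simulation of dX = -a*X dt + sigma dW.
    Time-averages X^2 after a burn-in; in the stationary regime this
    approaches the analytic second moment sigma**2 / (2*a)."""
    rng = random.Random(seed)
    x, acc, n = 0.0, 0.0, 0
    burn = steps // 10
    sqdt = dt ** 0.5
    for k in range(steps):
        x += -a * x * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
        if k >= burn:
            acc += x * x
            n += 1
    return acc / n

est = stationary_variance_mc()
exact = 0.5 ** 2 / (2 * 1.0)  # sigma^2 / (2a) = 0.125
```

With a = 1 the correlation time is of order one, so the ~300 simulated time units give a few hundred effectively independent samples and agreement to a few percent is expected.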
International Nuclear Information System (INIS)
Galbraith, R.F.; Laslett, G.M.; Green, P.F.; Duddy, I.R.
1990-01-01
Spontaneous fission of uranium atoms over geological time creates a random process of linearly shaped features (fission tracks) inside an apatite crystal. The theoretical distributions associated with this process are governed by the elapsed time and temperature history, but other factors are also reflected in empirical measurements as consequences of sampling by plane section and chemical etching. These include geometrical biases leading to over-representation of long tracks, the shape and orientation of host features when sampling totally confined tracks, and 'gaps' in heavily annealed tracks. We study the estimation of geological parameters in the presence of these factors using measurements on both confined tracks and projected semi-tracks. Of particular interest is a history of sedimentation, uplift and erosion giving rise to a two-component mixture of tracks in which the parameters reflect the current temperature, the maximum temperature and the timing of uplift. A full likelihood analysis based on all measured densities, lengths and orientations is feasible, but because some geometrical biases and measurement limitations are only partly understood it seems preferable to use conditional likelihoods given numbers and orientations of confined tracks. (author)
Energy Technology Data Exchange (ETDEWEB)
NONE
1999-12-31
This conference day was jointly organized by the 'university group of thermal engineering (GUT)' and the French association of thermal engineers. This book of proceedings contains 7 papers entitled: 'energy spectra of a passive scalar undergoing advection by a chaotic flow'; 'analysis of chaotic behaviours: from topological characterization to modeling'; 'temperature homogeneity by Lagrangian chaos in a direct current flow heat exchanger: numerical approach'; 'thermal instabilities in a mixed convection phenomenon: nonlinear dynamics'; 'experimental characterization study of the 3-D Lagrangian chaos by thermal analogy'; 'influence of coherent structures on the mixing of a passive scalar'; 'evaluation of the performance index of a chaotic advection effect heat exchanger for a wide range of Reynolds numbers'. (J.S.)
Energy Technology Data Exchange (ETDEWEB)
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
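Kutta's classical third-order Runge–Kutta method is one concrete member of the family discussed above. The sketch below is a generic textbook RK3, not necessarily one of the paper's five examples; it integrates y' = y and checks third-order convergence: halving the step size should cut the global error by roughly 2³ = 8.

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h * (k1 + 4 * k2 + k3) / 6

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n fixed RK3 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

f = lambda t, y: y                      # y' = y, exact solution exp(t)
err_50 = abs(integrate(f, 0.0, 1.0, 1.0, 50) - math.e)
err_100 = abs(integrate(f, 0.0, 1.0, 1.0, 100) - math.e)
```

For y' = y one RK3 step reproduces the Taylor expansion of e^h through h³, so the global error scales as h³ and the ratio err_50 / err_100 sits very close to 8.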
Abramov, Rafail V.
2011-01-01
Chaotic multiscale dynamical systems are common in many areas of science, one of the examples being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables, and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate that the linear slow-fast coupling with the total energy conservation prop...
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structure, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal design, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
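As a hedged illustration of the mixed-model machinery referenced above (not the authors' word-learning models, which involve multiple dimensions, item effects and groups), the sketch below simulates a balanced random-intercept model y_ij = μ + b_i + e_ij and recovers the two variance components with classical ANOVA method-of-moments estimators.

```python
import random
import statistics

def simulate_groups(n_groups=200, n_per=10, mu=2.0, tau=1.0, sigma=0.5, seed=7):
    """Balanced random-intercept data: y_ij = mu + b_i + e_ij,
    with b_i ~ N(0, tau^2) shared within group i and e_ij ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_groups):
        b = rng.gauss(0.0, tau)
        data.append([mu + b + rng.gauss(0.0, sigma) for _ in range(n_per)])
    return data

def variance_components(groups):
    """ANOVA (method-of-moments) estimators for the balanced design:
    E[MSW] = sigma^2 and E[MSB] = n*tau^2 + sigma^2."""
    g, n = len(groups), len(groups[0])
    means = [statistics.fmean(grp) for grp in groups]
    grand = statistics.fmean(means)
    msb = n * sum((m - grand) ** 2 for m in means) / (g - 1)
    msw = statistics.fmean(statistics.variance(grp) for grp in groups)
    return (msb - msw) / n, msw   # (tau^2 estimate, sigma^2 estimate)

tau2_hat, sigma2_hat = variance_components(simulate_groups())
```

Real GLMM software replaces these closed-form moment estimators with (restricted) maximum likelihood, but the decomposition of variability into between-reader and within-reader components is the same idea.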
Directory of Open Access Journals (Sweden)
Ashwin Patkar
Full Text Available OBJECTIVE: To examine the efficacy of ziprasidone vs. placebo for the depressive mixed state in patients with bipolar disorder type II or major depressive disorder (MDD). METHODS: 73 patients were randomized in a double-blinded, placebo-controlled study to ziprasidone (40-160 mg/d) or placebo for 6 weeks. They met DSM-IV criteria for a major depressive episode (MDE), while also meeting 2 or 3 (but no more and no fewer) DSM-IV manic criteria. They did not meet DSM-IV criteria for a mixed or manic episode. Baseline psychotropic drugs were continued unchanged. The primary endpoint measured was Montgomery-Åsberg Depression Rating Scale (MADRS) scores over time. The mean dose was 129.7±45.3 mg/day for ziprasidone and 126.1±47.1 mg/day for placebo. RESULTS: The primary outcome analysis indicated efficacy of ziprasidone versus placebo (p = 0.0038). Efficacy was more pronounced in type II bipolar disorder than in MDD (p = 0.036). Overall ziprasidone was well tolerated, without notable worsening of weight or extrapyramidal symptoms. CONCLUSIONS: There was a statistically significant benefit with ziprasidone versus placebo in this first RCT of any medication for the provisional diagnostic concept of the depressive mixed state. TRIAL REGISTRATION: Clinicaltrials.gov NCT00490542.
Statistical mechanics of the mixed majority-minority game with random external information
International Nuclear Information System (INIS)
Martino, A de; Giardina, I; Mosetti, G
2003-01-01
We study the asymptotic macroscopic properties of the mixed majority-minority game, modelling a population in which two types of heterogeneous adaptive agents, namely 'fundamentalists' driven by differentiation and 'trend-followers' driven by imitation, interact. The presence of a fraction f of trend-followers is shown to induce (a) a significant loss of informational efficiency with respect to a pure minority game (in particular, an efficient, unpredictable phase exists only for f < 1/2). We solve the model by means of an approximate static (replica) theory and by a direct dynamical (generating functional) technique. The two approaches coincide and match numerical results convincingly
Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco
2017-10-01
The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts determined by the ISO method after only 3 hours of incubation. One linear model, for cases where the number of plaques detected after 3 hours was between 4 and 26 PFU, had a linear fit of (1.48 × count at 3 h + 1.97); the other, for values >26 PFU, had a fit of (1.18 × count at 3 h + 2.95). If the number of plaques detected after 3 hours was below 4 PFU, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
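The two reported fits translate directly into a small predictor. In the sketch below only the coefficients and thresholds come from the abstract; the function name and the fallback for counts below 4 PFU (return None and use the full incubation) are my own framing.

```python
def predict_18h_count(count_3h):
    """Predict the standard (18 +/- 3) h coliphage plaque count from a 3 h
    reading, using the two linear fits reported in the abstract."""
    if count_3h < 4:
        return None   # too few plaques at 3 h: use the full (18 +/- 3) h incubation
    if count_3h <= 26:
        return 1.48 * count_3h + 1.97   # fit for 4-26 PFU at 3 h
    return 1.18 * count_3h + 2.95       # fit for >26 PFU at 3 h
```

For example, a 3 h reading of 10 PFU predicts about 16.8 PFU at the standard incubation time.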
Sung, Vivian W.; Borello-France, Diane; Dunivan, Gena; Gantz, Marie; Lukacz, Emily S.; Moalli, Pamela; Newman, Diane K.; Richter, Holly E.; Ridgeway, Beri; Smith, Ariana L.; Weidner, Alison C.; Meikle, Susan
2016-01-01
Introduction Mixed urinary incontinence (MUI) can be a challenging condition to manage. We describe the protocol design and rationale for the Effects of Surgical Treatment Enhanced with Exercise for Mixed Urinary Incontinence (ESTEEM) trial, designed to compare a combined conservative and surgical treatment approach versus surgery alone for improving patient-centered MUI outcomes at 12 months. Methods ESTEEM is a multi-site, prospective, randomized trial of female participants with MUI randomized to a standardized perioperative behavioral/pelvic floor exercise intervention plus midurethral sling versus midurethral sling alone. We describe our methods and four challenges encountered during the design phase: defining the study population, selecting relevant patient-centered outcomes, determining sample size estimates using a patient-reported outcome measure, and designing an analysis plan that accommodates MUI failure rates. A central theme in the design was patient-centeredness, which guided many key decisions. Our primary outcome is patient-reported MUI symptoms measured using the Urogenital Distress Inventory (UDI) score at 12 months. Secondary outcomes include quality of life, sexual function, cost-effectiveness, time to failure and need for additional treatment. Results The final study design was implemented in November 2013 across 8 clinical sites in the Pelvic Floor Disorders Network. As of February 27, 2016, 433 of 472 targeted participants have been randomized. Conclusions We describe the ESTEEM protocol and our methods for reaching consensus on methodological challenges in designing a trial for MUI by maintaining the patient perspective at the core of key decisions. This trial will provide information that can directly impact patient care and clinical decision-making. PMID:27287818
DEFF Research Database (Denmark)
and straightforward idea is to interpret effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights so that they depict what can be seen...... on a multifactorial sensory profile data set and compared to actual d-prime calculations based on ordinal regression modelling through the ordinal package. A generic ``plug-in'' implementation of the method is given in the SensMixed package, which again depends on the lmerTest package. We discuss and clarify the bias......
Directory of Open Access Journals (Sweden)
Grzegorz Lukasz Fojecki, MD
2018-03-01
Fojecki GL, Tiessen S, Osther PJS. Effect of Linear Low-Intensity Extracorporeal Shockwave Therapy for Erectile Dysfunction—12-Month Follow-Up of a Randomized, Double-Blinded, Sham-Controlled Study. Sex Med 2018;6:1–7.
Directory of Open Access Journals (Sweden)
Chaudhry Shazia H
2009-07-01
Full Text Available Abstract Background Cluster randomized trials are an increasingly important methodological tool in health research. In cluster randomized trials, intact social units or groups of individuals, such as medical practices, schools, or entire communities – rather than individuals themselves – are randomly allocated to intervention or control conditions, while outcomes are then observed on individual cluster members. The substantial methodological differences between cluster randomized trials and conventional randomized trials pose serious challenges to the current conceptual framework for research ethics. The ethical implications of randomizing groups rather than individuals are not addressed in current research ethics guidelines, nor have they even been thoroughly explored. The main objectives of this research are to: (1) identify ethical issues arising in cluster trials and learn how they are currently being addressed; (2) understand how ethics reviews of cluster trials are carried out in different countries (Canada, the USA and the UK); (3) elicit the views and experiences of trial participants and cluster representatives; (4) develop well-grounded guidelines for the ethical conduct and review of cluster trials by conducting an extensive ethical analysis and organizing a consensus process; (5) disseminate the guidelines to researchers, research ethics boards (REBs), journal editors, and research funders. Methods We will use a mixed-methods (qualitative and quantitative) approach incorporating both empirical and conceptual work. Empirical work will include a systematic review of a random sample of published trials, a survey and in-depth interviews with trialists, a survey of REBs, and in-depth interviews and focus group discussions with trial participants and gatekeepers. The empirical work will inform the concurrent ethical analysis which will lead to a guidance document laying out principles, policy options, and rationale for proposed guidelines.
Energy Technology Data Exchange (ETDEWEB)
Hsiao, C.; Mountain, D.C.; Chan, M.W.L.; Tsui, K.Y. (University of Southern California, Los Angeles (USA) McMaster Univ., Hamilton, ON (Canada) Chinese Univ. of Hong Kong, Shatin)
1989-12-01
In examining the municipal peak and kilowatt-hour demand for electricity in Ontario, the issue of homogeneity across geographic regions is explored. A common model across municipalities and geographic regions cannot be supported by the data. Considered are various procedures which deal with this heterogeneity and yet reduce the multicollinearity problems associated with regional specific demand formulations. The recommended model controls for regional differences assuming that the coefficients of regional-seasonal specific factors are fixed and different while the coefficients of economic and weather variables are random draws from a common population for any one municipality by combining the information on all municipalities through a Bayes procedure. 8 tabs., 41 refs.
Energy Technology Data Exchange (ETDEWEB)
Egbe, Daniel A.M.; Adam, Getachew; Pivrikas, Almantas; Ulbricht, Christoph; Ramil, Alberto M.; Sariciftci, Niyazi Serdar [Johannes Kepler Univ., Linz (AT). Linz Inst. for Organic Solar Cells (LIOS); Hoppe, Harald [Technische Univ. Ilmenau (Germany). Inst. of Physics and Inst. of Micro- and Nanotechnologies; Rathgeber, Silke [Mainz Univ. (Germany). Inst. of Physics
2010-07-01
The random distribution of segments of linear octyloxy side chains and of branched 2-ethylhexyloxy side chains on the backbone of anthracene-containing poly(p-phenylene-ethynylene)-alt-poly(p-phenylene-vinylene) (PPE-PPV) has resulted in a side-chain-based statistical copolymer, denoted AnE-PVstat, showing optimized features as compared to the well-defined homologues AnE-PVaa, -ab, -ba and -bb, whose constitutional units are incorporated into its backbone. WAXS studies on AnE-P's demonstrate the highest degree of order at the self-assembly state of AnE-PVstat, which is confirmed by its highly structured thin film absorption band. Electric-field-independent charge carrier mobility (μ_hole) for AnE-PVstat was demonstrated by CELIV and OFET measurements, both methods resulting in similar μ_hole values of up to 5.43 × 10⁻⁴ cm²/Vs. Upon comparison, our results show that charge carrier mobility as measured by the CELIV technique is predominantly an intrachain process and less an interchain one, which is in line with past photoconductivity results from PPE-PPV based materials. The present side chain distribution favors efficient solar cell active layer phase separation. As a result, a smaller amount of PC60BM is needed to achieve relatively high energy conversion efficiencies above 3%. The efficiency of η(AM1.5) ≈ 3.8% obtained for the AnE-PVstat:PC60BM blend is presently the state-of-the-art value for PPV-based materials. (orig.)
Moreno-Camacho, Carlos A.; Montoya-Torres, Jairo R.; Vélez-Gallego, Mario C.
2018-06-01
Only a few studies in the available scientific literature address the problem of having a group of workers that do not share identical levels of productivity during the planning horizon. This study considers a workforce scheduling problem in which the actual processing time is a function of the scheduling sequence to represent the decline in workers' performance, evaluating two classical performance measures separately: makespan and maximum tardiness. Several mathematical models are compared with each other to highlight the advantages of each approach. The mathematical models are tested with randomly generated instances available from a public e-library.
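A minimal sketch of the paper's central ingredient, processing times that depend on the scheduling sequence: here a hypothetical linear positional deterioration factor (1 + αr) inflates a job's nominal time, and a brute-force search over permutations finds the sequence minimizing single-worker makespan. The instance and the deterioration law are assumptions of mine, not the paper's model.

```python
import itertools

def makespan(seq, base, alpha=0.1):
    """Single-worker makespan when the job in position r (0-based) takes
    base[j] * (1 + alpha*r): a linear positional decline in performance."""
    return sum(base[j] * (1 + alpha * r) for r, j in enumerate(seq))

base = [5.0, 3.0, 8.0, 2.0]            # nominal processing times (invented)
best = min(itertools.permutations(range(len(base))),
           key=lambda s: makespan(s, base))
```

By the rearrangement inequality the optimum pairs the largest nominal times with the smallest positional multipliers, i.e. it schedules the longest jobs first: best is (2, 0, 1, 3) for this instance. Real instances of course need MILP or heuristic methods, as the paper discusses.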
Mixed spin Ising model with four-spin interaction and random crystal field
International Nuclear Information System (INIS)
Benayad, N.; Ghliyem, M.
2012-01-01
The effects of fluctuations of the crystal field on the phase diagram of the mixed spin-1/2 and spin-1 Ising model with four-spin interactions are investigated within the finite cluster approximation based on a single-site cluster theory. The state equations are derived for the two-dimensional square lattice. It has been found that the system exhibits a variety of interesting features resulting from the fluctuation of the crystal field interactions. In particular, for low mean value D of the crystal field, the critical temperature is not very sensitive to fluctuations and all transitions are of second order for any value of the four-spin interactions. But for relatively high D, the transition temperature depends on the fluctuation of the crystal field, and the system undergoes tricritical behaviour for any strength of the four-spin interactions. We have also found that the model may exhibit reentrance for appropriate values of the system parameters.
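For intuition, a plain Metropolis simulation of a mixed spin-1/2 / spin-1 Ising model on a small square lattice is sketched below. It uses a uniform crystal field D and omits both the four-spin interactions and the randomness of D that the paper studies, and it is a Monte Carlo stand-in rather than the finite cluster approximation used there.

```python
import math
import random

def mixed_ising_magnetization(L=8, J=1.0, D=0.0, T=0.5, sweeps=1500, seed=3):
    """Metropolis sketch of H = -J * sum_<ij> s_i S_j - D * sum_i S_i^2 on an
    L x L square lattice: even sites carry s = +/-1 (spin-1/2 sublattice),
    odd sites carry S in {-1, 0, +1} (spin-1 sublattice, crystal field D).
    Returns |magnetization| per site after `sweeps` lattice sweeps."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]          # fully ordered start
    def local_energy(i, j, v):
        nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
              + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
        e = -J * v * nb
        if (i + j) % 2 == 1:                    # spin-1 site
            e -= D * v * v
        return e
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            states = (-1, 1) if (i + j) % 2 == 0 else (-1, 0, 1)
            new = rng.choice(states)
            dE = local_energy(i, j, new) - local_energy(i, j, spin[i][j])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = new
    return abs(sum(map(sum, spin))) / (L * L)

m_low_T = mixed_ising_magnetization()
```

At T = 0.5 (well below ordering for J = 1) the lattice stays essentially fully magnetized; randomizing D site-by-site, as the paper does analytically, only requires drawing D per spin-1 site inside `local_energy`.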
Hwang, Yuh-Shyan; Kung, Che-Min; Lin, Ho-Cheng; Chen, Jiann-Jong
2009-02-01
A low-sensitivity, low-bounce, high-linearity current-controlled oscillator (CCO) suitable for a single-supply mixed-mode instrumentation system is designed and proposed in this paper. The designed CCO can be operated at low voltage (2 V). The power bounce and ground bounce generated by this CCO is less than 7 mVpp when the power-line parasitic inductance is increased to 100 nH to demonstrate the effect of power bounce and ground bounce. The power supply noise caused by the proposed CCO is less than 0.35% in reference to the 2 V supply voltage. The average conversion ratio K_CCO is equal to 123.5 GHz/A. The linearity of the conversion ratio is high and its tolerance is within ±1.2%. The sensitivity of the proposed CCO is nearly independent of the power supply voltage, and is lower than that of a conventional current-starved oscillator. The performance of the proposed CCO has been compared with the current-starved oscillator. It is shown that the proposed CCO is suitable for single-supply mixed-mode instrumentation systems.
Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process
Salvi, Michele; Simenhaus, François
2018-03-01
We consider a random walk in dimension d≥1 in a dynamic random environment evolving as an interchange process with rate γ>0. We prove that, if we choose γ large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ=+∞ where the environment is refreshed between each step of the walker. We extend the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process, in three ways: (i) we deal with any dimension d≥1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but that it is also close to it.
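The γ = +∞ caricature mentioned above, with the environment refreshed between each step, is easy to simulate. In the 1-d sketch below the law μ of the local transition probability is an arbitrary choice of mine (Uniform(0.3, 0.9)), giving annealed drift E[2p − 1] = 0.2; the empirical velocity X_t/t concentrates near it.

```python
import random

def empirical_velocity(t=100_000, seed=11):
    """1-d walk with the environment refreshed at every step (the gamma = +inf
    case): the local transition probability p is redrawn from
    mu = Uniform(0.3, 0.9) before each step, then the walker moves +1 with
    probability p and -1 otherwise.  Annealed drift: E[2p - 1] = 0.2."""
    rng = random.Random(seed)
    x = 0
    for _ in range(t):
        p = rng.uniform(0.3, 0.9)
        x += 1 if rng.random() < p else -1
    return x / t

v = empirical_velocity()
```

With the environment refreshed each step, the increments are i.i.d. and the law of large numbers gives the concentration directly; the paper's contribution is precisely that this persists for large but finite γ.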
Directory of Open Access Journals (Sweden)
Goutam Sahana
Full Text Available INTRODUCTION: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called 'GENMIX (genealogy based mixed model)', which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. SUBJECTS AND METHODS: We validated GENMIX using genotyping data of Danish Jersey cattle and simulated phenotypes, and compared it to MMA. We simulated scenarios for three levels of heritability (0.21, 0.34, and 0.64), seven levels of MAF (0.05, 0.10, 0.15, 0.20, 0.25, 0.35, and 0.45) and five levels of QTL effect (0.1, 0.2, 0.5, 0.7 and 1.0) in phenotypic standard deviation units. Each of these 105 possible combinations (3 h² × 7 MAF × 5 effects) of scenarios was replicated 25 times. RESULTS: GENMIX provides a better ranking of markers close to the location of the causative locus. GENMIX outperformed MMA when the QTL effect was small and the MAF at the QTL was low. In scenarios where the MAF was high or the QTL affecting the trait had a large effect, both GENMIX and MMA performed similarly. CONCLUSION: In discovery studies, where high-ranking markers are identified and later examined in validation studies, we therefore expect GENMIX to enrich candidates brought to follow-up studies with true positives over false positives more than MMA would.
A randomized controlled trial of a novel mixed monoamine reuptake inhibitor in adults with ADHD
Directory of Open Access Journals (Sweden)
Wesnes Keith
2008-06-01
Full Text Available Abstract Background NS2359 is a potent reuptake blocker of noradrenalin, dopamine, and serotonin. The aim of the study was to investigate the efficacy, safety and cognitive effects of NS2359 in adults with a DSM-IV diagnosis of ADHD. Methods The study was a multi-centre, double-blind, randomized, placebo-controlled, parallel group design in outpatient adults (18–55 years), testing 0.5 mg NS2359 vs. placebo for 8 weeks. Multiple assessments including computerized neuropsychological evaluation were performed. Results There was no significant difference between NS2359 (n = 63) and placebo (n = 63) on the primary outcome measure, reduction in investigator-rated ADHD-RS total score (7.8 versus 6.4; p …). Conclusion No overall effect of NS2359 was found on overall symptoms of ADHD. There was, however, a modest signal of improvement in inattentive adults with ADHD and in cognition, warranting further exploration using differing doses.
2013-01-01
that these constraints can often lead to significant reductions in the gap between the optimal solution and its non-integral linear programming bound relative to the prior art as well as often substantially faster processing of moderately hard problem instances. Conclusion We provide an indication of the conditions under which such an optimal enumeration approach is likely to be feasible, suggesting that these strategies are usable for relatively large numbers of taxa, although with stricter limits on numbers of variable sites. The work thus provides methodology suitable for provably optimal solution of some harder instances that resist all prior approaches. PMID:23343437
Energy Technology Data Exchange (ETDEWEB)
Waddell, Lucas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Muldoon, Frank [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Henry, Stephen Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoffman, Matthew John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zwerneman, April Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Backlund, Peter [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melander, Darryl J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lawton, Craig R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rice, Roy Eugene [Teledyne Brown Engineering, Huntsville, AL (United States)
2017-09-01
In order to effectively plan the management and modernization of their large and diverse fleets of vehicles, Program Executive Office Ground Combat Systems (PEO GCS) and Program Executive Office Combat Support and Combat Service Support (PEO CS&CSS) commissioned the development of a large-scale portfolio planning optimization tool. This software, the Capability Portfolio Analysis Tool (CPAT), creates a detailed schedule that optimally prioritizes the modernization or replacement of vehicles within the fleet - respecting numerous business rules associated with fleet structure, budgets, industrial base, research and testing, etc., while maximizing overall fleet performance through time. This paper contains a thorough documentation of the terminology, parameters, variables, and constraints that comprise the fleet management mixed integer linear programming (MILP) mathematical formulation. This paper, which is an update to the original CPAT formulation document published in 2015 (SAND2015-3487), covers the formulation of important new CPAT features.
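The 0/1 selection core of such a portfolio MILP can be shown in miniature. The sketch below solves a toy budget-constrained modernization choice by exhaustive search instead of an MILP solver; the project data are invented and the formulation ignores CPAT's scheduling, industrial-base and research constraints.

```python
import itertools

def best_portfolio(projects, budget):
    """Exhaustive solution of the 0/1 selection core of a fleet MILP:
    maximize summed performance subject to a single budget constraint.
    projects: list of (name, cost, performance) tuples."""
    best_val, best_set = 0.0, ()
    for mask in itertools.product((0, 1), repeat=len(projects)):
        chosen = [p for p, m in zip(projects, mask) if m]
        if sum(c for _, c, _ in chosen) <= budget:
            val = sum(v for _, _, v in chosen)
            if val > best_val:
                best_val = val
                best_set = tuple(name for name, _, _ in chosen)
    return best_val, best_set

# hypothetical projects: (name, cost, performance gain)
projects = [("upgrade-A", 4, 7.0), ("replace-B", 3, 6.0), ("upgrade-C", 5, 8.0)]
value, picks = best_portfolio(projects, budget=8)
```

Enumeration over 2^n subsets only works for tiny n; the whole point of the MILP formulation documented in the paper is to let a solver handle the realistically sized, multi-period, multi-constraint version.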
Half-trek criterion for generic identifiability of linear structural equation models
Foygel, R.; Draisma, J.; Drton, M.
2012-01-01
A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations among the noise terms.
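A two-variable example makes the graph-to-covariance correspondence concrete: one directed edge X1 → X2 with coefficient λ and one bidirected edge (correlated errors). By path tracing, Cov(X1, X2) = λ·ω11 + ω12; the Monte Carlo sketch below (all parameter values are arbitrary choices of mine) checks this.

```python
import random

def mc_cov_x1_x2(lam=0.8, omega11=1.0, omega22=1.0, omega12=0.3,
                 n=200_000, seed=5):
    """Monte Carlo check of the SEM  X1 = e1,  X2 = lam*X1 + e2  with
    correlated errors Cov(e1, e2) = omega12 (the bidirected edge).
    Correlated normals are built from a hand-written 2x2 Cholesky factor."""
    rng = random.Random(seed)
    a = omega11 ** 0.5
    b = omega12 / a
    c = (omega22 - b * b) ** 0.5
    s = 0.0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        e1 = a * z1
        e2 = b * z1 + c * z2          # Cov(e1, e2) = omega12 by construction
        x1 = e1
        x2 = lam * x1 + e2
        s += x1 * x2
    return s / n

# Path tracing predicts Cov(X1, X2) = lam*omega11 + omega12 = 0.8 + 0.3 = 1.1
cov_hat = mc_cov_x1_x2()
```

The identifiability question the paper addresses is the converse direction: when do the observed covariances determine λ and the ω's uniquely for a given mixed graph?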
Hapugoda, J. C.; Sooriyarachchi, M. R.
2017-09-01
Survival time of patients with a disease and the incidence of that disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that diseases that occur rarely could have shorter survival times, or vice versa. Because of this, joint modelling of these two variables will provide more interesting, and certainly improved, results than modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model, to jointly model survival and count. As the Artificial Neural Network (ANN) has become one of the most powerful computational tools for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients of Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and the correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
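A GRNN is, at its core, a Gaussian-kernel weighted average of the training targets (Specht's formulation, equivalent to Nadaraya–Watson kernel regression). The one-dimensional sketch below, with invented data and bandwidth, shows the prediction rule and the RMSE fit measure used in the study.

```python
import math

def grnn_predict(x, xs, ys, sigma=0.5):
    """GRNN (Specht) prediction: a Gaussian-kernel weighted average of the
    training targets, i.e. Nadaraya-Watson kernel regression in 1-d."""
    w = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def rmse(pred, true):
    """Root mean square error, one of the fit measures used in the study."""
    return (sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)) ** 0.5

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.8, 0.9, 0.1, -0.7]        # invented targets, roughly sin(x)
fit_rmse = rmse([grnn_predict(x, xs, ys) for x in xs], ys)
```

The smoothing parameter σ plays the same role as a kernel bandwidth: small σ interpolates the training points closely, large σ flattens the prediction toward the overall mean.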
Directory of Open Access Journals (Sweden)
Thiago Augusto da Cunha
2013-01-01
Full Text Available Reliable tree growth data are important for establishing rational forest management. Tree characteristics such as size, crown architecture, and competition indices have been used to describe increment efficiently when associated with it. However, the precise role of these effects in growth modelling for tropical trees needs further study. Here we reconstruct the basal area increment (BAI) of individual Cedrela odorata trees, sampled in the Amazon forest, to develop a growth model using potential predictors: (1) classical tree size; (2) morphometric data; (3) competition; and (4) social position, including liana load. Despite the large variation in tree size and growth, we observed that these predictor variables described BAI well at the individual-tree level. The fitted mixed model achieved high efficiency (R² = 92.7%) and predicted 3-year BAI over bark for Cedrela odorata trees ranging from 10 to 110 cm in diameter at breast height. Tree height, stem slenderness, and crown form had a strong influence in the BAI growth model, explaining most of the growth variance (partial R² = 87.2%). Competition variables had a negative influence on BAI, yet explained about 7% of the total variation. Introducing a random parameter into the regression model (mixed-model procedure) gave a significantly better fit to the observed data and more realistic predictions than the fixed-effects model.
Directory of Open Access Journals (Sweden)
Nancy Ames
2017-07-01
Full Text Available Background: Music listening may reduce the physiological, emotional, and mental effects of distress and anxiety. It is unclear whether music listening may reduce the amount of opioids used for pain management in postoperative critical care patients, or whether music may improve patient experience in the intensive care unit (ICU). Methods: A total of 41 surgical patients were randomized to either music listening or control non-music listening groups on ICU admission. Approximately 50-minute music listening interventions were offered 4 times per day (every 4-6 hours) during the 48 hours of patients' ICU stays. Pain, distress, and anxiety scores were measured immediately before and after music listening or control resting periods. Total opioid intake was recorded every 24 hours and during each intervention. Results: There was no significant difference in pain, opioid intake, distress, or anxiety scores between the control and music listening groups during the first 4 time points of the study. However, a mixed modeling analysis examining the pre- and post-intervention scores at the first time point revealed a significant interaction in the Numeric Rating Scale (NRS) for pain between the music and the control groups (P = .037). The NRS score decreased in the music group but remained stable in the control group. Following discharge from the ICU, the music group's interviews were analyzed for themes. Conclusions: Despite the limited sample size, this study identified music listening as an appropriate intervention that improved patients' post-intervention experience, according to patients' self-report. Future mixed-methods studies are needed to examine both qualitative patient perspectives and methodology to improve music listening in critical care units.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, on the other hand, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; linear complexity is therefore generally given only as an estimate. The linearization method, by contrast, calculates from the algorithm of the PRNG and can thus determine the lower bound of the linear complexity.
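For comparison, the Berlekamp-Massey algorithm mentioned above computes the linear complexity from an observed output sequence in O(N²) bit operations. A compact textbook version for binary sequences (a generic sketch, not the paper's linearization method):

```python
def berlekamp_massey(s):
    """Length of the shortest LFSR (the linear complexity) generating
    the binary sequence s, in O(N^2) bit operations."""
    c = [1] + [0] * len(s)  # current connection polynomial C(x)
    b = [1] + [0] * len(s)  # polynomial before the last length change
    L, m = 0, -1
    for n in range(len(s)):
        d = s[n]                      # discrepancy vs. the LFSR's prediction
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d:
            t, shift = c[:], n - m
            for i in range(len(b) - shift):
                c[i + shift] ^= b[i]
            if 2 * L <= n:
                L, m, b = n + 1 - L, n, t
    return L

# Two periods of the m-sequence from x^3 + x^2 + 1 (s[n] = s[n-1] XOR s[n-3]):
seq = [1, 1, 1, 0, 1, 0, 0] * 2
print(berlekamp_massey(seq))  # -> 3
```

As the abstract notes, the result depends on the observed sequence: feeding only a prefix of a period can under-estimate the true linear complexity, which is why sequence-based methods give estimates rather than bounds.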
Directory of Open Access Journals (Sweden)
Patricia Bourgault
Full Text Available This study evaluated the efficacy of the PASSAGE Program, a structured multicomponent interdisciplinary group intervention for the self-management of FMS. A mixed-methods randomized controlled trial (intervention (INT) vs. waitlist (WL)) was conducted with patients suffering from FMS. Data were collected at baseline (T0), at the end of the intervention (T1), and 3 months later (T2). The primary outcome was change in pain intensity (0-10). Secondary outcomes were fibromyalgia severity, pain interference, sleep quality, pain coping strategies, depression, health-related quality of life, patient global impression of change (PGIC), and perceived pain relief. Qualitative group interviews with a subset of patients were also conducted. Complete data from T0 to T2 were available for 43 patients. The intervention had a statistically significant impact on the three PGIC measures. At the end of the PASSAGE Program, the percentages of patients who perceived overall improvement in their pain levels, functioning, and quality of life were significantly higher in the INT Group (73%, 55%, and 77%, respectively) than in the WL Group (8%, 12%, and 20%). The same differences were observed 3 months post-intervention (INT Group: 62%, 43%, and 38% vs. WL Group: 13%, 13%, and 9%). The proportion of patients who reported ≥ 50% pain relief was also significantly higher in the INT Group at the end of the intervention (36% vs 12%) and 3 months post-intervention (33% vs 4%). Results of the qualitative analysis were in line with the quantitative findings regarding the efficacy of the intervention. The improvement, however, was not reflected in the primary outcome and other secondary outcome measures. The PASSAGE Program was effective in helping FMS patients gain a sense of control over their symptoms. We suggest including PGIC in future clinical trials on FMS, as it appears to capture important aspects of the patients' experience. International Standard Randomized Controlled Trial Number
2014-09-18
Coupé, Christophe
2018-01-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we ...
Directory of Open Access Journals (Sweden)
Christophe Coupé
2018-04-01
Full Text Available As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships.
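One of the 'difficult' behaviours GAMLSS is designed for is overdispersion in count data. A quick stdlib-only simulation (all parameter values are arbitrary) draws from a negative binomial as a gamma-Poisson mixture and shows the variance greatly exceeding the mean, violating the Poisson assumption Var = mean:

```python
import math
import random
import statistics

random.seed(1)

def poisson(lam):
    """Knuth's inversion sampler (fine for moderate rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

def neg_binomial(mu, theta):
    """Gamma-Poisson mixture: mean mu, variance mu + mu^2/theta > mu."""
    return poisson(random.gammavariate(theta, mu / theta))

draws = [neg_binomial(5.0, 2.0) for _ in range(20000)]
print(round(statistics.mean(draws), 2),
      round(statistics.pvariance(draws), 2))  # mean near 5, variance near 17.5
```

A Poisson model forced onto such data would understate the standard errors; a negative binomial (or GAMLSS with a modelled scale parameter) absorbs the extra variance.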
Ganesh, Praveen; Murthy, Jyotsna; Ulaghanathan, Navitha; Savitha, V H
2015-07-01
To study the growth and speech outcomes in children operated on for unilateral cleft lip and palate (UCLP) by a single surgeon using two different treatment protocols. A total of 200 consecutive patients with nonsyndromic UCLP were randomly allocated to two different treatment protocols. Of the 200 patients, 179 completed the protocol. However, only 85 patients presented for follow-up during the mixed dentition period (7-10 years of age). Protocol 1 consisted of the vomer flap (VF), whereby patients underwent primary lip and nose repair and a vomer flap for single-layer hard palate closure, followed by soft palate repair 6 months later; Protocol 2 consisted of the two-flap technique (TF), whereby the cleft palate (CP) was repaired by the two-flap technique after primary lip and nose repair. GOSLON Yardstick scores for dental arch relationships, and speech outcomes based on universal reporting parameters, were noted. A total of 40 patients in the VF group and 45 in the TF group completed the treatment protocols. The GOSLON scores showed marginally better outcomes in the VF group compared to the TF group. Statistically significant differences were found in only two speech parameters, with better outcomes in the TF group. Our results showed marginally better growth outcomes in the VF group compared to the TF group. However, speech outcomes were better in the TF group. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Miller, David C; Sullivan, Amy M; Soffler, Morgan; Armstrong, Brett; Anandaiah, Asha; Rock, Laura; McSparron, Jakob I; Schwartzstein, Richard M; Hayes, Margaret M
2018-01-01
We present a pilot study exploring the effects of a brief, 30-minute educational intervention targeting resident communication surrounding dying in the intensive care unit (ICU). We sought to determine whether simulation or didactic educational interventions improved resident-reported comfort, preparation, and skill acquisition. We also sought to identify resident barriers to using the word "dying." In this mixed-methods prospective study, second- and third-year medical residents were randomized to participate in simulation-based communication training or a didactic session. Residents completed pre-post surveys evaluating the sessions and reflecting on their use of the word "dying" in family meetings. Forty-five residents participated in the study. Residents reported increases in comfort (Mean [M]-pre = 3.3 [standard deviation: 0.6], M-post = 3.7 [0.7]; P educational intervention improves internal medicine residents' self-reported comfort and preparation in talking about death and dying in the ICU. Residents in the simulation-based training were more likely to report that they learned new skills compared to those in the didactic session. Residents report multiple barriers to using the word "dying" in EOL conversations.
International Nuclear Information System (INIS)
Zamuner, Stefano; Gomeni, Roberto; Bye, Alan
2002-01-01
Positron Emission Tomography (PET) is an imaging technology currently used in drug development as a non-invasive measure of drug distribution and interaction with the biochemical target system. The level of receptor occupancy achieved by a compound can be estimated by comparing time-activity measurements in an experiment using the tracer alone with the activity measured when the tracer is given following administration of unlabelled compound. The effective use of this surrogate marker as an enabling tool for drug development requires the definition of a model linking brain receptor occupancy with the fluctuation of plasma concentrations. However, the predictive performance of such a model is strongly related to the precision of the estimate of receptor occupancy evaluated in PET scans collected at different times following drug treatment. Several methods have been proposed for the analysis and quantification of the ligand-receptor interactions investigated from PET data. The aim of the present study is to evaluate alternative parameter estimation strategies based on the use of non-linear mixed-effect models, which allow one to account for intra- and inter-subject variability in the time-activity data and for covariates potentially explaining this variability. A comparison of the different modeling approaches is presented using real data. The results of this comparison indicate that the mixed-effect approach, with a primary model partitioning the variance in terms of Inter-Individual Variability (IIV) and Inter-Occasion Variability (IOV) and a second-stage model relating the changes in binding potential to the dose of unlabelled drug, is definitely the preferred approach.
Sajid, Muhammad Shafique; Bhatti, Muhammad I; Miles, William Fa
2015-05-01
The objective of this article is to systematically analyse the randomized, controlled trials comparing the effectiveness of purse-string closure (PSC) of an ileostomy wound with conventional linear closure (CLC). Randomized, controlled trials comparing the effectiveness of PSC vs CLC of ileostomy wounds in patients undergoing ileostomy closure were analysed using RevMan®, and the combined outcomes were expressed as risk ratio (RR) and standardized mean difference (SMD). Three randomized, controlled trials, recruiting 206 patients, were retrieved from medical electronic databases. There were 105 patients in the PSC group and 101 patients in the CLC group. There was no heterogeneity among the included trials. Duration of operation (SMD: -0.18; 95% CI: -0.45, 0.09; z = 1.28; P SMD: 0.01; 95% CI: -0.26, 0.28; z = 0.07; P infection (OR, 0.10; 95% CI: 0.03, 0.33; z = 3.78; P infection apparently without influencing the duration of operation and length of hospital stay. © The Author(s) 2014. Published by Oxford University Press and the Digestive Science Publishing Co. Limited.
International Nuclear Information System (INIS)
Capozzoli, Alfonso; Piscitelli, Marco Savino; Neri, Francesco; Grassi, Daniele; Serale, Gianluca
2016-01-01
Highlights: • 100 Healthcare Centres were analyzed to assess energy consumption reference values. • A novel robust methodology for the energy benchmarking process is proposed. • A Linear Mixed Effects estimation Model is used to treat heterogeneous datasets. • A nondeterministic approach is adopted to consider the uncertainty in the process. • The methodology is designed to be upgradable and generalizable to other datasets. - Abstract: The current EU energy efficiency directive 2012/27/EU identifies the existing building stock as one of the most promising sectors for achieving energy savings. Robust methodologies aimed at quantifying the potential reduction of energy consumption for large building stocks need to be developed. To this purpose, a benchmarking analysis is necessary in order to support public planners in determining how well a building is performing, in setting credible targets for improving performance, or in detecting abnormal energy consumption. In the present work, a novel methodology is proposed to perform a benchmarking analysis particularly suitable for heterogeneous samples of buildings. The methodology is based on the estimation of a statistical model for energy consumption – the Linear Mixed Effects Model – so as to account for both the fixed effects shared by all individuals within a dataset and the random effects related to particular groups/classes of individuals in the population. The groups of individuals within the population have been classified by resorting to a supervised learning technique. Against this backdrop, a Monte Carlo simulation is worked out to compute the frequency distribution of annual energy consumption and identify a reference value for each group/class of buildings. The benchmarking analysis was tested on a case study of 100 out-patient Healthcare Centres in Northern Italy, finally resulting in 12 different frequency distributions for space and Domestic Hot Water heating energy consumption ...
Wang, Haohan; Aragam, Bryon; Xing, Eric P
2018-04-26
A fundamental and important challenge in modern datasets of ever-increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naïvely applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and humans, and discuss the knowledge we discover with our method. Copyright © 2018. Published by Elsevier Inc.
International Nuclear Information System (INIS)
Megow, Joerg; Kulesza, Alexander; Qu Zhengwang; Ronneberg, Thomas; Bonacic-Koutecky, Vlasta; May, Volkhard
2010-01-01
Graphical abstract: Structure of a single Pheo (green: C-atoms, blue: N-atoms, red: O-atoms, light grey: H-atoms). - Abstract: Linear absorption spectra of a single Pheophorbide-a molecule (Pheo) dissolved in ethanol are calculated in a mixed quantum-classical approach. In this computational scheme the absorbance is mainly determined by the time-dependent fluctuations of the energy gap between the Pheo ground and excited electronic states. The actual magnitude of the energy gap is set by the electrostatic solvent-solute coupling as well as by contributions from intra-Pheo vibrations. For the latter, a new approach is proposed which is based on precalculated potential energy surfaces (PES) described in a harmonic approximation. To obtain the respective nuclear equilibrium configurations and Hessian matrices of the two involved electronic states, we carried out the necessary electronic structure calculations in a DFT framework. Since the Pheo changes its spatial orientation in the course of an MD run, the nuclear equilibrium configurations change their spatial position, too. Introducing a particular averaging procedure, these configurations are determined from the actual MD trajectories. The usability of the approach is underlined by a perfect reproduction of experimental data. This also demonstrates that the proposed method is suitable for the description of more complex systems in future investigations.
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-03-13
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
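The classical-vs-Berkson distinction above is easy to see in a small simulation (stdlib only; all numbers are illustrative): classical error in the regressor attenuates the slope by the reliability ratio Var(X)/(Var(X)+Var(U)), while Berkson error leaves the slope essentially unbiased:

```python
import random

random.seed(0)
n, beta = 20000, 2.0

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Classical error: we regress y on W = X + U; here Var(X) = Var(U) = 1,
# so the slope shrinks from beta toward beta/2.
x = [random.gauss(0, 1) for _ in range(n)]
y = [beta * xi + random.gauss(0, 0.5) for xi in x]
w = [xi + random.gauss(0, 1) for xi in x]

# Berkson error: the true exposure is X = Z + U but we regress on the
# assigned value Z (e.g. a fixed monitoring site); the slope stays ~beta.
z = [random.gauss(0, 1) for _ in range(n)]
xb = [zi + random.gauss(0, 1) for zi in z]
yb = [beta * xi + random.gauss(0, 0.5) for xi in xb]

print(round(slope(w, y), 2), round(slope(z, yb), 2))  # near 1.0 vs near 2.0
```

Berkson error instead inflates the residual variance, which widens confidence intervals without biasing the point estimate; autocorrelated and individual-specific errors, as the paper shows, complicate both pictures.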
Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger
2017-09-01
The coefficient of determination R² quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R² for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R² that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to the negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two concepts useful for biologists, Jensen's inequality and the delta method, both of which help us understand the properties of GLMMs. Jensen's inequality has important implications for the biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of the variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension with worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environment. © 2017 The Author(s).
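The delta method invoked above approximates Var[f(X)] ≈ f'(μ)² Var[X]. A quick numeric check with f = log (the distribution parameters are made up; X is kept well above zero), which is how a log-link observation-level variance is typically derived:

```python
import math
import random
import statistics

random.seed(42)

# Delta method with f = log: Var[log X] ~ (1/mu)^2 * Var[X] = Var[X]/mu^2.
mu, sd = 50.0, 2.0
xs = [random.gauss(mu, sd) for _ in range(100000)]

empirical = statistics.pvariance([math.log(x) for x in xs])
delta = sd ** 2 / mu ** 2
print(round(empirical, 5), round(delta, 5))  # both close to 0.0016
```

The approximation is good here because sd/mu is small; for strongly skewed distributions the first-order expansion degrades, which is exactly where the paper's distribution-specific derivations matter.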
International Nuclear Information System (INIS)
Wouters, Carmen; Fraga, Eric S.; James, Adrian M.
2015-01-01
The integration of distributed generation units and microgrids in the current grid infrastructure requires an efficient and cost effective local energy system design. A mixed-integer linear programming model is presented to identify such optimal design. The electricity as well as the space heating and cooling demands of a small residential neighbourhood are satisfied through the consideration and combined use of distributed generation technologies, thermal units and energy storage with an optional interconnection with the central grid. Moreover, energy integration is allowed in the form of both optimised pipeline networks and microgrid operation. The objective is to minimise the total annualised cost of the system to meet its yearly energy demand. The model integrates the operational characteristics and constraints of the different technologies for several scenarios in a South Australian setting and is implemented in GAMS. The impact of energy integration is analysed, leading to the identification of key components for residential energy systems. Additionally, a multi-microgrid concept is introduced to allow for local clustering of households within neighbourhoods. The robustness of the model is shown through sensitivity analysis, up-scaling and an effort to address the variability of solar irradiation. - Highlights: • Distributed energy system planning is employed on a small residential scale. • Full energy integration is employed based on microgrid operation and tri-generation. • An MILP for local clustering of households in multi-microgrids is developed. • Micro combined heat and power units are key components for residential microgrids
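The paper solves its MILP in GAMS; on a toy instance, the same design question (which units to install, how to dispatch them against grid purchases) can be brute-forced over the binary install decisions. All costs, capacities, and the demand below are invented for illustration:

```python
from itertools import product

# Hypothetical technology data: annualised capital cost, unit operating
# cost, and yearly generation capacity (all numbers invented).
techs = {
    "CHP":    {"capex": 120, "opex": 0.04, "cap": 900},
    "boiler": {"capex": 40,  "opex": 0.07, "cap": 600},
    "PV":     {"capex": 80,  "opex": 0.00, "cap": 400},
}
GRID_PRICE = 0.25   # cost per unit of unmet demand bought from the grid
DEMAND = 1000

best = None
for choice in product([0, 1], repeat=len(techs)):
    installed = [t for t, on in zip(techs, choice) if on]
    cost = sum(techs[t]["capex"] for t in installed)
    supplied = 0
    # dispatch installed units in merit order (cheapest opex first)
    for t in sorted(installed, key=lambda t: techs[t]["opex"]):
        use = min(techs[t]["cap"], DEMAND - supplied)
        cost += use * techs[t]["opex"]
        supplied += use
    cost += (DEMAND - supplied) * GRID_PRICE   # top up from the grid
    if best is None or cost < best[0]:
        best = (cost, installed)

print(best)  # cheapest install-and-dispatch plan for this toy instance
```

With these numbers a combination (boiler plus PV) beats any single unit or the grid alone, mirroring the paper's finding that mixes of complementary technologies dominate; a real model with time-resolved demand, storage, and networks needs an actual MILP solver rather than enumeration.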
International Nuclear Information System (INIS)
Schurkus, Henry F.; Ochsenfeld, Christian
2016-01-01
An atomic-orbital (AO) reformulation of the random-phase approximation (RPA) correlation energy is presented, allowing the steep computational scaling to be reduced to linear, so that large systems can be studied on simple desktop computers with fully numerically controlled accuracy. Our AO-RPA formulation introduces a contracted double-Laplace transform and employs the overlap-metric resolution-of-the-identity. First timings of our pilot code illustrate the reduced scaling with systems comprising up to 1262 atoms and 10 090 basis functions.
Directory of Open Access Journals (Sweden)
Kim M Huffman
Full Text Available To determine if caloric restriction (CR) would cause changes in plasma metabolic intermediates in response to a mixed meal, suggestive of changes in the capacity to adapt fuel oxidation to fuel availability, or metabolic flexibility, and to determine how any such changes relate to insulin sensitivity (SI). Forty-six volunteers were randomized to a weight-maintenance diet (Control), 25% CR, 12.5% CR plus 12.5% energy deficit from structured aerobic exercise (CR+EX), or a liquid-calorie diet (890 kcal/d until 15% reduction in body weight) for six months. Fasting and postprandial plasma samples were obtained at baseline, three, and six months. A targeted mass spectrometry-based platform was used to measure concentrations of individual free fatty acids (FFA), amino acids (AA), and acylcarnitines (AC). SI was measured with an intravenous glucose tolerance test. Over three and six months, there were significantly larger differences in fasting-to-postprandial (FPP) concentrations of medium- and long-chain AC (byproducts of FA oxidation) in the CR group relative to Control, and a tendency for the same in CR+EX (CR-3 month P = 0.02; CR-6 month P = 0.002; CR+EX-3 month P = 0.09; CR+EX-6 month P = 0.08). After three months of CR, there was a trend towards a larger difference in FPP FFA concentrations (P = 0.07; CR-3 month P = 0.08). Time-varying differences in FPP concentrations of AC and AA were independently related to time-varying SI (P < 0.05 for both). Based on changes in intermediates of FA oxidation following a food challenge, CR imparted improvements in metabolic flexibility that correlated with improvements in SI. ClinicalTrials.gov NCT00099151.
Fojecki, Grzegorz Lukasz; Tiessen, Stefan; Osther, Palle Jørn Sloth
2018-03-01
Short-term data on the effect of low-intensity extracorporeal shockwave therapy (Li-ESWT) on erectile dysfunction (ED) have been inconsistent. The suggested mechanisms of action of Li-ESWT on ED include stimulation of cell proliferation, tissue regeneration, and angiogenesis, which can be processes with a long generation time. Therefore, long-term data on the effect of Li-ESWT on ED are strongly warranted. To assess the outcome at 6 and 12 months of linear Li-ESWT for ED from a previously published randomized, double-blinded, sham-controlled trial. Subjects with ED (N = 126) who scored lower than 25 points in the erectile function domain of the International Index of Erectile Function (IIEF-EF) were eligible for the study. They were allocated to 1 of 2 groups: 5 weekly sessions of sham treatment (group A) or linear Li-ESWT (group B). After a 4-week break, the 2 groups received active treatment once a week for 5 weeks. At baseline and at 6 and 12 months, subjects were evaluated with the IIEF-EF, the Erectile Hardness Scale (EHS), and the Sexual Quality of Life in Men. The primary outcome measure was an increase of at least 5 points in the IIEF-EF (ΔIIEF-EF score). The secondary outcome measure was an increase in the EHS score to at least 3 in men with a score no higher than 2 at baseline. Data were analyzed by linear and logistic regressions. Linear regression of the ΔIIEF-EF score from baseline to 12 months included 95 patients (dropout rate = 25%). Adjusted for the IIEF-EF score at baseline, the difference between groups B and A was -1.30 (95% CI = -4.37 to 1.77, P = .4). The success rate based on the main outcome parameter (ΔIIEF-EF score ≥ 5) was 54% in group A vs 47% in group B (odds ratio = 0.67, P = .28). Improvement based on changes in the EHS score was 34% in group A and 24% in group B (odds ratio = 0.47, P = .82). Exposure to 2 cycles of linear Li-ESWT for ED is not superior to 1 cycle at 6- and 12-month follow-ups.
Baba, Toshimi; Gotoh, Yusaku; Yamaguchi, Satoshi; Nakagawa, Satoshi; Abe, Hayato; Masuda, Yutaka; Kawahara, Takayoshi
2017-08-01
This study aimed to evaluate the validation reliability of single-step genomic best linear unbiased prediction (ssGBLUP) with a multiple-lactation random regression test-day model and to investigate the effect of adding genotyped cows on that reliability. Two data sets of test-day records from the first three lactations were used: full data from February 1975 to December 2015 (60 850 534 records from 2 853 810 cows) and reduced data cut off in 2011 (53 091 066 records from 2 502 307 cows). We used marker genotypes of 4480 bulls and 608 cows. Genomic enhanced breeding values (GEBV) of 305-day milk yield in all lactations were estimated for at least 535 young bulls using two marker data sets: bull genotypes only, and both bull and cow genotypes. The realized reliability (R²) from linear regression analysis was used as an indicator of validation reliability. Using only genotyped bulls, R² ranged from 0.41 to 0.46 and was always higher than that of parent averages. Very similar R² values were observed when genotyped cows were added. An application of ssGBLUP to a multiple-lactation random regression model is feasible, and adding a limited number of genotyped cows has no significant effect on the reliability of GEBV for genotyped bulls. © 2016 Japanese Society of Animal Science.
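At its core, (ss)GBLUP solves mixed-model equations; the marker-effect ("SNP-BLUP") form is just ridge regression, (Z'Z + λI)û = Z'y. A self-contained sketch with invented toy genotypes (not the paper's data, and without the single-step H-matrix that blends pedigree and genomic relationships):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy SNP-BLUP: 4 animals x 3 markers, genotypes coded 0/1/2.
# lam = sigma_e^2 / sigma_u^2 is the ridge/shrinkage parameter.
Z = [[0, 1, 2], [1, 1, 0], [2, 0, 1], [1, 2, 1]]
y = [1.0, 0.2, 1.5, 1.1]
lam = 1.0
ZtZ = [[sum(Z[k][i] * Z[k][j] for k in range(4)) + (lam if i == j else 0.0)
        for j in range(3)] for i in range(3)]
Zty = [sum(Z[k][i] * y[k] for k in range(4)) for i in range(3)]
u_hat = solve(ZtZ, Zty)                      # shrunken marker effects
gebv = [sum(z * u for z, u in zip(row, u_hat)) for row in Z]
print([round(g, 3) for g in gebv])           # genomic breeding values
```

Adding genotyped cows corresponds to adding rows to Z and y; as the abstract reports, a small number of extra rows barely moves û once the system is already well conditioned by thousands of bull genotypes.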
Directory of Open Access Journals (Sweden)
G. Ibarra-Berastegi
2011-06-01
In this paper, reanalysis fields from the ECMWF have been statistically downscaled to predict, from large-scale atmospheric fields, surface moisture flux and daily precipitation at two observatories (Zaragoza and Tortosa, Ebro Valley, Spain) during the 1961–2001 period. Three types of downscaling models have been built: (i) analogues, (ii) analogues followed by random forests, and (iii) analogues followed by multiple linear regression. The inputs consist of data (predictor fields) taken from the ERA-40 reanalysis. The predicted fields are precipitation and surface moisture flux as measured at the two observatories. With the aim of reducing the dimensionality of the problem, the ERA-40 fields have been decomposed using empirical orthogonal functions. The available daily data have been divided into two parts: a training period (1961–1996) used to find a group of about 300 analogues to build the downscaling model, and a test period (1997–2001) in which the models' performance has been assessed on independent data. In the case of surface moisture flux, the models based on analogues followed by random forests do not clearly outperform those built on analogues plus multiple linear regression, while simple averages calculated from the nearest analogues found in the training period yielded only slightly worse results. In the case of precipitation, the three types of model performed equally. These results suggest that most of the models' downscaling capabilities can be attributed to the analogues-calculation stage.
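The analogues-calculation stage described above can be sketched in a few lines: predictor fields are truncated with empirical orthogonal functions (here via SVD), and the predictand is estimated as the average over the nearest analog days. All arrays below are toy data, not ERA-40 fields, and the number of EOFs and analogues are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "reanalysis" predictors (days x grid points) and local daily precipitation
train_X = rng.normal(size=(300, 50))
train_y = np.abs(train_X[:, :5].sum(axis=1)) + rng.gamma(1.0, 1.0, size=300)
test_X = rng.normal(size=(20, 50))

# EOF (empirical orthogonal function) truncation via SVD to reduce dimensionality
mean = train_X.mean(axis=0)
_, _, Vt = np.linalg.svd(train_X - mean, full_matrices=False)
k = 10                                   # number of retained EOFs
train_pc = (train_X - mean) @ Vt[:k].T   # principal-component coordinates
test_pc = (test_X - mean) @ Vt[:k].T

def analog_predict(x_pc, n_analogs=30):
    """Predict as the mean predictand over the n nearest analog days."""
    dist = np.linalg.norm(train_pc - x_pc, axis=1)
    nearest = np.argsort(dist)[:n_analogs]
    return train_y[nearest].mean()

pred = np.array([analog_predict(x) for x in test_pc])
```

In the paper's variants (ii) and (iii), a random forest or multiple linear regression would be fitted on the selected analog days instead of the plain average taken here.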
International Nuclear Information System (INIS)
Reshak, A. H.; Brik, M. G.; Auluck, S.
2014-01-01
Based on the electronic band structure, we have calculated the dispersion of the linear and nonlinear optical susceptibilities for the mixed CuAl(S1−xSex)2 chalcopyrite compounds with x = 0.0, 0.25, 0.5, 0.75, and 1.0. Calculations are performed within the Perdew-Burke-Ernzerhof general gradient approximation. The investigated compounds possess direct band gaps of about 2.2 eV (CuAlS2), 1.9 eV (CuAl(S0.75Se0.25)2), 1.7 eV (CuAl(S0.5Se0.5)2), 1.5 eV (CuAl(S0.25Se0.75)2), and 1.4 eV (CuAlSe2), which are tuned to make them optically active for optoelectronic and photovoltaic applications. These results confirm that substituting S by Se causes a significant reduction of the band gap. The dispersion of the optical functions ε2^xx(ω), ε2^yy(ω), and ε2^zz(ω) was calculated and discussed in detail. To demonstrate the effect of substituting S by Se on the complex second-order nonlinear optical susceptibility tensors, we performed detailed calculations, which show that the neat parent compounds CuAlS2 and CuAlSe2 exhibit |χ^(2)_123(−2ω;ω;ω)| as the dominant component, while the mixed alloys exhibit |χ^(2)_111(−2ω;ω;ω)| as the dominant component. The features of the |χ^(2)_123(−2ω;ω;ω)| and |χ^(2)_111(−2ω;ω;ω)| spectra were analyzed on the basis of the absorptive part of the corresponding dielectric function ε2(ω) as a function of both ω/2 and ω.
DEFF Research Database (Denmark)
Fojecki, Grzegorz Lukasz; Tiessen, Stefan; Sloth Osther, Palle Jørn
2018-01-01
INTRODUCTION: Short-term data on the effect of low-intensity extracorporeal shockwave therapy (Li-ESWT) on erectile dysfunction (ED) have been inconsistent. The suggested mechanisms of action of Li-ESWT on ED include stimulation of cell proliferation, tissue regeneration, and angiogenesis, which...... can be processes with a long generation time. Therefore, long-term data on the effect of Li-ESWT on ED are strongly warranted. AIM: To assess the outcome at 6 and 12 months of linear Li-ESWT on ED from a previously published randomized, double-blinded, sham-controlled trial. METHODS: Subjects with ED......-EF (ΔIIEF-EF score). The secondary outcome measure was an increase in the EHS score to at least 3 in men with a score no higher than 2 at baseline. Data were analyzed by linear and logistic regressions. RESULTS: Linear regression of the ΔIIEF-EF score from baseline to 12 months included 95 patients (dropout......
Directory of Open Access Journals (Sweden)
P.D.Gujrati
2002-01-01
Theoretical evidence is presented in this review that architectural aspects can play an important role, not only in the bulk but also in confined geometries, by using our recursive lattice theory, which is equally applicable to fixed architectures (regularly branched polymers, stars, dendrimers, brushes, linear chains, etc.) and variable architectures, i.e. randomly branched structures. Linear chains possess an inversion symmetry (IS) of a magnetic system (see text), whose presence or absence determines the bulk phase diagram. Fixed architectures possess the IS and yield a standard bulk phase diagram in which there exists a theta point at which two critical lines C and C' meet and the second virial coefficient A2 vanishes. The critical line C appears only for infinitely large polymers, and an order parameter is identified for this criticality. The critical line C' exists for polymers of all sizes and represents phase-separation criticality. Variable architectures, which do not possess the IS, give rise to a topologically different phase diagram with no theta point in general. In regions confined next to surfaces, it is not the IS but branching and monodispersity that become important. We show that branching plays no important role for polydisperse systems but becomes important for monodisperse systems. Stars and linear chains behave differently near a surface.
Directory of Open Access Journals (Sweden)
Lynn Cialdella-Kam
2016-05-01
Flavonoids and fish oils have anti-inflammatory and immune-modulating influences. The purpose of this study was to determine if a mixed flavonoid-fish oil supplement (Q-Mix; 1000 mg quercetin, 400 mg isoquercetin, 120 mg epigallocatechin gallate (EGCG) from green tea extract, 400 mg n3-PUFAs (omega-3 polyunsaturated fatty acids; 220 mg eicosapentaenoic acid (EPA) and 180 mg docosahexaenoic acid (DHA)) from fish oil, 1000 mg vitamin C, 40 mg niacinamide, and 800 µg folic acid) would reduce complications associated with obesity; that is, reduce inflammatory and oxidative stress markers and alter genomic profiles in overweight women. Overweight and obese women (n = 48; age = 40–70 years) were assigned to Q-Mix or placebo groups using randomized, double-blinded, placebo-controlled procedures. Overnight fasted blood samples were collected at 0 and 10 weeks and analyzed for cytokines, C-reactive protein (CRP), F2-isoprostanes, and whole-blood-derived mRNA, which was assessed using Affymetrix HuGene-1_1 ST arrays. Statistical analysis included two-way ANOVA models for blood analytes and gene expression, and pathway and network enrichment methods for gene expression. Plasma levels increased with Q-Mix supplementation by 388% for quercetin, 95% for EPA, 18% for DHA, and 20% for docosapentaenoic acid (DPA). Q-Mix did not alter plasma levels of CRP (p = 0.268), F2-isoprostanes (p = 0.273), or cytokines (p > 0.05). Gene set enrichment analysis revealed upregulation of pathways in Q-Mix vs. placebo related to interferon-induced antiviral mechanisms (false discovery rate, FDR < 0.001). Overrepresentation analysis further disclosed an inhibition of phagocytosis-related inflammatory pathways in Q-Mix vs. placebo. Thus, a 10-week Q-Mix supplementation elicited a significant rise in plasma quercetin, EPA, DHA, and DPA, and stimulated an antiviral and inflammation-related whole-blood transcriptomic response in overweight women.
Klein, Jens; Lüdecke, Daniel; Hofreuter-Gätgens, Kerstin; Fisch, Margit; Graefen, Markus; von dem Knesebeck, Olaf
2017-09-01
To examine income-related disparities in health-related quality of life (HRQOL) over a one-year period after surgery (radical prostatectomy) and their contributory factors in a longitudinal perspective. Evidence of associations between income and HRQOL among patients with prostate cancer (PCa) is sparse, and their explanations remain unclear. 246 male patients from two German hospitals filled out a questionnaire at the time of acute treatment and 6 and 12 months later. Age, partnership status, baseline disease and treatment factors, physical and psychological comorbidities, as well as treatment factors and adverse effects at follow-up were additionally included in the analyses to explain potential disparities. HRQOL was assessed with the EORTC (European Organisation for Research and Treatment of Cancer) QLQ-C30 core questionnaire and the prostate-specific QLQ-PR25. A linear mixed model for repeated measures was calculated. The fixed effects showed highly significant income-related inequalities on the majority of HRQOL scales. Less affluent PCa patients reported lower HRQOL in terms of global quality of life, all functional scales, and urinary symptoms. After introducing relevant covariates, some associations became insignificant (physical, cognitive, and sexual function), while others only showed reduced estimates (global quality of life, urinary symptoms, role, emotional, and social function). In particular, mental disorders/psychological comorbidity played a relevant role in explaining income-related disparities. One year after surgery, income-related disparities in various dimensions of HRQOL persist. With respect to economically disadvantaged PCa patients, the findings emphasize the importance of continuous psychosocial screening and tailored interventions, of patients' empowerment, and of improved access to supportive care.
Pan, Guangming; Wang, Shaochen; Zhou, Wang
2017-10-01
In this paper, we consider the asymptotic behavior of X_{f_n}^{(n)} := Σ_{i=1}^{n} f_n(x_i), where x_i, i = 1, …, n, form orthogonal polynomial ensembles and f_n is a real-valued, bounded measurable function. Under the condition that Var X_{f_n}^{(n)} → ∞, the Berry-Esseen (BE) bound and a Cramér-type moderate deviation principle (MDP) for X_{f_n}^{(n)} are obtained by using the method of cumulants. As two applications, we establish the BE bound and Cramér-type MDP for linear spectral statistics of Wigner matrices and sample covariance matrices in the complex cases. These results show that in the edge case (which means f_n has the particular form f(x)·I(x ≥ θ_n), where θ_n is close to the right edge of the equilibrium measure and f is a smooth function), X_{f_n}^{(n)} behaves like the eigenvalue counting function of the corresponding Wigner matrix and sample covariance matrix, respectively.
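The edge-case statistic above (f(x) = I(x ≥ θ_n)) can be explored numerically. The sketch below samples small real symmetric (GOE) Wigner matrices, whereas the paper treats the complex case; matrix size, θ, and the number of repetitions are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def wigner_linear_statistic(n, f):
    """Sum of f over the eigenvalues of an n x n GOE Wigner matrix,
    scaled so the semicircle bulk lies in [-2, 2]."""
    A = rng.normal(size=(n, n))
    H = (A + A.T) / np.sqrt(2.0 * n)
    return f(np.linalg.eigvalsh(H)).sum()

# edge-case test function f(x) = indicator(x >= theta), theta near the edge 2
theta = 1.9
counts = np.array([
    wigner_linear_statistic(200, lambda lam: (lam >= theta).astype(float))
    for _ in range(50)
])
```

Here `counts` realizes the eigenvalue counting function above the threshold, the quantity the linear statistic is shown to behave like in the edge regime.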
McLaren, A; Mucha, S; Mrode, R; Coffey, M; Conington, J
2016-07-01
Conformation traits are of interest to many dairy goat breeders, not only as descriptive traits in their own right but also because of their influence on production, longevity, and profitability. If these traits are to be considered for inclusion in future dairy goat breeding programs, relationships between them and production traits such as milk yield must be considered. With the increased use of regression models to estimate genetic parameters, an opportunity now exists to investigate correlations between conformation traits and milk yield throughout lactation in more detail. The aims of this study were therefore to (1) estimate genetic parameters for conformation traits in a population of crossbred dairy goats, (2) estimate correlations between all conformation traits, and (3) assess the relationship between conformation traits and milk yield throughout lactation. No information on milk composition was available. Data were collected from goats on 2 commercial goat farms during August and September in 2013 and 2014. Ten conformation traits, relating to udder, teat, leg, and feet characteristics, were scored on a linear scale (1-9). The overall data set comprised data available for 4,229 goats, all in their first lactation. The population of goats used in the study was created using random crossings between 3 breeds: British Alpine, Saanen, and Toggenburg. In each generation, the best-performing animals were selected for breeding, leading to the formation of a synthetic breed. The pedigree file used in the analyses contained sire and dam information for a total of 30,139 individuals. The models fitted relevant fixed and random effects. Heritability estimates for the conformation traits were low to moderate, ranging from 0.02 to 0.38. A range of positive and negative phenotypic and genetic correlations between the traits were observed, with the highest correlations found between udder depth and udder attachment (0.78), teat angle and teat placement (0
Pereira, J. R. V.; Tunes, T. M.; de Arruda, A. S.; Godoy, M.
2018-06-01
In this work, we have performed Monte Carlo simulations to study a mixed spin-1 and spin-3/2 Ising ferrimagnetic system on a square lattice with two different random single-ion anisotropies. This lattice is divided into two interpenetrating sublattices with spins SA = 1 in sublattice A and SB = 3/2 in sublattice B. The exchange interaction between the spins on the sublattices is antiferromagnetic (J < 0), and the spins are subject to two different random single-ion anisotropies, DiA and DjB, on sublattices A and B, respectively. We have determined the phase diagram of the model in the plane of critical temperature Tc versus strength of the random single-ion anisotropy D, and we have shown that it exhibits only second-order phase transition lines. We have also shown that this system displays compensation temperatures for some cases of the random single-ion distribution.
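A bare-bones Metropolis Monte Carlo sketch of such a mixed spin-1/spin-3/2 ferrimagnet: an antiferromagnetic coupling J < 0 on a checkerboard of sublattices, with a random single-ion anisotropy on each site. The lattice size, temperature, sweep count, and the bimodal ±0.5 anisotropy distribution are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
L, J, T = 16, -1.0, 1.0                  # lattice size, AF exchange (J < 0), temperature
SA = np.array([-1.0, 0.0, 1.0])          # spin-1 states, sublattice A
SB = np.array([-1.5, -0.5, 0.5, 1.5])    # spin-3/2 states, sublattice B

sub_A = (np.add.outer(np.arange(L), np.arange(L)) % 2 == 0)   # checkerboard mask
D = rng.choice([-0.5, 0.5], size=(L, L))                      # random anisotropy per site
spin = np.where(sub_A, rng.choice(SA, size=(L, L)), rng.choice(SB, size=(L, L)))

def site_energy(s, i, j):
    """Local energy -J*s*sum(neighbors) - D_ij*s^2 with periodic boundaries."""
    nn = (spin[(i + 1) % L, j] + spin[(i - 1) % L, j]
          + spin[i, (j + 1) % L] + spin[i, (j - 1) % L])
    return -J * s * nn - D[i, j] * s * s

def metropolis_sweep():
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        new = rng.choice(SA if sub_A[i, j] else SB)           # propose a new spin state
        dE = site_energy(new, i, j) - site_energy(spin[i, j], i, j)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):       # Metropolis acceptance
            spin[i, j] = new

for _ in range(50):
    metropolis_sweep()
m = abs(spin.mean())                      # crude magnetization estimate
```

Locating Tc or compensation temperatures would require sweeping T and D and far longer equilibration than this sketch performs.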
Tarasevich, Yuri Yu.; Laptev, Valeri V.; Goltseva, Valeria A.; Lebovka, Nikolai I.
2017-07-01
The effect of defects on the behaviour of the electrical conductivity, σ, in a monolayer produced by the random sequential adsorption of linear k-mers (particles occupying k adjacent sites) onto a square lattice is studied by means of a Monte Carlo simulation. The k-mers are deposited on the substrate until a jamming state is reached. The presence of defects in the lattice (impurities) and of defects in the k-mers, with concentrations dl and dk, respectively, is assumed. The defects in the lattice are distributed randomly before deposition, and these lattice sites are forbidden for the deposition of k-mers. The defects of the k-mers are distributed randomly on the deposited k-mers. The sites filled with k-mers have a high electrical conductivity, σk, whereas the empty sites and the sites filled by either type of defect have a low electrical conductivity, σl; i.e., a high contrast, σk/σl ≫ 1, is assumed. We examined isotropic (both possible x and y orientations of a particle are equiprobable) and anisotropic (all particles are aligned along one given direction, y) deposition. To calculate the effective electrical conductivity, the monolayer was represented as a random resistor network and the Frank-Lobb algorithm was used. The effects of the defect concentrations dl and dk on the electrical conductivity were studied for values of k = 2^n, where n = 1, 2, …, 5. Increasing either dl or dk decreased the value of σ and suppressed percolation. Moreover, for anisotropic deposition the electrical conductivity along the y direction was noticeably larger than in the perpendicular direction, x. Phase diagrams in the (dl, dk)-plane for different values of k were obtained.
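The deposition model above can be sketched as follows: lattice defects are seeded before deposition, isotropic k-mers are adsorbed at random positions and orientations, and deposited sites may carry k-mer defects. The lattice size, defect concentrations, and the jamming criterion (a long run of failed attempts) are illustrative assumptions, and the conductivity/Frank-Lobb step is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
L, k = 32, 4                    # lattice size, k-mer length
d_l, d_k = 0.05, 0.1            # lattice-defect and k-mer-defect concentrations
EMPTY, FORBIDDEN, FILLED, DEFECT = 0, 1, 2, 3

# lattice defects (impurities) seeded before deposition; forbidden for k-mers
lattice = np.where(rng.random((L, L)) < d_l, FORBIDDEN, EMPTY)

def try_deposit():
    """One random sequential adsorption attempt of an isotropic linear k-mer."""
    i, j = rng.integers(L, size=2)
    if rng.random() < 0.5:      # horizontal or vertical orientation, equiprobable
        cells = [(i, (j + t) % L) for t in range(k)]
    else:
        cells = [((i + t) % L, j) for t in range(k)]
    if all(lattice[c] == EMPTY for c in cells):
        for c in cells:         # deposited sites may carry k-mer defects
            lattice[c] = DEFECT if rng.random() < d_k else FILLED
        return True
    return False

fails = 0                       # approximate jamming: a long run of failures
while fails < 3000:
    fails = 0 if try_deposit() else fails + 1

coverage = np.mean(lattice == FILLED)     # conducting (defect-free) site fraction
```

In the paper, the resulting occupancy map would then be mapped to a random resistor network (σk for filled sites, σl for empty and defective sites) to obtain the effective conductivity.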
Directory of Open Access Journals (Sweden)
Pablo Martinez-Martín
To estimate the magnitude in which Parkinson's disease (PD) symptoms and health-related quality of life (HRQoL) determined PD costs over a 4-year period. Data collected during 3 months of each year, for 4 years, from the ELEP study included sociodemographic, clinical, and use-of-resources information. Costs were calculated yearly, as mean 3-month costs/patient, and updated to Spanish €, 2012. Mixed linear models were performed to analyze total, direct, and indirect costs based on symptoms and HRQoL. One hundred seventy-four patients were included. Mean (SD) age: 63 (11) years; mean (SD) disease duration: 8 (6) years. Ninety-three percent were HY I, II, or III (mild or moderate disease). Forty-nine percent remained in the same stage during the study period. Clinical evaluation and HRQoL scales showed relatively slight changes over time, demonstrating a stable group overall. Mean (SD) PD total costs augmented 92.5%, from €2,082.17 (€2,889.86) in year 1 to €4,008.6 (€7,757.35) in year 4. Total, direct, and indirect costs incremented 45.96%, 35.63%, and 69.69%, respectively, for mild disease, whereas they increased 166.52% (total), 55.68% (direct), and 347.85% (indirect) in patients with moderate PD. For severe patients, costs remained almost the same throughout the study. For each additional point in the SCOPA-Motor scale, total costs increased €75.72 (p = 0.0174); for each additional point on the SCOPA-Motor and the SCOPA-COG, direct costs incremented €49.21 (p = 0.0094) and €44.81 (p = 0.0404), respectively; and for each extra point on the pain scale, indirect costs increased €16.31 (p = 0.0228). PD is an expensive disease in Spain. Disease progression and severity as well as motor and cognitive dysfunctions are major drivers of cost increments. Therapeutic measures aimed at controlling progression and symptoms could help contain disease expenses.
Zhang, Zhongzhi; Dong, Yuze; Sheng, Yibin
2015-10-01
Random walks including non-nearest-neighbor jumps appear in many real situations, such as the diffusion of adatoms, and have found numerous applications including the PageRank search algorithm; however, comparatively few theoretical results exist for this dynamical process. In this paper, we present a study of mixed random walks in a family of fractal scale-free networks, where both nearest-neighbor and next-nearest-neighbor jumps are included. We focus on the trapping problem in the network family, which is a particular case of random walks with a perfect trap fixed at the central high-degree node. We derive analytical expressions for the average trapping time (ATT), a quantitative indicator measuring the efficiency of the trapping process, by using two different methods, the results of which are consistent with each other. Furthermore, we analytically determine all the eigenvalues and their multiplicities for the fundamental matrix characterizing the dynamical process. Our results show that although next-nearest-neighbor jumps have no effect on the leading scaling of the trapping efficiency, they can strongly affect the prefactor of the ATT, providing insight into a better understanding of random-walk processes in complex systems.
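The trapping problem above can be illustrated numerically on a toy graph: nearest- and next-nearest-neighbor jumps are mixed into one transition matrix, and the average trapping time follows from the fundamental matrix (I − Q)⁻¹. The 5-node graph and the mixing weight are assumptions for illustration, not the paper's fractal scale-free family:

```python
import numpy as np

# toy undirected graph (adjacency matrix); node 0 plays the role of the trap
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

def mixed_step_matrix(A, beta):
    """Mix nearest-neighbor and next-nearest-neighbor jumps with weight beta."""
    A2 = ((A @ A) > 0).astype(float)              # pairs joined by a 2-step path
    np.fill_diagonal(A2, 0.0)                     # forbid staying in place
    P1 = A / A.sum(axis=1, keepdims=True)
    P2 = A2 / A2.sum(axis=1, keepdims=True)
    return beta * P1 + (1.0 - beta) * P2

def average_trapping_time(P, trap=0):
    """ATT = mean over start nodes of the expected absorption time,
    computed from the fundamental matrix (I - Q)^-1 of the absorbing chain."""
    keep = [i for i in range(len(P)) if i != trap]
    Q = P[np.ix_(keep, keep)]                     # substochastic part among non-trap nodes
    N = np.linalg.inv(np.eye(len(keep)) - Q)      # fundamental matrix
    return N.sum(axis=1).mean()

att_nn = average_trapping_time(mixed_step_matrix(A, 1.0))    # nearest-neighbor only
att_mix = average_trapping_time(mixed_step_matrix(A, 0.7))   # with NNN jumps
```

Comparing `att_nn` and `att_mix` shows how admitting next-nearest-neighbor jumps changes the trapping time on a fixed graph, the quantity whose prefactor the paper analyzes.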
Sund, Nicole L.; Porta, Giovanni M.; Bolster, Diogo
2017-05-01
The Spatial Markov Model (SMM) is an upscaled model that has been used successfully to predict effective mean transport across a broad range of hydrologic settings. Here we propose a novel variant of the SMM, applicable to spatially periodic systems. This SMM is built using particle trajectories, rather than travel times. By applying the proposed SMM to a simple benchmark problem we demonstrate that it can predict mean effective transport, when compared to data from fully resolved direct numerical simulations. Next we propose a methodology for using this SMM framework to predict measures of mixing and dilution, that do not just depend on mean concentrations, but are strongly impacted by pore-scale concentration fluctuations. We use information from trajectories of particles to downscale and reconstruct pore-scale approximate concentration fields from which mixing and dilution measures are then calculated. The comparison between measurements from fully resolved simulations and predictions with the SMM agree very favorably.
Kullgren, Jeffrey T.; Harkins, Kristin A.; Bellamy, Scarlett L.; Gonzales, Amy; Tao, Yuanyuan; Zhu, Jingsan; Volpp, Kevin G.; Asch, David A.; Heisler, Michele; Karlawish, Jason
2014-01-01
Background: Financial incentives and peer networks could be delivered through eHealth technologies to encourage older adults to walk more. Methods: We conducted a 24-week randomized trial in which 92 older adults with a computer and Internet access received a pedometer, daily walking goals, and weekly feedback on goal achievement. Participants…
Arch, Joanna J.; Eifert, Georg H.; Davies, Carolyn; Vilardaga, Jennifer C. Plumb; Rose, Raphael D.; Craske, Michelle G.
2012-01-01
Objective: Randomized comparisons of acceptance-based treatments with traditional cognitive behavioral therapy (CBT) for anxiety disorders are lacking. To address this gap, we compared acceptance and commitment therapy (ACT) to CBT for heterogeneous anxiety disorders. Method: One hundred twenty-eight individuals (52% female, mean age = 38, 33%…
Directory of Open Access Journals (Sweden)
Xiangwei Zhu
2015-12-01
A novel random copolymer based on donor–acceptor type polymers containing benzodithiophene and dithienosilole as donors and benzothiazole and diketopyrrolopyrrole as acceptors was designed and synthesized by Stille copolymerization, and its optical, electrochemical, charge transport, and photovoltaic properties were investigated. This copolymer with high molecular weight exhibited broad and strong absorption covering the spectral range from 500 to 800 nm, with an absorption maximum at around 750 nm, which is very conducive to obtaining large short-circuit current densities. Unlike the general approach of using a single solvent to prepare the active layer film, mixed solvents were introduced to change the film features and improve the morphology of the active layer, which led to a significant improvement of the power conversion efficiency. These results indicate that constructing a random copolymer with multiple donor and acceptor monomers and choosing proper mixed solvents to change the characteristics of the film is a very promising way to manufacture organic solar cells with large current density and high power conversion efficiency.
Extended Mixed-Effects Item Response Models with the MH-RM Algorithm
Chalmers, R. Philip
2015-01-01
A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…
Takeuchi, Nobuyuki; Takezako, Nobuhiro; Shimonishi, Yuko; Usuda, Shigeru
2017-08-01
[Purpose] The purpose of this study was to clarify the influence of high-intensity pulse irradiation with linearly polarized near-infrared rays (HI-LPNR) and stretching on hypertonia in cerebrovascular disease patients. [Subjects and Methods] The subjects were 40 cerebrovascular disease patients with hypertonia of the ankle plantar flexor muscles. The subjects were randomly allocated to groups undergoing treatment with HI-LPNR irradiation (HI-LPNR group), stretching (stretching group), HI-LPNR irradiation followed by stretching (combination group), and a control group (10 subjects each). In all groups, the passive range of motion of ankle dorsiflexion and the passive resistive joint torque of ankle dorsiflexion were measured before and after the specified intervention. [Results] The changes in passive range of motion showed a significant increase in the stretching and combination groups compared with the control group. The changes in passive resistive joint torque showed a significant decrease in the HI-LPNR, stretching, and combination groups compared with the control group. [Conclusion] HI-LPNR irradiation and stretching have the effect of decreasing muscle tone. However, the combination of HI-LPNR irradiation and stretching has no multiplier effect.
Stepanova, Larisa; Bronnikov, Sergej
2018-03-01
The crack growth directional angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (maximum tangential stress and minimum strain energy density) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is an Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loading. The simulation cell contains 400 000 atoms. The crack propagation direction angles under different values of the mixity parameter, in a wide range of values from pure tensile loading to pure shear loading and in a wide range of temperatures (from 0.1 K to 800 K), are obtained and analyzed. It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with those given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.
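The maximum tangential stress (MTS) criterion mentioned above predicts the kink angle from the condition K_I sin θ + K_II (3 cos θ − 1) = 0. A small bisection-solver sketch (an illustration of the classical criterion, not the authors' multi-parameter code):

```python
import math

def mts_kink_angle(KI, KII):
    """Kink angle theta (radians) from the maximum tangential stress criterion,
    KI*sin(theta) + KII*(3*cos(theta) - 1) = 0, solved by bisection.
    Sign convention: positive KII gives a negative kink angle."""
    if KII == 0.0:
        return 0.0                          # pure mode I: straight growth
    f = lambda t: KI * math.sin(t) + KII * (3.0 * math.cos(t) - 1.0)
    a, b = ((-math.pi + 1e-9, 0.0) if KII > 0 else (0.0, math.pi - 1e-9))
    for _ in range(100):                    # plain bisection on the bracketing interval
        mid = 0.5 * (a + b)
        if f(a) * f(mid) <= 0.0:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)

theta_mode2 = math.degrees(mts_kink_angle(0.0, 1.0))   # pure mode II: about -70.5 deg
theta_mixed = math.degrees(mts_kink_angle(1.0, 1.0))   # equal mode I/II mixity
```

The pure mode II limit reproduces the textbook ±70.5° deflection, and pure mode I gives straight growth, which is a quick sanity check on the solver.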
Liesegang bands versus random crystallites in Ag2Cr2O7 - Single and mixed gelled media
Ibrahim, Huria; El-Rassy, Houssam; Sultan, Rabih
2018-02-01
Liesegang patterns of silver dichromate (Ag2Cr2O7) are studied in two different gel media, agar and gelatin, based on the work of Lagzi and Ueyama (2009). Whereas in gelatin standard Liesegang bands are obtained as a result of the interdiffusion of Ag+ and Cr2O7^2- ions, random crystallites with dendritic ramifications are observed in agar. We revisit this phenomenon and demonstrate the proposed mechanism, wherein dense heterogeneous nucleation in gelatin leads to Liesegang bands, as opposed to surface nucleation in agar yielding crystallites. We use viscosity and pH measurements, and notably scanning electron microscopy (SEM), in this endeavor.
Su, Qing; Liu, Chao; Zheng, Hongting; Zhu, Jun; Li, Peng Fei; Qian, Lei; Yang, Wen Ying
2017-06-01
Premixed insulins are recommended starter insulins in Chinese patients after oral antihyperglycemic medication (OAM) failure. In the present study, we compared the efficacy and safety of insulin lispro mix 25 (LM25) twice daily (b.i.d.) and insulin lispro mix 50 (LM50) b.i.d. as a starter insulin regimen in Chinese patients with type 2 diabetes mellitus (T2DM) who had inadequate glycemic control with OAMs. The primary efficacy outcome in the present open-label parallel randomized clinical trial was change in HbA1c from baseline to 26 weeks. Patients were randomized in a ratio of 1: 1 to LM25 (n = 80) or LM50 (n = 76). A mixed-effects model with repeated measures was used to analyze continuous variables. The Cochran-Mantel-Haenszel test with stratification factor was used to analyze categorical variables. At the end of the study, LM50 was more efficacious than LM25 in reducing mean HbA1c levels (least-squares [LS] mean difference 0.48; 95 % confidence interval [CI] 0.22, 0.74; P 1). More subjects in the LM50 than LM25 group achieved HbA1c targets of 1) or ≤6.5 % (52.6 % vs 20.0 %; P 1). Furthermore, LM50 was more effective than LM25 at reducing HbA1c in patients with baseline HbA1c, blood glucose excursion, and postprandial glucose greater than or equal to median levels (P ≤ 0.001). The rate and incidence of hypoglycemic episodes and increase in weight at the end of the study were similar between treatment groups. In Chinese patients with T2DM, LM50 was more efficacious than LM25 as a starter insulin. © 2016 The Authors. Journal of Diabetes published by John Wiley & Sons Australia, Ltd and Ruijin Hospital, Shanghai Jiaotong University School of Medicine.
Lorenz, Jana
2018-01-01
Background Goal setting is among the most common behavioral change techniques employed in contemporary self-tracking apps. For these techniques to be effective, it is relevant to understand how the visual presentation of goal-related outcomes employed in the app design affects users’ responses to their self-tracking outcomes. Objective This study examined whether a spatially close (vs distant) presentation of mixed positive and negative self-tracking outcomes from multiple domains (ie, activity, diet) on a digital device’s screen can provide users the opportunity to hedonically edit their self-tracking outcome profile (ie, to view their mixed self-tracking outcomes in the most positive light). Further, this study examined how the opportunity to hedonically edit one’s self-tracking outcome profile relates to users’ future health behavior intentions. Methods To assess users’ responses to a spatially close (vs distant) presentation of a mixed-gain (vs mixed-loss) self-tracking outcome profile, a randomized 2×2 between-subjects online experiment with a final sample of 397 participants (mean age 27.4, SD 7.2 years; 71.5%, 284/397 female) was conducted in Germany. The experiment started with a cover story about a fictitious self-tracking app. Thereafter, participants saw one of four manipulated self-tracking outcome profiles. Variables of interest measured were health behavior intentions, compensatory health beliefs, health motivation, and recall of the outcome profile. We analyzed data using chi-square tests (SPSS version 23) and moderated mediation analyses with the PROCESS macro 2.16.1. Results Spatial distance facilitated hedonic editing, which was indicated by systematic memory biases in users’ recall of positive and negative self-tracking outcomes. In the case of a mixed-gain outcome profile, a spatially close (vs distant) presentation tended to increase the underestimation of the negative outcome (P=.06). In the case of a mixed-loss outcome profile, a
Shim, Jaemin; Hwang, Minki; Song, Jun-Seop; Lim, Byounghyun; Kim, Tae-Hoon; Joung, Boyoung; Kim, Sung-Hwan; Oh, Yong-Seog; Nam, Gi-Byung; On, Young Keun; Oh, Seil; Kim, Young-Hoon; Pak, Hui-Nam
2017-01-01
Objective: Radiofrequency catheter ablation for persistent atrial fibrillation (PeAF) still has a substantial recurrence rate. This study aims to investigate whether an AF ablation lesion set chosen using in-silico ablation (V-ABL) is clinically feasible and more effective than an empirically chosen ablation lesion set (Em-ABL) in patients with PeAF. Methods: We prospectively included 108 patients with antiarrhythmic drug-resistant PeAF (77.8% men, age 60.8 ± 9.9 years), and randomly assigned them to the V-ABL (n = 53) and Em-ABL (n = 55) groups. Five different in-silico ablation lesion sets [1 pulmonary vein isolation (PVI), 3 linear ablations, and 1 electrogram-guided ablation] were compared using heart-CT integrated AF modeling. We evaluated the feasibility, safety, and efficacy of V-ABL compared with that of Em-ABL. Results: The pre-procedural computing time for the five different ablation strategies was 166 ± 11 min. In the Em-ABL group, the earliest-terminating blinded in-silico lesion set matched the Em-ABL lesion set in 21.8% of cases. V-ABL was not inferior to Em-ABL in terms of procedure time (p = 0.403), ablation time (p = 0.510), and major complication rate (p = 0.900). During 12.6 ± 3.8 months of follow-up, the clinical recurrence rate was 14.0% in the V-ABL group and 18.9% in the Em-ABL group (p = 0.538). In the Em-ABL group, the clinical recurrence rate was significantly lower after PVI + posterior box + anterior linear ablation, which showed the most frequent termination during in-silico ablation (log-rank p = 0.027). Conclusions: V-ABL was feasible in clinical practice, not inferior to Em-ABL, and predicts the most effective ablation lesion set in patients who underwent PeAF ablation.
Szegedi, Armin; Durgam, Suresh; Mackle, Mary; Yu, Sung Yun; Wu, Xiao; Mathews, Maju; Landbloom, Ronald P
2018-01-01
The authors determined the efficacy and safety of asenapine in preventing recurrence of any mood episode in adults with bipolar I disorder. Adults with an acute manic or mixed episode per DSM-IV-TR criteria were enrolled in this randomized, placebo-controlled trial consisting of an initial 12- to 16-week open-label period and a 26-week double-blind randomized withdrawal period. The target asenapine dosage was 10 mg b.i.d. in the open-label period but could be titrated down to 5 mg b.i.d. After completing the open-label period, subjects meeting stabilization/stable-responder criteria were randomized to asenapine or placebo treatment in the double-blind period. The primary efficacy endpoint was time to recurrence of any mood event during the double-blind period. Kaplan-Meier estimation was performed, and 95% confidence intervals were determined. Safety was assessed throughout. A total of 549 subjects entered the open-label period, of whom 253 enrolled in the double-blind randomized withdrawal period (127 in the placebo group; 126 in the asenapine group). Time to recurrence of any mood episode was statistically significantly longer for asenapine- than placebo-treated subjects. In post hoc analyses, significant differences in favor of asenapine over placebo were seen in time to recurrence of manic and depressive episodes. The most common treatment-emergent adverse events were somnolence (10.0%), akathisia (7.7%), and sedation (7.7%) in the open-label period and mania (11.9% of the placebo group compared with 4.0% of the asenapine group) and bipolar I disorder (6.3% compared with 1.6%) in the double-blind period. Long-term treatment with asenapine was more effective than placebo in preventing recurrence of mood events in adults with bipolar I disorder and was generally well-tolerated.
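Kaplan-Meier estimation, as used above for the time-to-recurrence endpoint, can be sketched in a few lines of plain Python; the times and censoring flags below are invented toy data, not trial results:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up times; events: 1 = recurrence observed, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, s, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = n_at_t = 0
        while i < len(order) and times[order[i]] == t:   # group ties at time t
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk                  # KM product-limit step
            curve.append((t, s))
        at_risk -= n_at_t                                # drop events and censored
    return curve

# invented toy data: weeks to mood-event recurrence (0 = censored at that time)
curve = kaplan_meier([5, 8, 8, 12, 20, 26, 26], [1, 1, 0, 1, 0, 0, 0])
```

Each `(t, s)` pair is the survival probability just after an event time; plotting these as a step function gives the usual KM curve compared between arms with a log-rank test.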
Hu, Xiaoliang; Jiang, Jingzhou; Ma, Yuedong; Tang, Anli
2016-04-15
The benefits and risks of additional left atrium (LA) linear ablation in patients with paroxysmal atrial fibrillation (AF) remain unclear. Randomized controlled trials were identified in the PubMed, Web of Science, Embase and Cochrane databases, and the relevant papers were examined. Pooled relative risks (RR) and 95% confidence intervals (95% CI) were estimated using random effects models. The primary endpoint was the maintenance of sinus rhythm after a single ablation. Nine randomized controlled trials involving 1138 patients were included in this analysis. Additional LA linear ablation did not improve the maintenance of sinus rhythm following a single procedure (RR, 1.03; 95% CI, 0.93-1.13; P = 0.60). A subgroup analysis demonstrated that all methods of additional linear ablation failed to improve the outcome. Additional linear ablation significantly increased the mean procedural time (166.53 ± 67.7 vs. 139.57 ± 62.44 min; P < .05). In summary, additional linear ablation did not exhibit any benefits in terms of sinus rhythm maintenance for paroxysmal AF patients following a single procedure. Additional linear ablation significantly increased the mean procedural, fluoroscopy and RF application times. This additional ablation was not associated with a statistically significant increase in complication rates. This finding must be confirmed by further large, high-quality clinical trials. Copyright © 2016. Published by Elsevier Ireland Ltd.
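The random-effects pooling of relative risks described above can be sketched with the standard DerSimonian-Laird estimator. This is a minimal numpy illustration, not the meta-analysis code used in the study; the trial counts below are invented for demonstration.

```python
import numpy as np

def pooled_rr_random_effects(e1, n1, e2, n2):
    """DerSimonian-Laird random-effects pooling of relative risks.
    e1/n1: events/totals in treatment arms; e2/n2: control arms."""
    e1, n1, e2, n2 = map(np.asarray, (e1, n1, e2, n2))
    log_rr = np.log((e1 / n1) / (e2 / n2))
    var = 1 / e1 - 1 / n1 + 1 / e2 - 1 / n2       # variance of each log RR
    w = 1 / var                                    # fixed-effect weights
    fe_mean = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fe_mean) ** 2)        # Cochran's Q
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (var + tau2)                      # random-effects weights
    mu = np.sum(w_star * log_rr) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return np.exp(mu), np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se)
```

With perfectly homogeneous trials the between-trial variance estimate collapses to zero and the pooled RR equals the common trial RR.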
Wittkopp, Felix; Peeck, Lars; Hafner, Mathias; Frech, Christian
2018-04-13
Process development and characterization based on mathematic modeling provides several advantages and has been applied more frequently over the last few years. In this work, a Donnan equilibrium ion exchange (DIX) model is applied for modelling and simulation of ion exchange chromatography of a monoclonal antibody in linear chromatography. Four different cation exchange resin prototypes consisting of weak, strong and mixed ligands are characterized using pH and salt gradient elution experiments applying the extended DIX model. The modelling results are compared with the results using a classic stoichiometric displacement model. The Donnan equilibrium model is able to describe all four prototype resins while the stoichiometric displacement model fails for the weak and mixed weak/strong ligands. Finally, in silico chromatogram simulations of pH and pH/salt dual gradients are performed to verify the results and to show the consistency of the developed model. Copyright © 2018 Elsevier B.V. All rights reserved.
Scala, Marco; Mereu, Paola; Spagnolo, Francesco; Massa, Michela; Barla, Annalisa; Mosci, Sofia; Forno, Gilberto; Ingenito, Andra; Strada, Paolo
2014-01-01
Salivary gland tumors are mostly benign tumors. Whether a more conservative surgical approach at greater risk of recurrence, or a more radical intervention with an increased risk of facial paralysis is warranted is still under discussion. Our study addresses the opportunity for improving surgical outcome by employing platelet-rich plasma (PRP) gel at the surgical site. Twenty consecutive patients undergoing superficial parotidectomy were randomized and assigned to two groups, one with and one without PRP gel. Many parameters were evaluated after surgery and during follow-up, such as the duration of hospitalization, facial nerve deficit, onset of Frey's syndrome, relapse, cosmetic results, presence of keloid or scar depressions, behavior of several facial muscles. Our explorative analysis suggests a positive effect of PRP on surgical outcome in patients undergoing parotidectomy, whereas no negative effects were detected. This work suggests that administration of PRP in patients undergoing parotidectomy is beneficial.
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Xiong, Peng; Zhang, Jun; Wang, Xiaohui; Wu, Tat Leong; Hall, Brian J
2017-04-01
Standard precautions (SPs) are considered fundamental protective measures to manage health care-associated infections and to reduce occupational health hazards. This study intended to assess the effectiveness of a mixed media education intervention to enhance nursing students' knowledge, attitude, and compliance with SPs. A randomized controlled trial with 84 nursing students was conducted in a teaching hospital in Hubei, China. The intervention group (n = 42) attended 3 biweekly mixed media education sessions, consisting of lectures, videos, role-play, and feedback, with 15-20 minutes of individual online supervision and feedback sessions following each class. The control group learned the same material through self-directed readings. Pre- and posttest assessments of knowledge, attitudes, and compliance were assessed with the Knowledge with Standard Precautions Questionnaire, Attitude with Standard Precautions Scale, and the Compliance with Standard Precautions Scale, respectively. The Standard Bacterial Colony Index was used to assess handwashing effectiveness. At 6-week follow-up, performance on the Knowledge with Standard Precautions Questionnaire, Attitude with Standard Precautions Scale, and Compliance with Standard Precautions Scale was significantly improved in the intervention group compared with the control group (P < .05). The mixed media education intervention is effective in improving knowledge, attitude, and compliance with SPs. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Dova, Maria Teresa; Ferrari, Sergio
2005-01-01
We present a method to investigate the CP quantum numbers of the Higgs boson in the process e+e− → Zφ at a future e+e− linear collider (LC), where φ, a generic Higgs boson, is a mixture of CP-even and CP-odd states. The procedure consists of a comparison of the data with predictions obtained from Monte Carlo simulations corresponding to the production of scalar and pseudoscalar Higgs bosons and the interference term, which constitutes a distinctive signal of CP violation. We present estimates of the sensitivity of the method from Monte Carlo studies using hypothetical data samples with a full LC detector simulation, taking into account the background signals.
Roseen, Eric J; Cornelio-Flores, Oscar; Lemaster, Chelsey; Hernandez, Maria; Fong, Calvin; Resnick, Kirsten; Wardle, Jon; Hanser, Suzanne; Saper, Robert
2017-01-01
Little is known about the feasibility of providing massage or music therapy to medical inpatients at urban safety-net hospitals or the impact these treatments may have on patient experience. To determine the feasibility of providing massage and music therapy to medical inpatients and to assess the impact of these interventions on patient experience. Single-center 3-arm feasibility randomized controlled trial. Urban academic safety-net hospital. Adult inpatients on the Family Medicine ward. Massage therapy consisted of a standardized protocol adapted from a previous perioperative study. Music therapy involved a preference assessment, personalized compact disc, music-facilitated coping, singing/playing music, and/or songwriting. Credentialed therapists provided the interventions. Patient experience was measured with the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) within 7 days of discharge. We compared the proportion of patients in each study arm reporting "top box" scores for the following a priori HCAHPS domains: pain management, recommendation of hospital, and overall hospital rating. Responses to additional open-ended postdischarge questions were transcribed, coded independently, and analyzed for common themes. From July to December 2014, 90 medical inpatients were enrolled; postdischarge data were collected on 68 (76%) medical inpatients. Participants were 70% females, 43% non-Hispanic black, and 23% Hispanic. No differences between groups were observed on HCAHPS. The qualitative analysis found that massage and music therapy were associated with improved overall hospital experience, pain management, and connectedness to the massage or music therapist. Providing music and massage therapy in an urban safety-net inpatient setting was feasible. There was no quantitative impact on HCAHPS. Qualitative findings suggest benefits related to an improved hospital experience, pain management, and connectedness to the massage or music therapist.
International Nuclear Information System (INIS)
Bull, D.L.
1986-01-01
Studies were made of the comparative in vitro metabolism of (14C)xanthotoxin and (14C)aldrin by homogenate preparations of midguts and bodies (carcass minus digestive tract and head) of last-stage larvae of the black swallowtail butterfly (Papilio polyxenes Fabr.) and the fall armyworm (Spodoptera frugiperda (J. E. Smith)). The two substrates were metabolized by 10,000g supernatant microsomal preparations from both species. Evidence gained through the use of a specific inhibitor and cofactor indicated that mixed-function microsomal oxidases were major factors in the metabolism and that the specific activity of this enzyme system was considerably higher in midgut preparations from P. polyxenes than in similar preparations from S. frugiperda. Aldrin was metabolized 3-4 times faster by P. polyxenes, and xanthotoxin 6-6.5 times faster.
Directory of Open Access Journals (Sweden)
Evans Roni L
2010-03-01
Interviews, using a semi-structured format, are conducted with patients at the end of the 12-week treatment period and also with providers at the end of the trial. Discussion: This mixed-methods randomized clinical trial assesses clinical effectiveness, cost-effectiveness, and patients' and providers' perceptions of care in treating non-acute LBP through evidence-based individualized care delivered by monodisciplinary or multidisciplinary care teams. Trial registration: ClinicalTrials.gov NCT00567333
Topics in computational linear optimization
DEFF Research Database (Denmark)
Hultberg, Tim Helge
2000-01-01
Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques available for solving linear optimization problems… of high-quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric… systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based…
de Souza, Raphael F; Bedos, Christophe; Esfandiari, Shahrokh; Makhoul, Nicholas M; Dagdeviren, Didem; Abi Nader, Samer; Jabbar, Areej A; Feine, Jocelyne S
2018-04-23
Overdentures retained by a single implant in the midline have arisen as a minimal implant treatment for edentulous mandibles. The success of this treatment depends on the performance of a single stud attachment that is susceptible to wear-related retention loss. Recently developed biomaterials used in attachments may result in better performance of the overdentures, offering minimal retention loss and greater patient satisfaction. These biomaterials include resistant polymeric matrixes and amorphous diamond-like carbon applied on metallic components. The objective of this explanatory mixed-methods study is to compare Novaloc, a novel attachment system with such characteristics, to a traditional alternative for single implants in the mandible of edentate elderly patients. We will carry out a randomized cross-over clinical trial comparing Novaloc attachments to Locators for single-implant mandibular overdentures in edentate elderly individuals. Participants will be followed for three months with each attachment type; patient-based, clinical, and economic outcomes will be gathered. A sample of 26 participants is estimated to be required to detect clinically relevant differences in terms of the primary outcome (patient ratings of general satisfaction). Participants will choose which attachment they wish to keep, then be interviewed about their experiences and preferences with a single implant prosthesis and with the two attachments. Data from the quantitative and qualitative assessments will be integrated through a mixed-methods explanatory strategy. A last quantitative assessment will take place after 12 months with the preferred attachment; this latter assessment will enable measurement of the attachments' long-term wear and maintenance requirements. Our results will lead to evidence-based recommendations regarding these systems, guiding providers and patients when making decisions on which attachment systems and implant numbers will be most appropriate for
DEFF Research Database (Denmark)
Bravo-Oviedo, Andres; Pretzsch, Hans; Ammer, Christian
2014-01-01
Aim of study: We aim at (i) developing a reference definition of mixed forests in order to harmonize comparative research in mixed forests and (ii) reviewing the research perspectives in mixed forests. Area of study: The definition is developed in Europe but can be tested worldwide. Material and Methods: Review of existing definitions of mixed forests and a literature review encompassing dynamics, management and economic valuation of mixed forests. Main results: A mixed forest is defined as a forest unit, excluding linear formations, where at least two tree species coexist at any… density in mixed forests, (iii) conversion of monocultures to mixed-species forest and (iv) economic valuation of ecosystem services provided by mixed forests. Research highlights: The definition is considered a high-level one which encompasses previous attempts to define mixed forests. Current fields…
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
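The sample-size question raised above can be illustrated with the usual normal-approximation power calculation for average bioequivalence (TOST with limits 0.80-1.25). This is a hedged sketch assuming a balanced 2x2 crossover, not the paper's random-effects formulation for the four- and six-period EQUIGEN designs; the CV values are illustrative.

```python
import numpy as np
from scipy import stats

def tost_power(n, cv, gmr=0.95, alpha=0.05):
    """Approximate power of the TOST bioequivalence test in a balanced
    2x2 crossover with n subjects in total (normal approximation)."""
    sigma_w = np.sqrt(np.log(1 + cv ** 2))   # within-subject SD on the log scale
    se = sigma_w * np.sqrt(2.0 / n)          # SE of the estimated log-GMR
    z = stats.norm.ppf(1 - alpha)
    p_upper = stats.norm.cdf((np.log(1.25) - np.log(gmr)) / se - z)
    p_lower = stats.norm.cdf((np.log(gmr) - np.log(0.80)) / se - z)
    return max(0.0, p_upper + p_lower - 1)

def n_required(cv, target=0.8):
    """Smallest (even) total sample size reaching the target power."""
    n = 4
    while tost_power(n, cv) < target:
        n += 2                               # keep the two sequences balanced
    return n
```

As expected, a larger within-subject CV demands a larger trial, which is the kind of relationship tabulated in the article (there for more complex replicate designs).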
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
International Nuclear Information System (INIS)
Esmaeily, Ali; Ahmadi, Abdollah; Raeisi, Fatima; Ahmadi, Mohammad Reza; Esmaeel Nezhad, Ali; Janghorbani, Mohammadreza
2017-01-01
A new optimization framework based on a MILP model is introduced in this paper for the problem of stochastic self-scheduling of hydrothermal units, known as the HTSS problem, implemented in a joint energy and reserve electricity market with a day-ahead mechanism. The proposed MILP framework includes practical constraints such as the cost due to the valve-loading effect, the limit due to DRR, and multi-POZs, which have been less investigated in electricity market models. For greater accuracy, multiple performance curves are used to model the hydro generating units. The problem is formulated as a stochastic optimization whose objective is to maximize expected profit using the MILP technique. The proposed stochastic self-scheduling model employs the price forecast error in order to take price uncertainty into account. In addition, LMCS is combined with a roulette wheel mechanism to generate scenarios for the non-spinning reserve price, the spinning reserve price, and the energy price at each hour of the scheduling horizon. Finally, the IEEE 118-bus power system is used to demonstrate the performance and efficiency of the suggested technique. - Highlights: • Characterizing the uncertainties of price and FOR of units. • Replacing the fixed ramping rate constraints with dynamic ones. • Proposing a linearized model for the valve-point effects of thermal units. • Taking into consideration the multi-POZs of the thermal units. • Taking into consideration the multi-performance curves of hydroelectric units.
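The core of such a self-scheduling model — binary on/off commitment variables coupled to continuous generation levels through min/max output constraints — can be sketched as a tiny deterministic MILP. This toy (one thermal unit, three hours, invented prices and costs, no reserves, ramping, valve-loading, or scenarios) only illustrates the MILP structure, not the paper's full stochastic HTSS formulation; it assumes SciPy >= 1.9 for `scipy.optimize.milp`.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

prices = np.array([30.0, 10.0, 50.0])    # $/MWh day-ahead price forecast (invented)
mc, p_min, p_max = 20.0, 20.0, 100.0     # marginal cost and output limits (invented)
T = len(prices)

# Decision vector: [u_0..u_{T-1} (on/off), p_0..p_{T-1} (MW output)].
# milp minimizes, so negate the per-MWh profit margins.
c = np.concatenate([np.zeros(T), -(prices - mc)])

A = np.zeros((2 * T, 2 * T))
for t in range(T):
    A[t, t], A[t, T + t] = -p_max, 1.0          # p_t <= p_max * u_t
    A[T + t, t], A[T + t, T + t] = p_min, -1.0  # p_t >= p_min * u_t
con = LinearConstraint(A, -np.inf, 0.0)

res = milp(c, constraints=con,
           integrality=np.r_[np.ones(T), np.zeros(T)],   # u binary, p continuous
           bounds=Bounds(np.zeros(2 * T), np.r_[np.ones(T), p_max * np.ones(T)]))
profit = -res.fun
```

The optimum commits the unit only in the hours where the price exceeds marginal cost (here hours 0 and 2 at full output, giving a profit of 4000).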
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
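The marginal-versus-conditional distinction described above can be demonstrated numerically: fitting an ordinary (population-averaged) logistic regression to data generated by a random-intercepts logit model recovers an attenuated slope, roughly the conditional slope divided by sqrt(1 + 0.346*sigma^2). This is a hedged simulation sketch with invented parameters, using a hand-rolled IRLS fit rather than any particular package.

```python
import numpy as np

def fit_logistic(X, y, iters=30):
    """Plain logistic regression via iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ beta))
        W = mu * (1 - mu)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    return beta

rng = np.random.default_rng(0)
n_clusters, m = 3000, 5
b_cond, sigma_u = 1.0, 2.0                 # conditional slope, random-intercept SD
u = rng.normal(0, sigma_u, n_clusters).repeat(m)   # subject-specific intercepts
x = rng.normal(size=n_clusters * m)
y = rng.binomial(1, 1 / (1 + np.exp(-(u + b_cond * x))))

X = np.column_stack([np.ones_like(x), x])
b_marg = fit_logistic(X, y)[1]             # population-averaged (marginal) slope
# expected attenuation: b_cond / sqrt(1 + 0.346 * sigma_u**2) ~= 0.65
```

The marginal slope answers a group-level question, the conditional slope an individual-level one, which is exactly why the two models are not interchangeable for binary data.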
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
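The efficiency loss from varying cluster sizes can be illustrated with a simple design-effect calculation. Note this uses the coefficient-of-variation approximation attributed to Eldridge and colleagues, not the paper's second-order PQL derivation; the inputs are invented for illustration.

```python
def design_effect(m_bar, icc, cv=0.0):
    """Design effect for a cluster randomized trial with mean cluster size
    m_bar, intracluster correlation icc, and coefficient of variation cv
    of the cluster sizes (cv = 0 means equal clusters)."""
    return 1 + ((cv ** 2 + 1) * m_bar - 1) * icc

def extra_clusters_factor(m_bar, icc, cv):
    """Inflate the equal-cluster-size number of clusters by this factor
    to repair the efficiency loss from unequal cluster sizes."""
    return design_effect(m_bar, icc, cv) / design_effect(m_bar, icc, 0.0)
```

For realistic inputs (for example m_bar = 20, icc = 0.05, cv = 0.7) the inflation factor stays modest, consistent with the paper's observation that sampling on the order of 14 per cent more clusters often suffices.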
DEFF Research Database (Denmark)
Ozturk, I.; Ottosen, C.O.; Ritz, Christian
2011-01-01
Photosynthetic response to light was measured on the leaves of two cultivars of Rosa hybrida L. (Escimo and Mercedes) in the greenhouse to obtain light-response curves and their parameters. The aim was to use a model to simulate leaf photosynthetic carbon gain with respect to environmental conditions. Leaf gas exchanges were measured at 11 light intensities from 0 to 1,400 µmol/m2s, at 800 ppm CO2, 25°C, and 65 ± 5% relative humidity. In order to describe the data corresponding to different measurement dates, non-linear mixed-effects regression analysis was used. The model successfully… efficiency. The results suggested an acclimation response, as carbon assimilation rates and stomatal conductance at each measurement date were higher for Escimo than Mercedes. Differences in photosynthesis rates were attributed to the adaptive capacity of the cultivars to light conditions at a specific…
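Fitting a light-response curve of this kind can be sketched with a rectangular hyperbola, a common simplification of the models used for leaf photosynthesis (the paper's mixed-effects structure across measurement dates is omitted here). The data below are synthetic, generated on the paper's 0-1,400 µmol/m2s light grid with invented parameter values.

```python
import numpy as np
from scipy.optimize import curve_fit

def light_response(i, a_max, k, r_d):
    """Rectangular-hyperbola light-response curve: net assimilation as a
    function of irradiance i, with half-saturation k and dark respiration r_d."""
    return a_max * i / (i + k) - r_d

rng = np.random.default_rng(1)
par = np.linspace(0, 1400, 11)              # 11 light intensities, as in the study
truth = (20.0, 250.0, 1.5)                  # Amax, K, Rd (invented)
a_net = light_response(par, *truth) + rng.normal(0, 0.2, par.size)

popt, _ = curve_fit(light_response, par, a_net, p0=(15, 200, 1))
```

A mixed-effects extension would let Amax, K, and Rd vary randomly between measurement dates or leaves, which is what the non-linear mixed-effects regression in the abstract accomplishes.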
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
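The varying-coefficient idea in the abstract — a regression slope that changes smoothly with an area-level covariate, modeled via B-splines — can be sketched as follows. This is a deliberately simplified illustration (B-spline basis plus ordinary least squares on synthetic data); it omits the random area effects and MSE estimation that are central to the authors' small-area method, and all knots and parameters are invented.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
n, k = 2000, 3                               # sample size, cubic splines
t = np.concatenate([np.zeros(k + 1), [0.25, 0.5, 0.75], np.ones(k + 1)])
n_basis = len(t) - k - 1                     # 7 basis functions

def basis(s):
    """Evaluate all B-spline basis functions at the points s."""
    return np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(s)
                            for j in range(n_basis)])

s = rng.uniform(0, 0.999, n)                 # area-level covariate on [0, 1)
x = rng.normal(size=n)                       # regressor with a varying coefficient
beta_true = np.sin(2 * np.pi * s)            # true beta(s), invented
y = x * beta_true + rng.normal(0, 0.1, n)

Z = basis(s) * x[:, None]                    # design matrix for spline coefficients
gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
grid = np.linspace(0.05, 0.95, 10)
beta_hat = basis(grid) @ gamma               # recovered varying coefficient
```

The estimated curve beta_hat tracks the true sinusoidal coefficient closely; in the paper the same B-spline parameterization sits inside a linear mixed effect model so that random area effects can be predicted alongside it.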
Gao, Yan; Luquez, Cecilia; Lynggaard, Helle; Andersen, Henning; Saboo, Banshi
2014-12-01
The study aimed to confirm the efficacy, through non-inferiority, of patient-driven versus investigator-driven titration of biphasic insulin aspart 30 (BIAsp 30) in terms of glycemic control assessed by HbA1c change. SimpleMix was a 20 week, open-label, randomized, two-armed, parallel-group, multicenter study in five countries (Argentina, China, India, Poland, and the UK). Patients with type 2 diabetes were randomized into either patient-driven or investigator-driven BIAsp 30 titration groups. Non-inferiority of patient-driven vs. investigator-driven titration based on change in HbA1c from baseline to week 20 could not be demonstrated. Mean (SE) estimated change from baseline to week 20 was -0.72 (0.08)% in the patient-driven group and -0.97 (0.08)% in the investigator-driven group; estimated difference 0.25% (95% CI: 0.04; 0.46). Estimated mean change (SE) in fasting plasma glucose from baseline to week 20 was similar between groups: -0.94 (0.21) mmol/L for patient-driven and -1.07 (0.22) mmol/L for investigator-driven (difference non-significant). Both treatment arms were well tolerated, and hypoglycemic episode rates were similar between groups, with a rate ratio of 0.77 (95% CI: 0.54; 1.09; p = 0.143) for all hypoglycemic episodes and 0.78 (95% CI: 0.42; 1.43; p = 0.417) for nocturnal hypoglycemic episodes. Non-inferiority of patient-driven versus investigator-driven titration with regard to change from baseline to end-of-treatment HbA1c could not be confirmed. It is possible that a clinic visit 12 weeks after intensification of treatment with BIAsp 30 in patients with type 2 diabetes inadequately treated with basal insulin may benefit patient-driven titration of BIAsp 30. A limitation of the study was the relatively small number of patients recruited in each country, which does not allow country-specific analyses to be performed. Overall, treatment with BIAsp 30 was well tolerated in both treatment groups.
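The non-inferiority logic in this abstract reduces to a confidence-interval comparison: non-inferiority is shown only when the entire CI for the treatment difference lies below the pre-specified margin. The margin value below is hypothetical (the abstract does not report it); the CI is the one reported above.

```python
def non_inferiority_shown(ci_upper, margin):
    """Non-inferiority of A vs B (higher difference = worse) is demonstrated
    when the upper confidence limit of (A - B) lies below the margin."""
    return ci_upper < margin

# Reported difference in HbA1c change: 0.25% (95% CI 0.04 to 0.46).
# With a hypothetical margin of 0.4%, the CI crosses the margin, so
# non-inferiority cannot be claimed -- matching the abstract's conclusion.
```
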
International Nuclear Information System (INIS)
Well, R.; Langel, R.; Reineking, A.
2002-01-01
The variation in the natural abundance of 15N in atmospheric gas species is often used to determine the mixing of trace gases from different sources. With conventional budget calculations one unknown quantity can be determined if the remaining quantities are known. From 15N tracer studies in soils with highly enriched 15N-nitrate, a procedure is known to calculate the mixing of atmospheric and soil-derived N2 based on the measurement of the 30/28 and 29/28 ratios in gas samples collected from soil covers. Because of the non-random distribution of the mole masses 30N2, 29N2 and 28N2 in the mixing gas it is possible to calculate two quantities simultaneously, i.e. the mixing ratio of atmospheric and soil-derived N2, and the isotopic signature of the soil-derived N2. Routine standard measurements of laboratory air had suggested a non-random distribution of N2 mole masses. The objective of this study was to investigate and explain the existence of non-random distributions of 15N15N, 14N15N and 14N14N in N2 and N2O in environmental samples. The calculation of theoretical isotope data resulting from hypothetical mixing of two sources differing in 15N natural abundance demonstrated that the deviation from an ideal random distribution of mole masses is not detectable with the current precision of mass spectrometry. 15N analysis of N2 or N2O was conducted with randomised and non-randomised replicate samples of different origin. 15N abundances as calculated from 29/28 ratios were generally higher in randomised samples. The differences between the treatments ranged between 0.05 and 0.17‰ δ15N. It was concluded that the observed randomisation effect is probably caused by 15N15N fractionation during environmental processes. (author)
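The two-source calculation mentioned above — recovering both the atmospheric fraction and the soil-N2 atom fraction from the 29/28 and 30/28 ratios — follows from assuming a random (binomial) distribution of 15N over the N2 molecules within each source. This is a hedged numerical sketch with an invented synthetic sample, solved with a generic root finder rather than the authors' procedure.

```python
import numpy as np
from scipy.optimize import fsolve

A_ATM = 0.003663                     # 15N atom fraction of atmospheric N2

def mole_mass_fractions(a):
    """Binomial (random) distribution of masses 28/29/30 for atom fraction a."""
    return np.array([(1 - a) ** 2, 2 * a * (1 - a), a ** 2])

def ratios(f_atm, a_soil):
    """29/28 and 30/28 ratios of a mix: fraction f_atm air, rest soil N2."""
    x = (f_atm * mole_mass_fractions(A_ATM)
         + (1 - f_atm) * mole_mass_fractions(a_soil))
    return x[1] / x[0], x[2] / x[0]

def solve_mixing(r29, r30, guess=(0.5, 0.3)):
    """Recover (atmospheric fraction, soil-N2 atom fraction) from the ratios."""
    def eqs(p):
        m29, m30 = ratios(*p)
        return [m29 - r29, m30 - r30]
    return fsolve(eqs, guess)

# Synthetic sample: 90% air mixed with soil-derived N2 of atom fraction 0.5,
# as arises in experiments with highly 15N-enriched nitrate.
r29, r30 = ratios(0.9, 0.5)
f_hat, a_hat = solve_mixing(r29, r30)
```

Because the two ratios respond differently to the mixing fraction and the soil enrichment, the pair of equations pins down both unknowns simultaneously, which is the point made in the abstract.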
Directory of Open Access Journals (Sweden)
Santana Isabel
2011-08-01
Abstract: Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but has presently a limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, like Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press' Q test showed that all classifiers performed better than chance alone (p < .05). Conclusions: When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing.
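A comparison of the two classifiers the study ranks first can be sketched with cross-validation in scikit-learn. The synthetic data below (10 features standing in for the 10 neuropsychological tests) and all settings are invented for illustration; this is not the study's dataset or protocol.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 10 predictors, a binary "progresses to dementia" label -- synthetic stand-in.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
# 5-fold cross-validated accuracy, mirroring the abstract's validation scheme.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```

In practice one would also compare sensitivity, specificity, and AUC, as the study does, rather than accuracy alone.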
Aarons, Gregory A; Ehrhart, Mark G; Farahnak, Lauren R; Hurlburt, Michael S
2015-01-16
Leadership is important in the implementation of innovation in business, health, and allied health care settings. Yet there is a need for empirically validated organizational interventions for coordinated leadership and organizational development strategies to facilitate effective evidence-based practice (EBP) implementation. This paper describes the initial feasibility, acceptability, and perceived utility of the Leadership and Organizational Change for Implementation (LOCI) intervention. A transdisciplinary team of investigators and community stakeholders worked together to develop and test a leadership and organizational strategy to promote effective leadership for implementing EBPs. Participants were 12 mental health service team leaders and their staff (n = 100) from three different agencies that provide mental health services to children and families in California, USA. Supervisors were randomly assigned to the 6-month LOCI intervention or to a two-session leadership webinar control condition provided by a well-known leadership training organization. We utilized mixed methods with quantitative surveys and qualitative data collected via surveys and a focus group with LOCI trainees. Quantitative and qualitative analyses support the LOCI training and organizational strategy intervention in regard to feasibility, acceptability, and perceived utility, as well as impact on leader and supervisee-rated outcomes. The LOCI leadership and organizational change for implementation intervention is a feasible and acceptable strategy that has utility to improve staff-rated leadership for EBP implementation. Further studies are needed to conduct rigorous tests of the proximal and distal impacts of LOCI on leader behaviors, implementation leadership, organizational context, and implementation outcomes. The results of this study suggest that LOCI may be a viable strategy to support organizations in preparing for the implementation and sustainment of EBP.
Using generalized linear (mixed) models in HCI
Kaptein, M.C.; Robertson, J; Kaptein, M
2016-01-01
In HCI we often encounter dependent variables which are not (conditionally) normally distributed: we measure response-times, mouse-clicks, or the number of dialog steps it took a user to complete a task. Furthermore, we often encounter nested or grouped data; users are grouped within companies or
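As a generic illustration of the point above (not code from the paper), count outcomes such as mouse-clicks or dialog steps are commonly modeled with a Poisson GLM and log link. The sketch below fits one by iteratively reweighted least squares (Fisher scoring) on simulated data; all names, coefficients, and sample sizes are invented:

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=50):
    """Fit a Poisson GLM with a log link by Fisher scoring (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu   # working response for the log link
        W = mu                    # IRLS weights (diagonal of Var(y) for Poisson)
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated HCI-style count outcome (e.g., dialog steps); values are illustrative
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x)).astype(float)
beta = fit_poisson_glm(X, y)
```

At the maximum-likelihood estimate the score X^T(y - mu) vanishes, which makes a convenient sanity check on convergence. Extending this to the grouped (mixed-model) setting adds random effects per company or user, which dedicated libraries handle.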
Linear Mixed Models in Statistical Genetics
R. de Vlaming (Ronald)
2017-01-01
One of the goals of statistical genetics is to elucidate the genetic architecture of phenotypes (i.e., observable individual characteristics) that are affected by many genetic variants (e.g., single-nucleotide polymorphisms; SNPs). A particular aim is to identify specific SNPs that
International Nuclear Information System (INIS)
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
International Nuclear Information System (INIS)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.
Naclerio, Fernando; Seijo-Bujia, Marco; Larumbe-Zabala, Eneko; Earnest, Conrad P
2017-10-01
Beef powder is a new high-quality protein source scarcely researched relative to exercise performance. The present study examined the impact of ingesting hydrolyzed beef protein, whey protein, and carbohydrate on strength performance (1RM), body composition (via plethysmography), limb circumferences and muscular thickness (via ultrasonography), following an 8-week resistance-training program. After being randomly assigned to one of the following groups: Beef, Whey, or Carbohydrate, twenty-four recreationally physically active males (n = 8 per treatment) ingested 20 g of supplement, mixed with orange juice, once a day (immediately after workout or before breakfast). Post-intervention changes were examined as percent change and 95% CIs. Beef (2.0%, CI, 0.2-2.38%) and Whey (1.4%, CI, 0.2-2.6%) but not Carbohydrate (0.0%, CI, -1.2-1.2%) increased fat-free mass. All groups increased vastus medialis thickness: Beef (11.1%, CI, 6.3-15.9%), Whey (12.1%, CI, 4.0-20.2%), Carbohydrate (6.3%, CI, 1.9-10.6%). Beef (11.2%, CI, 5.9-16.5%) and Carbohydrate (4.5%, CI, 1.6-7.4%), but not Whey (1.1%, CI, -1.7-4.0%), increased biceps brachialis thickness, while only Beef increased arm (4.8%, CI, 2.3-7.3%) and thigh (11.2%, CI, 0.4-5.9%) circumferences. Although all three groups significantly improved 1RM squat (Beef 21.6%, CI, 5.5-37.7%; Whey 14.6%, CI, 5.9-23.3%; Carbohydrate 19.6%, CI, 2.2-37.1%), for the 1RM bench press the improvements were significant for Beef (15.8%, CI, 7.0-24.7%) and Whey (5.8%, CI, 1.7-9.8%) but not for Carbohydrate (11.4%, CI, -0.9-23.6%). Protein-carbohydrate supplementation supports fat-free mass accretion and lower-body hypertrophy. Hydrolyzed beef promotes upper-body hypertrophy along with similar performance outcomes as observed when supplementing with whey isolate or maltodextrin.
Directory of Open Access Journals (Sweden)
Tourangeau Ann
2008-12-01
Abstract Background Foot ulcers are a significant problem for people with diabetes. Comprehensive assessments of risk factors associated with diabetic foot ulcer are recommended in clinical guidelines to decrease complications such as prolonged healing, gangrene and amputations, and to promote effective management. However, the translation of clinical guidelines into nursing practice remains fragmented and inconsistent, and a recent homecare chart audit showed that less than half the recommended risk factors for diabetic foot ulcers were assessed, and peripheral neuropathy (the most significant predictor of complications) was not assessed at all. Strong leadership is consistently described as significant to successfully transferring guidelines into practice. Limited research exists, however, regarding which leadership behaviours facilitate and support implementation in nursing. The purpose of this pilot study is to evaluate the impact of a leadership intervention in community nursing on implementing recommendations from a clinical guideline on the nursing assessment and management of diabetic foot ulcers. Methods A two-phase mixed-methods design is proposed (ISRCTN 12345678). Phase I: a descriptive qualitative study to understand barriers to implementing the guideline recommendations, and to inform the intervention. Phase II: a matched-pair cluster randomized controlled trial (n = 4 centers) will evaluate differences in outcomes between two implementation strategies. Primary outcome: nursing assessments of client risk factors, a composite score of 8 items based on Diabetes/Foot Ulcer guideline recommendations. Intervention: in addition to the organization's 'usual' implementation strategy, a 12-week leadership strategy will be offered to managerial and clinical leaders, consisting of: (a) printed materials; (b) a one-day interactive workshop to develop a leadership action plan tailored to barriers to support implementation; (c) three post-workshop teleconferences. Discussion This
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
The transition model test for serial dependence in mixed-effects models for binary data
DEFF Research Database (Denmark)
Breinegaard, Nina; Rabe-Hesketh, Sophia; Skrondal, Anders
2017-01-01
Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model...
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite their three-dimensionality.
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye
2017-06-01
Low falling number and price discounts when grain is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer-quality bread products. To effectively breed for low LMAA, it is necessary to understand which genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA, were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in the SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
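The broad-sense heritability figures quoted above are, in the usual multi-environment formulation, ratios of genotypic variance to phenotypic variance on an entry-mean basis. A minimal sketch of that standard formula; the variance components and trial dimensions below are invented for illustration, not taken from the paper:

```python
def broad_sense_heritability(v_g, v_ge, v_e, n_env, n_rep):
    """Entry-mean broad-sense heritability:
       H^2 = Vg / (Vg + Vge/E + Ve/(E*R))
    v_g: genotypic variance, v_ge: genotype-by-environment variance,
    v_e: residual variance, n_env: environments (E), n_rep: replicates (R).
    """
    v_phenotypic = v_g + v_ge / n_env + v_e / (n_env * n_rep)
    return v_g / v_phenotypic

# Hypothetical variance components (illustrative only)
h2 = broad_sense_heritability(v_g=2.0, v_ge=1.0, v_e=3.0, n_env=5, n_rep=2)
# → 0.8
```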
DEFF Research Database (Denmark)
Asmussen, J. C.; Ibrahim, S. R.; Brincker, Rune
1998-01-01
This paper demonstrates how to use the Random Decrement (RD) technique for identification of linear structures subjected to ambient excitation. The theory behind the technique will be presented and guidelines on how to choose the different variables will be given. This is done by introducing a new...
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
International Nuclear Information System (INIS)
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Mixing of solids in different mixing devices
Indian Academy of Sciences (India)
INGRID BAUMAN, DUŠKA ĆURIĆ and MATIJA BOBAN ... whose main cause is the difference in particle size, density, shape and resilience. ... Gyebis J, Katai F 1990 Determination and randomness in mixing of particulate solids, Chem.
Test Pattern Generator for Mixed Mode BIST
Energy Technology Data Exchange (ETDEWEB)
Kim, Hong Sik; Lee, Hang Kyu; Kang, Sung Ho [Yonsei University (Korea, Republic of)
1998-07-01
As the integration density of VLSI increases, BIST (Built-In Self-Test) is used as an effective method to test chips. Generally, pseudo-random test pattern generation is used for BIST, but it requires a large number of test patterns when random-pattern-resistant faults exist. Deterministic testing is therefore an interesting BIST technique, owing to its minimal number of test patterns and its high fault coverage. However, it is often not applicable, since existing deterministic test pattern generators require too much area overhead despite their efficiency. We therefore propose a mixed test scheme which applies to the circuit under test a deterministic test sequence followed by a pseudo-random one. This scheme allows the maximum fault coverage to be achieved; furthermore, the silicon area overhead of the mixed hardware generator can be reduced. The deterministic test generator is built with a finite state machine, and the pseudo-random test generator with an LFSR (linear feedback shift register). The results on ISCAS circuits show that the maximum fault coverage is guaranteed with a small test set and little hardware overhead. (author). 15 refs., 10 figs., 4 tabs.
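As a sketch of the pseudo-random half of such a scheme (a generic software model, not the authors' hardware design), a Fibonacci LFSR can be simulated in a few lines. The 4-bit tap set below is a maximal-length configuration, so the register cycles through all 15 nonzero states before repeating:

```python
def lfsr_patterns(seed, taps, n_bits, count):
    """Generate `count` successive states of a Fibonacci LFSR.

    seed: nonzero initial state; taps: bit positions XORed into the feedback;
    n_bits: register width. Each step shifts left and inserts the feedback bit.
    """
    state = seed
    mask = (1 << n_bits) - 1
    out = []
    for _ in range(count):
        out.append(state)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask
    return out

# 4-bit maximal-length LFSR: period 2**4 - 1 = 15
patterns = lfsr_patterns(seed=0b1001, taps=(3, 2), n_bits=4, count=15)
```

In a mixed-mode BIST along the lines described above, a short deterministic sequence would be applied first, and pseudo-random states such as these would then exercise the remaining, randomly testable faults.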
Phylogenetic mixtures and linear invariants for equal input models.
Casanellas, Marta; Steel, Mike
2017-04-01
The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
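The equal input model admits a simple closed form for its transition probabilities: with a suitable rate normalization, P_ij(t) = e^(-βt)·δ_ij + (1 − e^(-βt))·π_j, reducing to Felsenstein 1981 for four states and to Jukes-Cantor when π is uniform. A minimal sketch of that closed form (the stationary distribution and times below are illustrative, not from the paper):

```python
import math

def equal_input_P(t, pi, beta=1.0):
    """Transition matrix of the equal input model:
       P_ij(t) = e^(-beta*t) * delta_ij + (1 - e^(-beta*t)) * pi_j.
    pi: stationary distribution over the states; beta: overall rate."""
    k = len(pi)
    decay = math.exp(-beta * t)
    return [[decay * (1.0 if i == j else 0.0) + (1.0 - decay) * pi[j]
             for j in range(k)]
            for i in range(k)]

# A uniform pi over 4 states recovers the Jukes-Cantor form
P = equal_input_P(t=0.5, pi=[0.25, 0.25, 0.25, 0.25])
```

Each row of P is a probability distribution, and all off-diagonal entries within a row toward the same target state agree, which is the structural feature the model's linear invariants exploit.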
Directory of Open Access Journals (Sweden)
Sílvia Maria Santana Mapa
2012-01-01
This study aims to evaluate the quality of the solutions for the facility location-allocation problem generated by a GIS-T (Geographic Information System for Transportation), obtained from the combined use of the Facility Location and Transportation Problem routines, compared with the optimal solutions obtained from an exact mathematical model based on Mixed Integer Linear Programming (MILP), developed externally to the GIS. The models were applied to three simulations: the first proposes opening factories and allocating customers in the state of São Paulo; the second involves a wholesaler and a study of distribution center location and retailer allocation; the third locates daycare centers in an urban context, allocating the demand. The results showed that, when facility capacities are considered, the optimizing MILP model produced, in one of the simulated scenarios, results up to 37% better than the GIS, besides proposing different sites for opening new facilities. When capacity is not considered, the GIS model proved as efficient as the exact MILP model, reaching exactly the same solutions.
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Chen, Wei; Qian, Lei; Watada, Hirotaka; Li, Peng Fei; Iwamoto, Noriyuki; Imori, Makoto; Yang, Wen Ying
2017-01-01
The pathophysiology of diabetes differs between Asian and Western patients in many ways, and diet is a primary contributor. The present study examined the effect of diet on the efficacy of 25% insulin lispro/75% insulin lispro protamine suspension (LM25) and 50% insulin lispro/50% insulin lispro protamine suspension (LM50) as starter insulin in Chinese and Japanese patients with type 2 diabetes and inadequate glycemic control with oral antidiabetic medication. This was a predefined subgroup analysis of a phase 4, open-label, 26-week, parallel-arm, randomized (computer-generated random sequence) trial (21 January 2013 to 22 August 2014). Nutritional intake was assessed from food records kept by participants before study drug administration. Outcomes assessed were changes from baseline in self-monitored blood glucose, 1,5-anhydroglucitol and glycated hemoglobin. In total, 328 participants were randomized to receive twice-daily LM25 (n = 168) or LM50 (n = 160). Median daily nutritional intake (by weight and percentage of total energy) was 230.8 g of carbohydrate (54%), 56.5 g of fat (31%) and 66 g of protein (15%). Improvements in self-monitored blood glucose were significantly greater (P ≤ 0.028) in the LM50 group than in the LM25 group, regardless of nutritional intake. When carbohydrate (by weight or percentage energy) or fat (by weight) intake exceeded median levels, LM50 was significantly more efficacious than LM25 (P ≤ 0.026) in improving 1,5-anhydroglucitol and glycated hemoglobin. Glycemic control improved in both LM25 and LM50 groups, but LM50 was significantly more efficacious under certain dietary conditions, particularly with increased carbohydrate intake. © 2016 The Authors. Journal of Diabetes Investigation published by Asian Association for the Study of Diabetes (AASD) and John Wiley & Sons Australia, Ltd.
Johnson, Miriam J; Booth, Sara; Currow, David C; Lam, Lawrence T; Phillips, Jane L
2016-05-01
The handheld fan is an inexpensive and safe way to provide facial airflow, which may reduce the sensation of chronic refractory breathlessness, a frequently encountered symptom. To test the feasibility of developing an adequately powered, multicenter, multinational randomized controlled trial comparing the efficacy of a handheld fan and exercise advice with advice alone in increasing activity in people with chronic refractory breathlessness from a variety of medical conditions, measuring recruitment rates; data quality; and potential primary outcome measures. This was a Phase II, multisite, international, parallel, nonblinded, mixed-methods randomized controlled trial. Participants were centrally randomized to fan or control. All received breathlessness self-management/exercise advice and were followed up weekly for four weeks. Participants/carers were invited to participate in a semistructured interview at the study's conclusion. Ninety-seven people were screened, 49 randomized (mean age 68 years; 49% men), and 43 completed the study. Site recruitment varied from 0.25 to 3.3/month and screening:randomization from 1.1:1 to 8.5:1. There were few missing data except for the Chronic Obstructive Pulmonary Disease Self-Efficacy Scale (two-thirds of data missing). No harms were observed. Three interview themes included 1) a fan is a helpful self-management strategy, 2) a fan aids recovery, and 3) a symptom control trial was welcome. A definitive, multisite trial to study the use of the handheld fan as part of self-management of chronic refractory breathlessness is feasible. Participants found the fan useful. However, the value of information for changing practice or policy is unlikely to justify the expense of such a trial, given perceived benefits, the minimal costs, and an absence of harms demonstrated in this study. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Mixed models for predictive modeling in actuarial science
Antonio, K.; Zhang, Y.
2012-01-01
We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that, conversely, every linear program can be reduced to a Chebyshev linear approximation problem.
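The well-known forward direction can be made concrete: min_x ||Ax − b||_∞ becomes an LP by adding a scalar t with constraints −t ≤ (Ax − b)_i ≤ t and minimizing t. The sketch below only assembles the LP data in the standard inequality form (no solver is invoked; the tiny constant-fit instance has the closed-form optimum (max b − min b)/2):

```python
import numpy as np

def chebyshev_to_lp(A, b):
    """Reduce min_x ||Ax - b||_inf to an LP over z = (x, t):
       minimize c @ z  subject to  G @ z <= h,
    where the constraints encode  Ax - t*1 <= b  and  -Ax - t*1 <= -b."""
    m, n = A.shape
    ones = np.ones((m, 1))
    c = np.r_[np.zeros(n), 1.0]                 # objective: minimize t
    G = np.block([[A, -ones], [-A, -ones]])     # stacked inequality rows
    h = np.r_[b, -b]
    return c, G, h

# Tiny instance: fit a single constant to b
A = np.ones((3, 1))
b = np.array([0.0, 1.0, 4.0])
c, G, h = chebyshev_to_lp(A, b)
t_star = (b.max() - b.min()) / 2   # closed-form optimum for the constant fit
```

The returned triple can be handed to any inequality-form LP solver; the point of the paper is the less obvious converse reduction.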
King, David G; Walker, Mark; Campbell, Matthew D; Breen, Leigh; Stevenson, Emma J; West, Daniel J
2018-04-01
Large doses of whey protein consumed as a preload before single high-glycemic-load meals have been shown to improve postprandial glycemia in type 2 diabetes. It is unclear whether this effect remains with smaller doses of whey co-ingested at consecutive mixed-macronutrient meals. Moreover, whether hydrolyzed whey offers further benefit under these conditions is unclear. The aim of this study was to investigate postprandial glycemic and appetite responses after small doses of intact and hydrolyzed whey protein co-ingested with mixed-nutrient breakfast and lunch meals in men with type 2 diabetes. In a randomized, single-blind crossover design, 11 men with type 2 diabetes [mean ± SD age: 54.9 ± 2.3 y; glycated hemoglobin: 6.8% ± 0.3% (51.3 ± 3.4 mmol/mol)] attended the laboratory on 3 mornings and consumed 1) intact whey protein (15 g), 2) hydrolyzed whey protein (15 g), or 3) placebo (control) immediately before mixed-macronutrient breakfast and lunch meals, separated by 3 h. Blood samples were collected periodically and were processed for insulin, intact glucagon-like peptide 1 (GLP-1), gastric inhibitory polypeptide (GIP), leptin, peptide tyrosine tyrosine (PYY3-36), and amino acid concentrations. Interstitial glucose was measured during and for 24 h after each trial. Subjective appetite was assessed with the use of visual analog scales. Total postprandial glycemia area under the curve was reduced by 13% ± 3% after breakfast following the intact whey protein when compared with control (P < 0.05). The consumption of a small 15-g dose of intact whey protein immediately before consecutive mixed-macronutrient meals improves postprandial glycemia, stimulates insulin release, and increases satiety in men with type 2 diabetes. This trial was registered at www.clinicialtrials.gov as NCT02903199.
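The total postprandial area under the curve referred to above is conventionally computed with the trapezoidal rule over the sampled glucose-time curve. A generic sketch of that calculation; the sampling times and glucose values below are invented, not the study's data:

```python
def trapezoid_auc(times, values):
    """Total area under a sampled curve by the trapezoidal rule."""
    auc = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        auc += dt * (values[i] + values[i - 1]) / 2.0
    return auc

# Hypothetical interstitial glucose readings (mmol/L) at minutes 0..120
times = [0, 30, 60, 90, 120]
glucose = [5.0, 8.0, 7.0, 6.0, 5.5]
auc = trapezoid_auc(times, glucose)
# → 787.5 (mmol/L x min)
```

An incremental AUC variant would first subtract the baseline reading from every value; which variant a study reports should be checked against its methods section.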
Directory of Open Access Journals (Sweden)
Maria Arlene Fausto
2008-03-01
A longitudinal data set is characterized by a sequence of two or more observations from each individual. In cohort studies, these data usually have an unbalanced structure. A cohort involving the longitudinal growth assessment of infants born to HIV-infected mothers was followed at the pediatric AIDS outpatient clinic of the Hospital das Clínicas of the Federal University of Minas Gerais, Minas Gerais, Brazil. The objective of this study is to demonstrate the application of the linear mixed model to the analysis of unbalanced longitudinal data from this cohort. The results show that, at six months of age, boys were on average 1.8 cm taller than girls, and seroreverter children were on average 2.9 cm taller than infected ones. At 12 months of age, the difference in height between boys and girls was on average 2.4 cm, while the difference between infected and seroreverter children was on average 3.5 cm. Besides describing the longitudinal growth behavior, the model also allows the children's growth rate to be estimated by sex and group.
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
DEFF Research Database (Denmark)
Swantes, Melody
2011-01-01
In the United States, the agricultural industry is dependent on men and women from Mexico who migrate throughout the country to participate in the care and harvest of crops. They often migrate independently of their families and leave loved ones behind. Separation from families and difficult...... are not able to meet, in culturally sensitive ways, the needs presented by this population. The purpose of this study was to examine the effects of music therapy on Mexican farmworkers' levels of depression, anxiety, and social isolation. In addition, this study sought to examine how the migrant farmworkers used...... music-making sessions between music therapy sessions as a coping skill to further improve their overall mental health. Finally, this study sought to examine how migrant farmworkers engaged in the research process and how they valued their relationship with the researcher. This study utilized a mixed...
International Nuclear Information System (INIS)
Grossman, Y.
1997-10-01
In supersymmetric models with nonvanishing Majorana neutrino masses, the sneutrino and antisneutrino mix. The conditions under which this mixing is experimentally observable are studied, and the mass splitting of the sneutrino mass eigenstates and sneutrino oscillation phenomena are analyzed.
Directory of Open Access Journals (Sweden)
Tanwiwat Jaikuna
2017-02-01
Purpose: To develop an in-house software program able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system. The equivalent dose in 2 Gy fractions (EQD2) was calculated from the biologically effective dose (BED) based on the LQL model. The software calculation and a manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose were found between CERR and the treatment planning system (TPS): in Oncentra, 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and less than 1% in Pinnacle. The EQD2 from the software calculation and the manual calculation did not differ significantly (0.00% difference; p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively). Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
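The EQD2 conversion underlying such software follows directly from the biologically effective dose. A hedged sketch of the standard LQ-based formulas only (the LQL model additionally switches to a linear response above a transition dose, which is omitted here; the α/β values in the example are illustrative, not taken from the paper):

```python
# EQD2 via the standard linear-quadratic (LQ) model.
# BED  = D * (1 + d / (alpha/beta))
# EQD2 = BED / (1 + 2 / (alpha/beta))
# Note: Isobio uses the LQL model, whose linear tail above a transition
# dose is deliberately not reproduced in this simplified sketch.

def bed(total_dose, dose_per_fraction, alpha_beta):
    """Biologically effective dose in Gy."""
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose delivered in 2 Gy fractions."""
    return bed(total_dose, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# Sanity check: a course already given in 2 Gy fractions is its own EQD2.
tumour = eqd2(70, 2, 10)   # 70 Gy in 2 Gy fractions, alpha/beta = 10 Gy
late = eqd2(39, 3, 3)      # hypofractionated course, late-responding tissue
```

By construction, `eqd2(70, 2, 10)` returns 70 Gy, which is a convenient self-check when validating a converter against manual calculation as the paper does.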
Uniqueness theorems in linear elasticity
Knops, Robin John
1971-01-01
The classical result for uniqueness in elasticity theory is due to Kirchhoff. It states that the standard mixed boundary value problem for a homogeneous isotropic linear elastic material in equilibrium and occupying a bounded three-dimensional region of space possesses at most one solution in the classical sense, provided the Lamé and shear moduli, λ and μ respectively, obey the inequalities (3λ + 2μ) > 0 and μ > 0. In linear elastodynamics the analogous result, due to Neumann, is that the initial-mixed boundary value problem possesses at most one solution provided the elastic moduli satisfy the same set of inequalities as in Kirchhoff's theorem. Most standard textbooks on the linear theory of elasticity mention only these two classical criteria for uniqueness and neglect altogether the abundant literature which has appeared since the original publications of Kirchhoff. To remedy this deficiency it seems appropriate to attempt a coherent description of the various contributions made to the study of uniqueness...
Swindle, Taren; Johnson, Susan L; Whiteside-Mansell, Leanne; Curran, Geoffrey M
2017-07-18
Despite the potential to reach at-risk children in childcare, there is a significant gap between current practices and evidence-based obesity prevention in this setting. There are few investigations of the impact of implementation strategies on the uptake of evidence-based practices (EBPs) for obesity prevention and nutrition promotion. This study protocol describes a three-phase approach to developing and testing implementation strategies to support uptake of EBPs for obesity prevention practices in childcare (i.e., key components of the WISE intervention). Informed by the i-PARIHS framework, we will use a stakeholder-driven evidence-based quality improvement (EBQI) process to apply information gathered in qualitative interviews on barriers and facilitators to practice to inform the design of implementation strategies. Then, a Hybrid Type III cluster randomized trial will compare a basic implementation strategy (i.e., intervention as usual) with an enhanced implementation strategy informed by stakeholders. All Head Start centers (N = 12) within one agency in an urban area in a southern state in the USA will be randomized to receive the basic or enhanced implementation with approximately 20 classrooms per group (40 educators, 400 children per group). The educators involved in the study, the data collectors, and the biostatistician will be blinded to the study condition. The basic and enhanced implementation strategies will be compared on outcomes specified by the RE-AIM model (e.g., Reach to families, Effectiveness of impact on child diet and health indicators, Adoption commitment of agency, Implementation fidelity and acceptability, and Maintenance after 6 months). Principles of formative evaluation will be used throughout the hybrid trial. This study will test a stakeholder-driven approach to improve implementation, fidelity, and maintenance of EBPs for obesity prevention in childcare. Further, this study provides an example of a systematic process to develop
Wilkes, Scott; Pearce, Simon; Ryan, Vicky; Rapley, Tim; Ingoe, Lorna; Razvi, Salman
2013-03-22
The population of the UK is ageing. There is compelling evidence that thyroid stimulating hormone distribution levels increase with age. Currently, in UK clinical practice elderly hypothyroid patients are treated with levothyroxine to lower their thyroid stimulating hormone levels to a standard non-age-related range. Evidence suggests that mortality is negatively associated with thyroid stimulating hormone levels. We report the protocol of a feasibility study working towards a full-scale randomized controlled trial to test whether lower dose levothyroxine has beneficial cardiovascular outcomes in the oldest old. SORTED is a mixed methods study with three components: SORTED A: A feasibility study of a dual-center single-blinded randomized controlled trial of elderly hypothyroid patients currently treated with levothyroxine. Patients will be recruited from 20 general practices and two hospital trust endocrine units in Northumberland, Tyne and Wear. Target recruitment of 50 elderly hypothyroid patients currently treated with levothyroxine, identified in both primary and secondary care settings. Reduced dose of levothyroxine to achieve an elevated serum thyroid stimulating hormone (target range 4.1 to 8.0 mU/L) versus standard levothyroxine replacement (target range 0.4 to 4.0 mU/L). Using random permuted blocks, in a ratio of 1:1, randomization will be carried out by Newcastle Clinical Trials Unit. Study feasibility (recruitment and retention rates and medication compliance), acceptability of the trial design, assessment of mobility and falls risk, and change in cardiovascular risk factors. Qualitative study using in-depth interviews to understand patients' willingness to take part in a randomized controlled trial and participants' experience of the intervention. Retrospective cohort study of 400 treated hypothyroid patients aged 80 years or over registered in 2008 in primary care practices, studying their 4-year cardiovascular outcomes to inform the power of SORTED
A random number generator for continuous random variables
Guerra, V. M.; Tapia, R. A.; Thompson, J. R.
1972-01-01
A FORTRAN IV routine is given which may be used to generate random observations of a continuous real-valued random variable. Normal distribution of F(x), X, E(akimas), and E(linear) is presented in tabular form.
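Routines of this kind typically rely on inverse-transform sampling: draw u ~ U(0,1) and return F⁻¹(u). A Python sketch of the principle (not the FORTRAN routine itself), inverting a user-supplied CDF numerically by bisection; the exponential example and all names are illustrative:

```python
import math, random

# Inverse-transform sampling: if U ~ Uniform(0,1) and F is a continuous
# CDF, then F^{-1}(U) has distribution F. Here F is inverted by bisection
# on a bracketing interval [lo, hi]; names and data are illustrative.

def sample_from_cdf(F, lo, hi, rng, tol=1e-10):
    u = rng.random()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < u:
            lo = mid        # the quantile lies to the right of mid
        else:
            hi = mid        # the quantile lies to the left of mid
    return 0.5 * (lo + hi)

def exp_cdf(x):
    """CDF of the Exponential(1) distribution."""
    return 1.0 - math.exp(-x)

rng = random.Random(12345)
draws = [sample_from_cdf(exp_cdf, 0.0, 50.0, rng) for _ in range(2000)]
mean = sum(draws) / len(draws)   # should be close to E[X] = 1
```

Bisection only needs the CDF to be continuous and monotone, which is why such routines work from tabulated F(x) values as well as closed forms.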
Directory of Open Access Journals (Sweden)
Angela O’Dea
2015-01-01
Objective. To evaluate a 12-week group-based lifestyle intervention programme for women with prediabetes following gestational diabetes (GDM). Design. A two-group, mixed methods randomized controlled trial in which 50 women with a history of GDM and abnormal glucose tolerance postpartum were randomly assigned to intervention (n=24) or wait control (n=26), with postintervention qualitative interviews with participants. Main Outcome Measures. Modifiable biochemical, anthropometric, behavioural, and psychosocial risk factors associated with the development of type 2 diabetes. The primary outcome variable was the change in fasting plasma glucose (FPG) from study entry to one-year follow-up. Results. At one-year follow-up, the intervention group showed significant improvements over the wait control group on stress, diet self-efficacy, and quality of life. There was no evidence of an effect of the intervention on measures of biochemistry or anthropometry; the effect on one health behaviour, diet adherence, was close to significance. Conclusions. Prevention programmes must tackle the barriers to participation faced by this population; home-based interventions should be investigated. Strategies for promoting long-term health self-management need to be developed and tested.
O'Dea, Angela; Tierney, Marie; McGuire, Brian E; Newell, John; Glynn, Liam G; Gibson, Irene; Noctor, Eoin; Danyliv, Andrii; Connolly, Susan B; Dunne, Fidelma P
2015-01-01
To evaluate a 12-week group-based lifestyle intervention programme for women with prediabetes following gestational diabetes (GDM). A two-group, mixed methods randomized controlled trial in which 50 women with a history of GDM and abnormal glucose tolerance postpartum were randomly assigned to intervention (n = 24) or wait control (n = 26) and postintervention qualitative interviews with participants. Modifiable biochemical, anthropometric, behavioural, and psychosocial risk factors associated with the development of type 2 diabetes. The primary outcome variable was the change in fasting plasma glucose (FPG) from study entry to one-year follow-up. At one-year follow-up, the intervention group showed significant improvements over the wait control group on stress, diet self-efficacy, and quality of life. There was no evidence of an effect of the intervention on measures of biochemistry or anthropometry; the effect on one health behaviour, diet adherence, was close to significance. Prevention programmes must tackle the barriers to participation faced by this population; home-based interventions should be investigated. Strategies for promoting long-term health self-management need to be developed and tested.
Goodness-of-fit tests in mixed models
Claeskens, Gerda
2009-05-12
Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.
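The idea of testing the normality assumption on estimated random effects can be sketched as follows. This illustrates the principle only: crude moment estimates and a plain Kolmogorov-Smirnov distance stand in for the paper's nonparametric tests, and the simulated data are invented:

```python
import math, random

# Sketch: estimate random effects (here taken directly as simulated
# intercepts) and measure their distance to a fitted normal distribution
# with a Kolmogorov-Smirnov statistic. Illustration only, not the
# estimator proposed in the paper.

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_to_normal(effects):
    """Two-sided KS distance between effects and a moment-fitted normal."""
    n = len(effects)
    mu = sum(effects) / n
    sigma = math.sqrt(sum((e - mu) ** 2 for e in effects) / (n - 1))
    xs = sorted(effects)
    return max(max(abs((i + 1) / n - normal_cdf(x, mu, sigma)),
                   abs(i / n - normal_cdf(x, mu, sigma)))
               for i, x in enumerate(xs))

rng = random.Random(7)
# Simulated random intercepts: normal vs. a two-point (bimodal) mixture.
normal_effects = [rng.gauss(0.0, 1.0) for _ in range(200)]
bimodal_effects = [rng.choice([-3.0, 3.0]) + rng.gauss(0.0, 0.2)
                   for _ in range(200)]
d_normal = ks_to_normal(normal_effects)
d_bimodal = ks_to_normal(bimodal_effects)   # expected to be much larger
```

A large distance flags a non-normal random-effects distribution; as the paper notes, the same machinery can then suggest an estimate of the alternative distribution.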
Swendeman, Dallas; Ramanathan, Nithya; Baetscher, Laura; Medich, Melissa; Scheffler, Aaron; Comulada, W Scott; Estrin, Deborah
2015-05-01
Self-monitoring by mobile phone applications offers new opportunities to engage patients in self-management. Self-monitoring has not been examined thoroughly as a self-directed intervention strategy for self-management of multiple behaviors and states by people living with HIV (PLH). PLH (n = 50), primarily African American and Latino, were recruited from 2 AIDS services organizations and randomly assigned to daily smartphone (n = 34) or biweekly Web-survey only (n = 16) self-monitoring for 6 weeks. Smartphone self-monitoring included responding to brief surveys on medication adherence, mental health, substance use, and sexual risk behaviors, and brief text diaries on stressful events. Qualitative analyses examine biweekly open-ended user-experience interviews regarding perceived benefits and barriers of self-monitoring, and elaborate a theoretical model for the potential efficacy of self-monitoring to support self-management across multiple domains. Self-monitoring functions include reflection for self-awareness, cues to action (reminders), reinforcements from self-tracking, and their potential effects on risk perceptions, motivations, skills, and behavioral activation states. Participants also reported therapeutic benefits related to self-expression for catharsis, nonjudgmental disclosure, and in-the-moment support. About one-third of participants reported that surveys were too long, frequent, or tedious. Some smartphone group participants suggested that daily self-monitoring was more beneficial than biweekly due to frequency and in-the-moment availability. About twice as many daily self-monitoring group participants reported increased awareness and behavior change support from self-monitoring compared with biweekly Web-survey only participants. Self-monitoring is a potentially efficacious disruptive innovation for supporting self-management by PLH and for complementing other interventions, but more research is needed to confirm efficacy, adoption, and sustainability.
Directory of Open Access Journals (Sweden)
Anne Nilsson
Berries and associated bioactive compounds, e.g. polyphenols and dietary fibre (DF), may have beneficial implications with respect to the metabolic syndrome, including also cognitive functions. The aim of this study was to evaluate effects on cognitive functions and cardiometabolic risk markers of a 5 wk intervention with a mixture of berries in healthy humans. Forty healthy subjects between 50-70 years old were provided a berry beverage based on a mixture of berries (150g blueberries, 50g blackcurrant, 50g elderberry, 50g lingonberries, 50g strawberry, and 100g tomatoes) or a control beverage, daily during 5 weeks in a randomized crossover design. The control beverage (water based) was matched with respect to monosaccharides, pH, and volume. Cognitive tests included tests of working memory capacity, selective attention, and psychomotor reaction time. Cardiometabolic test variables investigated were blood pressure, fasting blood concentrations of glucose, insulin, blood lipids, inflammatory markers, and markers of oxidative stress. The daily amounts of total polyphenols and DF from the berry beverage were 795 mg and 11 g, respectively. There were no polyphenols or DF in the control beverage. The berry intervention reduced total and LDL cholesterol compared to baseline (both P<0.05) and in comparison to the control beverage (P<0.005 and P<0.01, respectively). The control beverage increased glucose concentrations (P<0.01) and tended to increase insulin concentrations (P = 0.064) from baseline, and increased insulin concentrations in comparison to the berry beverage (P<0.05). Subjects performed better in the working memory test after the berry beverage compared to after the control beverage (P<0.05). No significant effects on the other test variables were observed. The improvements in cardiometabolic risk markers and cognitive performance after the berry beverage suggest preventive potential of berries with respect to type 2 diabetes, cardiovascular disease
A brief introduction to regression designs and mixed-effects modelling by a recent convert
Balling, Laura Winther
2008-01-01
This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable sele...
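The fixed-plus-random decomposition behind linear mixed-effects models can be made concrete with a toy random-intercept example. The sketch below shrinks group means toward the grand mean in the spirit of a BLUP; the variance components are assumed known rather than estimated by REML, and all names and data are invented for illustration:

```python
# Toy random-intercept model: y_ij = mu + b_i + e_ij, with
# b_i ~ N(0, sigma_b2) and e_ij ~ N(0, sigma_e2). Each group's deviation
# from the grand mean is shrunk by the ratio of between-group variance to
# the total variance of that group's mean -- the flavour of a BLUP.
# Variance components are plugged in, so this is only a sketch.

def shrunken_intercepts(groups, sigma_b2, sigma_e2):
    all_y = [y for g in groups for y in g]
    mu = sum(all_y) / len(all_y)          # fixed effect: grand mean
    blups = []
    for g in groups:
        n = len(g)
        gmean = sum(g) / n
        # Shrinkage weight; smaller groups are pulled harder toward mu.
        w = sigma_b2 / (sigma_b2 + sigma_e2 / n)
        blups.append(w * (gmean - mu))
    return mu, blups

# Three groups of unequal size, e.g. items seen by different subjects.
groups = [[4.1, 3.9, 4.3], [6.0, 5.8], [5.1]]
mu, blups = shrunken_intercepts(groups, sigma_b2=1.0, sigma_e2=0.5)
```

Each predicted intercept is smaller in magnitude than the raw group deviation, which is precisely the partial pooling that makes mixed-effects models robust with unbalanced, naturalistic data.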
Probabilistic Signal Recovery and Random Matrices
2016-12-08
that classical methods for linear regression (such as Lasso) are applicable for non-linear data. This surprising finding has already found several... we studied the complexity of convex sets. In numerical linear algebra, we analyzed the fastest known randomized approximation algorithm for... and perfect matchings. In numerical linear algebra, we studied the fastest known randomized approximation algorithm for computing the permanents of
Efficient and robust estimation for longitudinal mixed models for binary data
DEFF Research Database (Denmark)
Holst, René
2009-01-01
This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used...... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serially correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
DEFF Research Database (Denmark)
Kandzia, Claudia; Kosonen, Risto; Melikov, Arsen Krikor
In this guidebook, most of the methods known and used in practice for achieving mixing air distribution are discussed. Mixing ventilation has been applied to many different spaces, providing fresh air and thermal comfort to the occupants. Today, a design engineer can choose from a large selection......
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
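The Monte Carlo procedure is straightforward to sketch for a difference-in-means statistic under re-randomization of treatment labels. This is a hedged illustration only: the paper works with model residuals and design-specific randomization procedures (permuted blocks, biased coins), whereas the sketch below permutes labels with fixed group sizes and uses invented data:

```python
import random

# Monte Carlo randomization test: re-randomize treatment labels B times
# and count statistics at least as extreme as the observed one. The
# add-one correction includes the observed assignment in the reference
# set. Data and function names are illustrative.

def randomization_test(y, assignment, B, rng):
    def stat(assign):
        t = [yi for yi, a in zip(y, assign) if a == 1]
        c = [yi for yi, a in zip(y, assign) if a == 0]
        return abs(sum(t) / len(t) - sum(c) / len(c))

    observed = stat(assignment)
    hits = 0
    for _ in range(B):
        shuffled = assignment[:]
        rng.shuffle(shuffled)            # re-randomize with fixed group sizes
        if stat(shuffled) >= observed:
            hits += 1
    return (hits + 1) / (B + 1)

# Invented two-arm data with a clear treatment effect.
y = [5.1, 4.8, 6.2, 5.9, 3.1, 2.9, 3.4, 3.0]
assignment = [1, 1, 1, 1, 0, 0, 0, 0]
p = randomization_test(y, assignment, B=999, rng=random.Random(1))
```

Replacing `y` with residuals from a fitted model, or the statistic with a predicted rate of change, recovers the design-based tests discussed in the paper.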
Nguyen, Phuong H; Gonzalez-Casanova, Ines; Young, Melissa F; Truong, Truong Viet; Hoang, Hue; Nguyen, Huong; Nguyen, Son; DiGirolamo, Ann M; Martorell, Reynaldo; Ramakrishnan, Usha
2017-08-01
Background: Maternal health and nutrition play a crucial role in early child growth and development. However, little is known about the benefits of preconception micronutrient interventions beyond the role of folic acid (FA) and neural tube defects. Objective: We evaluated the impact of weekly preconception multiple micronutrient (MM) or iron and folic acid (IFA) supplementation on child growth and development through the age of 2 y compared with FA alone. Methods: We followed 1599 offspring born to women who participated in a randomized controlled trial of preconception supplementation in Vietnam. Women received weekly supplements that contained either 2800 μg FA, 60 mg Fe and 2800 μg FA, or 15 MMs including IFA, from baseline until conception, followed by daily prenatal IFA supplements until delivery. Child anthropometry was measured at birth and at 3, 6, 12, 18, and 24 mo. Child development was measured with the use of the Bayley Scales for Infant Development III at 24 mo. Results: The groups were similar for baseline maternal and offspring birth characteristics. At 24 mo of age, the offspring in the IFA group had significantly higher length-for-age z scores (LAZs) (0.14; 95% CI: 0.03, 0.26), reduced risk of being stunted (0.87; 95% CI: 0.76, 0.99), and smaller yearly decline in LAZs (0.10; 95% CI: 0.04, 0.15) than the offspring in the FA group. Similar trends were found for the offspring in the MM group compared with the FA group for LAZs (0.10; 95% CI: -0.02, 0.22) and the risk of being stunted (0.88; 95% CI: 0.77, 1.01). Offspring in the IFA group had improved motor development (P = 0.03), especially fine motor development (0.41; 95% CI: 0.05, 0.77), at the age of 24 mo, but there were no differences for measures of cognition or language. Conclusions: Preconception supplementation with IFA improved linear growth and fine motor development at 2 y of age compared with FA. Future studies should examine whether these effects persist and improve child health and
Loohuis, Anne M M; Wessels, Nienke J; Jellema, Petra; Vermeulen, Karin M; Slieker-Ten Hove, Marijke C; van Gemert-Pijnen, Julia E W C; Berger, Marjolein Y; Dekker, Janny H; Blanker, Marco H
2018-02-02
We aim to assess whether a purpose-developed mobile application (app) is non-inferior regarding effectiveness and cost-effective when used to treat women with urinary incontinence (UI), as compared to care as usual in Dutch primary care. Additionally, we will explore the expectations and experiences of patients and care providers regarding app usage. A mixed-methods study will be performed, combining a pragmatic, randomized-controlled, non-inferiority trial with an extensive process evaluation. Women aged ≥18 years, suffering from UI ≥ 2 times per week and with access to a smartphone or tablet are eligible to participate. The primary outcome will be the change in UI symptom scores at 4 months after randomization, as assessed by the International Consultation on Incontinence Modular Questionnaire UI Short Form. Secondary outcomes will be the change in UI symptom scores at 12 months, as well as the patient-reported global impression of improvement, quality of life, change in sexual functioning, UI episodes per day, and costs at 4 and 12 months. In parallel, we will perform an extensive process evaluation to assess the expectations and experiences of patients and care providers regarding app usage, making use of interviews, focus group sessions, and log data analysis. This study will assess both the effectiveness and cost-effectiveness of app-based treatment for UI. The combination with the process evaluation, which will be performed in parallel, should also give valuable insights into the contextual factors that influence the effectiveness of such a treatment. © 2018 The Authors. Neurourology and Urodynamics Published by Wiley Periodicals, Inc.
Effects of mixing and stirring on the critical behaviour
International Nuclear Information System (INIS)
Antonov, N V; Hnatich, Michal; Honkonen, Juha
2006-01-01
Stochastic dynamics of a nonconserved scalar order parameter near its critical point, subject to random stirring and mixing, is studied using the field-theoretic renormalization group. The stirring and mixing are modelled by a random external Gaussian noise with the correlation function ∝ δ(t - t') k^(4-d-y) and the divergence-free (due to incompressibility) velocity field, governed by the stochastic Navier-Stokes equation with a random Gaussian force with the correlation function ∝ δ(t - t') k^(4-d-y'). Depending on the relations between the exponents y and y' and the space dimensionality d, the model reveals several types of scaling regimes. Some of them are well known (model A of equilibrium critical dynamics and the linear passive scalar field advected by a random turbulent flow), but there are three new non-equilibrium regimes (universality classes) associated with new nontrivial fixed points of the renormalization group equations. The corresponding critical dimensions are calculated in the two-loop approximation (second order of the triple expansion in y, y' and ε = 4 - d).
Linearly constrained minimax optimization
DEFF Research Database (Denmark)
Madsen, Kaj; Schjær-Jacobsen, Hans
1978-01-01
We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...
Camps, Stefan Gerardus; Kaur, Bhupinder; Quek, Rina Yu Chin; Henry, Christiani Jeyakumar
2017-07-12
The health benefits of consuming a low glycaemic index (GI) diet to reduce the risk of type 2 diabetes are well recognized. In recent years the GI values of various foods have been determined. The efficacy of constructing and consuming a low GI diet over 24 h in modulating glycaemic response has not been fully documented. Translating single-point GI values of foods into a 24 h mixed meal diet can provide valuable information to consumers, researchers and dietitians to optimize food choice for glycaemic control. By using GI values of foods to develop mixed meals, our study is the first to determine how both blood glucose and substrate oxidation may be modulated over 24 h. The study included 11 Asian men with a BMI between 17-24 kg/m² who followed both a 1-day low GI and a 1-day high GI diet in a randomized, controlled cross-over design. Test meals included breakfast, lunch, snack and dinner. Glycaemic response was measured continuously for over 24 h and postprandial substrate oxidation for 10 h inside a whole body calorimeter. The low GI diet resulted in lower 24 h glucose iAUC (860 ± 440 vs 1329 ± 614 mmol/L.min; p = 0.014), lower postprandial glucose iAUC after breakfast, and lower glycaemic variability for the low vs the high GI diet (1.44 ± 0.63 vs 2.33 ± 0.82 mmol/L). Postprandial fat oxidation was less during the low vs the high GI diet (-0.033 ± 0.021 vs -0.050 ± 0.017 g/min). Using low GI local foods to construct a 24 h low GI diet is thus able to reduce glycaemic response and variability as recorded by continuous glucose monitoring. Our observations also confirm that a low GI diet promotes fat oxidation over carbohydrate oxidation when compared to a high GI diet. These observations provide public health support for the encouragement of healthier nutrition choices by consuming low GI foods. NCT 02631083 (Clinicaltrials.gov).
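The incremental AUC (iAUC) endpoint used in such studies is typically computed with the trapezoidal rule over excursions above the baseline glucose value. A simplified sketch (segments below baseline contribute zero, crossing points are not interpolated, and the sample profile is invented):

```python
# Incremental area under the curve (iAUC) by the trapezoidal rule,
# counting only area above the baseline (first) glucose value. This is a
# simplified sketch: exact crossing points between samples are not
# interpolated, and the profile below is invented example data.

def iauc(times, glucose):
    baseline = glucose[0]
    area = 0.0
    for i in range(1, len(times)):
        h0 = max(glucose[i - 1] - baseline, 0.0)   # excursion at left point
        h1 = max(glucose[i] - baseline, 0.0)       # excursion at right point
        area += 0.5 * (h0 + h1) * (times[i] - times[i - 1])
    return area

# Invented post-meal profile: time in minutes, glucose in mmol/L.
t = [0, 15, 30, 45, 60, 90, 120]
g = [5.0, 6.5, 8.0, 7.2, 6.0, 5.2, 4.9]
area = iauc(t, g)   # units: mmol/L.min above baseline
```

Summing such per-meal areas over a day gives the 24 h iAUC figures (in mmol/L.min) reported above from continuous glucose monitoring.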
International Nuclear Information System (INIS)
Kazemi, Elahe; Haji Shabani, Ali Mohammad; Dadfarnia, Shayessteh; Abbasi, Amir; Rashidian Vaziri, Mohammad Reza; Behjat, Abbas
2016-01-01
This study aims at developing a novel, sensitive, fast, simple and convenient method for the separation and preconcentration of trace amounts of fluoxetine before its spectrophotometric determination. The method is based on a combination of magnetic mixed hemimicelles solid phase extraction and dispersive micro solid phase extraction using 1-hexadecyl-3-methylimidazolium bromide coated magnetic graphene as a sorbent. The magnetic graphene was synthesized by a simple coprecipitation method and characterized by X-ray diffraction (XRD), Fourier transform infrared (FT-IR) spectroscopy and scanning electron microscopy (SEM). The retained analyte was eluted using a 100 μL mixture of methanol/acetic acid (9:1) and converted into a fluoxetine-β-cyclodextrin inclusion complex. The analyte was then quantified by fiber optic linear array spectrophotometry as well as mode-mismatched thermal lens spectroscopy (TLS). The factors affecting the separation, preconcentration and determination of fluoxetine were investigated and optimized. With a 50 mL sample and under optimized conditions using the spectrophotometry technique, the method exhibited a linear dynamic range of 0.4–60.0 μg L⁻¹, a detection limit of 0.21 μg L⁻¹, an enrichment factor of 167, and relative standard deviations of 2.1% and 3.8% (n = 6) at the 60 μg L⁻¹ level of fluoxetine for intra- and inter-day analyses, respectively. However, with thermal lens spectrometry and a sample volume of 10 mL, the method exhibited a linear dynamic range of 0.05–300 μg L⁻¹, a detection limit of 0.016 μg L⁻¹ and relative standard deviations of 3.8% and 5.6% (n = 6) at the 60 μg L⁻¹ level of fluoxetine for intra- and inter-day analyses, respectively. The method was successfully applied to determine fluoxetine in pharmaceutical formulations, human urine and environmental water samples.
DEFF Research Database (Denmark)
Bang Appel, Helene; Singla, Rashmi
2016-01-01
Despite an increase in cross-border intimate relationships and children of mixed parentage, there is little mention or scholarship about them in the area of childhood and migrancy in the Nordic countries. The international literature implies historical pathologisation, contestation and current...... of identity formation in the . They position themselves as having an “in-between” identity or “just Danes” in their everyday lives among friends, family, and during leisure activities. Thus a new paradigm is evolving away from the pathologisation of mixed children, simplified one-sided categories......
Random Numbers and Quantum Computers
McCartney, Mark; Glass, David
2002-01-01
The topic of random numbers is investigated in such a way as to illustrate links between mathematics, physics and computer science. First, the generation of random numbers by a classical computer using the linear congruential generator and logistic map is considered. It is noted that these procedures yield only pseudo-random numbers since…
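A minimal sketch of the two classical generators the abstract mentions can make the distinction concrete; the multiplier and increment below are common textbook constants, not necessarily those used in the article.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # normalized to [0, 1)

def logistic(x0, r=4.0):
    """Logistic map x_{n+1} = r*x_n*(1 - x_n), chaotic for r = 4."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

g = lcg(seed=12345)
samples = [next(g) for _ in range(5)]
```

Both are deterministic, which is why their output is only pseudo-random: re-seeding with the same value reproduces the sequence exactly.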
Majer, Istvan M; Gaughran, Fiona; Sapin, Christophe; Beillat, Maud; Treur, Maarten
2015-01-01
Treatment with long-acting injectable (LAI) antipsychotic medication is an important element of relapse prevention in schizophrenia. Recently, the intramuscular once-monthly formulation of aripiprazole received marketing approval in Europe and the United States for schizophrenia. This study aimed to compare aripiprazole once-monthly with other LAI antipsychotics in terms of efficacy, tolerability, and safety. A systematic literature review was conducted to identify relevant double-blind randomized clinical trials of LAIs conducted in the maintenance treatment of schizophrenia. MEDLINE, MEDLINE In-Process, Embase, the Cochrane Library, PsycINFO, conference proceedings, clinical trial registries, and the reference lists of key review articles were searched. The literature search covered studies dating from January 2002 to May 2013. Studies were required to have ≥24 weeks of follow-up. Patients had to be stable at randomization. Studies were not eligible for inclusion if efficacy of acute and maintenance phase treatment was not reported separately. Six trials were identified (0.5% of initially identified studies), allowing comparisons of aripiprazole once-monthly, risperidone LAI, paliperidone palmitate, olanzapine pamoate, haloperidol depot, and placebo. Data extracted included study details, study duration, the total number of patients in each treatment arm, efficacy, tolerability, and safety outcomes. The efficacy outcome contained the number of patients that experienced a relapse, tolerability outcomes included the number of patients that discontinued treatment due to treatment-related adverse events (AEs), and that discontinued treatment due to reasons other than AEs (e.g., loss to follow-up). Safety outcomes included the incidence of clinically relevant weight gain and extrapyramidal symptoms. Data were analyzed by applying a mixed treatment comparison competing risks model (efficacy) and using binary models (safety). There was no statistically significant
Linear kinetic theory and particle transport in stochastic mixtures
International Nuclear Information System (INIS)
Pomraning, G.C.
1994-03-01
The primary goal in this research is to develop a comprehensive theory of linear transport/kinetic theory in a stochastic mixture of solids and immiscible fluids. The statistics considered correspond to N-state discrete random variables for the interaction coefficients and sources, with N denoting the number of components of the mixture. The mixing statistics studied are Markovian as well as more general statistics, such as renewal processes. A further goal of this work is to demonstrate the applicability of the formalism to real-world engineering problems. This three-year program was initiated June 15, 1993 and has been underway nine months. Many significant results have been obtained, both in the formalism development and in representative applications. These results are summarized by listing the archival publications resulting from this grant, including the abstracts taken directly from the papers.
DEFF Research Database (Denmark)
Brabrand, Helle
2010-01-01
levels than those related to building, and this exploration is a special challenge and competence implicit in artistic development work. The project Mixed Movements generates drawing-material, not primarily as representation, but as a performance-based medium, making the body's being-in-the-media felt and appear...... as possible operational moves....
2014-09-30
negative (right panel c) and the kinetic energy dissipation is larger than that expected from meteorological forcing alone (right panel a). This is...10.1002/grl.50919. Shcherbina, A. et al., 2014, The LatMix Summer Campaign: Submesoscale Stirring in the Upper Ocean, Bull. American Meteorological
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,
Jensen, Kamilla B; Morthorst, Britt R; Vendsborg, Per B; Hjorthøj, Carsten R; Nordentoft, Merete
2015-04-14
Studies show a high and growing prevalence of mental disorders in the population worldwide. 25% of the general population in Europe will, during their lifetime, experience symptoms related to a mental disorder. The Mental Health First Aid concept (MHFA) was founded in 2000 in Australia by Kitchener and Jorm in order to provide the population with mental health first aid skills. The aim of the concept is, through an educational intervention (course), to increase confidence in how to help people suffering from mental health problems. Further, secondary aims are to increase the mental health literacy of the public by increasing knowledge, reducing stigma and initiating more supportive actions leading towards professional care. An investigation of the effect of MHFA offered to a Danish population is needed. The design is a randomized waitlist-controlled superiority trial, in which 500 participants will be allocated to either the intervention group or the control group. The control group will attend the course six months later, hence the waitlist design. From fall 2013 to spring 2014, participants will be educated to be "mental health first-aiders" following a manualized, two-day MHFA course. All the participants will answer a questionnaire at baseline and at 6 months follow-up. The questionnaire is a back-translation of the questionnaire used in Australian trials. The trial will be complemented by a qualitative study, in which focus groups will be carried out. The outcomes measured are sensitive to interpretation and hence a challenge to measure uniformly. This trial will add to the use of a mixed-methods design and exemplify how it can strengthen the analysis and meet the challenge of a sensitive outcome. https://clinicaltrials.gov identifier NCT02334020.
Puett, Chloe; Salpéteur, Cécile; Houngbe, Freddy; Martínez, Karen; N'Diaye, Dieynaba S; Tonguet-Papucci, Audrey
2018-01-01
This study assessed the costs and cost-efficiency of a mobile cash transfer implemented in Tapoa Province, Burkina Faso, in the MAM'Out randomized controlled trial from June 2013 to December 2014, using mixed methods and taking a societal perspective by including costs to implementing partners and beneficiary households. Data were collected via interviews with implementing staff from the humanitarian agency and the private partner delivering the mobile money, focus group discussions with beneficiaries, and review of accounting databases. Costs were analyzed by input category and activity-based cost centers. Cost-efficiency was analyzed by cost-transfer ratios (CTR) and cost per beneficiary. Qualitative analysis was conducted to identify themes related to implementing electronic cash transfers, and barriers to efficient implementation. The CTR was 0.82 from a societal perspective, within the same range as other humanitarian transfer programs; however, the intervention did not achieve the same degree of cost-efficiency as other mobile transfer programs specifically. Challenges in coordination between humanitarian and private partners resulted in long wait times for beneficiaries, particularly in the first year of implementation. Sensitivity analyses indicated a potential 6% reduction in CTR through reducing beneficiary wait time by one-half. Actors reported that coordination challenges improved during the project, so inefficiencies would likely be resolved, and cost-efficiency improved, as the program passed the pilot phase. Despite the time required to establish trusting relationships among actors, and to set up a network of cash points in remote areas, this analysis showed that mobile transfers hold promise as a cost-efficient method of delivering cash in this setting. Implementation by local government would likely reduce costs greatly compared to those found in this study context, and improve cost-efficiency especially by subsidizing expansion of mobile
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. No studies have quantified the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually be measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
Mixed and mixed-hybrid elements for the diffusion equation
International Nuclear Information System (INIS)
Coulomb, F.; Fedon-Magnaud, C.
1987-04-01
To solve the diffusion equation, one often uses a Lagrangian finite element method. We want to introduce the mixed elements which allow a simultaneous approximation of the same order for the flux and its gradient. Though the linear systems are not positive definite, it is possible to make them so by eliminating some of the unknowns.
An Introduction to the Use of Linear Models with Correlated Data
Directory of Open Access Journals (Sweden)
Benoît Laplante
2001-12-01
conventional methods for estimating the variances of these estimates may yield biased results. These two problems are different, but they are related. This paper provides an introduction to the problems caused by correlated data and to possible solutions to these problems. First, we present the two problems and try to specify the relations between the two as clearly as possible. Second, we provide a critical presentation of random effects, mixed effects and hierarchical models that would help researchers to see their relevance in other kinds of linear models, particularly the so-called measurement models.
Smooth random change point models.
van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E
2011-03-15
Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
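The broken-stick and smooth variants described here are easy to state directly. The following sketch is illustrative only (the authors work with R mixed-effects functions): a fixed-effects mean curve with breakpoint tau, plus a quadratic blend over a window eps around tau that removes the kink.

```python
def broken_stick(t, b0, b1, b2, tau):
    """Two linear parts intersecting at the breakpoint t = tau."""
    return b0 + b1 * t + b2 * max(0.0, t - tau)

def smooth_stick(t, b0, b1, b2, tau, eps):
    """Smooth change point: replace the kink by a quadratic on [tau-eps, tau+eps]."""
    if t <= tau - eps:
        return b0 + b1 * t
    if t >= tau + eps:
        return b0 + b1 * t + b2 * (t - tau)
    # quadratic that matches value and slope of both lines at the window edges
    return b0 + b1 * t + b2 * (t - tau + eps) ** 2 / (4.0 * eps)
```

At t = tau ± eps the smooth curve matches the broken stick in both value and slope, so the transition is differentiable; a subject-level random effect on tau would then capture between-person variability in the timing of decline.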
Directory of Open Access Journals (Sweden)
Erica J Newton
Full Text Available Woodland caribou (Rangifer tarandus caribou in Ontario are a threatened species that have experienced a substantial retraction of their historic range. Part of their decline has been attributed to increasing densities of anthropogenic linear features such as trails, roads, railways, and hydro lines. These features have been shown to increase the search efficiency and kill rate of wolves. However, it is unclear whether selection for anthropogenic linear features is additive or compensatory to selection for natural (water linear features which may also be used for travel. We studied the selection of water and anthropogenic linear features by 52 resident wolves (Canis lupus x lycaon over four years across three study areas in northern Ontario that varied in degrees of forestry activity and human disturbance. We used Euclidean distance-based resource selection functions (mixed-effects logistic regression at the seasonal range scale with random coefficients for distance to water linear features, primary/secondary roads/railways, and hydro lines, and tertiary roads to estimate the strength of selection for each linear feature and for several habitat types, while accounting for availability of each feature. Next, we investigated the trade-off between selection for anthropogenic and water linear features. Wolves selected both anthropogenic and water linear features; selection for anthropogenic features was stronger than for water during the rendezvous season. Selection for anthropogenic linear features increased with increasing density of these features on the landscape, while selection for natural linear features declined, indicating compensatory selection of anthropogenic linear features. These results have implications for woodland caribou conservation. Prey encounter rates between wolves and caribou seem to be strongly influenced by increasing linear feature densities. This behavioral mechanism-a compensatory functional response to anthropogenic
International Nuclear Information System (INIS)
Adelberger, E.G.
1975-01-01
The field of parity mixing in light nuclei bears upon one of the exciting and active problems of physics--the nature of the fundamental weak interaction. It is also a subject where polarization techniques play a very important role. Weak interaction theory is first reviewed to motivate the parity mixing experiments. Two very attractive systems are discussed where the nuclear physics is so beautifully simple that the experimental observation of tiny effects directly measures parity violating (PV) nuclear matrix elements which are quite sensitive to the form of the basic weak interaction. Since the measurement of very small analyzing powers and polarizations may be of general interest to this conference, some discussion is devoted to experimental techniques
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions......We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link...... function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...
Random numbers from vacuum fluctuations
International Nuclear Information System (INIS)
Shi, Yicheng; Kurtsiefer, Christian; Chng, Brenda
2016-01-01
We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
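The extraction step can be illustrated with a toy Fibonacci LFSR. The 16-bit taps below are a standard maximal-length choice, not necessarily the register used in the reported implementation, and XOR-whitening is a simplified stand-in for the actual extraction scheme.

```python
def lfsr_bits(state, nbits=16):
    """Fibonacci LFSR over GF(2) with taps 16, 14, 13, 11 (maximal length
    for the default nbits=16). Yields one bit per step; the all-zero state
    is a fixed point and therefore forbidden."""
    assert state != 0
    mask = (1 << nbits) - 1
    while True:
        yield state & 1
        # feedback bit is the XOR of the tapped positions
        fb = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = ((state >> 1) | (fb << (nbits - 1))) & mask

def extract(raw_bits, seed=0xACE1):
    """Whiten a raw digitized bit stream by XOR with the LFSR sequence
    (toy extractor, illustrative only)."""
    reg = lfsr_bits(seed)
    return [b ^ next(reg) for b in raw_bits]
```

Because the LFSR update is linear over GF(2), it maps naturally onto fast hardware, which is what makes this family of extractors attractive at rates of hundreds of Mbit/s.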
Linearization Technologies for Broadband Radio-Over-Fiber Transmission Systems
Directory of Open Access Journals (Sweden)
Xiupu Zhang
2014-11-01
Full Text Available Linearization technologies that can be used for linearizing RoF transmission are reviewed. Three main linearization methods, i.e. electrical analog linearization, optical linearization, and electrical digital linearization, are presented and compared. Analog linearization can be achieved using analog predistortion circuits and can be used for suppression of odd-order nonlinear distortion components, such as third and fifth order. Optical linearization includes mixed-polarization, dual-wavelength, optical channelization and other techniques, implemented in the optical domain, to suppress both even- and odd-order nonlinear distortion components, such as second and third order. Digital predistortion has been a widely used linearization method for RF power amplifiers. However, digital linearization, which requires an analog-to-digital converter, is severely limited to hundreds of MHz of bandwidth. Instead, analog and optical linearization provide broadband linearization up to tens of GHz. Therefore, for broadband radio-over-fiber transmission that can be used for future broadband cloud radio access networks, analog and optical linearization are more appropriate than digital linearization. Generally speaking, both analog and optical linearization are able to improve the spur-free dynamic range by more than 10 dB over tens of GHz. In order for current digital linearization to be used for broadband radio-over-fiber transmission, reduced linearization complexity and increased linearization bandwidth are required. Moreover, some digital linearization methods whose complexity can be reduced, such as the Hammerstein type, may be more promising and require further investigation.
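The principle behind predistortion of odd-order distortion can be sketched numerically: for a memoryless amplifier with a cubic term, applying the first-order inverse before the amplifier leaves only a residual of order a3 squared. The amplifier model and coefficient below are illustrative assumptions, not taken from the review.

```python
A3 = 0.05  # third-order nonlinearity coefficient (assumed for illustration)

def amplifier(x, a3=A3):
    """Memoryless amplifier with third-order distortion: y = x + a3*x^3."""
    return x + a3 * x**3

def predistort(x, a3=A3):
    """First-order inverse: subtracting a3*x^3 cancels the cubic term,
    leaving a residual of order a3^2."""
    return x - a3 * x**3

x = 0.5
raw_err = abs(amplifier(x) - x)              # distortion without predistortion
lin_err = abs(amplifier(predistort(x)) - x)  # residual after predistortion
```

The same cancellation idea underlies both the analog circuits and the digital schemes discussed above; the practical differences lie in where (electrical vs optical domain) and over what bandwidth the inverse can be realized.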
A Fay-Herriot Model with Different Random Effect Variances
Czech Academy of Sciences Publication Activity Database
Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.
2011-01-01
Roč. 40, č. 5 (2011), s. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf
Uniform random number generators
Farr, W. R.
1971-01-01
Methods are presented for the generation of random numbers with uniform and normal distributions. Subprogram listings of Fortran generators for the Univac 1108, SDS 930, and CDC 3200 digital computers are also included. The generators are of the mixed multiplicative type, and the mathematical method employed is that of Marsaglia and Bray.
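A sketch in Python of the approach described: a mixed (multiplier plus additive constant) congruential generator feeding the Marsaglia polar method for normal deviates. The constants are illustrative, not those of the original Fortran subprograms.

```python
import math

def make_uniform(seed, a=69069, c=1, m=2**32):
    """Mixed multiplicative congruential generator returning U(0,1) values."""
    x = seed
    def rnd():
        nonlocal x
        x = (a * x + c) % m
        return x / m
    return rnd

def make_normal(rnd):
    """Marsaglia polar method: accept points in the unit disk and transform
    each accepted pair of uniforms into a pair of N(0,1) deviates."""
    spare = []
    def normal():
        if spare:
            return spare.pop()
        while True:
            u = 2.0 * rnd() - 1.0
            v = 2.0 * rnd() - 1.0
            s = u * u + v * v
            if 0.0 < s < 1.0:
                f = math.sqrt(-2.0 * math.log(s) / s)
                spare.append(v * f)
                return u * f
    return normal

rnd = make_uniform(987654321)
normal = make_normal(rnd)
sample = [normal() for _ in range(10000)]
```

About 21% of candidate pairs are rejected (those outside the unit disk), which is the price paid for avoiding the sine and cosine calls of the Box-Muller transform.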
Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems
Directory of Open Access Journals (Sweden)
Bambang Riyanto
2005-11-01
Full Text Available In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems, and can therefore be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will contribute to the development of control design methods for those areas as well, as shown by three given examples.
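The formulation can be conveyed with a deliberately tiny example: enumerate the binary mode sequence and a gridded input over the horizon, as a brute-force stand-in for the mixed integer quadratic program. A faithful formulation would also impose the complementarity constraints coupling mode to state, and a real controller would hand the MIQP to a solver; the dynamics and cost below are invented for illustration.

```python
import itertools

def step(x, u, mode):
    """Illustrative piecewise-linear dynamics; the binary mode switches the gain."""
    return 0.8 * x + u if mode == 0 else 1.2 * x + 0.5 * u

def mpc(x0, horizon=3, u_grid=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Minimize the sum of x^2 + 0.1*u^2 over all mode and input sequences."""
    best_cost, best_plan = float("inf"), None
    for modes in itertools.product((0, 1), repeat=horizon):
        for us in itertools.product(u_grid, repeat=horizon):
            x, cost = x0, 0.0
            for m, u in zip(modes, us):
                x = step(x, u, m)
                cost += x * x + 0.1 * u * u
            if cost < best_cost:
                best_cost, best_plan = cost, (modes, us)
    return best_cost, best_plan

cost, (modes, inputs) = mpc(1.0)
```

Enumeration is exponential in the horizon, which is exactly why the MIQP machinery (branch and bound over the binaries, QP relaxations for the continuous part) matters for anything beyond toy sizes.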
Menu-Driven Solver Of Linear-Programming Problems
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
Edgington, Eugene
2007-01-01
Statistical Tests That Do Not Require Random Sampling Randomization Tests Numerical Examples Randomization Tests and Nonrandom Samples The Prevalence of Nonrandom Samples in Experiments The Irrelevance of Random Samples for the Typical Experiment Generalizing from Nonrandom Samples Intelligibility Respect for the Validity of Randomization Tests Versatility Practicality Precursors of Randomization Tests Other Applications of Permutation Tests Questions and Exercises Notes References Randomized Experiments Unique Benefits of Experiments Experimentation without Mani
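The core procedure the book develops is compact enough to sketch: the p-value of a randomization test is the fraction of random reassignments of units to groups whose test statistic is at least as extreme as the observed one. Random assignment is required, but random sampling is not.

```python
import random

def randomization_test(a, b, n_perm=10000, seed=0):
    """Two-sample randomization test on the absolute difference of means."""
    rng = random.Random(seed)
    pooled = list(a) + list(b)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # one random reassignment of units to groups
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm
```

With small samples the full permutation distribution can be enumerated exactly instead of sampled; the Monte Carlo version above is the practical choice for larger designs.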
On the hyperbolicity condition in linear elasticity
Directory of Open Access Journals (Sweden)
Remigio Russo
1991-05-01
Full Text Available This talk, which is mainly expository and based on [2-5], discusses the hyperbolicity conditions in linear elastodynamics. Particular emphasis is devoted to the key role they play in the uniqueness questions associated with the mixed boundary-initial value problem in unbounded domains.
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
Energy Technology Data Exchange (ETDEWEB)
Peterson, David; Stofleth, Jerome H.; Saul, Venner W.
2017-07-11
Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.
Classifying Linear Canonical Relations
Lorand, Jonathan
2015-01-01
In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.
Stochastic Linear Quadratic Optimal Control Problems
International Nuclear Information System (INIS)
Chen, S.; Yong, J.
2001-01-01
This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward-backward stochastic differential equations are established. Some results involving the Riccati equation are discussed as well.
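For orientation, a deterministic scalar special case shows the role the Riccati equation plays in LQ control; the paper's setting (random coefficients, possibly negative control weight) is substantially more delicate than this sketch, which assumes constant coefficients and r > 0.

```python
def lq_riccati(a, b, q, r, horizon):
    """Backward Riccati recursion for x_{k+1} = a*x_k + b*u_k with stage
    cost q*x^2 + r*u^2. Returns the time-ordered feedback gains
    (u_k = -k_t * x_k) and the converged cost-to-go coefficient p."""
    p = q  # terminal weight
    gains = []
    for _ in range(horizon):
        k = a * b * p / (r + b * b * p)   # k = (b p a) / (r + b p b)
        p = q + a * a * p - k * a * b * p  # p <- q + a p a - k (b p a)
        gains.append(k)
    return list(reversed(gains)), p

gains, p = lq_riccati(a=1.0, b=1.0, q=1.0, r=1.0, horizon=60)
```

For a = b = q = r = 1 the recursion converges to p = (1 + √5)/2, the golden ratio, and the stationary gain p/(1 + p) ≈ 0.618 stabilizes the loop.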
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
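Pure-Python sketches of representative operations convey what the routines compute (the library itself supplies tuned FORTRAN and Assembler versions). Note that the matrix-vector product shown last belongs to the later Level 2 BLAS; the original 38 routines are vector (Level 1) operations like the first two.

```python
def daxpy(alpha, x, y):
    """y <- alpha*x + y (BLAS Level 1)."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Dot product of two vectors (BLAS Level 1)."""
    return sum(xi * yi for xi, yi in zip(x, y))

def dgemv(alpha, a, x, beta, y):
    """y <- alpha*A@x + beta*y (BLAS Level 2); a is given as a list of rows."""
    return [alpha * ddot(row, x) + beta * yi for row, yi in zip(a, y)]
```

Standardizing these small kernels is what lets higher-level packages (e.g. LINPACK-era solvers) gain machine-specific speed simply by linking a tuned BLAS.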
Is the tribimaximal mixing accidental?
International Nuclear Information System (INIS)
Abbas, Mohammed; Smirnov, A. Yu.
2010-01-01
The tribimaximal (TBM) mixing is not accidental if structures of the corresponding leptonic mass matrices follow immediately from certain (residual or broken) flavor symmetry. We develop a simple formalism which allows one to analyze effects of deviations of the lepton mixing from TBM on the structure of the neutrino mass matrix and on the underlying flavor symmetry. We show that possible deviations from the TBM mixing can lead to strong modifications of the mass matrix and strong violation of the TBM-mass relations. As a result, the mass matrix may have an 'anarchical' structure with random values of elements or it may have some symmetry that differs from the TBM symmetry. Interesting examples include matrices with texture zeros, matrices with certain 'flavor alignment' as well as hierarchical matrices with a two-component structure, where the dominant and subdominant contributions have different symmetries. This opens up new approaches to understanding the lepton mixing.
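For reference, the TBM mixing matrix in one common sign convention, corresponding to sin²θ₁₂ = 1/3, sin²θ₂₃ = 1/2 and θ₁₃ = 0:

```latex
U_{\mathrm{TBM}} =
\begin{pmatrix}
\sqrt{2/3} & 1/\sqrt{3} & 0 \\
-1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
-1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix}
```

Deviations from this pattern are what the analysis above propagates into the structure of the neutrino mass matrix and the candidate flavor symmetries.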
MetabR: an R script for linear model analysis of quantitative metabolomic data
Directory of Open Access Journals (Sweden)
Ernest Ben
2012-10-01
Full Text Available Abstract Background Metabolomics is an emerging high-throughput approach to systems biology, but data analysis tools are lacking compared to other systems level disciplines such as transcriptomics and proteomics. Metabolomic data analysis requires a normalization step to remove systematic effects of confounding variables on metabolite measurements. Current tools may not correctly normalize every metabolite when the relationships between each metabolite quantity and fixed-effect confounding variables are different, or for the effects of random-effect confounding variables. Linear mixed models, an established methodology in the microarray literature, offer a standardized and flexible approach for removing the effects of fixed- and random-effect confounding variables from metabolomic data. Findings Here we present a simple menu-driven program, “MetabR”, designed to aid researchers with no programming background in statistical analysis of metabolomic data. Written in the open-source statistical programming language R, MetabR implements linear mixed models to normalize metabolomic data and analysis of variance (ANOVA to test treatment differences. MetabR exports normalized data, checks statistical model assumptions, identifies differentially abundant metabolites, and produces output files to help with data interpretation. Example data are provided to illustrate normalization for common confounding variables and to demonstrate the utility of the MetabR program. Conclusions We developed MetabR as a simple and user-friendly tool for implementing linear mixed model-based normalization and statistical analysis of targeted metabolomic data, which helps to fill a lack of available data analysis tools in this field. The program, user guide, example data, and any future news or updates related to the program may be found at http://metabr.r-forge.r-project.org/.
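A simplified stand-in for the normalization step described above: remove the effect of a single categorical confounder (e.g. run day) by centering each metabolite on its within-batch means. MetabR itself fits linear mixed models with fixed and random effects in R; batch-mean centering is what that reduces to in the simplest balanced one-factor case.

```python
def normalize_by_batch(values, batches):
    """Subtract each batch's mean from its members, then add back the grand
    mean so the overall level of the metabolite is preserved."""
    grand = sum(values) / len(values)
    groups = {}
    for v, b in zip(values, batches):
        groups.setdefault(b, []).append(v)
    batch_means = {b: sum(vs) / len(vs) for b, vs in groups.items()}
    return [v - batch_means[b] + grand for v, b in zip(values, batches)]

normalized = normalize_by_batch([1.0, 2.0, 3.0, 4.0], ["A", "A", "B", "B"])
```

After normalization every batch has the same mean, so downstream ANOVA contrasts between treatments are no longer confounded with the batch effect.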
Mixing, entropy and competition
International Nuclear Information System (INIS)
Klimenko, A Y
2012-01-01
Non-traditional thermodynamics, applied to random behaviour associated with turbulence, mixing and competition, is reviewed and analysed. Competitive mixing represents a general framework for the study of generic properties of competitive systems and can be used to model a wide class of non-equilibrium phenomena ranging from turbulent premixed flames and invasion waves to complex competitive systems. We demonstrate consistency of the general principles of competition with thermodynamic description, review and analyse the related entropy concepts and introduce the corresponding competitive H-theorem. A competitive system can be characterized by a thermodynamic quantity—competitive potential—which determines the likely direction of evolution of the system. Contested resources tend to move between systems from lower to higher values of the competitive potential. There is, however, an important difference between conventional thermodynamics and competitive thermodynamics. While conventional thermodynamics is constrained by its zeroth law and is fundamentally transitive, the transitivity of competitive thermodynamics depends on the transitivity of the competition rules. Intransitivities are common in the real world and are responsible for complex behaviour in competitive systems. This work follows ideas and methods that have originated from the analysis of turbulent combustion, but reviews a much broader scope of issues linked to mixing and competition, including thermodynamic characterization of complex competitive systems with self-organization. The approach presented here is interdisciplinary and is addressed to the general educated readers, whereas the mathematical details can be found in the appendices. (comment)
Son, Chanhee; Park, Sanghoon; Kim, Minjeong
2011-01-01
This study compared linear text-based and non-linear hypertext-based instruction on a handheld computer regarding effects on two different levels of knowledge (declarative and structural knowledge) and learner motivation. Forty-four participants were randomly assigned to one of three experimental conditions: linear text, hierarchical hypertext,…
Combinatorial therapy discovery using mixed integer linear programming.
Pang, Kaifang; Wan, Ying-Wooi; Choi, William T; Donehower, Lawrence A; Sun, Jingchun; Pant, Dhruv; Liu, Zhandong
2014-05-15
Combinatorial therapies play increasingly important roles in combating complex diseases. Owing to the huge cost associated with experimental methods in identifying optimal drug combinations, computational approaches can provide a guide to limit the search space and reduce cost. However, few computational approaches have been developed for this purpose, and thus there is a great need for new algorithms for drug combination prediction. Here we propose to formulate the optimal combinatorial therapy problem into two complementary mathematical algorithms, Balanced Target Set Cover (BTSC) and Minimum Off-Target Set Cover (MOTSC). Given a disease gene set, BTSC seeks a balanced solution that maximizes the coverage on the disease genes and minimizes the off-target hits at the same time. MOTSC seeks a full coverage on the disease gene set while minimizing the off-target set. Through simulation, both BTSC and MOTSC demonstrated a much faster running time than exhaustive search with the same accuracy. When applied to real disease gene sets, our algorithms not only identified known drug combinations, but also predicted novel drug combinations that are worth further testing. In addition, we developed a web-based tool to allow users to iteratively search for optimal drug combinations given a user-defined gene set. Our tool is freely available for noncommercial use at http://www.drug.liuzlab.org/. zhandong.liu@bcm.edu Supplementary data are available at Bioinformatics online.
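The balanced-coverage idea behind BTSC can be illustrated with a greedy sketch: repeatedly pick the drug whose targets add the most disease-gene coverage minus a penalty per off-target hit. This is not the paper's mixed integer linear program, and the drug and gene names are invented for the example.

```python
def greedy_cover(disease_genes, drug_targets, penalty=1.0):
    """Greedy sketch of the balanced set-cover idea behind BTSC.
    Illustrative only -- the paper solves this exactly as an MILP."""
    disease = set(disease_genes)
    covered, chosen = set(), []
    while covered != disease:
        remaining = [d for d in drug_targets if d not in chosen]
        if not remaining:
            break
        def score(drug):
            new_hits = drug_targets[drug] & (disease - covered)
            off_target = drug_targets[drug] - disease
            return len(new_hits) - penalty * len(off_target)
        best = max(remaining, key=score)
        gain = drug_targets[best] & (disease - covered)
        if not gain:               # no remaining drug improves coverage
            break
        chosen.append(best)
        covered |= gain
    return chosen, covered

targets = {
    "drugA": {"g1", "g2", "x1"},        # one off-target hit
    "drugB": {"g3"},                    # clean single-target drug
    "drugC": {"g1", "x1", "x2", "x3"},  # mostly off-target
}
combo, covered = greedy_cover(["g1", "g2", "g3"], targets)
```

The greedy heuristic prefers the clean combination {drugA, drugB} over the off-target-heavy drugC, mirroring the trade-off BTSC formalizes.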
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
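The two weighting schemes recommended in the abstract reduce to short formulas. The sketch below implements both; the usual definition of effective sample size for a case-control study, 4/(1/n_cases + 1/n_controls), is a common convention assumed here rather than stated in the abstract.

```python
import math

def meta_z(z_scores, n_eff):
    """Scheme (i): effective-sample-size weighting of Z-scores.
    Weights are sqrt(n_eff); combined Z = sum(w*z) / sqrt(sum(w^2))."""
    w = [math.sqrt(n) for n in n_eff]
    return (sum(wi * zi for wi, zi in zip(w, z_scores))
            / math.sqrt(sum(wi ** 2 for wi in w)))

def meta_ivw(betas, ses):
    """Scheme (ii): inverse-variance weighting of allelic effect
    sizes (after conversion onto the log-odds scale)."""
    w = [1.0 / s ** 2 for s in ses]
    beta = sum(wi * b for wi, b in zip(w, betas)) / sum(w)
    return beta, 1.0 / math.sqrt(sum(w))

# Two studies with equal effective sample size / equal standard errors
z_combined = meta_z([2.0, 1.0], n_eff=[100, 100])
beta, se = meta_ivw([0.2, 0.4], ses=[0.1, 0.1])
```

With equal weights the inverse-variance estimate is simply the average effect, and the combined standard error shrinks with the pooled precision.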
On randomly interrupted diffusion
International Nuclear Information System (INIS)
Luczka, J.
1993-01-01
Processes driven by randomly interrupted Gaussian white noise are considered. An evolution equation for single-event probability distributions is presented. Stationary states are considered as solutions of a second-order ordinary differential equation with two imposed conditions. A linear model is analyzed and its stationary distributions are explicitly given. (author). 10 refs
Random eigenvalue problems revisited
Indian Academy of Sciences (India)
statistical distributions; linear stochastic systems. 1. ... dimensional multivariate Gaussian random vector with mean µ ∈ Rm and covariance ... 5, the proposed analytical methods are applied to a three degree-of-freedom system and the ...... The joint pdf ofω1 andω3 is however close to a bivariate Gaussian density function.
Mixed models approaches for joint modeling of different types of responses.
Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert
2016-01-01
In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association with further correction for overdispersion can improve the model's fit considerably and that the resulting models make it possible to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.
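The "parsimonious set of dummies" replacing an ordinal response can be sketched with cumulative indicators, one per threshold between adjacent levels. The coding below is one common choice assumed for illustration; the paper does not prescribe a specific scheme.

```python
def ordinal_dummies(y, levels):
    """Replace an ordinal response by cumulative dummy indicators:
    dummy k is 1 if the response exceeds level k.  One illustrative
    coding for embedding ordinal outcomes in a POMM/GLMM framework."""
    thresholds = levels[:-1]          # K levels -> K-1 dummies
    return [[1 if y_i > t else 0 for t in thresholds] for y_i in y]

# A 3-level ordinal outcome becomes two cumulative dummies per observation
codes = ordinal_dummies([1, 2, 3], levels=[1, 2, 3])
```

In a longitudinal setting this coding is repeated at every measurement occasion, which is where the "repeated set of dummies" in the abstract comes from.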
Badly approximable systems of linear forms in absolute value
DEFF Research Database (Denmark)
Hussain, M.; Kristensen, Simon
In this paper we show that the set of mixed type badly approximable simultaneously small linear forms is of maximal dimension. As a consequence of this theorem we settle the conjecture stated in [9]....
Non linear system become linear system
Directory of Open Access Journals (Sweden)
Petre Bucur
2007-01-01
Full Text Available The present paper addresses the theory and practice of non-linear systems and their applications. We aim to integrate these systems in order to elaborate their response, as well as to highlight some outstanding features.
Linear motor coil assembly and linear motor
2009-01-01
An ironless linear motor (5) comprising a magnet track (53) and a coil assembly (50) operating in cooperation with said magnet track (53) and having a plurality of concentrated multi-turn coils (31 a-f, 41 a-d, 51 a-k), wherein the end windings (31E) of the coils (31 a-f, 41 a-e) are substantially
Energy Technology Data Exchange (ETDEWEB)
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
Blyth, T S
2002-01-01
Basic Linear Algebra is a text for first year students leading from concrete examples to abstract theorems, via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms which provides a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...
Matrices and linear transformations
Cullen, Charles G
1990-01-01
""Comprehensive . . . an excellent introduction to the subject."" - Electronic Engineer's Design Magazine.This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first
Efficient Non Linear Loudspeakers
DEFF Research Database (Denmark)
Petersen, Bo R.; Agerkvist, Finn T.
2006-01-01
Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating non-linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels...... by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, reducing system cost as well as power consumption....
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and the ANN models had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05). For wrist-worn accelerometers, ANN models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh
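The simplest member of the model family compared above is ordinary least squares on a single predictor, with RMSE as the accuracy metric the study reports. The sketch below uses toy data (the study itself fitted multi-feature linear, mixed, and ANN models to accelerometer output).

```python
from math import sqrt

def fit_linear(x, y):
    """Ordinary least squares for one predictor: y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def rmse(y, yhat):
    """Root mean square error, the accuracy metric used in the study."""
    return sqrt(sum((p - t) ** 2 for p, t in zip(yhat, y)) / len(y))

# Toy calibration: accelerometer counts vs. measured METs
counts = [0.0, 1.0, 2.0, 3.0]
mets   = [1.0, 2.0, 3.0, 4.0]        # perfectly linear toy data
a, b = fit_linear(counts, mets)
pred = [a + b * c for c in counts]
```

On real accelerometer data the residual RMSE (around 1-1.6 METs in the study) reflects activity-dependent scatter that the ANN models can partially absorb.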
Faraway, Julian J
2014-01-01
A Hands-On Way to Learning Data AnalysisPart of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition.New to the Second EditionReorganiz
Carr, Joseph
1996-01-01
The linear IC market is large and growing, as is the demand for well trained technicians and engineers who understand how these devices work and how to apply them. Linear Integrated Circuits provides in-depth coverage of the devices and their operation, but not at the expense of practical applications in which linear devices figure prominently. This book is written for a wide readership from FE and first degree students, to hobbyists and professionals.Chapter 1 offers a general introduction that will provide students with the foundations of linear IC technology. From chapter 2 onwa
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Superconducting linear accelerator cryostat
International Nuclear Information System (INIS)
Ben-Zvi, I.; Elkonin, B.V.; Sokolowski, J.S.
1984-01-01
A large vertical cryostat for a superconducting linear accelerator using quarter wave resonators has been developed. The essential technical details, operational experience and performance are described. (author)
A brief introduction to regression designs and mixed-effects modelling by a recent convert
DEFF Research Database (Denmark)
Balling, Laura Winther
2008-01-01
This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic...... tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable selection are discussed. The advantages of these techniques are exemplified in an analysis of a word...
Linear signal noise summer accurately determines and controls S/N ratio
Sundry, J. L.
1966-01-01
Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
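The mixing arithmetic behind such a summer is simple: given the signal power and a target S/N ratio in decibels, compute the gain to apply to the noise channel before the linear sum. This is a sketch of the arithmetic only; the instrument described above works with calibrated analog power levels.

```python
import math

def noise_gain_for_snr(signal_power, noise_power, snr_db):
    """Amplitude gain for the noise channel so that the linear sum of
    signal and scaled noise has the requested S/N ratio (in dB)."""
    target_noise_power = signal_power / (10 ** (snr_db / 10))
    return math.sqrt(target_noise_power / noise_power)

# Unit-power signal and noise, 20 dB target S/N -> noise scaled by 0.1
g = noise_gain_for_snr(signal_power=1.0, noise_power=1.0, snr_db=20.0)
```

Because the gain acts on amplitude while S/N is a power ratio, the square root appears: 20 dB means a 100:1 power ratio but a 10:1 amplitude ratio.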
Energy Technology Data Exchange (ETDEWEB)
Patten, B.C.
1983-04-01
Two issues concerning linearity or nonlinearity of natural systems are considered. Each is related to one of the two alternative defining properties of linear systems, superposition and decomposition. Superposition exists when a linear combination of inputs to a system results in the same linear combination of outputs that individually correspond to the original inputs. To demonstrate this property it is necessary that all initial states and inputs of the system which impinge on the output in question be included in the linear combination manipulation. As this is difficult or impossible to do with real systems of any complexity, nature appears nonlinear even though it may be linear. A linear system that displays nonlinear behavior for this reason is termed pseudononlinear. The decomposition property exists when the dynamic response of a system can be partitioned into an input-free portion due to state plus a state-free portion due to input. This is a characteristic of all linear systems, but not of nonlinear systems. Without the decomposition property, it is not possible to distinguish which portions of a system's behavior are due to innate characteristics (self) vs. outside conditions (environment), which is an important class of questions in biology and ecology. Some philosophical aspects of these findings are then considered. It is suggested that those ecologists who hold to the view that organisms and their environments are separate entities are in effect embracing a linear view of nature, even though their belief systems and mathematical models tend to be nonlinear. On the other hand, those who consider that the organism-environment complex forms a single inseparable unit are implicitly involved in non-linear thought, which may be in conflict with the linear modes and models that some of them use. The need to rectify these ambivalences on the part of both groups is indicated.
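The superposition property described above can be checked empirically for a given input-output map: f(a*x1 + b*x2) must equal a*f(x1) + b*f(x2). The helper below is a minimal sketch; note that a single passing pair does not prove linearity, while a single failing pair disproves it.

```python
def superposes(f, x1, x2, a=2.0, b=-3.0, tol=1e-9):
    """Empirical check of the superposition property for a scalar map:
    compare f(a*x1 + b*x2) against a*f(x1) + b*f(x2)."""
    lhs = f(a * x1 + b * x2)
    rhs = a * f(x1) + b * f(x2)
    return abs(lhs - rhs) < tol

linear    = lambda x: 5.0 * x    # pure gain: superposition holds
nonlinear = lambda x: x * x      # quadratic: superposition fails
```

The abstract's point about pseudononlinearity is visible here: if some inputs to the real system are omitted from the linear-combination manipulation, even the `linear` map can appear to fail the check.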
Evolution of the concentration PDF in random environments modeled by global random walk
Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter
2013-04-01
The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and
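The building block that the global random walk method superposes over a lattice is the weak Euler scheme for an Ito equation, dX = drift(X) dt + diffusion(X) dW. The sketch below follows a single trajectory (the paper's point is precisely that all particles from a lattice site can be spread in one procedure); the drift and diffusion choices are illustrative assumptions.

```python
import random

def euler_maruyama(x0, drift, diffusion, dt, n_steps, seed=0):
    """Weak Euler scheme for one Ito equation
    dX = drift(X) dt + diffusion(X) dW, single trajectory."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)   # Brownian increment
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return path

# Ornstein-Uhlenbeck-type relaxation toward 0 with small constant noise
path = euler_maruyama(1.0, drift=lambda x: -x, diffusion=lambda x: 0.1,
                      dt=0.01, n_steps=1000)
```

Running many such trajectories and histogramming the endpoints estimates the PDF; the global random walk replaces that per-particle loop with a single lattice-wide spreading step.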
Nanoscale Mixing of Soft Solids
International Nuclear Information System (INIS)
Choi, Soo-Hyung; Lee, Sangwoo; Soto, Haidy E.; Lodge, Timothy P.; Bates, Frank S.
2011-01-01
Assessing the state of mixing on the molecular scale in soft solids is challenging. Concentrated solutions of micelles formed by self-assembly of polystyrene-block-poly(ethylene-alt-propylene) (PS-PEP) diblock copolymers in squalane (C₃₀H₆₂) adopt a body-centered cubic (bcc) lattice, with glassy PS cores. Utilizing small-angle neutron scattering (SANS) and isotopic labeling (¹H and ²H (D) polystyrene blocks) in a contrast-matching solvent (a mixture of squalane and perdeuterated squalane), we demonstrate quantitatively the remarkable fact that a commercial mixer can create completely random mixtures of micelles with either normal, PS(H), or deuterium-labeled, PS(D), cores on a well-defined bcc lattice. The resulting SANS intensity is quantitatively modeled by the form factor of a single spherical core. These results demonstrate both the possibility of achieving complete nanoscale mixing in a soft solid and the use of SANS to quantify the randomness.
Linear colliders - prospects 1985
International Nuclear Information System (INIS)
Rees, J.
1985-06-01
We discuss the scaling laws of linear colliders and their consequences for accelerator design. We then report on the SLAC Linear Collider project and comment on experience gained on that project and its application to future colliders. 9 refs., 2 figs
International Nuclear Information System (INIS)
Richter, B.
1985-01-01
A report is given on the goals and progress of the SLAC Linear Collider. The author discusses the status of the machine and the detectors and gives an overview of the physics which can be done at this new facility. He also gives some ideas on how (and why) large linear colliders of the future should be built
International Nuclear Information System (INIS)
Rogner, H.H.
1989-01-01
The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and reformulated for presentation at the Workshop. They provide a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm for the purposes of economic planning processes. 1 fig
International Nuclear Information System (INIS)
Rowe, C.H.; Wilton, M.S. de.
1979-01-01
An improved recirculating electron beam linear accelerator of the racetrack type is described. The system comprises a beam path of four straight legs with four Pretzel bending magnets at the end of each leg to direct the beam into the next leg of the beam path. At least one of the beam path legs includes a linear accelerator. (UK)
Vectorized Matlab Codes for Linear Two-Dimensional Elasticity
Directory of Open Access Journals (Sweden)
Jonas Koko
2007-01-01
Full Text Available A vectorized Matlab implementation for the linear finite element is provided for the two-dimensional linear elasticity with mixed boundary conditions. Vectorization means that there is no loop over triangles. Numerical experiments show that our implementation is more efficient than the standard implementation with a loop over all triangles.
Lincx: A Linear Logical Framework with First-class Contexts
DEFF Research Database (Denmark)
Linn Georges, Aina; Murawska, Agata; Otis, Shawn
2017-01-01
Linear logic provides an elegant framework for modelling stateful, imperative and concurrent systems by viewing a context of assumptions as a set of resources. However, mechanizing the meta-theory of such systems remains a challenge, as we need to manage and reason about mixed contexts of linear...
Hilário, M.; Hollander, den W.Th.F.; Sidoravicius, V.; Soares dos Santos, R.; Teixeira, A.
2014-01-01
In this paper we study a random walk in a one-dimensional dynamic random environment consisting of a collection of independent particles performing simple symmetric random walks in a Poisson equilibrium with density ρ ∈ (0,∞). At each step the random walk performs a nearest-neighbour jump, moving to
Semidefinite linear complementarity problems
International Nuclear Information System (INIS)
Eckhardt, U.
1978-04-01
Semidefinite linear complementarity problems arise by discretization of variational inequalities describing e.g. elastic contact problems, free boundary value problems etc. In the present paper linear complementarity problems are introduced and their theory as well as their numerical treatment are described. In the special case of semidefinite linear complementarity problems a numerical method is presented which combines the advantages of elimination and iteration methods without suffering from their drawbacks. This new method has very attractive properties since it has a high degree of invariance with respect to the representation of the set of all feasible solutions of a linear complementarity problem by linear inequalities. By means of some practical applications the properties of the new method are demonstrated. (orig.)
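A linear complementarity problem asks for z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0. The projected Gauss-Seidel iteration below is a standard scheme in the spirit of the elimination/iteration hybrids discussed above, shown here for intuition; it is not the paper's own method.

```python
def lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP  w = M z + q, z >= 0,
    w >= 0, z.w = 0, with M positive (semi)definite and positive
    diagonal.  Standard iterative scheme, not the paper's method."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # residual of row i with z[i] removed
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            # solve row i exactly, then project onto z[i] >= 0
            z[i] = max(0.0, -r / M[i][i])
    return z

# 2x2 example: M = identity, q = (-1, 2)  ->  z = (1, 0), w = (0, 2)
z = lcp_pgs([[1.0, 0.0], [0.0, 1.0]], [-1.0, 2.0])
```

Each component is either driven to make its row of w vanish or clamped at zero, which is exactly the complementarity condition enforced pointwise.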
Axler, Sheldon
2015-01-01
This best-selling textbook for a second course in linear algebra is aimed at undergrad math majors and graduate students. The novel approach taken here banishes determinants to the end of the book. The text focuses on the central goal of linear algebra: understanding the structure of linear operators on finite-dimensional vector spaces. The author has taken unusual care to motivate concepts and to simplify proofs. A variety of interesting exercises in each chapter helps students understand and manipulate the objects of linear algebra. The third edition contains major improvements and revisions throughout the book. More than 300 new exercises have been added since the previous edition. Many new examples have been added to illustrate the key ideas of linear algebra. New topics covered in the book include product spaces, quotient spaces, and dual spaces. Beautiful new formatting creates pages with an unusually pleasant appearance in both print and electronic versions. No prerequisites are assumed other than the ...
Cooperation in two-dimensional mixed-games
International Nuclear Information System (INIS)
Amaral, Marco A; Silva, Jafferson K L da; Wardil, Lucas
2015-01-01
Evolutionary game theory is a common framework to study the evolution of cooperation, where it is usually assumed that the same game is played in all interactions. Here, we investigate a model where the game that is played by two individuals is uniformly drawn from a sample of two different games. Using the master equation approach we show that the random mixture of two games is equivalent to playing the average game when (i) the strategies are statistically independent of the game distribution and (ii) the transition rates are linear functions of the payoffs. We also use Monte-Carlo simulations on a two-dimensional lattice and mean-field techniques to investigate the scenario when the two above conditions do not hold. We find that even outside of such conditions, several quantities characterizing the mixed-games are still the same as the ones obtained in the average game when the two games are not very different. (paper)
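Under conditions (i) and (ii) of the abstract, the mixture is equivalent to a single game whose payoff matrix is the probability-weighted average of the two. The sketch below computes that average game; the toy payoff matrices are illustrative assumptions.

```python
def average_game(game1, game2, p=0.5):
    """Payoff matrix of the 'average game': each interaction draws
    game1 with probability p and game2 otherwise, so the equivalent
    single game has entry-wise expected payoffs."""
    return [[p * a + (1 - p) * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(game1, game2)]

# A prisoner's-dilemma-like game mixed 50/50 with a snowdrift-like game
pd = [[3.0, 0.0], [5.0, 1.0]]
sd = [[3.0, 1.0], [5.0, 0.0]]
avg = average_game(pd, sd)
```

Outside conditions (i) and (ii) this reduction no longer holds exactly, which is the regime the paper explores with lattice simulations and mean-field techniques.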
International Nuclear Information System (INIS)
Lumay, G; Vandewalle, N
2007-01-01
We present an experimental protocol that allows one to tune the packing fraction η of a random pile of ferromagnetic spheres from a value close to the lower limit of random loose packing η_RLP ≅ 0.56 to the upper limit of random close packing η_RCP ≅ 0.64. This broad range of packing fraction values is obtained under normal gravity in air, by adjusting a magnetic cohesion between the grains during the formation of the pile. Attractive and repulsive magnetic interactions are found to strongly affect the internal structure and the stability of sphere packings. After the formation of the pile, the induced cohesion is decreased continuously along a linear decreasing ramp. The controlled collapse of the pile is found to generate various and reproducible values of the random packing fraction η