REGRESSION ANALYSIS OF PRODUCTIVITY USING MIXED EFFECT MODEL
Siana Halim
2007-01-01
Full Text Available Production plants of a company are located in several areas spread across Central and East Java. As the production process relies mostly on manpower, we suspected that each location has different characteristics affecting productivity, so the production data may have a spatial and hierarchical structure. Fitting a linear regression with ordinary techniques requires assumptions about the nature of the residuals, i.e., that they are independent, identically and normally distributed. However, these assumptions are rarely fulfilled, especially for data with a spatial and hierarchical structure. We worked out the problem using a mixed effects model. This paper discusses the construction of a model relating productivity to several characteristics of the production line, taking location as a random effect. A simple model of high utility that satisfies the necessary regression assumptions was built using the free statistical software R, version 2.6.1.
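The partial-pooling idea behind treating location as a random effect can be sketched in a few lines. The following is an illustrative toy only (hypothetical productivity data, a balanced design, and method-of-moments variance components), not the paper's actual R analysis:

```python
# Illustrative sketch (hypothetical data): a random-intercept model
# y_ij = mu + u_j + e_ij, where each location j contributes a random
# effect u_j. Location means are shrunk toward the grand mean
# ("partial pooling"), as a mixed-effects fit in R would do.
from statistics import mean

# hypothetical productivity measurements per location
data = {
    "loc_A": [10.0, 11.0, 12.0, 11.5],
    "loc_B": [14.0, 13.5, 14.5, 15.0],
    "loc_C": [9.0, 8.5, 10.0, 9.5],
}

grand = mean(v for vals in data.values() for v in vals)

# pooled within-location (residual) variance
sse = sum((v - mean(vals)) ** 2 for vals in data.values() for v in vals)
n_total = sum(len(v) for v in data.values())
sigma2_e = sse / (n_total - len(data))

# between-location variance (method of moments, balanced design)
n_j = len(next(iter(data.values())))
msb = sum(n_j * (mean(vals) - grand) ** 2 for vals in data.values()) / (len(data) - 1)
sigma2_u = max((msb - sigma2_e) / n_j, 0.0)

# shrunken (BLUP-style) location-level predictions
blup = {
    j: grand + sigma2_u / (sigma2_u + sigma2_e / n_j) * (mean(vals) - grand)
    for j, vals in data.items()
}
for j, b in blup.items():
    print(j, round(b, 4))
```

Each prediction lies between the raw location mean and the grand mean; the less residual noise relative to between-location spread, the less shrinkage is applied.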
FMEM: functional mixed effects modeling for the analysis of longitudinal white matter Tract data.
Yuan, Ying; Gilmore, John H; Geng, Xiujuan; Styner, Martin; Chen, Kehui; Wang, Jane-Ling; Zhu, Hongtu
2014-01-01
Many longitudinal imaging studies have collected repeated diffusion tensor magnetic resonance imaging data to understand white matter maturation and structural connectivity patterns in normal controls and diseased subjects. There is an urgent demand for the development of statistical methods for the analysis of diffusion properties along fiber tracts and clinical data obtained from longitudinal studies. Jointly analyzing repeated fiber-tract diffusion properties and covariates (e.g., age or gender) raises several major challenges including (i) infinite-dimensional functional response data, (ii) complex spatial-temporal correlation structure, and (iii) complex spatial smoothness. To address these challenges, this article develops a functional mixed effects modeling (FMEM) framework to delineate the dynamic changes of diffusion properties along major fiber tracts and their association with a set of covariates of interest and the structure of the variability of these white matter tract properties in various longitudinal studies. Our FMEM consists of a functional mixed effects model for addressing all three challenges, an efficient method for spatially smoothing varying coefficient functions, an estimation method for the spatial-temporal correlation structure, a test procedure with local and global test statistics for testing hypotheses of interest associated with functional response, and a simultaneous confidence band for quantifying the uncertainty in the estimated coefficient functions. Simulated data are used to evaluate the finite sample performance of FMEM and to demonstrate that FMEM significantly outperforms the standard pointwise mixed effects modeling approach. We apply FMEM to study the spatial-temporal dynamics of white-matter fiber tracts in a clinical study of neurodevelopment.
Design and analysis of Q-RT-PCR assays for haematological malignancies using mixed effects models
Bøgsted, Martin; Mandrup, Charlotte; Petersen, Anders;
research use and needs quality control for accuracy and precision. In particular, the identification of experimental variation and its statistical analysis have recently created discussion. The standard analytical technique is the Delta-Delta-Ct method. Although this method accounts for sample-specific variations such as RNA purification, it does not account for other experimental effects such as variations in cDNA synthesis, amplification efficiency and assay variations. To obtain an assessment of the accuracy and precision of the assays, a novel approach for the statistical analysis of Q-RT-PCR has been developed based on a linear mixed effects model for factorial designs. The model consists of an analysis of variance in which the variation of each fixed effect of interest and the identified experimental and biological nuisance variations are separated. Hereby it accounts for varying efficiency, inhomogeneous…
Jensen, Kasper Lynge; Spliid, Henrik; Toftum, Jørn
2011-01-01
The aim of the current study was to apply multivariate mixed-effects modeling to analyze experimental data on the relation between air quality and the performance of office work. The method estimates in one step the effect of the exposure on a multi-dimensional response variable, and yields important information on the correlation between the different dimensions of the response variable, which in this study was composed of both subjective perceptions and a two-dimensional performance task outcome. Such correlation is typically not included in the output from univariate analysis methods. The analysis seems superior to conventional univariate statistics, and the information provided may be important for the design of performance experiments in general and for the conclusions that can be based on such studies.
A framework for meta-analysis of veterinary drug pharmacokinetic data using mixed effect modeling.
Li, Mengjie; Gehring, Ronette; Lin, Zhoumeng; Riviere, Jim
2015-04-01
Combining data from available studies is a useful approach to interpret the overwhelming amount of data generated in medical research from multiple studies. Paradoxically, in veterinary medicine, lack of data requires integrating available data to make meaningful population inferences. Nonlinear mixed-effects modeling is a useful tool to apply meta-analysis to diverse pharmacokinetic (PK) studies of veterinary drugs. This review provides a summary of the characteristics of PK data of veterinary drugs and how integration of these data may differ from human PK studies. The limits of meta-analysis include the sophistication of data mining, and generation of misleading results caused by biased or poor quality data. The overriding strength of meta-analysis applied to this field is that robust statistical analysis of the diverse sparse data sets inherent to veterinary medicine applications can be accomplished, thereby allowing population inferences to be made.
Kriging with mixed effects models
Alessio Pollice
2007-10-01
In this paper the effectiveness of the use of mixed effects models for estimation and prediction purposes in spatial statistics for continuous data is reviewed in the classical and Bayesian frameworks. A case study on agricultural data is also provided.
A Bayesian based functional mixed-effects model for analysis of LC-MS data.
Befekadu, Getachew K; Tadesse, Mahlet G; Ressom, Habtom W
2009-01-01
A Bayesian multilevel functional mixed-effects model with group specific random-effects is presented for analysis of liquid chromatography-mass spectrometry (LC-MS) data. The proposed framework allows alignment of LC-MS spectra with respect to both retention time (RT) and mass-to-charge ratio (m/z). Affine transformations are incorporated within the model to account for any variability along the RT and m/z dimensions. Simultaneous posterior inference of all unknown parameters is accomplished via Markov chain Monte Carlo method using the Gibbs sampling algorithm. The proposed approach is computationally tractable and allows incorporating prior knowledge in the inference process. We demonstrate the applicability of our approach for alignment of LC-MS spectra based on total ion count profiles derived from two LC-MS datasets.
Mixed-effects state-space models for analysis of longitudinal dynamic systems.
Liu, Dacheng; Lu, Tao; Niu, Xu-Feng; Wu, Hulin
2011-06-01
The rapid development of new biotechnologies allows us to understand biomedical dynamic systems in greater detail, down to the cellular level. Many subject-specific biomedical systems can be described by a set of differential or difference equations that are similar to engineering dynamic systems. In this article, motivated by HIV dynamic studies, we propose a class of mixed-effects state-space models based on the longitudinal feature of dynamic systems. State-space models with mixed-effects components are very flexible in modeling the serial correlation of within-subject observations and between-subject variations. The Bayesian approach and the maximum likelihood method for standard mixed-effects models and state-space models are modified and investigated for estimating unknown parameters in the proposed models. In the Bayesian approach, full conditional distributions are derived and the Gibbs sampler is constructed to explore the posterior distributions. For the maximum likelihood method, we develop a Monte Carlo EM algorithm with a Gibbs sampler step to approximate the conditional expectations in the E-step. Simulation studies are conducted to compare the two proposed methods. We apply the mixed-effects state-space model to a data set from an AIDS clinical trial to illustrate the proposed methodologies. The proposed models and methods may also have potential applications in other biomedical system analyses such as tumor dynamics in cancer research and genetic regulatory network modeling. © 2010, The International Biometric Society.
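The state-space backbone of such models can be sketched with a scalar Kalman filter; the mixed-effects layer (subject-specific random parameters shared across a population) would sit on top of this machinery. All numbers below are made up for illustration:

```python
# Minimal scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t,
# with w_t ~ N(0, q) and v_t ~ N(0, r). Illustrative only: the paper's
# models add mixed-effects (subject-level random parameters) on top.
def kalman_filter(ys, a=1.0, q=0.01, r=1.0, x0=0.0, p0=10.0):
    x, p = x0, p0
    estimates, variances = [], []
    for y in ys:
        # predict step
        x_pred = a * x
        p_pred = a * a * p + q
        # update step
        k = p_pred / (p_pred + r)      # Kalman gain
        x = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred
        estimates.append(x)
        variances.append(p)
    return estimates, variances

obs = [2.1, 1.9, 2.2, 2.0, 2.05, 1.95]   # noisy readings of a level near 2
est, var = kalman_filter(obs)
print(round(est[-1], 3), round(var[-1], 3))
```

As more observations arrive, the filtered estimate settles near the underlying level and its variance shrinks from the diffuse prior.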
The value of a statistical life: a meta-analysis with a mixed effects regression model.
Bellavance, François; Dionne, Georges; Lebeau, Martin
2009-03-01
The value of a statistical life (VSL) is a very controversial topic, but one which is essential to the optimization of governmental decisions. We see a great variability in the values obtained from different studies. The source of this variability needs to be understood, in order to offer public decision-makers better guidance in choosing a value and to set clearer guidelines for future research on the topic. This article presents a meta-analysis based on 39 observations obtained from 37 studies (from nine different countries) which all use a hedonic wage method to calculate the VSL. Our meta-analysis is innovative in that it is the first to use the mixed effects regression model [Raudenbush, S.W., 1994. Random effects models. In: Cooper, H., Hedges, L.V. (Eds.), The Handbook of Research Synthesis. Russel Sage Foundation, New York] to analyze studies on the value of a statistical life. We conclude that the variability found in the values studied stems in large part from differences in methodologies.
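The random-effects pooling that underlies such a meta-analysis can be sketched with the classic DerSimonian-Laird estimator. The numbers below are toy values, not the 39 VSL observations analyzed in the paper:

```python
# DerSimonian-Laird random-effects pooling on toy numbers. Each study
# supplies an estimate y_i and a within-study variance v_i; tau2 is the
# estimated between-study variance that absorbs heterogeneity.
def dersimonian_laird(ys, vs):
    w = [1.0 / v for v in vs]                                  # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, ys))    # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max((q - (len(ys) - 1)) / c, 0.0)                   # between-study variance
    w_re = [1.0 / (v + tau2) for v in vs]                      # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    return y_re, tau2

estimates = [4.2, 6.8, 5.1, 9.0, 3.5]   # hypothetical VSLs (e.g. millions)
variances = [0.5, 1.2, 0.8, 2.0, 0.6]
pooled, tau2 = dersimonian_laird(estimates, variances)
print(round(pooled, 3), round(tau2, 3))
```

A positive tau2 signals exactly the kind of between-study variability the article attributes largely to methodological differences.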
Bilgel, Murat; Prince, Jerry L; Wong, Dean F; Resnick, Susan M; Jedynak, Bruno M
2016-07-01
It is important to characterize the temporal trajectories of disease-related biomarkers in order to monitor progression and identify potential points of intervention. These are especially important for neurodegenerative diseases, as therapeutic intervention is most likely to be effective in the preclinical disease stages prior to significant neuronal damage. Neuroimaging allows for the measurement of structural, functional, and metabolic integrity of the brain at the level of voxels, whose volumes are on the order of mm³. These voxelwise measurements provide a rich collection of disease indicators. Longitudinal neuroimaging studies enable the analysis of changes in these voxelwise measures. However, commonly used longitudinal analysis approaches, such as linear mixed effects models, do not account for the fact that individuals enter a study at various disease stages and progress at different rates, and generally consider each voxelwise measure independently. We propose a multivariate nonlinear mixed effects model for estimating the trajectories of voxelwise neuroimaging biomarkers from longitudinal data that accounts for such differences across individuals. The method involves the prediction of a progression score for each visit based on a collective analysis of voxelwise biomarker data within an expectation-maximization framework that efficiently handles large amounts of measurements and variable number of visits per individual, and accounts for spatial correlations among voxels. This score allows individuals with similar progressions to be aligned and analyzed together, which enables the construction of a trajectory of brain changes as a function of an underlying progression or disease stage. We apply our method to studying cortical β-amyloid deposition, a hallmark of preclinical Alzheimer's disease, as measured using positron emission tomography. Results on 104 individuals with a total of 300 visits suggest that precuneus is the earliest cortical region to
Mixed Effects Models for Complex Data
Wu, Lang
2009-01-01
Presenting effective approaches to address missing data, measurement errors, censoring, and outliers in longitudinal data, this book covers linear, nonlinear, generalized linear, nonparametric, and semiparametric mixed effects models. It links each mixed effects model with the corresponding class of regression model for cross-sectional data and discusses computational strategies for likelihood estimations of mixed effects models. The author briefly describes generalized estimating equations methods and Bayesian mixed effects models and explains how to implement standard models using R and S-Pl
Mixed-Effects State Space Models for Analysis of Longitudinal Dynamic Systems
Liu, Dacheng; Lu, Tao; Niu, Xu-Feng; Wu, Hulin
2010-01-01
The rapid development of new biotechnologies allows us to understand biomedical dynamic systems in greater detail, down to the cellular level. Many subject-specific biomedical systems can be described by a set of differential or difference equations which are similar to engineering dynamic systems. In this paper, motivated by HIV dynamic studies, we propose a class of mixed-effects state space models based on the longitudinal feature of dynamic systems. State space models with mixed-ef...
Marginal and mixed-effects models in the analysis of human papillomavirus natural history data.
Xue, Xiaonan; Gange, Stephen J; Zhong, Ye; Burk, Robert D; Minkoff, Howard; Massad, L Stewart; Watts, D Heather; Kuniholm, Mark H; Anastos, Kathryn; Levine, Alexandra M; Fazzari, Melissa; D'Souza, Gypsyamber; Plankey, Michael; Palefsky, Joel M; Strickler, Howard D
2010-01-01
Human papillomavirus (HPV) natural history has several characteristics that, at least from a statistical perspective, are not often encountered elsewhere in infectious disease and cancer research. There are, for example, multiple HPV types, and infection by each HPV type may be considered separate events. Although concurrent infections are common, the prevalence, incidence, and duration/persistence of each individual HPV can be separately measured. However, repeated measures involving the same subject tend to be correlated. The probability of detecting any given HPV type, for example, is greater among individuals who are currently positive for at least one other HPV type. Serial testing for HPV over time represents a second form of repeated measures. Statistical inferences that fail to take these correlations into account would be invalid. However, methods that do not use all the data would be inefficient. Marginal and mixed-effects models can address these issues but are not frequently used in HPV research. The current study provides an overview of these methods and then uses HPV data from a cohort of HIV-positive women to illustrate how they may be applied, and compare their results. The findings show the greater efficiency of these models compared with standard logistic regression and Cox models. Because mixed-effects models estimate subject-specific associations, they sometimes gave much higher effect estimates than marginal models, which estimate population-averaged associations. Overall, the results show that marginal and mixed-effects models are efficient for studying HPV natural history, but also highlight the importance of understanding how these models differ.
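The contrast between subject-specific (mixed-effects) and population-averaged (marginal) estimates noted above can be shown numerically. Below is an illustrative sketch with hypothetical logistic-model parameters, not the HPV cohort data: integrating a subject-level random intercept out of a logistic model attenuates the odds ratio.

```python
# Why a mixed-effects (subject-specific) odds ratio exceeds the marginal
# (population-averaged) one: integrate a N(0, sigma^2) random intercept
# out of a logistic model by a simple weighted Riemann sum.
import math

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

def marginal_prob(beta0, beta1, x, sigma, n=4001, span=8.0):
    # E_u[expit(beta0 + beta1*x + u)], u ~ N(0, sigma^2)
    total, weight = 0.0, 0.0
    for i in range(n):
        u = -span * sigma + 2 * span * sigma * i / (n - 1)
        phi = math.exp(-0.5 * (u / sigma) ** 2)   # unnormalized normal pdf
        total += expit(beta0 + beta1 * x + u) * phi
        weight += phi
    return total / weight

beta0, beta1, sigma = -1.0, 1.0, 2.0     # conditional log-OR is 1.0 (toy values)
p0 = marginal_prob(beta0, beta1, 0.0, sigma)
p1 = marginal_prob(beta0, beta1, 1.0, sigma)
marginal_log_or = math.log(p1 / (1 - p1)) - math.log(p0 / (1 - p0))
print(round(marginal_log_or, 3))   # attenuated below the conditional 1.0
```

This mirrors the study's observation that mixed-effects models "sometimes gave much higher effect estimates than marginal models".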
Hadjipantelis, P Z; Aston, J A D; Müller, H G; Evans, J P
2015-04-03
Mandarin Chinese is characterized by being a tonal language; the pitch (or F0) of its utterances carries considerable linguistic information. However, speech samples from different individuals are subject to changes in amplitude and phase, which must be accounted for in any analysis that attempts to provide a linguistically meaningful description of the language. A joint model for amplitude, phase, and duration is presented, which combines elements from functional data analysis, compositional data analysis, and linear mixed effects models. By decomposing functions via a functional principal component analysis, and connecting registration functions to compositional data analysis, a joint multivariate mixed effect model can be formulated, which gives insights into the relationship between the different modes of variation as well as their dependence on linguistic and nonlinguistic covariates. The model is applied to the COSPRO-1 dataset, a comprehensive database of spoken Taiwanese Mandarin, containing approximately 50,000 phonetically diverse sample F0 contours (syllables), and reveals that phonetic information is jointly carried by both amplitude and phase variation. Supplementary materials for this article are available online.
Nonlinear Mixed-Effects Models for Repairable Systems Reliability
TAN Fu-rong; JIANG Zhi-bin; KUO Way; Suk Joo BAE
2007-01-01
Mixed-effects models, also called random-effects models, are a regression-type analysis that enables the analyst not only to describe the trend over time within each subject, but also to describe the variation among different subjects. Nonlinear mixed-effects models provide a powerful and flexible tool for handling unbalanced count data. In this paper, nonlinear mixed-effects models are used to analyze the failure data from a repairable system with multiple copies. By using this type of model, statistical inferences about the population and all copies can be made while accounting for copy-to-copy variance. Results of fitting nonlinear mixed-effects models to nine failure-data sets show that nonlinear mixed-effects models provide a useful tool for analyzing the failure data from multi-copy repairable systems.
Mitsumata, Kaneto; Saitoh, Shigeyuki; Ohnishi, Hirofumi; Akasaka, Hiroshi; Miura, Tetsuji
2012-11-01
The mechanism underlying the association of parental hypertension with cardiovascular events in offspring remains unclear. In this study, the effects of parental hypertension on longitudinal trends of blood pressure and metabolic parameters were examined by mixed-effects model analysis. From 1977 to 2006, 5198 subjects participated in the Tanno-Sobetsu Study, and we selected 2607 subjects (1095 men and 1512 women) for whom data on parental history of hypertension were available. In both men and women with and without parental hypertension, systolic blood pressure and fasting blood glucose levels consistently increased from the third to eighth decades of life, whereas diastolic blood pressure and serum triglyceride levels followed biphasic (inverted U shape) time courses during that period. However, the relationships between the parameters and age were significantly shifted upward (by ≈5.3 mm Hg in systolic blood pressure, 2.8 mm Hg in diastolic blood pressure, 0.30 mmol/L in blood glucose, and 0.09 mmol/L in triglyceride) in the group with parental hypertension compared with those in the group without parental hypertension. Both paternal and maternal histories of hypertension were determinants of systolic blood pressure and diastolic blood pressure, and there was no significant interaction between the sides of parental history. There were no significant effects of parental hypertension on age-dependent or body mass index-dependent changes in serum low-density lipoprotein cholesterol or high-density lipoprotein cholesterol level. The present results indicate that parental hypertension has an age-independent impact on elevation of blood pressure, plasma glucose, and triglyceride levels, which may underlie the reported increase in cardiovascular events by family history of hypertension.
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
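Gauss-Hermite quadrature, used above to integrate out normally distributed random effects from the likelihood, can be sketched with a basic three-point rule (the paper uses adaptive or nonadaptive rules of configurable order; this non-adaptive sketch just shows the mechanics):

```python
# Three-point Gauss-Hermite quadrature for E[f(b)] with b ~ N(0, sigma^2),
# the kind of integral that arises when a normal random effect is
# integrated out of a mixed-effects likelihood. Nodes/weights are for the
# weight function exp(-x^2); the substitution b = sqrt(2)*sigma*x turns
# the Gaussian expectation into that form.
import math

def expect_gh3(f, sigma):
    nodes = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
    weights = [math.sqrt(math.pi) / 6,
               2 * math.sqrt(math.pi) / 3,
               math.sqrt(math.pi) / 6]
    return sum(w * f(math.sqrt(2) * sigma * x)
               for w, x in zip(weights, nodes)) / math.sqrt(math.pi)

sigma = 0.7
print(expect_gh3(lambda b: b * b, sigma))  # E[b^2] = sigma^2, exact here
```

A three-point rule is exact for polynomial integrands up to degree five; real likelihood contributions are not polynomial, which is why higher-order and adaptive rules matter in practice.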
Jiang, Xiaoqi; Kopp-Schneider, Annette
2014-05-01
Dose-response studies are performed to investigate the potency of a compound. EC50 is the concentration of the compound that gives half-maximal response. Dose-response data are typically evaluated by using a log-logistic model that includes EC50 as one of the model parameters. Often, more than one experiment is carried out to determine the EC50 value for a compound, requiring summarization of EC50 estimates from a series of experiments. In this context, mixed-effects models are designed to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments simultaneously. However, fitting nonlinear mixed-effects models is more complicated than in a linear situation, and convergence problems are often encountered. An alternative strategy is the application of a meta-analysis approach, which combines EC50 estimates obtained from separate log-logistic model fitting. These two proposed strategies to summarize EC50 estimates from multiple experiments are compared in a simulation study and real data example. We conclude that the meta-analysis strategy is a simple and robust method to summarize EC50 estimates from multiple experiments, especially suited in the case of a small number of experiments.
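Both ingredients of the comparison above can be sketched briefly. The code uses one common four-parameter log-logistic parameterization (the paper's exact variant may differ) and toy EC50 estimates with assumed variances on the log scale:

```python
# Four-parameter log-logistic dose-response curve: at c == EC50 the
# response is exactly halfway between the two asymptotes.
import math

def log_logistic(c, lower, upper, ec50, h):
    return lower + (upper - lower) / (1.0 + (c / ec50) ** h)

lower, upper, ec50, h = 0.0, 100.0, 2.5, 1.8
print(log_logistic(ec50, lower, upper, ec50, h))  # 50.0, the half-maximal response

# Meta-analysis strategy: pool per-experiment EC50 estimates on the log
# scale with inverse-variance weights (toy numbers; assumes the variances
# are those of the log-EC50 estimates).
ec50s = [2.3, 2.8, 2.1]
vars_log = [0.04, 0.09, 0.05]
w = [1.0 / v for v in vars_log]
pooled_log = sum(wi * math.log(e) for wi, e in zip(w, ec50s)) / sum(w)
pooled_ec50 = math.exp(pooled_log)
print(round(pooled_ec50, 3))
```

Pooling separate per-experiment fits like this avoids the convergence problems that a joint nonlinear mixed-effects fit can run into, which is the practical point of the comparison.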
Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik
2004-01-01
The standard software for non-linear mixed-effects analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package, which combines the ordinary differential equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlme…
Kliem, Sören; Kröger, Christoph
2013-11-01
Post-traumatic stress disorder (PTSD) is of great interest to public health, due to the high burden it places on both the individual and society. We meta-analyzed randomized-controlled trials to examine the effectiveness of early trauma-focused cognitive-behavioral treatment (TFCBT) for preventing chronic PTSD. Systematic bibliographic research was undertaken to find relevant literature from on-line databases (Pubmed, PsycINFO, Psyndex, Medline). Using a mixed-effect approach, we calculated effect sizes (ES) for the PTSD diagnoses (main outcome) as well as PTSD and depressive symptoms (secondary outcomes), respectively. Calculations of ES from pre-intervention to first follow-up assessment were based on 10 studies. A moderate effect (ES = 0.54) was found for the main outcome, whereas ES for secondary outcomes were predominantly small (ES = 0.27-0.45). The ES for the main outcome decreased to small (ES = 0.34) from first follow-up to long-term follow-up assessment. The mean dropout rate was 16.7% pre- to post-treatment. There was evidence for the impact of moderators on different outcomes (e.g., the number of sessions on PTSD symptoms). Future studies should include survivors of other trauma types (e.g., burn injuries) rather than predominantly survivors of accidents and physical assault, and should compare early TFCBT with other interventions that previously demonstrated effectiveness.
A Mixed Effects Randomized Item Response Model
Fox, J.-P.; Wyrick, Cheryl
2008-01-01
The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…
Aoki, Yasunori; Nordgren, Rikard; Hooker, Andrew C
2016-01-01
... a bottleneck in the analysis. We propose a preconditioning method for non-linear mixed effects models used in pharmacometric analyses to stabilise the computation of the variance-covariance matrix...
Longitudinal mixed-effects models for latent cognitive function
Hout, van den Ardo; Fox, Jean-Paul; Muniz-Terrera, Graciela
2015-01-01
A mixed-effects regression model with a bent-cable change-point predictor is formulated to describe potential decline of cognitive function over time in the older population. For the individual trajectories, cognitive function is considered to be a latent variable measured through an item response t
Kinetic mixing effect in the 3-3-1-1 model
Dong, P. V.; Si, D. T.
2016-06-01
We show that the mixing effect of the neutral gauge bosons in the 3-3-1-1 model comes from two sources. The first is the 3-3-1-1 gauge symmetry breaking as usual, whereas the second results from the kinetic mixing between the gauge bosons of the U(1)_X and U(1)_N groups, which determine the electric charge and baryon-minus-lepton numbers, respectively. Such mixings modify the ρ parameter and the known couplings of Z with fermions. The constraints that arise from flavor-changing neutral currents due to the gauge boson mixings and nonuniversal fermion generations are also given.
Bello, Nora M; Steibel, Juan P; Tempelman, Robert J
2010-06-01
Bivariate mixed effects models are often used to jointly infer upon covariance matrices for both random effects (u) and residuals (e) between two different phenotypes in order to investigate the architecture of their relationship. However, these (co)variances themselves may additionally depend upon covariates as well as additional sets of exchangeable random effects that facilitate borrowing of strength across a large number of clusters. We propose a hierarchical Bayesian extension of the classical bivariate mixed effects model by embedding additional levels of mixed effects modeling of reparameterizations of u-level and e-level (co)variances between two traits. These parameters are based upon a recently popularized square-root-free Cholesky decomposition and are readily interpretable, each conveniently facilitating a generalized linear model characterization. Using Markov Chain Monte Carlo methods, we validate our model based on a simulation study and apply it to a joint analysis of milk yield and calving interval phenotypes in Michigan dairy cows. This analysis indicates that the e-level relationship between the two traits is highly heterogeneous across herds and depends upon systematic herd management factors.
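The square-root-free Cholesky reparameterization mentioned above is easy to see in the 2x2 case. This is a generic illustration of the decomposition itself (toy covariance values, not the Michigan dairy estimates):

```python
# Square-root-free Cholesky (LDL^T) of a 2x2 covariance matrix:
# Sigma = L D L^T with unit lower-triangular L and diagonal D. The
# off-diagonal of L reads as a "regression" of trait 2 on trait 1, and
# D holds the marginal and conditional variances, which is what makes
# the parameters interpretable and GLM-friendly.
def ldl_2x2(s11, s12, s22):
    d1 = s11
    l21 = s12 / s11            # regression coefficient of trait 2 on trait 1
    d2 = s22 - l21 * s12       # conditional (residual) variance of trait 2
    return l21, d1, d2

# toy covariance between two traits (illustrative values only)
s11, s12, s22 = 4.0, 1.2, 2.5
l21, d1, d2 = ldl_2x2(s11, s12, s22)

# reconstruct Sigma from the factors: [[d1, l21*d1], [l21*d1, l21^2*d1 + d2]]
r11 = d1
r12 = l21 * d1
r22 = l21 * l21 * d1 + d2
print(l21, d1, d2, (r11, r12, r22))
```

Because l21 is unconstrained and d1, d2 only need to be positive, each factor can be modeled with its own (generalized) linear predictor, which is the hierarchical extension the paper builds.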
Latent Fundamentals Arbitrage with a Mixed Effects Factor Model
Andrei Salem Gonçalves
2012-09-01
We propose a single-factor mixed effects panel data model to create an arbitrage portfolio that identifies differences in firm-level latent fundamentals. Furthermore, we show that even though the characteristics that affect returns are unknown variables, it is possible to identify the strength of the combination of these latent fundamentals for each stock by following a simple approach using historical data. As a result, a trading strategy that bought the stocks with the best fundamentals (the strong fundamentals portfolio) and sold the stocks with the worst ones (the weak fundamentals portfolio) realized significant risk-adjusted returns in the U.S. market for the period between July 1986 and June 2008. To ensure robustness, we performed subperiod and seasonal analyses and adjusted for trading costs, and we found further empirical evidence that a simple investment rule that identifies these latent fundamentals from the structure of past returns can lead to profit.
Shi, J Q; Wang, B; Will, E J; West, R M
2012-11-20
We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime.
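The nonparametric Gaussian process component can be sketched in miniature. The following is an illustrative GP regression with a squared-exponential kernel on a tiny hypothetical dose-response dataset (plain-Python linear algebra; the paper combines such a GP with a parametric mixed-effects part, which is omitted here):

```python
# Minimal Gaussian process regression: posterior mean at a test input,
# using a squared-exponential kernel and Gaussian elimination for the
# small linear solve. Toy data; hyperparameters are assumed, not fitted.
import math

def kern(a, b, length=1.0, var=1.0):
    return var * math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    K = [[kern(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)                     # alpha = K^{-1} y
    return sum(kern(x_star, xi) * a for xi, a in zip(xs, alpha))

doses = [0.0, 1.0, 2.0, 3.0]
resp = [0.1, 0.8, 1.5, 1.7]                  # toy dose-response readings
print(round(gp_predict(doses, resp, 1.0), 3))
```

With a tiny noise term the GP nearly interpolates the training points while borrowing smoothness between them, which is the "add nonlinearity" role it plays in the semiparametric model.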
Faraway, Julian J
2005-01-01
Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...
A brief introduction to regression designs and mixed-effects modelling by a recent convert
Balling, Laura Winther
2008-01-01
This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable sele...
Cheung, Mike W.-L.
2008-01-01
Meta-analysis and structural equation modeling (SEM) are two important statistical methods in the behavioral, social, and medical sciences. They are generally treated as two unrelated topics in the literature. The present article proposes a model to integrate fixed-, random-, and mixed-effects meta-analyses into the SEM framework. By applying an…
Petras Rupšys
2015-01-01
A stochastic modeling approach based on the Bertalanffy law has gained interest due to its ability to produce more accurate results than deterministic approaches. We examine tree crown width dynamics with a Bertalanffy-type stochastic differential equation (SDE) and mixed-effects parameters. In this study, we demonstrate how this simple model can be used to calculate predictions of crown width. We propose a parameter estimation method and computational guidelines. The primary goal of the study was to estimate the parameters by considering discrete sampling of the diameter at breast height and crown width and by using a maximum likelihood procedure. Performance statistics for the crown width equation include statistical indexes and analysis of residuals. We use data provided by the Lithuanian National Forest Inventory from Scots pine trees to illustrate issues of our modeling technique. Comparison of the predicted crown width values of the mixed-effects parameters model with those obtained using the fixed-effects parameters model demonstrates the predictive power of the stochastic differential equation model with mixed-effects parameters. All results were implemented in the symbolic algebra system MAPLE.
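As a hedged, self-contained sketch of the idea in the record above (not the paper's actual model, estimation method, or fitted values), the following simulates a mean-reverting Bertalanffy-type SDE by Euler-Maruyama, with the asymptote treated as a mixed-effects parameter: each tree draws its own asymptotic crown width around a population mean. All numeric values are illustrative assumptions.

```python
import math
import random

def simulate_crown_width(alpha, beta, sigma, x0, t_end, dt, rng):
    """Euler-Maruyama path of the mean-reverting SDE
    dX = beta * (alpha - X) dt + sigma dW  (a Bertalanffy-type form)."""
    x, path = x0, [x0]
    for _ in range(round(t_end / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x += beta * (alpha - x) * dt + sigma * dw
        path.append(x)
    return path

rng = random.Random(42)
# Mixed-effects parameter: tree-specific asymptote alpha_i ~ N(4.0, 0.5^2).
population_alpha, sd_alpha = 4.0, 0.5
paths = [
    simulate_crown_width(rng.gauss(population_alpha, sd_alpha),
                         beta=0.1, sigma=0.05, x0=0.5, t_end=50.0, dt=0.5, rng=rng)
    for _ in range(5)
]
```

The paper instead estimates the parameters by maximum likelihood from discrete diameter/crown-width samples; this sketch only shows the forward model that such estimation would target.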
Lu, Tao; Liang, Hua; Li, Hongzhe; Wu, Hulin
2011-01-01
Gene regulation is a complicated process. The interaction of many genes and their products forms an intricate biological network. Identification of this dynamic network will help us understand the biological process in a systematic way. However, the construction of such a dynamic network is very challenging for a high-dimensional system. In this article we propose to use a set of ordinary differential equations (ODE), coupled with dimensional reduction by clustering and mixed-effects modeling techniques, to model the dynamic gene regulatory network (GRN). The ODE models allow us to quantify both positive and negative gene regulations as well as feedback effects of one set of genes in a functional module on the dynamic expression changes of the genes in another functional module, which results in a directed graph network. A five-step procedure, Clustering, Smoothing, regulation Identification, parameter Estimates refining and Function enrichment analysis (CSIEF) is developed to identify the ODE-based dynamic GRN. In the proposed CSIEF procedure, a series of cutting-edge statistical methods and techniques are employed, that include non-parametric mixed-effects models with a mixture distribution for clustering, nonparametric mixed-effects smoothing-based methods for ODE models, the smoothly clipped absolute deviation (SCAD)-based variable selection, and stochastic approximation EM (SAEM) approach for mixed-effects ODE model parameter estimation. The key step, the SCAD-based variable selection of the proposed procedure is justified by investigating its asymptotic properties and validated by Monte Carlo simulations. We apply the proposed method to identify the dynamic GRN for yeast cell cycle progression data. We are able to annotate the identified modules through function enrichment analyses. Some interesting biological findings are discussed. The proposed procedure is a promising tool for constructing a general dynamic GRN and more complicated dynamic networks.
Nikoloulopoulos, Aristidis K
2015-12-20
Diagnostic test accuracy studies typically report the number of true positives, false positives, true negatives and false negatives. There usually exists a negative association between the number of true positives and true negatives, because studies that adopt less stringent criterion for declaring a test positive invoke higher sensitivities and lower specificities. A generalized linear mixed model (GLMM) is currently recommended to synthesize diagnostic test accuracy studies. We propose a copula mixed model for bivariate meta-analysis of diagnostic test accuracy studies. Our general model includes the GLMM as a special case and can also operate on the original scale of sensitivity and specificity. Summary receiver operating characteristic curves are deduced for the proposed model through quantile regression techniques and different characterizations of the bivariate random effects distribution. Our general methodology is demonstrated with an extensive simulation study and illustrated by re-analysing the data of two published meta-analyses. Our study suggests that there can be an improvement on GLMM in fit to data and makes the argument for moving to copula random effects models. Our modelling framework is implemented in the package CopulaREMADA within the open source statistical environment R.
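The record above builds on bivariate random effects for sensitivity and specificity. As a minimal sketch (the GLMM special case, i.e. a Gaussian copula with normal margins, rather than the paper's general copula families), the following draws study-specific (sensitivity, specificity) pairs from correlated normal random effects on the logit scale; all parameter values are assumptions for illustration.

```python
import math
import random

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_sens_spec(n_studies, mu_se, mu_sp, tau, rho, rng):
    """Draw study-specific (sensitivity, specificity) pairs from bivariate
    normal random effects on the logit scale, correlation rho."""
    pairs = []
    for _ in range(n_studies):
        z1 = rng.gauss(0.0, 1.0)
        # Cholesky construction of a correlated standard normal.
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((inv_logit(mu_se + tau * z1), inv_logit(mu_sp + tau * z2)))
    return pairs

rng = random.Random(1)
# Negative rho encodes the sensitivity/specificity trade-off noted above.
pairs = sample_sens_spec(2000, mu_se=1.5, mu_sp=1.0, tau=0.6, rho=-0.5, rng=rng)
```

A summary ROC curve could then be traced by varying the threshold implicit in (mu_se, mu_sp); the paper derives such curves via quantile regression, which this sketch does not attempt.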
Wang, Yuanjia; Chen, Huaihou
2012-12-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we embed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over the bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than that of the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
Aoki, Yasunori; Nordgren, Rikard; Hooker, Andrew C
2016-03-01
As the importance of pharmacometric analysis increases, more and more complex mathematical models are introduced and computational error resulting from computational instability starts to become a bottleneck in the analysis. We propose a preconditioning method for non-linear mixed effects models used in pharmacometric analyses to stabilise the computation of the variance-covariance matrix. Roughly speaking, the method reparameterises the model with a linear combination of the original model parameters so that the Hessian matrix of the likelihood of the reparameterised model becomes close to an identity matrix. This approach will reduce the influence of computational error, for example rounding error, to the final computational result. We present numerical experiments demonstrating that the stabilisation of the computation using the proposed method can recover failed variance-covariance matrix computations, and reveal non-identifiability of the model parameters.
Murayama, Kou; Sakaki, Michiko; Yan, Veronica X.; Smith, Garry M.
2014-01-01
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are…
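The by-participant analysis described above computes a metacognitive-accuracy score, such as the Goodman-Kruskal gamma coefficient, separately for each participant. A minimal sketch of that per-participant computation (with made-up toy data, not from the paper):

```python
from itertools import combinations

def gamma_coefficient(judgments, outcomes):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant)
    over all item pairs; tied pairs count toward neither."""
    concordant = discordant = 0
    for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
        s = (j1 - j2) * (o1 - o2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    if concordant + discordant == 0:
        return float("nan")  # all pairs tied: gamma undefined
    return (concordant - discordant) / (concordant + discordant)

# By-participant analysis: one gamma per participant (hypothetical data);
# judgments are confidence ratings, outcomes are recall success (1/0).
participants = {
    "p1": ([4, 3, 2, 1], [1, 1, 0, 0]),   # perfectly concordant
    "p2": ([1, 2, 3, 4], [1, 1, 0, 0]),   # perfectly discordant
}
gammas = {pid: gamma_coefficient(j, o) for pid, (j, o) in participants.items()}
```

The per-participant values would then typically be averaged or submitted to a second-level test, which is exactly the step the mixed-effects alternative discussed in such papers is meant to replace.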
Chen, Yuh-Ing; Huang, Chi-Shen
2014-02-28
In a pharmacokinetic (PK) study under a 2×2 crossover design that involves both test and reference drugs, we propose a mixed-effects model for the drug concentration-time profiles obtained from subjects who receive different drugs at different periods. In the proposed model, the drug concentrations repeatedly measured from the same subject at different time points follow a multivariate generalized gamma distribution, and the drug concentration-time profiles are described by a compartmental PK model with between-subject and within-subject variations. We then suggest a bioequivalence test based on the estimated bioavailability parameters in the proposed mixed-effects model. The results of a Monte Carlo study further show that the proposed model-based bioequivalence test is not only better at maintaining its nominal level but also more powerful for detecting bioequivalence of the two drugs than the conventional bioequivalence test based on a non-compartmental analysis or one based on a mixed-effects model with a normal error variable. The application of the proposed model and test is illustrated using data sets from two PK studies.
robustlmm: An R Package for Robust Estimation of Linear Mixed-Effects Models
Manuel Koller
2016-12-01
Like any real-life data, data modeled by linear mixed-effects models often contain outliers or other contamination. Even a little contamination can drive the classic estimates far away from what they would be without it. At the same time, datasets that require mixed-effects modeling are often complex and large, which makes it difficult to spot contamination. Robust estimation methods aim to solve both problems: to provide estimates on which contamination has only little influence, and to detect and flag contamination. We introduce an R package, robustlmm, to robustly fit linear mixed-effects models. The package's functions and methods are designed to closely mirror those offered by lme4, the R package that implements classic linear mixed-effects model estimation in R. The robust estimation method in robustlmm is based on the random effects contamination model and the central contamination model. Contamination can be detected at all levels of the data. The estimation method does not make any assumption on the data's grouping structure except that the model parameters are estimable. robustlmm supports hierarchical and non-hierarchical (e.g., crossed) grouping structures. The robustness of the estimates and their asymptotic efficiency is fully controlled through the function interface. Individual parts (e.g., fixed effects and variance components) can be tuned independently. In this tutorial, we show how to fit robust linear mixed-effects models using robustlmm, how to assess the model fit, how to detect outliers, and how to compare different fits.
Functional-mixed effects models for candidate genetic mapping in imaging genetic studies.
Lin, Ja-An; Zhu, Hongtu; Ahn, Mihye; Sun, Wei; Ibrahim, Joseph G
2014-12-01
The aim of this paper is to develop a functional-mixed effects modeling (FMEM) framework for the joint analysis of high-dimensional imaging data in a large number of locations (called voxels) of a three-dimensional volume with a set of genetic markers and clinical covariates. Our FMEM is extremely useful for efficiently carrying out the candidate gene approaches in imaging genetic studies. FMEM consists of two novel components including a mixed effects model for modeling nonlinear genetic effects on imaging phenotypes by introducing the genetic random effects at each voxel and a jumping surface model for modeling the variance components of the genetic random effects and fixed effects as piecewise smooth functions of the voxels. Moreover, FMEM naturally accommodates the correlation structure of the genetic markers at each voxel, while the jumping surface model explicitly incorporates the intrinsically spatial smoothness of the imaging data. We propose a novel two-stage adaptive smoothing procedure to spatially estimate the piecewise smooth functions, particularly the irregular functional genetic variance components, while preserving their edges among different piecewise-smooth regions. We develop weighted likelihood ratio tests and derive their exact approximations to test the effect of the genetic markers across voxels. Simulation studies show that FMEM significantly outperforms voxel-wise approaches in terms of higher sensitivity and specificity to identify regions of interest for carrying out candidate genetic mapping in imaging genetic studies. Finally, FMEM is used to identify brain regions affected by three candidate genes including CR1, CD2AP, and PICALM, thereby hoping to shed light on the pathological interactions between these candidate genes and brain structure and function.
Rupšys, P. [Aleksandras Stulginskis University, Studenų g. 11, Akademija, Kaunas district, LT – 53361 Lithuania (Lithuania)
2015-10-28
A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, accounting for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and slope of the linear regression equation used for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather…
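The record above compares cubic and linear piecewise splines with varying knots. As a small sketch of the building block involved (not the authors' code), the following constructs a design row for a piecewise-linear, truncated-line spline basis; in the penalized-spline view of mixed models, the truncated terms are the ones treated as random effects. Knot placements here are hypothetical.

```python
def piecewise_linear_basis(t, knots):
    """Design row for a piecewise-linear (truncated line) spline:
    [1, t, (t - k1)+, (t - k2)+, ...] for age t and knot locations k."""
    return [1.0, t] + [max(0.0, t - k) for k in knots]

knots = [0.25, 0.5, 1.0, 2.0]           # hypothetical knot ages in years
row = piecewise_linear_basis(1.5, knots)  # design row for a child aged 1.5 y
```

Stacking such rows over all visits of all children gives the fixed/random design matrices a mixed-model fitter would consume; a cubic version replaces t and (t - k)+ with their cubed analogues.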
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Magezi, David A.
2015-01-01
Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
Identifying differentially methylated genes using mixed effect and generalized least square models
Yan Pearlly S
2009-12-01
Abstract Background DNA methylation plays an important role in the process of tumorigenesis. Identifying differentially methylated genes or the CpG islands (CGIs) associated with genes between two tumor subtypes is thus an important biological question. The methylation status of all CGIs in the whole genome can be assayed with differential methylation hybridization (DMH) microarrays. However, patient samples or cell lines are heterogeneous, so their methylation patterns may be very different. In addition, neighboring probes at each CGI are correlated. How these factors affect the analysis of DMH data is unknown. Results We propose a new method for identifying differentially methylated (DM) genes by identifying the associated DM CGI(s). At each CGI, we implement four different mixed effect and generalized least square models to identify DM genes between two groups. We compare the four models with a simple least square regression model to study the impact of incorporating random effects and correlations. Conclusions We demonstrate that the inclusion (or exclusion) of random effects and the choice of correlation structures can significantly affect the results of the data analysis. We also assess the false discovery rate of the different models using CGIs associated with housekeeping genes.
Review of: Mixed Effects Models and Extensions in Ecology with R
Royle, J. Andrew
2013-01-01
This is a review of the book "Mixed Effects Models and Extensions in Ecology with R" by Zuur, Ieno, Walker, Saveliev and Smith (2009, Springer). I was asked to review this book for The American Statistician in 2010. After I wrote the review, the invitation was revoked. This is the review.
Thorsted, Anders; Thygesen, Peter; Agersø, Henrik
2016-01-01
BACKGROUND AND PURPOSE: We aimed to develop a mechanistic mixed-effects pharmacokinetic (PK)-pharmacodynamic (PD) (PKPD) model for recombinant human growth hormone (rhGH) in hypophysectomized rats and to predict the human PKPD relationship. EXPERIMENTAL APPROACH: A non-linear mixed-effects model was developed from experimental PKPD studies of rhGH and effects of long-term treatment as measured by insulin-like growth factor 1 (IGF-1) and bodyweight gain in rats. Modelled parameter values were scaled to human values using the allometric approach with fixed exponents for PKs and unscaled for PDs … a clinically relevant biomarker, IGF-1, to a primary clinical end-point, growth/bodyweight gain. Scaling of the model parameters provided robust predictions of the human PKPD in growth hormone-deficient patients, including variability.
Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France
2012-05-20
Bioequivalence or interaction trials are commonly studied in a crossover design and can be analysed with nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or changing between periods. We use the expected standard errors of the treatment effect to compute the power for the Wald test of comparison or equivalence and the number of subjects needed for a given power. We perform various simulations mimicking crossover two-period trials to show the relevance of these developments. We then apply these developments to design a crossover pharmacokinetic study of amoxicillin in piglets and implement them in the new version 3.2 of the R function PFIM.
Leon, Andrew C; Heo, Moonseong
2009-01-15
Mixed-effects linear regression models have become more widely used for analysis of repeatedly measured outcomes in clinical trials over the past decade. There are formulae and tables for estimating sample sizes required to detect the main effects of treatment and the treatment by time interactions for those models. A formula is proposed to estimate the sample size required to detect an interaction between two binary variables in a factorial design with repeated measures of a continuous outcome. The formula is based, in part, on the fact that the variance of an interaction is fourfold that of the main effect. A simulation study examines the statistical power associated with the resulting sample sizes in a mixed-effects linear regression model with a random intercept. The simulation varies the magnitude (Δ) of the standardized main effects and interactions, the intraclass correlation coefficient (ρ), and the number (k) of repeated measures within-subject. The results of the simulation study verify that the sample size required to detect a 2 × 2 interaction in a mixed-effects linear regression model is fourfold that to detect a main effect of the same magnitude.
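The fourfold-variance fact above can be illustrated with the ordinary two-sample normal-approximation sample-size formula: an interaction contrast with fourfold variance behaves like a main effect of half the standardized magnitude, so it needs four times the sample size. This sketch deliberately omits the paper's repeated-measures terms involving ρ and k; the effect size Δ = 0.5 is an arbitrary illustration.

```python
import math
from statistics import NormalDist

def n_per_group(delta, alpha=0.05, power=0.80):
    """Normal-approximation n per group to detect a standardized
    mean difference delta with a two-sided two-sample test."""
    z = NormalDist().inv_cdf
    return math.ceil(2.0 * (z(1.0 - alpha / 2.0) + z(power)) ** 2 / delta ** 2)

delta = 0.5
n_main = n_per_group(delta)          # main effect of magnitude delta
# A 2x2 interaction of the same magnitude has fourfold variance, which is
# equivalent to halving the effective standardized effect:
n_inter = n_per_group(delta / 2.0)
```

With these inputs the interaction requires exactly four times the per-group sample size of the main effect, mirroring the simulation result reported in the abstract.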
Comparative pharmacokinetic analysis based on nonlinear mixed effect model
李禄金; 李宪星; 许羚; 吕映华; 陈君超; 郑青山
2011-01-01
Comparative pharmacokinetic (PK) analysis is carried out throughout drug development. The common approach estimates individual PK parameters and the 90% confidence intervals for the ratios of average parameters such as AUC and Cmax between treatments; these intervals are then compared with a pre-specified equivalence interval to determine whether the treatments are equivalent, providing a basis for rational dose adjustment. In many clinical circumstances, however, some or even all individuals can only be sparsely sampled, making individual estimation difficult with conventional non-compartmental analysis. In such cases, a nonlinear mixed effect model (NONMEM) can be applied to analyse the sparse data. In this article, we simulated a sparse-sampling design based on densely sampled data from a real comparative PK study. The sparse data were analysed with the NONMEM method and the original dense data with the statistical-moment (non-compartmental) method; the 90% confidence intervals of the parameters obtained by the two methods were compared using 1000 bootstrap resamples. The results show that the nonlinear mixed effect model handles sparse data reliably and agrees with the statistical-moment results, providing a reference for comparative PK studies of this kind.
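A minimal sketch of the 90% confidence-interval step described above, using a percentile bootstrap with 1000 resamples on the log scale. The AUC values are made up, and the sketch ignores both the model-based (NONMEM) estimation and the within-subject pairing of a real crossover analysis; it only illustrates the ratio-and-interval comparison.

```python
import math
import random
from statistics import mean

def bootstrap_ratio_ci(auc_test, auc_ref, n_boot=1000, level=0.90, seed=7):
    """Percentile bootstrap CI for the geometric-mean ratio of AUCs,
    computed on the log scale as in average-bioequivalence testing."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        bt = [rng.choice(auc_test) for _ in auc_test]   # resample each arm
        br = [rng.choice(auc_ref) for _ in auc_ref]
        ratios.append(math.exp(mean(map(math.log, bt)) - mean(map(math.log, br))))
    ratios.sort()
    lo = ratios[int(n_boot * (1 - level) / 2)]
    hi = ratios[int(n_boot * (1 + level) / 2) - 1]
    return lo, hi

test = [95.0, 102.0, 88.0, 110.0, 97.0, 105.0]   # hypothetical AUCs, test drug
ref  = [100.0, 98.0, 92.0, 108.0, 101.0, 99.0]   # hypothetical AUCs, reference
lo, hi = bootstrap_ratio_ci(test, ref)
equivalent = 0.80 < lo and hi < 1.25             # conventional equivalence bounds
```

The final line applies the usual 0.80-1.25 equivalence interval; a study would declare equivalence only if the whole 90% CI falls inside it.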
Empirical Likelihood for Mixed-effects Error-in-variables Model
Qiu-hua Chen; Ping-shou Zhong; Heng-jian Cui
2009-01-01
This paper mainly introduces the method of empirical likelihood and its applications to two different models. We discuss empirical likelihood inference on the fixed-effect parameters in mixed-effects models with errors in variables. We first consider a linear mixed-effects model with measurement errors in both fixed and random effects. We construct empirical likelihood confidence regions for the fixed-effects parameters and the mean parameters of the random effects. The limiting distribution of the empirical log-likelihood ratio at the true parameter is χ² with p + q degrees of freedom, where p and q are the dimensions of the fixed and random effects, respectively. We then discuss empirical likelihood inference in a semi-linear error-in-variables mixed-effects model. Under certain conditions, it is shown that the empirical log-likelihood ratio at the true parameter also converges to χ² with p + q degrees of freedom. Simulations illustrate that the proposed confidence region has a coverage probability closer to the nominal level than the normal-approximation-based confidence region.
A nonlinear mixed-effects model for degradation data obtained from in-service inspections
Yuan, X.-X. [Department of Civil and Environmental Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada); Pandey, M.D. [Department of Civil and Environmental Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada)], E-mail: mdpandey@uwaterloo.ca
2009-02-15
Monitoring degradation and predicting its progression using periodic inspection data are important to ensure the safety and reliability of engineering systems. Traditional regression models are inadequate for modeling periodic inspection data, as they ignore unit-specific random effects and potential correlation among repeated measurements. This paper presents an advanced nonlinear mixed-effects (NLME) model, widely adopted in the biostatistical literature, for modeling and predicting degradation in nuclear piping systems. The proposed model offers considerable improvement by reducing the variance associated with the degradation of a specific unit, which leads to more realistic estimates of risk.
Guangyi, Mei; Yujun, Sun; Hao, Xu; de-Miguel, Sergio
2015-01-01
A systematic evaluation of nonlinear mixed-effect taper models for volume prediction was performed. Of 21 taper equations with fewer than 5 parameters each, the best 4-parameter fixed-effect model according to fitting statistics was then modified by comparing its values for the parameters total height (H), diameter at breast height (DBH), and aboveground height (h) to modeling data. Seven alternative prediction strategies were compared using the best new equation in the absence of calibration data, which is often unavailable in forestry practice. The results of this study suggest that because calibration may sometimes be a realistic option, though it is rarely used in practical applications, one of the best strategies for improving the accuracy of volume prediction is the strategy with 7 calculated total heights of 3, 6 and 9 trees in the largest, smallest and medium-size categories, respectively. We cannot use the average trees or dominant trees for calculating the random parameter for further predictions. The method described here will allow the user to make the best choices of taper type and the best random-effect calculated strategy for each practical application and situation at tree level.
A multilevel nonlinear mixed-effects approach to model growth in pigs
Strathe, Anders Bjerring; Danfær, Allan Christian; Sørensen, H
2009-01-01
Growth functions have been used to predict market weight of pigs and maximize return over feed costs. This study was undertaken to compare 4 growth functions and methods of analyzing data, particularly one that considers nonlinear repeated measures. Data were collected from an experiment with 40 pigs maintained from birth to maturity and their BW measured weekly or every 2 wk up to 1,007 d. Gompertz, logistic, Bridges, and Lopez functions were fitted to the data and compared using information criteria. For each function, a multilevel nonlinear mixed effects model was employed because … Furthermore, studies should consider adding a continuous autoregressive process when analyzing nonlinear mixed models with repeated measures.
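Two of the four growth functions compared above have simple closed forms. As a sketch with hypothetical parameter values (these are illustrative, not the fitted estimates from the pig data), the Gompertz and logistic curves can be written as:

```python
import math

def gompertz(t, a, b, c):
    """Gompertz growth: BW(t) = a * exp(-b * exp(-c * t)); a is mature weight."""
    return a * math.exp(-b * math.exp(-c * t))

def logistic(t, a, b, c):
    """Logistic growth: BW(t) = a / (1 + b * exp(-c * t))."""
    return a / (1.0 + b * math.exp(-c * t))

# Hypothetical values for illustration only: mature weight 250 kg,
# shape parameter 4, rate 0.01 per day.
a, b, c = 250.0, 4.0, 0.01
growth_curve = [(t, gompertz(t, a, b, c)) for t in (0, 250, 500, 1000)]
```

In the multilevel NLME setting, parameters such as the mature weight a would carry pig-specific random effects, and the residuals around each pig's curve would follow the continuous autoregressive process the abstract recommends.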
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of nonlinear mixed-effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep nonlinear mixed-effect model were investigated. The nonlinear mixed-effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov-component FIM was weighted either equally or by the average probability of response being awake or asleep over the night, and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase […]
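Two ingredients named above can be sketched under assumed (not published) parameter forms: a logistic transition probability with a linear dose effect on the logit scale, and an element-wise probability-weighted sum of the two Markov-component FIMs. All names and values here are hypothetical:

```python
import math

def transition_prob(baseline_logit, dose, dose_slope=0.0):
    # Logistic transition probability with an additive linear dose
    # effect on the logit scale (illustrative parameterization only).
    return 1.0 / (1.0 + math.exp(-(baseline_logit + dose_slope * dose)))

def weighted_total_fim(fim_awake, fim_asleep, p_awake):
    # Weight each Markov-component FIM by the average probability of
    # the previous state and sum element-wise, mirroring FIM(total).
    p_asleep = 1.0 - p_awake
    return [[p_awake * a + p_asleep * s for a, s in zip(row_a, row_s)]
            for row_a, row_s in zip(fim_awake, fim_asleep)]
```

With p_awake = 0.5 this reduces to the equal-weighting scheme also considered in the study.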
Chrcanovic, B R; Kisch, J; Albrektsson, T; Wennerberg, A
2016-11-01
Recent studies have suggested that the insertion of dental implants in patients diagnosed with bruxism negatively affected implant failure rates. The aim of the present study was to investigate the association between bruxism and the risk of dental implant failure. This retrospective study is based on 2670 patients who received 10 096 implants at one specialist clinic. Implant- and patient-related data were collected. Descriptive statistics were used to describe the patients and implants. Multilevel mixed effects parametric survival analysis was used to test the association between bruxism and risk of implant failure, adjusting for several potential confounders. Criteria from a recent international consensus (Lobbezoo et al., J Oral Rehabil, 40, 2013, 2) and from the International Classification of Sleep Disorders (International classification of sleep disorders, revised: diagnostic and coding manual, American Academy of Sleep Medicine, Chicago, 2014) were used to define and diagnose the condition. The number of implants with information available for all variables totalled 3549, placed in 994 patients, with 179 implants reported as failures. The implant failure rates were 13.0% (24/185) for bruxers and 4.6% (155/3364) for non-bruxers (P < 0.001). The statistical model showed that bruxism was a statistically significant risk factor for implant failure (HR 3.396; 95% CI 1.314, 8.777; P = 0.012), as were implant length, implant diameter, implant surface, bone quantity D in relation to quantity A, bone quality 4 in relation to quality 1 (Lekholm and Zarb classification), smoking and the intake of proton pump inhibitors. It is suggested that bruxism may be associated with an increased risk of dental implant failure.
Bivariate Mixed Effects Analysis of Clustered Data with Large Cluster Sizes.
Zhang, Daowen; Sun, Jie Lena; Pieper, Karen
2016-10-01
Linear mixed effects models are widely used to analyze a clustered response variable. Motivated by a recent study to examine and compare the hospital length of stay (LOS) between patients undergoing percutaneous coronary intervention (PCI) and coronary artery bypass graft (CABG) from several international clinical trials, we proposed a bivariate linear mixed effects model for the joint modeling of clustered PCI and CABG LOSs, where each clinical trial is considered a cluster. Due to the large number of patients in some trials, commonly used commercial statistical software for fitting (bivariate) linear mixed models failed to run, since it could not allocate enough memory to invert large-dimensional matrices during the optimization process. We consider ways to circumvent the computational problem in maximum likelihood (ML) inference and restricted maximum likelihood (REML) inference. In particular, we developed an expectation-maximization (EM) algorithm for REML inference and presented an ML implementation using existing software. The new REML EM algorithm is easy to implement and computationally stable and efficient. With this REML EM algorithm, we could analyze the LOS data and obtain meaningful results.
Hae Kyung Im
2012-02-01
The International HapMap project has made publicly available extensive genotypic data on a number of lymphoblastoid cell lines (LCLs). Building on this resource, many research groups have generated a large amount of phenotypic data on these cell lines to facilitate genetic studies of disease risk or drug response. However, one problem that may reduce the usefulness of these resources is the biological noise inherent to cellular phenotypes. We developed a novel method, termed Mixed Effects Model Averaging (MEM), which pools data from multiple sources and generates an intrinsic cellular growth rate phenotype. This intrinsic growth rate was estimated for each of over 500 HapMap cell lines. We then examined the association of this intrinsic growth rate with gene expression levels and found that almost 30% (2,967 out of 10,748) of the genes tested were significant at an FDR of less than 10%. We probed further to demonstrate evidence of a genetic effect on intrinsic growth rate by determining a significant enrichment of growth-associated genes among genes targeted by top growth-associated SNPs (as eQTLs). The estimated intrinsic growth rates, as well as the strength of the associations with genetic variants and gene expression traits, are made publicly available through a cell-based pharmacogenomics database, PACdb. This resource should enable researchers to explore the mediating effects of proliferation rate on other phenotypes.
Wang, Wan-Lun; Lin, Tsung-I
2014-07-30
The multivariate nonlinear mixed-effects model (MNLMM) has emerged as an effective tool for modeling multi-outcome longitudinal data following nonlinear growth patterns. In the MNLMM framework, the random effects and within-subject errors are assumed to be normally distributed for mathematical tractability and computational simplicity. However, a serious departure from normality may cause a lack of robustness and subsequently invalidate inference. This paper presents a robust extension of the MNLMM obtained by assuming a joint multivariate t distribution for the random effects and within-subject errors, called the multivariate t nonlinear mixed-effects model. Moreover, a damped exponential correlation structure is employed to capture the extra serial correlation among irregularly observed multiple repeated measures. An efficient expectation conditional maximization algorithm coupled with a first-order Taylor approximation is developed for maximizing the complete pseudo-data likelihood function. Techniques for the estimation of random effects, imputation of missing responses and identification of potential outliers are also investigated. The methodology is motivated by a real data example on 161 pregnant women from a study in a private fertilization obstetrics clinic in Santiago, Chile, and is used to analyze these data.
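The damped exponential correlation (DEC) structure mentioned above has a compact closed form, corr(Δt) = ρ^(|Δt|^θ), with θ = 0 recovering compound symmetry and θ = 1 a continuous-time AR(1); intermediate θ values damp the decay between those extremes. A one-function sketch:

```python
def dec_corr(lag, rho, theta):
    # Damped exponential correlation: rho ** (|lag| ** theta).
    # theta = 0 -> compound symmetry (constant correlation rho),
    # theta = 1 -> continuous-time AR(1) decay rho ** |lag|.
    return rho ** (abs(lag) ** theta)
```

This single pair (ρ, θ) is what the ECM algorithm estimates alongside the fixed effects and the t degrees of freedom.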
Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models
A. Cabezas
2010-02-01
Sediment, Total Organic Carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15 y return interval) were examined at one reach of the Middle Ebro River (NE Spain) for elucidating spatial patterns. To achieve this goal, four areas with different geomorphological features and located within the study reach were examined by using artificial grass mats. Within each area, 1 m² study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. TOC, TN and particle-size composition of deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, site-specific effects and spatial autocorrelation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although other factors such as vegetation morphology should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework to deal with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.
Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models
Cabezas, A.; Angulo-Martínez, M.; Gonzalez-Sanchís, M.; Jimenez, J. J.; Comín, F. A.
2010-08-01
Sediment, Total Organic Carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15 y return interval) were examined at one reach of the Middle Ebro River (NE Spain) for elucidating spatial patterns. To achieve this goal, four areas with different geomorphological features and located within the study reach were examined by using artificial grass mats. Within each area, 1 m2 study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. TOC, TN and Particle-Size composition of deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, site-specific effects and spatial autocorrelation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although other factors such as vegetation pattern should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework to deal with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.
ZHOU Jie; TANG Aiping; FENG Hailin
2016-01-01
Statistical inference for generalized mixed-effects state space models (MESSM) is investigated when the random effects are unknown. Two filtering algorithms are designed, both based on the mixture Kalman filter. These algorithms are particularly useful when the longitudinal measurements are sparse. The authors also propose a globally convergent algorithm for parameter estimation of MESSM, which can be used to locate initial parameter values for local, but more efficient, algorithms. Simulation examples are carried out which validate the efficacy of the proposed approaches. A data set from a clinical trial is investigated, and a smaller mean square error is achieved compared to existing results in the literature.
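The mixture Kalman filter used by the authors builds on the ordinary Kalman recursion; a minimal scalar version of that basic building block (not the MESSM algorithm itself, and with all parameter names chosen here for illustration) can be sketched as:

```python
def kalman_1d(ys, a, c, q, r, x0, p0):
    # Scalar Kalman filter for the state space model
    #   x_t = a * x_{t-1} + w_t,   w_t ~ N(0, q)
    #   y_t = c * x_t     + v_t,   v_t ~ N(0, r)
    x, p = x0, p0
    filtered = []
    for y in ys:
        # predict step
        x, p = a * x, a * a * p + q
        # update step with Kalman gain k
        k = p * c / (c * c * p + r)
        x = x + k * (y - c * x)
        p = (1.0 - k * c) * p
        filtered.append(x)
    return filtered
```

With a near-zero measurement variance the filter tracks the observations; with a huge one it clings to the prior, the two limiting behaviors the mixture filter interpolates between across its components.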
Kinetic mixing effect in the 3-3-1-1 model
Dong, P V
2015-01-01
We show that the mixing effect of the neutral gauge bosons in the 3-3-1-1 model comes from two sources. The first is due to the 3-3-1-1 gauge symmetry breaking as usual, whereas the second results from the kinetic mixing between the gauge bosons of the U(1)_X and U(1)_N groups, which determine the electric charge and baryon-minus-lepton number, respectively. Such mixings modify the ρ parameter and the known couplings of the Z boson with fermions. The constraints that arise from flavor-changing neutral currents due to the gauge boson mixings and non-universal fermion generations are also given.
Vučićević, Katarina; Jovanović, Marija; Golubović, Bojana; Kovačević, Sandra Vezmar; Miljković, Branislava; Martinović, Žarko; Prostran, Milica
2015-02-01
The present study aimed to establish a population pharmacokinetic model for phenobarbital (PB), examining and quantifying the magnitude of PB interactions with other concomitantly used antiepileptic drugs, and to demonstrate its use for individualization of the PB dosing regimen in adult epileptic patients. In total, 205 PB concentrations were obtained during routine clinical monitoring of 136 adult epilepsy patients. PB steady-state concentrations were measured by homogeneous enzyme immunoassay. Nonlinear mixed effects modelling (NONMEM) was applied for data analysis and evaluation of the final model. According to the final population model, a significant determinant of apparent PB clearance (CL/F) was the daily dose of concomitantly given valproic acid (VPA). The typical value of PB CL/F for the final model was estimated at 0.314 l/h. Based on the final model, co-therapy with the usual VPA dose of 1000 mg/day resulted in an average decrease in PB CL/F of about 25%, while 2000 mg/day led to an average 50% decrease. The developed population PB model may be used to estimate individual CL/F for adult epileptic patients and could be applied to individualize the dosing regimen, taking into account the dose-dependent effect of concomitantly given VPA.
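The reported dose-dependent interaction can be expressed as a simple clearance function; the linear form below is an illustrative interpolation of the reported averages (25% decrease at 1000 mg/day VPA, 50% at 2000 mg/day), not the published covariate model:

```python
def pb_clearance(vpa_dose_mg_per_day):
    # Typical PB CL/F of 0.314 L/h, reduced by ~25% per 1000 mg/day
    # of concomitant VPA (linear sketch of the reported averages;
    # clamped at zero to keep the toy function physical).
    typical_clf = 0.314
    fraction = 1.0 - 0.25 * vpa_dose_mg_per_day / 1000.0
    return typical_clf * max(0.0, fraction)
```

Such a function is the kind of covariate relationship a clinician-facing dosing tool would evaluate before adjusting a PB regimen.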
Reduced Rank Mixed Effects Models for Spatially Correlated Hierarchical Functional Data
Zhou, Lan
2010-03-01
Hierarchical functional data are widely seen in complex studies where sub-units are nested within units, which in turn are nested within treatment groups. We propose a general framework of functional mixed effects model for such data: within unit and within sub-unit variations are modeled through two separate sets of principal components; the sub-unit level functions are allowed to be correlated. Penalized splines are used to model both the mean functions and the principal components functions, where roughness penalties are used to regularize the spline fit. An EM algorithm is developed to fit the model, while the specific covariance structure of the model is utilized for computational efficiency to avoid storage and inversion of large matrices. Our dimension reduction with principal components provides an effective solution to the difficult tasks of modeling the covariance kernel of a random function and modeling the correlation between functions. The proposed methodology is illustrated using simulations and an empirical data set from a colon carcinogenesis study. Supplemental materials are available online.
Madrasi, Kumpal; Chaturvedula, Ayyappa; Haberer, Jessica E; Sale, Mark; Fossler, Michael J; Bangsberg, David; Baeten, Jared M; Celum, Connie; Hendrix, Craig W
2016-12-06
Adherence is a major factor in the effectiveness of preexposure prophylaxis (PrEP) for HIV prevention. Modeling patterns of adherence helps to identify influential covariates of different types of adherence as well as to enable clinical trial simulation so that appropriate interventions can be developed. We developed a Markov mixed-effects model to understand the covariates influencing adherence patterns to daily oral PrEP. Electronic adherence records (date and time of medication bottle cap opening) from the Partners PrEP ancillary adherence study, with a total of 1147 subjects, were used. This study included once-daily dosing regimens of placebo, oral tenofovir disoproxil fumarate (TDF), and TDF in combination with emtricitabine (FTC), administered to HIV-uninfected members of serodiscordant couples. One-coin and first- to third-order Markov models were fit to the data using NONMEM 7.2. Model selection criteria included objective function value (OFV), Akaike information criterion (AIC), visual predictive checks, and posterior predictive checks. Covariates were included based on forward addition (α = 0.05) and backward elimination (α = 0.001). Markov models described the data better than one-coin models. A third-order Markov model gave the lowest OFV and AIC, but the simpler first-order model was used for covariate model building because no additional benefit on prediction of target measures was observed for higher-order models. Female sex and older age had a positive impact on adherence, whereas Sundays, sexual abstinence, and sex with a partner other than the study partner had a negative impact on adherence. Our findings suggest adherence interventions should consider the role of these factors.
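A first-order Markov adherence process of the kind described above, where today's dosing depends only on yesterday's, can be simulated in a few lines; the transition probabilities below are hypothetical, not estimates from the Partners PrEP data:

```python
import random

def simulate_adherence(n_days, p_take_after_take, p_take_after_miss, seed=0):
    # First-order Markov chain over daily dosing: state 1 = dose taken,
    # state 0 = dose missed; the transition probability depends only on
    # the previous day's state. Starts from a "taken" state.
    rng = random.Random(seed)
    state, history = 1, []
    for _ in range(n_days):
        p_take = p_take_after_take if state == 1 else p_take_after_miss
        state = 1 if rng.random() < p_take else 0
        history.append(state)
    return history
```

Because taking after taking (0.9) is more likely than taking after missing (0.5) in the example below, the chain produces the clustered miss patterns that distinguish Markov models from one-coin models.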
Mixing effects in postdischarge modeling of electric discharge oxygen-iodine laser experiments
Palla, Andrew D.; Carroll, David L.; Verdeyen, Joseph T.; Solomon, Wayne C.
2006-07-01
In an electric discharge oxygen-iodine laser, laser action at 1315 nm on the I(2P1/2) → I(2P3/2) transition of atomic iodine is obtained by a near-resonant energy transfer from O2(a1Δ), which is produced using a low-pressure electric discharge. The discharge production of atomic oxygen, ozone, and other excited species adds higher levels of complexity to the postdischarge kinetics which are not encountered in a classic, purely chemical O2(a1Δ) generation system. Mixing effects are also present. In this paper we present postdischarge modeling results obtained using a modified version of the BLAZE-II gas laser code. A 28-species, 105-reaction chemical kinetic reaction set for the postdischarge kinetics is presented. Calculations were performed to ascertain the impact of a two-stream mixing mechanism on the numerical model and to study gain as a function of reactant mass flow rates. The calculations were compared with experimental data. Agreement with experimental data was improved with the addition of the new kinetics and the mixing mechanism.
Interspecies mixed-effect pharmacokinetic modeling of penicillin G in cattle and swine.
Li, Mengjie; Gehring, Ronette; Tell, Lisa; Baynes, Ronald; Huang, Qingbiao; Riviere, Jim E
2014-08-01
Extralabel drug use of penicillin G in food-producing animals may cause an excess of residues in tissue, which has the potential to harm human health. Of all the antibiotics, penicillin G may have the greatest potential for producing allergic responses in the consumer of food animal products. There are, however, no population pharmacokinetic studies of penicillin G for food animals. The objective of this study was to develop a population pharmacokinetic model to describe the time-concentration data profile of penicillin G across two species. Data were collected from previously published pharmacokinetic studies in which several formulations of penicillin G were administered to diverse populations of cattle and swine. Liver, kidney, and muscle residue data were also used in this study. Compartmental models with first-order absorption and elimination were fit to plasma and tissue concentrations using a nonlinear mixed-effect modeling approach. A 3-compartment model with extra tissue compartments was selected to describe the pharmacokinetics of penicillin G. Typical population parameter estimates (interindividual variability) were central volumes of distribution of 3.45 liters (12%) and 3.05 liters (8.8%) and central clearance of 105 liters/h (32%) and 16.9 liters/h (14%) for cattle and swine, respectively, with peripheral clearance of 24.8 liters/h (13%) and 9.65 liters/h (23%) for cattle and 13.7 liters/h (85%) and 0.52 liters/h (40%) for swine. Body weight and age were the covariates in the final pharmacokinetic models. This study established a robust model of penicillin G for a large and diverse population of food-producing animals which could be applied to other antibiotics and species in future analyses.
Optimal composite scores for longitudinal clinical trials under the linear mixed effects model.
Ard, M Colin; Raghavan, Nandini; Edland, Steven D
2015-01-01
Clinical trials of chronic, progressive conditions use rate of change on continuous measures as the primary outcome measure, with slowing of progression on the measure as evidence of clinical efficacy. For clinical trials with a single prespecified primary endpoint, it is important to choose an endpoint with the best signal-to-noise properties to optimize statistical power to detect a treatment effect. Composite endpoints composed of a linear weighted average of candidate outcome measures have also been proposed. Composites constructed as simple sums or averages of component tests, as well as composites constructed using weights derived from more sophisticated approaches, can be suboptimal, in some cases performing worse than individual outcome measures. We extend recent research on the construction of efficient linearly weighted composites by establishing the often overlooked connection between trial design and composite performance under linear mixed effects model assumptions and derive a formula for calculating composites that are optimal for longitudinal clinical trials of known, arbitrary design. Using data from a completed trial, we provide example calculations showing that the optimally weighted linear combination of scales can improve the efficiency of trials by almost 20% compared with the most efficient of the individual component scales. Additional simulations and analytical results demonstrate the potential losses in efficiency that can result from alternative published approaches to composite construction and explore the impact of weight estimation on composite performance.
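The optimal linear composite under such assumptions is the classic signal-to-noise maximizer w ∝ Σ⁻¹β, where β holds the component treatment effects and Σ their covariance; this sketch hard-codes the 2-component case with purely illustrative numbers (the paper's formula additionally accounts for trial design):

```python
def optimal_weights(beta, cov):
    # Maximize (w'beta)^2 / (w'Sigma w); the solution is
    # w proportional to inv(Sigma) @ beta. 2x2 case, solved by hand.
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    w = [inv[0][0] * beta[0] + inv[0][1] * beta[1],
         inv[1][0] * beta[0] + inv[1][1] * beta[1]]
    total = sum(w)
    return [wi / total for wi in w]  # normalize weights to sum to 1
```

With equal effects and independent, equal-variance components this reduces to the simple average; unequal effects or correlated components shift weight toward the more informative scale.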
Julie A Simpson
The analysis of in vitro anti-malarial drug susceptibility testing is vulnerable to the effects of different statistical approaches and selection biases. These confounding factors were assessed with respect to pfmdr1 gene mutation and amplification in 490 clinical isolates. Two statistical approaches for estimating the drug concentration associated with 50% effect (EC50) were compared: the commonly used standard two-stage (STS) method, and nonlinear mixed-effects modelling. The in vitro concentration-effect relationships for chloroquine, mefloquine, lumefantrine and artesunate were derived from clinical isolates obtained from patients on the western border of Thailand. All isolates were genotyped for polymorphisms in the pfmdr1 gene. The EC50 estimates were similar for the two statistical approaches, but 15-28% of isolates in the STS method had a high coefficient of variation (>15%) for individual estimates of EC50, and these isolates had EC50 values that were 32 to 66% higher than isolates derived with more precision. In total, 41% (202/490) of isolates had amplification of pfmdr1, and single nucleotide polymorphisms were found in 50 (10%). Pfmdr1 amplification was associated with an increase in EC50 for mefloquine (139% relative increase in EC50 for 2 copies, 188% for 3+ copies), lumefantrine (82% and 75% for 2 and 3+ copies, respectively) and artesunate (63% and 127% for 2 and 3+ copies, respectively). In contrast, pfmdr1 mutation at codons 86 or 1042 was associated with an increase in chloroquine EC50 (44-48%). Sample size calculations showed that demonstrating an EC50 shift of 50% or more with 80% power would require 430 isolates if the prevalence was 10%, and 245 isolates if the prevalence was 20%. In conclusion, although nonlinear mixed-effects modelling did not demonstrate any major advantage for determining estimates of anti-malarial drug susceptibility, the method includes all isolates, thereby potentially improving confirmation of candidate […]
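EC50 enters through the usual sigmoid concentration-effect (Emax/Hill) relationship; a minimal sketch, with the Hill coefficient and concentrations purely illustrative rather than taken from the study:

```python
def hill_effect(conc, ec50, gamma=1.0):
    # Fraction of maximal drug effect at concentration conc:
    # E = conc^gamma / (ec50^gamma + conc^gamma).
    # By construction the effect is exactly 0.5 at conc == ec50.
    if conc <= 0.0:
        return 0.0
    return conc ** gamma / (ec50 ** gamma + conc ** gamma)
```

Both the STS method and the mixed-effects approach compared above amount to different ways of estimating ec50 (and gamma) for each isolate from noisy concentration-effect observations.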
Daichi Narushima
2016-03-01
Background: Spontaneous Reporting Systems (SRSs) are passive systems composed of reports of suspected Adverse Drug Events (ADEs), and are used for pharmacovigilance (PhV), namely, drug safety surveillance. Exploration of analytical methodologies to enhance SRS-based discovery will contribute to more effective PhV. In this study, we proposed a statistical modeling approach for SRS data to address heterogeneity by reporting time point. Furthermore, we applied this approach to analyze ADEs of incretin-based drugs such as DPP-4 inhibitors and GLP-1 receptor agonists, which are widely used to treat type 2 diabetes. Methods: SRS data were obtained from the Japanese Adverse Drug Event Report (JADER) database. Reported adverse events were classified according to the MedDRA High Level Terms (HLTs). A mixed effects logistic regression model was used to analyze the occurrence of each HLT. The model treated DPP-4 inhibitors, GLP-1 receptor agonists, hypoglycemic drugs, concomitant suspected drugs, age, and sex as fixed effects, while the quarterly period of reporting was treated as a random effect. Before application of the model, Fisher's exact tests were performed for all drug-HLT combinations. Mixed effects logistic regressions were performed for the HLTs that were found to be associated with incretin-based drugs. Statistical significance was determined by a two-sided p-value <0.01 or a 99% two-sided confidence interval. Finally, the models with and without the random effect were compared based on Akaike's Information Criterion (AIC), in which the model with the smaller AIC was considered satisfactory. Results: The analysis included 187,181 cases reported from January 2010 to March 2015. It showed that 33 HLTs, including pancreatic, gastrointestinal, and cholecystic events, were significantly associated with DPP-4 inhibitors or GLP-1 receptor agonists. In the AIC comparison, half of the HLTs reported with incretin-based drugs favored the random effect […]
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides a means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartment model with first-order input and nonlinear elimination to generate synthetic data in a simulation study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between deterministic and stochastic NiAc disposition models. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
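The stochastic one-compartment idea can be illustrated with an Euler-Maruyama simulation; for brevity this sketch uses linear (first-order) elimination rather than the nonlinear elimination in the article, and all parameter values are arbitrary:

```python
import math
import random

def simulate_sde_1cpt(dose, ka, ke, sigma, dt, n_steps, seed=1):
    # Euler-Maruyama integration of a one-compartment model with
    # first-order absorption (ka) from a gut depot, first-order
    # elimination (ke), and a Wiener noise term (sigma) on the
    # central amount, i.e. the "system noise" an SDE adds to an ODE.
    rng = random.Random(seed)
    gut, central, path = dose, 0.0, []
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment
        gut += -ka * gut * dt
        central += (ka * gut - ke * central) * dt + sigma * dw
        path.append(central)
    return path
```

Setting sigma to zero recovers the deterministic ODE trajectory, which is exactly the comparison the article exploits: discrepancies from observed data are then attributed entirely to measurement noise, inflating its estimate.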
Yang, B.W.; Zhang, H.; Han, B.; Zha, Y.D.; Shan, J.Q. [Xi' an Jiaotong Univ. (China). School of Nuclear Science and Technology
2016-07-15
The thermal-hydraulic characteristics of a mixing vane grid are largely dependent on the structure of key components, such as the strip, spring, dimple and weld nugget, as well as the mixing vane configuration. In this paper, several types of spacer grids with different dimple shapes are modeled under subcooled boiling conditions. Prior to the application of CFD to the dimple shape analysis, the mixing effects of spacer grids were studied. After the dimple shape analysis, the side-channel effect is discussed by comparing the simulation results of a 3 x 3 and a 5 x 5 spacer grid. The two-phase flow CFD models in this study are validated on a simple geometry, showing that the calculated void fraction is in good agreement with the experimental data. The dimple comparison shows that varying dimple structures can result in different temperatures, lateral velocities and void fraction distributions downstream of the spacer grids. Comparison of the two sizes of spacer grids demonstrates that the side channel generates a different flow distribution pattern in the center channel.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
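The MCMC machinery underlying the proposed sampling-window search can be illustrated with its simplest ingredient, a random-walk Metropolis sampler on a standard normal target (the actual method operates on design criteria for nonlinear mixed effects models, not this toy density):

```python
import math
import random

def metropolis_normal(n_samples, step=1.0, seed=7):
    # Random-walk Metropolis sampler targeting a standard normal.
    # Proposals are uniform perturbations; acceptance uses the ratio
    # pi(prop) / pi(x) = exp((x^2 - prop^2) / 2) for pi = N(0, 1).
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n_samples):
        prop = x + rng.uniform(-step, step)
        accept_prob = math.exp(min(0.0, 0.5 * (x * x - prop * prop)))
        if rng.random() < accept_prob:
            x = prop
        chain.append(x)
    return chain
```

Once such a chain reaches its stationary distribution, intervals of high posterior mass around the optimal design points play the role of the time-sensitive sampling windows described above.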
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling, although they violate the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
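The likelihood construction described above can be illustrated with a toy example. The sketch below is a hypothetical scalar linear state-space model, not the R package's actual implementation: it shows how a Kalman filter's one-step predictions accumulate a Gaussian log-likelihood, the quantity the FOCE approximation then combines across individuals. For a linear model the Kalman filter and the Extended Kalman Filter coincide.

```python
import math

def kalman_loglik(y, a, sigma_w, sigma_e, x0=0.0, p0=1.0):
    """Log-likelihood of observations y under the scalar linear model
    x[k+1] = a*x[k] + w  (system/Wiener noise, variance sigma_w**2)
    y[k]   = x[k] + e    (observation noise, variance sigma_e**2),
    accumulated from the filter's one-step predictions."""
    x, p, ll = x0, p0, 0.0
    for yk in y:
        # one-step prediction of the observation and its variance
        s = p + sigma_e**2
        innov = yk - x
        ll += -0.5 * (math.log(2 * math.pi * s) + innov**2 / s)
        # measurement update
        k = p / s
        x = x + k * innov
        p = (1 - k) * p
        # time update: this is where the Wiener system noise enters
        x = a * x
        p = a * a * p + sigma_w**2
    return ll
```

Each observation contributes through its innovation (prediction error) and the innovation variance, so both the system noise and the observation noise shape the likelihood, which is the property the SDE extension exploits.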
Francisco Marco-Rius
Full Text Available Fish growth is commonly used as a proxy for fitness but this is only valid if individual growth variation can be interpreted in relation to conspecifics' performance. Unfortunately, assessing individual variation in growth rates is problematic under natural conditions because subjects typically need to be marked, repeated measurements of body size are difficult to obtain in the field, and recaptures may be limited to a few time events which will generally vary among individuals. The analysis of consecutive growth rings (circuli) found on scales and other hard structures offers an alternative to mark and recapture for examining individual growth variation in fish and other aquatic vertebrates where growth rings can be visualized, but accounting for autocorrelations and seasonal growth stanzas has proved challenging. Here we show how mixed-effects modelling of scale growth increments (inter-circuli spacing) can be used to reconstruct the growth trajectories of sea trout (Salmo trutta) and correctly classify 89% of individuals into early or late seaward migrants (smolts). Early migrants grew faster than late migrants during their first year of life in freshwater in two natural populations, suggesting that migration into the sea was triggered by ontogenetic (intrinsic) drivers, rather than by competition with conspecifics. Our study highlights the profound effects that early growth can have on age at migration of a paradigmatic fish migrant and illustrates how the analysis of inter-circuli spacing can be used to reconstruct the detailed growth of individuals when these cannot be marked or are only caught once.
Tango, Toshiro
2017-02-13
Tango (Biostatistics 2016) proposed a new repeated measures design called the S:T repeated measures design, combined with generalized linear mixed-effects models and sample size calculations for a test of the average treatment effect that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. In this article, we present formulas for calculating power and sample sizes for a test of the average treatment effect allowing for missing data within the framework of the S:T repeated measures design with a continuous response variable combined with a linear mixed-effects model. Examples are provided to illustrate the use of these formulas.
Ernest II, Charles Steven
2013-01-01
Despite the growing promise of pharmaceutical research, inferior experimentation or interpretation of data can inhibit breakthrough molecules from finding their way out of research institutions and reaching patients. This thesis provides evidence that better characterization of pre-clinical and clinical data can be accomplished using non-linear mixed effect modeling (NLMEM) and more effective experiments can be conducted using optimal design (OD). To demonstrate applicability of NLMEM and OD...
Hao Xu
2014-04-01
Full Text Available Tree height and diameter at breast height are two important forest factors. The best of 23 height-diameter equations was selected as the basic model to fit the height-diameter relationships of Chinese fir with Nonlinear Mixed Effects (NLME) models with one-level (site or plot) effects and nested two-level (site and plot) effects. The best model was chosen by smaller Bias and RMSE and larger adjusted R2. Then the best random-effects combinations for the NLME models were determined by AIC, BIC and -2LL. The results showed that the basic model with three random-effects parameters (Φ1, Φ2 and Φ3) was the best mixed model. The nested two-level NLME model incorporating a heteroscedasticity structure (power function) had higher predictive accuracy and significantly improved model performance (LRT = 469.43, p < 0.0001). The NLME model allows accurate estimation of the height-diameter relationships of Chinese fir and provided better height predictions than models using only fixed-effects parameters.
The transition model test for serial dependence in mixed-effects models for binary data
Breinegaard, Nina; Rabe-Hesketh, Sophia; Skrondal, Anders
2016-01-01
Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model...... the targeted root mean squared error of approximation (TRSMEA) as a measure of the population misfit due to serial dependence....
The structure of self-reported emotional experiences : A mixed-effects Poisson factor model
Bockenholt, U; Kamakura, WA; Wedel, M
2003-01-01
Multivariate count data are commonly analysed by using Poisson distributions with varying intensity parameters, resulting in a random-effects model. In the analysis of a data set on the frequency of different emotion experiences we find that a Poisson model with a single random effect does not yield
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions would simply be treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity of applying, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
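The pitfall of pooling repeated measures into a single Pearson correlation can be seen in a minimal, entirely hypothetical example: within every subject the two measures are negatively related, yet pooling across subjects with different baselines yields a strong positive correlation. The between-subject baseline differences are exactly what an LME random intercept would absorb.

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Three hypothetical subjects; within each, y DECREASES with x,
# but subjects with a higher baseline x also have a higher baseline y.
subjects = {
    "s1": ([0, 1, 2], [2, 1, 0]),
    "s2": ([3, 4, 5], [5, 4, 3]),
    "s3": ([6, 7, 8], [8, 7, 6]),
}

pooled_x = [v for xs, _ in subjects.values() for v in xs]
pooled_y = [v for _, ys in subjects.values() for v in ys]
print(round(pearson(pooled_x, pooled_y), 3))  # → 0.8 (pooled: strongly positive)
print(pearson(*subjects["s1"]))               # → -1.0 (within-subject: negative)
```

The pooled coefficient has the wrong sign precisely because the listening-condition-style repeated measures were treated as independent observations.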
Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain
Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises
2015-01-01
Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156
Quantifying Variation in Gait Features from Wearable Inertial Sensors Using Mixed Effects Models.
Cresswell, Kellen Garrison; Shin, Yongyun; Chen, Shanshan
2017-02-25
The emerging technology of wearable inertial sensors has shown its advantages in collecting continuous longitudinal gait data outside laboratories. This freedom also presents challenges in collecting high-fidelity gait data. In the free-living environment, without constant supervision from researchers, sensor-based gait features are susceptible to variation from confounding factors such as gait speed and mounting uncertainty, which are challenging to control or estimate. This paper is one of the first attempts in the field to tackle such challenges using statistical modeling. By accepting the uncertainties and variation associated with wearable sensor-based gait data, we shift our efforts from detecting and correcting those variations to modeling them statistically. From gait data collected on one healthy, non-elderly subject during 48 full-factorial trials, we identified four major sources of variation, and quantified their impact on one gait outcome (range per cycle) using a random effects model and a fixed effects model. The methodology developed in this paper lays the groundwork for a statistical framework to account for sources of variation in wearable gait data, thus facilitating informative statistical inference for free-living gait analysis.
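As a rough illustration of the random-effects idea (not the authors' actual model), a balanced one-way layout lets between-source and within-source variance components be estimated by the classical ANOVA method of moments. The numbers below are hypothetical stand-ins for a gait feature measured under repeated trials at several levels of one variation source.

```python
def variance_components(groups):
    """Method-of-moments (one-way ANOVA) estimates of between- and
    within-group variance for a balanced design: `groups` is a list of
    equally sized lists of measurements."""
    k = len(groups)          # number of groups (levels of the source)
    n = len(groups[0])       # trials per group
    grand = sum(sum(g) for g in groups) / (k * n)
    ms_within = sum(
        sum((x - sum(g) / n) ** 2 for x in g) for g in groups
    ) / (k * (n - 1))
    ms_between = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    var_within = ms_within
    var_between = max((ms_between - ms_within) / n, 0.0)  # truncate at 0
    return var_between, var_within

# Hypothetical feature values: 3 levels of a variation source, 3 trials each
vb, vw = variance_components([[10, 11, 9], [20, 21, 19], [30, 31, 29]])
print(round(vw, 3), round(vb, 3))  # → 1.0 99.667
```

Here nearly all the variation is attributed to the grouping factor, which is the kind of decomposition the paper uses to rank sources of variation.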
Jolling, Koen; Perez Ruixo, Juan Jose; Hemeryck, Alex; Vermeulen, An; Greway, Tony
2005-04-01
The aim of this study was to develop a population pharmacokinetic model for interspecies allometric scaling of pegylated r-HuEPO (PEG-EPO) pharmacokinetics to man. A total of 927 serum concentrations from 193 rats, 6 rabbits, 34 monkeys, and 9 dogs obtained after a single dose of PEG-EPO, administered by the i.v. (dose range: 12.5-550 microg/kg) and s.c. (dose range: 12.5-500 microg/kg) routes, were pooled in this analysis. An open two-compartment model with first-order absorption and lag time (Tlag) and linear elimination from the central compartment was fitted to the data using the NONMEM V software. Body weight (WT) was used as a scaling factor and the effect of brain weight (BW), sex, and pregnancy status on the pharmacokinetic parameters was investigated. The final model was evaluated by means of a non-parametric bootstrap analysis and used to predict the PEG-EPO pharmacokinetic parameters in healthy male subjects. The systemic clearance (CL) in males was estimated to be 4.08·WT^1.030 × BW^-0.345 ml/h. In females, the CL was 90.7% of the CL in males. The volumes of the central (Vc) and the peripheral (Vp) compartment were characterized as 57.8·WT^0.959 ml and 48.1·WT^1.150 ml, respectively. Intercompartmental flow was estimated at 2.32·WT^0.930 ml/h. The absorption rate constant (Ka) was estimated at 0.0538·WT^-0.149. The absolute s.c. bioavailability F was calculated at 52.5, 80.2, and 49.4% in rat, monkey, and dog, respectively. The interindividual variability in the population pharmacokinetic parameters was fairly low. The non-parametric bootstrap confirmed the accuracy of the NONMEM estimates. The mean model-predicted pharmacokinetic parameters in healthy male subjects of 70 kg were estimated at: CL: 26.2 ml/h; Vc: 3.6 L; Q: 286 L/h; Vp: 6.9 L, and Ka: 0.031 h-1. The population pharmacokinetic model developed was appropriate to describe the time course of PEG-EPO serum concentrations and their variability in different species. The model predicted pharmacokinetics of PEG-EPO in
Nonlinear mixed effects modeling of gametocyte carriage in patients with uncomplicated malaria
Little Francesca
2010-02-01
Full Text Available Abstract Background Gametocytes are the sexual form of the malaria parasite and the main agents of transmission. While there are several factors that influence host infectivity, the density of gametocytes appears to be the best single measure that is related to the human host's infectivity to mosquitoes. Despite the obviously important role that gametocytes play in the transmission of malaria and spread of anti-malarial resistance, it is common to estimate gametocyte carriage indirectly based on asexual parasite measurements. The objective of this research was to directly model observed gametocyte densities over time, during the primary infection. Methods Of 447 patients enrolled in sulphadoxine-pyrimethamine therapeutic efficacy studies in South Africa and Mozambique, a subset of 103 patients who had no gametocytes pre-treatment and who had at least three non-zero gametocyte densities over the 42-day follow up period were included in this analysis. Results A variety of different functions were examined. A modified version of the critical exponential function was selected for the final model given its robustness across different datasets and its flexibility in assuming a variety of different shapes. Age, site, initial asexual parasite density (logged to base 10), and an empirical patient category were the co-variates that were found to improve the model. Conclusions A population nonlinear modeling approach seems promising and produced a flexible function whose estimates were stable across various different datasets. Surprisingly, dihydrofolate reductase and dihydropteroate synthetase mutation prevalence did not enter the model. This is probably related to a lack of power (quintuple mutations n = 12) and informative censoring; treatment failures were withdrawn from the study and given rescue treatment, usually prior to completion of follow up.
Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie
2012-09-01
Estimation methods for nonlinear mixed-effects modelling have considerably improved over the last decades. Nowadays, several algorithms implemented in different software are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid E(max) model, with varying sigmoidicity and residual error models. One hundred simulated datasets for each scenario were generated. One hundred individuals with observations at four doses constituted the rich design and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, using altered values. Results were examined through relative root mean squared error (RRMSE) of the estimates. With true initial conditions, full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving material to modellers to identify suitable approaches based on an accuracy-versus-runtime trade-off.
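A minimal sketch of the simulation setup compared above: a sigmoid Emax dose-response evaluated at a handful of doses, with a naive grid search over ED50 standing in for the FOCE/SAEM machinery the study benchmarks. All parameter values are hypothetical; on noise-free data the search recovers the generating ED50 exactly.

```python
def emax(dose, e0, emax_, ed50, h):
    """Sigmoid Emax dose-response: E = E0 + Emax * D^h / (ED50^h + D^h)."""
    return e0 + emax_ * dose**h / (ed50**h + dose**h)

# Noise-free responses from a hypothetical "true" parameter set
true = dict(e0=1.0, emax_=10.0, ed50=4.0, h=2.0)
doses = [0.5, 1, 2, 4, 8, 16]
obs = [emax(d, **true) for d in doses]

# Naive least-squares grid search over ED50 only, other parameters held fixed
candidates = [i / 10 for i in range(5, 101)]  # 0.5 .. 10.0
best = min(
    candidates,
    key=lambda c: sum(
        (y - emax(d, true["e0"], true["emax_"], c, true["h"])) ** 2
        for d, y in zip(doses, obs)
    ),
)
print(best)  # → 4.0 (recovers the generating ED50 on noise-free data)
```

Real estimation methods differ precisely in how they handle the random effects and residual noise that this deterministic toy omits, which is what drives the RRMSE differences the study reports.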
Xu Hao
Full Text Available A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook. in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An Ordinary Linear Least Squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by the Akaike's information criterion, the Bayesian information criterion and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
Beloconi, Anton; Benas, Nikolaos; Chrysoulakis, Nektarios; Kamarianakis, Yiannis
2008-11-01
Linear mixed effects models were developed for the estimation of the average daily Particulate Matter (PM) concentration spatial distribution over the area of Greater London (UK). Both fine (PM2.5) and coarse (PM10) concentrations were predicted for the 2002-2012 time period, based on satellite data. The latter included Aerosol Optical Thickness (AOT) at 3×3 km spatial resolution, as well as the Surface Relative Humidity, Surface Temperature and K-Index derived from the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor. For a meaningful interpretation of the association among these variables, all data were homogenized with regard to spatial support and geographic projection, thus addressing the change of support problem and leading to a valid statistical inference. To this end, spatial (2D) and spatio-temporal (3D) kriging techniques were applied to in-situ particulate matter concentrations and the leave-one-station-out cross-validation was performed on a daily level to gauge the quality of the predictions. Satellite-derived covariates displayed clear seasonal patterns; in order to work with data which is stationary in mean, for each covariate, deviations from its estimated annual profiles were computed using nonlinear least squares and nonlinear absolute deviations. High-resolution land-cover and morphology static datasets were additionally incorporated in the analysis in order to catch the effects of nearby emission sources and sequestration sites. For pairwise comparisons of the particulate matter concentration means at distinct land-cover classes, the pairwise comparisons method for unequal sample sizes, known as Tukey's method, was performed. The use of satellite-derived products allowed better assessment of space-time interactions of PM, since these daily spatial measurements were able to capture differences in PM concentrations between grid cells, while the use of high-resolution land-cover and morphology static datasets allowed accounting for
Hao Xu
Full Text Available An individual-tree diameter growth model was developed for Cunninghamia lanceolata in Fujian province, southeast China. Data were obtained from 72 plantation-grown China-fir trees in 24 single-species plots. Ordinary non-linear least squares regression was used to choose the best base model from among 5 theoretical growth equations; selection criteria were the smallest absolute mean residual and root mean square error and the largest adjusted coefficient of determination. To account for autocorrelation in the repeated-measures data, we developed one-level and nested two-level nonlinear mixed-effects (NLME) models, constructed on the selected base model; the NLME models incorporated random effects of the tree and plot. The best random-effects combinations for the NLME models were identified by Akaike's information criterion, Bayesian information criterion and -2 log-likelihood. Heteroscedasticity was reduced with two residual variance functions, a power function and an exponential function. The autocorrelation was addressed with three residual autocorrelation structures: a first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and a compound symmetry structure (CS). The one-level (tree) NLME model performed best. Independent validation data were used to test the performance of the models and to demonstrate the advantage of calibrating the NLME models.
A nonlinear mixed-effects model for simultaneous smoothing and registration of functional data
Raket, Lars Lau; Sommer, Stefan Horst; Markussen, Bo
2014-01-01
We consider misaligned functional data, where data registration is necessary for proper statistical analysis. This paper proposes to treat misalignment as a nonlinear random effect, which makes simultaneous likelihood inference for horizontal and vertical effects possible. By simultaneously fitting...
Higgs couplings and new signals from Flavon-Higgs mixing effects within multi-scalar models
Diaz-Cruz, J. Lorenzo; Saldaña-Salazar, Ulises J.
2016-12-01
Testing the properties of the Higgs particle discovered at the LHC and searching for new physics signals are some of the most important tasks of Particle Physics today. Current measurements of the Higgs couplings to fermions and gauge bosons seem consistent with the Standard Model, and when taken as a function of the particle mass, they should lie on a single line. However, in models with an extended Higgs sector the diagonal Higgs couplings to up-quarks, down-quarks and charged leptons could lie on different lines, while non-diagonal flavor-violating Higgs couplings could appear too. We describe these possibilities within the context of multi-Higgs doublet models that employ the Froggatt-Nielsen (FN) mechanism to generate the Yukawa hierarchies. Furthermore, one of the doublets can be chosen to be of the inert type, which provides a viable dark matter candidate. The mixing of the Higgs doublets with the flavon field can provide plenty of interesting signals, including: i) small corrections to the couplings of the SM-like Higgs, ii) exotic signals from the flavon fields, iii) new signatures from the heavy Higgs bosons. These aspects are studied within a specific model with 3 + 1 Higgs doublets and a singlet FN field. Constraints on the model are derived from the study of K and D mixing and the Higgs search at the LHC. Finally, the implications of these constraints for the FCNC top decay t → ch are also presented.
An Overview of Mixed-Effects Statistical Models for Second Language Researchers
Cunnings, Ian
2012-01-01
As in any field of scientific inquiry, advancements in the field of second language acquisition (SLA) rely in part on the interpretation and generalizability of study findings using quantitative data analysis and inferential statistics. While statistical techniques such as ANOVA and t-tests are widely used in second language research, this review…
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean we also assume that the covariance matrix depends on covariates and random effects. This allows us to explore whether the covariance structure depends on the values of the higher levels and, as such, models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher-level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on not-recorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
Rathbun, Stephen L; Shiffman, Saul
2016-03-01
Cigarette smoking is a prototypical example of a recurrent event. The pattern of recurrent smoking events may depend on time-varying covariates including mood and environmental variables. Fixed effects and frailty models for recurrent events data assume that smokers have a common association with time-varying covariates. We develop a mixed effects version of a recurrent events model that may be used to describe variation among smokers in how they respond to those covariates, potentially leading to the development of individual-based smoking cessation therapies. Our method extends the modified EM algorithm of Steele (1996) for generalized mixed models to recurrent events data with partially observed time-varying covariates. It is offered as an alternative to the method of Rizopoulos, Verbeke, and Lesaffre (2009), who extended Steele's (1996) algorithm to a joint model for the recurrent events data and time-varying covariates. Our approach does not require a model for the time-varying covariates, but instead assumes that the time-varying covariates are sampled according to a Poisson point process with known intensity. Our methods are well suited to data collected using Ecological Momentary Assessment (EMA), a method of data collection widely used in the behavioral sciences to collect data on emotional state and recurrent events in the everyday environments of study subjects using electronic devices such as Personal Digital Assistants (PDAs) or smart phones.
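The between-subject heterogeneity that motivates the mixed-effects extension can be mimicked with a small simulation (hypothetical parameters, standard library only): giving each subject a multiplicative gamma random effect on a baseline Poisson event rate produces overdispersed event counts, a pattern a common-rate fixed-effects model cannot reproduce.

```python
import random

def simulate_counts(n_subjects=200, base_rate=1.0, frailty_shape=2.0,
                    T=5.0, seed=1):
    """Recurrent events with a subject-level multiplicative random effect
    (gamma frailty): each subject's events form a homogeneous Poisson
    process with rate frailty_i * base_rate on the interval [0, T]."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_subjects):
        frailty = rng.gammavariate(frailty_shape, 1.0 / frailty_shape)  # mean 1
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(frailty * base_rate)  # inter-event gap
            if t > T:
                break
            n += 1
        counts.append(n)
    return counts

counts = simulate_counts()
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# The frailty induces overdispersion: the count variance exceeds the mean,
# unlike a plain Poisson process where the two would be equal.
print(var > mean)
```

A time-varying-covariate version would modulate each subject's rate over time, but the subject-level random effect shown here is the core of the "mixed effects" part.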
Fu, Liyong; Zhang, Huiru; Lu, Jun; Zang, Hao; Lou, Minghua; Wang, Guangxing
2015-01-01
In this study, an individual tree crown ratio (CR) model was developed with a data set from a total of 3134 Mongolian oak (Quercus mongolica) trees within 112 sample plots allocated in Wangqing Forest Bureau of northeast China. Because of high correlation among the observations taken from the same sampling plots, the random effects at levels of both blocks defined as stands that have different site conditions and plots were taken into account to develop a nested two-level nonlinear mixed-effect model. Various stand and tree characteristics were assessed to explore their contributions to improvement of model prediction. Diameter at breast height, plot dominant tree height and plot dominant tree diameter were found to be significant predictors. Exponential model with plot dominant tree height as a predictor had a stronger ability to account for the heteroskedasticity. When random effects were modeled at block level alone, the correlations among the residuals remained significant. These correlations were successfully reduced when random effects were modeled at both block and plot levels. The random effects from the interaction of blocks and sample plots on tree CR were substantially large. The model that took into account both the block effect and the interaction of blocks and sample plots had higher prediction accuracy than the one with the block effect and population average considered alone. Introducing stand density into the model through dummy variables could further improve its prediction. This implied that the developed method for developing tree CR models of Mongolian oak is promising and can be applied to similar studies for other tree species.
Kaneda, Kotaro; Han, Tae-Hyung
2009-09-01
Fentanyl is a commonly used analgesic and sedative for burned patients in the operating theater as well as in burn care units. The aim of this study was to characterize the population pharmacokinetics of fentanyl in burn patients and to identify clinically significant covariates. Twenty adults, aged 37+/-3 years, with 49+/-4% (mean+/-S.E.) total body surface area burn, were enrolled at 17+/-3 days after the injury. Twenty non-burn adults served as controls. After an intravenous bolus of 200 mcg fentanyl, plasma concentrations were sequentially determined up to 4.5 h. Concentration-time profiles were subjected to non-linear mixed effects modeling. Cardiac indices were estimated with an esophageal Doppler monitor. Burned patients had a higher cardiac index than the non-burned controls. A three-compartment model gave the best fit. The volumes of distribution were considerably expanded in all three compartments (27.9 L vs. 63.4 L, 64.7 L vs. 92.9 L, 153 L vs. 301 L, respectively) compared to the non-burned controls. BURN was the single most important covariate significantly improving the model. The primary effect of burn trauma on fentanyl pharmacokinetics is substantially expanded volumes of distribution, i.e., a dilutional effect. The difference seen in simulations, however, was insufficient to explain the augmented resistance to fentanyl, implying the importance of titrating analgesics to clinical effect.
Ilker Ercanli
2015-06-01
Full Text Available Diameter at breast height (DBH) is the simplest, most common and most important tree dimension in forest inventory and is closely correlated with wood volume, height and biomass. In this study, a number of linear and nonlinear models predicting diameter at breast height from stump diameter were developed and evaluated for Oriental beech (Fagus orientalis Lipsky) stands located in the forest region of Ayancık, in the northeast of Turkey. A set of 1,501 pairs of diameter at breast height-stump diameter measurements, originating from 70 sample plots of even-aged Oriental beech stands, was used in this study. About 80 % of the total data (1,160 trees in 55 sample plots) was used to fit the linear and nonlinear model parameters; the remaining 341 trees in 15 sample plots were randomly reserved for model validation and calibration response. The power model was found to produce the most satisfactory fit, with Adjusted Coefficient of Determination R2adj (0.990), Root Mean Square Error RMSE (1.25), Akaike’s Information Criterion AIC (3820.5), Schwarz’s Bayesian Information Criterion BIC (3837.2), and Absolute Bias (1.25). The nonlinear mixed-effects modeling approach for the power model, with R2adj (0.993), AIC (3598), BIC (3610.1), Absolute Bias (0.73) and RMSE (1.04), provided much better fitting and more precise predictions of DBH from stump diameter than the conventional nonlinear fixed-effects model structure. The calibration response based on DBH and stump diameter measurements of the four largest trees in each calibration sample plot produced the greatest percentage reductions in Bias (-5.31 %) and RMSE (-6.30 %).
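The log-log least-squares backbone of such a power model is easy to sketch in stdlib Python. The numbers below are hypothetical, not the Ayancık data, and the plot-level random parameters of the mixed-effects fit are omitted:

```python
import math

def fit_power(stumps, dbhs):
    """Fit dbh = a * stump**b by ordinary least squares on the
    log-log scale (a common fixed-effects starting point before
    adding random plot effects in a mixed-effects fit)."""
    xs = [math.log(s) for s in stumps]
    ys = [math.log(d) for d in dbhs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical stump-diameter / DBH pairs (cm)
stumps = [12.0, 18.0, 25.0, 33.0, 41.0]
dbhs = [9.5, 14.8, 21.0, 28.2, 35.5]
a, b = fit_power(stumps, dbhs)
```

Random plot effects would then be layered on a and/or b, which is what the mixed-effects variant of the power model does.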
Almquist, Joachim; Bendrioua, Loubna; Adiels, Caroline Beck; Goksör, Mattias; Hohmann, Stefan; Jirstrand, Mats
2015-01-01
The last decade has seen a rapid development of experimental techniques that allow data collection from individual cells. These techniques have enabled the discovery and characterization of variability within a population of genetically identical cells. Nonlinear mixed effects (NLME) modeling is an established framework for studying variability between individuals in a population, frequently used in pharmacokinetics and pharmacodynamics, but its potential for studies of cell-to-cell variability in molecular cell biology is yet to be exploited. Here we take advantage of this novel application of NLME modeling to study cell-to-cell variability in the dynamic behavior of the yeast transcription repressor Mig1. In particular, we investigate a recently discovered phenomenon where Mig1 during a short and transient period exits the nucleus when cells experience a shift from high to intermediate levels of extracellular glucose. A phenomenological model based on ordinary differential equations describing the transient dynamics of nuclear Mig1 is introduced, and according to the NLME methodology the parameters of this model are in turn modeled by a multivariate probability distribution. Using time-lapse microscopy data from nearly 200 cells, we estimate this parameter distribution according to the approach of maximizing the population likelihood. Based on the estimated distribution, parameter values for individual cells are furthermore characterized and the resulting Mig1 dynamics are compared to the single cell time-series data. The proposed NLME framework is also compared to the intuitive but limited standard two-stage (STS) approach. We demonstrate that the latter may overestimate variabilities by up to almost fivefold. Finally, Monte Carlo simulations of the inferred population model are used to predict the distribution of key characteristics of the Mig1 transient response. We find that with decreasing levels of post-shift glucose, the transient response of Mig1 tends…
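The point that the standard two-stage (STS) approach can overestimate variability has a simple intuition: individually fitted parameter estimates carry estimation noise on top of true cell-to-cell variation, and the STS population variance sums the two. A minimal stdlib-only Python sketch with made-up variances (not the Mig1 data):

```python
import random
from statistics import variance

random.seed(1)

# True biological variability of a parameter across 200 "cells"
true_params = [random.gauss(10.0, 1.0) for _ in range(200)]

# STS fits each cell separately, so each estimate also carries
# estimation error; here noise with SD 2 mimics that error
sts_estimates = [p + random.gauss(0.0, 2.0) for p in true_params]

# The STS spread conflates both sources (roughly 1 + 4 = 5),
# whereas an NLME fit targets the biological variance (~1) alone
inflation = variance(sts_estimates) / variance(true_params)
```

With these illustrative variances the inflation factor comes out around five, echoing the magnitude of overestimation reported in the abstract.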
Tabatabai, Mohammad A.; Kengwoung-Keumo, Jean-Jacques; Eby, Wayne M.; Bae, Sejong; Guemmegne, Juliette T.; Manne, Upender; Fouad, Mona; Partridge, Edward E.; Singh, Karan P.
2014-01-01
Background The main purpose of this study was to model and analyze the dynamics of cervical cancer mortality rates for African American (Black) and White women residing in 13 states located in the eastern half of the United States of America from 1975 through 2010. Methods The cervical cancer mortality rates from the Surveillance, Epidemiology, and End Results (SEER) program were used to model and analyze the dynamics of cervical cancer mortality. A longitudinal hyperbolastic mixed-effects type II model was used to model the cervical cancer mortality data, and SAS PROC NLMIXED and Mathematica were utilized to perform the computations. Results Despite decreasing trends in cervical cancer mortality rates for both races, racial disparities in mortality rates still exist. In all 13 states, Black women had higher mortality rates at all times. The degree of disparities and pace of decline in mortality rates over time differed among these states. Determining the paces of decline over 36 years showed that Tennessee had the most rapid decline in cervical cancer mortality for Black women, and Mississippi had the most rapid decline for White women. In contrast, slow declines in cervical cancer mortality were noted for Black women in Florida and for White women in Maryland. Conclusions In all 13 states, cervical cancer mortality rates for both racial groups have fallen. Disparities in the pace of decline in mortality rates in these states may be due to differences in the rates of screening for cervical cancers. Of note, the gap in cervical cancer mortality rates between Black women and White women is narrowing. PMID:25226583
Jarvis, Gavin E; Barbosa, Roseli; Thompson, Andrew J
2016-03-01
Citral, eucalyptol, and linalool are widely used as flavorings, fragrances, and cosmetics. Here, we examined their effects on electrophysiological and binding properties of human 5-HT3 receptors expressed in Xenopus oocytes and human embryonic kidney 293 cells, respectively. Data were analyzed using nonlinear mixed-effects modeling to account for random variance in the peak current response between oocytes. The oils caused an insurmountable inhibition of 5-HT-evoked currents (citral IC50 = 120 µM; eucalyptol = 258 µM; linalool = 141 µM) and did not compete with fluorescently labeled granisetron, suggesting a noncompetitive mechanism of action. Inhibition was not use-dependent but required a 30-second preapplication. Compound washout caused a slow (∼180 seconds) but complete recovery. Coapplication of the oils with bilobalide or diltiazem indicated they did not bind at the same locations as these channel blockers. Homology modeling and ligand docking predicted binding to a transmembrane cavity at the interface of adjacent subunits. Liquid chromatography coupled to mass spectrometry showed that an essential oil extracted from Lippia alba contained 75.9% citral. This inhibited expressed 5-HT3 receptors (IC50 = 45 µg ml(-1)) and smooth muscle contractions in rat trachea (IC50 = 200 µg ml(-1)) and guinea pig ileum (IC50 = 20 µg ml(-1)), providing a possible mechanistic explanation for why this oil has been used to treat gastrointestinal and respiratory ailments. These results demonstrate that citral, eucalyptol, and linalool inhibit 5-HT3 receptors, and their binding to a conserved cavity suggests a valuable target for novel allosteric modulators.
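For intuition, the reported IC50 values can be plugged into a generic Hill-type inhibition curve. This sketch is illustrative only and is not the authors' fitted nonlinear mixed-effects model; the Hill slope of 1 is an assumption:

```python
def fractional_response(conc_uM, ic50_uM, hill=1.0):
    """Fraction of the 5-HT-evoked current remaining in the
    presence of an inhibitor, using a standard Hill-type
    inhibition curve (a generic sketch, not the authors'
    mixed-effects fit)."""
    return 1.0 / (1.0 + (conc_uM / ic50_uM) ** hill)

# Reported IC50s (µM) from the abstract
ic50 = {"citral": 120.0, "eucalyptol": 258.0, "linalool": 141.0}

# At its IC50, each oil leaves half of the control current
half = fractional_response(120.0, ic50["citral"])
```

By construction the curve passes through 0.5 at the IC50 and falls toward zero at high concentrations, matching how IC50s are usually read off concentration-response fits.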
Berglund, Martin; Sunnåker, Mikael; Adiels, Martin; Jirstrand, Mats; Wennberg, Bernt
2012-12-01
Non-linear mixed effects (NLME) models represent a powerful tool to simultaneously analyse data from several individuals. In this study, a compartmental model of leucine kinetics is examined and extended with a stochastic differential equation to model non-steady-state concentrations of free leucine in the plasma. Data obtained from tracer/tracee experiments for a group of healthy control individuals and a group of individuals suffering from diabetes mellitus type 2 are analysed. We find that the interindividual variation of the model parameters is much smaller for the NLME models, compared to traditional estimates obtained from each individual separately. Using the mixed effects approach, the population parameters are estimated well also when only half of the data are used for each individual. For a typical individual, the amount of free leucine is predicted to vary with a standard deviation of 8.9% around a mean value during the experiment. Moreover, leucine degradation and protein uptake of leucine is smaller, proteolysis larger and the amount of free leucine in the body is much larger for the diabetic individuals than the control individuals. In conclusion, NLME models offers improved estimates for model parameters in complex models based on tracer/tracee data and may be a suitable tool to reduce data sampling in clinical studies.
Yukawa, M; Yukawa, E; Suematsu, F; Takiguchi, T; Ikeda, H; Aki, H; Mimemoto, M
2011-12-01
Optimal use of phenobarbital in the neonatal population requires information regarding the drug's pharmacokinetics and the influence of various factors, such as different routes of administration, on the drug's disposition. However, because of sampling restrictions, it is often difficult to perform traditional pharmacokinetic studies in neonates and infants. This study was conducted to establish the role of patient characteristics in estimating doses of phenobarbital for neonates and infants using routine therapeutic drug monitoring data. The population pharmacokinetics of phenobarbital was evaluated using 109 serum concentration measurements obtained from routine phenobarbital monitoring of 70 neonates and infants. The data were analysed using a non-linear mixed effects model. A one-compartment pharmacokinetic model with first-order elimination was used. Covariates screened were current total bodyweight (TBW), gestational age, postnatal age (PNA), post-conceptional age, gender and a neonate-infant clearance factor (serum concentration of phenobarbital; Conc). The final pharmacokinetic parameters were CL/F (mL/h) = (5.95.TBW (kg) + 1.41.PNA (weeks)).Conc (serum phenobarbital concentration >50 μg/mL)(-0.221), Vd/F (L) = 1.01.TBW (kg), and F = 0.483 for oral administration; F = 1 was assumed for suppositories. Conc(-0.221) is 1 for phenobarbital concentrations of 50 μg/mL or below. The covariates influencing phenobarbital clearance in this study were TBW, PNA and Conc. Phenobarbital clearance increases proportionately with increasing TBW, and an older newborn was expected to have a higher rate of clearance than a younger newborn of equal bodyweight. Moreover, the clearance of phenobarbital decreased nonlinearly with increasing serum concentrations of phenobarbital >50 μg/mL (Conc(-0.221)). We developed a new model for neonate and infant dosing of phenobarbital with good predictive performance. Clinical application of our model should permit more accurate selection of initial and maintenance doses to achieve…
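The reported final model can be written directly as code. A sketch, assuming (as the abstract indicates for Conc(-0.221)) that the concentration term applies only above 50 μg/mL:

```python
def phenobarbital_cl_f(tbw_kg, pna_weeks, conc_ug_ml):
    """Apparent clearance CL/F in mL/h from the reported model:
    CL/F = (5.95*TBW + 1.41*PNA) * Conc**(-0.221), with the
    concentration term equal to 1 at or below 50 ug/mL."""
    base = 5.95 * tbw_kg + 1.41 * pna_weeks
    if conc_ug_ml > 50.0:
        return base * conc_ug_ml ** -0.221
    return base

def phenobarbital_vd_f(tbw_kg):
    """Apparent volume of distribution Vd/F in litres: 1.01 * TBW."""
    return 1.01 * tbw_kg

# Hypothetical neonate: 3 kg, 4 weeks postnatal age, 30 ug/mL
cl = phenobarbital_cl_f(3.0, 4.0, 30.0)
```

This makes the two stated behaviors explicit: clearance scales with bodyweight and postnatal age, and it is reduced nonlinearly once serum concentrations exceed 50 μg/mL.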
López-López, José Antonio; Botella, Juan; Sánchez-Meca, Julio; Marín-Martínez, Fulgencio
2013-01-01
Since heterogeneity between reliability coefficients is usually found in reliability generalization studies, moderator analyses constitute a crucial step for that meta-analytic approach. In this study, different procedures for conducting mixed-effects meta-regression analyses were compared. Specifically, four transformation methods for the…
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervention rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Johanna Petersen
Full Text Available Time out-of-home has been linked with numerous health outcomes, including cognitive decline, poor physical ability and low emotional state. Comprehensive characterization of this important health metric would potentially enable objective monitoring of key health outcomes. The objective of this study is to determine the relationship between time out-of-home and cognitive status, physical ability and emotional state. Participants included 85 independent older adults, age 65-96 years (M = 86.36; SD = 6.79), who lived alone, from the Intelligent Systems for Assessing Aging Changes (ISAAC) and the ORCATECH Life Laboratory cohorts. Factors hypothesized to affect time out-of-home were assessed on three different temporal levels: yearly (cognitive status, loneliness, clinical walking speed), weekly (pain and mood) or daily (time out-of-home, in-home walking speed, weather, and season). Subject characteristics including age, race, and gender were assessed at baseline. Total daily time out-of-home in hours was assessed objectively and unobtrusively for up to one year using an in-home activity sensor platform. A longitudinal tobit mixed effects regression model was used to relate daily time out-of-home to cognitive status, physical ability and emotional state. More hours spent outside the home was associated with better cognitive function as assessed using the Clinical Dementia Rating (CDR) Scale, where higher scores indicate lower cognitive function (βCDR = -1.69, p<0.001). More hours outside the home was also associated with superior physical ability (βPain = -0.123, p<0.001) and improved emotional state (βLonely = -0.046, p<0.001; βLow mood = -0.520, p<0.001). Weather, season, and weekday also affected the daily time out-of-home. These results suggest that objective longitudinal monitoring of time out-of-home may enable unobtrusive assessment of cognitive, physical and emotional state. In addition, these results indicate that the factors affecting out…
Perez, Raphaël P A; Pallas, Benoît; Le Moguédec, Gilles; Rey, Hervé; Griffon, Sébastien; Caliman, Jean-Pierre; Costes, Evelyne; Dauzat, Jean
2016-08-01
Three-dimensional (3D) reconstruction of plants is time-consuming and involves considerable levels of data acquisition. This is possibly one reason why the integration of genetic variability into 3D architectural models has so far been largely overlooked. In this study, an allometry-based approach was developed to account for architectural variability in 3D architectural models of oil palm (Elaeis guineensis Jacq.) as a case study. Allometric relationships were used to model architectural traits from individual leaflets to the entire crown while accounting for ontogenetic and morphogenetic gradients. Inter- and intra-progeny variabilities were evaluated for each trait and mixed-effect models were used to estimate the mean and variance parameters required for complete 3D virtual plants. Significant differences in leaf geometry (petiole length, density of leaflets, and rachis curvature) and leaflet morphology (gradients of leaflet length and width) were detected between and within progenies and were modelled in order to generate populations of plants that were consistent with the observed populations. The application of mixed-effect models on allometric relationships highlighted an interesting trade-off between model accuracy and ease of defining parameters for the 3D reconstruction of plants while at the same time integrating their observed variability. Future research will be dedicated to sensitivity analyses coupling the structural model presented here with a radiative balance model in order to identify the key architectural traits involved in light interception efficiency. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode
2009-01-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornoe, R.V. Overgaard, H. Agerso, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) …; … likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only…
Variable Selection for Linear Mixed Effects Models via Penalization Approaches
陈峰
2014-01-01
Objective To investigate variable selection approaches for linear mixed effects models via penalization-based strategies. Methods A penalty was imposed on the fixed effects of the linear mixed effects model, and Lasso and SCAD were used to select important variables. A two-step iteration algorithm was developed to maximize the penalized likelihood, and the penalization parameter was chosen via the BIC procedure. Extensive simulations were implemented to evaluate the performance of the proposed approaches for variable selection, and an application to quantitative trait loci selection on real data was given to demonstrate these penalization approaches. Results Simulations and the application showed that the two-step iteration algorithm is simple and feasible for linear mixed effects models, and that the penalization-based variable selection methods can effectively identify meaningful covariates. Conclusions Penalization-based strategies provide effective and practical approaches for variable selection in linear mixed effects models.
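The fixed-effects penalization step can be illustrated with a plain Lasso coordinate-descent update (soft-thresholding). This stdlib-only sketch ignores the random effects and the two-step iteration, so it is only the penalized least-squares core, not the authors' full algorithm:

```python
def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma) used by Lasso."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso by coordinate descent for loss
    (1/2n)||y - X b||^2 + lam * ||b||_1.
    X is a list of rows; random effects are omitted here."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n)) / n
            norm_j = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(zj, lam) / norm_j
    return beta

# Demo with a hypothetical orthogonal design: y depends on the
# first predictor only, so Lasso should zero out the second
X = [[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]]
y = [2.0, -2.0, 2.0, -2.0]
beta = lasso_cd(X, y, lam=0.5)
```

SCAD replaces the soft-thresholding rule with a nonconvex thresholding function that shrinks large coefficients less; the coordinate-wise structure of the algorithm is the same.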
Tornøe, Christoffer Wenzel; Agersø, Henrik; Nielsen, Henrik Aalborg
2004-01-01
In this paper, the two non-linear mixed-effects programs NONMEM and NLME were compared for their use in population pharmacokinetic/pharmacodynamic (PK/PD) modelling. We have described the first-order conditional estimation (FOCE) method as implemented in NONMEM and the alternating algorithm in NLME proposed by Lindstrom and Bates. The two programs were tested using clinical PK/PD data of a new gonadotropin-releasing hormone (GnRH) antagonist, degarelix, currently being developed for prostate cancer treatment. The pharmacokinetics of intravenously administered degarelix was analysed using a three…
Lin, Zhoumeng; Cuneo, Matthew; Rowe, Joan D.; Li, Mengjie; Tell, Lisa A; Allison, Shayna; Carlson, Jan; Riviere, Jim E.; Gehring, Ronette
2016-01-01
Background Extra-label use of tulathromycin in lactating goats is common and may cause violative residues in milk. The objective of this study was to develop a nonlinear mixed-effects pharmacokinetic (NLME-PK) model to estimate tulathromycin depletion in plasma and milk of lactating goats. Eight lactating goats received two subcutaneous injections of 2.5 mg/kg tulathromycin 7 days apart; blood and milk samples were analyzed for concentrations of tulathromycin and the common fragment of tulath...
Kratochwil, Nicole A; Meille, Christophe; Fowler, Stephen; Klammers, Florian; Ekiciler, Aynur; Molitor, Birgit; Simon, Sandrine; Walter, Isabelle; McGinnis, Claudia; Walther, Johanna; Leonard, Brian; Triyatni, Miriam; Javanbakht, Hassan; Funk, Christoph; Schuler, Franz; Lavé, Thierry; Parrott, Neil J
2017-03-01
Early prediction of human clearance is often challenging, in particular for the growing number of low-clearance compounds. Long-term in vitro models have been developed which enable sophisticated hepatic drug disposition studies and improved clearance predictions. Here, the cell line HepG2, iPSC-derived hepatocytes (iCell®), the hepatic stem cell line HepaRG™, and human hepatocyte co-cultures (HμREL™ and HepatoPac®) were compared to primary hepatocyte suspension cultures with respect to their key metabolic activities. Similar metabolic activities were found for the long-term models HepaRG™, HμREL™, and HepatoPac® and the short-term suspension cultures when averaged across all 11 enzyme markers, although differences were seen in the activities of CYP2D6 and non-CYP enzymes. For iCell® and HepG2, the metabolic activity was more than tenfold lower. The micropatterned HepatoPac® model was further evaluated with respect to clearance prediction. To assess the in vitro parameters, pharmacokinetic modeling was applied. The determination of intrinsic clearance by nonlinear mixed-effects modeling in a long-term model significantly increased the confidence in the parameter estimation and extended the sensitive range towards 3% of liver blood flow, i.e., >10-fold lower as compared to suspension cultures. For in vitro to in vivo extrapolation, the well-stirred model was used. The micropatterned model gave rise to clearance prediction in man within a twofold error for the majority of low-clearance compounds. Further research is needed to understand whether transporter activity and drug metabolism by non-CYP enzymes, such as UGTs, SULTs, AO, and FMO, is comparable to the in vivo situation in these long-term culture models.
Bazzoli, Caroline; Retout, Sylvie; Mentré, France
2009-06-30
We focus on the Fisher information matrix used for design evaluation and optimization in nonlinear mixed effects multiple response models. We evaluate the appropriateness of its expression computed by linearization, as proposed for a single response model. Using a pharmacokinetic-pharmacodynamic (PKPD) example, we first compare the Fisher information matrix computed with this approximation to one derived from the observed information matrix in a large simulation using the stochastic approximation expectation-maximization (SAEM) algorithm. The expression of the Fisher information matrix for multiple responses is also evaluated by comparison with the empirical information obtained through a replicated simulation study using the first-order linearization estimation methods implemented in the NONMEM software (first-order (FO), first-order conditional estimate (FOCE)) and the SAEM algorithm in the MONOLIX software. The predicted errors given by the approximated information matrix are close to those given by the information matrix obtained without linearization using SAEM and to the empirical ones obtained with FOCE and SAEM. The simulation study also illustrates the accuracy of both the FOCE and SAEM estimation algorithms when jointly modelling multiple responses, and the major limitations of the FO method. This study highlights the appropriateness of the approximated Fisher information matrix for multiple responses, which is implemented in PFIM 3.0, an extension of the R function PFIM dedicated to design evaluation and optimization. It also emphasizes the use of this computing tool for designing population multiple response studies, as for instance in PKPD studies or in PK studies including the modelling of the PK of a drug and its active metabolite.
Boudiaf, Naïla; Laboissière, Rafael; Cousin, Émilie; Fournet, Nathalie; Krainik, Alexandre; Baciu, Monica
2016-11-24
The effect of normal aging on lexical production and semantic processing was evaluated in 72 healthy participants. Four tasks were used: picture naming (PN), picture categorization (PC), numerical judgment (NJ), and color judgment (CJ). The dependence of reaction time (RT) and correct responses on age was accounted for by mixed-effects models. Participants underwent neuropsychological testing for verbal, executive, and memory functions. The RTs increase significantly with age for all tasks. After parceling out the non-specific cognitive decline, as reflected by the NJ task, the RT for the PN task decreases with age. Behavioral data were interpreted in relation with neuropsychological scores. Our results suggest that (a) naming becomes more automatic and semantic processing slightly more difficult with age, and (b) a non-specific general slowdown of cognitive processing occurs with age. Lexical production remained unaltered, based on compensatory automatic processes. This study also suggests a possible slowdown of semantic processing, even in normal aging.
Jun Diao
2014-11-01
Full Text Available Allometric models of internodes are an important component of Functional-Structural Plant Models (FSPMs), which represent the shape of internodes in tree architecture and help our understanding of resource allocation in organisms. Constant allometry is always assumed in these models. In this paper, multilevel nonlinear mixed-effects models were used to characterize the variability of internode allometry, describing the relationship between the last internode length and biomass of Pinus tabulaeformis Carr. trees within the GreenLab framework. We demonstrated that there is significant variability in allometric relationships at the tree and different-order branch levels, and the variability decreases among levels from trees to first-order branches and, subsequently, to second-order branches. The variability was partially explained by the random effects of site characteristics, stand age, density, and topological position of the internode. Tree- and branch-level-specific allometric models are recommended because they produce unbiased and accurate internode length estimates. The model and method developed in this study are useful for understanding and describing the structure and functioning of trees.
Ganapathisubramanian, N.
1991-08-01
The iodate-As(III) system which exhibits bistability in an ideal continuous flow stirred tank reactor (CSTR), exhibits tristability when subjected to the mixing model of Kumpinsky and Epstein [J. Chem. Phys. 82, 53 (1985)]. The cross flow between the major and minor reactors influences the system's lower hysteresis limit more than its upper hysteresis limit.
Zhou, Hong; Muellerleile, Paige; Ingram, Debra; Wong, Seok P.
2011-01-01
Intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement and psychometrics when a researcher is interested in the relationship among variables of a common class. The formulas for deriving ICCs, or generalizability coefficients, vary depending on which models are specified. This article gives the equations for…
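The formulas do indeed vary with the specified model. As one concrete instance, the one-way random-effects ICC, often written ICC(1), can be computed from the between- and within-group mean squares; a minimal sketch for equal group sizes (other ICC variants use different formulas):

```python
def icc1(groups):
    """ICC(1) from a one-way random-effects ANOVA with equal
    group sizes: (MSB - MSW) / (MSB + (k - 1) * MSW),
    where k is the number of observations per group."""
    m = len(groups)           # number of groups
    k = len(groups[0])        # observations per group
    grand = sum(sum(g) for g in groups) / (m * k)
    means = [sum(g) / k for g in groups]
    msb = k * sum((mu - grand) ** 2 for mu in means) / (m - 1)
    msw = sum((x - mu) ** 2
              for g, mu in zip(groups, means) for x in g) / (m * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

With no within-group spread the coefficient is 1; when group means coincide and all variation is within groups, it goes negative, which is a known property of this estimator.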
Wang, Shudong; Jiao, Hong; Jin, Ying; Thum, Yeow Meng
2010-01-01
The vertical scales of large-scale achievement tests created by using item response theory (IRT) models are mostly based on clustered (or correlated) educational data, in which students usually are clustered in certain groups or settings (classrooms or schools). Such applications directly violate the assumption of independently sampled persons in…
Jiao, Yan; Ren, Yiping
2017-01-01
In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total, 3,275 individuals were collected over six years (2008, 2011-2015). One generalized linear model, two simple linear models, and nine linear mixed effect models that applied effects from regions and/or years to the coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed effect model with random effects from both regions and years fit the data best, with the lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178-0.0308, and the estimated exponent b was 2.917, with 95% confidence interval 2.731-2.945. Estimates for a and b with the random effects in intercept and coefficient from region and year ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on parameters a and b, and the effects from years were shown to be much larger than those from regions. Except for Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to reference years of 1960, 1986, 2005, 2007, 2008-2009 and 2010 revealed that the body shape of Yellow Croaker has become thinner in recent years. Furthermore, relative condition factors varied among months, years, regions, and lengths. The values of a and the relative condition factors decreased as environmental pollution worsened; therefore, length-weight relationships could serve as an indicator of environmental quality. Results from this study provide a basic description of the current condition of Yellow Croaker along the north coast of China. PMID:28225777
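The estimation step behind such length-weight models can be sketched by log-linearizing W = a·L^b and fitting by ordinary least squares; relative condition factors then follow as the ratio of observed to predicted weight. The data below are simulated (the parameter values are merely of the same order as the paper's estimates), not the survey data, and no region/year random effects are included.

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.uniform(8.0, 25.0, 200)            # body length, cm (simulated)
true_a, true_b = 0.019, 2.92               # same order as the paper's estimates
W = true_a * L ** true_b * np.exp(rng.normal(0.0, 0.1, L.size))  # weight with lognormal error

# OLS on the log scale: log W = log a + b log L
b_hat, log_a_hat = np.polyfit(np.log(L), np.log(W), 1)
a_hat = np.exp(log_a_hat)

# relative condition factor: observed weight over weight predicted at that length
Kn = W / (a_hat * L ** b_hat)
print(a_hat, b_hat, Kn.mean())
```

A mixed-effects version would let log a (intercept) and b (slope) vary by region and year instead of fitting a single pooled line.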
Erik Olofsen
2015-07-01
Full Text Available Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect, thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, both with respect to lowest AICc and best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models, the AICc was the better guide to the model with the best predictive performance.
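For Gaussian residuals, both criteria can be computed from the residual sum of squares: AIC = n·ln(RSS/n) + 2k up to an additive constant, and AICc = AIC + 2k(k+1)/(n - k - 1). The sketch below applies them to polynomial fits of a simulated power-of-time curve; the polynomial stand-ins, noise level, and sample size are illustrative, not the paper's multi-exponential models.

```python
import numpy as np

def aic_aicc(rss, n, k):
    """Gaussian log-likelihood up to a constant; k counts all fitted
    parameters including the residual variance."""
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    return aic, aicc

rng = np.random.default_rng(1)
t = np.linspace(1.0, 6.0, 11)                    # 11 "sampling times"
y = t ** -0.5 + rng.normal(0.0, 0.02, t.size)    # power-of-time "concentration" data

for degree in (1, 2, 3):                         # polynomial stand-ins for candidate models
    coef = np.polyfit(t, y, degree)
    rss = float(((np.polyval(coef, t) - y) ** 2).sum())
    aic, aicc = aic_aicc(rss, n=t.size, k=degree + 2)
    print(degree, round(aic, 2), round(aicc, 2))
```

With n = 11 the correction term is substantial, which is why AICc and AIC can rank small-sample models differently.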
申宁宁; 房瑞玲; 高宇钊; 李少琼; 张军锋; 刘桂芬
2015-01-01
Objective: To explain the principles of Markov chain Monte Carlo (MCMC) multiple imputation and mixed-effects linear model analysis of repeated measurement data, and to implement in software the analysis of longitudinal monitoring data with missing values. Methods: From the complete longitudinal monitoring data of 222 patients with hypertension, a data set with 18.92% of values missing at random was generated. MCMC multiple imputation was applied in a simulation study and a case study to impute the missing values, and mixed-effects linear model analyses of the repeated measurements were then carried out. Results: Both the simulation study and the case study showed that, with a sample of 200 and 20% missing data, five rounds of MCMC multiple imputation gave the most robust results. Before imputation, the mixed-effects model results for the data with missing values differed from those for the complete data; after imputation, the results for the completed data agreed with those for the complete data. Conclusion: MCMC multiple imputation makes full use of the information in incomplete records and is an effective approach to model analysis with missing data. For repeated measurement data with missing values, combining mixed-effects modelling with MCMC multiple imputation yields results that better reflect reality.
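After the m imputed datasets are each analyzed, the per-imputation estimates are combined with Rubin's rules, the standard pooling step that follows multiple imputation: the point estimates are averaged, and the total variance adds the between-imputation spread to the average within-imputation variance. A minimal sketch with made-up numbers (not the hypertension data):

```python
import statistics

def rubin_pool(estimates, variances):
    """Pool m completed-data estimates q_i (with variances u_i) by Rubin's rules."""
    m = len(estimates)
    q_bar = statistics.fmean(estimates)       # pooled point estimate
    u_bar = statistics.fmean(variances)       # within-imputation variance
    b = statistics.variance(estimates)        # between-imputation variance
    return q_bar, u_bar + (1 + 1 / m) * b     # estimate, total variance

# a regression slope re-estimated on m = 5 imputed datasets (made-up numbers)
est = [1.02, 0.98, 1.05, 1.00, 0.97]
var = [0.040, 0.042, 0.039, 0.041, 0.043]
q, total = rubin_pool(est, var)
print(q, total)
```

The total variance always exceeds the average complete-data variance, which is how the pooled inference reflects the extra uncertainty from imputation.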
Yang, Yao Bin; Swithenbank, Jim
2008-01-01
Packed bed combustion is still the most common way to burn municipal solid wastes. In this paper, a dispersion model for particle mixing, mainly caused by the movement of the grate in a moving-burning bed, has been proposed, and transport equations for continuity, momentum, species, and energy conservation are described. Particle-mixing coefficients obtained from model tests range from 2.0×10^-6 to 3.0×10^-5 m^2/s. A numerical solution is sought to simulate the combustion behaviour of a full-scale 12-tonne-per-hour waste incineration furnace at different levels of bed mixing. It is found that an increase in mixing causes a slight delay in bed ignition but greatly enhances the combustion processes during the main combustion period in the bed. A medium level of mixing produces a combustion profile that is positioned more at the central part of the combustion chamber, and any leftover combustible gases (mainly CO) enter directly into the most intensive turbulence area created by the opposing secondary-air jets and thus are consumed quickly. Generally, the specific arrangement of the impinging secondary-air jets damps out most of the non-uniformity in temperature and CO in the gas flow coming from the bed top, while medium-level mixing results in the lowest CO emission at the furnace exit and the highest combustion efficiency in the bed.
Bukoski, Jacob J.; Broadhead, Jeremy S.; Donato, Daniel C.; Murdiyarso, Daniel; Gregoire, Timothy G.
2017-01-01
Mangroves provide extensive ecosystem services that support local livelihoods and international environmental goals, including coastal protection, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects seeking to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through field inventories. To streamline C quantification in mangrove conservation projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We compile datasets of mangrove biomass C (197 observations from 48 sites) and soil organic C (99 observations from 27 sites) to parameterize the predictive models, and use linear mixed effect models to model the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, which are found to explain a substantial proportion of variance within the estimation datasets and indicate significant heterogeneity across sites within the region. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm3 (14.1% of mean soil C). The results point to a need for standardization of forest metrics to facilitate meta-analyses, as well as provide important considerations for refining ecosystem C stock models in mangroves. PMID:28068361
Correlated Data Analysis: Modeling, Analytics, and Applications
Song, Peter X-K
2007-01-01
Presents developments in correlated data analysis. This book provides a systematic treatment for the topic of estimating functions. In addition to marginal models and mixed-effects models, it covers topics on joint regression analysis based on Gaussian copulas and generalized state space models for longitudinal data from long time series.
Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.
2016-12-01
Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed effect models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm3 (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects and thus the use of the SOC model may be most valuable for sites in which field measurements of soil C exist.
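The role of the site random effects can be illustrated with the usual empirical BLUP for a one-way mixed model: each site's deviation from the overall mean is shrunk by var_b / (var_b + var_e / n_i), so sparsely sampled sites are pulled toward the regional mean. All numbers below are hypothetical, not the mangrove estimates.

```python
import numpy as np

def site_blups(site_means, site_ns, mu, var_b, var_e):
    """Empirical BLUPs of site effects in y_ij = mu + b_i + e_ij,
    with known variance components var_b (between) and var_e (within)."""
    site_means = np.asarray(site_means, dtype=float)
    site_ns = np.asarray(site_ns, dtype=float)
    shrink = var_b / (var_b + var_e / site_ns)   # heavier shrinkage for small n_i
    return shrink * (site_means - mu)

# three hypothetical sites: well-sampled, moderate, and a single-plot site
blups = site_blups([40.0, 30.0, 36.0], [25, 4, 1], mu=35.0, var_b=9.0, var_e=36.0)
print(blups)
```

This is why a site-level random effect is most useful for prediction at sites where at least some field measurements exist: without local data the predicted site effect collapses toward zero.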
Perez-Rodriguez, M Mercedes; Garcia-Nieto, Rebeca; Fernandez-Navarro, Pablo; Galfalvy, Hanga; de Leon, Jose; Baca-Garcia, Enrique
2012-01-01
Objectives: To investigate the trends and correlations of gross domestic product (GDP) adjusted for purchasing power parity (PPP) per capita on suicide rates in 10 WHO regions during the past 30 years. Design: Analyses of databases of PPP-adjusted GDP per capita and suicide rates. Countries were grouped according to the Global Burden of Disease regional classification system. Data sources: The World Bank's official website and WHO's mortality database. Statistical analyses: After graphically displaying PPP-adjusted GDP per capita and suicide rates, mixed effect models were used for representing and analysing clustered data. Results: Three different groups of countries, based on the correlation between PPP-adjusted GDP per capita and suicide rates, are reported: (1) positive correlation: developing (lower middle and upper middle income) Latin-American and Caribbean countries, developing countries in the South East Asian Region including India, some countries in the Western Pacific Region (such as China and South Korea), and high-income Asian countries, including Japan; (2) negative correlation: high-income and developing European countries, Canada, Australia and New Zealand; and (3) no correlation, found in an African country. Conclusions: PPP-adjusted GDP per capita may offer a simple measure for designing the type of preventive interventions aimed at lowering suicide rates that can be used across countries. Public health interventions might be more suitable for developing countries. In high-income countries, however, preventive measures based on the medical model might prove more useful. PMID:22586285
Ma, Zongwei; Liu, Yang; Zhao, Qiuyue; Liu, Miaomiao; Zhou, Yuanchun; Bi, Jun
2016-05-01
Satellite remotely sensed aerosol optical depth (AOD) provides an effective way to fill the spatial and temporal gaps left by the ground PM2.5 monitoring network. Previous studies have established robust advanced statistical models to estimate PM2.5 using AOD data in China. However, the coarse resolution (~10 km or greater) of their PM2.5 estimates is not sufficient to support health effect studies at urban scales. In this study, 3 km AOD data from Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 products were used to estimate high resolution PM2.5 concentrations in the Yangtze Delta Region of China. We proposed a nested linear mixed effects (LME) model including nested month-, week-, and day-specific random effects of PM2.5-AOD relationships. Validation results show that the LME model with only day-specific random effects (non-nested model) used in previous studies performs poorly on days without PM2.5-AOD matchups (the R2 of day-of-year-based cross validation (DOY-based CV) is 0.148). The results also show that our nested model cannot improve the performance of the non-nested model on days with PM2.5-AOD matchups (sample-based CV R2 = 0.671 for the nested model vs. 0.661 for the non-nested model), but can greatly improve model performance beyond those days (DOY-based CV R2 = 0.339 for the nested model vs. 0.148 for the non-nested model). To further improve model performance, we applied "buffer models" (i.e., models fitted from datasets in which ground PM2.5 was matched with the average AOD values within certain radius buffer zones of gridded PM2.5 data) to the 3 km AOD data, since the buffer models have more days with PM2.5-AOD matchups and can provide more day-specific relationships. The results of this study show that 3 km MODIS C6 AOD data can be used to estimate PM2.5 concentrations and can provide more detailed spatial information for urban scale studies. The application of our nested LME model can greatly improve the accuracy of the 3 km PM2.5 estimates.
李耀翔; 姜立春
2013-01-01
Using wood density data from 432 samples of 9 trees in dahurian larch (Larix gmelinii) plantations at the Jinsha Forest Farm of the Qitaihe Forest Bureau, Heilongjiang Province, stepwise regression techniques were used to develop a wood density model: WD = β1 + β2·RN + β3·RN^2 + β4·h. The developed model was then fitted with single-level and multilevel linear mixed-effects modeling approaches using the LME procedure of the S-PLUS software. The mixed-effects models showed better fitting precision than the traditional basic model for both single-level and multilevel effects, and the mixed models considering the single-level tree-height effect or the two-level effects were more precise than the mixed model considering the single-level individual-tree effect. Model tests indicated that the mixed-effects models reflected not only the overall mean trend of wood density but also the variation among groups.
Xu, Wangli; Zhou, Haibo
2012-09-01
Two-stage design is a well-known cost-effective way for conducting biomedical studies when the exposure variable is expensive or difficult to measure. Recent research development further allowed one or both stages of the two-stage design to be outcome dependent on a continuous outcome variable. This outcome-dependent sampling feature enables further efficiency gain in parameter estimation and overall cost reduction of the study (e.g. Wang, X. and Zhou, H., 2010. Design and inference for cancer biomarker study with an outcome and auxiliary-dependent subsampling. Biometrics 66, 502-511; Zhou, H., Song, R., Wu, Y. and Qin, J., 2011. Statistical inference for a two-stage outcome-dependent sampling design with a continuous outcome. Biometrics 67, 194-202). In this paper, we develop a semiparametric mixed effect regression model for data from a two-stage design where the second-stage data are sampled with an outcome-auxiliary-dependent sample (OADS) scheme. Our method allows the cluster- or center-effects of the study subjects to be accounted for. We propose an estimated likelihood function to estimate the regression parameters. Simulation study indicates that greater study efficiency gains can be achieved under the proposed two-stage OADS design with center-effects when compared with other alternative sampling schemes. We illustrate the proposed method by analyzing a dataset from the Collaborative Perinatal Project.
Gabriela A. Buqui
2015-06-01
Full Text Available Vicenin-2 (apigenin-6,8-di-C-β-D-glucopyranoside) is present in hydroalcoholic extracts of leaves of the Brazilian species Lychnophora ericoides Mart. (Asteraceae), and biological effects of this compound, including anti-inflammatory, antioxidant, and anti-tumor effects, have been demonstrated in rat models. Given the potential of this compound as a pharmacological agent, the aims of this investigation were to evaluate the extent of intestinal absorption of vicenin-2 and to determine the intestinal permeation profile using an in situ single-pass intestinal perfusion technique. A validated HPLC-UV method was applied to measure the amount of unabsorbed vicenin-2 in the gut after an oral administration of 180 mg/kg in five rats. A nonlinear mixed effects model was used to determine the absorption pharmacokinetic parameters, assuming a first-order absorption process and an active secretion process for this compound, wherein the active secretion was characterized by a zero-order process. The population pharmacokinetic parameters obtained were 0.274 min^-1 for the first-order absorption rate constant and 16.3% min^-1 for the zero-order rate constant; the final percentage of the original dose that was absorbed in vivo was 40.2 ± 2.5%. These parameters indicated that vicenin-2 was rapidly absorbed in the small intestine. In contrast to literature reports indicating no absorption of vicenin-2 in Caco-2 cells, our results suggested that vicenin-2 can be absorbed in the small intestine of rats. This finding supports further investigation of vicenin-2 as a viable oral phytopharmaceutical agent for digestive diseases.
Lin, Zhoumeng; Cuneo, Matthew; Rowe, Joan D; Li, Mengjie; Tell, Lisa A; Allison, Shayna; Carlson, Jan; Riviere, Jim E; Gehring, Ronette
2016-11-18
Extra-label use of tulathromycin in lactating goats is common and may cause violative residues in milk. The objective of this study was to develop a nonlinear mixed-effects pharmacokinetic (NLME-PK) model to estimate tulathromycin depletion in plasma and milk of lactating goats. Eight lactating goats received two subcutaneous injections of 2.5 mg/kg tulathromycin 7 days apart; blood and milk samples were analyzed for concentrations of tulathromycin and the common fragment of tulathromycin (i.e., the marker residue CP-60,300), respectively, using liquid chromatography mass spectrometry. Based on these new data and related literature data, a NLME-PK compartmental model with first-order absorption and elimination was used to model plasma concentrations and the cumulative excreted amount in milk. Monte Carlo simulations with 100 replicates were performed to predict the time when the upper limit of the 95% confidence interval of milk concentrations was below the tolerance. All animals were healthy throughout the study with normal appetite and milk production levels, and with mild-moderate injection-site reactions that diminished by the end of the study. The measured data showed that milk concentrations of the marker residue of tulathromycin were below the limit of detection (LOD = 1.8 ng/ml) 39 days after the second injection. A 2-compartment model with milk as an excretory compartment best described tulathromycin plasma and CP-60,300 milk pharmacokinetic data. The model-predicted data correlated with the measured data very well. The NLME-PK model estimated that tulathromycin plasma concentrations were below LOD (1.2 ng/ml) 43 days after a single injection, and 62 days after the second injection with a 95% confidence. These estimated times are much longer than the current meat withdrawal time recommendation of 18 days for tulathromycin in non-lactating cattle. The results suggest that two subcutaneous injections of 2.5 mg/kg tulathromycin are a clinically
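A drastically simplified sketch of the idea, not the paper's fitted two-compartment NLME model: a one-compartment profile with first-order absorption, C(t) = F·D·ka/(V·(ka - ke))·(e^(-ke·t) - e^(-ka·t)), combined with a crude Monte Carlo over between-animal variability in the elimination rate to locate the first day on which the upper 95% bound of predicted concentration stays below a detection limit. All parameter values here are invented; only the 1.2 ng/ml detection limit echoes the abstract.

```python
import numpy as np

def conc(t, dose, f, v, ka, ke):
    """One-compartment model with first-order absorption and elimination."""
    return f * dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(7)
t = np.arange(1.0, 80.0, 1.0)                    # days after a single dose
ke = 0.15 * np.exp(rng.normal(0.0, 0.2, 500))    # lognormal between-animal variability
curves = conc(t[None, :], dose=2.5, f=1.0, v=10.0, ka=1.5, ke=ke[:, None])

upper = np.quantile(curves, 0.95, axis=0)        # upper bound across simulated animals
lod = 1.2e-3                                     # detection limit (mg/L, i.e. 1.2 ng/ml)
withdrawal = t[np.argmax(upper < lod)]           # first day the bound is below the LOD
print(withdrawal)
```

Basing the withdrawal time on an upper population quantile rather than the mean curve is what makes the estimate conservative for slow-eliminating animals.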
Backhans Mona
2012-11-01
Full Text Available Abstract Background: Gender differences in mortality vary widely between countries and over time, but few studies have examined predictors of these variations, apart from smoking. The aim of this study is to investigate the link between gender policy and the gender gap in cause-specific mortality, adjusted for economic factors and health behaviours. Methods: 22 OECD countries were followed 1973-2008 and the outcomes were gender gaps in external cause and circulatory disease mortality. A previously found country cluster solution was used, which includes indicators on taxes, parental leave, pensions, social insurances and social services in kind. Male breadwinner countries were made the reference group and compared to earner-carer, compensatory breadwinner, and universal citizen countries. Specific policies were also analysed. Mixed effect models were used, where years were the level-1 units and countries were the level-2 units. Results: Both the earner-carer cluster (ns after adjustment for GDP) and policies characteristic of that cluster are associated with smaller gender differences in external causes, particularly due to an association with increased female mortality. Cluster differences in the gender gap in circulatory disease mortality are the result of a larger relative decrease of male mortality in the compensatory breadwinner cluster and the earner-carer cluster. Policies characteristic of those clusters were however generally related to increased mortality. Conclusion: Results for external cause mortality are in concordance with the hypothesis that women become more exposed to risks of accident and violence when they are economically more active. For circulatory disease mortality, results differ depending on approach, cluster or indicator. Whether cluster differences not explained by specific policies reflect other welfare policies or unrelated societal trends is an open question. Recommendations for further studies are made.
Nitipong Homwong
Full Text Available Rotaviruses (RV) are important causes of diarrhea in animals, especially in domestic animals. Of the 9 RV species, rotavirus A, B, and C (RVA, RVB, and RVC, respectively) have been established as important causes of diarrhea in pigs. The Minnesota Veterinary Diagnostic Laboratory receives swine stool samples from North America to determine the etiologic agents of disease. Between November 2009 and October 2011, 7,508 samples from pigs with diarrhea were submitted to determine if enteric pathogens, including RV, were present in the samples. All samples were tested for RVA, RVB, and RVC by real-time RT-PCR. The majority of the samples (82%) were positive for RVA, RVB, and/or RVC. To better understand the risk factors associated with RV infections in swine diagnostic samples, three-level mixed-effects logistic regression models (3L-MLMs) were used to estimate associations among RV species, age, and geographical variability within the major swine production regions in North America. The conditional odds ratios (cORs) for RVA and RVB detection were lower for 1-3 day old pigs than for any other age group, whereas the cOR of RVC detection was significantly higher (p < 0.001) for 1-3 day old pigs than for the >55 day old age groups. Furthermore, pigs in the 21-55 day old age group had statistically higher cORs of RV co-detection than 1-3 day old pigs (p < 0.001). The 3L-MLMs indicated that RV status was more similar within states than among states or within each region. Our results indicated that 3L-MLMs are a powerful and adaptable tool for handling and analyzing large hierarchical datasets. In addition, our results indicated that, overall, swine RV epidemiology is complex, and RV species are associated with different age groups and vary by region in North America.
Fabian C.C. Uzoh; William W. Oliver
2008-01-01
A diameter increment model is developed and evaluated for individual trees of ponderosa pine throughout the species' range in the United States using a multilevel linear mixed model. Stochastic variability is broken down among period, locale, plot, tree, and within-tree components. Covariates acting at the tree and stand levels, such as breast height diameter, density, site index...
Perez, Raphaël P A; Pallas, Benoît; Le Moguédec, Gilles; Rey, Hervé; Griffon, Sébastien; Caliman, Jean-Pierre; Costes, Evelyne; Dauzat, Jean
2016-01-01
... (Elaeis guineensis Jacq.) as a case study. Allometric relationships were used to model architectural traits from individual leaflets to the entire crown while accounting for ontogenetic and morphogenetic gradients...
欧光龙; 胥辉; 王俊峰; 肖义发; 陈科屹; 郑海妹
2015-01-01
This study took natural Simao pine (Pinus kesiya var. langbianensis) forest as its research object, surveying stand aboveground, root, and total biomass on 45 plots at three typical sites (Tongguan Town of Mojiang County, Yunxian Town of Simao District, and Nuofu Town of Lancang County) in Pu'er City, Yunnan Province. First, the best power function was chosen as the basic stand biomass model. Next, using mixed-effects modeling techniques and treating the regional effect as a random effect, a basic mixed-effects model was selected, its variance and covariance structures were analyzed, and mixed-effects models with regional random effects were constructed for each of the three biomass components. Fixed effects of stand factors, topographic factors, and climatic factors were then considered to construct stand biomass mixed-effects models containing environmental fixed effects together with regional random effects. All models were evaluated using fitting statistics and independent validation statistics. The results showed that: (1) in terms of model fitting, the random-effects models accounting for regional effects significantly improved the precision of the ordinary regression model, and among the three classes of models with environmental fixed effects, the regional mixed-effects models with topographic fixed effects had the lowest AIC and BIC values and performed best; (2) in the independent validation, all models were superior to the ordinary regression model except the root biomass mixed-effects model with topographic fixed effects, and compared with the plain regional mixed-effects models, the models with environmental fixed effects showed varied validation statistics across the components, though overall the differences were small; (3) considering both model fitting and independent validation, the mixed-effects model with topographic fixed effects and regional random effects was selected for two of the components, while the plain regional mixed-effects model was selected for stand root biomass.
Kwon, Amy M; Shin, Chol
2016-04-01
It is an important public health problem to identify risk factors for health-related quality of life (HRQoL) among the elderly. We recruited subjects from Ansan, Korea, as a subset of the Korean Genome and Epidemiology Study (KoGES), an ongoing population study, and followed up their sleep quality for 6 years. Mixed effect models were used to estimate the association between sleep quality and HRQoL, and we found that overall HRQoL was significantly lower for the elderly with poor sleep quality after adjustment for significant covariates, although sleep quality showed a significant interaction effect with time for the mental component summary of the SF-12. In particular, the elderly who lacked quality sleep appeared to have good general health, but their functional performance was significantly poorer.
Verkaik-Kloosterman, Janneke; Dodd, Kevin W; Dekkers, Arnold L M; van 't Veer, Pieter; Ocké, Marga C
2011-11-01
Statistical modeling of habitual micronutrient intake from food and dietary supplements using short-term measurements is hampered by heterogeneous variances and multimodality. Summing short-term intakes from food and dietary supplements prior to simple correction for within-person variation ("first add then shrink") may produce estimates of habitual total micronutrient intake so badly biased as to be smaller than estimates of habitual intake from food sources only. A 3-part model using a "first shrink then add" approach is proposed to estimate habitual micronutrient intake from food among nonsupplement users, from food among supplement users, and from supplements. The population distribution of habitual total micronutrient intake is estimated by combining these 3 habitual intake distributions, accounting for possible interdependence between the latter two parts. The new model is an extension of a model developed by the US National Cancer Institute. Habitual total vitamin D intake among young children was estimated using the proposed model and data from the Dutch food consumption survey (n = 1279). The model always produced habitual total intakes similar to or higher than habitual intakes from food sources only, and it preserved the multimodal shape of the observed total vitamin D intake distribution. The proposed method incorporates several sources of covariate information that should provide more precise estimates of the habitual total intake distribution and of the proportion of the population with intakes below/above cutpoint values. The proposed methodology could be useful in other complex situations, e.g. where high concentrations of micronutrients appear in episodically consumed foods.
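The contrast between the two orderings can be sketched with the simplest possible shrinkage estimator, where a short-term measurement is pulled toward the group mean by the factor var_between/(var_between + var_within/n) before the supplement dose is added back; shrinking the noisy sum instead blurs the gap between users and non-users. Variance components are assumed known and all numbers are invented; the actual 3-part model is far richer.

```python
import numpy as np

def shrink(x, var_between, var_within, n_days):
    """Pull each person's short-term mean toward the group mean."""
    factor = var_between / (var_between + var_within / n_days)
    return x.mean() + factor * (x - x.mean())

rng = np.random.default_rng(3)
habitual_food = rng.normal(4.0, 1.0, 1000)                # habitual intake from food
observed = habitual_food + rng.normal(0.0, 2.0, 1000)     # one noisy 24-h recall each
supplement = np.where(rng.random(1000) < 0.3, 5.0, 0.0)   # 30% take a fixed-dose supplement

first_shrink_then_add = shrink(observed, 1.0, 4.0, 1) + supplement
first_add_then_shrink = shrink(observed + supplement, 1.0, 4.0, 1)  # collapses the two modes
print(first_shrink_then_add.std(), (observed + supplement).std())
```

Shrinking first keeps the full 5-unit separation between the supplement-user and non-user modes, which is the multimodality the abstract says must be preserved.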
A Class of New Biased Estimators for Coefficients in Mixed Effect Linear Model
张华伟
2013-01-01
Abstract: For repeated-measures data, in order to deal with multicollinearity, a biased estimator called the s-K-B estimator is proposed for the parameters of the mixed effect linear model. Under certain conditions, the s-K-B estimator is shown to be superior to the ridge estimator, the Stein estimator, the s-K estimator, and the least squares estimator.
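The ridge estimator is the simplest member of the family of biased estimators the abstract compares against. As a reminder of the bias-variance trade-off involved, here is a one-parameter ridge sketch on made-up data (the s-K-B estimator itself is not reproduced here):

```python
def ridge_1d(x, y, k):
    """Ridge estimate for y = beta*x + noise: beta_k = Sxy / (Sxx + k).
    k = 0 recovers ordinary least squares; k > 0 shrinks toward zero."""
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return sxy / (sxx + k)

x = [1.0, 2.0, 3.0, 4.0]   # hypothetical regressor values
y = [2.1, 3.9, 6.2, 7.8]   # hypothetical responses
ols = ridge_1d(x, y, 0.0)
shrunk = ridge_1d(x, y, 5.0)   # biased toward zero, lower variance
```

Larger k trades more bias for less variance, which is exactly the comparison axis along which the abstract ranks the s-K-B, ridge, Stein, and s-K estimators.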
Mixed effects in stochastic differential equation models
Ditlevsen, Susanne; De Gaetano, Andrea
2005-01-01
maximum likelihood; pharmacokinetics; population estimates; random effects; repeated measurements; stochastic processes
吴密霞; 赵延
2014-01-01
Mixed effects models are an important class of statistical models with wide application in many fields. This paper compares two estimators of the variance components under such models: the analysis of variance (ANOVA) estimator and the spectral decomposition (SD) estimator. Using the spectral decomposition of the covariance matrix given by Wu and Wang [A new method of spectral decomposition of covariance matrix in mixed effects models and its applications, Sci. China Ser. A, 2005, 48:1451-1464], two sufficient conditions under which the ANOVA and SD estimators coincide are derived, together with the corresponding statistical properties. The results are applied to the circular component data model and the mixed-effects analysis of variance model.
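For the balanced one-way random-effects model, the ANOVA estimator mentioned in the abstract has a closed form. A stdlib sketch with made-up data (the SD estimator and the coincidence conditions of the paper are not reproduced here):

```python
import statistics

def anova_variance_components(groups):
    """ANOVA (method-of-moments) estimators for the balanced one-way
    random-effects model y_ij = mu + a_i + e_ij.  Note the between-group
    estimate (MSA - MSE)/n can come out negative in small samples."""
    a, n = len(groups), len(groups[0])
    grand = statistics.mean(v for g in groups for v in g)
    means = [statistics.mean(g) for g in groups]
    msa = n * sum((m - grand) ** 2 for m in means) / (a - 1)
    mse = sum((v - m) ** 2 for g, m in zip(groups, means)
              for v in g) / (a * (n - 1))
    return (msa - mse) / n, mse   # (sigma_a^2 hat, sigma_e^2 hat)

# 3 groups, 2 observations each (hypothetical)
va, ve = anova_variance_components([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```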
Stochastic nonlinear mixed effects: a metformin case study.
Matzuka, Brett; Chittenden, Jason; Monteleone, Jonathan; Tran, Hien
2016-02-01
In nonlinear mixed effect (NLME) modeling, the intra-individual variability is a collection of errors due to assay sensitivity, dosing, sampling, as well as model misspecification. Utilizing stochastic differential equations (SDE) within the NLME framework allows the decoupling of the measurement errors from the model misspecification. This leads the SDE approach to be a novel tool for model refinement. Using Metformin clinical pharmacokinetic (PK) data, the process of model development through the use of SDEs in population PK modeling was done to study the dynamics of absorption rate. A base model was constructed and then refined by using the system noise terms of the SDEs to track model parameters and model misspecification. This provides the unique advantage of making no underlying assumptions about the structural model for the absorption process while quantifying insufficiencies in the current model. This article focuses on implementing the extended Kalman filter and unscented Kalman filter in an NLME framework for parameter estimation and model development, comparing the methodologies, and illustrating their challenges and utility. The Kalman filter algorithms were successfully implemented in NLME models using MATLAB with run time differences between the ODE and SDE methods comparable to the differences found by Kakhi for their stochastic deconvolution.
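The paper embeds extended and unscented Kalman filters inside the NLME machinery; those nonlinear variants reduce to the classic linear recursion when the model is linear. As a reminder of that underlying predict/update cycle, here is a scalar linear Kalman filter with hypothetical numbers (not the paper's PK model):

```python
def kalman_1d(y_obs, x0, p0, a, q, r):
    """Scalar linear Kalman filter for x_t = a*x_{t-1} + w (var q),
    y_t = x_t + v (var r); returns the filtered state estimates."""
    x, p, filtered = x0, p0, []
    for y in y_obs:
        x, p = a * x, a * a * p + q          # predict
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p  # update with observation
        filtered.append(x)
    return filtered

est = kalman_1d([1.2, 0.9, 1.1, 1.0], x0=0.0, p0=1.0, a=1.0, q=0.01, r=0.1)
```

The gain k weighs model prediction against observation; in the SDE-NLME setting the system-noise term q plays the model-misspecification-tracking role described in the abstract.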
Adrian Ioana; Tiberiu Socaciu
2013-01-01
The article presents specific aspects of management and models for economic analysis. Thus, we present the main types of economic analysis: statistical analysis, dynamic analysis, static analysis, mathematical analysis, and psychological analysis. We also present the main objects of the analysis: the technological activity analysis of a company, the analysis of the production costs, the economic activity analysis of a company, the analysis of equipment, the analysis of labor productivity, the anal...
Gosho, Masahiko; Maruo, Kazushi; Ishii, Ryota; Hirakawa, Akihiro
2016-11-16
The total score, which is calculated as the sum of scores in multiple items or questions, is repeatedly measured in longitudinal clinical studies. A mixed effects model for repeated measures method is often used to analyze these data; however, if one or more individual items are not measured, the method cannot be directly applied to the total score. We develop two simple and interpretable procedures that infer fixed effects for a longitudinal continuous composite variable. These procedures consider that the items that compose the total score are multivariate longitudinal continuous data and, simultaneously, handle subject-level and item-level missing data. One procedure is based on a multivariate marginalized random effects model with a multiple of Kronecker product covariance matrices for serial time dependence and correlation among items. The other procedure is based on a multiple imputation approach with a multivariate normal model. In terms of the type-1 error rate and the bias of treatment effect in total score, the marginalized random effects model and multiple imputation procedures performed better than the standard mixed effects model for repeated measures analysis with listwise deletion and single imputations for handling item-level missing data. In particular, the mixed effects model for repeated measures with listwise deletion resulted in substantial inflation of the type-1 error rate. The marginalized random effects model and multiple imputation methods provide for a more efficient analysis by fully utilizing the partially available data, compared to the mixed effects model for repeated measures method with listwise deletion.
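The marginalized random effects model above uses a Kronecker product of covariance matrices to combine serial time dependence with correlation among items. A minimal sketch of that construction (the 2x2 matrices below are hypothetical, not from the paper):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists.
    Row (i,k) holds A[i][j]*B[k][l] ordered j-major, the standard layout."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

time_cov = [[1.0, 0.5], [0.5, 1.0]]   # serial correlation (hypothetical)
item_cov = [[1.0, 0.3], [0.3, 1.0]]   # correlation among items (hypothetical)
joint = kron(time_cov, item_cov)      # 4x4 covariance of all time-item pairs
```

The product structure keeps the joint covariance parsimonious: 2 + 2 correlation parameters describe all 4x4 entries instead of estimating each entry freely.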
Size measurement of nano-particles using self-mixing effect
Huarui Wang; Jianqi Shen
2008-01-01
In this letter, the technique of the laser self-mixing effect is employed for nano-particle size analysis. In contrast to photon correlation spectroscopy (PCS) and photon cross-correlation spectroscopy (PCCS), the main advantages of this technique are its sensitivity, compactness, low cost, and simple experimental setup. An improved Kaczmarz projection method is developed for the inversion problem to extract the particle size distribution. The experimental results prove that nano-particle size can be measured reasonably by using the self-mixing effect technique combined with the improved projection algorithm.
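The abstract's inversion step builds on the Kaczmarz method. The plain (unimproved) algorithm is short enough to sketch: cyclically project the current iterate onto the hyperplane of each row of the linear system. The 2x2 system below is a made-up stand-in for the size-inversion problem:

```python
def kaczmarz(A, b, sweeps=200):
    """Plain Kaczmarz iteration for a consistent system A x = b:
    project x onto each row's hyperplane in turn, repeatedly."""
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            step = (bi - dot) / norm2
            x = [xi + step * r for xi, r in zip(x, row)]
    return x

# 2x+y=5, x+3y=10 has solution x=1, y=3
x = kaczmarz([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```

Convergence is geometric for consistent systems; "improved" variants like the paper's typically reorder rows or add regularization to cope with noisy, ill-posed inversion data.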
Ozturk, I.; Ottosen, C.O.; Ritz, Christian
2011-01-01
conditions. Leaf gas exchanges were measured at 11 light intensities from 0 to 1,400 µmol/m2s, at 800 ppm CO2, 25°C, and 65 ± 5% relative humidity. In order to describe the data corresponding to different measurement dates, non-linear mixed-effects regression analysis was used. The model successfully...... described the photosynthetic responses. The analysis indicated significant differences in light-saturated photosynthetic rates and in light compensation points. The cultivar with the lower light compensation point (Escimo) maintained a higher carbon gain despite its lower (but not significant) quantum...... efficiency. The results suggested an acclimation response, as carbon assimilation rates and stomatal conductance at each measurement date were higher for Escimo than for Mercedes. Differences in photosynthesis rates were attributed to the adaptive capacity of the cultivars to light conditions at a specifi...
Mixed-effects and fMRI studies
Friston, K.J; Stephan, K.E; Ellegaard Lund, Torben
2005-01-01
This note concerns mixed-effect (MFX) analyses in multisession functional magnetic resonance imaging (fMRI) studies. It clarifies the relationship between mixed-effect analyses and the two-stage 'summary statistics' procedure (Holmes, A.P., Friston, K.J., 1998. Generalisability, random effects...... and population inference. NeuroImage 7, S754) that has been adopted widely for analyses of fMRI data at the group level. We describe a simple procedure, based on restricted maximum likelihood (ReML) estimates of covariance components, that enables full mixed-effects analyses in the context of statistical...
Amir Farbin
The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...
赵浩彦; 张民侠; 张洁; 方彦; 邵长生; 陈戈萍
2015-01-01
In order to describe the relation between ground diameters and diameters at breast height (DBH) of Pinus massoniana Lamb. trees in Nanjing, mathematical models were established with 11 monadic models (linear equation, power equation, hyperbolic equation and so on) and a multivariate nonlinear mixed effect model, based on data from 531 Pinus massoniana Lamb. trees in 28 20 m×20 m square plots. The results reveal that the correlation indexes of all monadic models were more than 0.8; the fit goodness of the monadic linear model was best (R2=0.919, SEE=1.8548), and its values of TRB, E and P (TRB=-1.34%, E=1.33%, P=7.734%) were the smallest among the monadic models. The adaptability test shows that the monadic linear model has widespread suitability (E=1.125%, P=7.4645%). The established multivariate nonlinear mixed effect model has higher accuracy and fit goodness than the monadic linear model (R2=0.9504, SEE=1.7973, TRB=0.32%, E=0.313%, P=5.6428%). The suitability test reveals that the multivariate nonlinear mixed effect model has wider applicability (E=-0.0888%, P=5.3594%), indicating that the multivariate mixed effect model can fit the relation between ground diameters (d0) and DBHs of Pinus massoniana Lamb. trees in Nanjing more precisely than the monadic linear model.
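The best-performing monadic model above is an ordinary least-squares line DBH = a + b*d0 judged by R². A stdlib sketch of that fit on a handful of made-up diameter pairs (not the study's 531-tree dataset):

```python
import statistics

def fit_linear(d0, dbh):
    """OLS fit of DBH = a + b*d0; returns (a, b, R^2)."""
    mx, my = statistics.mean(d0), statistics.mean(dbh)
    b = sum((x - mx) * (y - my) for x, y in zip(d0, dbh)) / \
        sum((x - mx) ** 2 for x in d0)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(d0, dbh))
    ss_tot = sum((y - my) ** 2 for y in dbh)
    return a, b, 1 - ss_res / ss_tot

d0  = [5.0, 8.0, 11.0, 14.0, 17.0]   # ground diameter, cm (hypothetical)
dbh = [3.8, 6.1, 8.3, 10.9, 13.2]    # DBH, cm (hypothetical)
a, b, r2 = fit_linear(d0, dbh)
```

The mixed effect model of the abstract extends this by letting plot-level random effects shift the fitted relation, which is what improves R² from 0.919 to 0.9504.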
Georgiana Cristina NUKINA
2012-07-01
Full Text Available Through the developed risk analysis model, it is decided whether control measures are suitable for implementation. The analysis also determines whether the benefits of a given control option outweigh the cost of its implementation.
Local mixing effects of screw elements during extrusion
Einde, van den R.M.; Kroon, P.J.; Goot, van der A.J.; Boom, R.M.
2005-01-01
An in-line method was applied to determine the local residence time distribution (RTD) at two places in a completely filled co-rotating twin screw extruder. The axial mixing effects of different types of elements were evaluated. Paddles at +90 degrees induced flow patterns that appear to be circular, both
Communication Analysis modelling techniques
España, Sergio; Pastor, Óscar; Ruiz, Marcela
2012-01-01
This report describes and illustrates several modelling techniques proposed by Communication Analysis; namely Communicative Event Diagram, Message Structures and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated to communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general, since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...
Tran, H. [Laboratoire Interuniversitaire des Systemes Atmospheriques (LISA, CNRS UMR 7583), Universite Paris XII, Avenue du General de Gaulle, Batiment 350, 94010 Creteil Cedex (France); Flaud, P.-M. [Laboratoire de Physico Chimie Moleculaire (LPCM UMR 5803), Universite Bordeaux 1, Bat. A12, 33405 Talence cedex (France); Gabard, T. [Laboratoire de Physique de l' Universite de Bourgogne (LPUB, CNRS UMR 5027), Faculte des Sciences Mirande, 9 Avenue Alain Savary, B.P. 47870, 21078 Dijon Cedex (France); Hase, F. [Forschungszentrum Karlsruhe, Institute of Meteorology and Climate research (IMK), P.O. Box 3640, D-76021 Karlsruhe (Germany); Clarmann, T. von [Forschungszentrum Karlsruhe, Institute of Meteorology and Climate research (IMK), P.O. Box 3640, D-76021 Karlsruhe (Germany); Camy-Peyret, C. [Laboratoire de Physique Moleculaire pour l' Atmosphere et l' Astrophysique (LPMAA, CNRS UMR 7092), Universite Pierre et Marie Curie, 4 Place Jussieu, Tour 13, Case 76, 75252 Paris Cedex 05 (France); Payan, S. [Laboratoire de Physique Moleculaire pour l' Atmosphere et l' Astrophysique (LPMAA, CNRS UMR 7092), Universite Pierre et Marie Curie, 4 Place Jussieu, Tour 13, Case 76, 75252 Paris Cedex 05 (France); Hartmann, J.-M. [Laboratoire Interuniversitaire des Systemes Atmospheriques (LISA, CNRS UMR 7583), Universite Paris XII, Avenue du General de Gaulle, Batiment 350, 94010 Creteil Cedex (France)]. E-mail: hartmann@lisa.univ-paris12.fr
2006-09-15
Absorption spectra of the infrared ν₃ and ν₄ bands of CH₄ perturbed by N₂ over large ranges of pressure and temperature have been measured in the laboratory. A theoretical approach accounting for line mixing is proposed to (successfully) model these experiments. It is similar to that of Pieroni et al. [J Chem Phys 1999;110:7717-32] and is based on state-to-state rotational cross-sections calculated with a semi-classical approach and a few empirical parameters. The latter, which enable switching from the state space to the line space, are deduced from a fit of a single room-temperature spectrum of the ν₃ band at 50 atm. The comparisons between numerous measured and calculated spectra under a vast variety of conditions (ν₃ and ν₄, 0-500 atm, 170-300 K) then demonstrate the quality and consistency of the proposed model. This success is a first validation of a database and associated software built in order to model the shape of CH₄ absorption in air, which are available and suitable for the updating of atmospheric radiative transfer codes. The accuracy of these tools is then further demonstrated using transmission measurements of the Earth atmosphere in the ν₃ region (3 µm) recorded in solar absorption with ground- and balloon-based Fourier transform instruments. Similar tests in the ν₄ region using satellite-based emission spectra and ground-based transmission measurements confirm the model quality, although they show very small line-mixing effects and their masking by strong contributions of other species.
KOELINK, MH; SLOT, M; DEMUL, FFM; GREVE, J; GRAAFF, R; DASSEL, ACM; AARNOUDSE, JG
1992-01-01
A laser Doppler velocimeter that consists of a semiconductor laser coupled to a fiber and that uses the self-mixing effect is presented. The velocimeter can be used for solids and fluids. A theoretical model is developed to describe the self-mixing signals as a function of the amount of feedback int
祖笑锋; 倪成才; Gorden Nigh; 覃先林
2015-01-01
empirical best linear unbiased predictor (EBLUP), and effects of previous observations, age interval of observations, and prediction span on prediction accuracy, based upon height data from 79 dominant trees of ponderosa pine in British Columbia, Canada. [Method] We randomly selected 49 trees for fitting mixed-effects models and 30 trees for validating EBLUP. The base models were Richards, Logistic, and Korf. Fit statistics AIC, BIC and Loglik were used as evaluation criteria, and mean squared prediction error (MSPE) for analyzing effects of previous observations, age interval of observations, and prediction span on prediction accuracy. We used the nlme function in R for model fitting, and the IML procedure in SAS for analyzing EBLUP prediction. To isolate the effect of one factor, we kept the two other factors fixed. [Result] Fitting results showed that the Logistic model had the best criteria among the three models under investigation, indicating that it was the best-fitted model; it was chosen for the EBLUP prediction analysis. In the analysis of EBLUP prediction, we first introduced how to use EBLUP to predict random effects associated with a stand through a detailed example. Data from six trees, which deviated significantly from the population-mean growth process, were used to present relationships among individual growth, population-mean growth, and the adjusted values given by EBLUP. The results indicated that EBLUP prediction could fully follow an individual growth process, given that there were multiple previous observations with long-enough age intervals. The EBLUP analysis also showed that the number of previous observations, the age interval of observations, and the prediction span significantly affected prediction accuracy. MSPE decreased as the number of previous observations increased, particularly when observations were separated far enough in age that they could give more efficient growth information. With respect to prediction span, prediction accuracy decreased as the prediction span extended further away
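The abstract's finding that more previous observations sharpen EBLUP prediction can be seen in the simplest random-intercept case, where EBLUP is a shrinkage formula. A sketch with hypothetical height data (not the ponderosa pine dataset):

```python
def eblup_intercept(obs, pop_mean, var_b, var_e):
    """EBLUP of a subject's random intercept in y_ij = mu + b_i + e_ij:
    the subject-mean deviation is shrunk by a reliability factor that
    grows with the number of previous observations n."""
    n = len(obs)
    dev = sum(obs) / n - pop_mean
    return var_b / (var_b + var_e / n) * dev

# more previous observations -> stronger pull toward the subject's own level
one  = eblup_intercept([12.0], pop_mean=10.0, var_b=4.0, var_e=4.0)
four = eblup_intercept([12.0, 11.5, 12.5, 12.0], 10.0, 4.0, 4.0)
```

With one observation the predicted random effect is half the observed deviation; with four it is 80% of it, mirroring the MSPE improvement the abstract reports as observations accumulate.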
Lee, S.
2011-05-05
The Savannah River Remediation (SRR) Organization requested that Savannah River National Laboratory (SRNL) develop a Computational Fluid Dynamics (CFD) method to mix and blend the miscible contents of the blend tanks to ensure the contents are properly blended before they are transferred from the blend tank, such as Tank 50H, to the Salt Waste Processing Facility (SWPF) feed tank. The work described here consists of two modeling areas: the mixing modeling analysis during the miscible liquid blending operation, and the flow pattern analysis during the transfer operation of the blended liquid. The transient CFD governing equations, consisting of three momentum equations, one mass balance, two turbulence transport equations for kinetic energy and dissipation rate, and one species transport equation, were solved by an iterative technique until the species concentrations of the tank fluid were in equilibrium. The steady-state flow solutions for the entire tank fluid were used for flow pattern analysis, for velocity scaling analysis, and as the initial conditions for transient blending calculations. A series of modeling calculations were performed to estimate the blending times for various jet flow conditions, and to investigate the impact of the cooling coils on the blending time of the tank contents. The modeling results were benchmarked against the pilot-scale test results. All of the flow and mixing models were run with the nozzles installed at the mid-elevation and parallel to the tank wall. From the CFD modeling calculations, the main results are summarized as follows: (1) The benchmark analyses for the CFD flow velocity and blending models demonstrate their consistency with Engineering Development Laboratory (EDL) and literature test results in terms of local velocity measurements and experimental observations. Thus, an application of the established criterion to the SRS full-scale tank will provide a better, physically-based estimate of the required mixing time, and
2015-05-15
MATK provides basic functionality to facilitate model analysis within the Python computational environment. Model analysis setup within MATK includes: - define parameters - define observations - define model (python function) - define samplesets (sets of parameter combinations) Currently supported functionality includes: - forward model runs - Latin-Hypercube sampling of parameters - multi-dimensional parameter studies - parallel execution of parameter samples - model calibration using internal Levenberg-Marquardt algorithm - model calibration using lmfit package - model calibration using levmar package - Markov Chain Monte Carlo using pymc package MATK facilitates model analysis using: - scipy - calibration (scipy.optimize) - rpy2 - Python interface to R
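Among the sampling options listed above, Latin-Hypercube sampling is simple enough to sketch in stdlib Python. This mirrors the idea behind MATK's LHS option in spirit only; it is not MATK's API, and the parameter bounds below are made up:

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Latin-hypercube sample of n points: in each dimension, draw exactly
    one point from each of n equal-probability strata, then shuffle."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)   # decorrelate strata across dimensions
        cols.append(strata)
    return list(zip(*cols))

# 5 samples over two hypothetical parameter ranges
pts = latin_hypercube(5, [(0.0, 1.0), (10.0, 20.0)])
```

Compared with plain Monte Carlo, the stratification guarantees coverage of each parameter's full range even with few samples, which is why it is the default choice for expensive forward model runs.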
Model Checking as Static Analysis
Zhang, Fuyuan
Both model checking and static analysis are prominent approaches to detecting software errors. Model Checking is a successful formal method for verifying properties specified in temporal logics with respect to transition systems. Static analysis is also a powerful method for validating program...... properties which can predict safe approximations to program behaviors. In this thesis, we have developed several static analysis based techniques to solve model checking problems, aiming at showing the link between static analysis and model checking. We focus on logical approaches to static analysis......-calculus can be encoded as the intended model of SFP. Our research results have strengthened the link between model checking and static analysis. This provides a theoretical foundation for developing a unified tool for both model checking and static analysis techniques....
Slavik Stefan
2014-12-01
Full Text Available The term business model has been used in practice for a few years, but companies have created, defined and innovated their models subconsciously since the start of business. Our paper aims to clarify the theory of the business model, namely its definition and all the components that form each business. In the second part, we create an analytical tool and analyze real business models in Slovakia, defining the characteristics of each part of the business model, i.e., customers, distribution, value, resources, activities, cost and revenue. In the last part of our paper, we discuss the most common characteristics, extremes, discrepancies and the most important facts detected in our research.
Survival analysis models and applications
Liu, Xian
2012-01-01
Survival analysis concerns sequential occurrences of events governed by probabilistic laws. Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading on to some of the most advanced topics in survival analysis. Assumes only a minimal knowledge of SAS whilst enablin
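The most basic estimate such a book starts from is the Kaplan-Meier survival curve. The book's illustrations are in SAS; as a language-neutral reminder of the estimator itself, here is a stdlib Python sketch on made-up event times:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from (time, event) pairs, where
    event=1 is an observed event and event=0 is right-censoring.
    At each event time the survival probability drops by the factor
    (at_risk - 1)/at_risk; censored subjects only leave the risk set."""
    pairs = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    pts, s, at_risk = [], 1.0, len(times)
    for t, e in pairs:          # events before censorings at tied times
        if e:
            s *= (at_risk - 1) / at_risk
            pts.append((t, s))
        at_risk -= 1
    return pts

# 5 subjects: events at t=2, 3, 5; censored at t=3 and 8 (hypothetical)
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

Censored observations contribute to the risk set without forcing a drop, which is the defining feature that separates survival analysis from ordinary proportion estimates.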
Wang, Lily; Jia, Peilin; Wolfinger, Russell D; Chen, Xi; Grayson, Britney L; Aune, Thomas M; Zhao, Zhongming
2011-03-01
In genome-wide association studies (GWAS) of complex diseases, genetic variants having real but weak associations often fail to be detected at the stringent genome-wide significance level. Pathway analysis, which tests disease association with combined association signals from a group of variants in the same pathway, has become increasingly popular. However, because of the complexities in genetic data and the large sample sizes in typical GWAS, pathway analysis remains to be challenging. We propose a new statistical model for pathway analysis of GWAS. This model includes a fixed effects component that models mean disease association for a group of genes, and a random effects component that models how each gene's association with disease varies about the gene group mean, thus belongs to the class of mixed effects models. The proposed model is computationally efficient and uses only summary statistics. In addition, it corrects for the presence of overlapping genes and linkage disequilibrium (LD). Via simulated and real GWAS data, we showed our model improved power over currently available pathway analysis methods while preserving type I error rate. Furthermore, using the WTCCC Type 1 Diabetes (T1D) dataset, we demonstrated mixed model analysis identified meaningful biological processes that agreed well with previous reports on T1D. Therefore, the proposed methodology provides an efficient statistical modeling framework for systems analysis of GWAS. The software code for mixed models analysis is freely available at http://biostat.mc.vanderbilt.edu/LilyWang.
Malik, S. [Nebraska U.; Shipsey, I. [Purdue U.; Cavanaugh, R. [Illinois U., Chicago; Bloom, K. [Nebraska U.; Chan, Kai-Feng [Taiwan, Natl. Taiwan U.; D' Hondt, J. [Vrije U., Brussels; Klima, B. [Fermilab; Narain, M. [Brown U.; Palla, F. [INFN, Pisa; Rolandi, G. [CERN; Schörner-Sadenius, T. [DESY
2014-01-01
To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre), Fermilab, and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrades, the education of those new to CMS, and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.
Multiscale Signal Analysis and Modeling
Zayed, Ahmed
2013-01-01
Multiscale Signal Analysis and Modeling presents recent advances in multiscale analysis and modeling using wavelets and other systems. This book also presents applications in digital signal processing using sampling theory and techniques from various function spaces, filter design, feature extraction and classification, signal and image representation/transmission, coding, nonparametric statistical signal processing, and statistical learning theory. This book also: Discusses recently developed signal modeling techniques, such as the multiscale method for complex time series modeling, multiscale positive density estimations, Bayesian Shrinkage Strategies, and algorithms for data adaptive statistics Introduces new sampling algorithms for multidimensional signal processing Provides comprehensive coverage of wavelets with presentations on waveform design and modeling, wavelet analysis of ECG signals and wavelet filters Reviews features extraction and classification algorithms for multiscale signal and image proce...
ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT
Clinton Lum
2002-02-04
The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997, WA-0358), ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999), and the technical development plan, Rock Properties Model Version 3.1, (CRWMS M&O 1999c). The Interim Change Notice (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3
Tang, X.; Zhu, J.; Wang, Z. F.; Gbaguidi, A.; Lin, C. Y.; Xin, J. Y.; Song, T.; Hu, B.
2015-12-01
This study investigates a cross-variable ozone data assimilation (DA) method based on an ensemble Kalman filter (EnKF) that has been validated as an efficient approach for improving ozone forecasts. The main purpose is to delve into the impacts of the cross-variable adjustment of nitrogen oxides (NOx) emissions on the nitrogen dioxide (NO2) forecasts over Beijing and surrounding regions during the 2008 Beijing Olympic Games. A mixed effect on the NO2 forecasts was observed during the application of the cross-variable assimilation approach in real-data assimilation (RDA) experiments. The method improved the NO2 forecast over almost half of the urban sites with reductions of the root mean square errors (RMSEs) by 15-36 % in contrast to big increases of the RMSEs over other urban stations by 56-239 %. Over the urban stations with negative DA impacts, improvement of the NO2 forecasts with 7 % reduction of the RMSEs was noticed during the night and the morning vs. significant deterioration of the forecasts during daytime with 190 % increase of the RMSEs, suggesting the negative DA impacts mainly occurred during daytime. Ideal data assimilation (IDA) experiments with a box model and the same cross-variable assimilation method, as a further investigation, confirmed the mixed effects found in the RDA experiments. An improvement of the NOx emission estimation was obtained from the cross-variable assimilation under relatively small errors in the prior estimation of NOx emissions during daytime, while deterioration of the NOx emission estimation was found under large biases in the prior estimation of NOx emissions during daytime. However, the cross-variable assimilation improved the NOx emission estimations during the night and the morning even with large biases in the prior estimations. The mixed effects observed in the cross-variable assimilation, i.e., positive DA impacts on NO2 forecast over some urban sites, negative DA impacts over the other urban sites and weak DA
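The ensemble Kalman filter at the core of this assimilation study updates a forecast ensemble toward an observation using a gain computed from the ensemble's own spread. A scalar sketch of that analysis step with perturbed observations (all NO2 values below are hypothetical, and the real system is multivariate):

```python
import random, statistics

def enkf_update(ensemble, y_obs, obs_var, seed=0):
    """Scalar EnKF analysis step: each member is shifted toward a
    perturbed copy of the observation by a gain estimated from the
    ensemble forecast variance."""
    rng = random.Random(seed)
    var_f = statistics.variance(ensemble)
    k = var_f / (var_f + obs_var)            # Kalman gain from ensemble spread
    return [x + k * (y_obs + rng.gauss(0, obs_var ** 0.5) - x)
            for x in ensemble]

prior = [40.0, 55.0, 45.0, 60.0, 50.0]       # NO2 forecasts, ppb (hypothetical)
posterior = enkf_update(prior, y_obs=48.0, obs_var=4.0)
```

The update pulls the ensemble mean toward the observation and collapses its spread; in the cross-variable setting of the abstract, an analogous gain maps NO2 innovations onto NOx emission adjustments, which is where the reported mixed effects arise.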
Life sciences domain analysis model.
Freimuth, Robert R; Freund, Elaine T; Schick, Lisa; Sharma, Mukesh K; Stafford, Grace A; Suzek, Baris E; Hernandez, Joyce; Hipp, Jason; Kelley, Jenny M; Rokicki, Konrad; Pan, Sue; Buckler, Andrew; Stokes, Todd H; Fernandez, Anna; Fore, Ian; Buetow, Kenneth H; Klemm, Juli D
2012-01-01
Meaningful exchange of information is a fundamental challenge in collaborative biomedical research. To help address this, the authors developed the Life Sciences Domain Analysis Model (LS DAM), an information model that provides a framework for communication among domain experts and technical teams developing information systems to support biomedical research. The LS DAM is harmonized with the Biomedical Research Integrated Domain Group (BRIDG) model of protocol-driven clinical research. Together, these models can facilitate data exchange for translational research. The content of the LS DAM was driven by analysis of life sciences and translational research scenarios, and the concepts in the model are derived from existing information models, reference models and data exchange formats. The model is represented in the Unified Modeling Language and uses ISO 21090 data types. The LS DAM v2.2.1 comprises 130 classes and covers several core areas including Experiment, Molecular Biology, Molecular Databases and Specimen. Nearly half of these classes originate from the BRIDG model, emphasizing the semantic harmonization between these models. Validation of the LS DAM against independently derived information models, research scenarios and reference databases supports its general applicability to represent life sciences research. The LS DAM provides unambiguous definitions for concepts required to describe life sciences research. The processes established to achieve consensus among domain experts will be applied in future iterations and may be broadly applicable to other standardization efforts. The LS DAM provides common semantics for life sciences research. Through harmonization with BRIDG, it promotes interoperability in translational science.
Frailty Models in Survival Analysis
Wienke, Andreas
2010-01-01
The concept of frailty offers a convenient way to introduce unobserved heterogeneity and associations into models for survival data. In its simplest form, frailty is an unobserved random proportionality factor that modifies the hazard function of an individual or a group of related individuals. "Frailty Models in Survival Analysis" presents a comprehensive overview of the fundamental approaches in the area of frailty models. The book extensively explores how univariate frailty models can represent unobserved heterogeneity. It also emphasizes correlated frailty models as extensions of
Stochastic modeling analysis and simulation
Nelson, Barry L
1995-01-01
A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se
Command Process Modeling & Risk Analysis
Meshkat, Leila
2011-01-01
Commanding errors may be caused by a variety of root causes. It is important to understand the relative significance of each of these causes when making institutional investment decisions. One of these causes is the lack of standardized processes and procedures for command and control. We mitigate this problem by building periodic tables and models corresponding to key functions within the command and control process. These models include simulation analysis and probabilistic risk assessment models.
Model building techniques for analysis.
Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.
2009-09-01
The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM currently is a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that come down to the method by which features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of the creation of the ASM from the DSM.
史惟; 丁俊杰; 杨红; 廖元贵; 朱默; 侯方华; 王艺
2012-01-01
Objective To describe the patterns of gross motor development of children with cerebral palsy (CP) in each level of the Gross Motor Function Classification System (GMFCS) using a nonlinear mixed effect model, as a basis for planning clinical management. Methods Patients with CP were enrolled from 7 rehabilitation centers in Shanghai from August 2000 to December 2007. Severity of CP was based solely on GMFCS level and motor function was assessed with the Gross Motor Function Measure-66 (GMFM-66). The stable limit model was used to construct the gross motor development curve for children in each of the 5 GMFCS levels. The stable limit model has two parameters, corresponding to the limit of motor function and a rate parameter that can be transformed into age-90. Age-90 is the age at which children are expected to achieve 90% of their predicted limit in GMFM-66. In addition, the results of our study were compared with those of the Canadian study. Results A total of 228 children (152 males, 76 females) with CP were enrolled in the study. Types of CP in these children were spastic quadriplegia (n = 63), spastic diplegia (n = 87), spastic hemiplegia (n = 48), athetotic (n = 11), dystonic (n = 4) and ataxic (n = 11). Based on a total of 986 GMFM assessments (4.32 assessments per child), distinct motor development curves were constructed. The limits of GMFM-66 in GMFCS levels I-V were 81.2, 62.4, 52.9, 40.8 and 24.4 points, and the corresponding age-90 values were 3.8, 2.7, 2.1, 2.0 and 1.5 years respectively. The GMFM-66 limit in GMFCS levels I and II of our study was lower than that in the Canadian study, whereas the GMFM-66 limit in GMFCS levels III-V was closer to that in the Canadian study. Moreover, the corresponding age-90 in each of the 5 GMFCS levels in our study was lower than that in the Canadian study. Conclusions Gross motor development reached its limit more quickly in GMFCS levels I and II, although the limit of GMFM-66 was lower than that in the Canadian study. More attention should be paid
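The two-parameter stable limit model described in the abstract above can be sketched under an assumed exponential-saturation form. The study does not specify its exact parameterization, so this functional form is an illustrative assumption; only the limit and age-90 values are taken from the abstract.

```python
import numpy as np

def stable_limit(age, limit, age90):
    """Hypothetical form of a two-parameter stable limit model:
    the expected GMFM-66 score approaches `limit` as age grows,
    and `age90` is the age at which 90% of the limit is reached."""
    k = np.log(10.0) / age90          # rate implied by the age-90 definition
    return limit * (1.0 - np.exp(-k * age))

# GMFCS level I values reported in the abstract: limit 81.2, age-90 3.8 years
score = stable_limit(3.8, limit=81.2, age90=3.8)
print(round(score, 1))  # 90% of 81.2 -> 73.1
```

By construction, evaluating the curve at age-90 returns exactly 90% of the limit, which matches how the abstract defines the parameter.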
Langmuir mixing effects on global climate: WAVEWATCH III in CESM
Li, Qing; Webb, Adrean; Fox-Kemper, Baylor; Craig, Anthony; Danabasoglu, Gokhan; Large, William G.; Vertenstein, Mariana
2016-07-01
Large-Eddy Simulations (LES) have shown the effects of ocean surface gravity waves in enhancing the ocean boundary layer mixing through Langmuir turbulence. Neglecting this Langmuir mixing process may contribute to the common shallow bias in mixed layer depth in regions of the Southern Ocean and the Northern Atlantic in most state-of-the-art climate models. In this study, a third generation wave model, WAVEWATCH III, has been incorporated as a component of the Community Earth System Model, version 1.2 (CESM1.2). In particular, the wave model is now coupled with the ocean model through a modified version of the K-Profile Parameterization (KPP) to approximate the influence of Langmuir mixing. Unlike past studies, the wind-wave misalignment and the effects of Stokes drift penetration depth are considered through empirical scalings based on the rate of mixing in LES. Wave-Ocean only experiments show substantial improvements in the shallow biases of mixed layer depth in the Southern Ocean. Ventilation is enhanced and low concentration biases of pCFC-11 are reduced in the Southern Hemisphere. A majority of the improvements persist in the presence of other climate feedbacks in the fully coupled experiments. In addition, warming of the subsurface water over the majority of global ocean is observed in the fully coupled experiments with waves, and the cold subsurface ocean temperature biases are reduced.
Boosting Early Development: The Mixed Effects of Kindergarten Enrollment Age
Zhang, Jiahui; Xin, Tao
2012-01-01
This study aimed to investigate the effects of kindergarten enrollment age on four-year-old Chinese children's early cognition and problem behavior using multilevel models. The sample comprised 1,391 pre-school children (mean age 4.58 years) from 74 kindergartens in six different provinces. The results demonstrated curvilinear…
Model selection for amplitude analysis
Guegan, Baptiste; Stevens, Justin; Williams, Mike
2015-01-01
Model complexity in amplitude analyses is often a priori under-constrained since the underlying theory permits a large number of amplitudes to contribute to most physical processes. The use of an overly complex model results in reduced predictive power and worse resolution on unknown parameters of interest. Therefore, it is common to reduce the complexity by removing from consideration some subset of the allowed amplitudes. This paper studies a data-driven method for limiting model complexity through regularization during regression in the context of a multivariate (Dalitz-plot) analysis. The regularization technique applied greatly improves the performance. A method is also proposed for obtaining the significance of a resonance in a multivariate amplitude analysis.
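The abstract above describes data-driven complexity reduction through regularization during regression. A generic L1 (LASSO) fit illustrates the idea of shrinking the coefficients of non-contributing terms to zero; the data, the number of candidate terms, and the penalty strength below are invented for illustration and are not the paper's specific method.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 20            # 20 candidate terms, only 3 truly contribute
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(scale=0.5, size=n)

# The L1 penalty drives the coefficients of spurious terms to zero,
# limiting model complexity in a data-driven way
fit = Lasso(alpha=0.1).fit(X, y)
retained = int((np.abs(fit.coef_) > 1e-6).sum())
print(retained)  # number of terms kept by the regularized fit
```

The fit recovers a sparse model: the three true terms survive with coefficients near their generating values, while most of the 17 spurious terms are removed.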
Accelerated life models modeling and statistical analysis
Bagdonavicius, Vilijandas
2001-01-01
Failure Time Distributions: Introduction; Parametric Classes of Failure Time Distributions. Accelerated Life Models: Introduction; Generalized Sedyakin's Model; Accelerated Failure Time Model; Proportional Hazards Model; Generalized Proportional Hazards Models; Generalized Additive and Additive-Multiplicative Hazards Models; Changing Shape and Scale Models; Generalizations; Models Including Switch-Up and Cycling Effects; Heredity Hypothesis; Summary. Accelerated Degradation Models: Introduction; Degradation Models; Modeling the Influence of Explanatory Varia
Z' Mixing Effect in Stueckelberg Extended Effective Theory
ZHANG Ying
2009-01-01
The Z' gauge boson often appears in extended electroweak models. As a neutral gauge boson, Z' mixes with the electroweak bosons Z-γ in both the mass and kinetic terms. A general effective Lagrangian with symmetry SU(2)L × U(1)Y × U(1)' is constructed to describe Z' physics, which includes mass and kinetic mixings among the three neutral bosons. Z' contributions to the mass eigenvalues of the electroweak gauge bosons and to the couplings to fermions are also discussed.
SLURRY PUMP MIXING EFFECTIVENESS IN TANK 50H
Lee, S; Richard Dimenna, R
2008-04-15
Computational Fluid Dynamics (CFD) models of Tank 50 with different numbers of pumps and operational modes, including pump rotation, have been developed to estimate flow patterns and the resultant sludge mixing. Major solid obstructions including the tank wall, the pump housing, the pump columns, and the 82-in central support column were included in the model. Transient analyses with a two-equation turbulence model were performed with FLUENT™, a commercial CFD code. All analyses were based on three-dimensional results. Recommended operational guidance was developed assuming that local fluid velocity and characteristic measures of local turbulence could be used as indicators of sludge suspension and spatial mixing. The calculation results show that three pumps, the maximum number of pumps studied, will give acceptable homogeneous mixing in about 6 minutes in terms of flow patterns and turbulent energy dissipation. These qualitative results are consistent with literature results. Sensitivity calculations have also been performed to assess the impact of different operating modes on sludge suspension and mixing. Two-pump operation provides a marginal level of sludge suspension and turbulent mixing, while one pump does not provide acceptable flow patterns and turbulent eddies for good mixing.
Economic modeling and sensitivity analysis.
Hay, J W
1998-09-01
The field of pharmacoeconomics (PE) faces serious concerns of research credibility and bias. The failure of researchers to reproduce similar results in similar settings, the inappropriate use of clinical data in economic models, the lack of transparency, and the inability of readers to make meaningful comparisons across published studies have greatly contributed to skepticism about the validity, reliability, and relevance of these studies to healthcare decision-makers. Using a case study in the field of lipid PE, two suggestions are presented for generally applicable reporting standards that will improve the credibility of PE. Health economists and researchers should be expected to provide either the software used to create their PE model or a multivariate sensitivity analysis of their PE model. Software distribution would allow other users to validate the assumptions and calculations of a particular model and apply it to their own circumstances. Multivariate sensitivity analysis can also be used to present results in a consistent and meaningful way that will facilitate comparisons across the PE literature. Using these methods, broader acceptance and application of PE results by policy-makers would become possible. To reduce the uncertainty about what is being accomplished with PE studies, it is recommended that these guidelines become requirements of both scientific journals and healthcare plan decision-makers. The standardization of economic modeling in this manner will increase the acceptability of pharmacoeconomics as a practical, real-world science.
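The multivariate sensitivity analysis recommended in the abstract above can be sketched as a Monte Carlo propagation of jointly sampled input uncertainty through a pharmacoeconomic model. The hypothetical lipid-therapy model below, and every distribution and parameter value in it, is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical cost-effectiveness model of a lipid-lowering drug:
# incremental cost-effectiveness ratio (ICER) with uncertain inputs
drug_cost      = rng.normal(1200.0, 150.0, n)   # annual cost, $
events_avoided = rng.beta(2.0, 48.0, n)         # absolute risk reduction
qaly_per_event = rng.normal(0.8, 0.1, n)        # QALYs saved per event

icer = drug_cost / (events_avoided * qaly_per_event)

# A multivariate sensitivity analysis reports the distribution of results
# rather than a single point estimate
lo, med, hi = np.percentile(icer, [2.5, 50.0, 97.5])
print(f"ICER median ${med:,.0f}/QALY (95% interval ${lo:,.0f}-${hi:,.0f})")
```

Reporting the full interval, instead of one base-case number, is what lets readers make the meaningful cross-study comparisons the abstract calls for.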
Timing analysis by model checking
Naydich, Dimitri; Guaspari, David
2000-01-01
The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
Ventilation Model and Analysis Report
V. Chipman
2003-07-18
This model and analysis report develops, validates, and implements a conceptual model for heat transfer in and around a ventilated emplacement drift. This conceptual model includes thermal radiation between the waste package and the drift wall, convection from the waste package and drift wall surfaces into the flowing air, and conduction in the surrounding host rock. These heat transfer processes are coupled and vary both temporally and spatially, so numerical and analytical methods are used to implement the mathematical equations which describe the conceptual model. These numerical and analytical methods predict the transient response of the system, at the drift scale, in terms of spatially varying temperatures and ventilation efficiencies. The ventilation efficiency describes the effectiveness of the ventilation process in removing radionuclide decay heat from the drift environment. An alternative conceptual model is also developed which evaluates the influence of water and water vapor mass transport on the ventilation efficiency. These effects are described using analytical methods which bound the contribution of latent heat to the system, quantify the effects of varying degrees of host rock saturation (and hence host rock thermal conductivity) on the ventilation efficiency, and evaluate the effects of vapor and enhanced vapor diffusion on the host rock thermal conductivity.
Lee, S.
2011-05-17
The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank to ensure uniformity of the discharge stream. Mixing is accomplished with one to four dual-nozzle slurry pumps located within the tank liquid. For this work, a Tank 48 simulation model with a maximum of four slurry pumps in operation has been developed to estimate flow patterns for efficient solid mixing. The modeling calculations were performed by using two modeling approaches. One approach is a single-phase Computational Fluid Dynamics (CFD) model to evaluate the flow patterns and qualitative mixing behaviors for a range of different modeling conditions, since the model was previously benchmarked against the test results. The other is a two-phase CFD model to estimate solid concentrations in a quantitative way by solving the Eulerian governing equations for the continuous fluid and discrete solid phases over the entire fluid domain of Tank 48. The two-phase results should be considered preliminary scoping calculations since the model has not yet been validated against the test results. A series of sensitivity calculations for different numbers of pumps and operating conditions has been performed to provide operational guidance for solids suspension and mixing in the tank. In the analysis, the pump was assumed to be stationary. Major solid obstructions including the pump housing, the pump columns, and the 82 inch central support column were included. The steady state and three-dimensional analyses with a two-equation turbulence model were performed with FLUENT™ for the single-phase approach and CFX for the two-phase approach. Recommended operational guidance was developed assuming that local fluid velocity can be used as a measure of sludge suspension and spatial mixing under the single-phase tank model. For quantitative analysis, a two-phase fluid-solid model was developed for the same modeling conditions as the single
ANALYSIS MODEL FOR INVENTORY MANAGEMENT
CAMELIA BURJA
2010-01-01
Full Text Available The inventory represents an essential component for the assets of the enterprise and the economic analysis gives them special importance because their accurate management determines the achievement of the activity object and the financial results. The efficient management of inventory requires ensuring an optimum level for them, which will guarantee the normal functioning of the activity with minimum inventory expenses and funds which are immobilised. The paper presents an analysis model for inventory management based on their rotation speed and the correlation with the sales volume illustrated in an adequate study. The highlighting of the influence factors on the efficient inventory management ensures the useful information needed to justify managerial decisions, which will lead to a balanced financial position and to increased company performance.
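The rotation-speed measure underlying the analysis model above can be sketched as the standard inventory turnover ratio and its days-of-inventory counterpart; the figures below are invented for illustration.

```python
def rotation_speed(cost_of_goods_sold, avg_inventory):
    """Inventory turnover: how many times the average inventory is
    sold and replaced over the period."""
    return cost_of_goods_sold / avg_inventory

def days_of_inventory(turnover, period_days=365):
    """Average number of days an item stays in inventory."""
    return period_days / turnover

turns = rotation_speed(cost_of_goods_sold=730_000, avg_inventory=100_000)
days = days_of_inventory(turns)
print(turns, round(days, 1))  # 7.3 turns -> 50.0 days
```

A higher rotation speed (fewer days of inventory) means less capital immobilised in stock, which is the efficiency criterion the abstract links to financial position and performance.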
Bayesian modeling in conjoint analysis
Janković-Milić Vesna
2010-01-01
Full Text Available Statistical analysis in marketing is largely influenced by the availability of various types of data. There has been a sudden increase in the number and types of information available to market researchers in the last decade. In such conditions, traditional statistical methods have limited ability to solve problems related to the expression of market uncertainty. The aim of this paper is to highlight the advantages of Bayesian inference as an alternative approach to classical inference. Multivariate statistical methods offer extremely powerful tools to achieve many goals of marketing research. One of these methods is conjoint analysis, which provides a quantitative measure of the relative importance of product or service attributes in relation to the other attributes. The application of this method involves interviewing consumers, where they express their preferences, and statistical analysis provides numerical indicators of each attribute's utility. One of the main objections to the discrete choice method in conjoint analysis is that it estimates utility only at the aggregate level, expressing the average utility for all respondents in the survey. Application of hierarchical Bayesian models enables capturing of individual utility ratings for each attribute level.
Distribution system modeling and analysis
Kersting, William H
2002-01-01
For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures.Distribution System Modeling and Analysis helps prevent those errors. It gives re
Restoration of Tidal Flow to Impounded Salt Marsh Exerts Mixed Effect on Leaf Litter Decomposition
Henry, B. A.; Schade, J. D.; Foreman, K.
2015-12-01
Salt marsh impoundments (e.g. roads, levees) disconnect marshes from ocean tides, which impairs ecosystem services and often promotes invasive species. Numerous restoration projects now focus on removing impoundments. Leaf litter decomposition is a central process in salt marsh carbon and nutrient cycles, and this study investigated the extent to which marsh restoration alters litter decomposition rates. We considered three environmental factors that can potentially change during restoration: salinity, tidal regime, and dominant plant species. A one-month field experiment (Cape Cod, MA) measured decay of litter bags in impounded, restored, and natural marshes under ambient conditions. A two-week lab experiment measured litter decay in controlled incubations under experimental treatments for salinity (1ppt and 30 ppt), tidal regime (inundated and 12 hr wet-dry cycles), and plant species (native Spartina alterniflora and invasive Phragmites australis). S. alterniflora decomposed faster in situ than P. australis (14±1.0% mass loss versus 0.74±0.69%). Corroborating this difference in decomposition, S. alterniflora supported greater microbial respiration during lab incubation, measured as CO2 flux from leaf litter and biological oxygen demand of water containing leached organic matter (OM). However, nutrient analysis of plant tissue and leached OM show P. australis released more nitrogen than S. alterniflora. Low salinity treatments in both lab and field experiments decayed more rapidly than high salinity treatments, suggesting that salinity inhibited microbial activity. Manipulation of inundation regime did not affect decomposition. These findings suggest the reintroduction of tidal flow to an impounded salt marsh can have mixed effects; recolonization by the native cordgrass could supply labile OM to sediment and slow carbon sequestration, while an increase in salinity might inhibit decomposition and accelerate sequestration.
Two-level mixed modeling of longitudinal pedigree data for genetic association analysis
Tan, Q.
2013-01-01
Genetic association analysis on complex phenotypes under a longitudinal design involving pedigrees encounters the problem of correlation within pedigrees, which could affect statistical assessment of the genetic effects on both the mean level of the phenotype and its rate of change over time. … assess the genetic associations with the mean level and the rate of change in a phenotype, both with kinship correlation integrated in the mixed effect models. We apply our method to longitudinal pedigree data to estimate the genetic effects on systolic blood pressure measured over time in large pedigrees. Our results show that the method efficiently handles relatedness in detecting genetic variations that affect the mean level or the rate of change for a phenotype of interest.
Probabilistic Model-Based Safety Analysis
Güdemann, Matthias; 10.4204/EPTCS.28.8
2010-01-01
Model-based safety analysis approaches aim at finding critical failure combinations by analysis of models of the whole system (i.e. software, hardware, failure modes and environment). The advantage of these methods compared to traditional approaches is that the analysis of the whole system gives more precise results. Only few model-based approaches have been applied to answer quantitative questions in safety analysis, often limited to analysis of specific failure propagation models, limited types of failure modes or without system dynamics and behavior, as direct quantitative analysis uses large amounts of computing resources. New achievements in the domain of (probabilistic) model-checking now allow for overcoming this problem. This paper shows how functional models based on synchronous parallel semantics, which can be used for system design, implementation and qualitative safety analysis, can be directly re-used for (model-based) quantitative safety analysis. Accurate modeling of different types of proba...
白伦; 张健
2013-01-01
The factors affecting fishing gear selectivity include both controllable and uncontrollable factors, so the mixed effects model has become a common model for analyzing between-haul variation in gear selectivity studies; however, the concrete implementation of this model has rarely been discussed. Starting from the principles of the mixed effects model in trawl gear selectivity analysis, this paper uses the VBA programming facilities of Excel to automate the model analysis, implementing model fitting, model simplification, and assessment of model adequacy. Example analyses and comparisons show that this Excel VBA implementation not only performs qualitative and quantitative analysis, simplification, and comparison of mixed effects quickly and simply, but also helps in understanding the concrete implementation of the model, so that the model can be modified and extended according to practical needs, providing a reference for future research on fishing gear selectivity.
Fang-Rong Yan
2014-01-01
Full Text Available Population pharmacokinetic (PPK) models play a pivotal role in quantitative pharmacology studies and are classically analyzed by nonlinear mixed-effects models based on ordinary differential equations. This paper describes the implementation of stochastic differential equations (SDEs) in population pharmacokinetic models, where parameters are estimated by a novel approximation of the likelihood function. This approximation is constructed by combining the MCMC method used in nonlinear mixed-effects modeling with the extended Kalman filter used in SDE models. The analysis and simulation results show that the performance of the likelihood approximation for the mixed-effects SDE model and the analysis of population pharmacokinetic data are reliable. The results suggest that the proposed method is feasible for the analysis of population pharmacokinetic data.
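The structural model underlying such population pharmacokinetic analyses can be sketched with a one-compartment IV-bolus model and log-normal inter-individual variability on the parameters. This is only the deterministic ODE backbone with simulated random effects, not the paper's SDE/extended-Kalman-filter estimator, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-compartment IV-bolus PK model with log-normal inter-individual
# variability on clearance (CL) and volume (V) -- the classic structural
# model that nonlinear mixed-effects PPK analysis builds on
n_subj, dose = 50, 100.0
CL = 5.0 * np.exp(rng.normal(0.0, 0.3, n_subj))   # clearance, L/h
V  = 40.0 * np.exp(rng.normal(0.0, 0.2, n_subj))  # volume of distribution, L
t  = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])    # sampling times, h

# Analytic solution C(t) = (dose/V) * exp(-(CL/V) * t), one row per subject
conc = (dose / V)[:, None] * np.exp(-(CL / V)[:, None] * t[None, :])
print(conc.shape)  # (50, 6)
```

The SDE formulation described in the abstract would add a stochastic perturbation to this state equation, which is why filtering (the extended Kalman filter) enters the likelihood approximation.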
Stirring and mixing effects on oscillations and inhomogeneities in the minimal bromate oscillator
Dutt, A. K.; Menzinger, M.
1999-04-01
Stirring and mixing effects on the oscillations and inhomogeneities in the bromate-bromide-cerous system (minimal bromate oscillator) have been investigated in a continuously fed stirred tank reactor (CSTR). A movable microelectrode is used to monitor the inhomogeneities inside the CSTR in an oscillating phase. The results are explained in terms of the theory of imperfect mixing.
A BAYESIAN HIERARCHICAL SPATIAL POINT PROCESS MODEL FOR MULTI-TYPE NEUROIMAGING META-ANALYSIS.
Kang, Jian; Nichols, Thomas E; Wager, Tor D; Johnson, Timothy D
2014-09-01
Neuroimaging meta-analysis is an important tool for finding consistent effects over studies that each usually have 20 or fewer subjects. Interest in meta-analysis in brain mapping is also driven by a recent focus on so-called "reverse inference": whereas traditional "forward inference" identifies the regions of the brain involved in a task, a reverse inference identifies the cognitive processes that a task engages. Such reverse inferences, however, require a set of meta-analyses, one for each possible cognitive domain. However, existing methods for neuroimaging meta-analysis have significant limitations. Commonly used methods for neuroimaging meta-analysis are not model based, do not provide interpretable parameter estimates, and only produce null hypothesis inferences; further, they are generally designed for a single group of studies and cannot produce reverse inferences. In this work we address these limitations by adopting a non-parametric Bayesian approach for meta-analysis of data from multiple classes or types of studies. In particular, foci from each type of study are modeled as a cluster process driven by a random intensity function that is modeled as a kernel convolution of a gamma random field. The type-specific gamma random fields are linked and modeled as a realization of a common gamma random field, shared by all types, that induces correlation between study types and mimics the behavior of a univariate mixed effects model. We illustrate our model on simulation studies and a meta-analysis of five emotions from 219 studies, and check model fit by a posterior predictive assessment. In addition, we implement reverse inference by using the model to predict study type from a newly presented study. We evaluate this predictive performance via leave-one-out cross validation that is efficiently implemented using importance sampling techniques.
Hugo Hesser
2015-05-01
Full Text Available Growth models (also known as linear mixed effects models, multilevel models, and random coefficients models have the capability of studying change at the group as well as the individual level. In addition, these methods have documented advantages over traditional data analytic approaches in the analysis of repeated-measures data. These advantages include, but are not limited to, the ability to incorporate time-varying predictors, handle dependence among repeated observations in a very flexible manner, and to provide accurate estimates with missing data under fairly unrestrictive missing data assumptions. The flexibility of the growth curve modeling approach to the analysis of change makes it the preferred choice in the evaluation of direct, indirect and moderated intervention effects. Although offering many benefits, growth models present challenges in terms of design, analysis and reporting of results. This paper provides a nontechnical overview of growth models in the analysis of change in randomized experiments and advocates for their use in the field of internet interventions. Practical recommendations for design, analysis and reporting of results from growth models are provided.
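A growth model of the kind described above, with a fixed effect of time plus subject-specific random intercepts and slopes, can be sketched with statsmodels; the data are simulated and all values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulated repeated-measures data: 60 subjects, 5 waves each, with
# subject-specific intercepts and slopes (the "random coefficients")
n, waves = 60, 5
subj = np.repeat(np.arange(n), waves)
time = np.tile(np.arange(waves), n)
b0 = rng.normal(0.0, 1.0, n)[subj]          # random intercepts
b1 = rng.normal(0.0, 0.3, n)[subj]          # random slopes
y = 10.0 + 0.5 * time + b0 + b1 * time + rng.normal(0.0, 0.5, subj.size)
df = pd.DataFrame({"subj": subj, "time": time, "y": y})

# Growth model: fixed effect of time, random intercept and slope per subject
fit = smf.mixedlm("y ~ time", df, groups=df["subj"], re_formula="~time").fit()
print(fit.params["time"])   # estimated mean rate of change (close to 0.5)
```

The random slope term is what lets individuals change at different rates, which is exactly the group-versus-individual distinction the abstract emphasizes.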
Geologic Framework Model Analysis Model Report
R. Clayton
2000-12-19
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the
Hierarchical linear modeling of longitudinal pedigree data for genetic association analysis.
Tan, Qihua; B Hjelmborg, Jacob V; Thomassen, Mads; Jensen, Andreas Kryger; Christiansen, Lene; Christensen, Kaare; Zhao, Jing Hua; Kruse, Torben A
2014-01-01
Genetic association analysis on complex phenotypes under a longitudinal design involving pedigrees encounters the problem of correlation within pedigrees, which could affect statistical assessment of the genetic effects. Approaches have been proposed to integrate kinship correlation into the mixed-effect models to explicitly model the genetic relationship. These have proved to be an efficient way of dealing with sample clustering in pedigree data. Although current algorithms implemented in popular statistical packages are useful for adjusting relatedness in the mixed modeling of genetic effects on the mean level of a phenotype, they are not sufficiently straightforward to handle the kinship correlation on the time-dependent trajectories of a phenotype. We introduce a 2-level hierarchical linear model to separately assess the genetic associations with the mean level and the rate of change of a phenotype, integrating kinship correlation in the analysis. We apply our method to the Genetic Analysis Workshop 18 genome-wide association studies data on chromosome 3 to estimate the genetic effects on systolic blood pressure measured over time in large pedigrees. Our method identifies genetic variants associated with blood pressure with estimated inflation factors of 0.99, suggesting that our modeling of random effects efficiently handles the genetic relatedness in pedigrees. Application to simulated data captures important variants specified in the simulation. Our results show that the method is useful for genetic association studies in related samples using longitudinal design.
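The 2-level idea described above can be sketched with a deliberately simplified two-stage fit: per-subject regressions give a mean level (intercept) and a rate of change (slope), and the slopes are then related to genotype. This is an illustration only, on simulated data with made-up effect sizes; the paper's actual estimator additionally integrates kinship correlation, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated longitudinal data: 60 subjects, 5 visits each.
# True model: y = 100 + 0.5*t + 2.0*g*t + random intercept/slope + noise,
# where g is a 0/1 genotype indicator (all values are illustrative).
n_subj, n_visit = 60, 5
t = np.arange(n_visit, dtype=float)
g = rng.integers(0, 2, size=n_subj)
b0 = rng.normal(0.0, 3.0, size=n_subj)   # random intercepts
b1 = rng.normal(0.0, 0.3, size=n_subj)   # random slopes
y = (100.0 + b0[:, None]
     + (0.5 + 2.0 * g[:, None] + b1[:, None]) * t
     + rng.normal(0.0, 1.0, size=(n_subj, n_visit)))

# Level 1: per-subject OLS gives an intercept (mean level) and a slope
# (rate of change) for every subject.
X = np.column_stack([np.ones(n_visit), t])
coefs = np.linalg.lstsq(X, y.T, rcond=None)[0]    # shape (2, n_subj)
slopes = coefs[1]

# Level 2: relate the subject-level slopes to genotype.
effect = slopes[g == 1].mean() - slopes[g == 0].mean()
print(f"estimated genotype effect on rate of change: {effect:.2f}")
```

In a real pedigree analysis the level-2 step would be a mixed model with a kinship-derived covariance rather than a plain group contrast.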
Extensions in model-based system analysis
Graham, Matthew R.
2007-01-01
Model-based system analysis techniques provide a means for determining desired system performance prior to actual implementation. In addition to specifying desired performance, model-based analysis techniques require mathematical descriptions that characterize relevant behavior of the system. The developments of this dissertation give extended formulations for control-relevant model estimation as well as model-based analysis conditions for performance requirements specified as frequency do...
Juul, Rasmus V; Knøsgaard, Katrine R; Olesen, Anne E
2016-01-01
Joint analysis of pain intensity and opioid consumption is encouraged in trials of postoperative pain. However, previous approaches have not appropriately addressed the complexity of their interrelation in time. In this study, we applied a non-linear mixed effects model to simultaneously study pain intensity and opioid consumption in a 4-h postoperative period for 44 patients undergoing percutaneous kidney stone surgery. Analysis was based on 748 Numerical Rating Scale (NRS) scores of pain intensity and 51 observed morphine and oxycodone dosing events. A joint model was developed to describe the recurrent pattern of four key phases determining the development of pain intensity and opioid consumption in time: (A) distribution of pain intensity scores, which followed a truncated Poisson distribution with time-dependent mean score ranging from 0.93 to 2.45; (B) probability of transition to threshold...
Analysis of sensory ratings data with cumulative link models
Christensen, Rune Haubo Bojesen; Brockhoff, Per B.
2013-01-01
Examples of categorical rating scales include discrete preference, liking and hedonic rating scales. Data obtained on these scales are often analyzed with normal linear regression methods or with omnibus Pearson chi2 tests. In this paper we propose to use cumulative link models that allow for regression methods similar to linear models while respecting the categorical nature of the observations. We describe how cumulative link models are related to the omnibus chi2 tests and how they can lead to more powerful tests in the non-replicated setting. For replicated categorical ratings data we present a quasi-likelihood approach and a mixed effects approach, both being extensions of cumulative link models. We contrast population-average and subject-specific interpretations based on these models and discuss how different approaches lead to different tests. In replicated settings, naive tests that ignore...
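A cumulative link (proportional odds) model of the kind discussed above can be fit by direct maximum likelihood. The sketch below uses simulated 5-point ratings driven by one covariate; the data, thresholds, and slope are all illustrative, and a real analysis would use a dedicated implementation such as the authors' R package ordinal.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)

# Simulated 5-point hedonic ratings driven by one covariate x
# (a hypothetical product attribute; all values are illustrative).
n, beta_true = 400, 1.5
x = rng.normal(size=n)
cut_true = np.array([-1.5, -0.5, 0.5, 1.5])          # category thresholds
p_cum = expit(cut_true[None, :] - beta_true * x[:, None])   # P(Y <= j | x)
p = np.diff(np.hstack([np.zeros((n, 1)), p_cum, np.ones((n, 1))]), axis=1)
y = np.array([rng.choice(5, p=pi) for pi in p])

def nll(params):
    # Parameterize thresholds via first cut + positive increments
    # so that they stay ordered during optimization.
    beta, c0 = params[0], params[1]
    cuts = c0 + np.concatenate([[0.0], np.cumsum(np.exp(params[2:]))])
    cum = expit(cuts[None, :] - beta * x[:, None])
    probs = np.diff(np.hstack([np.zeros((n, 1)), cum, np.ones((n, 1))]), axis=1)
    return -np.sum(np.log(probs[np.arange(n), y] + 1e-12))

res = minimize(nll, x0=np.zeros(5), method="BFGS")
print(f"estimated slope: {res.x[0]:.2f} (true {beta_true})")
```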
Prabhpreet Kaur
2014-01-01
Full Text Available In this paper, the four wave mixing effect on sixteen channel wavelength division multiplexing has been compared for different modulation formats at various values of dispersion, core effective area, and channel spacing. The performance of the system has been evaluated in terms of four wave mixing power, BER and Q-factor. The simulations show that with an increase in channel spacing and core effective area of the fiber, signal interference between input signals decreases, and hence the four wave mixing effect also decreases. It has been observed that for duobinary, FWM decreases 1 dBm more than for NRZ. Thus the duobinary modulation format is the most suitable technique to reduce four wave mixing power by varying dispersion from 0 to 4 ps/nm.km, core effective area and channel spacing.
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
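One common approximation to Bayesian model averaging weights candidate models by exp(-ΔBIC/2) under equal prior model probabilities. The abstract does not specify the authors' exact computation, so the snippet below is only a generic illustration with hypothetical BIC values.

```python
import numpy as np

def bma_weights_from_bic(bics):
    """Approximate posterior model probabilities from BIC values,
    assuming equal prior model probabilities (a standard BIC-based
    approximation, not necessarily the authors' procedure)."""
    bics = np.asarray(bics, dtype=float)
    delta = bics - bics.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical BICs for three candidate propensity score models.
w = bma_weights_from_bic([1012.4, 1010.1, 1018.9])
print(np.round(w, 3))   # the lowest-BIC model gets the largest weight
```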
Modeling repeated measurement data for occupational exposure assessment and epidemiology
Peretz, Chava
2004-01-01
Repeated measurements designs occur frequently in the assessment of exposure to toxic chemicals. This thesis deals with the possibilities of using mixed effects models for occupational exposure assessment and in the analysis of exposure-response relationships. The model enables simultaneous estima
The Cosparse Analysis Model and Algorithms
Nam, Sangnam; Elad, Michael; Gribonval, Rémi
2011-01-01
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations, and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model did not get a similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.
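The contrast between the two models can be made concrete with a toy analysis operator: for piecewise-constant signals, the finite-difference operator yields a sparse (cosparse) product. The operator and signal below are our illustrative choices, not the paper's.

```python
import numpy as np

# Synthesis model: x = D z with sparse z.
# Analysis model: Omega @ x is sparse for the signals of interest.
# For piecewise-constant signals a natural analysis operator is the
# finite-difference matrix (this toy choice is ours, not the paper's).
n = 12
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # first differences

x = np.array([2.0] * 4 + [5.0] * 5 + [1.0] * 3)    # two jumps
cosparse_vector = Omega @ x
print(np.count_nonzero(cosparse_vector))            # → 2
```

The signal is not sparse itself, yet its image under the analysis operator is: only the two jump locations are nonzero.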
Statistical Modelling of Wind Profiles - Data Analysis and Modelling
Jónsson, Tryggvi; Pinson, Pierre
The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.
ANALYSIS MODEL FOR INVENTORY MANAGEMENT
CAMELIA BURJA; VASILE BURJA
2010-01-01
The inventory represents an essential component for the assets of the enterprise and the economic analysis gives them special importance because their accurate management determines the achievement...
Analysis of Crosscutting in Model Transformations
Berg, van den K.G.; Tekinerdogan, B.; Nguyen, H.; Aagedal, J.; Neple, T.; Oldevik, J.
2006-01-01
This paper describes an approach for the analysis of crosscutting in model transformations in the Model Driven Architecture (MDA). Software architectures should be amenable to changes in user requirements and technological platforms. Impact analysis of changes can be based on traceability of archite
Two sustainable energy system analysis models
Lund, Henrik; Goran Krajacic, Neven Duic; da Graca Carvalho, Maria
2005-01-01
This paper presents a comparative study of two energy system analysis models, both designed with the purpose of analysing electricity systems with a substantial share of fluctuating renewable energy.
Brief analysis of Blog Websites' business models
魏娟
2009-01-01
Analysis continues using this framework of several major Blogs or Blog websites. From this analysis, three main weblog business models that are currently in operation will be introduced as well as described. As a part of this framework, this paper will also analyze the future viability of the models.
OVERLOAD ANALYSIS OF MARKOVIAN MODELS
Yiqiang Q. ZHAO
1999-01-01
A new procedure for computing stationary probabilities for an overloaded Markovian model is proposed in terms of the rotated Markov chain. There are two advantages to using this procedure: (i) it allows us to approximate an overloaded finite model by using a stable infinite Markov chain, which makes the study easier when the infinite model has a simpler solution; (ii) numerically, it often significantly reduces the number of computations and the requirement of computer memory. Using different examples, we specifically demonstrate the process of implementing this rotating procedure.
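For a finite, stable chain the stationary probabilities referred to above solve pi P = pi with sum(pi) = 1. The snippet below shows the standard direct solve (not the paper's rotation procedure, which targets overloaded and infinite models), using a small made-up transition matrix.

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi, sum(pi) = 1 for an irreducible transition matrix P."""
    n = P.shape[0]
    # Stack the balance equations with the normalization constraint
    # and solve the overdetermined system by least squares.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# A small illustrative birth-death chain (values invented for the demo).
P = np.array([[0.3, 0.7, 0.0],
              [0.4, 0.2, 0.4],
              [0.0, 0.6, 0.4]])
pi = stationary_distribution(P)
print(np.round(pi, 4))
```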
Line mixing effect in the ν2 band of CH3Br
Hmida, F.; Galalou, S.; Tchana, F. Kwabia; Rotger, M.; Aroui, H.
2017-03-01
Line intensities, self-broadening coefficients, as well as line mixing parameters and self-shift coefficients have been measured in the ν2 parallel band of CH3Br at room temperature for 38 rovibrational doublets with rotational quantum numbers 4≤J≤47 and K=0, 1. Measurements were made in the P and R branches located in the spectral range from 1260 to 1332 cm-1 using high-resolution Fourier transform spectra. These spectroscopic parameters have been retrieved from twelve spectra recorded at different pressures of pure CH3Br from 0.2 to 6.8 Torr. The spectra have been analyzed using a multi-pressure non-linear least squares fitting of the Rosenkranz profile, taking into account the line mixing effect. These spectra and the results for pressure broadening coefficients and line intensities obtained with and without taking into account the line mixing effect are compared, analyzed and discussed as a function of the rotational quantum numbers and the branch. Analysis of overlapped lines demonstrates an important mixing effect between the doublet components. On average, the values of these spectroscopic parameters obtained when taking line mixing into account were found to be about 5% smaller than those obtained without taking this effect into account. On average, the accuracies of self-broadening coefficients and line intensities are estimated to be better than 3.8%. The mean accuracies of line-mixing and line-shift data are estimated to be about 20% and 17%, respectively. The measured line mixing parameters are both positive and negative, while most of the lines have a negative shift coefficient.
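The Rosenkranz (first-order line-mixing) profile used in such fits has a standard closed form; the sketch below uses illustrative parameter values and a common normalization convention, which may differ from the paper's.

```python
import numpy as np

def rosenkranz(nu, nu0, S, gamma, Y):
    """First-order line-mixing (Rosenkranz) profile: for Y = 0 it
    reduces to a Lorentzian of intensity S and half-width gamma.
    Normalization convention and units are illustrative."""
    d = nu - nu0
    return (S / np.pi) * (gamma + Y * d) / (gamma**2 + d**2)

nu = np.linspace(1299.0, 1301.0, 5)   # cm^-1, illustrative grid
alpha = rosenkranz(nu, nu0=1300.0, S=1.0, gamma=0.1, Y=0.02)
lorentz = rosenkranz(nu, nu0=1300.0, S=1.0, gamma=0.1, Y=0.0)
# Line mixing makes the profile asymmetric about the line center,
# while the Y = 0 Lorentzian stays symmetric:
print(alpha[0] != alpha[-1], np.allclose(lorentz[0], lorentz[-1]))
```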
Bayesian analysis of CCDM Models
Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.
2016-01-01
Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, leads to negative creation pressure, which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical tools, in light of SN Ia data: Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These approaches allow us to compare models considering goodness of fit and numbe...
Modeling and analysis using hybrid Petri nets
Ghomri, Latéfa
2007-01-01
This paper is devoted to the use of hybrid Petri nets (PNs) for modeling and control of hybrid dynamic systems (HDS). Modeling, analysis and control of HDS attract ever more attention from researchers, and several works have been devoted to these topics. We consider in this paper the extensions of the PN formalism (initially conceived for modeling and analysis of discrete event systems) in the direction of hybrid modeling. We present, first, the continuous PN models. These models are obtained from discrete PNs by the fluidification of the markings. They constitute the first steps in the extension of PNs toward hybrid modeling. Then, we present two hybrid PN models, which differ in the class of HDS they can deal with. The first one is used for deterministic HDS modeling, whereas the second one can deal with HDS with nondeterministic behavior. Keywords: Hybrid dynamic systems; D-elementary hybrid Petri nets; Hybrid automata; Controller synthesis
A Bayesian nonparametric meta-analysis model.
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G
2015-03-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall effect size, such models may be adequate, but for prediction, they surely are not if the effect-size distribution exhibits non-normal behavior. To address this issue, we propose a Bayesian nonparametric meta-analysis model, which can describe a wider range of effect-size distributions, including unimodal symmetric distributions, as well as skewed and more multimodal distributions. We demonstrate our model through the analysis of real meta-analytic data arising from behavioral-genetic research. We compare the predictive performance of the Bayesian nonparametric model against various conventional and more modern normal fixed-effects and random-effects models.
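For contrast with the proposed nonparametric model, the conventional normal random-effects analysis mentioned above can be computed in closed form with the DerSimonian-Laird moment estimator. The effect sizes and within-study variances below are hypothetical.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Classical normal random-effects meta-analysis (the conventional
    model the paper generalizes): DerSimonian-Laird tau^2 and pooled mean."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    return mu_re, tau2

# Hypothetical effect sizes and within-study variances.
mu, tau2 = dersimonian_laird([0.2, 0.5, 0.8, 0.1], [0.04, 0.05, 0.03, 0.06])
print(round(mu, 3), round(tau2, 3))
```

If the true effect-size distribution is skewed or multimodal, this normal model can still pool adequately but will predict poorly, which is the motivation for the nonparametric alternative.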
[Dimensional modeling analysis for outpatient payments].
Guo, Yi-zhong; Guo, Yi-min
2008-09-01
This paper introduces a data warehouse model for outpatient payments, which is designed according to the requirements of the hospital financial management while dimensional modeling technique is combined with the analysis on the requirements. This data warehouse model can not only improve the accuracy of financial management requirements, but also greatly increase the efficiency and quality of the hospital management.
Model correction factor method for system analysis
Ditlevsen, Ove Dalager; Johannesen, Johannes M.
2000-01-01
The Model Correction Factor Method (MCFM) is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in the case of a limit state defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model... surface than existing in the idealized model...
Video Analysis and Modeling in Physics Education
Brown, Doug
2008-03-01
The Tracker video analysis program allows users to overlay simple dynamical models on a video clip. Video modeling offers advantages over both traditional video analysis and animation-only modeling. In traditional video analysis, for example, students measure ``g'' by tracking a dropped or tossed ball, constructing a position or velocity vs. time graph, and interpreting the graphs to obtain initial conditions and acceleration. In video modeling, by contrast, the students interactively construct theoretical force expressions and define initial conditions for a dynamical particle model that synchs with and draws itself on the video. The behavior of the model is thus compared directly with that of the real-world motion. Tracker uses the Open Source Physics code library so sophisticated models are possible. I will demonstrate and compare video modeling with video analysis and I will discuss the advantages of video modeling over animation-only modeling. The Tracker video analysis program is available at: http://www.cabrillo.edu/˜dbrown/tracker/.
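The "traditional video analysis" route described above (track the ball, fit a position-time graph, read off g) can be sketched on synthetic tracked data; the frame rate, noise level, and drop height below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "tracked" positions of a dropped ball, 30 fps video
# (all values invented for illustration).
g_true, y0 = 9.81, 2.0
t = np.arange(0, 0.6, 1 / 30)
y = y0 - 0.5 * g_true * t**2 + rng.normal(0.0, 0.002, size=t.size)

# Traditional video analysis: fit y(t) = c0 + c1*t + c2*t^2 and
# read off g from the quadratic coefficient (c2 = -g/2).
c2, c1, c0 = np.polyfit(t, y, 2)
g_est = -2.0 * c2
print(f"g ~ {g_est:.2f} m/s^2")
```

Video modeling, by contrast, would define the force law up front and compare the simulated particle's positions frame-by-frame against the tracked ones.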
Program Analysis as Model Checking
Olesen, Mads Chr.
and abstract interpretation. Model checking views the program as a finite automaton and tries to prove logical properties over the automaton model, or present a counter-example if not possible, with a focus on precision. Abstract interpretation translates the program semantics into abstract semantics... problems as the other by a reformulation. This thesis argues that there is even a convergence on the practical level, and that a generalisation of the formalism of timed automata into lattice automata captures key aspects of both methods; indeed, model checking timed automata can be formulated in terms of an abstract interpretation. For the generalisation to lattice automata to have benefit, it is important that efficient tools exist. This thesis presents multi-core tools for efficient and scalable reachability and Büchi emptiness checking of timed/lattice automata. Finally, a number of case studies...
CMS Data Analysis School Model
Malik, Sudhir; Cavanaugh, R; Bloom, K; Chan, Kai-Feng; D'Hondt, J; Klima, B; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T
2014-01-01
To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born three years ago at the LPC (LHC Physics Center), Fermilab, and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS around the globe, CMS is trying to engage the collaboration's discovery potential and maximize the physics output. As a bigger goal, CMS is striving to nurture and increase engagement of the myriad talents of CMS in the development of physics, service, upgrade, and education of those new to CMS and the caree...
Qualitative Analysis of Somitogenesis Models
Maschke-Dutz E.
2007-12-01
Full Text Available Although recently the properties of a single somite cell oscillator have been intensively investigated, the system-level nature of the segmentation clock remains largely unknown. To elaborate qualitatively on this question, we examine the possibility of transforming a well-known time-delay somite cell oscillator into a dynamical system of differential equations allowing qualitative analysis.
System of systems modeling and analysis.
Campbell, James E.; Anderson, Dennis James; Longsine, Dennis E. (Intera, Inc., Austin, TX); Shirah, Donald N.
2005-01-01
This report documents the results of an LDRD program entitled 'System of Systems Modeling and Analysis' that was conducted during FY 2003 and FY 2004. Systems that themselves consist of multiple systems (referred to here as System of Systems or SoS) introduce a level of complexity to systems performance analysis and optimization that is not readily addressable by existing capabilities. The objective of the 'System of Systems Modeling and Analysis' project was to develop an integrated modeling and simulation environment that addresses the complex SoS modeling and analysis needs. The approach to meeting this objective involved two key efforts. First, a static analysis approach, called state modeling, has been developed that is useful for analyzing the average performance of systems over defined use conditions. The state modeling capability supports analysis and optimization of multiple systems and multiple performance measures or measures of effectiveness. The second effort involves time simulation which represents every system in the simulation using an encapsulated state model (State Model Object or SMO). The time simulation can analyze any number of systems including cross-platform dependencies and a detailed treatment of the logistics required to support the systems in a defined mission.
Ranking Models in Conjoint Analysis
K.Y. Lam (Kar Yin); A.J. Koning (Alex); Ph.H.B.F. Franses (Philip Hans)
2010-01-01
textabstractIn this paper we consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we do not need to compute high-dimensional integrals. We extend the approximation technique proposed by
Analysis Models for Security Protocols
Corin, R.J.; Corin, Ricardo Javier
2006-01-01
In this thesis, we present five significant, orthogonal extensions to the Dolev Yao model. Each extension considers a more realistic setting, closer to the real world, thus providing a stronger security guarantee. We provide examples both from the literature and from industrial case studies to show
Hierarchical modeling and analysis for spatial data
Banerjee, Sudipto; Gelfand, Alan E
2003-01-01
Among the many uses of hierarchical modeling, their application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and dat
Hypersonic - Model Analysis as a Service
Acretoaie, Vlad; Störrle, Harald
2014-01-01
Hypersonic is a Cloud-based tool that proposes a new approach to the deployment of model analysis facilities. It is implemented as a RESTful Web service API offering analysis features such as model clone detection. This approach allows the migration of resource-intensive analysis algorithms from monolithic desktop modeling tools to a wide range of mobile and Web-based clients. As a technology demonstrator, a Web application acting as a client for the Hypersonic API has been implemented and made publicly available.
周萌; 何平
2011-01-01
Based on a water model at one-fifth scale of a real CAS ladle, the improvement of the mixing effect in the CAS-OB process using double-nozzle bottom blowing technology was researched by analyzing the flow field and determining the mixing time. Results show that double-nozzle bottom blowing technology could greatly reduce the mixing time, change the flow field in the low-intensity stirring area of the ladle, and improve the mixing effect in the middle and lower parts of the ladle. An empirical formula expressing the effect of the bottom blowing flow rate Q and the depth of the inserting snorkel H on the mixing time, (T - T0)/T0 = 0.59 Q^(-0.74) (H/HL)^0.52, is obtained by dimensionless analysis.
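The empirical correlation reported in the abstract is easy to evaluate directly. Units for Q and the reference time T0 are not given in the abstract, so the function below treats its inputs as dimensionless.

```python
def mixing_time_ratio(Q, H_over_HL):
    """Empirical correlation from the abstract:
    (T - T0)/T0 = 0.59 * Q**-0.74 * (H/HL)**0.52.
    Units of Q and the reference time T0 are not specified in the
    abstract, so the inputs here are treated as dimensionless."""
    return 0.59 * Q**-0.74 * H_over_HL**0.52

# A larger bottom-blowing flow rate Q shortens mixing relative to baseline:
low_q, high_q = mixing_time_ratio(1.0, 0.5), mixing_time_ratio(2.0, 0.5)
print(low_q > high_q)   # → True
```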
Mixing Effect of Polyoxyethylene-Type Nonionic Surfactants on the Liquid Crystalline Structures.
Kunieda; Umizu; Yamaguchi
1999-10-01
An effective cross-sectional area per surfactant molecule at hydrophobic interfaces of aggregates, a(S), in hexagonal (H(1)) and lamellar (L(alpha)) liquid crystals was calculated in homogeneous and mixed polyoxyethylene dodecyl ether systems as a function of polyoxyethylene (EO) chain length by means of small-angle X-ray scattering. The a(S) increases with increasing EO chain length. The a(S) in the mixed surfactant system is considerably smaller than that in the single surfactant system, even if the average EO chain length is the same. The reduction of a(S) is larger than that predicted by ideal mixing of the surfactants. Moreover, the more the EO chain lengths of the surfactants differ, the smaller the a(S). The shapes of surfactant self-organizing structures may be governed by the balance of the attractive and repulsive forces acting at the hydrophobic interfaces of the aggregates. Based on this consideration, the mixing effect of surfactants with different EO chain lengths on the a(S) in the L(alpha) phase was discussed. It is considered that the surfactant molecules are tightly packed in the aggregates, since the reduction in repulsive force takes place in the part of the hydrophilic surfactant's EO chain that extends beyond the shorter EO chain of the lipophilic one. The lower surface tensions, the better stability of macroemulsions, and the large solubilizing capacity of microemulsions result from this mixing effect. Copyright 1999 Academic Press.
Representing uncertainty on model analysis plots
Smith, Trevor I.
2016-12-01
Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model. Unfortunately, Bao's original presentation of the model plot did not include a way to represent uncertainty in these measurements. I present details of a method to add error bars to model plots by expanding the work of Sommer and Lindell. I also provide a template for generating model plots with error bars.
Modeling Decisional Situations Using Morphological Analysis
2007-01-01
Full Text Available This paper focuses on models of financial decisions in small and medium enterprises. The presented models are part of a decision support system presented in the PhD dissertation. One of the modeling techniques used for model creation and development is morphological analysis. This technique is used for model scale reduction not by reducing the number of variables involved but by reducing the number of possible combinations between variables. In this paper we show how this approach can be used in modeling financial decision problems.
Three-dimensional model analysis and processing
Yu, Faxin; Luo, Hao; Wang, Pinghui
2011-01-01
This book focuses on five hot research directions in 3D model analysis and processing in computer science: compression, feature extraction, content-based retrieval, irreversible watermarking and reversible watermarking.
Identifying differentially methylated genes using mixed effect and generalized least square models
2009-01-01
Background: DNA methylation plays an important role in the process of tumorigenesis. Identifying differentially methylated genes or CpG islands (CGIs) associated with genes between two tumor subtypes is thus an important biological question. The methylation status of all CGIs in the whole genome can be assayed with differential methylation hybridization (DMH) microarrays. However, patient samples or cell lines are heterogeneous, so their methylation pattern may be very different. In a...
The Utility and Application of Mixed-Effects Models in Second Language Research
Linck, Jared A.; Cunnings, Ian
2015-01-01
Second language acquisition researchers often face particular challenges when attempting to generalize study findings to the wider learner population. For example, language learners constitute a heterogeneous group, and it is not always clear how a study's findings may generalize to other individuals who may differ in terms of language background…
Mixed-Effects Modeling of the Influence of Midazolam on Propofol Pharmacokinetics
Vuyk, Jaap; Lichtenbelt, Bart Jan; Olofsen, Erik; van Kleef, Jack W.; Dahan, Albert
BACKGROUND: The combined administration of anesthetics has been associated with pharmacokinetic interactions that induce concentration changes of up to 30%. Midazolam is often used as a preoperative sedative in advance of a propofol-based anesthetic. In this study, we identified the influence of
Analysis model of structure-HDS
无
2000-01-01
Presents the model established for Structure-HDS (hydraulic damper system) analysis on the basis of the theoretical analysis model of non-compressed fluid in a round pipe, with a uniform velocity used as the basic variable and pressure losses resulting from cross-section changes of the fluid route taken into consideration. This provides the necessary basis for research on earthquake responses of a structure with a spacious first story equipped with HDS at the first floor.
Combustion instability modeling and analysis
Santoro, R.J.; Yang, V.; Santavicca, D.A. [Pennsylvania State Univ., University Park, PA (United States)] [and others]
1995-10-01
It is well known that the two key elements for achieving low emissions and high performance in a gas turbine combustor are to simultaneously establish (1) a lean combustion zone for maintaining low NOx emissions and (2) rapid mixing for good ignition and flame stability. However, these requirements, when coupled with the short combustor lengths used to limit the residence time for NO formation typical of advanced gas turbine combustors, can lead to problems regarding unburned hydrocarbons (UHC) and carbon monoxide (CO) emissions, as well as the occurrence of combustion instabilities. Clearly, the key to successful gas turbine development is based on understanding the effects of geometry and operating conditions on combustion instability, emissions (including UHC, CO and NOx) and performance. The concurrent development of suitable analytical and numerical models that are validated with experimental studies is important for achieving this objective. A major benefit of the present research will be to provide for the first time an experimentally verified model of emissions and performance of gas turbine combustors.
Flux Analysis in Process Models via Causality
Kahramanoğullari, Ozan
2010-01-01
We present an approach for flux analysis in process algebra models of biological systems. We perceive flux as the flow of resources in stochastic simulations. We resort to an established correspondence between event structures, a broadly recognised model of concurrency, and state transitions of process models, seen as Petri nets. We show that in this way we can extract the causal resource dependencies in simulations between individual state transitions as partial orders of events. We propose transformations on the partial orders that provide means for further analysis, and introduce a software tool, which implements these ideas. By means of an example of a published model of the Rho GTP-binding proteins, we argue that this approach can provide the substitute for flux analysis techniques on ordinary differential equation models within the stochastic setting of process algebras.
Applied research in uncertainty modeling and analysis
Ayyub, Bilal
2005-01-01
Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty was considered synonymous with the random, the stochastic, the statistical, or the probabilistic. Since the early sixties views on uncertainty have become more heterogeneous, and in the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by engineers and scientists. The tool or method chosen to model uncertainty in a specific context should depend on the features of the phenomenon under consideration, on what is known about the system, and on what causes the uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...
Model Checking as Static Analysis: Revisited
Zhang, Fuyuan; Nielson, Flemming; Nielson, Hanne Riis
2012-01-01
We show that the model checking problem of the μ-calculus can be viewed as an instance of static analysis. We propose Succinct Fixed Point Logic (SFP) within our logical approach to static analysis as an extension of Alternation-free Least Fixed Point Logic (ALFP). We generalize the notion...
Analysis of variance for model output
Jansen, M.J.W.
1999-01-01
A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va
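The variance decomposition sketched above can be illustrated with a Monte Carlo estimator; the toy model, uniform inputs, and sample size below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy deterministic model Y = f(X1, X2) with stochastically independent
# inputs; the estimator below recovers the total-effect variance
# component of each input as 0.5 * E[(f(A) - f(AB_i))^2].
rng = np.random.default_rng(0)

def f(x):
    return x[:, 0] + 0.5 * x[:, 1] ** 2

N = 100_000
A = rng.uniform(-1.0, 1.0, size=(N, 2))
B = rng.uniform(-1.0, 1.0, size=(N, 2))

yA = f(A)
var_Y = yA.var()

shares = []
for i in range(2):
    AB = A.copy()
    AB[:, i] = B[:, i]                 # resample only input i
    VTi = 0.5 * np.mean((yA - f(AB)) ** 2)
    shares.append(VTi / var_Y)
print([round(s, 3) for s in shares])
```

For this toy model the contributions of the two inputs are independent, so the two shares together account for essentially all of Var(Y).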
Multistructure Statistical Model Applied To Factor Analysis
Bentler, Peter M.
1976-01-01
A general statistical model for the multivariate analysis of mean and covariance structures is described. Matrix calculus is used to develop the statistical aspects of one new special case in detail. This special case separates the confounding of principal components and factor analysis. (DEP)
Analysis and evaluation of collaborative modeling processes
Ssebuggwawo, D.
2012-01-01
Analysis and evaluation of collaborative modeling processes is confronted with many challenges. On the one hand, many systems design and re-engineering projects require collaborative modeling approaches that can enhance their productivity. But, such collaborative efforts, which often consist of the
A Bayesian Nonparametric Meta-Analysis Model
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.
2015-01-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…
Perturbation analysis of nonlinear matrix population models
Hal Caswell
2008-03-01
Full Text Available Perturbation analysis examines the response of a model to changes in its parameters. It is commonly applied to population growth rates calculated from linear models, but there has been no general approach to the analysis of nonlinear models. Nonlinearities in demographic models may arise due to density-dependence, frequency-dependence (in 2-sex models), feedback through the environment or the economy, and recruitment subsidy due to immigration, or from the scaling inherent in calculations of proportional population structure. This paper uses matrix calculus to derive the sensitivity and elasticity of equilibria, cycles, ratios (e.g. dependency ratios), age averages and variances, temporal averages and variances, life expectancies, and population growth rates, for both age-classified and stage-classified models. Examples are presented, applying the results to both human and non-human populations.
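For the linear special case mentioned above (population growth rates from linear models), the sensitivity and elasticity formulas reduce to standard eigenvector identities; the stage-classified matrix below is a hypothetical illustration, not taken from the paper.

```python
import numpy as np

# Classical result: dλ/da_ij = v_i * w_j / <v, w>, where w and v are the
# right and left eigenvectors of the dominant eigenvalue λ of the
# projection matrix A.  Elasticities (a_ij/λ) * dλ/da_ij sum to 1.
A = np.array([[0.0, 1.5, 2.0],
              [0.3, 0.0, 0.0],
              [0.0, 0.5, 0.8]])

vals, W = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals[k].real                    # asymptotic population growth rate
w = W[:, k].real                      # right eigenvector: stable structure

valsT, V = np.linalg.eig(A.T)
v = V[:, np.argmax(valsT.real)].real  # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)          # sensitivities dλ/da_ij
E = (A / lam) * S                     # elasticities
print(round(lam, 3), round(E.sum(), 6))
```

The nonlinear cases treated in the paper replace this eigenvalue formula with matrix-calculus derivatives of equilibria, cycles, and the other outputs listed above.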
Numerical modeling techniques for flood analysis
Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.
2016-12-01
Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models, and the possible improvements over these models through 3D modeling, are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found, which can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for the floodplain should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance findings on the causes and effects of flooding.
Bianciardi, M; Cerasa, A; Patria, F; Hagberg, G E
2004-07-01
With the introduction of event-related designs in fMRI, it has become crucial to optimize design efficiency and temporal filtering to detect activations at the 1st level with high sensitivity. We investigate the relevance of these issues for fMRI population studies, that is, 2nd-level analysis, for a set of event-related fMRI (er-fMRI) designs with different 1st-level efficiencies, adopting three distinct 1st-level filtering strategies as implemented in SPM99, SPM2, and FSL3.0. By theory, experiments, and simulations using physiological fMRI noise, we show that both design and filtering impact the outcome of the statistical analysis, not only at the 1st but also at the 2nd level. There are several reasons behind this finding. First, sensitivity is affected by both design and filtering, since the scan-to-scan variance, that is the fixed effect, is not negligible with respect to the between-subject variance, that is the random effect, in er-fMRI population studies. The impact of the fixed effects error on the sensitivity of the mixed effects analysis can be mitigated by an optimal choice of er-fMRI design and filtering. Moreover, the accuracy of the 1st- and 2nd-level parameter estimates also depends on design and filtering; in particular, we show that inaccuracies caused by the presence of residual noise autocorrelations can be constrained by designs that have hemodynamic responses with a Gaussian distribution. In conclusion, designs with both good efficiency and decorrelating properties, such as the geometric or Latin square probability distributions, combined with the "whitening" filters of SPM2 and FSL3.0, give the best result, both for 1st- and 2nd-level analysis of er-fMRI studies.
Two-level mixed modeling of longitudinal pedigree data for genetic association analysis
Tan, Q.
2013-01-01
...... of follow-up. Approaches have been proposed to integrate kinship correlation into the mixed effect models to explicitly model the genetic relationship, which has been proven an efficient way of dealing with sample clustering in pedigree data. Although useful for adjusting relatedness in the mixed...... assess the genetic associations with the mean level and the rate of change in a phenotype, both with kinship correlation integrated in the mixed effect models. We apply our method to longitudinal pedigree data to estimate the genetic effects on systolic blood pressure measured over time in large pedigrees...... Our results show that the method efficiently handles relatedness in detecting genetic variations that affect the mean level or the rate of change of a phenotype of interest....
Adsorption modeling for macroscopic contaminant dispersal analysis
Axley, J.W.
1990-05-01
Two families of macroscopic adsorption models are formulated, based on fundamental principles of adsorption science and technology, that may be used for macroscopic (such as whole-building) contaminant dispersal analysis. The first family of adsorption models - the Equilibrium Adsorption (EA) Models - are based upon the simple requirement of equilibrium between adsorbent and room air. The second family - the Boundary Layer Diffusion Controlled Adsorption (BLDC) Models - add to the equilibrium requirement a boundary layer model for diffusion of the adsorbate from the room air to the adsorbent surface. Two members of each of these families are explicitly discussed, one based on the linear adsorption isotherm model and the other on the Langmuir model. The linear variants of each family are applied to model the adsorption dynamics of formaldehyde in gypsum wall board and compared to measured data.
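A minimal sketch of the linear-isotherm Equilibrium Adsorption (EA) variant, assuming a single well-mixed zone with ventilation rate Q, volume V, adsorbent area A, and linear isotherm slope K (all values hypothetical, not from the paper): with instantaneous equilibrium m = K·C, sorption simply adds apparent capacity A·K to the room mass balance.

```python
import numpy as np

# Well-mixed zone with supply/exhaust flow Q and volume V; the adsorbent
# (area A, linear isotherm m = K*C) stays in equilibrium with room air,
# giving the mass balance (V + A*K) dC/dt = Q * (C_in - C).
V, Q, A, K = 30.0, 0.02, 60.0, 0.5   # m^3, m^3/s, m^2, m (hypothetical)
C_in = 0.1                           # supply-air concentration, mg/m^3
dt, T = 10.0, 6 * 3600.0             # Euler time step and horizon, s

def simulate(capacity):
    C, hist = 0.0, []
    for _ in range(int(T / dt)):
        C += dt * Q * (C_in - C) / capacity
        hist.append(C)
    return np.array(hist)

no_sorption = simulate(V)            # bare room
with_sorption = simulate(V + A * K)  # EA model: slower rise, same limit
print(no_sorption[-1], with_sorption[-1])
```

The adsorbing room approaches the same steady state, but with time constant (V + A·K)/Q instead of V/Q; the BLDC family adds a boundary-layer diffusion resistance on top of this.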
Meta-analysis of Complex Diseases at Gene Level with Generalized Functional Linear Models.
Fan, Ruzong; Wang, Yifan; Chiu, Chi-Yang; Chen, Wei; Ren, Haobo; Li, Yun; Boehnke, Michael; Amos, Christopher I; Moore, Jason H; Xiong, Momiao
2016-02-01
We developed generalized functional linear models (GFLMs) to perform a meta-analysis of multiple case-control studies to evaluate the relationship of genetic data to dichotomous traits adjusting for covariates. Unlike the previously developed meta-analysis for sequence kernel association tests (MetaSKATs), which are based on mixed-effect models to make the contributions of major gene loci random, GFLMs are fixed models; i.e., genetic effects of multiple genetic variants are fixed. Based on GFLMs, we developed chi-squared-distributed Rao's efficient score test and likelihood-ratio test (LRT) statistics to test for an association between a complex dichotomous trait and multiple genetic variants. We then performed extensive simulations to evaluate the empirical type I error rates and power performance of the proposed tests. Rao's efficient score test statistics of GFLMs are very conservative and have higher power than MetaSKATs when some causal variants are rare and some are common. When the causal variants are all rare [i.e., minor allele frequencies (MAF) analysis of eight European studies and detected significant association for 18 genes (P < 3.10 × 10^(-6)), tentative association for 2 genes (HHEX and HMGA2; P ≈ 10^(-5)), and no association for 2 genes, while MetaSKATs detected none. In addition, the traditional additive-effect model detects association at gene HHEX. GFLMs and related tests can analyze rare or common variants or a combination of the two and can be useful in whole-genome and whole-exome association studies.
Simulation modeling and analysis with Arena
Altiok, Tayfur
2007-01-01
Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. Introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...
Research of chemical induction unit on mixing effect and chlorine saving
Jiao Zhongzhi; Chen Zhonglin; Li ZuoLiang; Xue Zhu; Yuan Xing; Li Guibai
2007-01-01
Rapid mixing and chlorine saving are two important problems that most drinking water utilities focus on, and this paper adopts a chemical induction unit and compares it with a water jet injector to study the merits of the chemical induction unit. The experiment chose the coefficient of variability of chlorine concentration to evaluate the mixing effect and used chlorine consumption to compare the two devices. Distribution reservoir experiments show that the chemical induction unit can completely mix chlorine in less than 6.2 seconds, whereas the water jet injector cannot completely mix it in 3 minutes. Mixing pool experiments show that the chemical induction unit can save chlorine compared with the water jet injector, and can save more as more chlorine is consumed.
Solvent mixing effects on the electrode characteristics of secondary Li/TiS2 cells
Matsuda, Y.; Morita, M.; Takata, K.-I.
1984-09-01
Charge-discharge characteristics of Li negative and TiS2 positive electrodes have been studied in LiBF4 solutions of tetrahydrofuran (THF), 1,3-dioxolane (DOL), and their mixture. Solvent mixing effects on the charge-discharge characteristics were investigated. Charge-discharge potentials at the Li electrode in THF-DOL/LiBF4 were more stable than those in THF/LiBF4 and DOL/LiBF4, and they showed the least variation with the cycles. Ionic behavior in the solutions is discussed through the data of current-potential curves of the electrodes and electrolytic conductivity of the solutions. With respect to the TiS2 electrode, charging efficiency in THF was seemingly the best, but the efficiency variation with the cycle in DOL-THF was very small. It was concluded that advantages of solvent mixing should be evaluated totally in Li/TiS2 cells.
A multivariate nonlinear mixed effects method for analyzing energy partitioning in growing pigs
Strathe, Anders Bjerring; Danfær, Allan Christian; Chwalibog, André
2010-01-01
Simultaneous equations have become increasingly popular for describing the effects of nutrition on the utilization of ME for protein (PD) and lipid deposition (LD) in animals. The study developed a multivariate nonlinear mixed effects (MNLME) framework and compared it with an alternative method...... for estimating parameters in simultaneous equations that described energy metabolism in growing pigs, and then proposed new PD and LD equations. The general statistical framework was implemented in the NLMIXED procedure in SAS. Alternative PD and LD equations were also developed, which assumed...... that the instantaneous response curve of an animal to varying energy supply followed the law of diminishing returns behavior. The Michaelis-Menten function was adopted to represent a biological relationship in which the affinity constant (k) represented the sensitivity of PD to ME above maintenance. The approach...
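The diminishing-returns relationship described above can be sketched as a Michaelis-Menten response of protein deposition (PD) to metabolizable energy (ME) above maintenance; the values of PDmax and k below are hypothetical, not estimates from the paper.

```python
import numpy as np

# Michaelis-Menten response of protein deposition to ME above
# maintenance; the affinity constant k is the ME level at which PD
# reaches half its asymptote PDmax.  Parameter values are illustrative.
PDmax, k = 180.0, 12.0               # g/d and MJ/d (hypothetical)

def pd(me_above_maint):
    return PDmax * me_above_maint / (k + me_above_maint)

me = np.linspace(0.0, 40.0, 5)
print([round(float(pd(x)), 1) for x in me])
```

In the mixed-effects setting of the paper, parameters such as k would carry animal-specific random effects estimated jointly with the energy-partitioning equations.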
Stochastic modelling and diffusion modes for POD models and small-scale flow analysis
Resseguier, Valentin; Heitz, Dominique; Chapron, Bertrand
2016-01-01
We introduce a stochastic modelling in the constitution of fluid flow reduced-order models. This framework introduces a spatially inhomogeneous random field to represent the unresolved small-scale velocity component. Such a decomposition of the velocity in terms of a smooth large-scale velocity component and a rough, highly oscillating, component gives rise, without any supplementary assumption, to a large-scale flow dynamics that includes a modified advection term together with an inhomogeneous diffusion term. Both of those terms, related respectively to turbophoresis and mixing effects, depend on the variance of the unresolved small-scale velocity component. They bring to the reduced system an explicit subgrid term that enables the action of the truncated modes to be taken into account. Besides, a decomposition of the variance tensor in terms of diffusion modes provides a meaningful statistical representation of the stationary or nonstationary structuration of the small-scale velocity and of its action on the reso...
A Requirements Analysis Model Based on QFD
TANG Zhi-wei; Nelson K.H.Tang
2004-01-01
The enterprise resource planning (ERP) system has emerged to offer an integrated IT solution, and more and more enterprises are adopting it and regarding it as an important innovation. However, there is already evidence of high failure risk in ERP project implementation; one major reason is poor analysis of the requirements for system implementation. In this paper, the importance of requirements analysis for ERP project implementation is highlighted, and a requirements analysis model applying quality function deployment (QFD) is presented, which supports conducting requirements analysis for ERP projects.
Sensitivity analysis of periodic matrix population models.
Caswell, Hal; Shyu, Esther
2012-12-01
Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments.
Decision variables analysis for structured modeling
潘启树; 赫东波; 张洁; 胡运权
2002-01-01
Structured modeling is the most commonly used modeling method, but it does not adapt well to significant changes in environmental conditions. Therefore, Decision Variables Analysis (DVA), a new modeling method, is proposed to deal with linear programming modeling in changing environments. In variant linear programming, the most complicated relationships are those among decision variables. DVA classifies the decision variables into different levels using different index sets, and divides a model into different elements so that any change can only affect part of the whole model. DVA takes into consideration the complicated relationships among decision variables at different levels, and can therefore successfully solve modeling problems in dramatically changing environments.
Stochastic model updating using distance discrimination analysis
Deng Zhongmin; Bi Sifeng; Sez Atamturktur
2014-01-01
This manuscript presents a stochastic model updating method, taking both uncertainties in models and variability in testing into account. The updated finite element (FE) models obtained through the proposed technique can aid in the analysis and design of structural systems. The authors developed a stochastic model updating method integrating distance discrimination analysis (DDA) and advanced Monte Carlo (MC) technique to (1) enable more efficient MC by using a response surface model, (2) calibrate parameters with an iterative test-analysis correlation based upon DDA, and (3) utilize and compare different distance functions as correlation metrics. Using DDA, the influence of distance functions on model updating results is analyzed. The proposed stochastic method makes it possible to obtain a precise model updating outcome with acceptable calculation cost. The stochastic method is demonstrated on a helicopter case study updated using both Euclidian and Mahalanobis distance metrics. It is observed that the selected distance function influences the iterative calibration process and thus, the calibration outcome, indicating that an integration of different metrics might yield improved results.
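The contrast between the two correlation metrics can be sketched on synthetic data: unlike the Euclidean distance, the Mahalanobis distance weights each feature by the variability of the model-prediction ensemble. All dimensions and values below are illustrative assumptions.

```python
import numpy as np

# Synthetic "analysis" ensemble of two response features with very
# different variances, plus one "test" measurement.  The Mahalanobis
# distance rescales each feature by the ensemble covariance, so the
# tightly predicted first feature dominates the correlation metric.
rng = np.random.default_rng(1)
preds = rng.multivariate_normal([1.0, 5.0],
                                [[0.04, 0.0], [0.0, 4.0]], size=500)
measured = np.array([1.2, 5.5])

mu = preds.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(preds, rowvar=False))

d_euclid = float(np.linalg.norm(measured - mu))
d_mahal = float(np.sqrt((measured - mu) @ cov_inv @ (measured - mu)))
print(round(d_euclid, 3), round(d_mahal, 3))
```

Here the same test point is "close" in Euclidean terms but further in Mahalanobis terms, because its offset in the low-variance feature is statistically large; this is the kind of metric-dependent behavior the study reports for the calibration outcome.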
Model Selection in Data Analysis Competitions
Wind, David Kofoed; Winther, Ole
2014-01-01
The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform...... Kaggle. In this paper, we will state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we will look at previous competitions and their outcomes, use qualitative interviews with top......
Mathematical Model For Engineering Analysis And Optimization
Sobieski, Jaroslaw
1992-01-01
Computational support for engineering design process reveals behavior of designed system in response to external stimuli; and finds out how behavior modified by changing physical attributes of system. System-sensitivity analysis combined with extrapolation forms model of design complementary to model of behavior, capable of direct simulation of effects of changes in design variables. Algorithms developed for this method applicable to design of large engineering systems, especially those consisting of several subsystems involving many disciplines.
Litovchenko O.L.
2015-05-01
Full Text Available At present, the biochemical mechanisms of the mixed effects of electromagnetic radiation (EMR) and cold on the body are not adequately studied, so this problem is urgent for modern medicine. Purpose of study: to establish pathognomonic criteria and biochemical mechanisms of the adverse effect of EMR on the organism of laboratory animals under conditions of cold stress. Materials and methods. The subacute laboratory experiment was carried out on mature white male rats of the WAG line, weighing 190-220 g, for 1 month. The animals were divided into 4 groups of 10 animals each. The first group was subjected to the isolated action of electromagnetic radiation (frequency 70 kHz, field strength 600 V/m) at a comfortable air temperature of 25 ± 2°C. The second group was subjected to the mixed action of EMR and a low temperature of 4 ± 2°C. The third group served as a control for the first group, and the fourth group for the second, at an air temperature of 25 ± 2°C. Exposures were carried out 5 times a week (4 hours every day). To identify changes in the biochemical parameters studied during the experiments, blood sampling was performed on days 5, 15 and 30, and urine sampling on days 15 and 30. Blood serum was used as the biomaterial. The content of malondialdehyde (MDA), conjugated dienes, SH-groups, superoxide dismutase, ceruloplasmin, cholesterol, high density lipoprotein, low density lipoprotein, very low density lipoprotein (VLDL) and triglycerides was determined, along with the atherogenic index, the levels of urea, alkaline phosphatase and acid phosphatase, the content of chlorides, calcium, magnesium, phosphorus, total protein and glucose, and catalase activity. Renal function was studied by the content of creatinine, cholinesterase, urea, uric acid, chlorides, potassium, sodium, calcium, phosphorus and glucose in urine. Results and discussion. The findings showed that the isolated action of EMR only led to a
Behavioral modeling and analysis of galvanic devices
Xia, Lei
2000-10-01
A new hybrid modeling approach was developed for galvanic devices including batteries and fuel cells. The new approach reduces the complexity of the First Principles method and adds a physical basis to the empirical methods. The resulting general model includes all the processes that affect the terminal behavior of the galvanic devices. The first step of the new model development was to build a physics-based structure or framework that reflects the important physiochemical processes and mechanisms of a galvanic device. Thermodynamics, electrode kinetics, mass transport and electrode interfacial structure of an electrochemical cell were considered and included in the model. Each process of the cell is represented by a clearly-defined and familiar electrical component, resulting in an equivalent circuit model for the galvanic device. The second step was to develop a parameter identification procedure that correlates the device response data to the parameters of the components in the model. This procedure eliminates the need for hard-to-find data on the electrochemical properties of the cell and specific device design parameters. Thus, the model is chemistry and structure independent. Implementation issues of the new modeling approach were presented. The validity of the new model over a wide range of operating conditions was verified with experimental data from actual devices. The new model was used in studying the characteristics of galvanic devices. Both the steady-state and dynamic behavior of batteries and fuel cells was studied using the impedance analysis techniques. The results were used to explain some experimental results of galvanic devices such as charging and pulsed discharge. The knowledge gained from the device analysis was also used in devising new solutions to application problems such as determining the state of charge of a battery or the maximum power output of a fuel cell. With the new model, a system can be designed that utilizes a galvanic device
Independent Component Analysis in Multimedia Modeling
Larsen, Jan; Hansen, Lars Kai; Kolenda, Thomas
2003-01-01
Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind sources separation methods for modeling and understanding of multimedia data, which...... largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling...... combined text/image data for the purpose of cross-media retrieval....
Modeling and analysis of stochastic systems
Kulkarni, Vidyadhar G
2011-01-01
Based on the author's more than 25 years of teaching experience, Modeling and Analysis of Stochastic Systems, Second Edition covers the most important classes of stochastic processes used in the modeling of diverse systems, from supply chains and inventory systems to genetics and biological systems. For each class of stochastic process, the text includes its definition, characterization, applications, transient and limiting behavior, first passage times, and cost/reward models. Along with reorganizing the material, this edition revises and adds new exercises and examples. New to the second edi
Energy Systems Modelling Research and Analysis
Møller Andersen, Frits; Alberg Østergaard, Poul
2015-01-01
This editorial introduces the seventh volume of the International Journal of Sustainable Energy Planning and Management. The volume presents part of the outcome of the project Energy Systems Modelling Research and Analysis (ENSYMORA) funded by the Danish Innovation Fund. The project carried out...
Power system stability modelling, analysis and control
Sallam, Abdelhay A
2015-01-01
This book provides a comprehensive treatment of the subject from both a physical and mathematical perspective and covers a range of topics including modelling, computation of load flow in the transmission grid, stability analysis under both steady-state and disturbed conditions, and appropriate controls to enhance stability.
Stochastic Modelling and Analysis of Warehouse Operations
Y. Gong (Yeming)
2009-01-01
This thesis has studied stochastic models and analysis of warehouse operations. After an overview of stochastic research in warehouse operations, we explore the following topics. Firstly, we search for optimal batch sizes in a parallel-aisle warehouse with online order arrivals. We employ a
Comparative Distributions of Hazard Modeling Analysis
Rana Abdul Wajid
2006-07-01
Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We present the flexibility of the hazard modeling distribution, which can approximate different distributions.
Formal Modeling and Analysis of Timed Systems
Larsen, Kim Guldstrand; Niebert, Peter
This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts...
Modeling uncertainty in geographic information and analysis
2008-01-01
Uncertainty modeling and data quality for spatial data and spatial analyses are important topics in geographic information science, together with space and time in geography, as well as spatial analysis. In the past two decades, a lot of effort has been made to research uncertainty modeling for spatial data and analyses. This paper presents our work in this research. In particular, four advances in the research are presented: (a) from determinedness-based to uncertainty-based representation of geographic objects in GIS; (b) from uncertainty modeling for static data to dynamic spatial analyses; (c) from modeling uncertainty for spatial data to models; and (d) from error descriptions to quality control for spatial data.
Semiparametric modeling and analysis of longitudinal method comparison data.
Rathnayake, Lasitha N; Choudhary, Pankaj K
2017-02-19
Studies comparing two or more methods of measuring a continuous variable are routinely conducted in biomedical disciplines with the primary goal of measuring agreement between the methods. Often, the data are collected by following a cohort of subjects over a period of time. This gives rise to longitudinal method comparison data where there is one observation trajectory for each method on every subject. It is not required that observations from all methods be available at each observation time. The multiple trajectories on the same subjects are dependent. We propose modeling the trajectories nonparametrically through penalized regression splines within the framework of mixed-effects models. The model also uses random effects of subjects and their interactions to capture dependence in observations from the same subjects. It additionally allows the within-subject errors of each method to be correlated. It is fit using the method of maximum likelihood. Agreement between the methods is evaluated by performing inference on measures of agreement, such as concordance correlation coefficient and total deviation index, which are functions of parameters of the assumed model. Simulations indicate that the proposed methodology performs reasonably well for 30 or more subjects. Its application is illustrated by analyzing a dataset of percentage body fat measurements. Copyright © 2017 John Wiley & Sons, Ltd.
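One of the agreement measures named above, the concordance correlation coefficient (CCC), can be sketched directly from paired measurements (in the paper it is instead a function of the fitted mixed-model parameters); the data below are synthetic, not the body-fat dataset.

```python
import numpy as np

# Paired measurements of the same quantity by two methods; method 2
# carries a small fixed bias.  The CCC penalizes both imprecision and
# location/scale shifts, unlike the plain correlation coefficient.
rng = np.random.default_rng(2)
truth = rng.normal(25.0, 5.0, 200)              # e.g. % body fat
m1 = truth + rng.normal(0.0, 1.0, 200)
m2 = truth + 0.5 + rng.normal(0.0, 1.0, 200)

def ccc(x, y):
    mx, my = x.mean(), y.mean()
    cxy = ((x - mx) * (y - my)).mean()
    return 2.0 * cxy / (x.var() + y.var() + (mx - my) ** 2)

print(round(float(ccc(m1, m2)), 3))
```

A CCC of 1 means perfect agreement; the semiparametric model in the paper lets such agreement measures vary over time along the longitudinal trajectories.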
Analysis of a hierarchical model for discrete event systems
Ciortea, E. M.
2015-11-01
This paper presents a hierarchical model based on discrete-event networks for robotic systems. Following the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper by using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementing the hierarchical discrete-event model as a real-time operating system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are simple to apply on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets. Discrete-event systems are a pragmatic tool for modelling industrial systems, which is why Petri nets are used for modelling here. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. The proposed simulation of the robotic system using timed Petri nets offers the opportunity to view the timing of the robot. By measuring transport and transmission times on the spot, graphics are obtained showing the average time for the transport activity, using parameter sets of individual finished products.
Biomedical model fitting and error analysis.
Costa, Kevin D; Kleinstein, Steven H; Hershberg, Uri
2011-09-20
This Teaching Resource introduces students to curve fitting and error analysis; it is the second of two lectures on developing mathematical models of biomedical systems. The first focused on identifying, extracting, and converting required constants--such as kinetic rate constants--from experimental literature. To understand how such constants are determined from experimental data, this lecture introduces the principles and practice of fitting a mathematical model to a series of measurements. We emphasize using nonlinear models for fitting nonlinear data, avoiding problems associated with linearization schemes that can distort and misrepresent the data. To help ensure proper interpretation of model parameters estimated by inverse modeling, we describe a rigorous six-step process: (i) selecting an appropriate mathematical model; (ii) defining a "figure-of-merit" function that quantifies the error between the model and data; (iii) adjusting model parameters to get a "best fit" to the data; (iv) examining the "goodness of fit" to the data; (v) determining whether a much better fit is possible; and (vi) evaluating the accuracy of the best-fit parameter values. Implementation of the computational methods is based on MATLAB, with example programs provided that can be modified for particular applications. The problem set allows students to use these programs to develop practical experience with the inverse-modeling process in the context of determining the rates of cell proliferation and death for B lymphocytes using data from BrdU-labeling experiments.
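The Teaching Resource's computational methods are MATLAB-based; as a language-neutral illustration of steps (i)-(iii) — choosing a nonlinear model, defining a least-squares figure of merit, and adjusting parameters without linearizing the data — here is a rough numpy sketch. The exponential-decay model, the synthetic data, and the grid-plus-closed-form fitting strategy are invented for illustration, not the lecture's own programs:

```python
import numpy as np

def model(t, A, k):
    """Step (i): a hypothetical nonlinear model, exponential decay."""
    return A * np.exp(-k * t)

def fit_exp(t, y, ks=np.linspace(0.01, 5.0, 500)):
    """Steps (ii)-(iii): minimize the sum-of-squares figure of merit.
    For each trial rate k the best amplitude A has a closed form, so
    only k needs a 1-D search (no linearization of the data)."""
    best = (np.inf, None, None)
    for k in ks:
        e = np.exp(-k * t)
        A = (y @ e) / (e @ e)              # closed-form LS amplitude
        sse = ((y - A * e) ** 2).sum()     # figure-of-merit function
        if sse < best[0]:
            best = (sse, A, k)
    return best[1], best[2]

t = np.linspace(0.0, 5.0, 40)
y = model(t, 2.0, 0.8)                     # noise-free synthetic data
A_hat, k_hat = fit_exp(t, y)
```

Steps (iv)-(vi) — goodness of fit, checking for a better fit, and parameter accuracy — would follow by examining residuals and the curvature of the error surface around the best-fit values.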
A Dynamic Model for Energy Structure Analysis
Anonymous
2006-01-01
Energy structure is a complicated system concerning economic development, natural resources, technological innovation, ecological balance, social progress and many other elements. It is not easy to explain clearly the developmental mechanism of an energy system and the mutual relations between the energy system and its related environments by the traditional methods. It is necessary to develop a suitable dynamic model, which can reflect the dynamic characteristics and the mutual relations of the energy system and its related environments. In this paper, the historical development of China's energy structure was analyzed. A new quantitative analysis model was developed based on system dynamics principles through analysis of energy resources, and the production and consumption of energy in China and comparison with the world. Finally, this model was used to predict China's future energy structures under different conditions.
Modeling and Analysis of Pulse Skip Modulation
Anonymous
2006-01-01
This paper presents the state-space averaged model and large-signal models of the Pulse Skip Modulation (PSM) mode. Furthermore, based on these models and on simulations of PSM converter circuits, the characteristics of the PSM converter are analysed, including efficiency, frequency spectrum, output voltage ripple, response speed and interference rejection capability. Compared with the PWM control mode, the PSM converter has high efficiency, especially at light loads, quick response, good interference rejection and good EMC characteristics. With slight improvements, PSM could serve as a good independent regulation mode over the whole operating range of a DC-DC converter. Finally, some experimental results are also presented.
Model Based Analysis of Insider Threats
Chen, Taolue; Han, Tingting; Kammueller, Florian
2016-01-01
In order to detect malicious insider attacks it is important to model and analyse infrastructures and policies of organisations and the insiders acting within them. We extend formal approaches that allow modelling such scenarios with quantitative aspects to enable a precise analysis of security designs. Our framework enables evaluating the risk of an insider attack quantitatively. The framework first identifies an insider's intention to perform an inside attack, using Bayesian networks, and in a second phase computes the probability of success for an inside attack by this actor, using probabilistic model checking. We provide prototype tool support using Matlab for Bayesian networks and PRISM for the analysis of Markov decision processes, and validate the framework with case studies.
Simulation modeling and analysis with Arena
Tayfur Altiok; Benjamin Melamed [Rutgers University, NJ (United States). Department of Industrial and Systems Engineering
2007-06-15
This textbook treats the essentials of Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. Chapter 13.3.3 is on coal loading operations on barges/tugboats.
Visualization and Data Analysis for CISM Models
Wiltberger, M.; Guild, T.; Lyon, J. G.
2003-12-01
The Center for Integrated Space Weather Modeling (CISM) is working on developing a model from the surface of the Sun to the Earth's ionosphere. Among the many challenges facing this program is the development of a visualization and data analysis package which can be used to examine the results from all of the component models. We have begun to use OpenDX as the core of the CISM visualization and data analysis package. OpenDX is an open source data visualization package based upon IBM's Data Explorer visualization software. This package allows us to provide access to simulation results through either a web-based front end or via a series of module extensions to the OpenDX package which can be installed on the remote user's own machine. Since the software is open source, it is freely available on a wide range of platforms, ranging from SGIs to Intel machines running either Linux or WinNT. The OpenDX software package includes a set of tools for turning an OpenDX visual program into a Java-based web page which gives the user simple control over the parameters plotted and the viewing angle. We begin this presentation with an overview of OpenDX's capabilities and then present sample visualizations from the ENLIL solar wind model, the LFM and RCM magnetosphere models, and the TING ionospheric model. In addition, we illustrate how this package can be used as part of a more advanced data analysis system. In particular we examine the energy partitioning of the magnetosphere during a series of substorms by using OpenDX to define regions, e.g. plasma sheet and lobes, and then integrating the energy density within them as a function of time. The combination of visualization tools with data analysis routines allows us to develop a deeper understanding of the coupled Sun-Earth system.
Analysis and Realization on MIMO Channel Model
Liu Hui
2014-04-01
In order to build a MIMO (Multiple Input Multiple Output) channel model based on IEEE 802.16, this study describes and analyses approaches to building a good MIMO channel model. By exploiting the spatial freedom of wireless channels, MIMO systems have the potential to achieve high bandwidth efficiency, making MIMO a key technique in next generation communication systems. As a basic research field of MIMO technology, MIMO channel modeling significantly serves the performance evaluation of space-time encoding algorithms as well as system-level calibration and simulation. Having the advantages of low inter-antenna correlation and small array size, multi-polarization tends to be a promising technique in future MIMO systems. However, polarization characteristics have not yet been modeled well in current MIMO channel models, so establishing meaningful multi-polarized MIMO channel models has become a hot spot in recent channel modeling research. In this study, building on others' research work, I have made further investigations of the related theories of channel models, channel estimation and implementation algorithms.
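One common way to capture the spatial correlation that such channel models must reproduce is the Kronecker model, in which separate transmit- and receive-side correlation matrices shape an i.i.d. complex Gaussian matrix. A rough numpy sketch, with illustrative correlation matrices rather than IEEE 802.16 parameters:

```python
import numpy as np

def kronecker_channel(rng, Rr, Rt):
    """Draw one MIMO channel H = Rr^(1/2) G Rt^(1/2), where G is i.i.d.
    unit-variance complex Gaussian (Kronecker correlation model)."""
    nr, nt = Rr.shape[0], Rt.shape[0]
    G = (rng.standard_normal((nr, nt))
         + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
    Lr = np.linalg.cholesky(Rr)          # Cholesky factors as matrix roots
    Lt = np.linalg.cholesky(Rt)
    return Lr @ G @ Lt.conj().T

rng = np.random.default_rng(0)
Rr = np.array([[1.0, 0.5], [0.5, 1.0]])  # correlated 2-antenna receiver
Rt = np.eye(3)                           # uncorrelated 3-antenna transmitter
H = kronecker_channel(rng, Rr, Rt)       # one 2x3 channel realization
```

The Kronecker model is only one option; as the abstract notes, it does not by itself capture polarization characteristics, which require richer per-element covariance structure.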
Guidelines for system modeling: fault tree analysis
Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong
2004-07-01
This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related content to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally the main objective of system analysis is to assess the reliability of systems modeled by Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to system unavailability. This should include contributions due to mechanical failures of components, Common Cause Failures (CCFs), human errors, and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising fault trees. Accordingly, these guidelines can guide FTA to the level of capability category II of the ASME PRA standard.
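The AND/OR failure logic described above can be evaluated numerically once basic-event probabilities are known. A minimal sketch assuming independent basic events (the gate structure and event probabilities are invented for illustration; note that the common cause failures mentioned in the abstract break exactly this independence assumption):

```python
def and_gate(ps):
    """All inputs must fail (independent basic events)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(ps):
    """At least one input fails (independent basic events)."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

# Hypothetical top event: (pump A fails AND pump B fails) OR valve fails
p_top = or_gate([and_gate([1e-2, 1e-2]), 1e-3])   # = 0.0010999
```

Real PRA tools evaluate fault trees via minimal cut sets rather than direct gate-by-gate multiplication, precisely so that shared (common cause) events are not double-counted.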
Operational modal analysis by updating autoregressive model
Vu, V. H.; Thomas, M.; Lakis, A. A.; Marcouiller, L.
2011-04-01
This paper presents improvements of a multivariable autoregressive (AR) model for applications in operational modal analysis, considering simultaneously the temporal response data of multi-channel measurements. The parameters are estimated by using the least squares method via the implementation of the QR factorization. A new noise rate-based factor called the Noise rate Order Factor (NOF) is introduced for use in the effective selection of model order and noise rate estimation. For the selection of structural modes, an orderwise criterion called the Order Modal Assurance Criterion (OMAC) is used, based on the correlation of mode shapes computed from two successive orders. Specifically, the algorithm is updated with respect to model order, starting from a small value, to produce a cost-effective computation. Furthermore, the confidence intervals of each natural frequency, damping ratio and mode shape are also computed and evaluated with respect to model order and noise rate. This method is thus very effective for identifying the modal parameters in the case of ambient vibrations, as encountered in modern output-only modal analysis. Simulations and discussions on a steel plate structure are presented, and the experimental results show good agreement with the finite element analysis.
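The core of the AR identification described above — least squares via QR factorization on lagged response data, then modal frequencies from the roots of the AR polynomial — can be sketched for a single noise-free mode. The 5 Hz signal, the sampling rate, and the AR order below are illustrative, not from the paper:

```python
import numpy as np

fs = 100.0                                 # sampling rate, Hz (assumed)
n = np.arange(500)
x = np.cos(2 * np.pi * 5.0 * n / fs)       # one undamped 5 Hz "mode"

p = 2                                      # AR model order
Y = x[p:]                                  # x[n]
Phi = np.column_stack([x[1:-1], x[0:-2]])  # regressors x[n-1], x[n-2]

Q, R = np.linalg.qr(Phi)                   # least squares via QR
a = np.linalg.solve(R, Q.T @ Y)            # AR coefficients a1, a2

poles = np.roots([1.0, -a[0], -a[1]])      # z-plane poles of the AR model
f_hat = abs(np.angle(poles[0])) * fs / (2 * np.pi)   # natural frequency, Hz
```

For a damped mode the pole magnitude would additionally yield the damping ratio; multivariable AR models extend the same least-squares step to a block regressor matrix over all measurement channels.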
Boissonade, J.; De Kepper, P.
1987-07-01
We propose an interpretation of mixing effects on temporal dissipative structures in a CSTR in terms of micromixing. The bistability of the chlorite-iodide (ClO2(-)-I(-)) reaction is extensively discussed. The micromixing process is successively represented by a mean field model and by a coalescence-redispersion model, together with a realistic kinetic scheme of the reaction. The latter was found better suited to the problem. The results are in good agreement with experimental observations previously reported, accounting for shifts in the transitions from the thermodynamic branch to the flow branch in both premixed and nonpremixed modes. The model also accounts for the dynamical behavior in the vicinity of the transition, including oscillatory fluctuations. It is finally suggested that micromixing processes could induce oscillations under conditions that would otherwise be nonoscillating if the system were perfectly homogeneous.
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...
Social phenomena from data analysis to models
Perra, Nicola
2015-01-01
This book focuses on the new possibilities and approaches to social modeling currently being made possible by an unprecedented variety of datasets generated by our interactions with modern technologies. This area has witnessed a veritable explosion of activity over the last few years, yielding many interesting and useful results. Our aim is to provide an overview of the state of the art in this area of research, merging an extremely heterogeneous array of datasets and models. Social Phenomena: From Data Analysis to Models is divided into two parts. Part I deals with modeling social behavior under normal conditions: How we live, travel, collaborate and interact with each other in our daily lives. Part II deals with societal behavior under exceptional conditions: Protests, armed insurgencies, terrorist attacks, and reactions to infectious diseases. This book offers an overview of one of the most fertile emerging fields bringing together practitioners from scientific communities as diverse as social sciences, p...
Computational Models for Analysis of Illicit Activities
Nizamani, Sarwat
Numerous illicit activities happen in our society which, from time to time, affect the population by harming individuals directly or indirectly. Researchers from different disciplines have contributed to developing strategies to analyze such activities in order to help law enforcement agents. These models include a model for analyzing the evolution of terrorist networks; a text classification model for detecting suspicious text and identifying suspected authors of anonymous emails; and a semantic analysis model for news reports, which may help analyze illicit activities in a certain area with location and temporal information. For the network evolution, the hierarchical agglomerative clustering approach has been applied to terrorist networks as case studies. The networks' evolutions show how individual actors who are initially isolated from each other are converted into small groups, which ...
Modelling dominance in a flexible intercross analysis
Besnier Francois
2009-06-01
Abstract. Background: The aim of this paper is to develop a flexible model for analysis of quantitative trait loci (QTL) in outbred line crosses, which includes both additive and dominance effects. Our flexible intercross analysis (FIA) model accounts for QTL that are not fixed within founder lines and is based on the variance component framework. Genome scans with FIA are performed using a score statistic, which does not require variance component estimation. Results: Simulations of a pedigree with 800 F2 individuals showed that the power of FIA including both additive and dominance effects was almost 50% for a QTL with equal allele frequencies in both lines with complete dominance and a moderate effect, whereas the power of a traditional regression model was equal to the chosen significance value of 5%. The power of FIA without dominance effects included in the model was close to that obtained for FIA with dominance for all simulated cases except for QTL with overdominant effects. A genome-wide linkage analysis of experimental data from an F2 intercross between Red Jungle Fowl and White Leghorn was performed with both additive and dominance effects included in FIA. The score values for chicken body weight at 200 days of age were similar to those obtained in FIA analysis without dominance. Conclusion: We have extended FIA to include QTL dominance effects. The power of FIA was superior, or similar, to standard regression methods for QTL effects with dominance. The difference in power for FIA with or without dominance is expected to be small as long as the QTL effects are not overdominant. We suggest that FIA with only additive effects should be the standard model to be used, especially since it is more computationally efficient.
Minimalist models for proteins: a comparative analysis.
Tozzini, Valentina
2010-08-01
The last decade has witnessed a renewed interest in coarse-grained (CG) models for biopolymers, stimulated in part by the needs of modern molecular biology, which deals with nano- to micro-sized biomolecular systems and timescales longer than microseconds. This combination of size and timescale is, in fact, hard to access by atomistic simulations. Coarse graining the system is a route for overcoming these limits, but the ways of practically implementing it are many and different, making the landscape of CG models very vast and complex. In this paper, CG models are reviewed and their features, applications and performances compared. This analysis, restricted to proteins, focuses on the minimalist models, namely those reducing the number of degrees of freedom to a minimum without losing the possibility of explicitly describing the secondary structures. This class includes models using a single or a few interacting centers (beads) for each amino acid. From this analysis several issues emerge. The difficulty in building these models resides in the need to combine transferability/predictive power with the capability of accurately reproducing the structures. It is shown that these aspects can be optimized by carefully choosing the force field (FF) terms and functional forms, and by combining different parameterization procedures. In addition, in spite of the variety of minimalist models, regularities can be found in the parameter values and in the FF terms. These are outlined and schematically presented with the aid of a generic phase diagram of the polypeptide in parameter space and, hopefully, could serve as guidelines for the development of minimalist models incorporating the maximum possible level of predictive power and structural accuracy.
Advances in statistical models for data analysis
Minerva, Tommaso; Vichi, Maurizio
2015-01-01
This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.
3D face modeling, analysis and recognition
Daoudi, Mohamed; Veltkamp, Remco
2013-01-01
3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and applications.
Modeling and Thermal Analysis of Disc Brake
Praveena S; Lava Kumar M
2014-01-01
The disc brake is a device used for slowing or stopping the rotation of a vehicle. Repeated braking leads to heat generation during the braking event, such that the disc brake may fail due to high temperature. The disc brake model is created in CATIA and the analysis is done using the ANSYS workbench. The main purpose of this project is to study the thermal analysis of the materials Aluminum, Grey Cast Iron, HSS M42, and HSS M2. A comparison between ...
LCD motion blur: modeling, analysis, and algorithm.
Chan, Stanley H; Nguyen, Truong Q
2011-08-01
Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as the LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitation of human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l(1)-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms.
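The l1-norm regularized least-squares problem at the heart of the deblurring step has standard first-order solvers. The sketch below uses proximal-gradient iterations (ISTA) rather than the authors' subgradient projection method, on a tiny invented system; A, b and the regularization weight are illustrative, not an LCD blur model:

```python
import numpy as np

def soft(v, thr):
    """Soft-thresholding: the proximal operator of thr * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def l1_ls(A, b, lam=0.1, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2    # step size <= 1/Lipschitz
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of the smooth term
        x = soft(x - tau * grad, tau * lam)  # gradient step + shrinkage
    return x

A = np.eye(2)
b = np.array([1.0, 0.05])
x = l1_ls(A, b)                              # -> soft(b, 0.1) = [0.9, 0.0]
```

In the actual deblurring setting, A would encode the LCD blur operator and x the sharp frame; the l1 term promotes sparsity in the chosen representation.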
Extrudate Expansion Modelling through Dimensional Analysis Method
A new model framework is proposed to correlate extrudate expansion and extrusion operation parameters for a food extrusion cooking process through the dimensional analysis principle, i.e. the Buckingham pi theorem. Three dimensionless groups, i.e. energy, water content and temperature, are suggested to describe the extrudate expansion. From the three dimensionless groups, an equation with three experimentally determined parameters is derived to express the extrudate expansion. The model is evaluated with whole wheat flour and aquatic feed extrusion experimental data. The average deviations ...
Mathematical analysis of a muscle architecture model.
Navallas, Javier; Malanda, Armando; Gila, Luis; Rodríguez, Javier; Rodríguez, Ignacio
2009-01-01
Modeling of muscle architecture, which aims to recreate mathematically the physiological structure of the muscle fibers and motor units, is a powerful tool for understanding and modeling the mechanical and electrical behavior of the muscle. Most of the published models are presented in the form of algorithms, without mathematical analysis of mechanisms or outcomes of the model. Through the study of the muscle architecture model proposed by Stashuk, we present the analytical tools needed to better understand these models. We provide a statistical description for the spatial relations between motor units and muscle fibers. We are particularly concerned with two physiological quantities: the motor unit fiber number, which we expect to be proportional to the motor unit territory area; and the motor unit fiber density, which we expect to be constant for all motor units. Our results indicate that the Stashuk model is in good agreement with the physiological evidence in terms of the expectations outlined above. However, the resulting variance is very high. In addition, a considerable 'edge effect' is present in the outer zone of the muscle cross-section, making the properties of the motor units dependent on their location. This effect is relevant when motor unit territories and muscle cross-section are of similar size.
MATHEMATICAL RISK ANALYSIS: VIA NICHOLAS RISK MODEL AND BAYESIAN ANALYSIS
Anass BAYAGA
2010-07-01
The objective of this second part of a two-phased study was to explore the predictive power of the quantitative risk analysis (QRA) method and process within a Higher Education Institution (HEI). The method and process investigated impact analysis via the Nicholas risk model and Bayesian analysis, with a sample of one hundred (100) risk analysts in a historically black South African university in the greater Eastern Cape Province. The first finding supported and confirmed previous literature (King III report, 2009; Nicholas and Steyn, 2008; Stoney, 2007; COSA, 2004) that there is a direct relationship between a risk factor, its likelihood and its impact, ceteris paribus. The second finding, relating to controlling either the likelihood or the impact of occurrence of a risk (Nicholas risk model), was that to obtain a better risk reward it is more important to control the likelihood of occurrence of risks than their impact, so as to have a direct effect on the entire university. On the Bayesian analysis, the third finding was that the impact of risk should be predicted along three aspects: the human impact (decisions made), the property impact (student- and infrastructure-based) and the business impact. Lastly, although business cycles vary considerably depending on the industry and the institution, this study revealed that most impacts in the HEI (university) occurred within the period of one academic year. The recommendation was that the application of quantitative risk analysis should be related to the current legislative framework that affects HEIs.
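The two quantitative ingredients named here — a likelihood-times-impact risk score in the style of the Nicholas model, and a Bayesian update of the likelihood — can be sketched in a few lines. All numbers below are invented for illustration and are not the study's data:

```python
def exposure(likelihood, impact):
    """Risk exposure as likelihood * impact (Nicholas-style scoring)."""
    return likelihood * impact

def posterior_risk(prior, p_sign_given_risk, p_sign_given_safe):
    """Bayes' rule: updated risk likelihood after observing a warning sign."""
    num = p_sign_given_risk * prior
    return num / (num + p_sign_given_safe * (1.0 - prior))

p0 = 0.10                                       # prior likelihood of the risk
p1 = posterior_risk(p0, p_sign_given_risk=0.8,  # sign is common under risk
                    p_sign_given_safe=0.2)      # and rare otherwise
loss_before = exposure(p0, impact=1_000_000)    # illustrative monetary impact
loss_after = exposure(p1, impact=1_000_000)     # exposure after the update
```

This also illustrates the study's second finding in miniature: halving the likelihood halves the exposure directly, whereas the impact term is often fixed by the nature of the asset.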
Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng
2013-05-01
Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis where the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including simplicity of the likelihood function, no need to specify a link function, and a closed-form expression of the distribution functions for study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.
Bosse, M A; Arce, P
2000-03-01
The analysis described in this contribution is focused on the effect of Joule heating generation on the hydrodynamics of batch electrophoretic cells (i.e., cells that do not display a forced convective term in the motion equation). The hydrodynamics of these cells is controlled by the viscous forces and by the buoyancy force caused by the temperature gradients due to the Joule heating generation. The analysis is based on differential models that lead to analytical and/or asymptotic solutions for the temperature and velocity profiles of the cell. The results are useful in determining the characteristics of the temperature and velocity profiles inside the cell. Furthermore, the results are excellent tools to be used in the analysis of the dispersive-mixing of solute when Joule heating generation must be accounted for. The analysis is performed by identifying two sequentially coupled problems. Thus, the "carrier fluid problem" and the "solute problem" are outlined. The former is associated with all the factors affecting the velocity profile and the latter is related to the convective-diffusion aspects that control the spreading of the solute inside the cell. The analysis of this contribution is centered on the discussion of the "carrier fluid problem" only. For the boundary conditions selected in the contribution, the study leads to the derivation of an analytical temperature and a "universal" velocity profile that feature the Joule heating number. The Grashof number is a scaling factor of the actual velocity profile. Several characteristics of these profiles are studied and some numerical illustrations have been included.
3D space analysis of dental models
Chuah, Joon H.; Ong, Sim Heng; Kondo, Toshiaki; Foong, Kelvin W. C.; Yong, Than F.
2001-05-01
Space analysis is an important procedure by orthodontists to determine the amount of space available and required for teeth alignment during treatment planning. Traditional manual methods of space analysis are tedious and often inaccurate. Computer-based space analysis methods that work on 2D images have been reported. However, as the space problems in the dental arch exist in all three planes of space, a full 3D analysis of the problems is necessary. This paper describes a visualization and measurement system that analyses 3D images of dental plaster models. Algorithms were developed to determine dental arches. The system is able to record the depths of the Curve of Spee, and quantify space liabilities arising from a non-planar Curve of Spee, malalignment and overjet. Furthermore, the difference between total arch space available and the space required to arrange the teeth in ideal occlusion can be accurately computed. The system for 3D space analysis of the dental arch is an accurate, comprehensive, rapid and repeatable method of space analysis to facilitate proper orthodontic diagnosis and treatment planning.
THE FOURIER SERIES MODEL IN MAP ANALYSIS.
During the past several years the double Fourier Series has been applied to the analysis of contour-type maps as an alternative to the more commonly...used polynomial model. The double Fourier Series has high potential in the study of areal variations, inasmuch as a succession of trend maps based on...and it is shown that the double Fourier Series can be used to summarize the directional properties of areally-distributed data. An Appendix lists
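The trend-map idea can be illustrated by fitting Fourier terms in the two map coordinates to gridded data by least squares. A rough numpy sketch with separable harmonics only (a full double Fourier series would also include product terms such as sin(2*pi*j*x)*cos(2*pi*k*y)); the "map" surface below is synthetic:

```python
import numpy as np

# Synthetic gridded map: a directional trend in x plus one in y
x, y = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
z = 1.0 + 0.5 * np.sin(2 * np.pi * x) + 0.3 * np.cos(2 * np.pi * y)

def design(x, y, m=2):
    """Design matrix: constant plus sin/cos harmonics in each coordinate."""
    cols = [np.ones(x.size)]
    for j in range(1, m + 1):
        for u in (x, y):
            cols.append(np.sin(2 * np.pi * j * u).ravel())
            cols.append(np.cos(2 * np.pi * j * u).ravel())
    return np.column_stack(cols)

A = design(x, y)
coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
trend = (A @ coef).reshape(z.shape)    # fitted trend surface
resid = z - trend                      # residual "local" component
```

As with polynomial trend surfaces, the residual map is where local anomalies appear once the broad directional variation has been absorbed by the fitted series.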
Scripted Building Energy Modeling and Analysis: Preprint
Hale, E.; Macumber, D.; Benne, K.; Goldwasser, D.
2012-08-01
Building energy modeling and analysis is currently a time-intensive, error-prone, and nonreproducible process. This paper describes the scripting platform of the OpenStudio tool suite (http://openstudio.nrel.gov) and demonstrates its use in several contexts. Two classes of scripts are described and demonstrated: measures and free-form scripts. Measures are small, single-purpose scripts that conform to a predefined interface. Because measures are fairly simple, they can be written or modified by inexperienced programmers.
Energy Systems Modelling Research and Analysis
Møller Andersen, Frits; Alberg Østergaard, Poul
2015-01-01
This editorial introduces the seventh volume of the International Journal of Sustainable Energy Planning and Management. The volume presents part of the outcome of the project Energy Systems Modelling Research and Analysis (ENSYMORA) funded by the Danish Innovation Fund. The project, carried out by 11 university and industry partners, has improved the basis for decision-making within energy planning and energy scenario making by providing new and improved tools and methods for energy systems analyses.
Theme E: disabilities: analysis models and tools
Vigouroux, Nadine; Gorce, Philippe; Roby-Brami, Agnès; Rémi-Néris, Olivier
2013-01-01
This paper presents the topics and the activity of theme E, "disabilities: analysis models and tools", within the GDR STIC Santé. The group organized a conference and a workshop during the period 2011–2012. The conference focused on technologies for cognitive, sensory and motor impairments, assessment and use studies of assistive technologies, user-centered design methods, and the place of ethics in these research topics. The objective of "bodily integration of ...
Micromechatronics modeling, analysis, and design with Matlab
Giurgiutiu, Victor
2009-01-01
Focusing on recent developments in engineering science, enabling hardware, advanced technologies, and software, Micromechatronics: Modeling, Analysis, and Design with MATLAB®, Second Edition provides clear, comprehensive coverage of mechatronic and electromechanical systems. It applies cornerstone fundamentals to the design of electromechanical systems, covers emerging software and hardware, introduces the rigorous theory, examines the design of high-performance systems, and helps develop problem-solving skills. Along with more streamlined material, this edition adds many new sections to exist
Modeling and Thermal Analysis of Disc Brake
Praveena S
2014-10-01
Full Text Available The disc brake is a device used for slowing or stopping the rotation of a vehicle's wheel. Repeated braking generates heat during the braking event, and the disc can break down under high temperature. The disc brake was modeled in CATIA and analyzed in ANSYS Workbench. The main purpose of this project is to study the thermal analysis of four materials: Aluminum, Grey Cast Iron, HSS M42, and HSS M2. Comparing the thermal values and material properties obtained from the thermal analysis of the four materials, the material with the lowest thermal gradient is preferred. Hence Grey Cast Iron, the low-thermal-gradient material, is selected as the most suitable choice for disc brakes for better performance.
Mathematical analysis of epidemiological models with heterogeneity
Van Ark, J.W.
1992-01-01
For many diseases in human populations the disease shows dissimilar characteristics in separate subgroups of the population; for example, the probability of disease transmission for gonorrhea or AIDS is much higher from male to female than from female to male. There is reason to construct and analyze epidemiological models which allow for this heterogeneity of population, and to use these models in computer simulations to predict the incidence and prevalence of the disease. In the models considered here the heterogeneous population is separated into subpopulations whose internal and external interactions are homogeneous, in the sense that each person in a subpopulation can be assumed to exhibit the average behavior of that subpopulation. The first model considered is an SIRS model; i.e., a Susceptible can become Infected, and if so eventually Recovers with temporary immunity, and after a period of time becomes Susceptible again. Special cases allow for permanent immunity or other variations. This model is analyzed and threshold conditions are given which determine whether the disease dies out or persists. A deterministic model is presented; this model is constructed using difference equations, and it has been used in computer simulations of the AIDS epidemic in the homosexual population of San Francisco. The homogeneous and heterogeneous versions of both the differential-equation and difference-equation forms of the deterministic model are analyzed mathematically. In the analysis, equilibria are identified and threshold conditions are set forth: below the threshold the disease dies out and the disease-free equilibrium is globally asymptotically stable; above the threshold the disease persists, the disease-free equilibrium is unstable, and there is a unique endemic equilibrium.
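The SIRS mechanism described above can be illustrated with a minimal discrete-time sketch; the parameter values below are hypothetical and not taken from the thesis:

```python
# Minimal discrete-time SIRS sketch (illustrative parameters, closed population
# normalized to size 1): Susceptible -> Infected -> Recovered -> Susceptible.
def simulate_sirs(beta, gamma, delta, s0, i0, r0, steps):
    """Iterate the SIRS difference equations."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_inf = beta * s * i   # mass-action infections
        new_rec = gamma * i      # recoveries, with temporary immunity
        new_sus = delta * r      # loss of immunity, back to susceptible
        s, i, r = s - new_inf + new_sus, i + new_inf - new_rec, r + new_rec - new_sus
    return s, i, r

# The threshold behavior: with beta/gamma < 1 the infection dies out.
s, i, r = simulate_sirs(beta=0.1, gamma=0.3, delta=0.05,
                        s0=0.99, i0=0.01, r0=0.0, steps=500)
```

Raising `beta` above `gamma` in this sketch drives the system toward an endemic equilibrium instead, mirroring the threshold conditions the abstract describes.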
Modeling and analysis of advanced binary cycles
Gawlik, K.
1997-12-31
A computer model (Cycle Analysis Simulation Tool, CAST) and a methodology have been developed to perform value analysis for small, low- to moderate-temperature binary geothermal power plants. The value analysis method allows for incremental changes in the levelized electricity cost (LEC) to be determined between a baseline plant and a modified plant. Thermodynamic cycle analyses and component sizing are carried out in the model followed by economic analysis which provides LEC results. The emphasis of the present work is on evaluating the effect of mixed working fluids instead of pure fluids on the LEC of a geothermal binary plant that uses a simple Organic Rankine Cycle. Four resources were studied spanning the range of 265°F to 375°F. A variety of isobutane and propane based mixtures, in addition to pure fluids, were used as working fluids. This study shows that the use of propane mixtures at a 265°F resource can reduce the LEC by 24% when compared to a base case value that utilizes commercial isobutane as its working fluid. The cost savings drop to 6% for a 375°F resource, where an isobutane mixture is favored. Supercritical cycles were found to have the lowest cost at all resources.
Ontological Modeling for Integrated Spacecraft Analysis
Wicks, Erica
2011-01-01
Current spacecraft work as a cooperative group of a number of subsystems. Each of these requires modeling software for development, testing, and prediction. It is the goal of my team to create an overarching software architecture called the Integrated Spacecraft Analysis (ISCA) to aid in deploying the discrete subsystems' models. Such a plan has been attempted in the past, and has failed due to the excessive scope of the project. Our goal in this version of ISCA is to use new resources to reduce the scope of the project, including using ontological models to help link the internal interfaces of subsystems' models with the ISCA architecture. I have created an ontology of functions specific to the modeling system of the navigation system of a spacecraft. The resulting ontology not only links, at an architectural level, language-specific instantiations of the modeling system's code, but also is web-viewable and can act as a documentation standard. This ontology is proof of the concept that ontological modeling can aid in the integration necessary for ISCA to work, and can act as the prototype for future ISCA ontologies.
Model reduction using a posteriori analysis
Whiteley, Jonathan P.
2010-05-01
Mathematical models in biology and physiology are often represented by large systems of non-linear ordinary differential equations. In many cases, an observed behaviour may be written as a linear functional of the solution of this system of equations. A technique is presented in this study for automatically identifying key terms in the system of equations that are responsible for a given linear functional of the solution. This technique is underpinned by ideas drawn from a posteriori error analysis. This concept has been used in finite element analysis to identify regions of the computational domain and components of the solution where a fine computational mesh should be used to ensure accuracy of the numerical solution. We use this concept to identify regions of the computational domain and components of the solution where accurate representation of the mathematical model is required for accuracy of the functional of interest. The technique presented is demonstrated by application to a model problem, and then to automatically deduce known results from a cell-level cardiac electrophysiology model. © 2010 Elsevier Inc.
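The core idea of ranking the terms of an ODE system by their influence on a functional of the solution can be illustrated with a crude finite-difference sensitivity check; this is a toy substitute for, not an implementation of, the paper's a posteriori estimator, and the system and functional below are invented for illustration:

```python
# Toy term-ranking by brute force (NOT the paper's a posteriori method):
# zero out each right-hand-side term of a small linear ODE system and
# measure the change in a linear functional J = x(T) of the solution.
def solve(terms_on, steps=1000, dt=0.01):
    """Forward-Euler solve of dx/dt = -x + 0.5*y, dy/dt = -2*y from (1, 1).

    terms_on[k] keeps (1) or removes (0) RHS term k:
      term 0: -x in dx/dt,  term 1: 0.5*y in dx/dt,  term 2: -2*y in dy/dt.
    """
    x, y = 1.0, 1.0
    for _ in range(steps):
        dx = terms_on[0] * (-x) + terms_on[1] * (0.5 * y)
        dy = terms_on[2] * (-2.0 * y)
        x, y = x + dt * dx, y + dt * dy
    return x  # functional of interest: J = x(T)

base = solve([1, 1, 1])
# influence[j]: change in J when term j alone is removed.
influence = [abs(solve([1 - (k == j) for k in range(3)]) - base) for j in range(3)]
```

Terms with negligible influence on J are the candidates for removal in a reduced model, which is the question the a posteriori machinery answers far more cheaply than this exhaustive re-solving.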
Mode analysis of numerical geodynamo models
Schrinner, Martin; Hoyng, Peter
2011-01-01
It has been suggested in Hoyng (2009) that dynamo action can be analysed by expansion of the magnetic field into dynamo modes and statistical evaluation of the mode coefficients. We here validate this method by analysing a numerical geodynamo model and comparing the numerically derived mean mode coefficients with the theoretical predictions. The model belongs to the class of kinematically stable dynamos with a dominating axisymmetric, antisymmetric with respect to the equator and non-periodic fundamental dynamo mode. The analysis requires a number of steps: the computation of the so-called dynamo coefficients, the derivation of the temporally and azimuthally averaged dynamo eigenmodes and the decomposition of the magnetic field of the numerical geodynamo model into the eigenmodes. For the determination of the theoretical mode excitation levels the turbulent velocity field needs to be projected on the dynamo eigenmodes. We compare the theoretically and numerically derived mean mode coefficients and find reason...
Topological data analysis of biological aggregation models.
Topaz, Chad M; Ziegelmeier, Lori; Halverson, Tom
2015-01-01
We apply tools from topological data analysis to two mathematical models inspired by biological aggregations such as bird flocks, fish schools, and insect swarms. Our data consists of numerical simulation output from the models of Vicsek and D'Orsogna. These models are dynamical systems describing the movement of agents who interact via alignment, attraction, and/or repulsion. Each simulation time frame is a point cloud in position-velocity space. We analyze the topological structure of these point clouds, interpreting the persistent homology by calculating the first few Betti numbers. These Betti numbers count connected components, topological circles, and trapped volumes present in the data. To interpret our results, we introduce a visualization that displays Betti numbers over simulation time and topological persistence scale. We compare our topological results to order parameters typically used to quantify the global behavior of aggregations, such as polarization and angular momentum. The topological calculations reveal events and structure not captured by the order parameters.
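The zeroth Betti number mentioned above counts connected components; a single-scale slice of what persistent homology tracks across all scales can be sketched with union-find on a proximity graph (the point data below is invented for illustration):

```python
# Betti-0 at one fixed scale (illustrative): count connected components of
# the graph linking points closer than eps, via union-find with path halving.
def betti0(points, eps):
    parent = list(range(len(points)))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dist2 = sum((p - q) ** 2 for p, q in zip(points[i], points[j]))
            if dist2 <= eps * eps:
                parent[find(i)] = find(j)  # merge the two clusters
    return len({find(i) for i in range(len(points))})

# Two groups of agents far apart: 2 components at a small radius, 1 at a large one.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
```

Persistent homology repeats this over a whole range of `eps` values (and for higher Betti numbers), recording at which scales components and loops appear and disappear.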
Bizzotto, Roberto; zamuner, stefano; Mezzalana, Enrica; De Nicolao, Giuseppe; Gomeni, Roberto; Hooker, Andrew C; Karlsson, Mats O.
2011-01-01
Mixed-effect Markov chain models have recently been proposed to characterize the time course of transition probabilities between sleep stages in insomniac patients. The most recent one, based on multinomial logistic functions, was used as the basis for a final model combining the strengths of the existing ones. This final model was validated on placebo data, applying new diagnostic methods as well, and was then used to investigate potential age, gender, and BMI effects. Internal validation w...
[Comparison of two spectral mixture analysis models].
Wang, Qin-Jun; Lin, Qi-Zhong; Li, Ming-Xiao; Wang, Li-Ming
2009-10-01
A spectral mixture analysis experiment was designed to compare the spectral unmixing performance of linear spectral mixture analysis (LSMA) and constrained linear spectral mixture analysis (CLSMA). In the experiment, red, green, blue and yellow were printed on a coarse album as four endmembers. Thirty-nine mixed samples were made according to each endmember's percentage within one pixel. A field spectrometer was then positioned over the center of each mixed sample to measure its spectrum, one by one. The percentage of each endmember in the pixel was inverted using the LSMA and CLSMA models, and the normalized mean squared error between the inverted and true percentages was calculated to compare the two models' unmixing performance. The experiment showed that, using all bands of the spectrum, the total error of LSMA was 0.30087 and that of CLSMA was 0.37552; thus LSMA's error was 0.075 lower when the whole bands of the four endmembers' spectra were used. After band selection, the total error of LSMA was 0.28095 and that of CLSMA was 0.29805, so LSMA's error was 0.017 lower when band selection was performed. Therefore, whether all or selected bands were used, the accuracy of LSMA was better than that of CLSMA: errors introduced by the instrument or the operator during spectrum measurement mean that the measured data could not meet the strict requirements of CLSMA, which reduced its accuracy. Furthermore, the total error of LSMA using selected bands was 0.02 less than that using the whole bands, and the total error of CLSMA using selected bands was 0.077 less than that using the whole bands. So, within the same model, spectral unmixing using selected bands, which reduces the correlation of the endmembers' spectra, was superior to using the whole bands.
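The two unmixing schemes compared above can be sketched on synthetic data; the endmember spectra and mixing fractions below are invented, and the sum-to-one penalty is one common way to impose the CLSMA constraint, not necessarily the formulation used in the paper:

```python
import numpy as np

def lsma(E, x):
    """Unconstrained abundances: minimize ||E a - x|| by least squares."""
    a, *_ = np.linalg.lstsq(E, x, rcond=None)
    return a

def clsma(E, x, weight=1e3):
    """Sum-to-one constrained abundances via a heavily weighted penalty row."""
    E_aug = np.vstack([E, weight * np.ones(E.shape[1])])
    x_aug = np.append(x, weight * 1.0)  # enforce sum(a) ~= 1
    a, *_ = np.linalg.lstsq(E_aug, x_aug, rcond=None)
    return a

# Four synthetic endmembers over six bands, mixed in proportions 40/30/20/10.
rng = np.random.default_rng(0)
E = rng.uniform(0.1, 1.0, size=(6, 4))   # columns: endmember spectra
true_a = np.array([0.4, 0.3, 0.2, 0.1])
x = E @ true_a                            # noise-free mixed pixel spectrum
```

On noise-free data both recover the abundances; the experiment in the abstract shows that real measurement errors hurt the constrained model more, since the data no longer satisfy the constraint exactly.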
Automating Risk Analysis of Software Design Models
Maxime Frydman
2014-01-01
Full Text Available The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.
Gentrification and models for real estate analysis
Gianfranco Brusa
2013-08-01
Full Text Available This research proposes a deep analysis of the Milanese real estate market, based on data supplied by three real estate organizations; gentrification appears in some neighborhoods, such as Tortona, Porta Genova, Bovisa, and Isola Garibaldi. The last of these is the subject of the final analysis, a survey of the physical and social state of the area. The survey took place in two periods (2003 and 2009) to compare the evolution of gentrification, and its results were employed in a simulation by a multi-agent system model to forecast the long-term evolution of the phenomenon. These neighborhood micro-indicators highlight actual trends conditioning a local real estate market, which can translate into phenomena such as gentrification. In the present analysis, cellular automata models applied to a Milanese neighborhood (Isola Garibaldi) produced a dynamic simulation of the gentrification trend over a very long time: the cyclical phenomenon (one loop spans a period of twenty to thirty years) appears several times during a theoretical time of 100, 120, or 150 years. Simulation of long-period scenarios by multi-agent systems and cellular automata provides the estimator with a powerful and readily implemented tool to support appraisal judgments. It also stands to reason that such a tool can sustain urban planning and related evaluation processes.
Microblog Sentiment Analysis with Emoticon Space Model
姜飞; 刘奕群; 孙甲申; 朱璇; 张敏; 马少平
2015-01-01
Emoticons have been widely employed to express different types of moods, emotions, and feelings in microblog environments. They are therefore regarded as one of the most important signals for microblog sentiment analysis. Most existing studies use several emoticons that convey clear emotional meanings as noisy sentiment labels or similar sentiment indicators. However, in practical microblog environments, tens or even hundreds of emoticons are frequently adopted and all emoticons have their own unique emotional meanings. Besides, a considerable number of emoticons do not have clear emotional meanings. An improved sentiment analysis model should not overlook these phenomena. Instead of manually assigning sentiment labels to several emoticons that convey relatively clear meanings, we propose the emoticon space model (ESM) that leverages more emoticons to construct word representations from a massive amount of unlabeled data. By projecting words and microblog posts into an emoticon space, the proposed model helps identify subjectivity, polarity, and emotion in microblog environments. The experimental results for a public microblog benchmark corpus (NLP&CC 2013) indicate that ESM effectively leverages emoticon signals and outperforms previous state-of-the-art strategies and benchmark best runs.
Tang, Xiao; Zhu, Jiang; Wang, ZiFa; Gbaguidi, Alex; Lin, CaiYan; Xin, JinYuan; Song, Tao; Hu, Bo
2016-05-01
This study investigates a cross-variable ozone data assimilation (DA) method based on an ensemble Kalman filter (EnKF) that has been used in the companion study to improve ozone forecasts over Beijing and surrounding areas. The main purpose is to delve into the impacts of the cross-variable adjustment of nitrogen oxide (NOx) emissions on the nitrogen dioxide (NO2) forecasts over this region during the 2008 Beijing Olympic Games. A mixed effect on the NO2 forecasts was observed through application of the cross-variable assimilation approach in the real-data assimilation (RDA) experiments. The method improved the NO2 forecasts over almost half of the urban sites with reductions of the root mean square errors (RMSEs) by 15-36 % in contrast to big increases of the RMSEs over other urban stations by 56-239 %. Over the urban stations with negative DA impacts, improvement of the NO2 forecasts (with 7 % reduction of the RMSEs) was noticed at night and in the morning versus significant deterioration during daytime (with 190 % increase of the RMSEs), suggesting that the negative data assimilation impacts mainly occurred during daytime. Ideal-data assimilation (IDA) experiments with a box model and the same cross-variable assimilation method confirmed the mixed effects found in the RDA experiments. In the same way, NOx emission estimation was improved at night and in the morning even under large biases in the prior emission, while it deteriorated during daytime (except for the case of minor errors in the prior emission). The mixed effects observed in the cross-variable data assimilation, i.e., positive data assimilation impacts on NO2 forecasts over some urban sites, negative data assimilation impacts over the other urban sites, and weak data assimilation impacts over suburban sites, highlighted the limitations of the EnKF under strong nonlinear relationships between chemical variables. Under strong nonlinearity between daytime ozone concentrations and NOx emissions
Data Logistics and the CMS Analysis Model
Managan, Julie E
2009-01-01
The Compact Muon Solenoid Experiment (CMS) at the Large Hadron Collider (LHC) at CERN has brilliant prospects for uncovering new information about the physical structure of our universe. Soon physicists around the world will participate together in analyzing CMS data in search of new physics phenomena and the Higgs Boson. However, they face a significant problem: with 5 Petabytes of data needing distribution each year, how will physicists get the data they need? How and where will they be able to analyze it? Computing resources and scientists are scattered around the world, while CMS data exists in localized chunks. The CMS computing model only allows analysis of locally stored data, "tethering" analysis to storage. The Vanderbilt CMS team is actively working to solve this problem with the Research and Education Data Depot Network (REDDnet), a program run by Vanderbilt's Advanced Computing Center for Research and Education (ACCRE).
Spatiochromatic Context Modeling for Color Saliency Analysis.
Zhang, Jun; Wang, Meng; Zhang, Shengping; Li, Xuelong; Wu, Xindong
2016-06-01
Visual saliency is one of the most noteworthy perceptual abilities of human vision. Recent progress in cognitive psychology suggests that: 1) visual saliency analysis is mainly completed by the bottom-up mechanism consisting of feedforward low-level processing in primary visual cortex (area V1) and 2) color interacts with spatial cues and is influenced by the neighborhood context, and thus it plays an important role in visual saliency analysis. From a computational perspective, most existing saliency modeling approaches exploit multiple independent visual cues, irrespective of their interactions (or do not compute them explicitly), and ignore contextual influences induced by neighboring colors. In addition, the use of color is often underestimated in visual saliency analysis. In this paper, we propose a simple yet effective color saliency model that considers color as the only visual cue and mimics the color processing in V1. Our approach uses region-/boundary-defined color features with spatiochromatic filtering by considering local color-orientation interactions, and therefore captures homogeneous color elements, subtle textures within the object, and the overall salient object from the color image. To account for color contextual influences, we present a divisive normalization method for chromatic stimuli through the pooling of contrary/complementary color units. We further define a color perceptual metric over the entire scene to produce saliency maps for color regions and color boundaries individually. These maps are finally integrated into one single saliency map, which is then Gaussian-blurred for robustness. We evaluate the proposed method on both synthetic stimuli and several benchmark saliency data sets, from visual saliency analysis to salient object detection. The experimental results demonstrate that the use of color as a unique visual cue achieves competitive results on par with or better than 12 state
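The divisive normalization step mentioned above has a standard generic form, a unit's response divided by a constant plus the pooled activity of complementary units; the numbers below are invented, and the exact pooling weights used in the paper are not specified here:

```python
# Generic divisive normalization sketch: r_i' = r_i / (sigma + sum(pool)),
# where pool holds the responses of contrary/complementary color units.
def divisive_normalize(responses, pool, sigma=0.1):
    norm = sigma + sum(pool)  # pooled suppressive drive
    return [r / norm for r in responses]

# A red-tuned unit is suppressed more when the opposing (green) pool is active.
red = [0.8]
quiet_pool, active_pool = [0.0], [0.9]
suppressed = divisive_normalize(red, active_pool)[0]  # 0.8 / (0.1 + 0.9)
baseline = divisive_normalize(red, quiet_pool)[0]     # 0.8 / 0.1
```

This context dependence is what lets a color's saliency depend on its neighbors rather than on its raw channel response alone.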
Modelling and analysis of global coal markets
Trueby, Johannes
2013-01-17
The thesis comprises four interrelated essays featuring modelling and analysis of coal markets. Each of the four essays has a dedicated chapter in this thesis. Chapters 2 to 4 have, from a topical perspective, a backward-looking focus and deal with explaining recent market outcomes in the international coal trade. The findings of those essays may serve as guidance for assessing current coal market outcomes as well as expected market outcomes in the near to medium-term future. Chapter 5 has a forward-looking focus and builds a bridge between explaining recent market outcomes and projecting long-term market equilibria. Chapter 2, Strategic Behaviour in International Metallurgical Coal Markets, deals with market conduct of large exporters in the market of coals used in steel-making in the period 2008 to 2010. In this essay I analyse whether prices and trade-flows in the international market for metallurgical coals were subject to non-competitive conduct in the period 2008 to 2010. To do so, I develop mathematical programming models - a Stackelberg model, two varieties of a Cournot model, and a perfect competition model - for computing spatial equilibria in international resource markets. Results are analysed with various statistical measures to assess the prediction accuracy of the models. The results show that real market equilibria cannot be reproduced with a competitive model. However, real market outcomes can be accurately simulated with the non-competitive models, suggesting that market equilibria in the international metallurgical coal trade were subject to the strategic behaviour of coal exporters. Chapter 3 and chapter 4 deal with market power issues in the steam coal trade in the period 2006 to 2008. Steam coals are typically used to produce steam either for electricity generation or for heating purposes. In Chapter 3 we analyse market behaviour of key exporting countries in the steam coal trade. This chapter features the essay Market Structure Scenarios in
MODELING ANALYSIS FOR GROUT HOPPER WASTE TANK
Lee, S.
2012-01-04
The Saltstone facility at Savannah River Site (SRS) has a grout hopper tank to provide agitator stirring of the Saltstone feed materials. The tank has about 300 gallon capacity to provide a larger working volume for the grout nuclear waste slurry to be held in case of a process upset, and it is equipped with a mechanical agitator, which is intended to keep the grout in motion and agitated so that it won't start to set up. The primary objective of the work was to evaluate the flow performance for mechanical agitators to prevent vortex pull-through for an adequate stirring of the feed materials and to estimate an agitator speed which provides acceptable flow performance with a 45° pitched four-blade agitator. In addition, the power consumption required for the agitator operation was estimated. The modeling calculations were performed by taking two steps of the Computational Fluid Dynamics (CFD) modeling approach. As a first step, a simple single-stage agitator model with 45° pitched propeller blades was developed for the initial scoping analysis of the flow pattern behaviors for a range of different operating conditions. Based on the initial phase-1 results, the phase-2 model with a two-stage agitator was developed for the final performance evaluations. A series of sensitivity calculations for different designs of agitators and operating conditions have been performed to investigate the impact of key parameters on the grout hydraulic performance in a 300-gallon hopper tank. For the analysis, viscous shear was modeled by using the Bingham plastic approximation. Steady state analyses with a two-equation turbulence model were performed. All analyses were based on three-dimensional results. Recommended operational guidance was developed by using the basic concept that local shear rate profiles and flow patterns can be used as a measure of hydraulic performance and spatial stirring. Flow patterns were estimated by a Lagrangian integration technique along
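The Bingham plastic approximation used for the grout's viscous shear can be sketched as follows; the yield stress and plastic viscosity values below are illustrative placeholders, not the grout properties used in the analysis:

```python
# Bingham plastic sketch (illustrative parameters): below the yield stress the
# material behaves like a solid; above it, stress grows linearly with shear rate.
def bingham_stress(shear_rate, tau_yield=20.0, mu_plastic=0.05):
    """Shear stress (Pa) at a given shear rate (1/s): tau_y + mu_p * gamma_dot."""
    if shear_rate == 0.0:
        return 0.0  # no flow; stress is indeterminate up to tau_yield
    return tau_yield + mu_plastic * shear_rate

def apparent_viscosity(shear_rate, tau_yield=20.0, mu_plastic=0.05):
    """Effective viscosity tau / gamma_dot, which grows without bound at low
    shear rate; this is why poorly stirred zones of the hopper can set up."""
    return bingham_stress(shear_rate, tau_yield, mu_plastic) / shear_rate
```

The model's use of local shear rate profiles as a stirring metric follows directly: regions of low shear rate are regions of high apparent viscosity, and hence poor agitation.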
A catalog of automated analysis methods for enterprise models.
Florez, Hector; Sánchez, Mario; Villalobos, Jorge
2016-01-01
Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered research into automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.
Environmental modeling framework invasiveness: analysis and implications
Environmental modeling frameworks support scientific model development by providing an Application Programming Interface (API) which model developers use to implement models. This paper presents results of an investigation on the framework invasiveness of environmental modeling frameworks. Invasiven...
The mixed effects of migration: community-level migration and birthweight in Mexico.
Hamilton, Erin R; Choi, Kate H
2015-05-01
Research on the relationship between migration and infant health in Mexico finds that migration has mixed impacts on the risk of low birthweight (LBW). Whereas the departure and absence of household and community members are harmful, remittances are beneficial. We extend this work by considering a different measure of infant health in addition to LBW: macrosomia (i.e., heavy birthweight), which is associated with infant, child, and maternal morbidities but has a different social risk profile from LBW. We link the 2008 and 2009 Mexican birth certificates with community data from the 2000 Mexican census to analyze the association between various dimensions of community-level migration (i.e., rates of out-migration, receipt of remittances, and return migration) and the risk of LBW and macrosomia. We examine this association using two sets of models which differ in the extent to which they account for endogeneity. We find that the health impacts of migration differ depending not only on the dimension of migration, but also on the measure of health, and that they are robust to potential sources of endogeneity. Whereas community remittances and return migration are associated with lower risk of LBW, they are associated with increased risk of macrosomia. By contrast, out-migration is associated with increased risk of LBW and lower risk of macrosomia. Our analysis of endogeneity suggests that bias resulting from unmeasured differences between communities with different levels of migration may result in an underestimate of the impacts of community migration on birthweight.
Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review
Antonsson, Erik; Gombosi, Tamas
2005-01-01
Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M and S environments and infrastructure.
Visual behaviour analysis and driver cognitive model
Baujon, J.; Basset, M.; Gissinger, G.L. [Mulhouse Univ. (France). MIPS/MIAM Lab.]
2001-07-01
Recent studies on driver behaviour have shown that perception - mainly visual but also proprioceptive perception - plays a key role in the "driver-vehicle-road" system and so considerably affects the driver's decision making. Within the framework of the behaviour analysis and studies low-cost system (BASIL), this paper presents a correlative, qualitative and quantitative study comparing the information given by visual perception with that given by the trajectory followed. This information will help to obtain a cognitive model of the Rasmussen type according to different driver classes. Many experiments in real driving situations have been carried out for different driver classes and for a given trajectory profile, using a test vehicle and innovative, specially designed, real-time tools such as the vision system or the positioning module. (orig.)
Structured analysis and modeling of complex systems
Strome, David R.; Dalrymple, Mathieu A.
1992-01-01
The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.
Modeling and Exergy Analysis of District Cooling
Nguyen, Chan
In this thesis energy, exergy and exergoeconomic analyses have been carried out on a number of co-generation energy systems involving cooling. The models and methods developed can be used as a framework to improve the district heating and cooling system thermodynamically and ... in a district heating system based on combined heat and power plants (CHP). A theoretical comparison of trigeneration (cooling, heating and electricity) systems, a traditional system and a recovery system is carried out. The comparison is based on the systems' overall exergy efficiency. ... As a principal example, the CO2 emission for each of the cooling and heating consumers is found. The conclusion is analogous to the exergy costing method, i.e. the exergoenvironmental method can be used as motivation for reducing CO2 emission. One of the main obstacles with district cooling in a traditional water...
Early Start DENVER Model: A Meta - analysis
Jane P. Canoy
2015-11-01
Full Text Available Each child with Autism Spectrum Disorder has different symptoms, skills and types of impairment or disorder from other children; this is why the word "spectrum" is included in the name of this disorder. Eapen, Crncec, and Walter (2013) claimed that there is emerging evidence that early intervention offers the greatest gains in a child's development during the first years of life, as "brain plasticity" is high during this period. The only intervention program model for children as young as 18 months that has been validated in a randomized clinical trial is the Early Start Denver Model (ESDM). This study aimed to determine the effectiveness of the outcomes of ESDM for young children with Autism Spectrum Disorders. The study used the meta-analysis method. The researcher utilized studies related to ESDM published in refereed journals, all available online. Five studies were included, totaling 149 children exposed to ESDM. To examine the pooled effects of ESDM on a variety of outcomes, a meta-analytic procedure was performed after extraction of the outcome data. Comprehensive Meta Analysis Version 3.3.070 was used to analyze the data. The effectiveness of the outcomes of ESDM for young children with Autism Spectrum Disorder (ASD) depends highly on the intensity of intervention and on younger child age. This study provides a basis for effectively implementing an early intervention for children with autism, such as ESDM, that shows strong outcome effects for children with Autism Spectrum Disorder.
Saturn Ring Data Analysis and Thermal Modeling
Dobson, Coleman
2011-01-01
CIRS, VIMS, UVIS, and ISS (Cassini's Composite Infrared Spectrometer, Visual and Infrared Mapping Spectrometer, Ultraviolet Imaging Spectrometer and Imaging Science Subsystem, respectively) have each operated in a multidimensional observation space and have acquired scans of the lit and unlit rings at multiple phase angles. To better understand physical and dynamical ring particle parametric dependence, we co-registered profiles from these instruments, taken at a wide range of wavelengths from the ultraviolet through the thermal infrared, to associate changes in ring particle temperature with changes in observed brightness, specifically with albedos inferred by ISS, UVIS and VIMS. We work in a parameter space where the solar elevation range is constrained to 12 deg - 14 deg and the chosen radial region is the B3 region of the B ring; this region is the most optically thick region in Saturn's rings. From this compilation of multiple-wavelength data, we construct and fit phase curves and color ratios using independent dynamical thermal models for ring structure and overplot Saturn, Saturn ring, and solar spectra. Analysis of phase curve construction and color ratios reveals thermal emission to fall within the extrema of the ISS bandwidth and a geometrical dependence of reddening on phase angle, respectively. Analysis of spectra reveals that Cassini CIRS Saturn spectra dominate Cassini CIRS B3 ring spectra from 19 to 1000 microns, while the Earth-based B ring spectrum dominates the Earth-based Saturn spectrum from 0.4 to 4 microns. From our fits we test our dynamical thermal models; from the phase curves we derive ring albedos and non-Lambertian properties of the ring particle surfaces; and from the color ratios we examine multiple scattering within the regolith of ring particles.
ENGINEERING METHODOLOGY FOR EMPLOYING ARCHITECTURE IN SYSTEM ANALYSIS: DEVELOPING SIMULATION MODELS USING SYSTEMS MODELING LANGUAGE PRODUCTS TO LINK...
2016-06-01
...to model-based systems engineering (MBSE) by formally defining an MBSE methodology for employing architecture in system analysis (MEASA) that presents
Modeling for Deformable Body and Motion Analysis: A Review
Hailang Pan
2013-01-01
Full Text Available This paper surveys modeling methods for the deformable human body and motion analysis over the past 30 years. First, elementary knowledge of human body expression and modeling is introduced. Then, typical human modeling technologies, including 2D models, 3D surface models, geometry-based, physics-based, and anatomy-based approaches, and model-based motion analysis, are summarized, and the characteristics of these technologies are analyzed. The technology accumulated in the field is outlined as an overview.
Development of hydrogen combustion analysis model
Lim, Tae Jin; Lee, K. D.; Kim, S. N. [Soongsil University, Seoul (Korea, Republic of); Hong, J. S.; Kwon, H. Y. [Seoul National Polytechnic University, Seoul (Korea, Republic of); Kim, Y. B.; Kim, J. S. [Seoul National University, Seoul (Korea, Republic of)
1997-07-01
The objective of this project is to construct a credible database (DB) for component reliability by developing methodologies and computer codes for assessing component independent-failure and common cause failure (CCF) probability, incorporating the applicability and dependency of the data. In addition, the ultimate goal is to systematize all the analysis procedures so as to provide plans for preventing component failures by employing flexible tools for the change of specific plant or data sources. For the first subject, we construct a DB for the similarity index and dependence matrix and propose a systematic procedure for data analysis by investigating the similarity and redundancy of the generic data sources. Next, we develop a computer code for this procedure and construct a reliability database for major components. The second subject is focused on developing a CCF procedure for assessing the plant-specific defense ability, rather than developing another CCF model. We propose a procedure and computer code for estimating CCF event probability by incorporating plant-specific defensive measures. 116 refs., 25 tabs., 24 figs. (author)
Production TTR modeling and dynamic buckling analysis
Hugh Liu; John Wei; Edward Huang
2013-01-01
In a typical tension leg platform (TLP) design, the top tension factor (TTF), measuring the top tension of a top tensioned riser (TTR) relative to its submerged weight in water, is one of the most important design parameters and has to be specified properly. While a very small TTF may lead to excessive vortex-induced vibration (VIV), clashing issues and possible compression close to the seafloor, an unnecessarily high TTF may translate into excessive riser cost and vessel payload, and may even have impacts on the TLP sizing and design in general. In the process of a production TTR design, it was found that its outer casing can be subjected to compression in a worst-case scenario with some extreme metocean and hardware conditions. The present paper shows how finite element analysis (FEA) models using beam elements and two different software packages (Flexcom and ABAQUS) are constructed to simulate the TTR properly, especially the pipe-in-pipe effects. An ABAQUS model with hybrid elements (beam elements globally + shell elements locally) can be used to investigate how the outer casing behaves under compression. It is shown that for the specified TTR design, even with its outer casing under some local compression in the worst-case scenario, dynamic buckling would not occur; therefore the TTR design is adequate.
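The TTF definition used in the abstract admits a one-line calculation; the tension and weight values below are hypothetical, not taken from the paper:

```python
def top_tension_factor(top_tension_kn, submerged_weight_kn):
    """TTF: applied top tension of the riser relative to its submerged weight in water."""
    return top_tension_kn / submerged_weight_kn

# Hypothetical riser: 1600 kN applied top tension, 1000 kN submerged weight
ttf = top_tension_factor(1600.0, 1000.0)  # -> 1.6
```

A TTF near 1 would leave little margin against compression near the seafloor, which is the low-end risk the abstract describes.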
Tradeoff Analysis for Optimal Multiobjective Inventory Model
Longsheng Cheng
2013-01-01
Full Text Available A deterministic inventory model, the economic order quantity (EOQ), reveals that inventory carrying and ordering frequency follow a tradeoff relation. For probabilistic demand, the tradeoff surface among annual orders, expected inventory and shortage is useful because it quantifies what the firm must pay in terms of ordering workload and inventory investment to meet the desired customer service. Based on a tri-objective inventory model, this paper employs successive approximation to obtain efficient control policies outlining tradeoffs among conflicting objectives. The nondominated solutions obtained by successive approximation are further used to plot a 3D scatterplot for exploring the relationships between objectives. Visualization of the tradeoffs displayed by the scatterplots justifies the computational effort of the experiment, although the several iterations needed to reach a nondominated solution make the solution procedure lengthy and tedious. Information elicited from the inverse relationships may help managers make deliberate inventory decisions. For future work, developing an efficient and effective solution procedure for tradeoff analysis in multiobjective inventory management seems imperative.
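The deterministic EOQ tradeoff the abstract builds on can be made concrete with a short sketch; the demand, ordering-cost and holding-cost figures are hypothetical, not from the study:

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2 D S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def annual_cost(q, annual_demand, order_cost, holding_cost):
    """Sum of annual ordering cost (D/Q * S) and carrying cost (Q/2 * H)."""
    return annual_demand / q * order_cost + q / 2 * holding_cost

D, S, H = 1200.0, 50.0, 6.0   # hypothetical: units/year, cost/order, holding cost/unit/year
q_star = eoq(D, S, H)          # optimal order quantity, ~141.4 units
# At Q*, ordering cost equals carrying cost -- the tradeoff point:
print(q_star, annual_cost(q_star, D, S, H))
```

At Q* the two cost components balance exactly; the tri-objective model in the paper generalizes this single tradeoff point to a surface when demand is probabilistic and shortages enter the picture.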
Talking Cure Models: A Framework of Analysis
Marx, Christopher; Benecke, Cord; Gumz, Antje
2017-01-01
Psychotherapy is commonly described as a “talking cure,” a treatment method that operates through linguistic action and interaction. The operative specifics of therapeutic language use, however, are insufficiently understood, mainly due to a multitude of disparate approaches that advance different notions of what “talking” means and what “cure” implies in the respective context. Accordingly, a clarification of the basic theoretical structure of “talking cure models,” i.e., models that describe therapeutic processes with a focus on language use, is a desideratum of language-oriented psychotherapy research. Against this background the present paper suggests a theoretical framework of analysis which distinguishes four basic components of “talking cure models”: (1) a foundational theory (which suggests how linguistic activity can affect and transform human experience), (2) an experiential problem state (which defines the problem or pathology of the patient), (3) a curative linguistic activity (which defines linguistic activities that are supposed to effectuate a curative transformation of the experiential problem state), and (4) a change mechanism (which defines the processes and effects involved in such transformations). The purpose of the framework is to establish a terminological foundation that allows for systematically reconstructing basic properties and operative mechanisms of “talking cure models.” To demonstrate the applicability and utility of the framework, five distinct “talking cure models” which spell out the details of curative “talking” processes in terms of (1) catharsis, (2) symbolization, (3) narrative, (4) metaphor, and (5) neurocognitive inhibition are introduced and discussed in terms of the framework components. In summary, we hope that our framework will prove useful for the objective of clarifying the theoretical underpinnings of language-oriented psychotherapy research and help to establish a more comprehensive
Linking advanced fracture models to structural analysis
Chiesa, Matteo
2001-07-01
Shell structures with defects occur in many situations. The defects are usually introduced during the welding process necessary for joining different parts of the structure. Higher utilization of structural materials leads to a need for accurate numerical tools for reliable prediction of structural response. Direct discretization of a cracked shell structure with solid finite elements, in order to perform an integrity assessment of the structure in question, leads to very large problems, making such analysis infeasible in structural applications. In this study a link between local material models and structural analysis is outlined. An "ad hoc" element formulation is used in order to connect complex material models to the finite element framework used for structural analysis. An improved elasto-plastic line spring finite element formulation, used to take cracks into account, is linked to shell elements, which are further linked to beam elements. In this way one obtains a global model of the shell structure that also accounts for local flexibilities and fractures due to defects. An important advantage of such an approach is direct fracture mechanics assessment, e.g. via the computed J-integral or CTOD. A recent development in this approach is the notion of two-parameter fracture assessment, meaning that the crack-tip stress tri-axiality (constraint) is employed in determining the corresponding fracture toughness, giving a much more realistic capacity of cracked structures. The present thesis is organized in six research articles and an introductory chapter that reviews important background literature related to this work. Papers I and II address the performance of shell and line spring finite elements as a cost-effective tool for performing the numerical calculations needed for a fracture assessment. In Paper II a failure assessment, based on the testing of a constraint-corrected fracture mechanics specimen under tension, is
Comparison of Statistical Models for Regional Crop Trial Analysis
ZHANG Qun-yuan; KONG Fan-ling
2002-01-01
Based on a review and comparison of the main statistical analysis models for estimating variety-environment cell means in regional crop trials, a new statistical model, the LR-PCA composite model, was proposed, and the predictive precision of these models was compared by cross validation on example data. Results showed that the order of model precision was LR-PCA model > AMMI model > PCA model > Treatment Means (TM) model > Linear Regression (LR) model > Additive Main Effects ANOVA model. The precision gain factor of the LR-PCA model was 1.55, an increase of 8.4% compared with AMMI.
Air Gun Launch Simulation Modeling and Finite Element Model Sensitivity Analysis
2006-01-01
Air Gun Launch Simulation Modeling and Finite Element Model Sensitivity Analysis, by Mostafiz R. Chowdhury and Ala Tabiei. ARL-TR-3703, Adelphi, MD 20783-1145, January 2006.
[Tuscan Chronic Care Model: a preliminary analysis].
Barbato, Angelo; Meggiolaro, Angela; Rossi, Luigi; Fioravanti, C; Palermita, F; La Torre, Giuseppe
2015-01-01
The aim of this study is to present a preliminary analysis of the efficacy and effectiveness of a model of care for the chronically ill (Chronic Care Model, CCM). The analysis took into account 106 territorial modules, 1,016 general practitioners and 1,228,595 patients. The diagnostic and therapeutic pathways activated (PDTA) involved four chronic conditions, selected according to prevalence and incidence in the Tuscany Region: diabetes mellitus (DM), heart failure (SC), chronic obstructive pulmonary disease (COPD) and stroke. Six epidemiological indicators of process and output were selected in order to measure the model of care, before and after its application: adherence to the specific follow-up for each pathology (use of clinical and laboratory indicators), average annual per-capita expenditure (€) for laboratory and instrumental diagnostic tests, average annual per-capita expenditure for specialist visits, hospitalization rate for diseases related to the main pathology, hospitalization rate for long-term complications, and rate of access to the emergency department (ED). Data were collected through the database; the differences before and after the intervention, and between exposed and unexposed, were analyzed by the "before-after (controlled and uncontrolled) studies" method. The impact of the intervention was calculated as DD (difference of the differences). DM management showed increased adherence to follow-up (DD: +8.1%) and use of laboratory diagnostics (DD: +4.9 €/year/pc), less hospitalization for long-term complications and for endocrine-related diseases (DD, respectively: 5.8/1000 and +1.2/1000), and a smaller increase in ED access (DD: -1.6/1000), despite a slight increase in specialist visits (DD: +0.38 €/year/pc). The management of SC initially showed rising adherence to follow-up (DD: +2.3%), a decrease in specialist visits (DD: 1.03 €/year/pc), and in hospitalization and ED access for exacerbations (DD: -4.4/1000 and DD: -6
Model Based Analysis and Test Generation for Flight Software
Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep
2009-01-01
We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines Math- Works and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.
Modeling and analysis of solar distributed generation
Ortiz Rivera, Eduardo Ivan
Recent changes in the global economy are having a large impact on our daily life. The price of oil is increasing and reserves are shrinking every day. Dramatic demographic changes are also impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons many countries are looking to alternative energy to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracker (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and previous knowledge of the load or load-matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are monitoring climate conditions, the elimination of temperature and solar irradiance sensors, cost reductions for a photovoltaic inverter system, and the development of new algorithms to be integrated with maximum
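The thesis's own MPPT algorithms (the Rolle's/Lagrange-based, fractional-polynomial, and optimal-duty-cycle methods) are not reproduced here. As a generic illustration of what an MPPT loop does, the sketch below applies the classic perturb-and-observe scheme to a hypothetical power-versus-duty-cycle curve; both the curve and its peak location are invented for the example:

```python
def pv_power(duty):
    """Hypothetical unimodal power curve vs. converter duty cycle
    (a stand-in for a real PVM model; peak placed at duty = 0.6)."""
    return max(0.0, 100.0 - 250.0 * (duty - 0.6) ** 2)

def perturb_and_observe(steps=200, duty=0.3, delta=0.01):
    """Generic P&O MPPT: keep stepping the duty cycle in whichever
    direction increases the measured power; reverse when power drops."""
    power = pv_power(duty)
    for _ in range(steps):
        candidate = min(1.0, max(0.0, duty + delta))
        p_new = pv_power(candidate)
        if p_new < power:       # power dropped: reverse the perturbation
            delta = -delta
        else:                   # power improved (or held): accept the step
            duty, power = candidate, p_new
    return duty, power

d, p = perturb_and_observe()
# d converges near the hypothetical maximum-power duty cycle of 0.6
```

P&O is the simplest MPPT baseline; the algorithms in the thesis aim to reach the maximum power point without this trial-and-error oscillation around the peak.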
van Riel, Natal A W
2006-12-01
Systems biology applies quantitative, mechanistic modelling to study genetic networks, signal transduction pathways and metabolic networks. Mathematical models of biochemical networks can look very different. An important reason is that the purpose and application of a model are essential for the selection of the best mathematical framework. Fundamental aspects of selecting an appropriate modelling framework and a strategy for model building are discussed. Concepts and methods from system and control theory provide a sound basis for the further development of improved and dedicated computational tools for systems biology. Identification of the network components and rate constants that are most critical to the output behaviour of the system is one of the major problems raised in systems biology. Current approaches and methods of parameter sensitivity analysis and parameter estimation are reviewed. It is shown how these methods can be applied in the design of model-based experiments which iteratively yield models that are decreasingly wrong and increasingly gain predictive power.
Applied data analysis and modeling for energy engineers and scientists
Reddy, T Agami
2011-01-01
""Applied Data Analysis and Modeling for Energy Engineers and Scientists"" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and
Molten carbonate fuel cells. Modeling, analysis, simulation, and control
Sundmacher, K.; Kienle, A. [Max-Planck-Institut fuer Dynamik Komplexer Technischer Systeme, Magdeburg (Germany); Pesch, H.J. [Bayreuth Univ. (Germany). Lehrstuhl fuer Ingenieurmathematik; Berndt, J.F. [IPF Beteiligungsgesellschaft Berndt KG, Reilingen (Germany); Huppmann, G. (eds.) [MTU CFC Solutions GmbH, Muenchen (Germany)
2007-07-01
This book presents model-based concepts for process analysis and control on a generalized basis. It is structured as follows: Part I - DESIGN AND OPERATION: MTU's Carbonate Fuel Cell HotModule; Operational Experiences. Part II - MODEL-BASED PROCESS ANALYSIS: MCFC Reference Model; Index Analysis of Models; Parameter Identification; Steady State Process Analysis; Hot Spot Formation and Steady State Multiplicities; Conceptual Design and Reforming Concepts. Part III - OPTIMIZATION AND ADVANCED CONTROL: Model Reduction and State Estimation; Optimal Control Strategies; Optimization of Reforming Catalyst Distribution.
Analysis on the Logarithmic Model of Relationships
[Anonymous]
2005-01-01
The logarithmic model is often used to describe the relationships between factors, and it often gives good statistical characteristics. Yet, in the process of modeling for soil and water conservation, we find that this "good" model cannot guarantee good results. In this paper we inquire into the intrinsic reasons. It is shown that the logarithmic model has the property of enlarging or reducing model errors, and the disadvantages of the logarithmic model are analyzed.
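The enlarging/reducing property described above can be seen directly for one common case of log modeling: an additive residual on the ln(y) scale back-transforms to a multiplicative factor, so the implied error on the original y scale is proportional to y itself. A minimal sketch (the residual and y values are hypothetical):

```python
import math

# If a model is fitted on the ln(y) scale, an additive residual eps there
# becomes a multiplicative factor after back-transforming:
#   y_pred = exp(ln(y_true) + eps) = y_true * exp(eps)

def backtransform_error(y_true, eps):
    """Absolute error in y implied by a residual eps on the ln(y) scale."""
    return y_true * (math.exp(eps) - 1.0)

# The same log-scale residual maps to very different y-scale errors:
big = backtransform_error(10.0, 0.1)    # enlarged for large y (~1.05)
small = backtransform_error(0.1, 0.1)   # reduced for small y (~0.0105)
```

The same residual is magnified a hundredfold between y = 0.1 and y = 10, which is the kind of error distortion the paper attributes to the logarithmic model.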
Model performance analysis and model validation in logistic regression
Rosa Arboretti Giancristofaro
2007-10-01
Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different model validation techniques. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model using an example taken from a management study.
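The abstract does not spell out which quantitative performance measures the authors use; as an illustration of the kind of measure typically computed when validating a logistic regression model, the sketch below derives confusion-matrix summaries from predicted probabilities on a hypothetical held-out sample:

```python
def performance_measures(y_true, p_pred, threshold=0.5):
    """Confusion-matrix-based measures for a fitted binary classifier.
    y_true: 0/1 outcomes; p_pred: predicted probabilities from the model."""
    tp = sum(1 for y, p in zip(y_true, p_pred) if y == 1 and p >= threshold)
    tn = sum(1 for y, p in zip(y_true, p_pred) if y == 0 and p < threshold)
    fp = sum(1 for y, p in zip(y_true, p_pred) if y == 0 and p >= threshold)
    fn = sum(1 for y, p in zip(y_true, p_pred) if y == 1 and p < threshold)
    n = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / n,
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }

# Hypothetical held-out outcomes and model probabilities:
y = [1, 0, 1, 1, 0, 0, 1, 0]
p = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]
m = performance_measures(y, p)
```

Validation procedures of the kind the paper describes typically compute such measures on data not used for fitting, so that they estimate out-of-sample rather than in-sample performance.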
Likelihood analysis of the minimal AMSB model
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)
2017-04-15
We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ^0_1, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m_{χ^0_1}
Integration of Design and Control through Model Analysis
Russel, Boris Mariboe; Henriksen, Jens Peter; Jørgensen, Sten Bay;
2002-01-01
A systematic computer aided analysis of the process model is proposed as a pre-solution step for integration of design and control problems. The process model equations are classified in terms of balance equations, constitutive equations and conditional equations. Analysis of the phenomena models...... (structure selection) issues for the integrated problems are considered. (C) 2002 Elsevier Science Ltd. All rights reserved....
Evaluation of RCAS Inflow Models for Wind Turbine Analysis
Tangler, J.; Bir, G.
2004-02-01
The finite element structural modeling in the Rotorcraft Comprehensive Analysis System (RCAS) provides a state-of-the-art approach to aeroelastic analysis. This, coupled with its ability to model all turbine components, results in a methodology that can simulate the complex system interactions characteristic of large wind turbines. In addition, RCAS is uniquely capable of modeling advanced control algorithms and the resulting dynamic responses.
[Model-based biofuels system analysis: a review].
Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin
2011-03-01
Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.
Model Theory in Algebra, Analysis and Arithmetic
Dries, Lou; Macpherson, H Dugald; Pillay, Anand; Toffalori, Carlo; Wilkie, Alex J
2014-01-01
Presenting recent developments and applications, the book focuses on four main topics in current model theory: 1) the model theory of valued fields; 2) undecidability in arithmetic; 3) NIP theories; and 4) the model theory of real and complex exponentiation. Young researchers in model theory will particularly benefit from the book, as will more senior researchers in other branches of mathematics.
An Extended Analysis of Requirements Traceability Model
Jiang Dandong(蒋丹东); Zhang Shensheng; Chen Lu
2004-01-01
A new extended meta model of traceability is presented. Then, a formalized fine-grained model of traceability is described. Some major issues about this model, including trace units, requirements and relations within the model, are further analyzed. Finally, a case study that comes from a key project of 863 Program is given.
Managing Analysis Models in the Design Process
Briggs, Clark
2006-01-01
Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.
Loss Given Default Modelling: Comparative Analysis
Yashkir, Olga; Yashkir, Yuriy
2013-01-01
In this study we investigated several of the most popular Loss Given Default (LGD) models (LSM, Tobit, Three-Tiered Tobit, Beta Regression, Inflated Beta Regression, Censored Gamma Regression) in order to compare their performance. We show that for a given input data set, the quality of the model calibration depends mainly on the proper choice (and availability) of explanatory variables (model factors), but not on the fitting model. Model factors were chosen based on the amplitude of their correlati...
Wang, Xiong; Zhu, Yadong; Zhou, Pu; Wang, Xiaolin; Xiao, Hu; Si, Lei
2013-11-04
We propose and demonstrate a tunable multiwavelength fiber laser employing polarization-maintaining Tm-doped fiber based on polarization rotation and the four-wave-mixing effect. Polarization-maintaining Tm-doped fiber and polarization controllers were employed to manipulate the polarization modes in the laser, and 400 m of long single-mode passive fiber was used to enhance the four-wave-mixing effect and suppress the polarization mode competition. Stable fiber laser operation of 1-6 wavelengths around 1.9 μm was achieved at room temperature. The wavelengths can be tuned by adjusting the polarization controllers. The optical signal-to-noise ratio of the laser is more than 31 dB. The wavelength shift is less than 0.05 nm and the peak fluctuation of each wavelength is analyzed. For most of the wavelengths the peak fluctuations are less than 3 dB, and the peak fluctuations of the more stable wavelengths are below 1.5 dB.
Conrado, Daniela J; Nicholas, Timothy; Tsai, Kuenhi; Macha, Sreeraj; Sinha, Vikram; Stone, Julie; Corrigan, Brian; Bani, Massimo; Muglia, Pierandrea; Watson, Ian A; Kern, Volker D; Sheveleva, Elena; Marek, Kenneth; Stephenson, Diane T; Romero, Klaus
2017-07-27
Given the recognition that disease-modifying therapies should focus on earlier Parkinson's disease stages, trial enrollment based purely on clinical criteria poses significant challenges. The goal herein was to determine the utility of dopamine transporter neuroimaging as an enrichment biomarker in early motor Parkinson's disease clinical trials. Patient-level longitudinal data of 672 subjects with early-stage Parkinson's disease in the Parkinson's Progression Markers Initiative (PPMI) observational study and the Parkinson Research Examination of CEP-1347 Trial (PRECEPT) clinical trial were utilized in a linear mixed-effects model analysis. The rate of worsening in the motor scores between subjects with or without a scan without evidence of dopamine transporter deficit was different both statistically and clinically. The average difference in the change from baseline of motor scores at 24 months between biomarker statuses was -3.16 (90% confidence interval [CI] = -0.96 to -5.42) points. Dopamine transporter imaging could identify subjects with a steeper worsening of the motor scores, allowing trial enrichment and 24% reduction of sample size. Published 2017. This article is a U.S. Government work and is in the public domain in the USA. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
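The record above rests on a linear mixed-effects model of motor-score worsening by biomarker status. As a hedged illustration of the idea only (not the study's model or data: all numbers are invented, and a two-stage per-subject slope comparison stands in for the full mixed-effects fit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated longitudinal motor scores for two biomarker groups over
# visits at 0, 6, 12, 18 and 24 months (all numbers are illustrative).
n_per_group = 100
visits = np.array([0.0, 6.0, 12.0, 18.0, 24.0])

def simulate(slope_mean, n):
    # Subject-specific intercepts and slopes play the role of random effects.
    intercepts = rng.normal(20.0, 3.0, n)
    slopes = rng.normal(slope_mean, 0.05, n)
    noise = rng.normal(0.0, 1.0, (n, visits.size))
    return intercepts[:, None] + slopes[:, None] * visits + noise

deficit = simulate(0.25, n_per_group)     # steeper worsening
no_deficit = simulate(0.12, n_per_group)  # slower worsening

def per_subject_slopes(y):
    # OLS slope for each subject: a two-stage stand-in for a mixed model.
    x = visits - visits.mean()
    return (y - y.mean(axis=1, keepdims=True)) @ x / (x @ x)

# Group difference in 24-month change, analogous to the paper's contrast
# between subjects with and without a dopamine transporter deficit.
diff_24m = 24.0 * (per_subject_slopes(deficit).mean()
                   - per_subject_slopes(no_deficit).mean())
```

A full mixed-effects fit would estimate the group-by-time interaction and its confidence interval jointly; the two-stage version above only conveys the structure of the comparison.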
Input modelling for subchannel analysis of CANFLEX fuel bundle
Park, Joo Hwan; Jun, Ji Su; Suk, Ho Chun [Korea Atomic Energy Research Institute, Taejon (Korea)]
1998-06-01
This report describes the input modelling for subchannel analysis of the CANFLEX fuel bundle using the CASS (Candu thermalhydraulic Analysis by Subchannel approacheS) code, which has been developed for subchannel analysis of the CANDU fuel channel. The CASS code can give different calculation results according to the user's input modelling. Hence, the objective of this report is to provide the background information for the input modelling and the accuracy of the input data, and thereby give confidence in the calculation results. (author). 11 refs., 3 figs., 4 tabs.
Discrete Event Simulation Modeling and Analysis of Key Leader Engagements
2012-06-01
Discrete Event Simulation Modeling and Analysis of Key Leader Engagements, Master's Thesis by Clifford C. Wakeman, June 2012. Thesis Co-Advisors: Arnold H. Buss, Susan... Approved for public release; distribution is unlimited.
Development of statistical models for data analysis
Downham, D.Y.
2000-07-01
Incidents that cause, or could cause, injury to personnel, and that satisfy specific criteria, are reported to the Offshore Safety Division (OSD) of the Health and Safety Executive (HSE). The underlying purpose of this report is to improve ways of quantifying risk, a recommendation in Lord Cullen's report into the Piper Alpha disaster. Records of injuries and hydrocarbon releases from 1 January 1991 to 31 March 1996 are analysed, because the reporting of incidents was standardised after 1990. Models are identified for risk assessment and some are applied. The appropriate analyses of one or two factors (or variables) are tests of uniformity or of independence. Radar graphs are used to represent some temporal variables. Cusums are applied for the analysis of incident frequencies over time, and could be applied for regular monitoring. Log-linear models for Poisson-distributed data are identified as being suitable for identifying 'non-random' combinations of more than two factors. Some questions cannot be addressed with the available data: for example, more data are needed to assess the risk of injury per employee in a time interval. If the questions are considered sufficiently important, resources could be assigned to obtain the data. Some of the main results from the analyses are as follows: the cusum analyses identified a change-point at the end of July 1993, when the reported number of injuries reduced by 40%. Injuries were more likely to occur between 8am and 12am or between 2pm and 5pm than at other times: between 2pm and 3pm the number of injuries was almost twice the average and more than three times the smallest. No seasonal effects in the numbers of injuries were identified. Three-day injuries occurred more frequently on the 5th, 6th and 7th days into a tour of duty than on other days, and less frequently on the 13th and 14th days of a tour of duty. An injury classified as 'lifting or craning' was
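The cusum change-point analysis described above can be sketched as follows (synthetic Poisson counts with an injected 40% rate reduction stand in for the OSD incident records; the cusum-extremum rule is the usual heuristic for locating a mean shift, not necessarily the report's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly injury counts with a 40% drop in rate after month 30
# (rates 50 -> 30 are invented, chosen only to make the shift visible).
rate = np.concatenate([np.full(30, 50.0), np.full(34, 30.0)])
counts = rng.poisson(rate)

# Cusum of deviations from the overall mean; the extremum of the cumulative
# sum marks the most likely change-point for a shift in the mean level.
cusum = np.cumsum(counts - counts.mean())
change_point = int(np.argmax(np.abs(cusum))) + 1  # months elapsed
```

For regular monitoring, the same cusum would be updated each month and an alarm raised when it drifts beyond a decision threshold.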
Model Analysis Assessing the dynamics of student learning
Bao, Lei; Redish, Edward F.
2002-01-01
In this paper we present a method of modeling and analysis that permits the extraction and quantitative display of detailed information about the effects of instruction on a class's knowledge. The method relies on a cognitive model that represents student thinking in terms of mental models. Students frequently fail to recognize relevant conditions that lead to appropriate uses of their models. As a result they can use multiple models inconsistently. Once the most common mental models have been determined by qualitative research, they can be mapped onto a multiple-choice test. Model analysis permits the interpretation of such a situation. We illustrate the use of our method by analyzing results from the FCI.
Analysis of Cortical Flow Models In Vivo
Benink, Hélène A.; Mandato, Craig A.; Bement, William M.
2000-01-01
Cortical flow, the directed movement of cortical F-actin and cortical organelles, is a basic cellular motility process. Microtubules are thought to somehow direct cortical flow, but whether they do so by stimulating or inhibiting contraction of the cortical actin cytoskeleton is the subject of debate. Treatment of Xenopus oocytes with phorbol 12-myristate 13-acetate (PMA) triggers cortical flow toward the animal pole of the oocyte; this flow is suppressed by microtubules. To determine how this suppression occurs and whether it can control the direction of cortical flow, oocytes were subjected to localized manipulation of either the contractile stimulus (PMA) or microtubules. Localized PMA application resulted in redirection of cortical flow toward the site of application, as judged by movement of cortical pigment granules, cortical F-actin, and cortical myosin-2A. Such redirected flow was accelerated by microtubule depolymerization, showing that the suppression of cortical flow by microtubules is independent of the direction of flow. Direct observation of cortical F-actin by time-lapse confocal analysis in combination with photobleaching showed that cortical flow is driven by contraction of the cortical F-actin network and that microtubules suppress this contraction. The oocyte germinal vesicle serves as a microtubule organizing center in Xenopus oocytes; experimental displacement of the germinal vesicle toward the animal pole resulted in localized flow away from the animal pole. The results show that 1) cortical flow is directed toward areas of localized contraction of the cortical F-actin cytoskeleton; 2) microtubules suppress cortical flow by inhibiting contraction of the cortical F-actin cytoskeleton; and 3) localized, microtubule-dependent suppression of actomyosin-based contraction can control the direction of cortical flow. We discuss these findings in light of current models of cortical flow. PMID:10930453
Quantile plots in the analysis of heteroscedastic models
Pepió Viñals, Montserrat; Polo Miranda, Carlos
1992-01-01
Recent developments in quality engineering methods have led to considerable interest in the analysis of variance, building a dispersion model, identifying important effects from replicated experiments...
Introduction to mixed modelling beyond regression and analysis of variance
Galwey, N W
2007-01-01
Mixed modelling is one of the most promising and exciting areas of statistical analysis, enabling more powerful interpretation of data through the recognition of random effects. However, many perceive mixed modelling as an intimidating and specialized technique.
EXPOSURE ANALYSIS MODELING SYSTEM (EXAMS): USER MANUAL AND SYSTEM DOCUMENTATION
The Exposure Analysis Modeling System, first published in 1982 (EPA-600/3-82-023), provides interactive computer software for formulating aquatic ecosystem models and rapidly evaluating the fate, transport, and exposure concentrations of synthetic organic chemicals - pesticides, ...
Analysis on Some of Software Reliability Models
Anonymous
2001-01-01
Software reliability & maintainability evaluation tool (SRMET 3.0) is introduced in detail in this paper; it was developed by the Software Evaluation and Test Center of China Aerospace Mechanical Corporation. SRMET 3.0 is supported by seven software reliability models and four software maintainability models. Numerical characteristics of all those models are studied in depth in this paper, and corresponding numerical algorithms for each model are also given.
An Extensible Model and Analysis Framework
2010-11-01
Fragments of the abstract survive extraction: ...for a total of 543 seconds; for comparison purposes, in interpreted mode, opening the model took 224 seconds and running the model took 217 seconds; ...contains 19683 entities; a comparison of the key model complexity metrics may be found in Table 3; Triquetrum/RCP supports assembling in arbitrary ways; prototyped OSGi component architecture for use with Netbeans.
NBC Hazard Prediction Model Capability Analysis
1999-09-01
Fragments of the abstract survive extraction: ...Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998; based on the NOAA review, the VLSTRACK developers...; ...to substantial differences in predictions; HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model; SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method, in which an arbitrary time-dependent concentration field is represented
Analysis and modeling of parking behavior
Anonymous
2001-01-01
This paper analyzes the spatial structure of parking behavior and establishes a basic parking behavior model to represent the downtown parking problem. It establishes a parking pricing model to analyze the parking equilibrium with a positive parking fee, and uses a paired combinatorial logit model to analyze the effect of trip integrative cost on parking behavior. Empirical results show that the parking behavior model performs well.
Analysis and Modeling of Traffic in Modern Data Communication Networks
Babic, G.; Vandalore, B.; Jain, R.
1998-01-01
In performance analysis and design of communication networks, modeling data traffic is important. With the introduction of new applications, the characteristics of the data traffic change. We present a brief review of the different models of data traffic and how they have evolved. We present results of data traffic analysis and simulated traffic, which demonstrate that the packet train model fits the traffic at the source-destination level and the long-memory (self-similar) model fits the traffic at the agg...
Finite element analysis to model complex mitral valve repair.
Labrosse, Michel; Mesana, Thierry; Baxter, Ian; Chan, Vincent
2016-01-01
Although finite element analysis has been used to model simple mitral repair, it has not been used to model complex repair. A virtual mitral valve model was successful in simulating normal and abnormal valve function. Models were then developed to simulate an edge-to-edge repair and repair employing quadrangular resection. Stress contour plots demonstrated increased stresses along the mitral annulus, corresponding to the annuloplasty. The role of finite element analysis in guiding clinical practice remains undetermined.
EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS
LUIJBEN, TCW
1991-01-01
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank def
Likelihood analysis of the I(2) model
Johansen, Søren
1997-01-01
The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum like...
Model correction factor method for system analysis
Ditlevsen, Ove Dalager; Johannesen, Johannes M.
2000-01-01
Several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model but with clearly defined failure modes, the MCFM can be started from each idealized single-mode limit state in turn to identify a locally most central point on the elaborate limit state surface. Typically this procedure leads to a fewer number of locally most central failure points on the elaborate limit state surface than existing in the idealized model.
Modelling Immune System: Principles, Models,Analysis and Perspectives
Xiang-hua Li; Zheng-xuan Wang; Tian-yang Lu; Xiang-jiu Che
2009-01-01
The biological immune system is a complex adaptive system. There are many benefits to building a model of the immune system. Biological researchers can test hypotheses about the infection process or simulate the responses to some drugs. Computer researchers can build distributed, robust and fault-tolerant networks inspired by the functions of the immune system. This paper provides a comprehensive survey of the literature on modelling the immune system. From a methodology perspective, the paper compares and analyzes the existing approaches and models, and also highlights where research effort on immune models is likely to focus in the next few years.
Eclipsing binary stars modeling and analysis
Kallrath, Josef
1999-01-01
This book focuses on the formulation of mathematical models for the light curves of eclipsing binary stars, and on the algorithms for generating such models Since information gained from binary systems provides much of what we know of the masses, luminosities, and radii of stars, such models are acquiring increasing importance in studies of stellar structure and evolution As in other areas of science, the computer revolution has given many astronomers tools that previously only specialists could use; anyone with access to a set of data can now expect to be able to model it This book will provide astronomers, both amateur and professional, with a guide for - specifying an astrophysical model for a set of observations - selecting an algorithm to determine the parameters of the model - estimating the errors of the parameters It is written for readers with knowledge of basic calculus and linear algebra; appendices cover mathematical details on such matters as optimization, coordinate systems, and specific models ...
Electromagnetic cascade masquerade: a way to mimic $\\gamma$--ALP mixing effects in blazar spectra
Dzhatdoev, T A; Kircheva, A P; Lyukshin, A A
2016-01-01
Most of the studies on extragalactic {\\gamma}-ray propagation performed up to now only accounted for primary gamma-ray absorption and adiabatic losses (absorption-only model). However, there is growing evidence that this model is oversimplified and must be modified in some way. (...) There are many hints that a secondary component from electromagnetic cascades initiated by primary $\\gamma$-rays or nuclei may be observed in the spectra of some blazars. We study the impact of electromagnetic cascades from primary $\\gamma$-rays or protons on the physical interpretation of blazar spectra obtained with imaging Cherenkov telescopes. We use the publicly-available code ELMAG to compute observable spectra of electromagnetic cascades from primary $\\gamma$-rays. For the case of primary proton, we develop a simple, fast, and reasonably accurate hybrid method to calculate the observable spectrum. (...) Electromagnetic cascades show at least two very distinct regimes labeled by the energy of the primary $\\gamma$-ray ($E_{0...
Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models
Kruger, FJ
1985-03-01
Full Text Available This report outlines progress with the development of computer based dynamic simulation models for ecosystems in the fynbos biome. The models are planned to run on a portable desktop computer with 500 kbytes of memory, extended BASIC language...
Evaluation of Thermal Margin Analysis Models for SMART
Seo, Kyong Won; Kwon, Hyuk; Hwang, Dae Hyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]
2011-05-15
Thermal margin of SMART would be analyzed by three different methods. The first method is subchannel analysis by the MATRA-S code, which would provide reference data for the other two methods. The second method is an on-line few-channel analysis by the FAST code that would be integrated into SCOPS/SCOMS. The last one is a single-channel module analysis used in safety analysis. Several thermal margin analysis models for the SMART reactor core by subchannel analysis were set up and tested. We adopted a strategy of single-stage analysis for the thermal analysis of the SMART reactor core. The model should represent the characteristics of the SMART reactor core including the hot channel, and should be as simple as possible so that it can be evaluated within reasonable time and cost.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
Information Retrieval Interaction: an Analysis of Models
Farahnaz Sadoughi
2012-03-01
Full Text Available The information searching process is interactive: users have control over the searching process and can manage its results. In this process, the user's question matures according to the retrieved results. In addition, on the side of the information retrieval system, there are some processes that cannot be realized except by the user. In practice, this issue is most evident in "interaction", i.e. the process of the user's connection to other system elements, and in "relevance judgment". This paper first looks at the existence of interaction in information retrieval. Then the traditional model of information retrieval and its strong and weak points are reviewed. Finally, the current models of interactive information retrieval are elucidated, including Belkin's episodic model, Ingwersen's cognitive model, Sarasevic's stratified model, and Spinks' interactive feedback model.
Phenomenological analysis of the interacting boson model
Hatch, R. L.; Levit, S.
1982-01-01
The classical Hamiltonian of the interacting boson model is defined and expressed in terms of the conventional quadrupole variables. This is used in the analyses of the dynamics in the various limits of the model. The purpose is to determine the range and the features of the collective phenomena which the interacting boson model is capable of describing. In the commonly used version of the interacting boson model with one type of the s and d bosons and quartic interactions, this capability has certain limitations and the model should be used with care. A more sophisticated version of the interacting boson model with neutron and proton bosons is not discussed. NUCLEAR STRUCTURE Interacting bosons, classical IBM Hamiltonian in quadrupole variables, phenomenological content of the IBM and its limitations.
A Bayesian Analysis of Spectral ARMA Model
Manoel I. Silvestre Bezerra
2012-01-01
Full Text Available Bezerra et al. (2008) proposed a new method, based on Yule-Walker equations, to estimate the ARMA spectral model. In this paper, a Bayesian approach is developed for this model using the noninformative prior proposed by Jeffreys (1967). The Bayesian computations, simulation via Markov chain Monte Carlo (MCMC), are carried out, and characteristics of marginal posterior distributions such as the Bayes estimator and confidence interval for the parameters of the ARMA model are derived. Both methods are also compared with the traditional least squares and maximum likelihood approaches, and a numerical illustration with two examples of the ARMA model is presented to evaluate the performance of the procedures.
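A minimal sketch of Yule-Walker estimation, the building block behind the spectral method discussed above (shown for a pure AR(2) case with simulated data; the paper's ARMA extension and Bayesian machinery are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a stationary AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
n = 20000
x = np.zeros(n)
e = rng.normal(0.0, 1.0, n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]

def yule_walker(series, order):
    # Solve the Yule-Walker equations R a = r using sample autocovariances.
    s = series - series.mean()
    m = len(s)
    acov = np.array([s[: m - k] @ s[k:] / m for k in range(order + 1)])
    R = np.array([[acov[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, acov[1 : order + 1])

a = yule_walker(x, 2)  # should be close to (0.6, -0.3)
```

The AR coefficients obtained this way determine the (rational) spectral density estimate; a Bayesian treatment would place a prior on them and sample the posterior instead of solving once.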
Modeling and analysis of biomass production systems
Mishoe, J.W.; Lorber, M.N.; Peart, R.M.; Fluck, R.C.; Jones, J.W.
1984-01-01
BIOMET is an interactive simulation model that is used to analyze specific biomass and methane production systems. The system model is composed of crop growth models, harvesting, transportation, conversion and economic submodels. By use of menus the users can configure the structure and set selected parameters of the system to analyze the effects of variables within the component models. For example, simulations of a water hyacinth system resulted in yields of 63, 48 and 37 Mg/ha/year for different harvest schedules. For napier grass, unit methane costs were $3.04, $2.86 and $2.98 for various yields of biomass. 10 references.
The Modeling Analysis of Huangshan Tourism Data
Hu, Shanfeng; Yan, Xinhu; Zhu, Hongbing
2016-06-01
Tourism is the major industry in Huangshan city. This paper analyzes time series of tourism data for Huangshan from 2000 to 2013. The yearly data set comprises the total arrivals of tourists, total income, Urban Resident Disposable Income Per Capita and Net Income Per Peasant. A mathematical model which is based on the binomial approximation and inverse quadratic radial basis function (RBF) is set up to model the tourist arrivals. The total income, urban resident disposable income per capita and net income per peasant are also modeled. It is shown that the established mathematical model can be used to forecast some tourism information and achieve good management for Huangshan tourism.
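An inverse quadratic RBF fit of the kind mentioned above can be sketched as follows (an illustrative smooth yearly series stands in for the Huangshan data; the shape parameter `EPS` and the pure-interpolation setup are assumptions, not the paper's calibration):

```python
import numpy as np

# Illustrative smooth yearly series standing in for tourist arrivals
# (millions); the real Huangshan figures are not reproduced here.
years = np.arange(2000.0, 2014.0)
arrivals = 3.0 * 1.12 ** (years - 2000.0)

EPS = 1.0  # assumed shape parameter of the inverse quadratic RBF

def iq_rbf_weights(x, y, eps=EPS):
    # phi(r) = 1 / (1 + (eps * r)^2); solve the interpolation system.
    r = np.abs(x[:, None] - x[None, :])
    phi = 1.0 / (1.0 + (eps * r) ** 2)
    return np.linalg.solve(phi, y)

def iq_rbf_eval(x_new, x, w, eps=EPS):
    r = np.abs(np.atleast_1d(x_new)[:, None] - x[None, :])
    return (1.0 / (1.0 + (eps * r) ** 2)) @ w

w = iq_rbf_weights(years, arrivals)
fitted_2005 = iq_rbf_eval(2005.0, years, w)[0]  # reproduces the 2005 value
```

The interpolant passes through every training year; forecasting beyond 2013 would extrapolate the basis functions and should be treated with caution.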
Identifying nonlinear biomechanical models by multicriteria analysis
Srdjevic, Zorica; Cveticanin, Livija
2012-02-01
In this study, the methodology developed by Srdjevic and Cveticanin (International Journal of Industrial Ergonomics 34 (2004) 307-318) for the nonbiased (objective) parameter identification of the linear biomechanical model exposed to vertical vibrations is extended to the identification of n-degree of freedom (DOF) nonlinear biomechanical models. The dynamic performance of the n-DOF nonlinear model is described in terms of response functions in the frequency domain, such as the driving-point mechanical impedance and seat-to-head transmissibility function. For randomly generated parameters of the model, nonlinear equations of motion are solved using the Runge-Kutta method. The appropriate data transformation from the time-to-frequency domain is performed by a discrete Fourier transformation. Squared deviations of the response functions from the target values are used as the model performance evaluation criteria, thus shifting the problem into the multicriteria framework. The objective weights of criteria are obtained by applying the Shannon entropy concept. The suggested methodology is programmed in Pascal and tested on a 4-DOF nonlinear lumped parameter biomechanical model. The identification process over the 2000 generated sets of parameters lasts less than 20 s. The model response obtained with the imbedded identified parameters correlates well with the target values, therefore, justifying the use of the underlying concept and the mathematical instruments and numerical tools applied. It should be noted that the identified nonlinear model has an improved accuracy of the biomechanical response compared to the accuracy of a linear model.
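The Shannon-entropy objective weighting step used above can be sketched as follows (the criteria matrix is invented; in the paper the columns would be squared deviations of the impedance and transmissibility responses from their target values):

```python
import numpy as np

# Invented criteria matrix: rows are candidate parameter sets, columns are
# squared deviations of two response functions from their target values.
criteria = np.array([
    [0.12, 0.40],
    [0.10, 0.05],
    [0.11, 0.90],
    [0.09, 0.20],
    [0.13, 0.60],
])

def entropy_weights(X):
    # Normalise each criterion column, compute its Shannon entropy, and
    # weight criteria by their degree of diversification (1 - entropy).
    p = X / X.sum(axis=0)
    h = -(p * np.log(p)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - h
    return d / d.sum()

w = entropy_weights(criteria)  # the more dispersed criterion gets more weight
```

Criteria whose values barely vary across candidates carry little information and receive near-zero weight, which is exactly what makes the weighting "nonbiased" in the multicriteria sense.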
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes were analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
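The core warning above, that ignoring inter-eye correlation biases standard errors, can be illustrated with a small simulation (synthetic data, not the CNV study; the person-mean "cluster" standard error is a simple stand-in for the mixed-effects and marginal-model estimates):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic refractive error for both eyes of n people: a shared person
# effect induces inter-eye correlation (all values are invented).
n = 200
person = rng.normal(0.0, 1.0, n)                      # between-person variation
eyes = person[:, None] + rng.normal(0.0, 0.5, (n, 2)) # two eyes per person

# Naive SE treats the 2n eyes as independent observations.
flat = eyes.ravel()
naive_se = flat.std(ddof=1) / np.sqrt(flat.size)

# Person-level SE respects the pairing, as a mixed/marginal model would.
person_means = eyes.mean(axis=1)
cluster_se = person_means.std(ddof=1) / np.sqrt(n)
```

With positive inter-eye correlation the naive standard error is too small, which is the mechanism behind the underestimated standard errors and smaller p-values reported in the abstract.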
Villeneuve, P.V.; Gerstl, S.A. [Los Alamos National Lab., NM (United States); Asner, G.P. [Univ. of Colorado, Boulder, CO (United States)
1998-12-01
A Monte-Carlo ray-trace model has been applied to simulated sparse vegetation desert canopies in an effort to quantify the spectral mixing (both linear and nonlinear) occurring as a result of radiative interactions between vegetation and soil. This work is of interest as NASA is preparing to launch new instruments such as MISR and MODIS. MISR will observe each ground pixel from nine different directions in three visible channels and one near-infrared channel. It is desired to study angular variations in spectral mixing by quantifying the amount of nonlinear spectral mixing occurring in the MISR observing directions.
An Object Extraction Model Using Association Rules and Dependence Analysis
Anonymous
2001-01-01
Extracting objects from legacy systems is a basic step in systems' object-orientation to improve the maintainability and understandability of the systems. A new object extraction model using association rules and dependence analysis is proposed. In this model, data are classified by association rules and the corresponding operations are partitioned by dependence analysis.
Book review: Statistical Analysis and Modelling of Spatial Point Patterns
Møller, Jesper
2009-01-01
Statistical Analysis and Modelling of Spatial Point Patterns by J. Illian, A. Penttinen, H. Stoyan and D. Stoyan. Wiley (2008), ISBN 9780470014912.
A Multilevel Nonlinear Profile Analysis Model for Dichotomous Data
Culpepper, Steven Andrew
2009-01-01
This study linked nonlinear profile analysis (NPA) of dichotomous responses with an existing family of item response theory models and generalized latent variable models (GLVM). The NPA method offers several benefits over previous internal profile analysis methods: (a) NPA is estimated with maximum likelihood in a GLVM framework rather than…
BIFURCATION ANALYSIS OF A MITOTIC MODEL OF FROG EGGS
吕金虎; 张子范; 张锁春
2003-01-01
The mitotic model of frog eggs established by Borisuk and Tyson is qualitatively analyzed. The existence and stability of its steady states are discussed. Furthermore, the bifurcation of the above model is investigated using theoretical analysis and numerical simulations. At the same time, the numerical results of Tyson are verified by theoretical analysis.
Stochastic Analysis Method of Sea Environment Simulated by Numerical Models
刘德辅; 焦桂英; 张明霞; 温书勤
2003-01-01
This paper proposes the stochastic analysis method of sea environment simulated by numerical models, such as wave height, current field, design sea levels and longshore sediment transport. Uncertainty and sensitivity analysis of input and output factors of numerical models, their long-term distribution and confidence intervals are described in this paper.
Modelling and analysis of Markov reward automata
Guck, Dennis; Timmer, Mark; Hatefi, Hassan; Ruijters, Enno; Stoelinga, Mariëlle
2014-01-01
Costs and rewards are important ingredients for many types of systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating
Corporate prediction models, ratios or regression analysis?
Bijnen, E.J.; Wijn, M.F.C.M.
1994-01-01
The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in
Nonsynchronous Trading Model and Return Analysis
LIU Xiao-mao; LI Chu-lin; ZHANG Jun
2002-01-01
Nonsynchronous trading is one of the hot issues in financial high-frequency data processing.This paper extends the nonsynchronous trading model studied in [1] and [2] for the financial security, and considers the moment functions of the observable return series for the extended model. At last, the estimators of parameters are obtained.
Multivariate Model for Test Response Analysis
Krishnan, Shaji; Krishnan, Shaji; Kerkhoff, Hans G.
2010-01-01
A systematic approach to construct an effective multivariate test response model for capturing manufacturing defects in electronic products is described. The effectiveness of the model is demonstrated by its capability in reducing the number of test-points, while achieving the maximal coverage
Meta-analysis a structural equation modeling approach
Cheung, Mike W-L
2015-01-01
Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo
Perturbative analysis of gauged matrix models
Dijkgraaf, Robbert; Gukov, Sergei; Kazakov, Vladimir A.; Vafa, Cumrun
2003-08-01
We analyze perturbative aspects of gauged matrix models, including those where classically the gauge symmetry is partially broken. Ghost fields play a crucial role in the Feynman rules for these vacua. We use this formalism to elucidate the fact that nonperturbative aspects of N=1 gauge theories can be computed systematically using perturbative techniques of matrix models, even if we do not possess an exact solution for the matrix model. As examples we show how the Seiberg-Witten solution for N=2 gauge theory, the Montonen-Olive modular invariance for N=1*, and the superpotential for the Leigh-Strassler deformation of N=4 can be systematically computed in perturbation theory of the matrix model or gauge theory (even though in some of these cases an exact answer can also be obtained by summing up planar diagrams of matrix models).
Analysis and modeling of solar irradiance variations
Yeo, K L
2014-01-01
A prominent manifestation of the solar dynamo is the 11-year activity cycle, evident in indicators of solar activity, including solar irradiance. Although a relationship between solar activity and the brightness of the Sun had long been suspected, it was only directly observed after regular satellite measurements became available with the launch of Nimbus-7 in 1978. The measurement of solar irradiance from space is accompanied by the development of models aimed at describing the apparent variability by the intensity excess/deficit effected by magnetic structures in the photosphere. The more sophisticated models, termed semi-empirical, rely on the intensity spectra of photospheric magnetic structures generated with radiative transfer codes from semi-empirical model atmospheres. An established example of such models is SATIRE-S (Spectral And Total Irradiance REconstruction for the Satellite era). One key limitation of current semi-empirical models is the fact that the radiant properties of network and faculae a...
Analysis of Modeling Parameters on Threaded Screws.
Vigil, Miquela S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vangoethem, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-06-01
Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to model these bolted joints appropriately. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influence of these parameters and will aid in finding the optimal method to model bolted connections.
An Intelligent Analysis Model for Multisource Volatile Memory
Xiaolu Zhang
2013-09-01
Full Text Available With the rapid development of networks and distributed computing environments, it has become harder for researchers to perform analysis from only one or a few data sources in persistent-data-oriented approaches, and the same holds for volatile memory analysis. Therefore, automatic analysis of mass data and action modeling need to be considered for reporting an entire network attack process. Modeling situations with multiple volatile data sources can help understand and describe both the thinking process of the investigator and the possible action steps of the attacker. This paper presents a game model for multisource volatile data and applies it to main-memory image analysis, with a definition of space-time features for volatile element information. Abstract modeling allows the lessons gleaned in performing intelligent analysis, evidence filing, and automated presentation. Finally, a test demo based on the model is presented to illustrate the whole procedure.
Mathematical analysis and uncertain models of a nitrification process
Harmand, J.; Steyer, J.P.; Queinnec, I.; Bernet, N.
1995-12-31
The nonlinear model of a Continuous Stirred Tank Reactor (CSTR) for nitrogen removal, derived from mass-balance considerations, can be linearized around a nominal steady state. The analysis of the linear model in terms of stability, observability and controllability allows the structural properties of the model to be highlighted. Disturbances and uncertainties can then be explicitly expressed in the linear model, so that it completes the modelling in view of a future control scheme for the process. (authors) 13 refs.
van Voorn, George A. K.; Kooi, Bob W.
2017-06-01
Plato's well-known allegory of the cave describes an observer chained in a cave facing a blank wall on which shadows are projected of objects that are outside the cave. Only by breaking free from the chains can the observer emerge from the cave to see what the objects really look like. Ecological model features compare to the objects outside the cave in this allegory. By performing model analysis, light is shed on these features, creating projections that researchers can see. Model analysis methodologies like bifurcation analysis and sensitivity analysis each focus on particular model features and thus allow researchers to uncover only part of the model behaviour. By combining methodologies for model analysis, possibilities arise for unravelling more of the model's behaviour, allowing researchers to `break free'. In this paper, benefits and issues of combining model analysis methodologies are discussed using a case study. The case study involves three representations of the well-known Rosenzweig-MacArthur predator-prey model, namely the usual one where state variables and parameters have dimensions, a dimensionless representation, and a generalized representation. Based on the results we argue that researchers should combine bifurcation and sensitivity analysis methodologies when analyzing ecological models.
Root analysis and implications to analysis model in ATLAS
Shibata, A
2008-01-01
An impressive amount of effort has been put in to realize a set of frameworks to support analysis in this new paradigm of GRID computing. However, much more than half of a physicist's time is typically spent after the GRID processing of the data. Due to the private nature of this level of analysis, there has been little common framework or methodology. While most physicists agree to use ROOT as the basis of their analysis, a number of approaches are possible for the implementation of the analysis using ROOT: conventional methods using CINT/ACLiC, development using g++, alternative interface through python, and parallel processing methods such as PROOF are some of the choices currently available on the market. Furthermore, in the ATLAS collaboration an additional layer of technology adds to the complexity because the data format is based on the POOL technology, which tends to be less portable. In this study, various modes of ROOT analysis are profiled for comparison with the main focus on the processing speed....
Aircraft vulnerability analysis by modelling and simulation
Willers, CJ
2014-09-01
Full Text Available attributable to misuse of the weapon or to missile performance restrictions. This paper analyses some of the factors affecting aircraft vulnerability and demonstrates a structured analysis of the risk and aircraft vulnerability problem. The aircraft...
Modeling and analysis of offshore jacket platform
Mohan, P.; Sidhaarth, K.R.A.; SanilKumar, V.
This paper details the results obtained from static analysis, considering both operational and ultimate limit state performance characteristics of a jacket platform, designed at Mumbai high basin. The existing platforms at Mumbai high region...
On Model Selection Criteria in Multimodel Analysis
Ye, Ming; Meyer, Philip D.; Neuman, Shlomo P.
2008-03-21
Hydrologic systems are open and complex, rendering them prone to multiple conceptualizations and mathematical descriptions. There has been a growing tendency to postulate several alternative hydrologic models for a site and use model selection criteria to (a) rank these models, (b) eliminate some of them and/or (c) weigh and average predictions and statistics generated by multiple models. This has led to some debate among hydrogeologists about the merits and demerits of common model selection (also known as model discrimination or information) criteria such as AIC [Akaike, 1974], AICc [Hurvich and Tsai, 1989], BIC [Schwartz, 1978] and KIC [Kashyap, 1982] and some lack of clarity about the proper interpretation and mathematical representation of each criterion. In particular, whereas we [Neuman, 2003; Ye et al., 2004, 2005; Meyer et al., 2007] have based our approach to multimodel hydrologic ranking and inference on the Bayesian criterion KIC (which reduces asymptotically to BIC), Poeter and Anderson [2005] and Poeter and Hill [2007] have voiced a preference for the information-theoretic criterion AICc (which reduces asymptotically to AIC). Their preference stems in part from a perception that KIC and BIC require a "true" or "quasi-true" model to be in the set of alternatives while AIC and AICc are free of such an unreasonable requirement. We examine the model selection literature to find that (a) all published rigorous derivations of AIC and AICc require that the (true) model having generated the observational data be in the set of candidate models; (b) though BIC and KIC were originally derived by assuming that such a model is in the set, BIC has been rederived by Cavanaugh and Neath [1999] without the need for such an assumption; (c) KIC reduces to BIC as the number of observations becomes large relative to the number of adjustable model parameters, implying that it likewise does not require the existence of a true model in the set of alternatives; (d) if a true
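The criteria compared in this abstract have simple closed forms given a model's maximized log-likelihood, its number of adjustable parameters k, and the sample size n; a minimal sketch follows (KIC is omitted because it additionally requires the Fisher information matrix, and the model-averaging weights shown are the standard exponential form applied to any of the criteria):

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion."""
    return -2.0 * loglik + 2.0 * k

def aicc(loglik, k, n):
    """Small-sample corrected AIC; reduces to AIC as n grows."""
    return aic(loglik, k) + 2.0 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    """Bayesian information criterion; KIC reduces to this asymptotically."""
    return -2.0 * loglik + k * np.log(n)

def model_weights(criteria):
    """Exponential weights for ranking/averaging a set of candidate models."""
    d = np.asarray(criteria, dtype=float) - np.min(criteria)
    w = np.exp(-0.5 * d)
    return w / w.sum()
```

Lower criterion values rank higher; the weights quantify relative support for use in multimodel averaging.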
Probabilistic forward model for electroencephalography source analysis
Plis, Sergey M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); George, John S [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Jun, Sung C [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Ranken, Doug M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Volegov, Petr L [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Schmidt, David M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2007-09-07
Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates.
Model-free linkage analysis of a binary trait.
Xu, Wei; Bull, Shelley B; Mirea, Lucia; Greenwood, Celia M T
2012-01-01
Genetic linkage analysis aims to detect chromosomal regions containing genes that influence risk of specific inherited diseases. The presence of linkage is indicated when a disease or trait cosegregates through the families with genetic markers at a particular region of the genome. Two main types of genetic linkage analysis are in common use, namely model-based linkage analysis and model-free linkage analysis. In this chapter, we focus solely on the latter type and specifically on binary traits or phenotypes, such as the presence or absence of a specific disease. Model-free linkage analysis is based on allele-sharing, where patterns of genetic similarity among affected relatives are compared to chance expectations. Because the model-free methods do not require the specification of the inheritance parameters of a genetic model, they are preferred by many researchers at early stages in the study of a complex disease. We introduce the history of model-free linkage analysis in Subheading 1. Table 1 describes a standard model-free linkage analysis workflow. We describe three popular model-free linkage analysis methods, the nonparametric linkage (NPL) statistic, the affected sib-pair (ASP) likelihood ratio test, and a likelihood approach for pedigrees. The theory behind each linkage test is described in this section, together with a simple example of the relevant calculations. Table 4 provides a summary of popular genetic analysis software packages that implement model-free linkage models. In Subheading 2, we work through the methods on a rich example providing sample software code and output. Subheading 3 contains notes with additional details on various topics that may need further consideration during analysis.
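The allele-sharing idea behind these model-free methods can be sketched with a simple mean-IBD test: under the null of no linkage, an affected sib pair shares 0, 1, or 2 alleles identical by descent (IBD) with probabilities 1/4, 1/2, 1/4, so the mean sharing proportion is 0.5 with per-pair variance 1/8. This is an illustrative sketch only, not the NPL statistic or the likelihood methods the chapter works through:

```python
import math

def mean_ibd_test(ibd_counts):
    """Mean-IBD allele-sharing test for affected sib pairs.

    ibd_counts: observed IBD states (0, 1, or 2) for each pair.
    Returns the mean sharing proportion and a z-statistic; excess
    sharing (z large and positive) suggests linkage.
    """
    n = len(ibd_counts)
    pi_hat = sum(c / 2.0 for c in ibd_counts) / n
    # Under the null: E[pi] = 0.5, Var(pi) = 1/8 per pair.
    z = (pi_hat - 0.5) / math.sqrt(0.125 / n)
    return pi_hat, z
```

In practice IBD states are inferred probabilistically from marker data, but the test logic is the same.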
Information analysis for modeling and representation of meaning
Uda, Norihiko
1994-01-01
In this dissertation, information analysis and an information model called the Semantic Structure Model based on information analysis are explained for semantic processing. Methods for self-organization of information are also described. In addition, Information-Base Systems for thinking support of research and development in nonlinear optical materials are explained. As a result of information analysis, general properties of information and structural properties of concepts become clear. Ge...
Mathematical Modeling for Simulation of Nuclear Reactor Analysis
Salah Ud-Din Khan; Shahab Ud-Din Khan
2013-01-01
In this paper, we have developed a mathematical model for nuclear reactor analysis to be implemented in a nuclear reactor code. THEATRe is a nuclear reactor analysis code which can only handle cylindrical-type fuel reactors and is not applicable to plate-type fuel reactors. Therefore, the current study focuses on the modification of the THEATRe code for the plate-type fuel element. This mathematical model is applicable to the thermal analysis of the reactor, which is ver...
Application of dimensional analysis in systems modeling and control design
Balaguer, Pedro
2013-01-01
Dimensional analysis is an engineering tool that is widely applied to numerous engineering problems, but has only recently been applied to control theory and problems such as identification and model reduction, robust control, adaptive control, and PID control. Application of Dimensional Analysis in Systems Modeling and Control Design provides an introduction to the fundamentals of dimensional analysis for control engineers, and shows how they can exploit the benefits of the technique to theoretical and practical control problems.
Health care policy development: a critical analysis model.
Logan, Jean E; Pauling, Carolyn D; Franzen, Debra B
2011-01-01
This article describes a phased approach for teaching baccalaureate nursing students critical analysis of health care policy, including refinement of existing policy or the foundation to create new policy. Central to this approach is the application of an innovative framework, the Grand View Critical Analysis Model, which was designed to provide a conceptual base for the authentic learning experience. Students come to know the interconnectedness and the importance of the model, which includes issue selection and four phases: policy focus, colleagueship analysis, evidence-based practice analysis, and policy analysis and development.
Multifractal modelling and 3D lacunarity analysis
Hanen, Akkari, E-mail: bettaieb.hanen@topnet.t [Laboratoire de biophysique, TIM, Faculte de Medecine (Tunisia); Imen, Bhouri, E-mail: bhouri_imen@yahoo.f [Unite de recherche ondelettes et multifractals, Faculte des sciences (Tunisia); Asma, Ben Abdallah, E-mail: asma.babdallah@cristal.rnu.t [Laboratoire de biophysique, TIM, Faculte de Medecine (Tunisia); Patrick, Dubois, E-mail: pdubois@chru-lille.f [INSERM, U 703, Lille (France); Hedi, Bedoui Mohamed, E-mail: medhedi.bedoui@fmm.rnu.t [Laboratoire de biophysique, TIM, Faculte de Medecine (Tunisia)
2009-09-28
This study presents a comparative evaluation of lacunarity of 3D grey level models with different types of inhomogeneity. A new method based on the 'Relative Differential Box Counting' was developed to estimate the lacunarity features of grey level volumes. To validate our method, we generated a set of 3D grey level multifractal models with random, anisotropic and hierarchical properties. Our method gives a lacunarity measurement correlated with the theoretical one and allows a better model classification compared with a classical approach.
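For orientation, the classical gliding-box definition of lacunarity that such estimators build on can be sketched as follows. This is the plain gliding-box estimator applied to a synthetic array, not the authors' "Relative Differential Box Counting" method:

```python
import numpy as np

def gliding_box_lacunarity(volume, r):
    """Gliding-box lacunarity of a 3D array for box size r:
    Lambda(r) = E[M^2] / E[M]^2, where M is the mass (sum of values)
    inside each r x r x r box slid over the volume."""
    v = np.asarray(volume, dtype=float)
    nx, ny, nz = v.shape
    masses = []
    for i in range(nx - r + 1):
        for j in range(ny - r + 1):
            for k in range(nz - r + 1):
                masses.append(v[i:i + r, j:j + r, k:k + r].sum())
    m = np.array(masses)
    # E[M^2]/E[M]^2 = Var(M)/E[M]^2 + 1
    return m.var() / m.mean() ** 2 + 1.0
```

A perfectly homogeneous volume gives Lambda = 1; larger values indicate gappier, more heterogeneous texture, which is what the study uses to classify grey-level models.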
Analysis of Brown camera distortion model
Nowakowski, Artur; Skarbek, Władysław
2013-10-01
Contemporary image acquisition devices introduce optical distortion into images. It results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze orthogonality with regard to radius for its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of distortion parameter estimation is evaluated.
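The Brown model referred to here combines radial and decentering (tangential) terms; a minimal sketch, using the same parameterization as OpenCV's distortion coefficients (k1, k2, p1, p2, k3) applied to normalized image coordinates:

```python
def brown_distort(x, y, k1, k2, p1, p2, k3=0.0):
    """Apply Brown radial + decentering distortion to a normalized
    image point (x, y), OpenCV coefficient convention."""
    r2 = x * x + y * y
    # Radial polynomial in even powers of the radius.
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # Decentering (tangential) terms use p1, p2.
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Calibration routines such as OpenCV's estimate these coefficients jointly with the camera intrinsics; the paper's stability question concerns how repeatable those estimates are.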
Sensitivity Analysis of the Gap Heat Transfer Model in BISON.
Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard (INL); Perez, Danielle (INL)
2014-10-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.
MODAL ANALYSIS OF QUARTER CAR MODEL SUSPENSION SYSTEM
Viswanath. K. Allamraju *
2016-01-01
Suspension systems are very important for comfortable driving and travelling of passengers. Therefore, this study provides a numerical tool for modeling and analyzing a two-degree-of-freedom quarter car model suspension system. Modal analysis plays a vital role in designing the suspension system. This paper presents the modal analysis of a quarter car model suspension system considering the undamped and damped cases. The modal and vertical equations of motion describing the su...
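The undamped part of such a modal analysis reduces to the generalized eigenvalue problem K·φ = ω²·M·φ; a minimal sketch for a two-degree-of-freedom quarter car, with assumed (purely illustrative) masses and stiffnesses:

```python
import numpy as np

# Assumed quarter-car parameters (illustrative, SI units).
ms, mu = 250.0, 40.0        # sprung / unsprung mass [kg]
ks, kt = 28000.0, 200000.0  # suspension / tire stiffness [N/m]

M = np.diag([ms, mu])
K = np.array([[ks, -ks],
              [-ks, ks + kt]])

# Undamped modal analysis: K*phi = w^2 * M*phi, solved via the
# symmetric form M^(-1/2) K M^(-1/2) since M is diagonal.
Mi = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, modes = np.linalg.eigh(Mi @ K @ Mi)
freqs_hz = np.sqrt(w2) / (2.0 * np.pi)  # natural frequencies [Hz]
```

With these illustrative parameters the two modes come out near the familiar body-bounce (about 1.6 Hz) and wheel-hop (about 12 Hz) ranges.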
Quantitative Models and Analysis for Reactive Systems
Thrane, Claus
phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided...... by the environment in which they are embedded. This thesis studies the semantics and properties of a model-based framework for re- active systems, in which models and specifications are assumed to contain quantifiable information, such as references to time or energy. Our goal is to develop a theory of approximation......, by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification...
Complex networks analysis in socioeconomic models
Varela, Luis M; Ausloos, Marcel; Carrete, Jesus
2014-01-01
This chapter aims at reviewing complex networks models and methods that were either developed for or applied to socioeconomic issues, and pertinent to the theme of New Economic Geography. After an introduction to the foundations of the field of complex networks, the present summary adds insights on the statistical mechanical approach, and on the most relevant computational aspects for the treatment of these systems. As the most frequently used model for interacting agent-based systems, a brief description of the statistical mechanics of the classical Ising model on regular lattices, together with recent extensions of the same model on small-world Watts-Strogatz and scale-free Albert-Barabasi complex networks is included. Other sections of the chapter are devoted to applications of complex networks to economics, finance, spreading of innovations, and regional trade and developments. The chapter also reviews results involving applications of complex networks to other relevant socioeconomic issues, including res...
Flood Progression Modelling and Impact Analysis
Mioc, Darka; Anton, François; Nickerson, B.
People living in the lower valley of the St. John River, New Brunswick, Canada, frequently experience flooding when the river overflows its banks during spring ice melt and rain. To better prepare the population of New Brunswick for extreme flooding, we developed a new flood prediction model...... that computes floodplain polygons before the flood occurs. This allows emergency managers to assess the impact of the flood before it occurs and make early decisions for evacuation of the population and flood rescue. This research shows that the use of GIS and LiDAR technologies combined with hydrological...... modelling can significantly improve the decision making and visualization of flood impact needed for emergency planning and flood rescue. Furthermore, the 3D GIS application we developed for modelling flooded buildings and infrastructure provides a better platform for modelling and visualizing flood...
Urban drainage models - making uncertainty analysis simple
Vezzaro, Luca; Mikkelsen, Peter Steen; Deletic, Ana;
2012-01-01
There is increasing awareness about uncertainties in modelling of urban drainage systems and, as such, many new methods for uncertainty analyses have been developed. Despite this, all available methods have limitations which restrict their widespread application among practitioners. Here, a modif...... probability distributions (often used for sensitivity analyses) and prediction intervals. To demonstrate the new method, it is applied to a conceptual rainfall-runoff model using a dataset collected from Melbourne, Australia....
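One simple way to see what uncertainty analysis of a conceptual rainfall-runoff model produces is to propagate an uncertain parameter through a toy model by Monte Carlo sampling and read off percentile bands. Everything below is illustrative (a single linear reservoir, a made-up rainfall series, an assumed parameter range), not the modified method the paper proposes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rainfall series [mm per time step].
rain = np.array([0.0, 5.0, 12.0, 3.0, 0.0, 0.0, 1.0, 0.0])

def linear_reservoir(rain, k, s0=0.0):
    """Toy conceptual rainfall-runoff model: outflow Q_t = k * S_t."""
    s, out = s0, []
    for p in rain:
        s = s + p          # rainfall fills the storage
        q = k * s          # linear-reservoir outflow
        s -= q
        out.append(q)
    return np.array(out)

# Propagate uncertainty in the recession parameter k (assumed range).
ks = rng.uniform(0.2, 0.6, size=500)
runs = np.array([linear_reservoir(rain, k) for k in ks])

# 90% prediction band for the simulated hydrograph.
lower, upper = np.percentile(runs, [5, 95], axis=0)
```

The resulting bands are the "prediction intervals" the abstract refers to; the sampled runs also support the probability distributions used for sensitivity analysis.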
Numerical bifurcation analysis of immunological models with time delays
Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady
2005-12-01
In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.
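A feel for delay-induced dynamics can be had by time-stepping Hutchinson's delayed logistic equation N'(t) = r·N(t)·(1 − N(t − τ)), whose equilibrium N = 1 loses stability in a Hopf bifurcation at rτ = π/2. This naive fixed-step Euler sketch is for illustration only; dedicated packages such as DDE-BIFTOOL perform the actual numerical bifurcation analysis the abstract describes:

```python
def simulate_hutchinson(r, tau, n0=0.5, t_end=200.0, dt=0.01):
    """Euler integration of N'(t) = r*N(t)*(1 - N(t - tau))
    with constant history N(t <= 0) = n0."""
    lag = int(round(tau / dt))
    traj = [n0] * (lag + 1)  # constant history for t <= 0
    for _ in range(int(t_end / dt)):
        n = traj[-1]
        delayed = traj[-1 - lag]  # N(t - tau)
        traj.append(n + dt * r * n * (1.0 - delayed))
    return traj
```

Below the Hopf threshold (rτ < π/2) trajectories settle to N = 1; above it they approach a sustained oscillation, which is the kind of qualitative change bifurcation analysis locates systematically.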
Structural dynamic analysis with generalized damping models analysis
Adhikari , Sondipon
2013-01-01
Since Lord Rayleigh introduced the idea of viscous damping in his classic work ""The Theory of Sound"" in 1877, it has become standard practice to use this approach in dynamics, covering a wide range of applications from aerospace to civil engineering. However, in the majority of practical cases this approach is adopted more for mathematical convenience than for modeling the physics of vibration damping. Over the past decade, extensive research has been undertaken on more general ""non-viscous"" damping models and vibration of non-viscously damped systems. This book, along with a related book
Relativistic AGN jets II. Jet properties and mixing effects for episodic jet activity
Walg, Sander; Markoff, Sera; Keppens, Rony; Porth, Oliver
2013-01-01
Various radio galaxies show signs of having gone through episodic jet outbursts in the past. An example is the class of double-double radio galaxies (DDRGs). However, to follow the evolution of an individual source in real-time is impossible due to the large time scales involved. Numerical studies provide a powerful tool to investigate the temporal behavior of episodic jet outbursts in a (magneto-)hydrodynamical setting. We simulate the injection of two jets from active galactic nuclei (AGN), separated by a short interruption time. Three different jet models are compared. We find that an AGN jet outburst cycle can be divided into four phases. The most prominent phase occurs when the restarted jet is propagating completely inside the hot and inflated cocoon left behind by the initial jet. In that case, the jet-head advance speed of the restarted jet is significantly higher than the initial jet-head. While the head of the initial jet interacts strongly with the ambient medium, the restarted jet propagates almos...
Numerical simulations of gas mixing effect in Electron Cyclotron Resonance Ion Sources
Mironov, V; Bondarchenko, A; Efremov, A; Loginov, V
2016-01-01
The particle-in-cell MCC code NAM-ECRIS is used to simulate the ECRIS plasma sustained in a mixture of Kr with O2, N2, Ar, Ne and He. The model assumes that ions are electrostatically confined in ECR zone by a dip in the plasma potential. Gain in the extracted krypton ion currents is seen for the highest charge states; the gain is maximized when oxygen is used as the mixing gas. A special feature of oxygen is that most of singly charged oxygen ions are produced after dissociative ionization of oxygen molecules with the large kinetic energy release of around 5 eV per ion. Increased loss rate of energetic lowly charged ions of the mixing element requires building up of the retarding potential barrier close to ECR surface to equilibrate electron and ion losses out of the plasma. In the mixed plasmas, the barrier value is large (~1 V) compared to the pure Kr plasma (~0.01 V), with the longer confinement times of krypton ions and with the much higher ion temperatures.
Modeling Change Over Time: Conceptualization, Measurement, Analysis, and Interpretation
2009-11-12
Reporting period: 2007 to 29-11-2008. Cited resources include the Multilevel Modeling Portal (www.ats.ucla.edu/stat/mlm/) and the Web site of the Center for Multilevel Modeling (http://multilevel.ioe.ac.uk/index.html
Validity of covariance models for the analysis of geographical variation
Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio
2014-01-01
attention lately and show that the conditions under which they are valid mathematical models have been overlooked so far. 3. We provide rigorous results for the construction of valid covariance models in this family. 4. We also outline how to construct alternative covariance models for the analysis...
Digital Avionics Information System (DAIS): Training Requirements Analysis Model (TRAMOD).
Czuchry, Andrew J.; And Others
The training requirements analysis model (TRAMOD) described in this report represents an important portion of the larger effort called the Digital Avionics Information System (DAIS) Life Cycle Cost (LCC) Study. TRAMOD is the second of three models that comprise an LCC impact modeling system for use in the early stages of system development. As…
System Reliability Analysis Capability and Surrogate Model Application in RAVEN
Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli; Gleicher, Frederick; Wang, Bei; Adbel-Khalik, Hany S.; Pascucci, Valerio; Smith, Curtis L.
2015-11-01
This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.
Rotordynamics analysis of a Jeffcott model with deadband
Zalik, R. A.
A method is developed for determining the stability margins of a simple Jeffcott model with deadband via analysis of the discrete Fourier transform of the system response. The model in question is of a uniform, unbalanced, flexible shaft that is supported by a bearing as it rotates about its x axis. This model is represented by a system of coupled nonlinear differential equations.
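The diagnostic idea, inspecting the discrete Fourier transform of the simulated response of a system with deadband, can be sketched in a single-degree-of-freedom form (this is not the coupled shaft model of the abstract; the piecewise-linear deadband law, damping and forcing values are illustrative assumptions):

```python
import cmath, math

def deadband_force(x, delta=0.1, k=1.0):
    # piecewise-linear restoring force: no stiffness inside the deadband |x| < delta
    if x > delta:  return k * (x - delta)
    if x < -delta: return k * (x + delta)
    return 0.0

def simulate(omega=0.8, zeta=0.05, periods=200, steps_per_period=32):
    # RK4 integration of x'' + 2*zeta*x' + f_deadband(x) = cos(omega*t)
    T = 2 * math.pi / omega
    dt = T / steps_per_period
    x, v, t = 0.0, 0.0, 0.0
    def acc(t, x, v):
        return math.cos(omega * t) - 2 * zeta * v - deadband_force(x)
    xs = []
    for _ in range(periods * steps_per_period):
        k1x, k1v = v, acc(t, x, v)
        k2x, k2v = v + 0.5*dt*k1v, acc(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = v + 0.5*dt*k2v, acc(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = v + dt*k3v,     acc(t + dt,     x + dt*k3x,     v + dt*k3v)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
        xs.append(x)
    return xs[-32 * steps_per_period:]   # steady-state window: the last 32 periods

def dft_peak(signal):
    # naive DFT; return the dominant bin (DC excluded)
    N = len(signal)
    mags = [abs(sum(signal[n] * cmath.exp(-2j*math.pi*k*n/N) for n in range(N)))
            for k in range(1, N // 2)]
    return mags.index(max(mags)) + 1

x = simulate()
peak = dft_peak(x)   # the window holds 32 forcing periods, so the forcing line sits at bin 32
```

Extra spectral lines at fractions of the forcing bin (subharmonics) would indicate the deadband nonlinearity eroding the stability margin; here the window spans an integer number of forcing periods so there is no leakage.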
Multisurface interface model for analysis of masonry structures
Lourenço, P.B.; Rots, J.G.
1997-01-01
The performance of an interface elastoplastic constitutive model for the analysis of unreinforced masonry structures is evaluated. Both masonry components are discretized aiming at a rational unit-joint model able to describe cracking, slip, and crushing of the material. The model is formulated in t
TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION
Goran Klepac
2007-12-01
Full Text Available REFII is an original mathematical model for time series data mining. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach is the linkage of different methods for time series analysis: linking traditional data mining tools for time series and constructing new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, i.e. it is not restricted to a finite set of methods. First of all, it is a model for the transformation of time series values, which prepares data used by different sets of methods based on the same transformation model in a domain of problem space. The REFII model gives a new approach to time series analysis based on a unique model of transformation, which is a basis for all kinds of time series analysis. The advantage of the REFII model is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.
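A hedged sketch of the basic transformation step, assumed here to be a rise/equal/fall coding of consecutive values with the slope kept as a numeric attribute (this simplification is ours for illustration, not a specification of the REFII model):

```python
def refii_codes(series, tol=1e-9):
    # one trend letter per consecutive pair: R(ise), E(qual), F(all),
    # plus the raw slope as a simple numeric attribute of the transition
    codes = []
    for prev, curr in zip(series, series[1:]):
        slope = curr - prev
        letter = "E" if abs(slope) <= tol else ("R" if slope > 0 else "F")
        codes.append((letter, slope))
    return codes

codes = refii_codes([1, 2, 2, 0])   # -> [('R', 1), ('E', 0), ('F', -2)]
```

Downstream methods (clustering, rule induction, similarity search) would then operate on the symbolic codes rather than the raw values, which is what makes one transformation usable by many analyses.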
Numerical simulations of gas mixing effect in electron cyclotron resonance ion sources
Mironov, V.; Bogomolov, S.; Bondarchenko, A.; Efremov, A.; Loginov, V.
2017-01-01
The particle-in-cell Monte Carlo collisions code nam-ecris is used to simulate the electron cyclotron resonance ion source (ECRIS) plasma sustained in a mixture of Kr with O2, N2, Ar, Ne, and He. The model assumes that ions are electrostatically confined in the ECR zone by a dip in the plasma potential. A gain in the extracted krypton ion currents is seen for the highest charge states; the gain is maximized when oxygen is used as a mixing gas. The special feature of oxygen is that most of the singly charged oxygen ions are produced after the dissociative ionization of oxygen molecules with a large kinetic energy release of around 5 eV per ion. The increased loss rate of energetic lowly charged ions of the mixing element requires a building up of the retarding potential barrier close to the ECR surface to equilibrate electron and ion losses out of the plasma. In the mixed plasmas, the barrier value is large (~1 V) compared to pure Kr plasma (~0.01 V), with longer confinement times of krypton ions and with much higher ion temperatures. The temperature of the krypton ions is increased because of extra heating by the energetic oxygen ions and a longer time of ion confinement. In calculations, a drop of the highly charged ion currents of lighter elements is observed when adding small fluxes of krypton into the source. This drop is caused by the accumulation of the krypton ions inside plasma, which decreases the electron and ion confinement times.
Numerical simulations of gas mixing effect in electron cyclotron resonance ion sources
V. Mironov
2017-01-01
Full Text Available The particle-in-cell Monte Carlo collisions code nam-ecris is used to simulate the electron cyclotron resonance ion source (ECRIS) plasma sustained in a mixture of Kr with O2, N2, Ar, Ne, and He. The model assumes that ions are electrostatically confined in the ECR zone by a dip in the plasma potential. A gain in the extracted krypton ion currents is seen for the highest charge states; the gain is maximized when oxygen is used as a mixing gas. The special feature of oxygen is that most of the singly charged oxygen ions are produced after the dissociative ionization of oxygen molecules with a large kinetic energy release of around 5 eV per ion. The increased loss rate of energetic lowly charged ions of the mixing element requires a building up of the retarding potential barrier close to the ECR surface to equilibrate electron and ion losses out of the plasma. In the mixed plasmas, the barrier value is large (~1 V) compared to pure Kr plasma (~0.01 V), with longer confinement times of krypton ions and with much higher ion temperatures. The temperature of the krypton ions is increased because of extra heating by the energetic oxygen ions and a longer time of ion confinement. In calculations, a drop of the highly charged ion currents of lighter elements is observed when adding small fluxes of krypton into the source. This drop is caused by the accumulation of the krypton ions inside plasma, which decreases the electron and ion confinement times.
Sema A. Kalaian
2003-06-01
Full Text Available The objectives of the present mixed-effects meta-analytic application are to provide practical guidelines to: (a) calculate treatment effect sizes from multiple sites; (b) calculate the overall mean of the site effect sizes and their variances; (c) model the heterogeneity in these site treatment effects as a function of site and program characteristics plus unexplained random error using Hierarchical Linear Modeling (HLM); (d) improve the ability of multisite evaluators and policy makers to reach sound conclusions about the effectiveness of educational and social interventions based on multisite evaluations; and (e) illustrate the proposed methodology by applying it to real multi-site research data.
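Steps (a) and (b), pooling site effect sizes under a random-effects model, can be sketched with the standard DerSimonian-Laird estimator (a generic illustration, not the HLM implementation of the paper):

```python
def random_effects_mean(effects, variances):
    # DerSimonian-Laird pooling of per-site effect sizes under a random-effects model
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                 # between-site variance
    w_star = [1.0 / (v + tau2) for v in variances]                # random-effects weights
    mean = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return mean, 1.0 / sum(w_star), tau2

mean, var, tau2 = random_effects_mean([0.2, 0.4, 0.6], [0.04, 0.04, 0.04])
```

With equal within-site variances and no excess heterogeneity (Q equals its degrees of freedom), tau2 collapses to zero and the pooled mean reduces to the simple weighted average.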
[Decision analysis in radiology using Markov models].
Golder, W
2000-01-01
Markov models (Multistate transition models) are mathematical tools to simulate a cohort of individuals followed over time to assess the prognosis resulting from different strategies. They are applied on the assumption that persons are in one of a finite number of states of health (Markov states). Each condition is given a transition probability as well as an incremental value. Probabilities may be chosen constant or varying over time due to predefined rules. Time horizon is divided into equal increments (Markov cycles). The model calculates quality-adjusted life expectancy employing real-life units and values and summing up the length of time spent in each health state adjusted for objective outcomes and subjective appraisal. This sort of modeling prognosis for a given patient is analogous to utility in common decision trees. Markov models can be evaluated by matrix algebra, probabilistic cohort simulation and Monte Carlo simulation. They have been applied to assess the relative benefits and risks of a limited number of diagnostic and therapeutic procedures in radiology. More interventions should be submitted to Markov analyses in order to elucidate their cost-effectiveness.
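The cohort evaluation described above can be sketched as follows (a toy three-state model with assumed transition probabilities and utilities, no discounting or half-cycle correction):

```python
def quality_adjusted_life_expectancy(transition, utilities, start, cycles=100):
    # Markov cohort simulation: track state occupancy at each cycle and
    # accumulate utility-weighted time spent in each health state
    states = list(utilities)
    occupancy = {s: (1.0 if s == start else 0.0) for s in states}
    qale = 0.0
    for _ in range(cycles):
        qale += sum(occupancy[s] * utilities[s] for s in states)
        occupancy = {s2: sum(occupancy[s1] * transition[s1][s2] for s1 in states)
                     for s2 in states}
    return qale

# hypothetical three-state model (cycle = 1 year); Dead is absorbing
P = {"Well": {"Well": 0.9, "Sick": 0.08, "Dead": 0.02},
     "Sick": {"Well": 0.1, "Sick": 0.7,  "Dead": 0.2},
     "Dead": {"Well": 0.0, "Sick": 0.0,  "Dead": 1.0}}
U = {"Well": 1.0, "Sick": 0.6, "Dead": 0.0}
qale = quality_adjusted_life_expectancy(P, U, "Well")
```

Evaluating the same model under two strategies (two transition matrices) and comparing the resulting quality-adjusted life expectancies is the decision-analytic comparison the abstract refers to; matrix algebra or Monte Carlo microsimulation would give the same expectation.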
COMPUTER DATA ANALYSIS AND MODELING: COMPLEX STOCHASTIC DATA AND SYSTEMS
2010-01-01
This collection of papers includes proceedings of the Ninth International Conference “Computer Data Analysis and Modeling: Complex Stochastic Data and Systems” organized by the Belarusian State University and held in September 2010 in Minsk. The papers are devoted to the topical problems: robust and nonparametric data analysis; statistical analysis of time series and forecasting; multivariate data analysis; design of experiments; statistical signal and image processing...
Quantitative Models and Analysis for Reactive Systems
Thrane, Claus
phones and websites. Acknowledging that, now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms to take into account the behavior of systems in the presence of approximate, or quantitative, information … allowing verification procedures to quantify judgements on how suitable a model is for a given specification — hence mitigating the usual harsh distinction between satisfactory and non-satisfactory system designs. This information, among other things, allows us to evaluate the robustness of our framework … by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state of the art of systems verification…
Bayesian analysis in moment inequality models
Liao, Yuan; 10.1214/09-AOS714
2010-01-01
This paper presents a study of the large-sample behavior of the posterior distribution of a structural parameter which is partially identified by moment inequalities. The posterior density is derived based on the limited information likelihood. The posterior distribution converges to zero exponentially fast on any $\delta$-contraction outside the identified region. Inside, it is bounded below by a positive constant if the identified region is assumed to have a nonempty interior. Our simulation evidence indicates that the Bayesian approach has advantages over frequentist methods, in the sense that, with a proper choice of the prior, the posterior provides more information about the true parameter inside the identified region. We also address the problem of moment and model selection. Our optimality criterion is the maximum posterior procedure and we show that, asymptotically, it selects the true moment/model combination with the most moment inequalities and the simplest model.
Comparative analysis of Goodwin's business cycle models
Antonova, A. O.; Reznik, S.; Todorov, M. D.
2016-10-01
We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) are proposed by N. Dharmaraj and K. Vela Velupillai [6] for investigation of the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not the sawtooth modes. If the order of the Taylor series expansion exceeds 5, then the approximate ODE becomes unstable independently of the time lag θ.
Generative models for pedestrian track analysis
Kooij, J.F.P.
2015-01-01
Various problems in tracking and track analysis are addressed, with a focus on applications in the surveillance and intelligent vehicle domains, such as pedestrian path prediction, learning spatial and temporal structure of behavior patterns in data, anomalous track detection, and data association w
Advanced Placement: Model Policy Components. Policy Analysis
Zinth, Jennifer
2016-01-01
Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the State's Policy Analysis identifies key components of…
Three dimensional mathematical model of tooth for finite element analysis
Puškar Tatjana
2010-01-01
Full Text Available Introduction. The mathematical model of the abutment tooth is the starting point of the finite element analysis of stress and deformation of dental structures. The simplest and easiest way is to form a model according to the literature data on dimensions and morphological characteristics of teeth. Our method is based on forming 3D models using standard geometrical forms (objects) in programmes for solid modeling. Objective. Forming the mathematical model of the abutment of the second upper premolar for finite element analysis of stress and deformation of dental structures. Methods. The abutment tooth has the form of a complex geometric object. It is suitable for modeling in the solid modeling program SolidWorks. After analyzing the literature data about the morphological characteristics of teeth, we started the modeling by dividing the tooth (a complex geometric body) into simple geometric bodies (cylinder, cone, pyramid, ...). By connecting simple geometric bodies together or subtracting bodies from the basic body, we formed the complex geometric body, the tooth. The model was then transferred into Abaqus, a computational programme for finite element analysis. Transferring the data was done using the standard file format for transferring 3D models, ACIS SAT. Results. Using the solid modeling program SolidWorks, we developed three models of the abutment of the second maxillary premolar: the model of the intact abutment, the model of the endodontically treated tooth with two remaining cavity walls, and the model of the endodontically treated tooth with two remaining walls and an inserted post. Conclusion. Mathematical models of the abutment made according to the literature data are very similar to the real abutment, and the simplifications are minimal. These models enable calculations of stress and deformation of the dental structures. The finite element analysis provides useful information in understanding biomechanical problems and gives guidance for clinical research.
[Three dimensional mathematical model of tooth for finite element analysis].
Puskar, Tatjana; Vasiljević, Darko; Marković, Dubravka; Jevremović, Danimir; Pantelić, Dejan; Savić-Sević, Svetlana; Murić, Branka
2010-01-01
The mathematical model of the abutment tooth is the starting point of the finite element analysis of stress and deformation of dental structures. The simplest and easiest way is to form a model according to the literature data on dimensions and morphological characteristics of teeth. Our method is based on forming 3D models using standard geometrical forms (objects) in programmes for solid modeling. The aim was to form the mathematical model of the abutment of the second upper premolar for finite element analysis of stress and deformation of dental structures. The abutment tooth has the form of a complex geometric object. It is suitable for modeling in the solid modeling program SolidWorks. After analysing the literature data about the morphological characteristics of teeth, we started the modeling by dividing the tooth (a complex geometric body) into simple geometric bodies (cylinder, cone, pyramid, ...). By connecting simple geometric bodies together or subtracting bodies from the basic body, we formed the complex geometric body, the tooth. The model was then transferred into Abaqus, a computational programme for finite element analysis. Transferring the data was done using the standard file format for transferring 3D models, ACIS SAT. Using the solid modeling program SolidWorks, we developed three models of the abutment of the second maxillary premolar: the model of the intact abutment, the model of the endodontically treated tooth with two remaining cavity walls, and the model of the endodontically treated tooth with two remaining walls and an inserted post. Mathematical models of the abutment made according to the literature data are very similar to the real abutment, and the simplifications are minimal. These models enable calculations of stress and deformation of the dental structures. The finite element analysis provides useful information in understanding biomechanical problems and gives guidance for clinical research.
SWOT Analysis of Software Development Process Models
Ashish B. Sasankar
2011-09-01
Full Text Available Software worth billions and trillions of dollars has gone to waste in the past due to the lack of proper techniques for developing software, resulting in a software crisis. Historically, the process of software development has played an important role in software engineering. A number of life cycle models have been developed in the last three decades. This paper is an attempt to analyze software process models using the SWOT method. The objective is to identify the Strengths, Weaknesses, Opportunities and Threats of the Waterfall, Spiral, Prototype and other models.
MODELING AND ANALYSIS OF REGIONAL BOUNDARY SYSTEM
YAN Guangle; WANG Huanchen
2001-01-01
In this paper, the problems of modeling and analyzing systems with changeable boundaries are researched. First, a kind of expanding system is set up, in which the changeable boundary is dealt with as a regional boundary. Then some related models are developed to describe the regional boundary system. Next, the transition or drift of bifurcation points in the system is discussed. A fascinating case is studied in which two or more classes of chaotic attractive points coexist together or exist alternately in the same system. Lastly, an effective new method of chaos avoidance for the system is put forward.
MODELING HUMAN RELIABILITY ANALYSIS USING MIDAS
Ronald L. Boring; Donald D. Dudenhoeffer; Bruce P. Hallbert; Brian F. Gore
2006-05-01
This paper summarizes an emerging collaboration between Idaho National Laboratory and NASA Ames Research Center regarding the utilization of high-fidelity MIDAS simulations for modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error with novel control room equipment and configurations, (ii) the investigative determination of risk significance in recreating past event scenarios involving control room operating crews, and (iii) the certification of novel staffing levels in control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of risk in next generation control rooms.
Theoretical analysis and modeling for nanoelectronics
Baccarani, Giorgio; Gnani, Elena; Gnudi, Antonio; Reggiani, Susanna
2016-11-01
In this paper we review the evolution of Microelectronics and its transformation into Nanoelectronics, following the predictions of Moore's law, and some of the issues related with this evolution. Next, we discuss the requirements of device modeling and the solutions proposed throughout the years to address the physical effects related with an extreme device miniaturization, such as hot-electron effects, band splitting into multiple sub-bands, quasi-ballistic transport and electron tunneling. The most important physical models are shortly highlighted, and a few simulation results of heterojunction TFETs are reported and discussed.
CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE
José Luis Bernal Agudo
2015-06-01
Full Text Available The evaluation model that the LOMCE proposes has its roots in neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation at the center of the teaching-learning processes. The model reflects poor planning, since the theory that justifies it is not developed into coherent proposals; there is an excessive concern for excellence, and diversity is left out. A comprehensive way of understanding education should be recovered.
Applications of model theory to functional analysis
Iovino, Jose
2014-01-01
During the last two decades, methods that originated within mathematical logic have exhibited powerful applications to Banach space theory, particularly set theory and model theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the
QUALITATIVE ANALYSIS OF BOBWHITE QUAIL POPULATION MODEL
李先义; 朱德明
2003-01-01
In this paper, the qualitative behavior of solutions of the bobwhite quail population model, where 0 < a < 1 < a + b, p, c ∈ (0, ∞) and k is a nonnegative integer, is investigated. Some necessary and sufficient as well as sufficient conditions for all solutions of the model to oscillate, and some sufficient conditions for all positive solutions of the model to be nonoscillatory, together with the convergence of nonoscillatory solutions, are derived. Furthermore, the permanence of every positive solution of the model is also shown. Many known results are improved and extended, and some new results are obtained for G. Ladas' open problems.
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
Yaghoobi, Mehrdad; Gribonval, Remi; Davies, Mike E
2012-01-01
We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no final answer here for which constraint is the most relevant constraint, we investigate some conventional constraints in the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical lear...
Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis
Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.
2013-12-01
Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source for human exposures to potential carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models were developed to simulate the vapor intrusion process, detailed validation of these models against well controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and soil gas flux and indoor air concentration measurement. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.
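The probabilistic approach in the last sentence can be illustrated with a deliberately simplified attenuation-factor model (both the model and the lognormal parameters are assumptions for illustration, not the authors' validated 3-D simulation):

```python
import random

def monte_carlo_indoor_air(n=10000, seed=42):
    # toy vapor-intrusion screen: indoor concentration = subsurface source
    # concentration x attenuation factor, with both inputs uncertain and
    # sampled from assumed lognormal distributions
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        source = rng.lognormvariate(mu=2.0, sigma=0.5)   # ug/m3, assumed distribution
        alpha = rng.lognormvariate(mu=-9.0, sigma=1.0)   # dimensionless, assumed
        results.append(source * alpha)
    results.sort()
    return {"median": results[n // 2], "p95": results[int(0.95 * n)]}

stats = monte_carlo_indoor_air()
```

The output is a distribution of indoor air concentration rather than a point estimate, so an exceedance probability against a screening level can be read off directly from the sorted samples.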
Vibration analysis with MADYMO human models
Verver, M.M.; Hoof, J.F.A.M. van
2002-01-01
The importance of comfort for the automotive industry is increasing. Car manufacturers use comfort to distinguish their products from their competitors. However, the development and design of a new car seat or interior is very time consuming and expensive. The introduction of computer models of huma
Urban drainage models - making uncertainty analysis simple
Vezzaro, Luca; Mikkelsen, Peter Steen; Deletic, Ana
2012-01-01
There is increasing awareness about uncertainties in modelling of urban drainage systems and, as such, many new methods for uncertainty analyses have been developed. Despite this, all available methods have limitations which restrict their widespread application among practitioners. Here, a modif...
Feature Analysis for Modeling Game Content Quality
Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian
2011-01-01
’ preferences, and by defining the smallest game session size for which the model can still predict reported emotion with acceptable accuracy. Neuroevolutionary preference learning is used to approximate the function from game content to reported emotional preferences. The experiments are based on a modified...
Future of human models for crash analysis
Wismans, J.S.H.M.; Happee, R.; Hoof, J.F.A.M. van; Lange, R. de
2001-01-01
In the crash safety field mathematical models can be applied in practically all area's of research and development including: reconstruction of actual accidents, design (CAD) of the crash response of vehicles, safety devices and roadside facilities and in support of human impact biomechanical
Characteristic Analysis of Fire Modeling Codes
Lee, Yoon Hwan; Yang, Joon Eon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Jong Hoon [Kyeongmin College, Ujeongbu (Korea, Republic of)
2004-04-15
This report documents and compares key features of four zone models: CFAST, COMPBRN IIIE, MAGIC and the Fire Induced Vulnerability Evaluation (FIVE) methodology. CFAST and MAGIC handle multi-compartment, multi-fire problems, using many equations; COMPBRN and FIVE handle single compartment, single fire source problems, using simpler equations. The increased rigor of the formulation of CFAST and MAGIC does not mean that these codes are more accurate in every domain; for instance, the FIVE methodology uses a single zone approximation with a plume/ceiling jet sublayer, while the other models use a two-zone treatment without a plume/ceiling jet sublayer. Comparisons with enclosure fire data indicate that inclusion of plume/ceiling jet sublayer temperatures is more conservative, and generally more accurate, than neglecting them. Adding a plume/ceiling jet sublayer to the two-zone models should be relatively straightforward, but it has not been done yet for any of the two-zone models. Such an improvement is in progress for MAGIC.
Financial Markets Analysis by Probabilistic Fuzzy Modelling
J.H. van den Berg (Jan); W.-M. van den Bergh (Willem-Max); U. Kaymak (Uzay)
2003-01-01
For successful trading in financial markets, it is important to develop financial models where one can identify different states of the market for modifying one's actions. In this paper, we propose to use probabilistic fuzzy systems for this purpose. We concentrate on Takagi–Sugeno (
Social Ecological Model Analysis for ICT Integration
Zagami, Jason
2013-01-01
ICT integration of teacher preparation programmes was undertaken by the Australian Teaching Teachers for the Future (TTF) project in all 39 Australian teacher education institutions and highlighted the need for guidelines to inform systemic ICT integration approaches. A Social Ecological Model (SEM) was used to positively inform integration…
Mixture model analysis of complex samples
Wedel, M; ter Hofstede, F; Steenkamp, JBEM
1998-01-01
We investigate the effects of a complex sampling design on the estimation of mixture models. An approximate or pseudo likelihood approach is proposed to obtain consistent estimates of class-specific parameters when the sample arises from such a complex design. The effects of ignoring the sample desi
An Analysis of Student Model Portability
Valdés Aguirre, Benjamín; Ramírez Uresti, Jorge A.; du Boulay, Benedict
2016-01-01
Sharing user information between systems is an area of interest for every field involving personalization. Recommender Systems are more advanced in this aspect than Intelligent Tutoring Systems (ITSs) and Intelligent Learning Environments (ILEs). A reason for this is that the user models of Intelligent Tutoring Systems and Intelligent Learning…
Review and analysis of biomass gasification models
Puig Arnavat, Maria; Bruno, Joan Carles; Coronas, Alberto
2010-01-01
The use of biomass as a source of energy has been further enhanced in recent years and special attention has been paid to biomass gasification. Due to the increasing interest in biomass gasification, several models have been proposed in order to explain and understand this complex process, and th...
A patient-centered care ethics analysis model for rehabilitation.
Hunt, Matthew R; Ells, Carolyn
2013-09-01
There exists a paucity of ethics resources tailored to rehabilitation. To help fill this ethics resource gap, the authors developed an ethics analysis model specifically for use in rehabilitation care. The Patient-Centered Care Ethics Analysis Model for Rehabilitation is a process model to guide careful moral reasoning for particularly complex or challenging matters in rehabilitation. The Patient-Centered Care Ethics Analysis Model for Rehabilitation was developed over several iterations, with feedback at different stages from rehabilitation professionals and bioethics experts. Development of the model was explicitly informed by the theoretical grounding of patient-centered care and the context of rehabilitation, including the International Classification of Functioning, Disability and Health. Being patient centered, the model encourages (1) shared control of consultations, decisions about interventions, and management of the health problems with the patient and (2) understanding the patient as a whole person who has individual preferences situated within social contexts. Although the major process headings of the Patient-Centered Care Ethics Analysis Model for Rehabilitation resemble typical ethical decision-making and problem-solving models, the probes under those headings direct attention to considerations relevant to rehabilitation care. The Patient-Centered Care Ethics Analysis Model for Rehabilitation is a suitable tool for rehabilitation professionals to use (in real time, for retrospective review, and for training purposes) to help arrive at ethical outcomes.
Coverage Modeling and Reliability Analysis Using Multi-state Function
(no author listed)
2007-01-01
Fault tree analysis is an effective method for predicting the reliability of a system. It gives a pictorial representation and a logical framework for analyzing reliability, and it has long been used for the quantitative and qualitative analysis of the failure modes of critical systems. In this paper, we propose a new general coverage model (GCM) based on hardware-independent faults. Using this model, an effective software tool can be constructed to detect, locate, and recover from faults in the system. The model can also be applied to identify the key components whose failure can cause the failure of the system, using failure mode and effect analysis (FMEA).
Evaluation of Cost Models and Needs & Gaps Analysis
Kejser, Ulla Bøgvad
2014-01-01
This report, ‘D3.1—Evaluation of Cost Models and Needs & Gaps Analysis’, provides an analysis of existing research related to the economics of digital curation and cost & benefit modelling. It reports upon the investigation of how well current models and tools meet stakeholders’ needs for calculating… for a more efficient use of resources for digital curation. To facilitate and clarify the model evaluation, the report first outlines a basic terminology and a general description of the characteristics of cost and benefit models. The report then describes how the ten current and emerging cost and benefit… they break down costs. This is followed by an in-depth analysis of stakeholders’ needs for financial information derived from the 4C project stakeholder consultation. The stakeholders’ needs analysis indicated that models should: support accounting, but more importantly enable budgeting; be able…
ERIC Clearinghouse on Educational Management, Eugene, OR.
This review analyzes current research trends in the application of planning models to broad educational systems. Planning models reviewed include systems approach models, simulation models, operational gaming, linear programing, Markov chain analysis, dynamic programing, and queuing techniques. A 77-item bibliography of recent literature is…
Employing Power Graph Analysis to Facilitate Modeling Molecular Interaction Networks
Momchil Nenov
2015-04-01
Mathematical modeling is used to explore and understand complex systems ranging from weather patterns to social networks to gene-expression regulatory mechanisms. There is an upper limit, imposed by finite computational resources, to the amount of detail that can be reflected in a model. Thus, there are methods to reduce the complexity of the modeled system to its most significant parameters. We discuss the suitability of clustering techniques, in particular Power Graph Analysis, as an intermediate step of modeling.
Electrical Power Distribution and Control Modeling and Analysis
Fu, Johnny S.; Liffring, Mark; Mehdi, Ishaque S.
2001-01-01
This slide presentation reviews the use of Electrical Power Distribution and Control (EPD&C) Modeling and how modeling can support analysis. The presentation discusses using the EASY5 model to simulate and analyze the Space Shuttle Electric Auxiliary Power Unit. Diagrams of the model schematics are included, as well as graphs of the battery cell impedance, hydraulic load dynamics, and EPD&C response to hydraulic load variations.
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics, including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model-building techniques…
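The model-comparison approach the book is noted for can be sketched as judging an augmented model A against a compact model C via the proportional reduction in error (PRE) and an F-statistic. The helper names and demo data below are illustrative, not taken from the book:

```python
# Sketch of model comparison via PRE: how much error does the augmented
# model remove relative to the compact model, per extra parameter spent?
def sse(y, yhat):
    return sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))

def pre_f(sse_c, sse_a, p_c, p_a, n):
    # p_c, p_a: number of parameters in the compact and augmented models.
    pre = (sse_c - sse_a) / sse_c
    f = (pre / (p_a - p_c)) / ((1.0 - pre) / (n - p_a))
    return pre, f

def fit_line(x, y):
    # Simple least-squares line (the augmented model in the demo below).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b
```

Comparing a mean-only compact model (one parameter) against a fitted line (two parameters) then reduces to a single F-test on the PRE.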
Strategic Mobility 21: Modeling, Simulation, and Analysis
2010-04-14
Womack & Jones of the Lean Enterprise Institute (LEI). In our initial use of this methodology with Dole Foods, there were over five organizations... energy. The Value Stream Analysis Future State then designed Kaizens... Value Stream Mapping... principles described in this report are excerpted from "Learning To See", written by James Womack & Dan Jones of the Lean Enterprise Institute (LEI).
Army Contracting Command Workforce Model Analysis
2010-10-04
College), and he has taught visiting seminars at the American University in Cairo and Instituto de Empresas in Madrid. Dr. Reed retired after 21 years... Advisory Panel, 2007, p. 7) not only points toward a strained workforce that lacks the requisite market expertise, but also to other factors that... Spyropoulos, D. (2005, March). Analysis of career progression and job performance in internal labor markets: The case of
Formal Analysis of Graphical Security Models
Aslanyan, Zaruhi
The increasing usage of computer-based systems in almost every aspect of our daily life makes the threat posed by potential attackers more and more dangerous, and a successful attack more and more rewarding. Moreover, the complexity of these systems is also increasing, including physical devices…, software components and human actors interacting with each other to form so-called socio-technical systems. The importance of socio-technical systems to modern societies requires verifying their security properties formally, while their inherent complexity makes manual analyses impracticable. Graphical… models for security offer an unrivalled opportunity to describe socio-technical systems, for they allow representing different aspects like human behaviour, computation and physical phenomena in an abstract yet uniform manner. Moreover, these models can be assigned a formal semantics, thereby allowing…
Analysis of mathematical modelling on potentiometric biosensors.
Mehala, N; Rajendran, L
2014-01-01
A mathematical model of potentiometric enzyme electrodes for a nonsteady condition has been developed. The model is based on the system of two coupled nonlinear time-dependent reaction diffusion equations for Michaelis-Menten formalism that describes the concentrations of substrate and product within the enzymatic layer. Analytical expressions for the concentration of substrate and product and the corresponding flux response have been derived for all values of parameters using the new homotopy perturbation method. Furthermore, the complex inversion formula is employed in this work to solve the boundary value problem. The analytical solutions obtained allow a full description of the response curves for only two kinetic parameters (unsaturation/saturation parameter and reaction/diffusion parameter). Theoretical descriptions are given for the two limiting cases (zero and first order kinetics) and relatively simple approaches for general cases are presented. All the analytical results are compared with simulation results using Scilab/Matlab program. The numerical results agree with the appropriate theories.
Supplementing biomechanical modeling with EMG analysis
Lewandowski, Beth; Jagodnik, Kathleen; Crentsil, Lawton; Humphreys, Bradley; Funk, Justin; Gallo, Christopher; Thompson, William; DeWitt, John; Perusek, Gail
2016-01-01
It is well established that astronauts experience musculoskeletal deconditioning when exposed to microgravity environments for long periods of time. Spaceflight exercise is used to counteract these effects, and the Advanced Resistive Exercise Device (ARED) on the International Space Station (ISS) has been effective in minimizing musculoskeletal losses. However, the exercise devices of the new exploration vehicles will have requirements of limited mass, power and volume. Because of these limitations, there is a concern that the exercise devices will not be as effective as ARED in maintaining astronaut performance. Therefore, biomechanical modeling is being performed to provide insight on whether the small Multi-Purpose Crew Vehicle (MPCV) device, which utilizes a single-strap design, will provide sufficient physiological loading to maintain musculoskeletal performance. Electromyography (EMG) data are used to supplement the biomechanical model results and to explore differences in muscle activation patterns during exercises using different loading configurations.
Modelling Analysis of Sewage Sludge Amended Soil
Sørensen, P. B.; Carlsen, L.; Vikelsøe, J.
The topic is risk assessment of sludge supply to agricultural soil in relation to xenobiotics. A large variety of xenobiotics arrive at the wastewater treatment plant in the wastewater. Many of these components are hydrophobic and thus will accumulate in the sludge solids and are removed from… the plant effluent. The focus in this work is the top soil, as this layer is important for the fate of a xenobiotic substance due to the high biological activity. A simple model for the top soil is used where the substance is assumed homogeneously distributed, as suggested in the European Union System… for the Evaluation of Substances (EUSES). It is shown how the fraction of substance mass which is leached from the top soil is a simple function of the ratio between the degradation half-life and the adsorption coefficient. This model can be used in probabilistic risk assessment of agricultural soils…
Feature Analysis for Modeling Game Content Quality
Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian
2011-01-01
…entertainment for individual game players is to tailor player experience in real-time via automatic game content generation. Modeling the relationship between game content and player preferences or affective states is an important step towards this type of game personalization. In this paper we… analyse the relationship between level design parameters of platform games and player experience. We introduce a method to extract the most useful information about game content from short game sessions by investigating the size of game session that yields the highest accuracy in predicting players'… preferences, and by defining the smallest game session size for which the model can still predict reported emotion with acceptable accuracy. Neuroevolutionary preference learning is used to approximate the function from game content to reported emotional preferences. The experiments are based on a modified…
Diagnostics and future evolution analysis of the two parametric models
Yang, Guang; Meng, Xinhe
2016-01-01
In this paper, we apply three diagnostics, including $Om$, the Statefinder hierarchy and the growth rate of perturbations, to discriminate the two parametric models for the effective pressure from the $\Lambda$CDM model. Using the $Om$ diagnostic, we find that model 1 and model 2 can hardly be distinguished from each other, or from the $\Lambda$CDM model, at the 68% confidence level. As a supplement, using the Statefinder hierarchy diagnostics and the growth rate of perturbations, we discover that not only can our two parametric models be well distinguished from the $\Lambda$CDM model, but also, compared with the $Om$ diagnostic, model 1 and model 2 can be better distinguished from each other. In addition, we explore the fate of universe evolution under our two models by means of the rip analysis.
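For reference, the $Om$ diagnostic used above is commonly defined as $Om(z) = (E^2(z)-1)/((1+z)^3-1)$ with $E = H/H_0$; for flat $\Lambda$CDM it is constant and equal to $\Omega_m$, which is what makes it a null test. A minimal sketch (the parameter value is hypothetical):

```python
# Om(z) diagnostic: constant for flat LambdaCDM, so any z-dependence
# signals a departure from the LambdaCDM expansion history.
def om_diagnostic(E2, z):
    # E2 = (H(z)/H0)^2; valid for z > 0.
    return (E2 - 1.0) / ((1.0 + z) ** 3 - 1.0)

def E2_lcdm(z, omega_m=0.3):
    # Dimensionless Hubble rate squared for flat LambdaCDM
    # (omega_m = 0.3 is an illustrative value).
    return omega_m * (1.0 + z) ** 3 + (1.0 - omega_m)
```

Evaluating `om_diagnostic` on data-driven $E^2(z)$ from a competing model and checking for redshift dependence is the essence of the test.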
Stochastic modeling and analysis of telecoms networks
Decreusefond, Laurent
2012-01-01
This book addresses the stochastic modeling of telecommunication networks, introducing the main mathematical tools for that purpose, such as Markov processes, real and spatial point processes and stochastic recursions, and presenting a wide list of results on stability, performance and comparison of systems. The authors propose a comprehensive mathematical construction of the foundations of stochastic network theory: Markov chains and continuous-time Markov chains are extensively studied using an original martingale-based approach. A complete presentation of stochastic recursions from an…
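As a small illustration of the kind of Markov model such network theory builds on, the stationary queue-length distribution of an M/M/1 queue is geometric (a standard textbook result, not specific to this book):

```python
# M/M/1 queue: Poisson arrivals at rate lam, exponential service at rate mu.
# When rho = lam/mu < 1, the stationary queue length is geometric:
# pi_n = (1 - rho) * rho**n.
def mm1_stationary(lam, mu, nmax):
    rho = lam / mu
    assert rho < 1, "stability requires lam < mu"
    return [(1 - rho) * rho ** n for n in range(nmax + 1)]

def mm1_mean_queue_length(lam, mu):
    # Closed-form mean number in system: rho / (1 - rho).
    rho = lam / mu
    return rho / (1 - rho)
```

Truncating the distribution at a large `nmax` gives a numerical check that the probabilities sum to one and reproduce the closed-form mean.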
Analysis of modeling errors in system identification
Hadaegh, F. Y.; Bekey, G. A.
1986-01-01
This paper is concerned with the identification of a system in the presence of several error sources. Following some basic definitions, the notion of 'near-equivalence in probability' is introduced using the concept of near-equivalence between a model and a process. Necessary and sufficient conditions for the identifiability of system parameters are given. The effects of structural error on the parameter estimates for both deterministic and stochastic cases are considered.
Data perturbation analysis of a linear model
(no author listed)
2000-01-01
The linear model features were carefully studied in the cases of data perturbation and mean shift perturbation. Some important features were also proved mathematically. The results show that the mean shift perturbation is equivalent to the data perturbation; that is, adding a parameter to an observation equation means that this set of data is deleted from the data set. The estimate of this parameter is, in fact, its predicted residual.
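The stated equivalence can be checked numerically: fitting with an indicator parameter for one observation reproduces the coefficient estimates obtained by deleting that observation, and the indicator's estimate equals that observation's predicted residual. A self-contained sketch with illustrative data (plain normal-equations least squares):

```python
# Check: mean-shift perturbation (indicator column for observation k)
# is equivalent to deleting observation k from the data set.
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    # Least squares via the normal equations X'X beta = X'y.
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)
```

Because the indicator lets the augmented model fit observation k exactly, the remaining observations alone determine the other coefficients, and the indicator absorbs the deleted point's predicted residual.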
Modeling and analysis of caves using voxelization
Szeifert, Gábor; Szabó, Tivadar; Székely, Balázs
2014-05-01
Although there are many ways to create three-dimensional representations of caves using modern information technology, modeling of caves has long been challenging for researchers. One promising alternative modeling method is the use of voxels. We are using geodetic measurements as the input for our voxelization project. These geodetic underground surveys recorded the azimuth, altitude and distance of corner points of cave systems relative to each other. The diameter of each cave section is estimated from separate databases originating from different surveys. We have developed a simple but efficient method (it covers more than 99.9% of the volume of the input model on average) to convert these vector-type datasets to voxels. We have also developed software components to make visualization of the voxel and vector models easier. Since each corner-point position is measured relative to other corner-point positions, propagation of uncertainties is an important issue in the case of long caves with many separate sections. We are using Monte Carlo simulations to analyze the effect of the error of each geodetic instrument possibly involved in a survey. Cross-sections of the simulated three-dimensional distributions show that even tiny uncertainties in individual measurements can result in high variation of positions, which could be reduced by distributing the closing errors if such data are available. Using the results of our simulations, we can estimate cave volume and the error of the calculated cave volume depending on the complexity of the cave. Acknowledgements: the authors are grateful to the Ariadne Karst and Cave Exploring Association and the State Department of Environmental and Nature Protection of the Hungarian Ministry of Rural Development, Department of National Parks and Landscape Protection, Section Landscape and Cave Protection and Ecotourism for providing the cave measurement data. BS contributed as an Alexander von Humboldt Research
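The Monte Carlo idea described in this abstract can be sketched as follows; the instrument error magnitudes and leg geometry below are hypothetical, not the survey's actual values:

```python
# Sketch: propagate random instrument errors (azimuth, inclination, distance)
# along a survey traverse and examine the scatter of the computed end point.
import math
import random

def endpoint(legs):
    # legs: list of (azimuth_deg, inclination_deg, distance_m),
    # each measured relative to the previous corner point.
    x = y = z = 0.0
    for az, inc, d in legs:
        a, i = math.radians(az), math.radians(inc)
        x += d * math.cos(i) * math.sin(a)
        y += d * math.cos(i) * math.cos(a)
        z += d * math.sin(i)
    return x, y, z

def monte_carlo(legs, sigma_ang=0.5, sigma_d=0.05, n=2000, seed=0):
    # sigma_ang in degrees, sigma_d in metres (illustrative magnitudes).
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        noisy = [(az + rng.gauss(0, sigma_ang),
                  inc + rng.gauss(0, sigma_ang),
                  d + rng.gauss(0, sigma_d)) for az, inc, d in legs]
        pts.append(endpoint(noisy))
    return pts
```

Because each leg is measured relative to the previous one, the end-point scatter grows with traverse length, which is the accumulation effect the abstract describes.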