Linear mixed-effects modeling approach to FMRI group analysis.
Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W
2013-06-01
Conventional group analysis is usually performed with Student-type t-tests, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling continuous explanatory variables (covariates) in the presence of a within-subject (repeated-measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to handle many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. Intraclass correlation (ICC) values can be easily obtained from an LME model with crossed random effects, even in the presence of confounding fixed effects. Simulations of one prototypical scenario indicate that LME modeling strikes a balance between control of false positives and sensitivity.
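The ICC idea mentioned above can be illustrated with a minimal random-intercept simulation. This is an illustrative numpy sketch, not the paper's FMRI implementation; all variance values are hypothetical, and the balanced one-way method-of-moments estimate stands in for a full LME fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_rep = 30, 10
sigma_b, sigma_e = 1.0, 0.5           # between- and within-subject SDs (hypothetical)

# simulate a random-intercept model: y_ij = mu + b_i + e_ij
b = rng.normal(0, sigma_b, n_subj)
y = 2.0 + b[:, None] + rng.normal(0, sigma_e, (n_subj, n_rep))

# one-way ANOVA method-of-moments estimates (equivalent to the
# balanced random-intercept LME)
grand = y.mean()
msb = n_rep * np.sum((y.mean(axis=1) - grand) ** 2) / (n_subj - 1)
msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n_subj * (n_rep - 1))
var_b = (msb - msw) / n_rep           # between-subject variance estimate
icc = var_b / (var_b + msw)           # true ICC = 1.0**2 / (1.0**2 + 0.5**2) = 0.8
print(round(icc, 2))
```

For balanced data these estimates coincide with the LME solution, so the recovered ICC approaches the true value 0.8 as the number of subjects grows.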
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not available.
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
Semiparametric mixed-effects analysis of PK/PD models using differential equations.
Wang, Yi; Eskridge, Kent M; Zhang, Shunpu
2008-08-01
Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
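The model class dx/dt = A(t)x + B(t) can be sketched numerically as follows. This is a hypothetical one-dimensional illustration: the truncated-line basis, its coefficients, and the elimination rate are all made up, and no penalized-spline estimation is performed, only forward simulation.

```python
import numpy as np

# Illustrative 1-D version of dx/dt = A(t) x + B(t), with B(t) a
# flexible function built from a small spline-like basis (coefficients
# would be penalized and estimated in a real fit; here they are fixed).
knots = np.array([1.0, 2.0, 3.0])
coef = np.array([0.5, -0.8, 0.3])      # hypothetical basis coefficients

def B(t):
    # truncated-line (degree-1 spline) expansion plus an intercept
    return 0.2 + np.sum(coef * np.maximum(t - knots, 0.0))

def A(t):
    return -0.7                         # simple constant elimination rate

def integrate(x0=1.0, t_end=4.0, h=0.001):
    # fixed-step RK4 integration of the ODE from x(0) = x0
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        k1 = A(t) * x + B(t)
        k2 = A(t + h / 2) * (x + h / 2 * k1) + B(t + h / 2)
        k3 = A(t + h / 2) * (x + h / 2 * k2) + B(t + h / 2)
        k4 = A(t + h) * (x + h * k3) + B(t + h)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

print(round(integrate(), 3))
```

In the semiparametric setting, the nonparametric B(t) is what absorbs structural misspecification; a systematically nonzero estimate of B(t) flags a deficiency in the assumed compartmental structure.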
Extending existing structural identifiability analysis methods to mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2018-01-01
The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
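For the non-mixed-effects core of the Taylor series approach, the idea of an exhaustive summary can be illustrated with a classic one-compartment example. This sympy sketch checks only the fixed-effects identifiability question and is not the paper's mixed-effects extension; the model and symbols are standard textbook choices, not taken from the article.

```python
import sympy as sp

k, V, D, t = sp.symbols('k V D t', positive=True)

# one-compartment model: dx/dt = -k*x, x(0) = D (known dose), y = x/V
x = D * sp.exp(-k * t)
y = x / V

# exhaustive summary: successive Taylor coefficients of the observation
c0 = y.subs(t, 0)                      # D/V
c1 = sp.diff(y, t).subs(t, 0)          # -k*D/V

# can an alternative parameter pair (k2, V2) reproduce the same summary?
k2, V2 = sp.symbols('k2 V2', positive=True)
sol = sp.solve([sp.Eq(c0, c0.subs({k: k2, V: V2})),
                sp.Eq(c1, c1.subs({k: k2, V: V2}))], [k2, V2], dict=True)
print(sol)
```

A unique solution k2 = k, V2 = V indicates structural identifiability of this simple model; in the mixed-effects extension described above, the analogous uniqueness argument is applied to the statistical moments of the derived functions of random variables.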
Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei
2017-11-01
A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation has often been overlooked in studies of the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of the human over a frequency range of 1-250 Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of the human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.
Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J
2017-10-15
Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Design and analysis of Q-RT-PCR assays for haematological malignancies using mixed effects models
DEFF Research Database (Denmark)
Bøgsted, Martin; Mandrup, Charlotte; Petersen, Anders
2009-01-01
The recent WHO classification of haematological malignancies includes detection of genetic abnormalities with prognostic significance. Consequently, an increasing number of specific real-time quantitative reverse transcription polymerase chain reaction (Q-RT-PCR) based assays are in clinical and research use and need quality control for accuracy and precision. Especially the identification of experimental variations and the statistical analysis have recently created discussion. The standard analytical technique is the Delta-Delta-Ct method. Although this method accounts for sample-specific variation, … an approach was developed based on a linear mixed effects model for factorial designs. The model consists of an analysis of variance in which the variation of each fixed effect of interest and the identified experimental and biological nuisance variations are split; it thereby accounts for varying efficiency, inhomogeneous …
Kriging with mixed effects models
Directory of Open Access Journals (Sweden)
Alessio Pollice
2007-10-01
In this paper the effectiveness of the use of mixed effects models for estimation and prediction purposes in spatial statistics for continuous data is reviewed in the classical and Bayesian frameworks. A case study on agricultural data is also provided.
Kliem, Sören; Kröger, Christoph; Kosfelder, Joachim
2010-12-01
At present, the most frequently investigated psychosocial intervention for borderline personality disorder (BPD) is dialectical behavior therapy (DBT). We conducted a meta-analysis to examine the efficacy and long-term effectiveness of DBT. Systematic bibliographic research was undertaken to find relevant literature from online databases (PubMed, PsycINFO, PsychSpider, Medline). We excluded studies in which patients with diagnoses other than BPD were treated, the treatment did not comprise all components specified in the DBT manual or in the suggestions for inpatient DBT programs, patients failed to be diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders, and the intervention group comprised fewer than 10 patients. Using a mixed-effect hierarchical modeling approach, we calculated global effect sizes and effect sizes for suicidal and self-injurious behaviors. Calculations of postintervention global effect sizes were based on 16 studies. Of these, 8 were randomized controlled trials (RCTs), and 8 were neither randomized nor controlled (nRCT). The dropout rate was 27.3% pre- to posttreatment. A moderate global effect and a moderate effect size for suicidal and self-injurious behaviors were found when including a moderator for RCTs with borderline-specific treatments. There was no evidence for the influence of other moderators (e.g., quality of studies, setting, duration of intervention). A small decline in effects was observed from posttreatment to follow-up, based on only 5 RCTs. Future research should compare DBT with other active borderline-specific treatments that have also demonstrated their efficacy using several long-term follow-up assessment points. (c) 2010 APA, all rights reserved.
Seng, Kok-Yong; Chen, Ying; Wang, Ting; Ming Chai, Adam Kian; Yuen Fun, David Chiok; Teo, Ya Shi; Sze Tan, Pearl Min; Ang, Wee Hon; Wei Lee, Jason Kai
2016-04-01
Many longitudinal studies have collected serial body core temperature (T c) data to understand thermal work strain of workers under various environmental and operational heat stress environments. This provides the opportunity for the development of mathematical models to analyse and forecast temporal T c changes across populations of subjects. Such models can reduce the need for invasive methods that continuously measure T c. This current work sought to develop a nonlinear mixed effects modelling framework to delineate the dynamic changes of T c and its association with a set of covariates of interest (e.g. heart rate, chest skin temperature), and the structure of the variability of T c in various longitudinal studies. Data to train and evaluate the model were derived from two laboratory investigations involving male soldiers who participated in either a 12 (N = 18) or 15 km (N = 16) foot march with varied clothing, load and heat acclimatisation status. Model qualification was conducted using nonparametric bootstrap and cross validation procedures. For cross validation, the trajectory of a new subject's T c was simulated via Bayesian maximum a posteriori estimation when using only the baseline T c or using the baseline T c as well as measured T c at the end of every work (march) phase. The final model described T c versus time profiles using a parametric function with its main parameters modelled as a sigmoid hyperbolic function of the load and/or chest skin temperature. Overall, T c predictions corresponded well with the measured data (root mean square deviation: 0.16 °C), and compared favourably with those provided by two recently published Kalman filter models.
Ding, Kuan-Fu; Petricoin, Emanuel F; Finlay, Darren; Yin, Hongwei; Hendricks, William P D; Sereduk, Chris; Kiefer, Jeffrey; Sekulic, Aleksandar; LoRusso, Patricia M; Vuori, Kristiina; Trent, Jeffrey M; Schork, Nicholas J
2018-01-12
Cancer cell lines are often used in high-throughput drug screens (HTS) to explore the relationship between cell line characteristics and responsiveness to different therapies. Many current analysis methods infer relationships by focusing on one aspect of cell line drug-specific dose-response curves (DRCs): the concentration causing 50% inhibition of a phenotypic endpoint (IC50). Such methods may overlook DRC features and do not simultaneously leverage information about drug response patterns across cell lines, potentially increasing false positive and negative rates in drug response associations. We consider the application of two methods, each rooted in nonlinear mixed effects (NLME) models, that test the relationships between estimated cell line DRCs and factors that might mitigate response. Both methods leverage estimation and testing techniques that consider the simultaneous analysis of different cell lines to draw inferences about any one cell line. One of the methods is designed to provide an omnibus test of the differences between cell line DRCs that is not focused on any one aspect of the DRC (such as the IC50 value). We simulated different settings and compared the different methods on the simulated data. We also compared the proposed methods against traditional IC50-based methods using 40 melanoma cell lines whose transcriptomes, proteomes, and, importantly, BRAF and related mutation profiles were available. Ultimately, we find that the NLME-based methods are more robust, powerful and, for the omnibus test, more flexible than traditional methods. Their application to the melanoma cell lines reveals insights into factors that may be clinically useful.
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
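A B-spline basis of the kind used for the varying coefficients can be constructed with scipy. This is a generic illustration; the knot placement, degree, and evaluation grid are arbitrary choices, not those of the article.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(t, knots, degree=3):
    # clamp the knot vector by repeating the boundary knots, then
    # evaluate each basis function (unit coefficient vector) at t
    kv = np.concatenate([[knots[0]] * degree, knots, [knots[-1]] * degree])
    n_basis = len(kv) - degree - 1
    B = np.empty((len(t), n_basis))
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0
        B[:, j] = BSpline(kv, c, degree)(t)
    return B

t = np.linspace(0.0, 0.999, 50)                 # grid inside the knot range
B = bspline_design(t, np.linspace(0.0, 1.0, 6)) # 6 knots -> 5 segments
print(B.shape)                                  # (50, 8): 5 segments + degree 3
```

In a varying-coefficient mixed model, this design matrix multiplies the covariate of interest, turning estimation of the smooth coefficient function into estimation of a small vector of basis weights.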
Mixed-effects regression models in linguistics
Heylen, Kris; Geeraerts, Dirk
2018-01-01
When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed. In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...
Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik
2014-12-01
Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue for milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three dimensionality.
A Linear Mixed-Effects Model of Wireless Spectrum Occupancy
Directory of Open Access Journals (Sweden)
Pagadarai Srikanth
2010-01-01
We provide regression analysis-based statistical models to explain the usage of wireless spectrum across four mid-size US cities in four frequency bands. Specifically, the variations in spectrum occupancy across space, time, and frequency are investigated and compared between different sites within the city as well as with other cities. By applying the mixed-effects models, several conclusions are drawn that give the occupancy percentage and the ON time duration of the licensed signal transmission as a function of several predictor variables.
DEFF Research Database (Denmark)
Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik
2004-01-01
The standard software for non-linear mixed-effects analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package, which combines the ordinary differential equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlmeODE package.
Kliem, Sören; Kröger, Christoph
2013-11-01
Post-traumatic stress disorder (PTSD) is of great interest to public health, due to the high burden it places on both the individual and society. We meta-analyzed randomized-controlled trials to examine the effectiveness of early trauma-focused cognitive-behavioral treatment (TFCBT) for preventing chronic PTSD. Systematic bibliographic research was undertaken to find relevant literature from on-line databases (Pubmed, PsycINFO, Psyndex, Medline). Using a mixed-effect approach, we calculated effect sizes (ES) for the PTSD diagnoses (main outcome) as well as PTSD and depressive symptoms (secondary outcomes), respectively. Calculations of ES from pre-intervention to first follow-up assessment were based on 10 studies. A moderate effect (ES = 0.54) was found for the main outcome, whereas ES for secondary outcomes were predominantly small (ES = 0.27-0.45). The ES for the main outcome decreased to small (ES = 0.34) from first follow-up to long-term follow-up assessment. The mean dropout rate was 16.7% pre- to post-treatment. There was evidence for the impact of moderators on different outcomes (e.g., the number of sessions on PTSD symptoms). Future studies should include survivors of other trauma types (e.g., burn injuries) rather than predominantly survivors of accidents and physical assault, and should compare early TFCBT with other interventions that previously demonstrated effectiveness. Copyright © 2013 Elsevier Ltd. All rights reserved.
Liu, Zun-Lei; Yuan, Xing-Wei; Yan, Li-Ping; Yang, Lin-Lin; Cheng, Jia-Hua
2013-09-01
By using the 2008-2010 investigation data about the body condition of small yellow croaker in the offshore waters of southern Yellow Sea (SYS), open waters of northern East China Sea (NECS), and offshore waters of middle East China Sea (MECS), this paper analyzed the spatial heterogeneity of body length-body mass of juvenile and adult small yellow croakers by the statistical approaches of mean regression model and quantile regression model. The results showed that the residual standard errors from the analysis of covariance (ANCOVA) and the linear mixed-effects model were similar, and those from the simple linear regression were the highest. For the juvenile small yellow croakers, their mean body mass in SYS and NECS estimated by the mixed-effects mean regression model was higher than the overall average mass across the three regions, while the mean body mass in MECS was below the overall average. For the adult small yellow croakers, their mean body mass in NECS was higher than the overall average, while the mean body mass in SYS and MECS was below the overall average. The results from quantile regression indicated the substantial differences in the allometric relationships of juvenile small yellow croakers between SYS, NECS, and MECS, with the estimated mean exponent of the allometric relationship in SYS being 2.85, and the interquartile range being from 2.63 to 2.96, which indicated the heterogeneity of body form. The results from ANCOVA showed that the allometric body length-body mass relationships were significantly different between the 25th and 75th percentile exponent values (F=6.38, df=1737, P<0.01) and the 25th percentile and median exponent values (F=2.35, df=1737, P=0.039). The relationship was marginally different between the median and 75th percentile exponent values (F=2.21, df=1737, P=0.051). The estimated body length-body mass exponent of adult small yellow croakers in SYS was 3.01 (10th and 95th percentiles = 2.77 and 3.1, respectively). The
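The allometric exponent reported above is, in the mean-regression case, simply the slope of a log-log fit. The numpy sketch below uses simulated length-mass data (not the study's measurements, and the parameter values are made up); a quantile-regression variant would replace the least-squares criterion with the check-loss at the chosen percentile.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate an allometric length-mass relationship W = a * L^b with b = 3.0
L = rng.uniform(8.0, 20.0, 300)                       # body length (hypothetical units)
W = 0.01 * L ** 3.0 * np.exp(rng.normal(0, 0.1, 300)) # multiplicative error

# the exponent b is the slope of the log-log mean regression
X = np.column_stack([np.ones_like(L), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(W), rcond=None)
print(round(coef[1], 2))   # close to the true exponent 3.0
```

Comparing slopes fitted at different quantiles (e.g. the 25th, 50th, and 75th percentiles) is what reveals the heterogeneity of body form described in the study.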
Longitudinal mixed-effects models for latent cognitive function
van den Hout, Ardo; Fox, Gerardus J.A.; Muniz-Terrera, Graciela
2015-01-01
A mixed-effects regression model with a bent-cable change-point predictor is formulated to describe potential decline of cognitive function over time in the older population. For the individual trajectories, cognitive function is considered to be a latent variable measured through an item response model.
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
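The mechanics of the likelihood-ratio test discussed above can be shown in a few lines. The log-likelihood values here are hypothetical, and the comparison is only valid when both models are fitted with ML (not REML) if they differ in fixed effects; the paper's recommended Kenward-Roger and Satterthwaite approximations are available in R (e.g. via the lmerTest and pbkrtest packages) rather than in this sketch.

```python
from scipy.stats import chi2

# hypothetical ML log-likelihoods of nested mixed models
llf_reduced = -1024.7     # model without the fixed effect of interest
llf_full = -1021.3        # model with the fixed effect

lr_stat = 2 * (llf_full - llf_reduced)
p = chi2.sf(lr_stat, df=1)             # one fixed-effect parameter dropped
print(round(lr_stat, 1), round(p, 3))  # 6.8 0.009
```

As the paper's simulations show, this chi-squared reference distribution is somewhat anti-conservative in small samples, which is exactly why the degrees-of-freedom approximations were evaluated.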
Chi, Peter; Aras, Radha; Martin, Katie; Favero, Carlita
2016-05-15
Fetal Alcohol Spectrum Disorders (FASD) collectively describes the constellation of effects resulting from human alcohol consumption during pregnancy. Even with public awareness, the incidence of FASD is estimated to be upwards of 5% in the general population and is becoming a global health problem. The physical, cognitive, and behavioral impairments of FASD are recapitulated in animal models. Recently rodent models utilizing voluntary drinking paradigms have been developed that accurately reflect moderate consumption, which makes up the majority of FASD cases. The range in severity of FASD characteristics reflects the frequency, dose, developmental timing, and individual susceptibility to alcohol exposure. As most rodent models of FASD use C57BL/6 mice, there is a need to expand the stocks of mice studied in order to more fully understand the complex neurobiology of this disorder. To that end, we allowed pregnant Swiss Webster mice to voluntarily drink ethanol via the drinking in the dark (DID) paradigm throughout their gestation period. Ethanol exposure did not alter gestational outcomes as determined by no significant differences in maternal weight gain, maternal liquid consumption, litter size, or pup weight at birth or weaning. Despite seemingly normal gestation, ethanol-exposed offspring exhibit significantly altered timing to achieve developmental milestones (surface righting, cliff aversion, and open field traversal), as analyzed through mixed-effects Cox proportional hazards models. These results confirm Swiss Webster mice as a viable option to study the incidence and causes of ethanol-induced neurobehavioral alterations during development. Future studies in our laboratory will investigate the brain regions and molecules responsible for these behavioral changes. Copyright © 2016. Published by Elsevier B.V.
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
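The downward bias described above can be reproduced with simulated "Sholl-like" data. The sketch below uses Python's statsmodels rather than the authors' code, and all data are simulated: multiple neurons per animal share an animal-level random effect, so an ordinary regression understates the standard error of an animal-level covariate, while the mixed model also yields the intra-class correlation from its variance components.

```python
# Simulated clustered data: 10 animals x 20 neurons, with an
# animal-level covariate ("sex") and an animal-level random effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_animals, n_neurons = 10, 20
animal = np.repeat(np.arange(n_animals), n_neurons)
sex = np.repeat(rng.integers(0, 2, n_animals), n_neurons)
animal_effect = np.repeat(rng.normal(0, 2.0, n_animals), n_neurons)
y = 10 + 1.0 * sex + animal_effect + rng.normal(0, 1.0, n_animals * n_neurons)
df = pd.DataFrame({"y": y, "sex": sex, "animal": animal})

ols = smf.ols("y ~ sex", df).fit()
lmm = smf.mixedlm("y ~ sex", df, groups=df["animal"]).fit()

# The mixed model's standard error for the animal-level covariate is
# larger (honest); the OLS standard error is biased downward.
print(round(ols.bse["sex"], 3), round(lmm.bse["sex"], 3))

# Intra-class correlation from the fitted variance components:
icc = lmm.cov_re.iloc[0, 0] / (lmm.cov_re.iloc[0, 0] + lmm.scale)
print(round(icc, 2))
```

With the strong clustering simulated here, the OLS p-value for `sex` would be far too optimistic, which is exactly the erroneous rejection the abstract warns about.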
Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco
2017-10-01
The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably from the PFU counts determined by the ISO method after only 3 hours of incubation. When the number of plaques detected after 3 hours was between 4 and 26 PFU, the linear fit was (1.48 × counts at 3 h + 1.97); for values >26 PFU, the fit was (1.18 × counts at 3 h + 2.95). If fewer than 4 plaques were detected after 3 hours, we recommend incubation for the full (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
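The two linear corrections reported above can be written out as a small helper. The function name is mine, the coefficients are the ones given in the abstract, and the below-range cutoff (fewer than 4 PFU at 3 h, where full incubation is recommended) is my reading of the abstract's three cases.

```python
# Predict the final (overnight) PFU count from the 3 h count,
# using the two linear fits reported in the abstract.
def predict_final_pfu(counts_3h: float) -> float:
    if counts_3h > 26:
        return 1.18 * counts_3h + 2.95
    if counts_3h >= 4:
        return 1.48 * counts_3h + 1.97
    # Too few plaques at 3 h to extrapolate reliably; the abstract
    # recommends the full (18 +/- 3) h incubation instead.
    raise ValueError("too few plaques at 3 h; incubate fully")

print(predict_final_pfu(10))   # 1.48 * 10 + 1.97 = 16.77
print(predict_final_pfu(50))   # 1.18 * 50 + 2.95 = 61.95
```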
Latent Fundamentals Arbitrage with a Mixed Effects Factor Model
Directory of Open Access Journals (Sweden)
Andrei Salem Gonçalves
2012-09-01
We propose a single-factor mixed effects panel data model to create an arbitrage portfolio that identifies differences in firm-level latent fundamentals. Furthermore, we show that even though the characteristics that affect returns are unknown variables, it is possible to identify the strength of the combination of these latent fundamentals for each stock by following a simple approach using historical data. As a result, a trading strategy that bought the stocks with the best fundamentals (the strong fundamentals portfolio) and sold the stocks with the worst ones (the weak fundamentals portfolio) realized significant risk-adjusted returns in the U.S. market for the period between July 1986 and June 2008. To ensure robustness, we performed subperiod and seasonal analyses and adjusted for trading costs, and we found further empirical evidence that a simple investment rule that identifies these latent fundamentals from the structure of past returns can lead to profit.
Faraway, Julian J
2005-01-01
Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...
A brief introduction to regression designs and mixed-effects modelling by a recent convert
Balling, Laura Winther
2008-01-01
This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable selection are discussed.
Directory of Open Access Journals (Sweden)
Petras Rupšys
2015-01-01
A stochastic modeling approach based on the Bertalanffy law has gained interest due to its ability to produce more accurate results than deterministic approaches. We examine tree crown width dynamics with a Bertalanffy-type stochastic differential equation (SDE) and mixed-effects parameters. In this study, we demonstrate how this simple model can be used to calculate predictions of crown width. We propose a parameter estimation method and computational guidelines. The primary goal of the study was to estimate the parameters by considering discrete sampling of the diameter at breast height and crown width and by using a maximum likelihood procedure. Performance statistics for the crown width equation include statistical indexes and analysis of residuals. We use data provided by the Lithuanian National Forest Inventory from Scots pine trees to illustrate issues of our modeling technique. Comparison of the predicted crown width values of the mixed-effects parameters model with those obtained using the fixed-effects parameters model demonstrates the predictive power of the stochastic differential equations model with mixed-effects parameters. All results were implemented in the symbolic algebra system MAPLE.
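As a rough illustration of the kind of SDE involved, the sketch below simulates a mean-reverting growth equation of Bertalanffy type, dX_t = beta(alpha − X_t)dt + sigma dW_t, by Euler-Maruyama in Python. The parameter values are illustrative, not the paper's estimates, and the paper's actual model form may differ.

```python
# Euler-Maruyama simulation of a Bertalanffy-type mean-reverting
# growth SDE: dX = beta * (alpha - X) dt + sigma dW.
import numpy as np

def simulate_growth_sde(alpha=10.0, beta=0.5, sigma=0.3,
                        x0=0.5, t_end=20.0, n_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        drift = beta * (alpha - x[i])          # pull toward the asymptote
        x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate_growth_sde()
# The trajectory relaxes toward the asymptote alpha = 10 and then
# fluctuates around it with stationary spread sigma / sqrt(2 * beta).
print(round(path[-1], 2))
```

In the mixed-effects version, alpha (and possibly beta) would carry tree-specific random effects estimated by maximum likelihood, as the abstract describes.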
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data-analytic results from three regression…
Interpretable inference on the mixed effect model with the Box-Cox transformation.
Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M
2017-07-10
We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of the model misspecifications. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled type I error of the statistical test for the model median difference in almost all the situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
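The Box-Cox step the model builds on is easy to demonstrate. This generic scipy sketch (not the authors' code) estimates the transformation parameter by maximum likelihood on simulated right-skewed data and shows the transformed values are far closer to symmetric.

```python
# Box-Cox transformation: scipy estimates lambda by maximum likelihood;
# for lognormal data the estimated lambda should be near 0 (log transform).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
y = rng.lognormal(mean=1.0, sigma=0.6, size=500)   # right-skewed outcome

y_t, lam = stats.boxcox(y)                         # MLE of lambda
print(round(lam, 2))
print(round(stats.skew(y), 2), round(stats.skew(y_t), 2))
```

In the paper's setting the transformation is embedded in the mixed model and inference is carried back to the median on the original scale, which is what makes the estimates interpretable.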
Estimation and Inference for Very Large Linear Mixed Effects Models
Gao, K.; Owen, A. B.
2016-01-01
Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications N can be quite large. Methods that do not account for the correlation structure can...
Performance of nonlinear mixed effects models in the presence of informative dropout.
Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H
2015-01-01
Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
A brief introduction to mixed effects modelling and multi-model inference in ecology.
Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard
2018-01-01
The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He
2005-01-01
Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...
Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo
2015-10-15
For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.
Rajeswaran, Jeevanantham; Blackstone, Eugene H; Ehrlinger, John; Li, Liang; Ishwaran, Hemant; Parides, Michael K
2018-01-01
Atrial fibrillation is an arrhythmic disorder where the electrical signals of the heart become irregular. The probability of atrial fibrillation (binary response) is often time varying in a structured fashion, as is the influence of associated risk factors. A generalized nonlinear mixed effects model is presented to estimate the time-related probability of atrial fibrillation using a temporal decomposition approach to reveal the pattern of the probability of atrial fibrillation and their determinants. This methodology generalizes to patient-specific analysis of longitudinal binary data with possibly time-varying effects of covariates and with different patient-specific random effects influencing different temporal phases. The motivation and application of this model is illustrated using longitudinally measured atrial fibrillation data obtained through weekly trans-telephonic monitoring from an NIH sponsored clinical trial being conducted by the Cardiothoracic Surgery Clinical Trials Network.
lmerTest Package: Tests in Linear Mixed Effects Models
DEFF Research Database (Denmark)
Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2017-01-01
One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions, providing p values for tests for fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using…
International Nuclear Information System (INIS)
Rupšys, P.
2015-01-01
A system of stochastic differential equations (SDE) with mixed-effects parameters and multivariate normal copula density function were used to develop tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts and slopes across children. Residual autocorrelation was well modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19…).
Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam
2016-01-01
Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
Effect of correlation on covariate selection in linear and nonlinear mixed effect models.
Bonate, Peter L
2017-01-01
The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without covariate. The "best" covariate was identified using either the likelihood ratio test statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as often as 1 in 5 times, when weight was the covariate used in the data generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as being a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why for the same drug different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
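The near-interchangeability of weight and body surface area is easy to verify: BSA computed with the Gehan-George formula is almost a deterministic function of weight. The sketch below simulates demographics (the distributions are my own illustrative choices, not NHANES III) and reproduces a very high correlation.

```python
# Correlation between weight and Gehan-George body surface area on
# simulated adult demographics.
import numpy as np

rng = np.random.default_rng(3)
weight_kg = rng.normal(80, 15, 1000).clip(40, 140)
height_cm = rng.normal(170, 10, 1000).clip(140, 200)

# Gehan & George (1970): BSA (m^2) = 0.0235 * H(cm)^0.42246 * W(kg)^0.51456
bsa = 0.0235 * height_cm ** 0.42246 * weight_kg ** 0.51456

r = np.corrcoef(weight_kg, bsa)[0, 1]
print(round(r, 2))   # very high correlation, echoing the r = 0.98 above
```

With covariates this collinear, the likelihood ratio test statistic differs only slightly between the two candidate models, so sampling noise alone can flip which one "wins", which is the paper's point.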
Examples of mixed-effects modeling with crossed random effects and with binomial data
Quené, H.; van den Bergh, H.
2008-01-01
Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not nested.
Mixed-effects height–diameter models for ten conifers in the inland ...
African Journals Online (AJOL)
To demonstrate the utility of mixed-effects height–diameter models when conducting forest inventories, mixed-effects height–diameter models are presented for several commercially and ecologically important conifers in the inland Northwest of the USA. After obtaining height–diameter measurements from a plot/stand of ...
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
[Application of Mixed-effect Model in PMI Estimation by Vitreous Humor].
Yang, M Z; Li, H J; Zhang, T Y; Ding, Z J; Wu, S F; Qiu, X G; Liu, Q
2018-02-01
To test the changes of the potassium (K⁺) and magnesium (Mg²⁺) concentrations in vitreous humor of rabbits along with postmortem interval (PMI) under different temperatures, and to explore the feasibility of PMI estimation using a mixed-effect model. After sacrifice, rabbit carcasses were preserved at 5 ℃, 15 ℃, 25 ℃ and 35 ℃, and 80-100 μL of vitreous humor was collected by the double-eye alternating micro-sampling method every 12 h. The concentrations of K⁺ and Mg²⁺ in vitreous humor were measured by a biochemical-immune analyser. The mixed-effect model was used to perform analysis and fitting, and to establish the equations for PMI estimation. The data detected from samples stored at 10 ℃, 20 ℃ and 30 ℃ for 20, 40 and 65 h were used to validate the equations for PMI estimation. The concentrations of K⁺ and Mg²⁺ [f(x, y)] in vitreous humor of rabbits under different temperatures increased along with PMI (x). The fitted equation relating K⁺ concentration to PMI and temperature (y) under 5 ℃-35 ℃ was f_K⁺(x, y) = 3.4130 + 0.3092x + 0.3376y + 0.01083xy − 0.00247x². The deviation of PMI estimation by K⁺ and Mg²⁺ was within 10 h when PMI was between 0 and 40 h, and within 21 h when PMI was between 40 and 65 h. Within the ambient temperature range of 5 ℃-35 ℃, the mixed-effect model based on temperature and vitreous humor substance concentrations can provide a new method for the practical application of vitreous humor chemicals for PMI estimation. Copyright© by the Editorial Department of Journal of Forensic Medicine.
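The fitted K⁺ surface reported in the abstract (coefficients 3.4130, 0.3092, 0.3376, 0.01083, −0.00247) can be written as a function of PMI in hours (x) and ambient temperature in degrees Celsius (y). The function name is mine; the Mg²⁺ equation is not reproduced here because it is not recoverable from the abstract.

```python
# Predicted vitreous-humor K+ concentration as a function of PMI (x, h)
# and ambient temperature (y, deg C), per the abstract's fitted model.
def k_concentration(x: float, y: float) -> float:
    return 3.4130 + 0.3092 * x + 0.3376 * y + 0.01083 * x * y - 0.00247 * x ** 2

# e.g. predicted K+ at PMI = 40 h and 25 deg C:
print(round(k_concentration(40, 25), 3))   # 3.413 + 12.368 + 8.44 + 10.83 - 3.952 = 31.099
```

In practice the equation would be inverted (numerically, given a measured concentration and known temperature) to estimate PMI.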
Magezi, David A
2015-01-01
Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
DEFF Research Database (Denmark)
Thorsted, A; Thygesen, P; Agersø, H
2016-01-01
BACKGROUND AND PURPOSE: We aimed to develop a mechanistic mixed-effects pharmacokinetic (PK)-pharmacodynamic (PD) (PKPD) model for recombinant human growth hormone (rhGH) in hypophysectomized rats and to predict the human PKPD relationship. EXPERIMENTAL APPROACH: A non-linear mixed-effects model was developed from experimental PKPD studies of rhGH and effects of long-term treatment as measured by insulin-like growth factor 1 (IGF-1) and bodyweight gain in rats. Modelled parameter values were scaled to human values using the allometric approach with fixed exponents for PKs and unscaled for PDs… s.c. administration was over-predicted. After correction of the human s.c. absorption model, the induction model for IGF-1 described the human PKPD data well. CONCLUSIONS: A translational mechanistic PKPD model for rhGH was successfully developed from experimental rat data. The model links…
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Konár, Ondřej; Pelikán, Emil; Malý, Marek
2008-01-01
Vol. 24, No. 4 (2008), pp. 659-678. ISSN 0169-2070. R&D Projects: GA AV ČR 1ET400300513. Institutional research plan: CEZ:AV0Z10300504. Keywords: individual gas consumption * nonlinear mixed effects model * ARIMAX * ARX * generalized linear mixed model * conditional modeling. Subject RIV: JE - Non-nuclear Energetics, Energy Consumption; Use. Impact factor: 1.685, year: 2008
International Nuclear Information System (INIS)
Zhenping Li; Close, F.E.
1990-03-01
The photo- and electroproduction of baryon resonances has been calculated using the Constituent Quark Model with chromodynamics consistent with O(v²/c²) for the quarks. We find that the successes of the nonrelativistic quark model are preserved, some problems are removed, and that QCD mixing effects may become important with increasing q² in electroproduction. For the first time both spectroscopy and transitions receive a unified treatment with a single set of parameters. (author)
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
Linear mixed-effects models for central statistical monitoring of multicenter clinical trials
Desmet, L.; Venet, D.; Doffagne, E.; Timmermans, C.; BURZYKOWSKI, Tomasz; LEGRAND, Catherine; BUYSE, Marc
2014-01-01
Multicenter studies are widely used to meet accrual targets in clinical trials. Clinical data monitoring is required to ensure the quality and validity of the data gathered across centers. One approach to this end is central statistical monitoring, which aims at detecting atypical patterns in the data by means of statistical methods. In this context, we consider the simple case of a continuous variable, and we propose a detection procedure based on a linear mixed-effects model to detect locat...
Species Distribution Modeling: Comparison of Fixed and Mixed Effects Models Using INLA
Directory of Open Access Journals (Sweden)
Lara Dutra Silva
2017-12-01
Full Text Available Invasive alien species are among the most important, least controlled, and least reversible of human impacts on the world’s ecosystems, with negative consequences affecting biodiversity and socioeconomic systems. Species distribution models have become a fundamental tool in assessing the potential spread of invasive species relative to their native counterparts. In this study we compared two different modeling techniques: (i) fixed effects models accounting for the effect of ecogeographical variables (EGVs); and (ii) mixed effects models also including a Gaussian random field (GRF) to model spatial correlation (Matérn covariance function). To estimate the potential distribution of Pittosporum undulatum and Morella faya (invasive and native trees, respectively), we used geo-referenced data of their distribution in Pico and São Miguel islands (Azores) and topographic, climatic and land use EGVs. Fixed effects models run with maximum likelihood or the INLA (Integrated Nested Laplace Approximation) approach provided very similar results, even when reducing the size of the presence data set. The addition of the GRF increased model adjustment (lower Deviance Information Criterion), particularly for the less abundant tree, M. faya. However, the random field parameters were clearly affected by sample size and species distribution pattern. A high degree of spatial autocorrelation was found and should be taken into account when modeling species distribution.
A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation
Rajeswaran, Jeevanantham; Blackstone, Eugene H.
2014-01-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects models is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation, with readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients. PMID:24919830
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with asymmetric distributions for the model errors. To deal with missingness, we employ an informative missing data model. The joint models couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
A multilevel nonlinear mixed-effects approach to model growth in pigs
DEFF Research Database (Denmark)
Strathe, Anders Bjerring; Danfær, Allan Christian; Sørensen, H.
2010-01-01
Growth functions have been used to predict market weight of pigs and maximize return over feed costs. This study was undertaken to compare 4 growth functions and methods of analyzing data, particularly one that considers nonlinear repeated measures. Data were collected from an experiment with 40...... pigs maintained from birth to maturity and their BW measured weekly or every 2 wk up to 1,007 d. Gompertz, logistic, Bridges, and Lopez functions were fitted to the data and compared using information criteria. For each function, a multilevel nonlinear mixed effects model was employed because....... Furthermore, studies should consider adding continuous autoregressive process when analyzing nonlinear mixed models with repeated measures....
Directory of Open Access Journals (Sweden)
Hae Kyung Im
2012-02-01
Full Text Available The International HapMap project has made publicly available extensive genotypic data on a number of lymphoblastoid cell lines (LCLs). Building on this resource, many research groups have generated a large amount of phenotypic data on these cell lines to facilitate genetic studies of disease risk or drug response. However, one problem that may reduce the usefulness of these resources is the biological noise inherent to cellular phenotypes. We developed a novel method, termed Mixed Effects Model Averaging (MEM), which pools data from multiple sources and generates an intrinsic cellular growth rate phenotype. This intrinsic growth rate was estimated for each of over 500 HapMap cell lines. We then examined the association of this intrinsic growth rate with gene expression levels and found that almost 30% (2,967 out of 10,748) of the genes tested were significant with FDR less than 10%. We probed further to demonstrate evidence of a genetic effect on intrinsic growth rate by determining a significant enrichment in growth-associated genes among genes targeted by top growth-associated SNPs (as eQTLs). The estimated intrinsic growth rate, as well as the strength of the association with genetic variants and gene expression traits, are made publicly available through a cell-based pharmacogenomics database, PACdb. This resource should enable researchers to explore the mediating effects of proliferation rate on other phenotypes.
Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.
2015-01-01
In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subject to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and thick tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871
Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models
Directory of Open Access Journals (Sweden)
A. Cabezas
2010-08-01
Full Text Available Sediment, Total Organic Carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15 y return interval) were examined at one reach of the Middle Ebro River (NE Spain) for elucidating spatial patterns. To achieve this goal, four areas with different geomorphological features and located within the study reach were examined by using artificial grass mats. Within each area, 1 m² study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. TOC, TN and particle-size composition of deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, site-specific effects and spatial autocorrelation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although other factors such as vegetation pattern should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework for dealing with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.
Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie
2017-08-01
Semicontinuous data featuring an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
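The two parts described in (i) and (ii) can be sketched as one observation's log-likelihood contribution. This is a minimal illustration only: a lognormal stands in for the skew distributions of Part II, and all parameter values are hypothetical.

```python
import math

# Two-part model sketch for a semicontinuous outcome y >= 0:
# Part I: P(y > 0) = p_pos (logistic part, here passed in directly);
# Part II: density of positive values, modeled on the log scale
# (lognormal here; the paper uses skew-t / skew-normal instead).
def two_part_loglik(y: float, p_pos: float, mu: float, sigma: float) -> float:
    if y == 0:
        return math.log(1.0 - p_pos)          # zero part
    z = (math.log(y) - mu) / sigma            # positive part, log scale
    return (math.log(p_pos)
            - math.log(y * sigma * math.sqrt(2.0 * math.pi))
            - 0.5 * z * z)

print(round(two_part_loglik(0, 0.3, 0.0, 1.0), 4))  # log(1 - 0.3)
```

In a full mixed-effects version, `p_pos` and `mu` would each carry subject-level random effects, linked through their correlation.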
Reduced Rank Mixed Effects Models for Spatially Correlated Hierarchical Functional Data
Zhou, Lan
2010-03-01
Hierarchical functional data are widely seen in complex studies where sub-units are nested within units, which in turn are nested within treatment groups. We propose a general framework of functional mixed effects model for such data: within unit and within sub-unit variations are modeled through two separate sets of principal components; the sub-unit level functions are allowed to be correlated. Penalized splines are used to model both the mean functions and the principal components functions, where roughness penalties are used to regularize the spline fit. An EM algorithm is developed to fit the model, while the specific covariance structure of the model is utilized for computational efficiency to avoid storage and inversion of large matrices. Our dimension reduction with principal components provides an effective solution to the difficult tasks of modeling the covariance kernel of a random function and modeling the correlation between functions. The proposed methodology is illustrated using simulations and an empirical data set from a colon carcinogenesis study. Supplemental materials are available online.
Vučićević, Katarina; Jovanović, Marija; Golubović, Bojana; Kovačević, Sandra Vezmar; Miljković, Branislava; Martinović, Žarko; Prostran, Milica
2015-02-01
The present study aimed to establish a population pharmacokinetic model for phenobarbital (PB), to examine and quantify the magnitude of PB interactions with other antiepileptic drugs used concomitantly, and to demonstrate its use for individualization of the PB dosing regimen in adult epileptic patients. In total, 205 PB concentrations were obtained during routine clinical monitoring of 136 adult epilepsy patients. PB steady-state concentrations were measured by homogeneous enzyme immunoassay. Nonlinear mixed effects modelling (NONMEM) was applied for data analyses and evaluation of the final model. According to the final population model, a significant determinant of apparent PB clearance (CL/F) was the daily dose of concomitantly given valproic acid (VPA). The typical value of PB CL/F for the final model was estimated at 0.314 l/h. Based on the final model, co-therapy with a usual VPA dose of 1000 mg/day resulted in an average PB CL/F decrease of about 25%, while 2000 mg/day led to an average 50% decrease in PB CL/F. The developed population PB model may be used to estimate individual CL/F for adult epileptic patients and could be applied to individualize the dosing regimen, taking into account the dose-dependent effect of concomitantly given VPA.
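The dose-dependent VPA effect can be sketched as a simple proportional-reduction function. This is an illustrative reading of the numbers in the abstract (typical CL/F 0.314 l/h, ~25% decrease at 1000 mg/day, ~50% at 2000 mg/day), not the published NONMEM model; the linear extrapolation and the zero floor are assumptions.

```python
TYPICAL_CL_F = 0.314  # l/h, typical PB CL/F from the final model

def pb_clearance(vpa_dose_mg_per_day: float) -> float:
    """Apparent PB clearance (l/h) under concomitant VPA.

    Assumes the reported pattern (~25% decrease at 1000 mg/day,
    ~50% at 2000 mg/day) extends linearly, floored at zero.
    """
    fraction = max(0.0, 1.0 - 0.00025 * vpa_dose_mg_per_day)
    return TYPICAL_CL_F * fraction

print(round(pb_clearance(0), 3))     # 0.314 (no VPA co-therapy)
print(round(pb_clearance(1000), 4))  # about 25% lower
```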
Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M
2014-05-01
Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated that the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases. This is often not achieved for clinical wounds. Our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: the r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of its initial size; t(δ) is defined as the time when the rate of wound size change reduces to a predetermined threshold δ. The current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
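The t(r-fold) criterion has a closed form under a plain exponential decay w(t) = w0·exp(−kt), a simplification of the personalized model without its random effects; the healing rate k below is hypothetical.

```python
import math

# r-fold reduction time for an exponential wound-size model
# w(t) = w0 * exp(-k * t): solve w(t) = w0 / r for t.
def t_r_fold(r: float, k: float) -> float:
    """Time at which wound size reaches 1/r of its initial size."""
    return math.log(r) / k

# e.g. the halving time (r = 2) for a hypothetical rate k = 0.1 per day:
print(round(t_r_fold(2.0, 0.1), 2))  # ≈ 6.93 days
```

In the pMEE setting, k would differ per person via the random effects, so t(r-fold) becomes a personalized prediction.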
A Bayesian Approach to Functional Mixed Effect Modeling for Longitudinal Data with Binomial Outcomes
Kliethermes, Stephanie; Oleson, Jacob
2014-01-01
Longitudinal growth patterns are routinely seen in medical studies where individual and population growth is followed over a period of time. Many current methods for modeling growth presuppose a parametric relationship between the outcome and time (e.g., linear, quadratic); however, these relationships may not accurately capture growth over time. Functional mixed effects (FME) models provide flexibility in handling longitudinal data with nonparametric temporal trends. Although FME methods are well-developed for continuous, normally distributed outcome measures, nonparametric methods for handling categorical outcomes are limited. We consider the situation with binomially distributed longitudinal outcomes. Although percent correct data can be modeled assuming normality, estimates outside the parameter space are possible and thus estimated curves can be unrealistic. We propose a binomial FME model using Bayesian methodology to account for growth curves with binomial (percentage) outcomes. The usefulness of our methods is demonstrated using a longitudinal study of speech perception outcomes from cochlear implant users where we successfully model both the population and individual growth trajectories. Simulation studies also advocate the usefulness of the binomial model particularly when outcomes occur near the boundary of the probability parameter space and in situations with a small number of trials. PMID:24723495
Kliethermes, Stephanie; Oleson, Jacob
2014-08-15
Longitudinal growth patterns are routinely seen in medical studies where individual growth and population growth are followed up over a period of time. Many current methods for modeling growth presuppose a parametric relationship between the outcome and time (e.g., linear and quadratic); however, these relationships may not accurately capture growth over time. Functional mixed-effects (FME) models provide flexibility in handling longitudinal data with nonparametric temporal trends. Although FME methods are well developed for continuous, normally distributed outcome measures, nonparametric methods for handling categorical outcomes are limited. We consider the situation with binomially distributed longitudinal outcomes. Although percent correct data can be modeled assuming normality, estimates outside the parameter space are possible, and thus, estimated curves can be unrealistic. We propose a binomial FME model using Bayesian methodology to account for growth curves with binomial (percentage) outcomes. The usefulness of our methods is demonstrated using a longitudinal study of speech perception outcomes from cochlear implant users where we successfully model both the population and individual growth trajectories. Simulation studies also advocate the usefulness of the binomial model particularly when outcomes occur near the boundary of the probability parameter space and in situations with a small number of trials. Copyright © 2014 John Wiley & Sons, Ltd.
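The boundary problem motivating the binomial FME model can be seen in one line: modeling percent-correct on the logit scale guarantees estimates stay inside (0, 1) whatever the linear predictor, while a Gaussian model of the raw percentages can stray outside the parameter space. A minimal sketch with hypothetical predictor values:

```python
import math

def inv_logit(eta: float) -> float:
    """Map any real-valued linear predictor to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-eta))

# A Gaussian model of percent-correct could produce fits like 1.07 or -0.03;
# the inverse-logit of any spline or linear predictor cannot.
for eta in (-10.0, 0.0, 10.0):  # hypothetical linear-predictor values
    p = inv_logit(eta)
    assert 0.0 < p < 1.0
print(inv_logit(0.0))  # 0.5
```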
Assessing robustness of designs for random effects parameters for nonlinear mixed-effects models.
Duffull, Stephen B; Hooker, Andrew C
2017-12-01
Optimal designs for nonlinear models are dependent on the choice of parameter values. Various methods have been proposed to provide designs that are robust to uncertainty in the prior choice of parameter values. These methods are generally based on estimating the expectation of the determinant (or a transformation of the determinant) of the information matrix over the prior distribution of the parameter values. For high dimensional models this can be computationally challenging. For nonlinear mixed-effects models the question arises as to the importance of accounting for uncertainty in the prior value of the variances of the random effects parameters. In this work we explore the influence of the variance of the random effects parameters on the optimal design. We find that the method for approximating the expectation and variance of the likelihood is of potential importance for considering the influence of random effects. The most common approximation to the likelihood, based on a first-order Taylor series approximation, yields designs that are relatively insensitive to the prior value of the variance of the random effects parameters and under these conditions it appears to be sufficient to consider uncertainty on the fixed-effects parameters only.
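The determinant-of-the-information-matrix (D-optimality) criterion the abstract builds on can be illustrated with a toy linear model y = b0 + b1·t, where the information matrix is X'X; the candidate sampling times are made up.

```python
# Toy D-optimality: for y = b0 + b1*t with unit error variance, the
# information matrix is X'X = [[n, sum t], [sum t, sum t^2]]; a D-optimal
# design maximizes its determinant over the candidate sampling times.
def det_info(times):
    n = len(times)
    s = sum(times)
    ss = sum(t * t for t in times)
    return n * ss - s * s  # determinant of the 2x2 information matrix

spread = det_info([0.0, 4.0])   # samples at the endpoints
middle = det_info([2.0, 2.0])   # replicated midpoint
assert spread > middle          # spreading the points is more informative
print(spread, middle)  # 16.0 0.0
```

Robust-design methods replace this fixed-parameter determinant with its expectation over a prior on the parameters, which is where the computational burden discussed above arises.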
Diaz, Francisco J
2016-10-15
We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Liu, Kai; Wang, Jiangbo; Yamamoto, Toshiyuki; Morikawa, Takayuki
2016-01-01
Highlights: • The impacts of driving heterogeneity on EVs’ energy efficiency are examined. • Several multilevel mixed-effects regression models are proposed and compared. • The most reasonable nested structure is extracted from the long-term GPS data. • The proposed model improves the energy estimation accuracy by 7.5%. - Abstract: To improve the accuracy of estimation of the energy consumption of electric vehicles (EVs) and to enable the alleviation of range anxiety through the introduction of EV charging stations at suitable locations in the near future, multilevel mixed-effects linear regression models were used in this study to estimate the actual energy efficiency of EVs. The impacts of the heterogeneity in driving behaviour among various road environments and traffic conditions on EV energy efficiency were extracted from long-term daily trip-based energy consumption data, which were collected over 12 months from 68 in-use EVs in Aichi Prefecture, Japan. Considering the variations in energy efficiency associated with different types of EV ownership, different external environments, and different driving habits, a two-level random intercept model, three two-level mixed-effects models, and two three-level mixed-effects models were developed and compared. The most reasonable nesting structure was determined by comparing the models, which were designed with different nesting structures and different random variance component specifications, thereby revealing the potential correlations and non-constant variability of the energy consumption per kilometre (ECPK) and improving the estimation accuracy by 7.5%.
Chrcanovic, B R; Kisch, J; Albrektsson, T; Wennerberg, A
2016-11-01
Recent studies have suggested that the insertion of dental implants in patients diagnosed with bruxism negatively affected the implant failure rates. The aim of the present study was to investigate the association between bruxism and the risk of dental implant failure. This retrospective study is based on 2670 patients who received 10 096 implants at one specialist clinic. Implant- and patient-related data were collected. Descriptive statistics were used to describe the patients and implants. Multilevel mixed effects parametric survival analysis was used to test the association between bruxism and risk of implant failure, adjusting for several potential confounders. Criteria from a recent international consensus (Lobbezoo et al., J Oral Rehabil, 40, 2013, 2) and from the International Classification of Sleep Disorders (International classification of sleep disorders, revised: diagnostic and coding manual, American Academy of Sleep Medicine, Chicago, 2014) were used to define and diagnose the condition. The number of implants with information available for all variables totalled 3549, placed in 994 patients, with 179 implants reported as failures. The implant failure rates were 13·0% (24/185) for bruxers and 4·6% (155/3364) for non-bruxers. Bruxism was a statistically significant risk factor for implant failure (HR 3·396; 95% CI 1·314, 8·777; P = 0·012), as were implant length, implant diameter, implant surface, bone quantity D in relation to quantity A, bone quality 4 in relation to quality 1 (Lekholm and Zarb classification), smoking and the intake of proton pump inhibitors. It is suggested that bruxism may be associated with an increased risk of dental implant failure. © 2016 John Wiley & Sons Ltd.
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
Energy Technology Data Exchange (ETDEWEB)
Yang, B.W.; Zhang, H.; Han, B.; Zha, Y.D.; Shan, J.Q. [Xi'an Jiaotong Univ. (China). School of Nuclear Science and Technology
2016-07-15
The thermal hydraulic characteristics of a mixing vane grid are largely dependent on the structure of key components, such as the strip, spring, dimple, and weld nugget, as well as the mixing vane configuration. In this paper, several types of spacer grids with different dimple shapes are modeled under subcooled boiling conditions. Prior to the application of CFD to the dimple shape analysis, the mixing effects of spacer grids were studied. After the dimple shape analysis, the side channel effect is discussed by comparing the simulation results of a 3 x 3 and a 5 x 5 spacer grid. The two-phase flow CFD models in this study are validated on a simple geometry, showing that the calculated void fraction is in good agreement with the experimental data. The dimple comparison result shows that varying dimple structures can result in different temperatures, lateral velocities and void fraction distributions downstream of the spacer grids. Comparison of the two sizes of spacer grids demonstrates that the side channel generates different flow distribution patterns in the center channel.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
Using Poisson mixed-effects model to quantify transcript-level gene expression in RNA-Seq.
Hu, Ming; Zhu, Yu; Taylor, Jeremy M G; Liu, Jun S; Qin, Zhaohui S
2012-01-01
RNA sequencing (RNA-Seq) is a powerful new technology for mapping and quantifying transcriptomes using ultra high-throughput next-generation sequencing technologies. Using deep sequencing, gene expression levels of all transcripts, including novel ones, can be quantified digitally. Although the technology is extremely promising, the massive amounts of data generated by RNA-Seq, substantial biases, and uncertainty in short-read alignment pose challenges for data analysis. In particular, large base-specific variation and between-base dependence make simple approaches, such as those that use averaging to normalize RNA-Seq data and quantify gene expression, ineffective. In this study, we propose a Poisson mixed-effects (POME) model to characterize base-level read coverage within each transcript. The underlying expression level is included as a key parameter in this model. Since the proposed model is capable of incorporating base-specific variation as well as between-base dependence that affect the read coverage profile throughout the transcript, it can lead to improved quantification of the true underlying expression level. POME can be freely downloaded at http://www.stat.purdue.edu/~yuzhu/pome.html. yuzhu@purdue.edu; zhaohui.qin@emory.edu Supplementary data are available at Bioinformatics online.
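A stripped-down version of the idea behind POME, with made-up per-base counts and none of the base-specific effects or between-base dependence of the real model: base-level read counts within a transcript are treated as Poisson draws around a shared expression level.

```python
import math

counts = [3, 5, 4, 6, 2, 4]  # hypothetical per-base read counts

def poisson_loglik(lam: float, data: list) -> float:
    """Log-likelihood of iid Poisson(lam) counts."""
    return sum(x * math.log(lam) - lam - math.lgamma(x + 1) for x in data)

# For iid Poisson counts the MLE of the rate (the "expression level"
# in this toy version) is simply the sample mean:
lam_hat = sum(counts) / len(counts)
assert all(poisson_loglik(lam_hat, counts) >= poisson_loglik(l, counts)
           for l in (lam_hat * 0.5, lam_hat * 1.5))
print(lam_hat)  # 4.0
```

POME replaces this iid assumption with per-base random effects and a dependence structure along the transcript, which is what makes the averaging-based estimate above inadequate in practice.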
J. Breidenbach; E. Kublin; R. McGaughey; H.-E. Andersen; S. Reutebuch
2008-01-01
For this study, hierarchical data sets--in that several sample plots are located within a stand--were analyzed for study sites in the USA and Germany. The German data had an additional hierarchy as the stands are located within four distinct public forests. Fixed-effects models and mixed-effects models with a random intercept on the stand level were fit to each data...
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling, although they violate the hypotheses of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
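The Kalman one-step prediction mentioned above reduces, for a scalar linear Gaussian state-space model, to a few lines; the numbers are illustrative only, and the package's actual models are nonlinear mixed-effects SDEs handled by the Extended Kalman Filter.

```python
# One predict-update step for the scalar linear Gaussian state-space model
#   x_{t+1} = a * x_t + w,  w ~ N(0, q)   (system/Wiener noise)
#   y_t     = c * x_t + v,  v ~ N(0, r)   (measurement noise)
def kf_one_step(x, P, a, q, c, r, y):
    """Return the updated state mean and variance after observing y."""
    # one-step prediction
    x_pred = a * x
    P_pred = a * P * a + q          # q: system noise variance
    # innovation and measurement update
    S = c * P_pred * c + r          # r: measurement noise variance
    K = P_pred * c / S              # Kalman gain
    x_new = x_pred + K * (y - c * x_pred)
    P_new = (1 - K * c) * P_pred
    return x_new, P_new

x, P = kf_one_step(1.0, 1.0, a=0.9, q=0.1, c=1.0, r=0.5, y=1.2)
print(round(x, 3), round(P, 3))  # 1.094 0.323
```

The one-step prediction errors (innovations) and their variances S are exactly what feed the individual likelihoods in the FOCE approximation.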
The transition model test for serial dependence in mixed-effects models for binary data
DEFF Research Database (Denmark)
Breinegaard, Nina; Rabe-Hesketh, Sophia; Skrondal, Anders
2017-01-01
Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model...
Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei
2017-09-25
It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline and at 18 and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using the samples from the posterior distributions of the parameter estimates and random effects, and the predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB), and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for the non-motor score (RMSE and AB: 2.89 and 2.20) compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized linear mixed models hold promise for predicting clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available ( https://pdbp.ninds.nih.gov/data-management ).
Guha, Daipayan; Ibrahim, George M; Kertzer, Joshua D; Macdonald, R Loch
2014-11-01
Although heterogeneity exists in patient outcomes following subarachnoid hemorrhage (SAH) across different centers and countries, it is unclear which factors contribute to such disparities. In this study, the authors performed a post hoc analysis of a large international database to evaluate the association between a country's socioeconomic indicators and patient outcome following aneurysmal SAH. An analysis was performed on a database of 3552 patients enrolled in studies of tirilazad mesylate for aneurysmal SAH from 1991 to 1997, which included 162 neurosurgical centers in North and Central America, Australia, Europe, and Africa. Two primary outcomes were assessed at 3 months after SAH: mortality and Glasgow Outcome Scale (GOS) score. The association between these outcomes, nation-level socioeconomic indicators (per-capita gross domestic product [GDP], population-to-neurosurgeon ratio, and health care funding model), and patient-level covariates was assessed using a hierarchical mixed-effects logistic regression analysis. Multiple previously identified patient-level covariates were significantly associated with increased mortality and worse neurological outcome, including age, intraventricular hemorrhage, and initial neurological grade. Among national-level covariates, higher per-capita GDP and higher population-to-neurosurgeon ratio were associated with improved outcomes, whereas the health care funding model was not a significant predictor of either primary outcome. Higher per-capita GDP and population-to-neurosurgeon ratio were associated with improved outcome after aneurysmal SAH. The former result may speak to the availability of resources, while the latter may be a reflection of better outcomes with centralized care. Although patient clinical and radiographic phenotypes remain the primary predictors of outcome, this study shows that national socioeconomic disparities also explain heterogeneity in outcomes following SAH.
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation w.r.t. the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in R-package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
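The Monte Carlo half of the scheme above, taking the expectation of the information over simulated observations, can be illustrated on a deliberately simple model. This is not the MIXFIM implementation; the Poisson example is an assumption chosen because its expected FIM is known in closed form, so the MC estimate can be checked.

```python
import numpy as np

# Monte Carlo evaluation of an expected Fisher information over observations.
# For y ~ Poisson(exp(theta)), the score is y - exp(theta) and the expected
# information is exp(theta), giving an analytic target for the MC estimate.
rng = np.random.default_rng(1)
theta = 1.2
lam = np.exp(theta)

n_mc = 200_000
y = rng.poisson(lam, size=n_mc)    # MC samples of the observation
score = y - lam                    # d/dtheta of log p(y | theta)
fim_mc = np.mean(score**2)         # expected FIM estimated as E[score^2]
fim_exact = lam
```

In the NLMEM setting the score itself is not available in closed form, which is where the MCMC integration over the random effects comes in.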
Directory of Open Access Journals (Sweden)
Yu-Pin Liao
2017-11-01
In the past few decades, demand forecasting has become relatively difficult due to rapid changes in the global environment. This research illustrates the use of the make-to-stock (MTS) production strategy in order to explain how forecasting plays an essential role in business management. The linear mixed-effect (LME) model has been extensively developed and is widely applied in various fields. However, no study has used the LME model for business forecasting. We suggest that the LME model be used as a tool for prediction and to overcome environment complexity. The data analysis is based on real data in an international display company, where the company needs accurate demand forecasting before adopting a MTS strategy. The forecasting result from the LME model is compared to the commonly used approaches, including the regression model, autoregressive model, time series model, and exponential smoothing model, with the results revealing that prediction performance provided by the LME model is more stable than using the other methods. Furthermore, product types in the data are regarded as a random effect in the LME model, hence demands of all types can be predicted simultaneously using a single LME model. However, some approaches require splitting the data into different type categories, and then predicting the type demand by establishing a model for each type. This feature also demonstrates the practicability of the LME model in real business operations.
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for assessing the strength of association between the behavioral speech-in-noise recognition responses and the multiple neurophysiological measures, because the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, and the necessity of applying, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
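The pooled-correlation pitfall described above can be demonstrated in a few lines. The simulation is an illustrative assumption (not the studies' data): repeated measures share a subject-level baseline, so a pooled Pearson correlation is driven almost entirely by between-subject differences, while subject-mean centering, which is roughly what a random-intercept LME adjusts for, removes that component.

```python
import numpy as np

# Two repeated measures x and y share a subject baseline but have no true
# within-subject relationship. Pooling all observations inflates r; centering
# within subjects (the random-intercept adjustment) reveals the null relation.
rng = np.random.default_rng(2)
n_subj, n_cond = 60, 4
baseline = rng.normal(0, 2.0, size=(n_subj, 1))            # subject intercepts
x = baseline + rng.normal(0, 1.0, size=(n_subj, n_cond))
y = baseline + rng.normal(0, 1.0, size=(n_subj, n_cond))   # no within-subject link

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

r_pooled = corr(x, y)                                # inflated by baselines
xc = x - x.mean(axis=1, keepdims=True)               # within-subject centering
yc = y - y.mean(axis=1, keepdims=True)
r_within = corr(xc, yc)                              # near zero
```

A full LME fit additionally models the condition factor and unbalanced designs, but the centering contrast captures why the naive Pearson test misleads here.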
Jolling, Koen; Perez Ruixo, Juan Jose; Hemeryck, Alex; Vermeulen, An; Greway, Tony
2005-04-01
The aim of this study was to develop a population pharmacokinetic model for interspecies allometric scaling of pegylated r-HuEPO (PEG-EPO) pharmacokinetics to man. A total of 927 serum concentrations from 193 rats, 6 rabbits, 34 monkeys, and 9 dogs obtained after a single dose of PEG-EPO, administered by the i.v. (dose range: 12.5-550 microg/kg) and s.c. (dose range: 12.5-500 microg/kg) routes, were pooled in this analysis. An open two-compartment model with first-order absorption and lag time (Tlag) and linear elimination from the central compartment was fitted to the data using the NONMEM V software. Body weight (WT) was used as a scaling factor and the effect of brain weight (BW), sex, and pregnancy status on the pharmacokinetic parameters was investigated. The final model was evaluated by means of a non-parametric bootstrap analysis and used to predict the PEG-EPO pharmacokinetic parameters in healthy male subjects. The systemic clearance (CL) in males was estimated to be 4.08 × WT^1.030 × BW^-0.345 ml/h. In females, the CL was 90.7% of the CL in males. The volumes of the central (Vc) and the peripheral (Vp) compartment were characterized as 57.8 × WT^0.959 ml and 48.1 × WT^1.150 ml, respectively. Intercompartmental flow was estimated at 2.32 × WT^0.930 ml/h. The absorption rate constant (Ka) was estimated at 0.0538 × WT^-0.149. The absolute s.c. bioavailability F was calculated at 52.5, 80.2, and 49.4% in rat, monkey, and dog, respectively. The interindividual variability in the population pharmacokinetic parameters was fairly low. The non-parametric bootstrap confirmed the accuracy of the NONMEM estimates. The mean model-predicted pharmacokinetic parameters in healthy male subjects of 70 kg were estimated at: CL: 26.2 ml/h; Vc: 3.6 l; Q: 286 l/h; Vp: 6.9 l; and Ka: 0.031 h-1. The population pharmacokinetic model developed was appropriate to describe the time course of PEG-EPO serum concentrations and their variability in different species. The model predicted pharmacokinetics of PEG-EPO in
Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain
Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises
2015-01-01
Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156
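The calibration step described above, using one additional diameter measurement to localize a tree's random effect, comes down to a best linear unbiased prediction (BLUP). The sketch below is an illustrative assumption, not the fitted maritime pine taper model: all variance components and measurement values are made up, and the random-effect design value is a scalar for simplicity.

```python
import numpy as np

# BLUP of a tree-level random effect from a single calibration observation.
# y = mu_fixed + z*u + e, u ~ N(0, sigma_u2), e ~ N(0, sigma_e2); values are
# hypothetical, chosen only to show the shrinkage mechanics.
sigma_u2 = 0.4      # variance of the tree-level random effect (assumed)
sigma_e2 = 0.1      # residual variance (assumed)
z = 1.0             # random-effect design value at the measurement point
mu_fixed = 12.5     # fixed-effects (population) prediction, in cm (assumed)
y_new = 13.4        # the additional diameter actually measured (assumed)

# u_hat = sigma_u2 * z / (z^2 * sigma_u2 + sigma_e2) * (y_new - mu_fixed)
u_hat = sigma_u2 * z / (z**2 * sigma_u2 + sigma_e2) * (y_new - mu_fixed)

# Calibrated prediction: shrunk toward the population mean rather than
# interpolating the new observation exactly.
y_calibrated = mu_fixed + z * u_hat
```

With a vector of calibration observations the scalar ratio becomes the usual matrix form D·Z'·(Z·D·Z' + R)^-1·(y - Xβ), which is what fitting software applies when several heights are measured.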
Zuberer, Agnieszka; Minder, Franziska; Brandeis, Daniel; Drechsler, Renate
2018-01-01
Neurofeedback (NF) has gained increasing popularity as a training method for children and adults with attention deficit hyperactivity disorder (ADHD). However, it is unclear to what extent children learn to regulate their brain activity and in what way NF learning may be affected by subject- and treatment-related factors. In total, 48 subjects with ADHD (age 8.5-16.5 years; 16 subjects on methylphenidate (MPH)) underwent 15 double training sessions of NF in either a clinical or a school setting. Four mixed-effects models were employed to analyze learning: training within-sessions, across-sessions, with continuous feedback, and with transfer in which performance feedback is delayed. Age and MPH affected the NF performance in all models. Cross-session learning in the feedback condition was mainly moderated by age and MPH, whereas NF learning in the transfer condition was mainly boosted by MPH. Apart from IQ and task types, other subject-related or treatment-related effects were unrelated to NF learning. This first study analyzing moderators of NF learning in ADHD with a mixed-effects modeling approach shows that NF performance is moderated differentially by effects of age and MPH depending on the training task and time window. Future studies may benefit from using this approach to analyze NF learning and NF specificity. The trial name Neurofeedback and Computerized Cognitive Training in Different Settings for Children and Adolescents With ADHD is registered with NCT02358941.
Nonlinear mixed effects modeling of gametocyte carriage in patients with uncomplicated malaria
Directory of Open Access Journals (Sweden)
Little Francesca
2010-02-01
Background: Gametocytes are the sexual form of the malaria parasite and the main agents of transmission. While there are several factors that influence host infectivity, the density of gametocytes appears to be the best single measure that is related to the human host's infectivity to mosquitoes. Despite the obviously important role that gametocytes play in the transmission of malaria and the spread of anti-malarial resistance, it is common to estimate gametocyte carriage indirectly based on asexual parasite measurements. The objective of this research was to directly model observed gametocyte densities over time, during the primary infection. Methods: Of 447 patients enrolled in sulphadoxine-pyrimethamine therapeutic efficacy studies in South Africa and Mozambique, a subset of 103 patients who had no gametocytes pre-treatment and who had at least three non-zero gametocyte densities over the 42-day follow-up period were included in this analysis. Results: A variety of different functions were examined. A modified version of the critical exponential function was selected for the final model given its robustness across different datasets and its flexibility in assuming a variety of different shapes. Age, site, initial asexual parasite density (logged to the base 10), and an empirical patient category were the covariates that were found to improve the model. Conclusions: A population nonlinear modeling approach seems promising and produced a flexible function whose estimates were stable across various different datasets. Surprisingly, dihydrofolate reductase and dihydropteroate synthetase mutation prevalence did not enter the model. This is probably related to a lack of power (quintuple mutations n = 12) and informative censoring; treatment failures were withdrawn from the study and given rescue treatment, usually prior to completion of follow-up.
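A critical exponential shape of the kind the authors modified, g(t) = (a + b·t)·e^(-c·t), rises from baseline and decays back toward zero, which matches a post-treatment gametocyte wave. The sketch below is an assumption for illustration (simulated data, no mixed effects): it exploits the fact that with the decay rate c fixed, the model is linear in (a, b), so c can be profiled on a grid with plain least squares at each value.

```python
import numpy as np

# Profile a critical-exponential curve (a + b*t) * exp(-c*t) on simulated,
# lightly noisy data over a 42-day follow-up; a and b are solved linearly
# for each candidate c. All parameter values are illustrative.
rng = np.random.default_rng(3)
t = np.linspace(0, 42, 15)                   # days of follow-up
a_true, b_true, c_true = 0.2, 1.5, 0.15
y = (a_true + b_true * t) * np.exp(-c_true * t) + rng.normal(0, 0.05, t.size)

best = None
for c in np.linspace(0.01, 0.5, 200):
    # with c fixed, the model is linear in (a, b)
    X = np.column_stack([np.exp(-c * t), t * np.exp(-c * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = np.sum((y - X @ coef) ** 2)
    if best is None or sse < best[0]:
        best = (sse, c, coef)

sse_hat, c_hat, (a_hat, b_hat) = best
```

The population version replaces the per-patient fit with random effects on (a, b, c) and covariates such as age and site on their means.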
Rast, Philippe; Hofer, Scott M.; Sparks, Catharine
2012-01-01
A mixed effects location scale model was used to model and explain individual differences in within-person variability of negative and positive affect across 7 days (N=178) within a measurement burst design. The data come from undergraduate university students and are pooled from a study that was repeated at two consecutive years. Individual…
Pillai, Goonaseelan Colin; Mentré, France; Steimer, Jean-Louis
2005-04-01
Few scientific contributions have made significant impact unless there was a champion who had the vision to see the potential for its use in seemingly disparate areas-and who then drove active implementation. In this paper, we present a historical summary of the development of non-linear mixed effects (NLME) modeling up to the more recent extensions of this statistical methodology. The paper places strong emphasis on the pivotal role played by Lewis B. Sheiner (1940-2004), who used this statistical methodology to elucidate solutions to real problems identified in clinical practice and in medical research and on how he drove implementation of the proposed solutions. A succinct overview of the evolution of the NLME modeling methodology is presented as well as ideas on how its expansion helped to provide guidance for a more scientific view of (model-based) drug development that reduces empiricism in favor of critical quantitative thinking and decision making.
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by the Akaike information criterion, the Bayesian information criterion, and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function, and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
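The AR(1) residual structure named above means that, within a plot, successive residuals satisfy e_k = ρ·e_{k-1} + w_k. A minimal sketch, with a simulated series rather than crown-width residuals, shows that ρ can be recovered from the lag-1 autocorrelation (a moment estimate; fitting software estimates it jointly with the variance function).

```python
import numpy as np

# Simulate an AR(1) residual series e_k = rho * e_{k-1} + w_k, then estimate
# rho as the lag-1 autocorrelation. rho_true and the series length are
# illustrative assumptions.
rng = np.random.default_rng(4)
rho_true, n = 0.6, 5000
e = np.empty(n)
e[0] = rng.normal(0, 1)
for k in range(1, n):
    e[k] = rho_true * e[k - 1] + rng.normal(0, 1)

# lag-1 autocorrelation of the centered series
e0 = e - e.mean()
rho_hat = np.sum(e0[1:] * e0[:-1]) / np.sum(e0**2)
```

Ignoring ρ leaves the fixed-effect estimates unbiased but understates their standard errors, which is why the correlation structures are compared by information criteria.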
DEFF Research Database (Denmark)
Thorsted, Anders; Thygesen, Peter; Agersø, Henrik
2016-01-01
was developed from experimental PKPD studies of rhGH and effects of long-term treatment as measured by insulin-like growth factor 1 (IGF-1) and bodyweight gain in rats. Modelled parameter values were scaled to human values using the allometric approach with fixed exponents for PKs and unscaled for PDs...... and validated through simulations relative to patient data. KEY RESULTS: The final model described rhGH PK as a two compartmental model with parallel linear and non-linear elimination terms, parallel first-order absorption with a total s.c. bioavailability of 87% in rats. Induction of IGF-1 was described...... by an indirect response model with stimulation of kin and related to rhGH exposure through an Emax relationship. Increase in bodyweight was directly linked to individual concentrations of IGF-1 by a linear relation. The scaled model provided robust predictions of human systemic PK of rhGH, but exposure following...
A nonlinear mixed-effects model for simultaneous smoothing and registration of functional data
DEFF Research Database (Denmark)
Raket, Lars Lau; Sommer, Stefan Horst; Markussen, Bo
2014-01-01
We consider misaligned functional data, where data registration is necessary for proper statistical analysis. This paper proposes to treat misalignment as a nonlinear random effect, which makes simultaneous likelihood inference for horizontal and vertical effects possible. By simultaneously fitti...
Justin S. Crotteau; Martin W. Ritchie; J. Morgan. Varner
2014-01-01
Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean, we also assume that the covariance matrix depends on covariates and random effects. This allows us to explore whether the covariance structure depends on the values of the higher levels and, as such, models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher-level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on not-recorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Zamuner, Stefano; Gomeni, Roberto; Bye, Alan
2002-01-01
Positron-Emission Tomography (PET) is an imaging technology currently used in drug development as a non-invasive measure of drug distribution and interaction with the biochemical target system. The level of receptor occupancy achieved by a compound can be estimated by comparing time-activity measurements in an experiment done using tracer alone with the activity measured when the tracer is given following administration of unlabelled compound. The effective use of this surrogate marker as an enabling tool for drug development requires the definition of a model linking the brain receptor occupancy with the fluctuation of plasma concentrations. However, the predictive performance of such a model is strongly related to the precision of the estimate of receptor occupancy evaluated in PET scans collected at different times following drug treatment. Several methods have been proposed for the analysis and quantification of the ligand-receptor interactions investigated from PET data. The aim of the present study is to evaluate alternative parameter estimation strategies based on the use of non-linear mixed effect models that account for intra- and inter-subject variability in the time-activity data and for covariates potentially explaining this variability. A comparison of the different modeling approaches is presented using real data. The results of this comparison indicate that the mixed effect approach, with a primary model partitioning the variance in terms of Inter-Individual Variability (IIV) and Inter-Occasion Variability (IOV) and a second-stage model relating the changes in binding potential to the dose of unlabelled drug, is definitely the preferred approach.
Directory of Open Access Journals (Sweden)
Joachim Almquist
The last decade has seen a rapid development of experimental techniques that allow data collection from individual cells. These techniques have enabled the discovery and characterization of variability within a population of genetically identical cells. Nonlinear mixed effects (NLME) modeling is an established framework for studying variability between individuals in a population, frequently used in pharmacokinetics and pharmacodynamics, but its potential for studies of cell-to-cell variability in molecular cell biology is yet to be exploited. Here we take advantage of this novel application of NLME modeling to study cell-to-cell variability in the dynamic behavior of the yeast transcription repressor Mig1. In particular, we investigate a recently discovered phenomenon where Mig1 during a short and transient period exits the nucleus when cells experience a shift from high to intermediate levels of extracellular glucose. A phenomenological model based on ordinary differential equations describing the transient dynamics of nuclear Mig1 is introduced, and according to the NLME methodology the parameters of this model are in turn modeled by a multivariate probability distribution. Using time-lapse microscopy data from nearly 200 cells, we estimate this parameter distribution according to the approach of maximizing the population likelihood. Based on the estimated distribution, parameter values for individual cells are furthermore characterized and the resulting Mig1 dynamics are compared to the single-cell time-series data. The proposed NLME framework is also compared to the intuitive but limited standard two-stage (STS) approach. We demonstrate that the latter may overestimate variabilities by up to almost fivefold. Finally, Monte Carlo simulations of the inferred population model are used to predict the distribution of key characteristics of the Mig1 transient response. We find that with decreasing levels of post-shift glucose, the transient
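The reason the standard two-stage (STS) approach can overestimate variability, as reported above, is mechanical: each single-cell fit carries its own estimation noise, and STS attributes all spread among the individual estimates to true cell-to-cell variability. A simulated sketch (all variances are assumptions, not the Mig1 values) makes the inflation explicit.

```python
import numpy as np

# STS variance inflation: observed spread of per-cell estimates is the sum of
# true between-cell variance and per-cell estimation-error variance.
rng = np.random.default_rng(5)
n_cells = 200
omega2_true = 1.0     # true between-cell parameter variance (assumed)
sigma_est2 = 3.0      # variance of per-cell estimation error (assumed)

theta_true = rng.normal(0.0, np.sqrt(omega2_true), n_cells)
theta_hat = theta_true + rng.normal(0.0, np.sqrt(sigma_est2), n_cells)

omega2_sts = np.var(theta_hat, ddof=1)   # STS: variance of individual fits
inflation = omega2_sts / omega2_true     # ~ (omega2 + sigma_est2) / omega2
```

An NLME fit avoids this by estimating the between-cell variance jointly with the residual noise, so the estimation error is not folded into the population variability.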
Directory of Open Access Journals (Sweden)
Edward M Grant
We examined associations among longitudinal, multilevel variables and girls' physical activity to determine the important predictors for physical activity change at different adolescent ages. The Trial of Activity for Adolescent Girls 2 study (Maryland) contributed participants from 8th (2009) to 11th grade (2011) (n=561). Questionnaires were used to obtain demographic and psychosocial information (individual- and social-level variables); height, weight, and triceps skinfold to assess body composition; interviews and surveys for school-level data; and self-report for neighborhood-level variables. Moderate to vigorous physical activity minutes were assessed from accelerometers. A doubly regularized linear mixed effects model was used for the longitudinal multilevel data to identify the most important covariates for physical activity. Three fixed effects at the individual level and one random effect at the school level were chosen from an initial total of 66 variables, consisting of 47 fixed-effects and 19 random-effects variables, in addition to the time effect. Self-management strategies, perceived barriers, and social support from friends were the three selected fixed effects, and whether intramural or interscholastic programs were offered in middle school was the selected random effect. Psychosocial factors and friend support, plus a school's physical activity environment, affect adolescent girls' moderate to vigorous physical activity longitudinally.
Lin, Xiaolei; Mermelstein, Robin J; Hedeker, Donald
2018-06-15
Ecological momentary assessment studies usually produce intensively measured longitudinal data with large numbers of observations per unit, and research interest is often centered around understanding the changes in variation of people's thoughts, emotions and behaviors. Hedeker et al developed a 2-level mixed effects location scale model that allows observed covariates as well as unobserved variables to influence both the mean and the within-subjects variance, for a 2-level data structure where observations are nested within subjects. In some ecological momentary assessment studies, subjects are measured at multiple waves, and within each wave, subjects are measured over time. Li and Hedeker extended the original 2-level model to a 3-level data structure where observations are nested within days and days are then nested within subjects, by including a random location and scale intercept at the intermediate wave level. However, the 3-level random intercept model assumes constant response change rate for both the mean and variance. To account for changes in variance across waves, as well as clustering attributable to waves, we propose a more comprehensive location scale model that allows subject heterogeneity at baseline as well as across different waves, for a 3-level data structure where observations are nested within waves and waves are then further nested within subjects. The model parameters are estimated using Markov chain Monte Carlo methods. We provide details on the Bayesian estimation approach and demonstrate how the Stan statistical software can be used to sample from the desired distributions and achieve consistent estimates. The proposed model is validated via a series of simulation studies. Data from an adolescent smoking study are analyzed to demonstrate this approach. The analyses clearly favor the proposed model and show significant subject heterogeneity at baseline as well as change over time, for both mood mean and variance. The proposed 3-level
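The core of the location-scale idea above is that each subject carries both a random *location* effect (shifting their mean) and a random *scale* effect (inflating or deflating their within-subject variance). A simulated sketch, with all variance components assumed for illustration, shows the resulting heterogeneity that a constant-variance mixed model would miss.

```python
import numpy as np

# Simulate a 2-level location-scale structure: subject-specific means (loc)
# and subject-specific within-subject standard deviations (scale, log-normal
# so it stays positive). Per-subject sample variances then spread widely.
rng = np.random.default_rng(6)
n_subj, n_obs = 100, 40
loc = rng.normal(0.0, 1.0, n_subj)            # random location effects
scale = np.exp(rng.normal(0.0, 0.5, n_subj))  # random scale effects

y = loc[:, None] + rng.normal(0.0, 1.0, (n_subj, n_obs)) * scale[:, None]

within_var = y.var(axis=1, ddof=1)            # per-subject variance
ratio = within_var.max() / within_var.min()   # heterogeneity across subjects
```

The 3-level model in the paper adds a wave-level intercept for both location and scale, so the same mechanics operate at baseline and across waves.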
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
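The model-comparison step described above can be sketched with a linear-spline (piecewise linear) basis: a hinge term added to the design matrix lets the slope change at a knot. The knot location, slopes and noise level below are illustrative assumptions, not values from the ECLS-K data, and the subject-level random effects of the full mixed model are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.tile(np.arange(9.0), 50)                   # 9 yearly waves, 50 children
knot = 4.0                                        # assumed change point
truth = 20 + 8 * t - 5 * np.maximum(t - knot, 0)  # growth slows after the knot
y = truth + rng.normal(0, 2, t.size)

# Design matrices: straight line vs piecewise linear (adds a hinge basis term).
X_lin = np.column_stack([np.ones_like(t), t])
X_pw = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0)])

sse = {}
for name, X in [("linear", X_lin), ("piecewise", X_pw)]:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse[name] = float(np.sum((y - X @ beta) ** 2))

print(sse["piecewise"] < sse["linear"])
```

When growth genuinely changes rate, the piecewise model fits markedly better, mirroring the study's finding that the piecewise linear mixed-effects model best captured the functional form of the trajectories.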
Directory of Open Access Journals (Sweden)
Han S
2017-07-01
Full Text Available Seunghoon Han,1,2 Gun Hyung Na,3 Dong-Goo Kim3 1Department of Pharmacology, College of Medicine, The Catholic University of Korea, Seocho-gu, Seoul, South Korea; 2Pharmacometrics Institute for Practical Education and Training, The Catholic University of Korea, Seocho-gu, Seoul, South Korea; 3Department of Surgery, Seoul St Mary’s Hospital, The Catholic University of Korea, Seocho-gu, Seoul, South Korea Background: Although individualized dosage regimens for anti-hepatitis B immunoglobulin (HBIG) therapy have been suggested, the pharmacokinetic profile and factors influencing the basis for individualization have not been sufficiently assessed. We sought to evaluate the pharmacokinetic characteristics of anti-HBIG quantitatively during the first 6 months after liver transplantation. Methods: Identical doses of 10,000 IU HBIG were administered to adult liver transplant recipients daily during the first week, weekly thereafter until 28 postoperative days, and monthly thereafter. Blood samples were obtained at days 1, 7, 28, 84, and 168 after transplantation. Plasma HBIG titer was quantified using 4 different immunoassay methods. The titer determined by each analytical method was used for mixed-effect modeling, and the most precise results were chosen. Simulations were performed to predict the plausible immunoglobulin maintenance dose. Results: HBIG was eliminated from the body most rapidly in the immediate post-transplant period, and the elimination rate gradually decreased thereafter. In the early post-transplant period, patients with higher DNA titer tended to have lower plasma HBIG concentrations. The maintenance doses required to attain targets in 90%, 95%, and 99% of patients were ~15.3, 18.2, and 25.1 IU, respectively, multiplied by the target trough level (in IU/L). Conclusion: The variability (explained and unexplained) in HBIG pharmacokinetics was relatively larger in the early post-transplant period. Dose individualization based upon
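The "maintenance dose = factor × target trough" rule reported above has a simple pharmacokinetic reading: for a one-compartment model with first-order elimination and repeated fixed-interval dosing, the steady-state trough concentration is C_trough = (Dose/V)·e^(−kτ)/(1 − e^(−kτ)), so the dose needed scales linearly with the target. The sketch below is a hedged illustration of that relationship only; the volume, half-life and interval values are assumptions, not estimates from the HBIG study.

```python
import numpy as np

def maintenance_dose(target_trough, V=5.0, half_life_days=20.0, tau_days=28.0):
    """Dose (IU) giving the target steady-state trough (IU/L) under
    first-order elimination with dosing interval tau (all values assumed)."""
    k = np.log(2) / half_life_days          # elimination rate constant
    e = np.exp(-k * tau_days)               # fraction remaining at the trough
    return target_trough * V * (1 - e) / e

dose_100 = maintenance_dose(100.0)
dose_200 = maintenance_dose(200.0)
print(round(dose_100, 1), round(dose_200, 1))
```

Doubling the target trough doubles the required dose, which is why the paper can summarize its simulation results as a single multiplier of the target trough level.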
Directory of Open Access Journals (Sweden)
Johanna Petersen
Full Text Available Time out-of-home has been linked with numerous health outcomes, including cognitive decline, poor physical ability and low emotional state. Comprehensive characterization of this important health metric would potentially enable objective monitoring of key health outcomes. The objective of this study is to determine the relationship between time out-of-home and cognitive status, physical ability and emotional state. Participants included 85 independent older adults, age 65-96 years (M = 86.36; SD = 6.79), who lived alone, from the Intelligent Systems for Assessing Aging Changes (ISAAC) and the ORCATECH Life Laboratory cohorts. Factors hypothesized to affect time out-of-home were assessed on three different temporal levels: yearly (cognitive status, loneliness, clinical walking speed), weekly (pain and mood) or daily (time out-of-home, in-home walking speed, weather, and season). Subject characteristics including age, race, and gender were assessed at baseline. Total daily time out-of-home in hours was assessed objectively and unobtrusively for up to one year using an in-home activity sensor platform. A longitudinal tobit mixed effects regression model was used to relate daily time out-of-home to cognitive status, physical ability and emotional state. More hours spent outside the home were associated with better cognitive function as assessed using the Clinical Dementia Rating (CDR) Scale, where higher scores indicate lower cognitive function (βCDR = -1.69, p<0.001). More hours outside the home were also associated with superior physical ability (βPain = -0.123, p<0.001) and improved emotional state (βLonely = -0.046, p<0.001; βLow mood = -0.520, p<0.001). Weather, season, and weekday also affected the daily time out-of-home. These results suggest that objective longitudinal monitoring of time out-of-home may enable unobtrusive assessment of cognitive, physical and emotional state. In addition, these results indicate that the factors affecting out
Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie
2018-05-01
Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and of soil units and sample points separately for single-level modeling. Additionally, three variance functions, the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heteroscedasticity, and three correlation structures, the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. The nested two-level model considering both heteroscedasticity with CPP and spatiotemporal correlation with ARMA(1,1) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R2 = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
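The covariance building blocks named above can be sketched directly: an AR(1) correlation matrix over the 12 annual observations combined with a constant-plus-power (CPP) variance function of a covariate gives the marginal residual covariance Σ = D·R·D, with D diagonal in the per-observation SDs. All parameter values (φ, σ, the rainfall covariate) are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def ar1_corr(n, phi):
    """AR(1) correlation matrix: corr(t, s) = phi**|t - s|."""
    idx = np.arange(n)
    return phi ** np.abs(idx[:, None] - idx[None, :])

def cpp_sd(x, sigma, const, power):
    """Constant-plus-power variance function: sd = sigma * (const + |x|**power)."""
    return sigma * (const + np.abs(x) ** power)

years = 12
R = ar1_corr(years, phi=0.4)
rain = np.linspace(200.0, 600.0, years)   # assumed rainfall covariate values
s = cpp_sd(rain, sigma=0.01, const=1.0, power=0.5)

# Marginal residual covariance: Sigma = diag(s) @ R @ diag(s)
Sigma = np.diag(s) @ R @ np.diag(s)
print(Sigma.shape)
```

Because the AR(1) correlation matrix is positive definite for |φ| < 1 and the scaling is by a positive diagonal, Σ is a valid covariance matrix; the ARMA(1,1) structure used in the paper generalizes R by decoupling the lag-1 correlation from the decay rate.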
Tan, Andy S L; Nagler, Rebekah H; Hornik, Robert C; DeMichele, Angela
2015-07-01
This study describes how cancer survivors' information needs about recurrence, late effects, and family risks of cancer evolve over the course of their survivorship period. Three annual surveys were conducted from 2006 to 2008 in a cohort of Pennsylvania cancer survivors diagnosed with colon, breast, or prostate cancer in 2005 (round 1, N = 2,013; round 2, N = 1,293; round 3, N = 1,128). Outcomes were information seeking about five survivorship topics. Key predictors were survey round, cancer diagnosis, and the interaction between these variables. Mixed-effects logistic regression analyses were performed to predict information seeking about each topic, adjusting for demographic variables, clinical characteristics, and clustering of repeated observations within individuals. Information seeking about reducing risks of cancer recurrence was the most frequently reported topic across survivors and over time. Breast cancer survivors were more likely to seek information about survivorship topics at round 1 compared with other survivors. In general, information seeking declined over time, but cancer-specific patterns emerged: the decline was sharpest for breast cancer survivors, whereas in later years female colon cancer survivors actually sought more information (about how to reduce the risk of family members getting colon cancer or a different cancer). Cancer survivors' information needs varied over time depending on the topic, and these trends differed by cancer type. Clinicians may need to intervene at distinct points during the survivorship period with information to address concerns about cancer recurrence, late effects, and family members' risks. ©2015 American Association for Cancer Research.
Band mixing effects in mean field theories
International Nuclear Information System (INIS)
Kuyucak, S.; Morrison, I.
1989-01-01
The 1/N expansion method, which is an angular momentum projected mean field theory, is used to investigate the nature of electromagnetic transitions in the interacting boson model (IBM). Conversely, comparison with the exact IBM results sheds light on the range of validity of the mean field theory. It is shown that the projected mean field results for the E2 transitions among the ground, β and γ bands are incomplete for the spin dependent terms and it is essential to include band mixing effects for a correct (Mikhailov) analysis of E2 data. The algebraic expressions derived are general and will be useful in the analysis of experimental data in terms of both the sd and sdg boson models. 17 refs., 7 figs., 8 tabs
Ma, Qiuyun; Jiao, Yan; Ren, Yiping
2017-01-01
In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total, 3,275 individuals were collected during six years (2008, 2011-2015). One generalized linear model, two simple linear models and nine linear mixed effect models that applied the effects of regions and/or years to the coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed effect model with random effects from both regions and years fit the data best, with the lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178~0.0308, and the estimated exponent b was 2.917, with 95% confidence interval 2.731~2.945. Estimates for a and b with the random effects in intercept and coefficient from region and year ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on parameters a and b, and the effects of years were much larger than those of regions. Except for Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to reference years of 1960, 1986, 2005, 2007, 2008~2009 and 2010 revealed that the body shape of Yellow Croaker has become thinner in recent years. Furthermore, relative condition factors varied among months, years, regions and length. The values of a and relative condition factors decreased as environmental pollution worsened; therefore, length-weight relationships could serve as an indicator of environmental quality. Results from this study provide a basic description of the current condition of Yellow Croaker along the north coast of China.
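The core of every model in the abstract above is the allometric length-weight relationship W = a·L^b, fit on the log scale: log(W) = log(a) + b·log(L). The sketch below simulates data with a and b set near the paper's estimates (a ≈ 0.019, b ≈ 2.92) and recovers them by least squares; the region/year random effects of the full mixed model are omitted, and the length range is an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

a_true, b_true = 0.019, 2.92
L = rng.uniform(5.0, 30.0, 500)   # lengths in cm (assumed range)
# Multiplicative lognormal error, the usual assumption behind log-scale fitting
W = a_true * L ** b_true * np.exp(rng.normal(0, 0.1, L.size))

# log(W) = log(a) + b * log(L): ordinary least squares on the log scale
X = np.column_stack([np.ones_like(L), np.log(L)])
(loga_hat, b_hat), *_ = np.linalg.lstsq(X, np.log(W), rcond=None)
a_hat = np.exp(loga_hat)

print(round(a_hat, 4), round(b_hat, 3))
```

A mixed-effects version would let log(a) and b vary by region and year as random intercepts and slopes, which is exactly the best-fitting specification the study reports.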
Kondo, Yumi; Zhao, Yinshan; Petkau, John
2015-06-15
We develop a new modeling approach to enhance a recently proposed method to detect increases of contrast-enhancing lesions (CELs) on repeated magnetic resonance imaging, which have been used as an indicator for potential adverse events in multiple sclerosis clinical trials. The method signals patients with unusual increases in CEL activity by estimating the probability of observing CEL counts as large as those observed on a patient's recent scans conditional on the patient's CEL counts on previous scans. This conditional probability index (CPI), computed based on a mixed-effect negative binomial regression model, can vary substantially depending on the choice of distribution for the patient-specific random effects. Therefore, we relax this parametric assumption to model the random effects with an infinite mixture of beta distributions, using the Dirichlet process, which effectively allows any form of distribution. To our knowledge, no previous literature considers a mixed-effect regression for longitudinal count variables where the random effect is modeled with a Dirichlet process mixture. As our inference is in the Bayesian framework, we adopt a meta-analytic approach to develop an informative prior based on previous clinical trials. This is particularly helpful at the early stages of trials when less data are available. Our enhanced method is illustrated with CEL data from 10 previous multiple sclerosis clinical trials. Our simulation study shows that our procedure estimates the CPI more accurately than parametric alternatives when the patient-specific random effect distribution is misspecified and that an informative prior improves the accuracy of the CPI estimates. Copyright © 2015 John Wiley & Sons, Ltd.
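The conditional probability index idea can be illustrated with a stdlib-only sketch: given a predicted mean CEL count mu and a negative binomial dispersion (size) parameter r, the CPI-style quantity is the probability of observing a count at least as large as the one seen. In the paper, mu and r come from the mixed-effect NB model with a Dirichlet-process random-effect distribution; here they are simply assumed numbers.

```python
import math

def nb_pmf(k, mu, r):
    """Negative binomial pmf parameterized by mean mu and dispersion (size) r."""
    p = r / (r + mu)
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + r * math.log(p) + k * math.log(1 - p))

def cpi_tail(y_obs, mu, r):
    """P(Y >= y_obs): probability of a count at least as large as observed."""
    return 1.0 - sum(nb_pmf(k, mu, r) for k in range(y_obs))

# A patient predicted to have ~2 lesions who shows 8 gets a small tail
# probability, flagging unusually high CEL activity (mu and r are assumed).
prob = cpi_tail(8, mu=2.0, r=1.5)
print(round(prob, 4))
```

A small tail probability signals a patient whose recent scan counts are unlikely given their history, which is how the method flags potential adverse events.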
Rajeswaran, Jeevanantham; Blackstone, Eugene H; Barnard, John
2018-07-01
In many longitudinal follow-up studies, we observe more than one longitudinal outcome. Impaired renal and liver functions are indicators of poor clinical outcomes for patients who are on mechanical circulatory support and awaiting heart transplant. Hence, monitoring organ functions while waiting for heart transplant is an integral part of patient management. Longitudinal measurements of bilirubin can be used as a marker for liver function and glomerular filtration rate for renal function. We derive an approximation to evolution of association between these two organ functions using a bivariate nonlinear mixed effects model for continuous longitudinal measurements, where the two submodels are linked by a common distribution of time-dependent latent variables and a common distribution of measurement errors.
Directory of Open Access Journals (Sweden)
Jun Diao
2014-11-01
Full Text Available Allometric models of internodes are an important component of Functional-Structural Plant Models (FSPMs, which represent the shape of internodes in tree architecture and help our understanding of resource allocation in organisms. Constant allometry is always assumed in these models. In this paper, multilevel nonlinear mixed-effect models were used to characterize the variability of internode allometry, describing the relationship between the last internode length and biomass of Pinus tabulaeformis Carr. trees within the GreenLab framework. We demonstrated that there is significant variability in allometric relationships at the tree and different-order branch levels, and the variability decreases among levels from trees to first-order branches and, subsequently, to second-order branches. The variability was partially explained by the random effects of site characteristics, stand age, density, and topological position of the internode. Tree- and branch-level-specific allometric models are recommended because they produce unbiased and accurate internode length estimates. The model and method developed in this study are useful for understanding and describing the structure and functioning of trees.
Krengel, Annette; Hauth, Jan; Taskinen, Marja-Riitta; Adiels, Martin; Jirstrand, Mats
2013-01-19
When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. With this kind of inference making, uncertainties in the time when the measurements have been taken are often neglected, but especially in applications taken from the life sciences, this kind of error can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is coming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by themselves not ready to handle uncertainties in measurement times. In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduce mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example
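For orientation, here is a hedged sketch of the standard bootstrap particle filter that the article takes as its starting point, applied to a Gaussian random-walk state observed with noise. The MTU extension (treating the measurement times themselves as random) is not reproduced here; the state model and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

T, n_particles = 50, 2000
q, r = 0.1, 0.5   # process / observation noise SDs (assumed)

# Simulate a random-walk truth and noisy observations
x_true = np.cumsum(rng.normal(0, q, T))
y = x_true + rng.normal(0, r, T)

particles = np.zeros(n_particles)
estimates = []
for t in range(T):
    particles += rng.normal(0, q, n_particles)        # propagate through the state model
    logw = -0.5 * ((y[t] - particles) / r) ** 2       # Gaussian observation log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()                                      # normalized importance weights
    estimates.append(np.sum(w * particles))           # weighted posterior-mean estimate
    idx = rng.choice(n_particles, n_particles, p=w)   # multinomial resampling
    particles = particles[idx]

rmse = float(np.sqrt(np.mean((np.array(estimates) - x_true) ** 2)))
print(round(rmse, 3))
```

The filtered estimate tracks the state with error well below the raw observation noise; the resampling step is the point where degeneracy problems arise and where the MTU-PF's adaptive stepsize choice intervenes.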
Directory of Open Access Journals (Sweden)
Erik Olofsen
2015-07-01
Full Text Available Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect, thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, with respect to both lowest AICc and best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
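The two criteria compared in these simulations are simple functions of the maximized log-likelihood logL, the number of estimated parameters k, and the sample size n: AIC = 2k − 2·logL, and AICc adds the small-sample correction 2k(k+1)/(n − k − 1). A minimal sketch (the logL value below is an arbitrary illustration):

```python
def aic(logL, k):
    """Akaike's information criterion."""
    return 2 * k - 2 * logL

def aicc(logL, k, n):
    """AICc: AIC plus a penalty that grows as k approaches n."""
    return aic(logL, k) + 2 * k * (k + 1) / (n - k - 1)

# With n = 11 observations per fit, as in the study's design, the correction
# becomes substantial for richer models:
for k in (2, 4, 6):
    print(k, round(aicc(-10.0, k, 11) - aic(-10.0, k), 2))
```

Because the correction term blows up as k approaches n, AICc penalizes high-dimensional models far more heavily than AIC at these small sample sizes, which is consistent with mean AICc tracking predictive performance better in the simulations.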
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
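The starting point of the article, the Crawford-Howell modified t-test, compares a single case's score x with a control sample of size n (mean m, SD s) using t = (x − m) / (s·√((n+1)/n)) on n − 1 degrees of freedom. A minimal sketch (the numeric inputs are made-up illustrative values):

```python
import math
from scipy.stats import t as t_dist

def modified_t(x, m, s, n):
    """Crawford & Howell (1998) case-controls comparison:
    returns the modified t statistic and its two-sided p-value."""
    t = (x - m) / (s * math.sqrt((n + 1) / n))
    p = 2 * t_dist.sf(abs(t), df=n - 1)
    return t, p

# A case scoring 70 against 20 controls with mean 100 and SD 10:
t_val, p_val = modified_t(x=70.0, m=100.0, s=10.0, n=20)
print(round(t_val, 3), round(p_val, 4))
```

The √((n+1)/n) inflation is what distinguishes this from treating the control mean and SD as population values; the article's contribution is to reproduce this test as a dummy-coded predictor in a regression and then generalize it with LMMs.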
Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.
2016-12-01
Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed effect models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximately 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm3 (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects, and thus the use of the soil C model may be most valuable for sites in which field measurements of soil C exist.
Perez-Rodriguez, M Mercedes; Garcia-Nieto, Rebeca; Fernandez-Navarro, Pablo; Galfalvy, Hanga; de Leon, Jose; Baca-Garcia, Enrique
2012-01-01
Objectives: To investigate the trends and correlations of gross domestic product (GDP) adjusted for purchasing power parity (PPP) per capita on suicide rates in 10 WHO regions during the past 30 years. Design: Analyses of databases of PPP-adjusted GDP per capita and suicide rates. Countries were grouped according to the Global Burden of Disease regional classification system. Data sources: World Bank's official website and WHO's mortality database. Statistical analyses: After graphically displaying PPP-adjusted GDP per capita and suicide rates, mixed effect models were used for representing and analysing clustered data. Results: Three different groups of countries, based on the correlation between the PPP-adjusted GDP per capita and suicide rates, are reported: (1) positive correlation: developing (lower middle and upper middle income) Latin-American and Caribbean countries, developing countries in the South East Asian Region including India, some countries in the Western Pacific Region (such as China and South Korea) and high-income Asian countries, including Japan; (2) negative correlation: high-income and developing European countries, Canada, Australia and New Zealand; and (3) no correlation was found in an African country. Conclusions: PPP-adjusted GDP per capita may offer a simple measure for designing the type of preventive interventions aimed at lowering suicide rates that can be used across countries. Public health interventions might be more suitable for developing countries. In high-income countries, however, preventive measures based on the medical model might prove more useful. PMID:22586285
International Nuclear Information System (INIS)
Capozzoli, Alfonso; Piscitelli, Marco Savino; Neri, Francesco; Grassi, Daniele; Serale, Gianluca
2016-01-01
Highlights: • 100 Healthcare Centres were analyzed to assess energy consumption reference values. • A novel robust methodology for energy benchmarking process was proposed. • A Linear Mixed Effect estimation Model was used to treat heterogeneous datasets. • A nondeterministic approach was adopted to consider the uncertainty in the process. • The methodology was developed to be upgradable and generalizable to other datasets. - Abstract: The current EU energy efficiency directive 2012/27/EU defines the existing building stocks as one of the most promising potential sector for achieving energy saving. Robust methodologies aimed to quantify the potential reduction of energy consumption for large building stocks need to be developed. To this purpose, a benchmarking analysis is necessary in order to support public planners in determining how well a building is performing, in setting credible targets for improving performance or in detecting abnormal energy consumption. In the present work, a novel methodology is proposed to perform a benchmarking analysis particularly suitable for heterogeneous samples of buildings. The methodology is based on the estimation of a statistical model for energy consumption – the Linear Mixed Effects Model –, so as to account for both the fixed effects shared by all individuals within a dataset and the random effects related to particular groups/classes of individuals in the population. The groups of individuals within the population have been classified by resorting to a supervised learning technique. Under this backdrop, a Monte Carlo simulation is worked out to compute the frequency distribution of annual energy consumption and identify a reference value for each group/class of buildings. The benchmarking analysis was tested for a case study of 100 out-patient Healthcare Centres in Northern Italy, finally resulting in 12 different frequency distributions for space and Domestic Hot Water heating energy consumption, one for
Directory of Open Access Journals (Sweden)
Backhans Mona
2012-11-01
Full Text Available Abstract Background: Gender differences in mortality vary widely between countries and over time, but few studies have examined predictors of these variations, apart from smoking. The aim of this study is to investigate the link between gender policy and the gender gap in cause-specific mortality, adjusted for economic factors and health behaviours. Methods: 22 OECD countries were followed 1973–2008 and the outcomes were gender gaps in external cause and circulatory disease mortality. A previously found country cluster solution was used, which includes indicators on taxes, parental leave, pensions, social insurances and social services in kind. Male breadwinner countries were made the reference group and compared to earner-carer, compensatory breadwinner, and universal citizen countries. Specific policies were also analysed. Mixed effect models were used, where years were the level-1 units and countries were the level-2 units. Results: Both the earner-carer cluster (ns after adjustment for GDP) and policies characteristic of that cluster are associated with smaller gender differences in external causes, particularly due to an association with increased female mortality. Cluster differences in the gender gap in circulatory disease mortality are the result of a larger relative decrease of male mortality in the compensatory breadwinner cluster and the earner-carer cluster. Policies characteristic of those clusters were however generally related to increased mortality. Conclusion: Results for external cause mortality are in concordance with the hypothesis that women become more exposed to risks of accident and violence when they are economically more active. For circulatory disease mortality, results differ depending on approach – cluster or indicator. Whether cluster differences not explained by specific policies reflect other welfare policies or unrelated societal trends is an open question. Recommendations for further studies are made.
Visualizing Statistical Mix Effects and Simpson's Paradox.
Armstrong, Zan; Wattenberg, Martin
2014-12-01
We discuss how "mix effects" can surprise users of visualizations and potentially lead them to incorrect conclusions. This statistical issue (also known as "omitted variable bias" or, in extreme cases, as "Simpson's paradox") is widespread and can affect any visualization in which the quantity of interest is an aggregated value such as a weighted sum or average. Our first contribution is to document how mix effects can be a serious issue for visualizations, and we analyze how mix effects can cause problems in a variety of popular visualization techniques, from bar charts to treemaps. Our second contribution is a new technique, the "comet chart," that is meant to ameliorate some of these issues.
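The mix effect described above can be made concrete with the classic kidney-stone treatment data (Charig et al., 1986), a standard Simpson's paradox example: treatment A has the higher success rate within each stone-size subgroup, yet the lower rate overall, because the two treatments were applied to the subgroups in very different proportions.

```python
data = {
    #                  (successes, trials)
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(treatment, size=None):
    """Success rate for a treatment, optionally restricted to one subgroup."""
    items = [v for (t, s), v in data.items()
             if t == treatment and (size is None or s == size)]
    succ = sum(si for si, _ in items)
    total = sum(ni for _, ni in items)
    return succ / total

a_small, b_small = rate("A", "small"), rate("B", "small")
a_large, b_large = rate("A", "large"), rate("B", "large")
a_all, b_all = rate("A"), rate("B")

# A wins in each subgroup...
print(a_small > b_small, a_large > b_large)
# ...but loses in the aggregate: the reversal a mix-aware chart must expose.
print(a_all < b_all)
```

An aggregated bar chart of the overall rates would invite exactly the wrong conclusion here, which is the failure mode the comet chart is designed to surface.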
International Nuclear Information System (INIS)
Nordlund, Kai; Sand, Andrea E.; Granberg, Fredric; Zinkle, Steven J.; Stoller, Roger; Averback, Robert S.; Suzudo, Tomoaki; Malerba, Lorenzo; Banhart, Florian; Weber, William J.; Willaime, Francois; Dudarev, Sergei; Simeone, David
2015-01-01
Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Multi-scale Modelling of Fuels and Structural Materials for Nuclear Systems (WPMM) was established in 2008 to assess the scientific and engineering aspects of fuels and structural materials, aiming at evaluating multi-scale models and simulations as validated predictive tools for the design of nuclear systems, fuel fabrication and performance. The WPMM's objective is to promote the exchange of information on models and simulations of nuclear materials, theoretical and computational methods, experimental validation, and related topics. It also provides member countries with up-to-date information, shared data, models and expertise. The WPMM Expert Group on Primary Radiation Damage (PRD) was established in 2009 to determine the limitations of the NRT-dpa standard, in the light of both atomistic simulations and known experimental discrepancies, to revisit the NRT-dpa standard and to examine the possibility of proposing a new improved standard of primary damage characteristics. This report reviews the current understanding of primary radiation damage from neutrons, ions and electrons (excluding photons, atomic clusters and more exotic particles), with emphasis on the range of validity of the 'displacement per atom' (dpa) concept in all major classes of materials with the exception of organics. The report also introduces an 'athermal recombination-corrected dpa' (arc-dpa) relation that uses a relatively simple functional to address the well-known issue that 'displacement per atom' (dpa) overestimates damage production in metals under energetic displacement cascade conditions, as well as a 'replacements-per-atom' (rpa) equation, also using a relatively simple functional, that accounts for the fact that dpa is understood to severely underestimate actual atom relocation (ion beam mixing) in metals. (authors)
Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger
2017-09-01
The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
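For orientation, the variance decomposition behind this family of R2 statistics can be sketched in a few lines for the simple Gaussian case: marginal R2 uses the fixed-effect variance only, conditional R2 adds the random-effect variances. The component values below are assumed for illustration, not taken from the paper.

```python
# Marginal and conditional R2 for a Gaussian mixed model from its
# variance components (Nakagawa-Schielzeth style decomposition).
# The component values used below are assumed, for illustration only.

def r2_mixed(var_fixed, var_random, var_resid):
    """Return (marginal R2, conditional R2).

    var_fixed : variance explained by the fixed effects
    var_random: summed random-effect variances
    var_resid : residual variance
    """
    total = var_fixed + var_random + var_resid
    r2_marginal = var_fixed / total                    # fixed effects only
    r2_conditional = (var_fixed + var_random) / total  # fixed + random
    return r2_marginal, r2_conditional

m, c = r2_mixed(var_fixed=2.0, var_random=1.0, var_resid=1.0)
print(m, c)  # 0.5 0.75
```

The non-Gaussian extensions the paper develops replace `var_resid` with a distribution-specific observation-level variance, which is where the delta method enters.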
Bilinear Mixed Effects Models for Dyadic Data
National Research Council Canada - National Science Library
Hoff, Peter D
2003-01-01
This article discusses the use of a symmetric multiplicative interaction effect to capture certain types of third-order dependence patterns often present in social networks and other dyadic datasets...
Adrian Ioana; Tiberiu Socaciu
2013-01-01
The article presents specific aspects of management and models for economic analysis. Thus, we present the main types of economic analysis: statistical analysis, dynamic analysis, static analysis, mathematical analysis, and psychological analysis. We also present the main objects of the analysis: the technological activity analysis of a company, the analysis of production costs, the economic activity analysis of a company, the analysis of equipment, the analysis of labor productivity, the anal...
Czech Academy of Sciences Publication Activity Database
Pekár, S.; Brabec, Marek
2016-01-01
Roč. 122, č. 8 (2016), s. 621-631 ISSN 0179-1613 Institutional support: RVO:67985807 Keywords : linear models * marginal model * mixed-effects model * random effects * regression models * statistical analysis Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.398, year: 2016
Mixed-effects and fMRI studies
DEFF Research Database (Denmark)
Friston, K.J; Stephan, K.E; Ellegaard Lund, Torben
2005-01-01
This note concerns mixed-effect (MFX) analyses in multisession functional magnetic resonance imaging (fMRI) studies. It clarifies the relationship between mixed-effect analyses and the two-stage 'summary statistics' procedure (Holmes, A.P., Friston, K.J., 1998. Generalisability, random effects...
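The two-stage "summary statistics" procedure this note relates to mixed-effect analysis can be illustrated with a toy sketch: stage 1 collapses each subject's sessions to a single summary (here the mean effect), stage 2 runs a one-sample t-test on the summaries. All numbers are fabricated.

```python
# Two-stage "summary statistics" sketch: stage 1 reduces each subject's
# sessions to one summary statistic; stage 2 tests those summaries at
# the group level with a one-sample t-test. Data are made up.
import math
import statistics

subject_sessions = [
    [0.9, 1.1, 1.0],   # subject 1, three sessions
    [0.4, 0.6, 0.5],
    [1.4, 1.6, 1.5],
    [0.1, 0.3, 0.2],
]

# Stage 1: per-subject summary (mean effect across sessions).
summaries = [statistics.mean(s) for s in subject_sessions]

# Stage 2: one-sample t-test of the summaries against zero.
n = len(summaries)
mean = statistics.mean(summaries)
sd = statistics.stdev(summaries)            # sample SD (n - 1 denominator)
t = mean / (sd / math.sqrt(n))
print(round(t, 3))  # ≈ 2.799 with 3 degrees of freedom
```

The note's point is that this simple procedure coincides with a full mixed-effects analysis only under certain balance assumptions; the sketch shows the mechanics, not those conditions.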
Amir Farbin
The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses, have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model: The ATLAS Event Data Model (EDM) consists of several levels of detail, each targeted at a specific set of tasks. For example, the Event Summary Data (ESD) stores calorimeter cells and tracking system hits, thereby permitting many calibration and alignment tasks, but will only be accessible at particular computing sites with potentially large latency. In contrast, the Analysis...
Dart, Lyn; Vanbeber, Anne; Smith-Barbaro, Peggy; Costilla, Vanessa; Samuel, Charlotte; Terregino, Carol A.; Abali, Emine Ercikan; Dollinger, Beth; Baumgartner, Nicole; Kramer, Nicholas; Seelochan, Alex; Taher, Sabira; Deutchman, Mark; Evans, Meredith; Ellis, Robert B.; Oyola, Sonia; Maker-Clark, Geeta; Budnick, Isadore; Tran, David; DeValle, Nicole; Shepard, Rachel; Chow, Erika; Petrin, Christine; Razavi, Alexander; McGowan, Casey; Grant, Austin; Bird, Mackenzie; Carry, Connor; McGowan, Glynis; McCullough, Colleen; Berman, Casey M.; Dotson, Kerri; Sarris, Leah; Harlan, Timothy S.; Co-investigators, on behalf of the CHOP
2018-01-01
Background: Cardiovascular disease (CVD) annually claims more lives and costs more dollars than any other disease globally amid widening health disparities, despite the known significant reductions in this burden by low cost dietary changes. The world's first medical school-based teaching kitchen therefore launched CHOP-Medical Students as the largest known multisite cohort study of hands-on cooking and nutrition education versus traditional curriculum for medical students. Methods: This analysis provides a novel integration of artificial intelligence-based machine learning (ML) with causal inference statistics. 43 ML automated algorithms were tested, with the top performer compared to triply robust propensity score-adjusted multilevel mixed effects regression panel analysis of longitudinal data. Inverse-variance weighted fixed effects meta-analysis pooled the individual estimates for competencies. Results: 3,248 unique medical trainees met study criteria from 20 medical schools nationally from August 1, 2012, to June 26, 2017, generating 4,026 completed validated surveys. ML analysis produced similar results to the causal inference statistics based on root mean squared error and accuracy. Hands-on cooking and nutrition education compared to traditional medical school curriculum significantly improved student competencies (OR 2.14, 95% CI 2.00–2.28, p < 0.001) and MedDiet adherence (OR 1.40, 95% CI 1.07–1.84, p = 0.015), while reducing trainees' soft drink consumption (OR 0.56, 95% CI 0.37–0.85, p = 0.007). Overall improved competencies were demonstrated from the initial study site through the scale-up of the intervention to 10 sites nationally (p < 0.001). Discussion: This study provides the first machine learning-augmented causal inference analysis of a multisite cohort showing hands-on cooking and nutrition education for medical trainees improves their competencies counseling patients on nutrition, while improving students' own diets. This study suggests that
Monlezun, Dominique J; Dart, Lyn; Vanbeber, Anne; Smith-Barbaro, Peggy; Costilla, Vanessa; Samuel, Charlotte; Terregino, Carol A; Abali, Emine Ercikan; Dollinger, Beth; Baumgartner, Nicole; Kramer, Nicholas; Seelochan, Alex; Taher, Sabira; Deutchman, Mark; Evans, Meredith; Ellis, Robert B; Oyola, Sonia; Maker-Clark, Geeta; Dreibelbis, Tomi; Budnick, Isadore; Tran, David; DeValle, Nicole; Shepard, Rachel; Chow, Erika; Petrin, Christine; Razavi, Alexander; McGowan, Casey; Grant, Austin; Bird, Mackenzie; Carry, Connor; McGowan, Glynis; McCullough, Colleen; Berman, Casey M; Dotson, Kerri; Niu, Tianhua; Sarris, Leah; Harlan, Timothy S; Co-Investigators, On Behalf Of The Chop
2018-01-01
Cardiovascular disease (CVD) annually claims more lives and costs more dollars than any other disease globally amid widening health disparities, despite the known significant reductions in this burden by low cost dietary changes. The world's first medical school-based teaching kitchen therefore launched CHOP-Medical Students as the largest known multisite cohort study of hands-on cooking and nutrition education versus traditional curriculum for medical students. This analysis provides a novel integration of artificial intelligence-based machine learning (ML) with causal inference statistics. 43 ML automated algorithms were tested, with the top performer compared to triply robust propensity score-adjusted multilevel mixed effects regression panel analysis of longitudinal data. Inverse-variance weighted fixed effects meta-analysis pooled the individual estimates for competencies. 3,248 unique medical trainees met study criteria from 20 medical schools nationally from August 1, 2012, to June 26, 2017, generating 4,026 completed validated surveys. ML analysis produced similar results to the causal inference statistics based on root mean squared error and accuracy. Hands-on cooking and nutrition education compared to traditional medical school curriculum significantly improved student competencies (OR 2.14, 95% CI 2.00-2.28, p < 0.001) and MedDiet adherence (OR 1.40, 95% CI 1.07-1.84, p = 0.015), while reducing trainees' soft drink consumption (OR 0.56, 95% CI 0.37-0.85, p = 0.007). Overall improved competencies were demonstrated from the initial study site through the scale-up of the intervention to 10 sites nationally ( p < 0.001). This study provides the first machine learning-augmented causal inference analysis of a multisite cohort showing hands-on cooking and nutrition education for medical trainees improves their competencies counseling patients on nutrition, while improving students' own diets. This study suggests that the public health and medical sectors can
Directory of Open Access Journals (Sweden)
Dominique J. Monlezun
2018-01-01
Full Text Available Background. Cardiovascular disease (CVD) annually claims more lives and costs more dollars than any other disease globally amid widening health disparities, despite the known significant reductions in this burden by low cost dietary changes. The world's first medical school-based teaching kitchen therefore launched CHOP-Medical Students as the largest known multisite cohort study of hands-on cooking and nutrition education versus traditional curriculum for medical students. Methods. This analysis provides a novel integration of artificial intelligence-based machine learning (ML) with causal inference statistics. 43 ML automated algorithms were tested, with the top performer compared to triply robust propensity score-adjusted multilevel mixed effects regression panel analysis of longitudinal data. Inverse-variance weighted fixed effects meta-analysis pooled the individual estimates for competencies. Results. 3,248 unique medical trainees met study criteria from 20 medical schools nationally from August 1, 2012, to June 26, 2017, generating 4,026 completed validated surveys. ML analysis produced similar results to the causal inference statistics based on root mean squared error and accuracy. Hands-on cooking and nutrition education compared to traditional medical school curriculum significantly improved student competencies (OR 2.14, 95% CI 2.00–2.28, p<0.001) and MedDiet adherence (OR 1.40, 95% CI 1.07–1.84, p=0.015), while reducing trainees' soft drink consumption (OR 0.56, 95% CI 0.37–0.85, p=0.007). Overall improved competencies were demonstrated from the initial study site through the scale-up of the intervention to 10 sites nationally (p<0.001). Discussion. This study provides the first machine learning-augmented causal inference analysis of a multisite cohort showing hands-on cooking and nutrition education for medical trainees improves their competencies counseling patients on nutrition, while improving students' own diets. This
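The odds ratios with 95% confidence intervals reported in these abstracts are standard 2x2-table quantities. A sketch of the Wald-interval computation with hypothetical counts (not the study's data):

```python
# Odds ratio with a 95% Wald confidence interval from a 2x2 table,
# the kind of (OR, CI) effect estimate quoted in the abstract above.
# The counts are hypothetical, not taken from the study.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a, b = outcome yes/no in exposed; c, d = in unexposed."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(60, 40, 40, 60)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 2.25 1.28 3.96
```

Note the published estimates come from multilevel mixed-effects regression, which adjusts for clustering and covariates; the plain 2x2 version above only shows where the OR-and-CI form comes from.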
DEFF Research Database (Denmark)
Ozturk, I.; Ottosen, C.O.; Ritz, Christian
2011-01-01
Photosynthetic response to light was measured on the leaves of two cultivars of Rosa hybrida L. (Escimo and Mercedes) in the greenhouse to obtain light-response curves and their parameters. The aim was to use a model to simulate leaf photosynthetic carbon gain with respect to environmental conditions. Leaf gas exchanges were measured at 11 light intensities from 0 to 1,400 µmol/m2s, at 800 ppm CO2, 25°C, and 65 ± 5% relative humidity. In order to describe the data corresponding to different measurement dates, non-linear mixed-effects regression analysis was used. The model successfully ... efficiency. The results suggested an acclimation response, as carbon assimilation rates and stomatal conductance at each measurement date were higher for Escimo than for Mercedes. Differences in photosynthesis rates were attributed to the adaptive capacity of the cultivars to light conditions at a specific ...
Lestini, Giulia; Dumont, Cyrielle; Mentré, France
2015-01-01
Purpose: In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e. when no adaptation is performed, using wrong prior parameters. Methods: We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Results: Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage designs with a small first cohort, but not better than the balanced two-stage design. Conclusions: Two-stage ADs are useful when prior parameters are unreliable. In case of a small first cohort, more adaptations are needed, but these designs are complex to implement. PMID:26123680
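The D-optimality criterion used in this work (maximizing the determinant of the Fisher information matrix) can be illustrated in its simplest setting: for a straight-line model, the information is proportional to X'X, and spreading design points maximizes its determinant. Candidate designs below are made up.

```python
# D-optimality sketch for a straight-line model y = b0 + b1*x:
# information is proportional to X'X, so pick the candidate design
# whose det(X'X) is largest. The candidate designs are invented.

def det_xtx(xs):
    """det of X'X for a design matrix whose rows are (1, x)."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return n * sxx - sx * sx   # closed-form 2x2 determinant

candidates = {
    "clustered": [0.4, 0.5, 0.5, 0.6],  # points bunched mid-range
    "spread":    [0.0, 0.0, 1.0, 1.0],  # points pushed to the extremes
}
best = max(candidates, key=lambda k: det_xtx(candidates[k]))
print(best)  # spread points carry the most information about the slope
```

The paper's setting is far richer (nonlinear mixed-effects model, PFIM, staged updating of priors), but the determinant-comparison step at each adaptation is the same idea.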
A large mixing effect on eta,eta' and iota
International Nuclear Information System (INIS)
Kawai, E.
1983-01-01
We quantitatively investigate a possible large mixing effect on eta(549), eta'(958) and iota(1440) in a phenomenological way, taking both SU(3) symmetry breaking and gluon intervention into due account. (orig.)
The mixing effects for real gases and their mixtures
Gong, M. Q.; Luo, E. C.; Wu, J. F.
2004-10-01
The definitions of the adiabatic and isothermal mixing effects in the mixing processes of real gases are presented in this paper. Eight substances with boiling-point temperatures ranging from cryogenic to ambient were selected, motivated by interest in low-temperature refrigeration, to study their binary and multicomponent mixing effects. Detailed analyses were made of the parameters of the mixing process to determine their influence on the mixing effects. Those parameters include the temperatures, pressures, and mole fraction ratios of the pure substances before mixing. The results show that the maximum temperature variation occurs at the saturation state of each component in the mixing process. Components with higher boiling-point temperatures have higher isothermal mixing effects. The maximum temperature variation, which is defined as the adiabatic mixing effect, can reach up to 50 K, and the isothermal mixing effect can reach about 20 kJ/mol. Possible applications of the mixing cooling effect in both open-cycle and closed-cycle refrigeration systems are also discussed.
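For orientation, the ideal-gas baseline against which such effects are measured is a plain enthalpy balance: ideal-gas streams mix to the cp-weighted mean temperature, and the real-gas departure from that value is the adiabatic mixing effect the abstract quantifies. A sketch with hypothetical streams (the cp values are roughly N2- and CH4-like, chosen for illustration):

```python
# Ideal-gas baseline for adiabatic mixing of gas streams: the mixed
# temperature is the cp-weighted mean of the inlet temperatures.
# Real gases deviate from this; that deviation is the "adiabatic
# mixing effect". Stream data below are hypothetical.

def adiabatic_mix_T(streams):
    """streams: list of (mole_flow, molar_cp, T). Returns mixed T.

    Energy balance: sum(n*cp*T_in) = (sum n*cp) * T_mix, assuming
    constant cp and no enthalpy of mixing (ideal-gas idealization).
    """
    num = sum(n * cp * T for n, cp, T in streams)
    den = sum(n * cp for n, cp, T in streams)
    return num / den

T_mix = adiabatic_mix_T([(1.0, 29.1, 300.0),   # e.g. an N2-like stream
                         (2.0, 35.7, 250.0)])  # e.g. a CH4-like stream
print(round(T_mix, 1))  # 264.5 K
```

Against this baseline, the paper's reported adiabatic mixing effects of up to 50 K for real mixtures are large: the mixed temperature can sit tens of kelvin away from the cp-weighted mean.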
Developmental lead exposure has mixed effects on butterfly cognitive processes.
Philips, Kinsey H; Kobiela, Megan E; Snell-Rood, Emilie C
2017-01-01
While the effects of lead pollution have been well studied in vertebrates, it is unclear to what extent lead may negatively affect insect cognition. Lead pollution in soils can elevate lead in plant tissues, suggesting it could negatively affect neural development of insect herbivores. We used the cabbage white butterfly (Pieris rapae) as a model system to study the effect of lead pollution on insect cognitive processes, which play an important role in how insects locate and handle resources. Cabbage white butterfly larvae were reared on a 4-ppm lead diet, a concentration representative of vegetation in polluted sites; we measured eye size and performance on a foraging assay in adults. Relative to controls, lead-reared butterflies did not differ in time or ability to search for a food reward associated with a less preferred color. Indeed, lead-treated butterflies were more likely to participate in the behavioral assay itself. Lead exposure did not negatively affect survival or body size, and it actually sped up development time. The effects of lead on relative eye size varied with sex: lead tended to reduce eye size in males, but increase eye size in females. These results suggest that low levels of lead pollution may have mixed effects on butterfly vision, but only minimal impacts on performance in foraging tasks, although follow-up work is needed to test whether this result is specific to cabbage whites, which are often associated with disturbed areas.
DEFF Research Database (Denmark)
Öztürk, I.; Ottosen, C.O.; Ritz, C.
2011-01-01
Photosynthetic response to light was measured on the leaves of two cultivars of Rosa hybrida L. (Escimo and Mercedes) in the greenhouse to obtain light-response curves and their parameters. Th e aim was to use a model to simulate leaf photosynthetic carbon gain with respect to environmental condi...
Functional linear models for association analysis of quantitative traits.
Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao
2013-11-01
Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY
International Nuclear Information System (INIS)
Donard, A.; Pointurier, F.; Pecheyran, C.
2015-01-01
Analysis of "environmental samples", which consist of dust collected on cotton cloths wiped by inspectors over surfaces inside declared nuclear facilities, is a key tool for safeguards. Although two methods (fission tracks-TIMS and SIMS) are already used routinely to determine the isotopic composition of uranium particles, laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS) has proven to be an interesting option thanks to its rapidity, high sensitivity and high signal/noise ratio. At CEA and UPPA, the feasibility of particle analysis using a nanosecond LA device and a quadrupole ICP-MS has been demonstrated. However, despite the obvious potential of LA-ICP-MS for particle analysis, the effect of many phenomena which may bias isotope ratio measurements or lead to false detections must be investigated. Environmental samples contain many types of non-uranium particles (organic debris, iron oxides, etc.) that can form molecular interferences and induce a risk of isotopic measurement bias, especially for the minor isotopes (234U, 236U). The influence of these polyatomic interferences on the measurements will be discussed. Moreover, different uranium isotopic compositions can be found in the same sample. Therefore, risks of memory effect and of particle-to-particle cross-contamination by the deposition of ablation debris around the crater have also been investigated. This study was conducted using a femtosecond laser ablation device coupled to a high-sensitivity sector-field ICP-MS. Particles were fixed onto the discs with collodion and located thanks to their fission tracks so that micrometric particles could be analyzed separately. All uranium isotope ratios were measured. Results are compared with those obtained with the fission tracks-TIMS technique on other deposition discs from the same sample. Performance of the method in terms of accuracy, precision, and detection limits is estimated
Two-level mixed modeling of longitudinal pedigree data for genetic association analysis
DEFF Research Database (Denmark)
Tan, Q.
2013-01-01
Genetic association analysis on complex phenotypes under a longitudinal design involving pedigrees encounters the problem of correlation within pedigrees, which could affect statistical assessment of the genetic effects on both the mean level of the phenotype and its rate of change over the time of follow-up. Approaches have been proposed to integrate kinship correlation into the mixed effect models to explicitly model the genetic relationship, which has been proven an efficient way of dealing with sample clustering in pedigree data. Although useful for adjusting relatedness in the mixed ... assess the genetic associations with the mean level and the rate of change in a phenotype, both with kinship correlation integrated in the mixed effect models. We apply our method to longitudinal pedigree data to estimate the genetic effects on systolic blood pressure measured over time in large pedigrees ...
Vibrometer based on a self-mixing effect interferometer
International Nuclear Information System (INIS)
Marti-Lopez, Luis; Gonzalez-Penna, R.; Martinez-Celorio, R. A.
2009-01-01
We outline the basic principles of the self-mixing effect and present the design and construction of an interferometer based on this phenomenon. It differs from those previously reported in the literature in its use of two photodetectors, located in different arms of the interferometer. This feature widens the arsenal of strategies for digital processing of the signal. The interferometer is used as a vibrometer for the characterization of professional loudspeakers. Experimental results are presented as an illustration. (Author)
Marketing mix effects on private labels brand equity
Abril, Carmen; Rodriguez-Cánovas, Belén
2017-01-01
The present study explores some marketing mix effects on private label brand equity creation. The research aims to study the effect of some elements under the retailer's direct control, such as in-store communications, in-store promotions and distribution intensity, as well as other general marketing mix levers such as advertising, perceived price, and monetary promotions. The results indicate that the most efficient marketing mix tools for private label brand equity creation are private labels in...
Energy Technology Data Exchange (ETDEWEB)
Lee, S.
2011-05-05
The Savannah River Remediation (SRR) Organization requested that Savannah River National Laboratory (SRNL) develop a Computational Fluid Dynamics (CFD) method to mix and blend the miscible contents of the blend tanks to ensure the contents are properly blended before they are transferred from the blend tank, such as Tank 50H, to the Salt Waste Processing Facility (SWPF) feed tank. The work described here consists of two modeling areas: the mixing analysis during the miscible liquid blending operation, and the flow pattern analysis during the transfer operation of the blended liquid. The transient CFD governing equations, consisting of three momentum equations, one mass balance, two turbulence transport equations for kinetic energy and dissipation rate, and one species transport equation, were solved by an iterative technique until the species concentrations of the tank fluid were in equilibrium. The steady-state flow solutions for the entire tank fluid were used for flow pattern analysis, for velocity scaling analysis, and as the initial conditions for transient blending calculations. A series of modeling calculations were performed to estimate the blending times for various jet flow conditions, and to investigate the impact of the cooling coils on the blending time of the tank contents. The modeling results were benchmarked against the pilot-scale test results. All of the flow and mixing models were run with the nozzles installed at the mid-elevation, parallel to the tank wall. From the CFD modeling calculations, the main results are summarized as follows: (1) The benchmark analyses for the CFD flow velocity and blending models demonstrate their consistency with Engineering Development Laboratory (EDL) and literature test results in terms of local velocity measurements and experimental observations. Thus, an application of the established criterion to the SRS full-scale tank will provide a better, physically based estimate of the required mixing time, and
Operations and Modeling Analysis
Ebeling, Charles
2005-01-01
The Reliability and Maintainability Analysis Tool (RMAT) provides NASA the capability to estimate reliability and maintainability (R&M) parameters and operational support requirements for proposed space vehicles, based upon relationships established from both aircraft and Shuttle R&M data. RMAT has matured both in its underlying database and in its level of sophistication in extrapolating this historical data to satisfy proposed mission requirements, maintenance concepts and policies, and type of vehicle (i.e., ranging from aircraft-like to Shuttle-like). However, a companion analysis tool, the Logistics Cost Model (LCM), has not reached the same level of maturity as RMAT due, in large part, to nonexistent or outdated cost estimating relationships and underlying cost databases, and its almost exclusive dependence on Shuttle operations and logistics cost input parameters. As a result, the full capability of the RMAT/LCM suite of analysis tools to take a conceptual vehicle and derive its operations and support requirements, along with the resulting operating and support costs, has not been realized.
Where are the BB-bar mixing effects observable in the UPSILON region
International Nuclear Information System (INIS)
Ono, S.; Toernqvist, N.A.; Lee-Franzini, J.; Sanda, A.I.
1985-01-01
We estimate the B_d-B-bar_d and B_s-B-bar_s mixing effects using computed production cross sections for B_qB-bar_q, B_qB-bar*_q (+ c.c.), and B*_qB-bar*_q (q = d, s) in the region from UPSILON(4S) to UPSILON(7S). It is shown that the mixing signal of same-sign leptons will peak at UPSILON(5S) under the assumption of standard model estimates, and that if a total integrated luminosity of 1000 pb^-1 is achieved, it will be observable
International Nuclear Information System (INIS)
Tran, H.; Flaud, P.-M.; Fouchet, T.; Gabard, T.; Hartmann, J.-M.
2006-01-01
The absorption shapes of the ν2, ν3 and ν4 infrared bands of CH4 perturbed by H2 over large ranges of pressure and temperature have been measured in the laboratory. In order to model these spectra, the theoretical approach accounting for line-mixing effects, proposed for CH4-N2 and CH4-air and successfully tested in the companion paper (I), is used. As before, state-to-state rotational rates are used together with some empirical parameters that are deduced from a fit of a single room-temperature spectrum of the ν3 band at about 50 atm. The comparisons between measured and calculated spectra in the ν3 and ν4 regions under a vast variety of conditions (9-300 atm, 80-300 K) then demonstrate the quality and consistency of the proposed model. In the case of the ν2 band, which is of E symmetry, specific parameters, different from those adapted to the ν3 and ν4 transitions of F2 symmetry, are used for proper modeling of the spectral shape. Furthermore, as shown previously, a broad absorption feature grows underneath the ν2 band with increasing H2 density. The latter, for which an empirical model is proposed, is attributed to a collision-induced absorption (CIA) process in methane. From the developed models, a database and associated software are built for the updating of planetary atmosphere radiative transfer codes. The quality of these tools is then further demonstrated using emission measurements of the Jovian and Saturnian atmospheres in the ν4 region (7-10 μm) recorded by the Short Wave Spectrometer of the Infrared Space Observatory and the Composite Infrared Spectrometer on board Cassini. Comparisons between measured radiances and predictions confirm the failure of the purely Lorentzian approach and the quality of the proposed line-mixing model. Furthermore, it is shown that the methane CIA contribution has a significant influence on the planetary emission beyond 1400 cm^-1
Boulet, C.; Ma, Q.
2016-01-01
Line mixing effects have been calculated in the ν₁ parallel band of self-broadened NH₃. The theoretical approach is an extension of a semi-classical model to symmetric-top molecules with inversion symmetry developed in the companion paper [Q. Ma and C. Boulet, J. Chem. Phys. 144, 224303 (2016)]. This model takes into account line coupling effects and hence enables the calculation of the entire relaxation matrix. A detailed analysis of the various coupling mechanisms is carried out for Q and R inversion doublets. The model has been applied to the calculation of the shape of the Q branch and of some R manifolds for which an obvious signature of line mixing effects has been experimentally demonstrated. Comparisons with measurements show that the present formalism leads to an accurate prediction of the available experimental line shapes. Discrepancies between the experimental and theoretical sets of first-order mixing parameters are discussed, as well as some extensions of both theory and experiment.
2014-01-01
This study developed a new snow model and a database which warehouses geometric, weather, and traffic data on New Jersey highways. The complexity of the model development lies in considering variable road width, different spreading/plowing pattern...
Survival analysis models and applications
Liu, Xian
2012-01-01
Survival analysis concerns sequential occurrences of events governed by probabilistic laws. Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading on to some of the most advanced topics in survival analysis. Assumes only a minimal knowledge of SAS whilst enabling...
International Nuclear Information System (INIS)
Malik, S; Bloom, K; Shipsey, I; Cavanaugh, R; Klima, B; Chan, Kai-Feng; D'Hondt, J; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T
2014-01-01
To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials, and physics analysis. This effort, epitomized as CMSDAS, has proven key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS schools around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrades, the education of those new to CMS, and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.
Energy Technology Data Exchange (ETDEWEB)
Malik, S. [Nebraska U.; Shipsey, I. [Purdue U.; Cavanaugh, R. [Illinois U., Chicago; Bloom, K. [Nebraska U.; Chan, Kai-Feng [Taiwan, Natl. Taiwan U.; D' Hondt, J. [Vrije U., Brussels; Klima, B. [Fermilab; Narain, M. [Brown U.; Palla, F. [INFN, Pisa; Rolandi, G. [CERN; Schörner-Sadenius, T. [DESY
2014-01-01
To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials, and physics analysis. This effort, epitomized as CMSDAS, has proven key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS schools around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrades, the education of those new to CMS, and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.
A multivariate nonlinear mixed effects method for analyzing energy partitioning in growing pigs
DEFF Research Database (Denmark)
Strathe, Anders Bjerring; Danfær, Allan Christian; Chwalibog, André
2010-01-01
Simultaneous equations have become increasingly popular for describing the effects of nutrition on the utilization of ME for protein (PD) and lipid deposition (LD) in animals. The study developed a multivariate nonlinear mixed effects (MNLME) framework and compared it with an alternative method ... for estimating parameters in simultaneous equations that described energy metabolism in growing pigs, and then proposed new PD and LD equations. The general statistical framework was implemented in the NLMIXED procedure in SAS. Alternative PD and LD equations were also developed, which assumed ... to the multivariate nonlinear regression model because the MNLME method accounted for correlated errors associated with PD and LD measurements and could also include the random effect of animal. It is recommended that multivariate models used to quantify energy metabolism in growing pigs should account for animal ...
ORGANISATIONAL CULTURE ANALYSIS MODEL
Mihaela Simona Maracine
2012-01-01
The studies and research undertaken have demonstrated the importance of studying organisational culture because of the practical value it presents and because it contributes to increasing the organisation's performance. The analysis of the organisational culture's dimensions allows observing human behaviour within the organisation and highlighting reality, identifying the strengths and also the weaknesses which have an impact on its functionality and development. In this paper, we try to...
International Nuclear Information System (INIS)
Yang, J.M.; Li, C.S.
1996-01-01
Taking into account the mixing effects between left- and right-handed top squarks, we calculate the genuine supersymmetric electroweak correction to top-quark production at the Fermilab Tevatron in the minimal supersymmetric model. The analytic expressions of the corrections to both the parton level cross section and the total hadronic cross section are presented. Some numerical examples are also given to show the size of the corrections. copyright 1996 The American Physical Society
ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT
International Nuclear Information System (INIS)
Clinton Lum
2002-01-01
The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997); WA-0358, ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999); and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs) 02 and 03 of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4
Intercity Travel Demand Analysis Model
Ming Lu; Hai Zhu; Xia Luo; Lei Lei
2014-01-01
It is well known that intercity travel is an important component of travel demand and belongs to short-distance corridor travel. The conventional four-step method is no longer suitable for short-distance corridor travel demand analysis because the time spent on urban traffic has a great impact on a traveler's main mode choice. To solve this problem, the author studied the existing intercity travel demand analysis model, then improved it based on the study, and finally established a combined model...
Uncertainty analysis of environmental models
International Nuclear Information System (INIS)
Monte, L.
1990-01-01
In the present paper an evaluation of the output uncertainty of an environmental model for assessing the transfer of ¹³⁷Cs and ¹³¹I in the human food chain is carried out on the basis of a statistical analysis of data reported in the literature. The uncertainty analysis offers the opportunity of obtaining some remarkable information about the uncertainty of models predicting the migration of non-radioactive substances in the environment, mainly in relation to dry and wet deposition
Multiscale Signal Analysis and Modeling
Zayed, Ahmed
2013-01-01
Multiscale Signal Analysis and Modeling presents recent advances in multiscale analysis and modeling using wavelets and other systems. This book also presents applications in digital signal processing using sampling theory and techniques from various function spaces, filter design, feature extraction and classification, signal and image representation/transmission, coding, nonparametric statistical signal processing, and statistical learning theory. This book also: Discusses recently developed signal modeling techniques, such as the multiscale method for complex time series modeling, multiscale positive density estimations, Bayesian Shrinkage Strategies, and algorithms for data adaptive statistics Introduces new sampling algorithms for multidimensional signal processing Provides comprehensive coverage of wavelets with presentations on waveform design and modeling, wavelet analysis of ECG signals and wavelet filters Reviews feature extraction and classification algorithms for multiscale signal and image proce...
Multivariate analysis: models and method
International Nuclear Information System (INIS)
Sanz Perucha, J.
1990-01-01
Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. A final goal is decision making. The paper describes the models and methods of multivariate analysis
Domain specific modeling and analysis
Jacob, Joost Ferdinand
2008-01-01
It is desirable to model software systems in such a way that analysis of the systems, and tool development for such analysis, is readily possible and feasible in the context of large scientific research projects. This thesis emphasizes the methodology that serves as a basis for such developments.
Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2009-01-01
This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
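The resolution-III fractional-factorial designs and first-order polynomial metamodels this overview covers can be illustrated with a minimal sketch. The simulation response below is a hypothetical stand-in (an assumption for illustration), not a model from the contribution itself:

```python
# Sketch: a 2^(3-1) resolution-III fractional-factorial design (generator
# C = A*B) and a first-order polynomial metamodel fitted by least squares.

def simulate(a, b, c):
    # Hypothetical simulation response with known first-order effects.
    return 10.0 + 3.0 * a - 2.0 * b + 0.5 * c

# Four runs of the half fraction; C is aliased with the A*B interaction.
design = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
runs = [(a, b, a * b) for a, b in design]
y = [simulate(a, b, c) for a, b, c in runs]

n = len(runs)
beta0 = sum(y) / n
# With orthogonal +/-1 columns, each effect estimate reduces to an average.
betas = [sum(x[j] * yi for x, yi in zip(runs, y)) / n for j in range(3)]
print(beta0, betas)  # intercept and estimates for A, B, C (C confounded with AB)
```

On this noiseless stand-in the fit recovers the true coefficients exactly; with a real stochastic simulation the same estimates would carry sampling noise.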
Stochastic modeling analysis and simulation
Nelson, Barry L
1995-01-01
A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se
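One of the building blocks the volume treats, Markov chains in discrete time, can be sketched in a few lines; the two-state transition matrix below is an invented example, not one from the book:

```python
# Sketch: stationary distribution of a discrete-time Markov chain, found by
# repeatedly applying the transition matrix (power method).

P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = [0.5, 0.5]  # arbitrary starting distribution
for _ in range(200):
    # pi' = pi P: one step of the chain's distribution.
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

print([round(p, 4) for p in pi])  # converges to [5/6, 1/6]
```

The fixed point satisfies pi = pi P, which for this matrix gives pi = (5/6, 1/6); iterating the matrix is the simulation-flavored route to the same answer that a linear solve would give.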
Model Based Analysis of Ethnic Differences in Type 2 Diabetes
DEFF Research Database (Denmark)
Møller, Jonas Bech
the applicability of stochastic differential equations (SDEs) and non-linear mixed effects (NLME) models for such an assessment. One way to perform such an investigation is to characterise the pathophysiology of the two groups at different stages of disease progression. For T2D this involves a characterisation ... equations (SDEs) or another improved description of residuals. For characterising disease progression in Caucasians and Japanese, established models that include parameters for insulin sensitivity and beta-cell function were implemented in a non-linear mixed-effects setting with ODEs. Based on the ACF ... beta-cell function, measured by simple insulin-based measures, could be explained by differences in body size (BMI). This was supported by Forest plots of covariate effects obtained from population models, in general indicating that race had no clinically relevant effect on either the insulin sensitivity or the beta...
Reliability analysis and operator modelling
International Nuclear Information System (INIS)
Hollnagel, Erik
1996-01-01
The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed
Intercity Travel Demand Analysis Model
Directory of Open Access Journals (Sweden)
Ming Lu
2014-01-01
It is well known that intercity travel is an important component of travel demand and belongs to short-distance corridor travel. The conventional four-step method is no longer suitable for short-distance corridor travel demand analysis because the time spent on urban traffic has a great impact on a traveler's main mode choice. To solve this problem, the author studied the existing intercity travel demand analysis model, then improved it based on the study, and finally established a combined model of main mode choice and access mode choice. An integrated multilevel nested logit model structure system was then built. The model system includes trip generation, destination choice, and mode-route choice based on a multinomial logit model, and it achieves linkage and feedback between each part through a logsum variable. The model was applied to the Shenzhen intercity railway passenger demand forecast in 2010 as a case study. The forecast results were consistent with observed data, verifying the model's correctness and feasibility.
Bayesian analysis of CCDM models
Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.
2017-09-01
Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow models to be compared on goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH₀ model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
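The AIC/BIC comparison described above can be sketched numerically with the standard Gaussian-likelihood forms of the criteria. The residual sums of squares, sample size, and parameter counts below are invented for illustration, not taken from the paper's SNe Ia fits:

```python
import math

# Sketch: comparing two models by AIC and BIC from their residual sums of
# squares (RSS), using the Gaussian-likelihood forms of the criteria.

def aic(rss, n, k):
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    return n * math.log(rss / n) + k * math.log(n)

n = 580                            # hypothetical number of supernovae
rss_simple, k_simple = 562.0, 1    # e.g. a one-parameter creation model
rss_rich, k_rich = 560.0, 3        # a three-parameter alternative

# Lower is better; BIC penalizes the extra parameters more heavily than
# AIC, which is why excess complexity can sink a slightly better fit.
print(aic(rss_simple, n, k_simple) - aic(rss_rich, n, k_rich))
print(bic(rss_simple, n, k_simple) - bic(rss_rich, n, k_rich))
```

With these made-up numbers both differences come out negative, i.e. the small improvement in fit does not justify two extra parameters under either criterion.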
Bayesian analysis of CCDM models
Energy Technology Data Exchange (ETDEWEB)
Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)
2017-09-01
Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow models to be compared on goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH₀ model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
Energy-Water Modeling and Analysis | Energy Analysis | NREL
NREL's energy-water modeling and analysis examines energy-sector vulnerabilities arising from various factors, including water. Related resources include electricity-generation analysis with the ReEDS model and the report "U.S. Energy Sector Vulnerabilities to Climate Change and Extreme Weather"; example projects include the Renewable Electricity Futures Study.
Conformational analysis of lignin models
International Nuclear Information System (INIS)
Santos, Helio F. dos
2001-01-01
The conformational equilibrium for two 5,5'-biphenyl lignin models has been analyzed using a quantum mechanical semiempirical method. The gas phase and solution structures are discussed based on the NMR and X-ray experimental data. The results obtained showed that the observed conformations are solvent-dependent, with the geometries and the thermodynamic properties correlating with the experimental information. This study shows how a systematic theoretical conformational analysis can help to understand chemical processes at a molecular level. (author)
Distribution system modeling and analysis
Kersting, William H
2001-01-01
For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and includes many num...
Ventilation Model and Analysis Report
International Nuclear Information System (INIS)
Chipman, V.
2003-01-01
This model and analysis report develops, validates, and implements a conceptual model for heat transfer in and around a ventilated emplacement drift. This conceptual model includes thermal radiation between the waste package and the drift wall, convection from the waste package and drift wall surfaces into the flowing air, and conduction in the surrounding host rock. These heat transfer processes are coupled and vary both temporally and spatially, so numerical and analytical methods are used to implement the mathematical equations which describe the conceptual model. These numerical and analytical methods predict the transient response of the system, at the drift scale, in terms of spatially varying temperatures and ventilation efficiencies. The ventilation efficiency describes the effectiveness of the ventilation process in removing radionuclide decay heat from the drift environment. An alternative conceptual model is also developed which evaluates the influence of water and water vapor mass transport on the ventilation efficiency. These effects are described using analytical methods which bound the contribution of latent heat to the system, quantify the effects of varying degrees of host rock saturation (and hence host rock thermal conductivity) on the ventilation efficiency, and evaluate the effects of vapor and enhanced vapor diffusion on the host rock thermal conductivity
ANALYSIS MODEL FOR INVENTORY MANAGEMENT
Directory of Open Access Journals (Sweden)
CAMELIA BURJA
2010-01-01
The inventory represents an essential component of the assets of the enterprise, and economic analysis gives it special importance because its accurate management determines the achievement of the activity object and the financial results. The efficient management of inventory requires ensuring an optimum level, which will guarantee the normal functioning of the activity with minimum inventory expenses and immobilised funds. The paper presents an analysis model for inventory management based on rotation speed and the correlation with sales volume, illustrated in an adequate study. Highlighting the influence factors on efficient inventory management provides the useful information needed to justify managerial decisions, which will lead to a balanced financial position and to increased company performance.
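The rotation-speed indicator that such inventory models build on can be sketched in a few lines; the sales and inventory figures below are assumed for illustration, not from the paper's study:

```python
# Sketch of the inventory rotation-speed indicators: turnover (how many
# times inventory rotates in a period) and days of inventory.

def rotation_speed(sales, avg_inventory):
    """Number of inventory rotations over the period."""
    return sales / avg_inventory

def days_of_inventory(sales, avg_inventory, period_days=360):
    """Average number of days a unit of inventory stays in stock."""
    return period_days / rotation_speed(sales, avg_inventory)

sales = 720_000          # period sales, monetary units (assumed)
avg_inventory = 90_000   # average inventory over the period (assumed)

print(rotation_speed(sales, avg_inventory))     # 8.0 rotations
print(days_of_inventory(sales, avg_inventory))  # 45.0 days
```

A higher rotation speed (shorter days of inventory) at constant sales means less capital immobilised in stock, which is the efficiency lever the abstract describes.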
Simplified model for DNB analysis
International Nuclear Information System (INIS)
Silva Filho, E.
1979-08-01
In a pressurized water nuclear reactor (PWR), the operating power is restricted by the possibility of the occurrence of departure from nucleate boiling (DNB) in the hottest channel of the core. The present work proposes a simplified model that analyses the thermal-hydraulic conditions of the coolant in the hottest channel of PWRs with the objective of evaluating DNB in this channel. For this, coupling between the hot channel and typical nominal channels is assumed by imposing the existence of a cross flow between these channels such that a uniform axial pressure distribution results along the channels. The model is applied to the Angra-I reactor and the results are compared with those of the Final Safety Analysis Report (FSAR) obtained by Westinghouse through the THINC program, and are considered satisfactory (Author) [pt
Simulation of mixing effects in a VVER-1000 reactor
International Nuclear Information System (INIS)
Ulrich Bieder; Gauthier Fauchet; Sylvie Betin; Nikola Kolev; Dimitar Popov
2005-01-01
a tetrahedral mesh with more than 15 million control volumes has been created. In this model, especially the geometrical details of the downcomer and the lower plenum (including the 165 mixing columns) have been taken into account in the simulation. The Trio-U calculation correctly reproduced the measured rotation of the flow when real CAD plant data were used. This is also true for the comparison of cold-leg-to-assembly mixing coefficients. Using the conception data, the calculated swirl was significantly underestimated. Due to this result, it is now possible to improve the lower plenum flow mixing matrices which are usually used in system codes. (authors)
Geologic Framework Model Analysis Model Report
Energy Technology Data Exchange (ETDEWEB)
R. Clayton
2000-12-19
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the
Geologic Framework Model Analysis Model Report
International Nuclear Information System (INIS)
Clayton, R.
2000-01-01
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and
A Bayesian hierarchical model for demand curve analysis.
Ho, Yen-Yi; Nhu Vo, Tien; Chu, Haitao; Luo, Xianghua; Le, Chap T
2018-07-01
Drug self-administration experiments are a frequently used approach to assessing the abuse liability and reinforcing property of a compound. It has been used to assess the abuse liabilities of various substances such as psychomotor stimulants and hallucinogens, food, nicotine, and alcohol. The demand curve generated from a self-administration study describes how demand of a drug or non-drug reinforcer varies as a function of price. With the approval of the 2009 Family Smoking Prevention and Tobacco Control Act, demand curve analysis provides crucial evidence to inform the US Food and Drug Administration's policy on tobacco regulation, because it produces several important quantitative measurements to assess the reinforcing strength of nicotine. The conventional approach popularly used to analyze the demand curve data is individual-specific non-linear least square regression. The non-linear least square approach sets out to minimize the residual sum of squares for each subject in the dataset; however, this one-subject-at-a-time approach does not allow for the estimation of between- and within-subject variability in a unified model framework. In this paper, we review the existing approaches to analyze the demand curve data, non-linear least square regression, and the mixed effects regression and propose a new Bayesian hierarchical model. We conduct simulation analyses to compare the performance of these three approaches and illustrate the proposed approaches in a case study of nicotine self-administration in rats. We present simulation results and discuss the benefits of using the proposed approaches.
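The one-subject-at-a-time nonlinear least-squares approach the authors contrast with mixed-effects and Bayesian modeling can be sketched, assuming the exponential demand equation of Hursh and Silberberg, log10(Q) = log10(Q0) + k(e^(-alpha*Q0*C) - 1). The data, the fixed k, and the crude grid search standing in for a proper optimizer are all illustrative assumptions:

```python
import math

# Sketch: per-subject nonlinear least-squares fit of the exponential demand
# equation to (price, consumption) data for one subject.

k = 2.0                                  # range constant, assumed fixed
prices = [0.01, 0.03, 0.1, 0.3, 1.0]     # unit prices C (illustrative)

def predict(q0, alpha, c):
    return math.log10(q0) + k * (math.exp(-alpha * q0 * c) - 1.0)

# Synthetic noiseless consumption generated from known parameters.
log_q = [predict(100.0, 0.01, c) for c in prices]

def rss(q0, alpha):
    return sum((y - predict(q0, alpha, c)) ** 2 for y, c in zip(log_q, prices))

# Crude grid search stands in for a real NLS optimizer (e.g. Gauss-Newton).
best = min(((q0, a) for q0 in [50.0, 100.0, 150.0]
            for a in [0.005, 0.01, 0.02]),
           key=lambda p: rss(*p))
print(best)  # recovers (100.0, 0.01) on this noiseless data
```

Fitting each subject separately like this yields one (Q0, alpha) pair per animal but, as the paper notes, provides no unified estimate of between- and within-subject variability; that is the gap the mixed-effects and Bayesian hierarchical models fill.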
Arc modeling for welding analysis
International Nuclear Information System (INIS)
Glickstein, S.S.
1978-04-01
A one-dimensional model of the welding arc that considers heat generation by the Joule effect and heat losses by radiation and conduction has been used to study the effects of various gases and gas mixtures currently employed for welding applications. Minor additions of low ionization potential impurities to these gases are shown to significantly perturb the electrical properties of the parent gas, causing gross changes in the radial temperature distribution of the arc discharge. Such changes are reflected in the current density distribution and ultimately in the input energy distribution to the weldment. The result is observed as a variation in weld penetration. Recently published experiments and analyses of welding arcs are also evaluated and shown to contain erroneous data and results. Contrary to previous beliefs, the inclusion of a radiation loss term in the basic energy balance equation is important and cannot be considered negligible in an argon arc at temperatures as low as 10,000 °K. The one-dimensional analysis of the welding arc, as well as the evaluation of these earlier published reports, helps to explain the effects of various gases used for welding, improves our understanding of the physics of the welding arc, and provides a stepping stone for a more elaborate model which can be applied to help optimize welding parameters.
Model Performance Evaluation and Scenario Analysis (MPESA)
Model Performance Evaluation and Scenario Analysis (MPESA) assesses how well models predict time series data. The tool was developed for the Hydrological Simulation Program-FORTRAN (HSPF) and the Stormwater Management Model (SWMM).
Restoration of Tidal Flow to Impounded Salt Marsh Exerts Mixed Effect on Leaf Litter Decomposition
Henry, B. A.; Schade, J. D.; Foreman, K.
2015-12-01
Salt marsh impoundments (e.g. roads, levees) disconnect marshes from ocean tides, which impairs ecosystem services and often promotes invasive species. Numerous restoration projects now focus on removing impoundments. Leaf litter decomposition is a central process in salt marsh carbon and nutrient cycles, and this study investigated the extent to which marsh restoration alters litter decomposition rates. We considered three environmental factors that can potentially change during restoration: salinity, tidal regime, and dominant plant species. A one-month field experiment (Cape Cod, MA) measured decay of litter bags in impounded, restored, and natural marshes under ambient conditions. A two-week lab experiment measured litter decay in controlled incubations under experimental treatments for salinity (1ppt and 30 ppt), tidal regime (inundated and 12 hr wet-dry cycles), and plant species (native Spartina alterniflora and invasive Phragmites australis). S. alterniflora decomposed faster in situ than P. australis (14±1.0% mass loss versus 0.74±0.69%). Corroborating this difference in decomposition, S. alterniflora supported greater microbial respiration during lab incubation, measured as CO2 flux from leaf litter and biological oxygen demand of water containing leached organic matter (OM). However, nutrient analysis of plant tissue and leached OM show P. australis released more nitrogen than S. alterniflora. Low salinity treatments in both lab and field experiments decayed more rapidly than high salinity treatments, suggesting that salinity inhibited microbial activity. Manipulation of inundation regime did not affect decomposition. These findings suggest the reintroduction of tidal flow to an impounded salt marsh can have mixed effects; recolonization by the native cordgrass could supply labile OM to sediment and slow carbon sequestration, while an increase in salinity might inhibit decomposition and accelerate sequestration.
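Mass-loss results like those above are commonly summarized by a first-order decay constant k from M(t) = M0*e^(-kt); the roughly 14% one-month loss reported for S. alterniflora corresponds to k ≈ -ln(0.86)/30 ≈ 0.005 per day. A minimal sketch with synthetic litter-bag data (the time points and rate are illustrative, not the study's raw data):

```python
import numpy as np

# Synthetic litter-bag series: fraction of initial mass remaining over time,
# generated from an assumed first-order decay with k = 0.005 per day
days = np.array([0.0, 7.0, 14.0, 21.0, 28.0])
frac_remaining = np.exp(-0.005 * days)

def decay_constant(t, m):
    """First-order decay M(t) = M0 * exp(-k t): k is minus the log-linear slope."""
    slope, _ = np.polyfit(t, np.log(m), 1)
    return -slope

k_hat = decay_constant(days, frac_remaining)   # recovers ~0.005 per day
```

Fitting one k per treatment (species x salinity x inundation) gives directly comparable decay rates across the experimental factors.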
Directory of Open Access Journals (Sweden)
Hugo Hesser
2015-05-01
Full Text Available Growth models (also known as linear mixed effects models, multilevel models, and random coefficients models) have the capability of studying change at the group as well as the individual level. In addition, these methods have documented advantages over traditional data analytic approaches in the analysis of repeated-measures data. These advantages include, but are not limited to, the ability to incorporate time-varying predictors, to handle dependence among repeated observations in a very flexible manner, and to provide accurate estimates with missing data under fairly unrestrictive missing-data assumptions. The flexibility of the growth curve modeling approach to the analysis of change makes it the preferred choice in the evaluation of direct, indirect and moderated intervention effects. Although offering many benefits, growth models present challenges in terms of design, analysis and reporting of results. This paper provides a nontechnical overview of growth models in the analysis of change in randomized experiments and advocates for their use in the field of internet interventions. Practical recommendations for design, analysis and reporting of results from growth models are provided.
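A hedged sketch of the idea: the full mixed-model machinery estimates fixed effects and random-coefficient variances jointly, but a simplified two-stage approximation (per-subject OLS growth curves, then averaging the coefficients) conveys the structure. All numbers below are simulated, not from any study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_obs = 40, 6
t = np.arange(n_obs, dtype=float)              # measurement occasions
beta0, beta1 = 10.0, -0.8                      # population intercept and slope
b0 = rng.normal(0.0, 2.0, n_subj)              # subject-specific random intercepts
b1 = rng.normal(0.0, 0.3, n_subj)              # subject-specific random slopes
y = (beta0 + b0[:, None]) + (beta1 + b1[:, None]) * t + rng.normal(0.0, 0.5, (n_subj, n_obs))

# Stage 1: a growth curve (OLS line) fit per subject
coefs = np.array([np.polyfit(t, yi, 1) for yi in y])   # columns: [slope, intercept]
# Stage 2: population ("fixed effect") estimates and between-subject spread
slope_hat, icept_hat = coefs[:, 0].mean(), coefs[:, 1].mean()
slope_sd = coefs[:, 0].std(ddof=1)             # crude random-slope spread
```

Unlike this two-stage shortcut, a true growth model weighs subjects by precision and handles unbalanced or missing occasions, which is exactly the advantage the abstract emphasizes.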
Hypersonic - Model Analysis as a Service
DEFF Research Database (Denmark)
Acretoaie, Vlad; Störrle, Harald
2014-01-01
Hypersonic is a Cloud-based tool that proposes a new approach to the deployment of model analysis facilities. It is implemented as a RESTful Web service API offering analysis features such as model clone detection. This approach allows the migration of resource-intensive analysis algorithms from...
Formal Analysis of Domain Models
National Research Council Canada - National Science Library
Bharadwaj, Ramesh
2002-01-01
Recently, there has been a great deal of interest in the application of formal methods, in particular, precise formal notations and automatic analysis tools for the creation and analysis of requirements specifications (i.e...
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
DEFF Research Database (Denmark)
Juul, Rasmus V; Knøsgaard, Katrine R; Olesen, Anne E
2016-01-01
Joint analysis of pain intensity and opioid consumption is encouraged in trials of postoperative pain. However, previous approaches have not appropriately addressed the complexity of their interrelation in time. In this study, we applied a non-linear mixed effects model to simultaneously study pain intensity and opioid consumption in a 4-h postoperative period for 44 patients undergoing percutaneous kidney stone surgery. Analysis was based on 748 Numerical Rating Scale (NRS) scores of pain intensity and 51 observed morphine and oxycodone dosing events. A joint model was developed to describe the recurrent pattern of four key phases determining the development of pain intensity and opioid consumption in time; (A) Distribution of pain intensity scores, which followed a truncated Poisson distribution with time-dependent mean score ranging from 0.93 to 2.45; (B) Probability of transition to threshold...
Water loss in horticultural products. Modelling, data analysis and theoretical considerations
Tijskens, L.M.M.; Jacob, S.; Schouten, R.E.; Fernandez-Trujillo, J.P.; Dos-Santos, N.; Vangdal, E.; Pagan, E.; Perez Pastor, A.
2010-01-01
The water loss of individual fruit (melon, plum and mandarin) was analysed using the traditional diffusion-based approach and a kinetic approach. When simple non-linear regression is applied, the two approaches are the same, resulting in a quite acceptable analysis. However, by applying mixed effects non
Statistical Modelling of Wind Profiles - Data Analysis and Modelling
DEFF Research Database (Denmark)
Jónsson, Tryggvi; Pinson, Pierre
The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.
Model Checking as Static Analysis
DEFF Research Database (Denmark)
Zhang, Fuyuan
The set of states satisfying a CTL formula can be characterized as the least model of ALFP clauses specifying this CTL formula. The existence of the least model of ALFP clauses is ensured by the Moore Family property of ALFP. Then, we take fairness assumptions in CTL into consideration and have shown that CTL fairness problems can be encoded into ALFP as well. To deal with multi-valued model checking problems, we have proposed multi-valued ALFP. A Moore Family result for multi-valued ALFP is also established, which ensures the existence and uniqueness of the least model. When the truth values in multi-valued ALFP constitute a finite distributive complete lattice, multi-valued ALFP can be reduced to two-valued ALFP. This result enables the implementation of a solver for multi-valued ALFP by reusing existing solvers for two-valued ALFP. Our ALFP-based technique developed for two-valued CTL naturally generalizes...
CMS Data Analysis School Model
Malik, Sudhir; Cavanaugh, R; Bloom, K; Chan, Kai-Feng; D'Hondt, J; Klima, B; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T
2014-01-01
To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born three years ago at the LPC (LHC Physics Center), Fermilab, and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS around the globe, CMS is trying to engage the collaboration's discovery potential and maximize the physics output. As a bigger goal, CMS is striving to nurture and increase engagement of the myriad talents of CMS in the development of physics, service, upgrade, education of those new to CMS and the caree...
Adaptive streaming applications : analysis and implementation models
Zhai, Jiali Teddy
2015-01-01
This thesis presents a highly automated design framework, called DaedalusRT, and several novel techniques. As the foundation of the DaedalusRT design framework, two types of dataflow Models-of-Computation (MoC) are used, one as timing analysis model and another one as the implementation model. The
Representing Uncertainty on Model Analysis Plots
Smith, Trevor I.
2016-01-01
Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model.…
Analysis of radiology business models.
Enzmann, Dieter R; Schomer, Donald F
2013-03-01
As health care moves to value orientation, radiology's traditional business model faces challenges to adapt. The authors describe a strategic value framework that radiology practices can use to best position themselves in their environments. This simplified construct encourages practices to define their dominant value propositions. There are 3 main value propositions that form a conceptual triangle, whose vertices represent the low-cost provider, the product leader, and the customer intimacy models. Each vertex has been a valid market position, but each demands specific capabilities and trade-offs. The underlying concepts help practices select value propositions they can successfully deliver in their competitive environments. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Combustion instability modeling and analysis
Energy Technology Data Exchange (ETDEWEB)
Santoro, R.J.; Yang, V.; Santavicca, D.A. [Pennsylvania State Univ., University Park, PA (United States); Sheppard, E.J. [Tuskeggee Univ., Tuskegee, AL (United States). Dept. of Aerospace Engineering
1995-12-31
It is well known that the two key elements for achieving low emissions and high performance in a gas turbine combustor are to simultaneously establish (1) a lean combustion zone for maintaining low NO{sub x} emissions and (2) rapid mixing for good ignition and flame stability. However, these requirements, when coupled with the short combustor lengths used to limit the residence time for NO formation typical of advanced gas turbine combustors, can lead to problems regarding unburned hydrocarbons (UHC) and carbon monoxide (CO) emissions, as well as the occurrence of combustion instabilities. The concurrent development of suitable analytical and numerical models that are validated with experimental studies is important for achieving this objective. A major benefit of the present research will be to provide for the first time an experimentally verified model of emissions and performance of gas turbine combustors. The present study represents a coordinated effort between industry, government and academia to investigate gas turbine combustion dynamics. Specific study areas include development of advanced diagnostics, definition of controlling phenomena, advancement of analytical and numerical modeling capabilities, and assessment of the current status of our ability to apply these tools to practical gas turbine combustors. The present work involves four tasks which address, respectively, (1) the development of a fiber-optic probe for fuel-air ratio measurements, (2) the study of combustion instability using laser-based diagnostics in a high pressure, high temperature flow reactor, (3) the development of analytical and numerical modeling capabilities for describing combustion instability which will be validated against experimental data, and (4) the preparation of a literature survey and establishment of a data base on practical experience with combustion instability.
Hierarchical modeling and analysis for spatial data
Banerjee, Sudipto; Gelfand, Alan E
2003-01-01
Among the many uses of hierarchical modeling, its application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and dat
Representing uncertainty on model analysis plots
Directory of Open Access Journals (Sweden)
Trevor I. Smith
2016-09-01
Full Text Available Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model. Unfortunately, Bao’s original presentation of the model plot did not include a way to represent uncertainty in these measurements. I present details of a method to add error bars to model plots by expanding the work of Sommer and Lindell. I also provide a template for generating model plots with error bars.
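A rough sketch of the idea of adding error bars to a class-level model-plot point: the coordinates are the mean fractions of answers consistent with each model, and the bars are standard errors of those means. This simplification is an assumption on my part; the Sommer-Lindell-based method the paper expands differs in detail, and the data below are fabricated for illustration.

```python
import numpy as np

# Illustrative per-student response fractions (not real survey data):
rng = np.random.default_rng(2)
p_correct = np.clip(rng.normal(0.60, 0.15, 30), 0.0, 1.0)  # matches correct model
p_naive = np.clip(rng.normal(0.30, 0.12, 30), 0.0, 1.0)    # matches naive model

def model_plot_point(px, py):
    """Class-level model-plot coordinates with standard-error-of-the-mean bars."""
    n = len(px)
    return ((px.mean(), py.mean()),
            (px.std(ddof=1) / np.sqrt(n), py.std(ddof=1) / np.sqrt(n)))

point, err = model_plot_point(p_correct, p_naive)  # plot point with x/y error bars
```

The resulting (point, err) pair is what would be drawn as a single class marker with horizontal and vertical error bars on the model plot.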
Combustion instability modeling and analysis
Energy Technology Data Exchange (ETDEWEB)
Santoro, R.J.; Yang, V.; Santavicca, D.A. [Pennsylvania State Univ., University Park, PA (United States)] [and others
1995-10-01
It is well known that the two key elements for achieving low emissions and high performance in a gas turbine combustor are to simultaneously establish (1) a lean combustion zone for maintaining low NO{sub x} emissions and (2) rapid mixing for good ignition and flame stability. However, these requirements, when coupled with the short combustor lengths used to limit the residence time for NO formation typical of advanced gas turbine combustors, can lead to problems regarding unburned hydrocarbons (UHC) and carbon monoxide (CO) emissions, as well as the occurrence of combustion instabilities. Clearly, the key to successful gas turbine development is based on understanding the effects of geometry and operating conditions on combustion instability, emissions (including UHC, CO and NO{sub x}) and performance. The concurrent development of suitable analytical and numerical models that are validated with experimental studies is important for achieving this objective. A major benefit of the present research will be to provide for the first time an experimentally verified model of emissions and performance of gas turbine combustors.
Three-dimensional model analysis and processing
Yu, Faxin; Luo, Hao; Wang, Pinghui
2011-01-01
This book focuses on five hot research directions in 3D model analysis and processing in computer science: compression, feature extraction, content-based retrieval, irreversible watermarking and reversible watermarking.
Two sustainable energy system analysis models
DEFF Research Database (Denmark)
Lund, Henrik; Goran Krajacic, Neven Duic; da Graca Carvalho, Maria
2005-01-01
This paper presents a comparative study of two energy system analysis models both designed with the purpose of analysing electricity systems with a substantial share of fluctuating renewable energy....
Automatic differentiation algorithms in model analysis
Huiskes, M.J.
2002-01-01
Title: Automatic differentiation algorithms in model analysis
Author: M.J. Huiskes
Date: 19 March, 2002
In this thesis automatic differentiation algorithms and derivative-based methods
Applied research in uncertainty modeling and analysis
Ayyub, Bilal
2005-01-01
Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...
Trajectory modeling of gestational weight: A functional principal component analysis approach.
Directory of Open Access Journals (Sweden)
Menglu Che
Full Text Available Suboptimal gestational weight gain (GWG), which is linked to increased risk of adverse outcomes for a pregnant woman and her infant, is prevalent. In the study of a large cohort of Canadian pregnant women, our goals are to estimate the individual weight growth trajectory using sparsely collected bodyweight data, and to identify the factors affecting the weight change during pregnancy, such as prepregnancy body mass index (BMI), dietary intakes and physical activity. The first goal was achieved through functional principal component analysis (FPCA) by conditional expectation. For the second goal, we used linear regression with the total weight gain as the response variable. The trajectory modeling through FPCA had a significantly smaller root mean square error (RMSE) and improved adaptability compared with the classic nonlinear mixed-effect models, demonstrating a novel tool that can be used to facilitate real-time monitoring and interventions of GWG. Our regression analysis showed that prepregnancy BMI had a high predictive value for the weight changes during pregnancy, which agrees with the published weight gain guideline.
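On a dense, fully observed grid, FPCA reduces to an eigen-decomposition of the sample covariance of the curves; the conditional-expectation machinery in the paper is what extends this to sparse, irregular weigh-ins. A synthetic sketch (the grid, mean curve, and modes are all illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
weeks = np.linspace(8.0, 40.0, 17)                # gestational-week grid
n = 100
mean_curve = 55.0 + 0.45 * (weeks - 8.0)          # illustrative mean weight (kg)
phi1 = np.ones_like(weeks) / np.sqrt(weeks.size)  # mode 1: overall level shift
phi2 = weeks - weeks.mean()
phi2 = phi2 / np.linalg.norm(phi2)                # mode 2: steeper vs flatter gain
scores = rng.normal(0.0, [4.0, 1.0], (n, 2))      # per-woman FPC scores
X = mean_curve + scores[:, :1] * phi1 + scores[:, 1:] * phi2

# FPCA on a dense grid = eigen-decomposition of the sample covariance
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / (n - 1))
order = np.argsort(evals)[::-1]
var_explained = evals[order][:2] / evals.sum()    # share explained by top 2 FPCs
```

Because the synthetic curves were built from exactly two modes, the first two eigenfunctions recover essentially all the variance; with real sparse data the conditional-expectation (PACE-style) estimator replaces the direct eigendecomposition.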
Credit Risk Evaluation : Modeling - Analysis - Management
Wehrspohn, Uwe
2002-01-01
An analysis and further development of the building blocks of modern credit risk management: definitions of default; estimation of default probabilities; exposures; recovery rates; pricing; concepts of portfolio dependence; time horizons for risk calculations; quantification of portfolio risk; estimation of risk measures; portfolio analysis and portfolio improvement; evaluation and comparison of credit risk models; analytic portfolio loss distributions. The thesis contributes to the evaluatio...
Ignalina NPP Safety Analysis: Models and Results
International Nuclear Information System (INIS)
Uspuras, E.
1999-01-01
Research directions of the scientific safety analysis group, linked to the safety assessment of the Ignalina NPP, are presented: thermal-hydraulic analysis of accidents and operational transients; thermal-hydraulic assessment of the Ignalina NPP Accident Localization System and other compartments; structural analysis of plant components, piping and other parts of the Main Circulation Circuit; and assessment of the RBMK-1500 reactor core, among others. Models and the main work carried out last year are described. (author)
FAME, the flux analysis and modelling environment
Boele, J.; Olivier, B.G.; Teusink, B.
2012-01-01
Background: The creation and modification of genome-scale metabolic models is a task that requires specialized software tools. While these are available, subsequently running or visualizing a model often relies on disjoint code, which adds additional actions to the analysis routine and, in our
A Bayesian Nonparametric Meta-Analysis Model
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.
2015-01-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…
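For contrast with the nonparametric proposal, the conventional normal random-effects model it generalizes can be sketched with the classic DerSimonian-Laird estimator (the effect sizes and within-study variances below are invented for illustration):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Normal random-effects meta-analysis: DL tau^2 and the pooled mean."""
    w = 1.0 / variances
    mu_fe = np.sum(w * effects) / np.sum(w)          # fixed-effect pooled mean
    q = np.sum(w * (effects - mu_fe) ** 2)           # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
    w_re = 1.0 / (variances + tau2)                  # random-effects weights
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    return mu_re, np.sqrt(1.0 / np.sum(w_re)), tau2

effects = np.array([0.30, 0.45, 0.10, 0.60, 0.25])    # invented study effects
variances = np.array([0.02, 0.03, 0.04, 0.02, 0.05])  # invented within-study vars
mu, se, tau2 = dersimonian_laird(effects, variances)
```

The Bayesian nonparametric model relaxes exactly the normality assumption baked into this estimator's view of the effect-size population.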
Numerical modeling techniques for flood analysis
Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.
2016-12-01
Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found, which can be improved through a 3D model. Therefore, 3D models were found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open-channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for the floodplain should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the identification of the causes and effects of flooding.
Perturbation analysis of nonlinear matrix population models
Directory of Open Access Journals (Sweden)
Hal Caswell
2008-03-01
Full Text Available Perturbation analysis examines the response of a model to changes in its parameters. It is commonly applied to population growth rates calculated from linear models, but there has been no general approach to the analysis of nonlinear models. Nonlinearities in demographic models may arise due to density-dependence, frequency-dependence (in 2-sex models), feedback through the environment or the economy, and recruitment subsidy due to immigration, or from the scaling inherent in calculations of proportional population structure. This paper uses matrix calculus to derive the sensitivity and elasticity of equilibria, cycles, ratios (e.g. dependency ratios), age averages and variances, temporal averages and variances, life expectancies, and population growth rates, for both age-classified and stage-classified models. Examples are presented, applying the results to both human and non-human populations.
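For the linear (matrix) special case that the paper generalizes, the classical results have a compact form: the sensitivity of the growth rate lambda to entry a_ij is v_i*w_j/<v,w> (right eigenvector w = stable structure, left eigenvector v = reproductive values), and the elasticities (a_ij/lambda)*s_ij sum to one. A sketch with a hypothetical stage-classified matrix:

```python
import numpy as np

# Illustrative 3-stage projection matrix (hypothetical vital rates)
A = np.array([[0.0, 1.5, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.1]])

evals, W = np.linalg.eig(A)
i = int(np.argmax(evals.real))
lam = evals.real[i]                      # asymptotic population growth rate
w = np.abs(W[:, i].real)                 # stable stage distribution
evalsT, V = np.linalg.eig(A.T)
v = np.abs(V[:, int(np.argmax(evalsT.real))].real)  # reproductive values

S = np.outer(v, w) / (v @ w)             # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                        # elasticities; they sum to 1
```

The elasticity matrix E shows which vital rates proportionally matter most for growth; the paper's matrix-calculus machinery extends this logic to equilibria, cycles, and ratios in nonlinear models.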
MSSV Modeling for Wolsong-1 Safety Analysis
Energy Technology Data Exchange (ETDEWEB)
Moon, Bok Ja; Choi, Chul Jin; Kim, Seoung Rae [KEPCO EandC, Daejeon (Korea, Republic of)
2010-10-15
The main steam safety valves (MSSVs) are installed on the main steam line to prevent overpressurization of the system. MSSVs are held in the closed position by spring force, and the valves pop open by internal pressure when the main steam pressure increases to the opening set pressure. When the overpressure condition is relieved, the valves begin to close. For the safety analysis of anticipated accident conditions, the safety systems are modeled conservatively so that the simulated accident condition is more severe. MSSVs are likewise modeled conservatively for the analysis of overpressurization accidents. In this paper, the pressure transient is analyzed under overpressurization conditions to evaluate the conservatism of the MSSV models.
Simulation modeling and analysis with Arena
Altiok, Tayfur
2007-01-01
Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of a popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings.· Introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...
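The discrete-event methodology the book teaches in Arena can be miniaturized outside Arena; for instance, mean queueing delay in an M/M/1 system via the Lindley recursion, checked against the known formula Wq = rho/(mu - lambda). This is a generic textbook example, not one of the book's Arena models.

```python
import random

def mm1_mean_wait(lam, mu, n_customers, seed=0):
    """Monte Carlo mean waiting time in queue for M/M/1 via the Lindley
    recursion W_{n+1} = max(0, W_n + S_n - A_{n+1})."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n_customers):
        total += w                       # accumulate this customer's wait
        s = rng.expovariate(mu)          # service time of customer n
        a = rng.expovariate(lam)         # interarrival gap to customer n+1
        w = max(0.0, w + s - a)          # wait of the next customer
    return total / n_customers

lam, mu = 0.5, 1.0                       # arrival and service rates
est = mm1_mean_wait(lam, mu, 200_000)    # simulated mean wait in queue
theory = (lam / mu) / (mu - lam)         # exact Wq = rho / (mu - lam)
```

Comparing the simulated estimate against the closed-form value is the same validation discipline (output analysis against known results) the book applies to Arena models.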
A Requirements Analysis Model Based on QFD
Institute of Scientific and Technical Information of China (English)
TANG Zhi-wei; Nelson K.H.Tang
2004-01-01
The enterprise resource planning (ERP) system has emerged to offer an integrated IT solution, and more and more enterprises are adopting this system and regarding it as an important innovation. However, there is already evidence of high failure risks in ERP project implementation; one major reason is poor analysis of the requirements for system implementation. In this paper, the importance of requirements analysis for ERP project implementation is highlighted, and a requirements analysis model applying quality function deployment (QFD) is presented, which supports conducting requirements analysis for ERP projects.
Comparative Analysis of Investment Decision Models
Directory of Open Access Journals (Sweden)
Ieva Kekytė
2017-06-01
Full Text Available The rapid development of financial markets has created new challenges for both investors and investment issues. This has increased the demand for innovative, modern investment and portfolio management decisions adequate to market conditions. The financial market receives special attention as new models are created that incorporate financial risk management and investment decision support systems. Researchers recognize the need to deal with financial problems using models that are consistent with reality and based on sophisticated quantitative analysis techniques. Thus, the role of mathematical modeling in finance becomes important. This article deals with various investment decision-making models, which include forecasting, optimization, stochastic processes, artificial intelligence, etc., and have become useful tools for investment decisions.
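One of the classical optimization models in this family is Markowitz mean-variance portfolio selection; a minimal sketch of the fully invested minimum-variance portfolio, w proportional to inv(Sigma) @ 1 (the return and covariance numbers are illustrative, not market data):

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.10])            # illustrative expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])         # illustrative return covariance

# Fully invested minimum-variance portfolio: solve cov @ w ~ 1, then normalize
w = np.linalg.solve(cov, np.ones(3))
w = w / w.sum()
port_var = float(w @ cov @ w)                # portfolio variance
port_ret = float(w @ mu)                     # portfolio expected return
```

Because holding any single asset is itself a feasible fully invested portfolio, the optimized variance can never exceed the smallest single-asset variance, which is a quick sanity check on the solution.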
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
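A toy sketch of the multivariate idea: solve a small hypothetical two-state MDP by value iteration, then resample the uncertain transition parameters and record how often the recommended action survives; that agreement rate is the kind of confidence a policy acceptability curve summarizes. All states, actions, rewards, and parameter ranges below are invented for illustration, not from the case study.

```python
import numpy as np

def optimal_action(p_prog, p_recover, gamma=0.97, iters=200):
    """Value iteration on a 2-state toy MDP; returns the best action in state 0.
    States: 0 = stable, 1 = progressed. Actions: 0 = wait, 1 = treat."""
    P = {0: {0: np.array([1 - p_prog, p_prog]),
             1: np.array([1 - 0.5 * p_prog, 0.5 * p_prog])},
         1: {0: np.array([p_recover, 1 - p_recover]),
             1: np.array([min(1.0, 2 * p_recover), max(0.0, 1 - 2 * p_recover)])}}
    R = {0: {0: 1.0, 1: 0.9}, 1: {0: 0.6, 1: 0.55}}   # per-cycle rewards
    V = np.zeros(2)
    for _ in range(iters):                             # Bellman updates
        V = np.array([max(R[s][a] + gamma * (P[s][a] @ V) for a in (0, 1))
                      for s in (0, 1)])
    return int(np.argmax([R[0][a] + gamma * (P[0][a] @ V) for a in (0, 1)]))

base = optimal_action(0.10, 0.20)                      # base-case optimal policy
# Probabilistic sensitivity: resample parameters, count how often base survives
rng = np.random.default_rng(4)
agree = np.mean([optimal_action(rng.uniform(0.05, 0.15),
                                rng.uniform(0.10, 0.30)) == base
                 for _ in range(100)])
```

Repeating this agreement calculation across a range of acceptance thresholds traces out the policy-acceptability-style summary the paper proposes.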
Critical analysis of algebraic collective models
International Nuclear Information System (INIS)
Moshinsky, M.
1986-01-01
The author shall understand by algebraic collective models all those based on specific Lie algebras, whether the latter are suggested through simple shell model considerations, as in the case of the Interacting Boson Approximation (IBA), or have a detailed microscopic foundation, as in the symplectic model. To analyze these models critically, it is convenient to take a simple conceptual example of them in which all steps can be implemented analytically or through elementary numerical analysis. In this note he takes as an example the symplectic model in a two-dimensional space, i.e. based on an sp(4,R) Lie algebra, and shows how through its complete discussion we can get a clearer understanding of the structure of algebraic collective models of nuclei. In particular, he discusses the association of Hamiltonians, related to maximal subalgebras of our basic Lie algebra, with specific types of spectra, and the connections between spectra and shapes.
Model Selection in Data Analysis Competitions
DEFF Research Database (Denmark)
Wind, David Kofoed; Winther, Ole
2014-01-01
The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we will state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we will look at previous competitions and their outcomes, use qualitative interviews with top performers from Kaggle, and use previous personal experiences from competing in Kaggle competitions. The stated hypotheses about feature engineering, ensembling, overfitting, model complexity and evaluation metrics give indications and guidelines on how to select a proper model for performing well ...
LBLOCA sensitivity analysis using meta models
International Nuclear Information System (INIS)
Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.
2014-01-01
This paper presents an approach to performing sensitivity analysis on the results of thermal-hydraulic code simulations within a BEPU (best estimate plus uncertainty) approach. The sensitivity analysis is based on the computation of Sobol' indices and makes use of a meta-model. The paper also presents an application to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
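Once a cheap meta-model has been fitted to the expensive code output, first-order Sobol' indices can be estimated by plain Monte Carlo. A minimal sketch, assuming a toy linear surrogate in place of a real fitted meta-model (the function `meta` and the sample sizes are illustrative, not from the paper):

```python
import random

def sobol_first_order(f, dim, n=8192, seed=0):
    """Saltelli-style pick-freeze estimator of first-order Sobol' indices
    for f defined on the unit hypercube [0, 1]^dim."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum(y * y for y in fA) / n - mean * mean
    S = []
    for i in range(dim):
        # A with its i-th column replaced by B's i-th column ("pick-freeze")
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        Vi = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / n
        S.append(Vi / var)
    return S

def meta(x):
    """Toy surrogate standing in for a fitted meta-model of the code;
    analytically S1 = 4/5 and S2 = 1/5 for this function."""
    return 2.0 * x[0] + x[1]

S = sobol_first_order(meta, dim=2)
```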
FAME, the Flux Analysis and Modeling Environment
Directory of Open Access Journals (Sweden)
Boele Joost
2012-01-01
Full Text Available Abstract Background The creation and modification of genome-scale metabolic models is a task that requires specialized software tools. While these are available, subsequently running or visualizing a model often relies on disjoint code, which adds additional actions to the analysis routine and, in our experience, renders these applications suboptimal for routine use by (systems) biologists. Results The Flux Analysis and Modeling Environment (FAME) is the first web-based modeling tool that combines the tasks of creating, editing, running, and analyzing/visualizing stoichiometric models into a single program. Analysis results can be automatically superimposed on familiar KEGG-like maps. FAME is written in PHP and uses the Python-based PySCeS-CBM for its linear solving capabilities. It comes with a comprehensive manual and a quick-start tutorial, and can be accessed online at http://f-a-m-e.org/. Conclusions With FAME, we present the community with an open source, user-friendly, web-based "one stop shop" for stoichiometric modeling. We expect the application will be of substantial use to investigators and educators alike.
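At the core of the modeling FAME wraps is flux balance analysis: the flux vector v must satisfy the steady-state constraint S·v = 0 while an objective flux is maximized. A minimal sketch of that constraint on a hypothetical four-reaction network (not one of FAME's models; the optimum here follows analytically from the mass balances, whereas FAME delegates the general case to an LP solver via PySCeS-CBM):

```python
# Toy network:  -> A (v_up);  A -> B (v2);  A -> C (v3);  B + C -> biomass (v_bio)
# Rows: internal metabolites A, B, C; columns: v_up, v2, v3, v_bio.
S = [[1, -1, -1,  0],   # A: produced by uptake, consumed by v2 and v3
     [0,  1,  0, -1],   # B: produced by v2, consumed by biomass
     [0,  0,  1, -1]]   # C: produced by v3, consumed by biomass

def is_steady(S, v, tol=1e-9):
    """Check the FBA steady-state constraint S.v = 0."""
    return all(abs(sum(s * x for s, x in zip(row, v))) < tol for row in S)

# The mass balances force v2 = v3 = v_bio and v_up = 2*v_bio, so with the
# uptake bound v_up <= 10 the biomass optimum is v_bio = 5.
v_opt = [10.0, 5.0, 5.0, 5.0]
```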
Global plastic models for computerized structural analysis
International Nuclear Information System (INIS)
Roche, R.L.; Hoffmann, A.
1977-01-01
In many types of structures, it is possible to use generalized stresses (such as membrane forces, bending moments, and torsion moments) to define a yield surface for a part of the structure. Analysis can be achieved by using Hill's principle and a hardening rule. The whole formulation is called the 'Global Plastic Model'. Two different global models are used in the CEASEMT system for structural analysis, one for shell analysis and the other for piping analysis (in the plastic or creep field). In shell analysis the generalized stresses chosen are the membrane forces and bending (including torsion) moments. There is only one yield condition for a normal to the middle surface, and no integration along the thickness is required. In piping analysis, the generalized stresses chosen are the bending moments, torsional moment, hoop stress and tension stress. There is only one set of stresses for a cross section, and no integration over the cross-section area is needed. The associated strains are axis curvature, torsion, and uniform strains. The definition of the yield surface is the most important item. A practical way is to use a diagonal quadratic function of the stress components, but the coefficients depend on the shape of the pipe element, especially for curved segments. Indications are given on the yield functions used. Some examples of applications in structural analysis are added to the text.
Modeling and Analysis of Space Based Transceivers
Moore, Michael S.; Price, Jeremy C.; Abbott, Ben; Liebetreu, John; Reinhart, Richard C.; Kacpura, Thomas J.
2007-01-01
This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.
Modeling and analysis of stochastic systems
Kulkarni, Vidyadhar G
2011-01-01
Based on the author's more than 25 years of teaching experience, Modeling and Analysis of Stochastic Systems, Second Edition covers the most important classes of stochastic processes used in the modeling of diverse systems, from supply chains and inventory systems to genetics and biological systems. For each class of stochastic process, the text includes its definition, characterization, applications, transient and limiting behavior, first passage times, and cost/reward models. Along with reorganizing the material, this edition revises and adds new exercises and examples. New to the second edi
Independent Component Analysis in Multimedia Modeling
DEFF Research Database (Denmark)
Larsen, Jan
2003-01-01
Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling ...
Review and analysis of biomass gasification models
DEFF Research Database (Denmark)
Puig Arnavat, Maria; Bruno, Joan Carles; Coronas, Alberto
2010-01-01
... and the design, simulation, optimisation and process analysis of gasifiers have been carried out. This paper presents and analyses several gasification models based on thermodynamic equilibrium, kinetics and artificial neural networks. The thermodynamic models are found to be a useful tool for preliminary comparison and for process studies on the influence of the most important fuel and process parameters. They have the advantage of being independent of gasifier design, but they cannot give highly accurate results for all cases. The kinetic-based models are computationally more intensive but give accurate ...
Modeling, Analysis, and Optimization Issues for Large Space Structures
Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)
1983-01-01
Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.
Interactive Visual Analysis within Dynamic Ocean Models
Butkiewicz, T.
2012-12-01
The many observation and simulation based ocean models available today can provide crucial insights for all fields of marine research and can serve as valuable references when planning data collection missions. However, the increasing size and complexity of these models makes leveraging their contents difficult for end users. Through a combination of data visualization techniques, interactive analysis tools, and new hardware technologies, the data within these models can be made more accessible to domain scientists. We present an interactive system that supports exploratory visual analysis within large-scale ocean flow models. The currents and eddies within the models are illustrated using effective, particle-based flow visualization techniques. Stereoscopic displays and rendering methods are employed to ensure that the user can correctly perceive the complex 3D structures of depth-dependent flow patterns. Interactive analysis tools are provided which allow the user to experiment through the introduction of their customizable virtual dye particles into the models to explore regions of interest. A multi-touch interface provides natural, efficient interaction, with custom multi-touch gestures simplifying the otherwise challenging tasks of navigating and positioning tools within a 3D environment. We demonstrate the potential applications of our visual analysis environment with two examples of real-world significance: Firstly, an example of using customized particles with physics-based behaviors to simulate pollutant release scenarios, including predicting the oil plume path for the 2010 Deepwater Horizon oil spill disaster. Secondly, an interactive tool for plotting and revising proposed autonomous underwater vehicle mission pathlines with respect to the surrounding flow patterns predicted by the model; as these survey vessels have extremely limited energy budgets, designing more efficient paths allows for greater survey areas.
Economic Modeling and Analysis of Educational Vouchers
Epple, Dennis; Romano, Richard
2012-01-01
The analysis of educational vouchers has evolved from market-based analogies to models that incorporate distinctive features of the educational environment. These distinctive features include peer effects, scope for private school pricing and admissions based on student characteristics, the linkage of household residential and school choices in…
Analysis of MUF data using ARIMA models
International Nuclear Information System (INIS)
Downing, D.J.; Pike, D.H.; Morrison, G.W.
1978-01-01
An introduction to Box-Jenkins time series analysis is presented. It is shown how the models described by Box and Jenkins can be applied to material unaccounted for (MUF) data to detect losses. For the constant-loss case, an optimal estimate of the loss is derived and its probability of detection determined.
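A minimal sketch of the loss-detection idea: fit a time-series noise model (here AR(1)) to an in-control period, then monitor the one-step residuals with a CUSUM. The series, loss size, and tuning constants below are synthetic illustrations, not the paper's optimal estimator:

```python
import random

def ar1_phi(x):
    """Lag-1 Yule-Walker estimate of an AR(1) coefficient."""
    m = sum(x) / len(x)
    c0 = sum((a - m) ** 2 for a in x)
    c1 = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    return c1 / c0

def cusum_alarm(residuals, k, h):
    """One-sided CUSUM; returns the index of the first alarm, or None."""
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - k)
        if s > h:
            return i
    return None

# Synthetic MUF series: AR(1) measurement noise plus a constant loss from t0 on.
rng = random.Random(42)
phi_true, sigma, loss, t0, n = 0.5, 0.5, 3.0, 60, 120
x, ar = [], 0.0
for t in range(n):
    ar = phi_true * ar + rng.gauss(0.0, sigma)
    x.append(ar + (loss if t >= t0 else 0.0))

phi_hat = ar1_phi(x[:t0])                 # fit on the no-loss period
res = [x[t] - phi_hat * x[t - 1] for t in range(1, n)]
alarm = cusum_alarm(res, k=0.5, h=4.0)    # alarm index i corresponds to t = i + 1
```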
Power system stability modelling, analysis and control
Sallam, Abdelhay A
2015-01-01
This book provides a comprehensive treatment of the subject from both a physical and mathematical perspective and covers a range of topics including modelling, computation of load flow in the transmission grid, stability analysis under both steady-state and disturbed conditions, and appropriate controls to enhance stability.
Comparative Distributions of Hazard Modeling Analysis
Directory of Open Access Journals (Sweden)
Rana Abdul Wajid
2006-07-01
Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We present the flexibility of the hazard modeling distribution, which can approach different distributions.
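The flexibility being compared shows up directly in the hazard functions themselves: a Weibull hazard can be increasing, constant, or decreasing depending on its shape parameter, while the exponential hazard is always flat. A small sketch (parameter values are arbitrary illustrations):

```python
def weibull_hazard(t, shape, scale):
    """h(t) = (k/lam) * (t/lam)**(k-1): increasing for k > 1,
    constant for k = 1, decreasing for k < 1."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def exponential_hazard(t, scale):
    """Memoryless special case: the hazard never changes."""
    return 1.0 / scale

times = [0.5, 1.0, 2.0, 4.0]
wear_out = [weibull_hazard(t, 2.0, 1.0) for t in times]   # increasing hazard
memoryless = [exponential_hazard(t, 1.0) for t in times]  # flat hazard
```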
Analysis hierarchical model for discrete event systems
Ciortea, E. M.
2015-11-01
This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete event system model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are simple to implement on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets; discrete event systems are a pragmatic tool for modelling industrial systems. To highlight the auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets offers the opportunity to view the timing of the robotic system: from transport and transmission times obtained by spot measurement, graphics are obtained showing the average time for the transport activity, using the parameter sets of the finished products.
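The token-game semantics underlying such Petri-net models is compact enough to sketch directly. Below, a hypothetical two-transition robot cell (place and transition names are invented for illustration; a tool such as Visual Object Net++ adds timing and hierarchy on top of this core):

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume input tokens, produce output tokens; returns a new marking."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical robot cell: 'start' grabs a raw part and the free robot,
# 'finish' releases the robot and emits a finished part.
transitions = {
    "start":  ({"parts": 1, "robot_free": 1}, {"busy": 1}),
    "finish": ({"busy": 1}, {"robot_free": 1, "done": 1}),
}
m = {"parts": 2, "robot_free": 1, "busy": 0, "done": 0}
for name in ["start", "finish", "start", "finish"]:
    pre, post = transitions[name]
    assert enabled(m, pre), name
    m = fire(m, pre, post)
```

After the firing sequence both parts are finished and the robot is free again; firing "start" a third time is blocked because the input place "parts" is empty.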
Conceptual models for waste tank mechanistic analysis
International Nuclear Information System (INIS)
Allemann, R.T.; Antoniak, Z.I.; Eyler, L.L.; Liljegren, L.M.; Roberts, J.S.
1992-02-01
Pacific Northwest Laboratory (PNL) is conducting a study for Westinghouse Hanford Company (Westinghouse Hanford), a contractor for the US Department of Energy (DOE). The purpose of the work is to study possible mechanisms and fluid dynamics contributing to the periodic release of gases from double-shell waste storage tanks at the Hanford Site in Richland, Washington. This interim report emphasizing the modeling work follows two other interim reports, Mechanistic Analysis of Double-Shell Tank Gas Release Progress Report -- November 1990 and Collection and Analysis of Existing Data for Waste Tank Mechanistic Analysis Progress Report -- December 1990, that emphasized data correlation and mechanisms. The approach in this study has been to assemble and compile data that are pertinent to the mechanisms, analyze the data, evaluate physical properties and parameters, evaluate hypothetical mechanisms, and develop mathematical models of mechanisms
ANALYSIS AND MODELING OF GENEVA MECHANISM
Directory of Open Access Journals (Sweden)
HARAGA Georgeta
2015-06-01
Full Text Available The paper presents some theoretical and practical aspects of the finite element analysis and modelling of a Geneva mechanism with four slots, using the CATIA graphics program. This type of mechanism is an example of intermittent gearing that translates a continuous rotation into an intermittent rotary motion. It consists of alternate periods of motion and rest without reversing direction. In this paper, the design parameters that specify a Geneva mechanism, such as the number of driving cranks, number of slots, wheel diameter and pin diameter, are defined precisely. Finite element analysis (FEA) can be used for creating a finite element model (preprocessing), visualizing the analysis results (postprocessing), and invoking other solvers for processing.
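The design parameters mentioned above are tied together by standard external-Geneva geometry, which can be computed before any CATIA modelling. A sketch using the textbook relations (the 100 mm centre distance is an arbitrary example, not a value from the paper):

```python
import math

def geneva_dimensions(n_slots, center_distance):
    """Textbook external-Geneva relations (not CATIA output): drive-pin
    circle radius, wheel radius, and the fraction of the crank cycle
    during which the wheel is actually in motion."""
    half_slot_angle = math.pi / n_slots
    crank_radius = center_distance * math.sin(half_slot_angle)
    wheel_radius = center_distance * math.cos(half_slot_angle)
    motion_fraction = (n_slots - 2) / (2 * n_slots)
    return crank_radius, wheel_radius, motion_fraction

# Four-slot wheel with an (arbitrary) 100 mm centre distance:
r, R, f = geneva_dimensions(4, 100.0)
```

For four slots the wheel is in motion only a quarter of each crank revolution, which is what makes the output intermittent.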
Formal Modeling and Analysis of Timed Systems
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Niebert, Peter
This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.
Ferrofluids: Modeling, numerical analysis, and scientific computation
Tomas, Ignacio
This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) is a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
Parametric Analysis of Flexible Logic Control Model
Directory of Open Access Journals (Sweden)
Lihua Fu
2013-01-01
Full Text Available Based on a deep analysis of the essential relation between the two input variables of a normal two-dimensional fuzzy controller, we use the universal combinatorial operation model to describe the logic relationship and give a flexible logic control method to realize effective control of complex systems. In practical control applications, how to determine the general correlation coefficients of the flexible logic control model is a problem requiring further study. First, the conventional universal combinatorial operation model has been limited to the interval [0,1]. Consequently, this paper studies a universal combinatorial operation model based on the interval [a,b], and some important theorems are given and proved, which provide a foundation for the flexible logic control method. To deal reasonably with the complex relations among the factors of a complex system, a universal combinatorial operation model with unequal weights is put forward. Then, the paper carries out a parametric analysis of the flexible logic control model, and some research results are given which provide important guidance for determining the values of the general correlation coefficients in practical control applications.
Sensitivity analysis of a modified energy model
International Nuclear Information System (INIS)
Suganthi, L.; Jagadeesan, T.R.
1997-01-01
Sensitivity analysis is carried out to validate the model formulation. A modified model has been developed to predict the future energy requirements for coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model, as compared to a conventional time series model, indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. However, in the case of the modified model the percentage variation in error is greatly reduced: for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6%, and for electricity from -10% to +10%. The upper and lower limit consumption levels at 95% confidence are determined. The consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years from 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years, and also because the confidence level decreases as the projection is made further into the future. (author)
Guidelines for system modeling: fault tree analysis
Energy Technology Data Exchange (ETDEWEB)
Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong
2004-07-01
This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide guidelines for the analyst to construct fault trees at the level of Capability Category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related content to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within Capability Category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of the systems modeled in the Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to system unavailability. This should include contributions due to mechanical failures of the components, Common Cause Failures (CCFs), human errors, and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising the fault trees. Accordingly, these guidelines will be able to guide the FTA to the level of Capability Category II of the ASME PRA standard.
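For independent basic events, the AND/OR failure logic described above reduces to simple probability algebra. A minimal sketch on a hypothetical two-train system (all failure probabilities are invented for illustration and unrelated to the Ulchin PSA model):

```python
def or_gate(probs):
    """P(gate) for an OR gate over independent basic events."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(probs):
    """P(gate) for an AND gate over independent basic events."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical two-train system: a train fails if its pump OR its valve
# fails; the top event needs both trains down, OR one common-cause failure.
p_pump, p_valve, p_ccf = 1.0e-3, 5.0e-4, 1.0e-5
p_train = or_gate([p_pump, p_valve])
p_top = or_gate([and_gate([p_train, p_train]), p_ccf])
```

Note how the common-cause term dominates the top-event probability even though it is far smaller than any single-component probability; this is exactly why CCFs must appear in the tree.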
Social phenomena from data analysis to models
Perra, Nicola
2015-01-01
This book focuses on the new possibilities and approaches to social modeling currently being made possible by an unprecedented variety of datasets generated by our interactions with modern technologies. This area has witnessed a veritable explosion of activity over the last few years, yielding many interesting and useful results. Our aim is to provide an overview of the state of the art in this area of research, merging an extremely heterogeneous array of datasets and models. Social Phenomena: From Data Analysis to Models is divided into two parts. Part I deals with modeling social behavior under normal conditions: How we live, travel, collaborate and interact with each other in our daily lives. Part II deals with societal behavior under exceptional conditions: Protests, armed insurgencies, terrorist attacks, and reactions to infectious diseases. This book offers an overview of one of the most fertile emerging fields bringing together practitioners from scientific communities as diverse as social sciences, p...
Directory of Open Access Journals (Sweden)
Litovchenko O.L.
2015-05-01
Full Text Available At present, the biochemical mechanisms of the mixed effects of electromagnetic radiation (EMR) and cold on the body are not adequately studied, so this problem is urgent for modern medicine. Purpose of study: establishing pathognomonic criteria and biochemical mechanisms of the adverse effect of EMR on the organism of laboratory animals under conditions of cold stress. Materials and methods: the laboratory subacute experiment was carried out on mature white male rats of the WAG line, weighing 190-220 g, for 1 month. The animals were divided into 4 groups of 10 animals each. The first group was subjected to the isolated action of electromagnetic radiation (frequency 70 kHz, field strength 600 V/m) at a comfortable air temperature of 25 ± 2°C. The second group was subjected to the mixed action of EMR and a low temperature of 4 ± 2°C. The third group served as a control with regard to the first group, and the fourth group with regard to the second, at an air temperature of 25 ± 2°C. Exposures were carried out 5 times a week (4 hours every day). To identify changes in the biochemical parameters studied during the experiments, blood sampling was performed at 5, 15 and 30 days and urine sampling at 15 and 30 days. Blood serum was used as the biomaterial. The content of malondialdehyde (MDA), conjugated dienes, SH-groups, superoxide dismutase, ceruloplasmin, cholesterol, high-density lipoprotein, low-density lipoprotein, very-low-density lipoprotein (VLDL) and triglycerides was determined; the atherogenic index, the levels of urea, alkaline phosphatase and acid phosphatase, the content of chlorides, calcium, magnesium, phosphorus, total protein and glucose, and catalase activity were also determined. Renal function was studied by the content of creatinine, cholinesterase, urea, uric acid, chlorides, potassium, sodium, calcium, phosphorus and glucose in urine. Results and discussion: the findings showed that the isolated action of EMR only led to a
Advances in statistical models for data analysis
Minerva, Tommaso; Vichi, Maurizio
2015-01-01
This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.
3D face modeling, analysis and recognition
Daoudi, Mohamed; Veltkamp, Remco
2013-01-01
3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and applications
International Space Station Model Correlation Analysis
Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael
2018-01-01
This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and compare the 2015 DTF with previous tests. During the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems; Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping, and mode shape information. Correlation and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results of the first fundamental mode will be discussed, nonlinear results will be shown, and accelerometer placement will be assessed.
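Test/analysis mode-shape correlation of the kind described above is typically quantified with the Modal Assurance Criterion (MAC). A minimal sketch (the six-point mode shapes below are invented illustrations, not ISS data):

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode-shape vectors:
    1.0 for identical shapes, near 0 for uncorrelated ones."""
    num = sum(a * b for a, b in zip(phi_a, phi_b)) ** 2
    den = sum(a * a for a in phi_a) * sum(b * b for b in phi_b)
    return num / den

# Invented 6-sensor first-bending shapes: measured test vs. analytical model.
mode_test = [0.00, 0.31, 0.59, 0.81, 0.95, 1.00]
mode_model = [0.00, 0.29, 0.56, 0.83, 0.97, 1.00]
mac_match = mac(mode_test, mode_model)
mac_mismatch = mac(mode_test, [1.0, 0.8, 0.3, -0.3, -0.8, -1.0])
```

A MAC near 1 between a test mode and a model mode supports the model's accuracy; low off-diagonal values confirm the modes are distinct.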
Cryogenic Fuel Tank Draining Analysis Model
Greer, Donald
1999-01-01
One of the technological challenges in designing advanced hypersonic aircraft and the next generation of spacecraft is developing reusable flight-weight cryogenic fuel tanks. As an aid in the design and analysis of these cryogenic tanks, a computational fluid dynamics (CFD) model has been developed specifically for the analysis of flow in a cryogenic fuel tank. This model employs the full set of Navier-Stokes equations, except that viscous dissipation is neglected in the energy equation. An explicit finite difference technique in two-dimensional generalized coordinates, approximated to second-order accuracy in both space and time is used. The stiffness resulting from the low Mach number is resolved by using artificial compressibility. The model simulates the transient, two-dimensional draining of a fuel tank cross section. To calculate the slosh wave dynamics the interface between the ullage gas and liquid fuel is modeled as a free surface. Then, experimental data for free convection inside a horizontal cylinder are compared with model results. Finally, cryogenic tank draining calculations are performed with three different wall heat fluxes to demonstrate the effect of wall heat flux on the internal tank flow field.
Proost, Johannes H.; Eleveld, Douglas J.
2006-01-01
Purpose. To test the suitability of an Iterative Two-Stage Bayesian (ITSB) technique for population pharmacokinetic analysis of rich data sets, and to compare ITSB with Standard Two-Stage (STS) analysis and nonlinear Mixed Effect Modeling (MEM). Materials and Methods. Data from a clinical study with
Model Based Analysis of Insider Threats
DEFF Research Database (Denmark)
Chen, Taolue; Han, Tingting; Kammueller, Florian
2016-01-01
In order to detect malicious insider attacks it is important to model and analyse infrastructures and policies of organisations and the insiders acting within them. We extend formal approaches that allow modelling such scenarios by quantitative aspects to enable a precise analysis of security designs. Our framework enables quantitative evaluation of the risk that an insider attack will happen. The framework first identifies an insider's intention to perform an inside attack, using Bayesian networks, and in a second phase computes the probability of success for an inside attack by this actor, using...
Urban drainage models - making uncertainty analysis simple
DEFF Research Database (Denmark)
Vezzaro, Luca; Mikkelsen, Peter Steen; Deletic, Ana
2012-01-01
There is increasing awareness about uncertainties in modelling of urban drainage systems and, as such, many new methods for uncertainty analyses have been developed. Despite this, all available methods have limitations which restrict their widespread application among practitioners. Here... ...in each measured/observed datapoint; an issue which is commonly overlooked in the uncertainty analysis of urban drainage models. This comparison allows the user to intuitively estimate the optimum number of simulations required to conduct uncertainty analyses. The output of the method includes parameter...
MATHEMATICAL RISK ANALYSIS: VIA NICHOLAS RISK MODEL AND BAYESIAN ANALYSIS
Directory of Open Access Journals (Sweden)
Anass BAYAGA
2010-07-01
The objective of this second part of a two-phased study was to explore the predictive power of the quantitative risk analysis (QRA) method and process within a Higher Education Institution (HEI). The method and process investigated the use of impact analysis via the Nicholas risk model and Bayesian analysis, with a sample of one hundred (100) risk analysts in a historically black South African university in the greater Eastern Cape Province. The first finding supported and confirmed previous literature (King III report, 2009; Nicholas and Steyn, 2008; Stoney, 2007; COSA, 2004) that there was a direct relationship between a risk factor, its likelihood and its impact, ceteris paribus. The second finding, in relation to controlling either the likelihood or the impact of the occurrence of risk (Nicholas risk model), was that to obtain a better risk reward it was more important to control the likelihood of occurrence of risks than their impact, so as to have a direct effect on the entire university. The third finding, from the Bayesian analysis, was that the impact of risk should be predicted along three aspects: the human impact (decisions made), the property impact (student- and infrastructure-based), and the business impact. Lastly, although business cycles vary considerably depending on the industry and/or institution, the study revealed that most impacts in an HEI (university) fell within the period of one academic year. The recommendation was that the application of quantitative risk analysis should be related to the current legislative framework that affects HEIs.
Energy Systems Modelling Research and Analysis
DEFF Research Database (Denmark)
Møller Andersen, Frits; Alberg Østergaard, Poul
2015-01-01
This editorial introduces the seventh volume of the International Journal of Sustainable Energy Planning and Management. The volume presents part of the outcome of the project Energy Systems Modelling Research and Analysis (ENSYMORA) funded by the Danish Innovation Fund. The project, carried out by 11 university and industry partners, has improved the basis for decision-making within energy planning and energy scenario making by providing new and improved tools and methods for energy systems analyses.
Exploiting partial knowledge for efficient model analysis
Macedo, Nuno; Cunha, Alcino; Pessoa, Eduardo José Dias
2017-01-01
The advancement of constraint solvers and model checkers has enabled the effective analysis of high-level formal specification languages. However, these typically handle a specification in an opaque manner, amalgamating all its constraints in a single monolithic verification task, which often proves to be a performance bottleneck. This paper addresses this issue by proposing a solving strategy that exploits user-provided partial knowledge, namely by assigning symbolic bounds to the problem’s ...
The Roy Adaptation Model and Content Analysis
Fawcett, Jacqueline
2006-01-01
The purpose of this paper is to explain how the Roy Adaptation Model can be used to guide a combined qualitative and quantitative content analysis of responses to open-ended interview questions. Responses can be categorized as adaptive or ineffective within the physiological, self-concept, role function, and interdependence modes of adaptation and then tallied to yield an adaptation score.
Micromechatronics modeling, analysis, and design with Matlab
Giurgiutiu, Victor
2009-01-01
Focusing on recent developments in engineering science, enabling hardware, advanced technologies, and software, Micromechatronics: Modeling, Analysis, and Design with MATLAB®, Second Edition provides clear, comprehensive coverage of mechatronic and electromechanical systems. It applies cornerstone fundamentals to the design of electromechanical systems, covers emerging software and hardware, introduces the rigorous theory, examines the design of high-performance systems, and helps develop problem-solving skills. Along with more streamlined material, this edition adds many new sections to exist
3D space analysis of dental models
Chuah, Joon H.; Ong, Sim Heng; Kondo, Toshiaki; Foong, Kelvin W. C.; Yong, Than F.
2001-05-01
Space analysis is an important procedure used by orthodontists to determine the amount of space available and required for teeth alignment during treatment planning. Traditional manual methods of space analysis are tedious and often inaccurate. Computer-based space analysis methods that work on 2D images have been reported. However, as the space problems in the dental arch exist in all three planes of space, a full 3D analysis of the problems is necessary. This paper describes a visualization and measurement system that analyses 3D images of dental plaster models. Algorithms were developed to determine dental arches. The system is able to record the depths of the Curve of Spee, and to quantify space liabilities arising from a non-planar Curve of Spee, malalignment and overjet. Furthermore, the difference between the total arch space available and the space required to arrange the teeth in ideal occlusion can be accurately computed. The system for 3D space analysis of the dental arch is an accurate, comprehensive, rapid and repeatable method of space analysis to facilitate proper orthodontic diagnosis and treatment planning.
Global plastic models for computerized structural analysis
International Nuclear Information System (INIS)
Roche, R.; Hoffmann, A.
1977-01-01
Two different global models are used in the CEASEMT system for structural analysis: one for shell analysis and the other for piping analysis (in the plastic or creep regime). In shell analysis the generalized stresses chosen are the membrane forces Nsub(ij) and the bending (including torsion) moments Msub(ij). There is only one yield condition per normal (to the middle surface), and no integration along the thickness is required. In piping analysis the generalized stresses chosen are the bending moments, torsional moments, hoop stress and tension stress. There is only one set of stresses per cross section, and no integration over the cross-section area is needed. The associated strains are axis curvature, torsion and uniform strains. The definition of the yield surface is the most important item. A practical way is to use a diagonal quadratic function of the stress components, but the coefficients depend on the shape of the pipe element, especially for curved segments. Indications are given on the yield functions used. Some examples of applications in structural analysis are added to the text.
Session 6: Dynamic Modeling and Systems Analysis
Csank, Jeffrey; Chapman, Jeffryes; May, Ryan
2013-01-01
These presentations cover some of the ongoing work in dynamic modeling and dynamic systems analysis. The first presentation discusses dynamic systems analysis and how to integrate dynamic performance information into the systems analysis. The ability to evaluate the dynamic performance of an engine design may allow tradeoffs between the dynamic performance and operability of a design, resulting in a more efficient engine design. The second presentation discusses the Toolbox for Modeling and Analysis of Thermodynamic Systems (T-MATS). T-MATS is a simulation system with a library containing the basic building blocks that can be used to create dynamic thermodynamic systems. Some of the key features include turbomachinery components, such as turbines and compressors, and basic control system blocks. T-MATS is written in the MATLAB/Simulink environment and is open-source software. The third presentation focuses on getting additional performance from the engine by allowing the limit regulators to be active only when a limit is in danger of being violated. Typical aircraft engine control architecture is based on a min-max scheme, which is designed to keep the engine operating within prescribed mechanical/operational safety limits. Using a conditionally active min-max limit regulator scheme, additional performance can be gained by disabling non-relevant limit regulators.
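The min-max selection logic described in the third presentation can be sketched generically: the setpoint command is first limited from above by every max-limit regulator, then from below by every min-limit regulator. The function below is an illustration of that scheme only, not T-MATS code; the name and signature are assumptions.

```python
def min_max_select(setpoint_cmd, max_limit_cmds, min_limit_cmds):
    """Generic min-max limit-protection selection (illustrative sketch).

    Each argument is an actuator request (e.g., a fuel-flow command)
    produced by an independent regulator. Max-limit regulators win by
    taking the smallest command; min-limit regulators win by taking
    the largest.
    """
    # Max-limit protection: never command more than any max-limit regulator allows.
    cmd = min([setpoint_cmd] + list(max_limit_cmds))
    # Min-limit protection: never command less than any min-limit regulator demands.
    cmd = max([cmd] + list(min_limit_cmds))
    return cmd
```

A conditionally active variant, as described in the presentation, would simply exclude from the two lists any regulator whose limit is not near violation, letting the setpoint regulator act unimpeded the rest of the time.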
Modeling and analysis of advanced binary cycles
Energy Technology Data Exchange (ETDEWEB)
Gawlik, K.
1997-12-31
A computer model (Cycle Analysis Simulation Tool, CAST) and a methodology have been developed to perform value analysis for small, low- to moderate-temperature binary geothermal power plants. The value analysis method allows for incremental changes in the levelized electricity cost (LEC) to be determined between a baseline plant and a modified plant. Thermodynamic cycle analyses and component sizing are carried out in the model, followed by an economic analysis which provides LEC results. The emphasis of the present work is on evaluating the effect of mixed working fluids instead of pure fluids on the LEC of a geothermal binary plant that uses a simple Organic Rankine Cycle. Four resources were studied, spanning the range of 265°F to 375°F. A variety of isobutane- and propane-based mixtures, in addition to pure fluids, were used as working fluids. This study shows that the use of propane mixtures at a 265°F resource can reduce the LEC by 24% when compared to a base case value that utilizes commercial isobutane as its working fluid. The cost savings drop to 6% for a 375°F resource, where an isobutane mixture is favored. Supercritical cycles were found to have the lowest cost at all resources.
Corneal modeling for analysis of photorefractive keratectomy
Della Vecchia, Michael A.; Lamkin-Kennard, Kathleen
1997-05-01
Procedurally, excimer photorefractive keratectomy is based on the refractive correction of composite spherical and cylindrical ophthalmic errors of the entire eye. These refractive errors are input for correction at the corneal plane and for the properly controlled duration and location of laser energy. Topography is usually taken to correspondingly monitor spherical and cylindrical corneorefractive errors. While a corneal topographer provides surface morphologic information, the keratorefractive photoablation is based on the patient's spherical and cylindrical spectacle correction. Topography is at present not directly part of the procedural deterministic parameters. Examining how corneal curvature at each of the keratometric reference loci affects the shape of the resultant corneal photoablated surface may enhance the accuracy of the desired correction. The objective of this study was to develop a methodology to utilize corneal topography for construction of models depicting pre- and post-operative keratomorphology for analysis of photorefractive keratectomy. Multiple types of models were developed and then recreated in optical design software for examination of focal lengths and other optical characteristics. The corneal models were developed using data extracted from the TMS I corneal modeling system (Computed Anatomy, New York, NY). The TMS I does not allow for manipulation of data or differentiation of pre- and post-operative surfaces within its platform, so models needed to be created for analysis. The data were imported into Matlab, where 3D models, surface meshes, and contour plots were created. The data used to generate the models were pre- and post-operative curvatures, heights from the corneal apex, and x-y positions at 6400 locations on the corneal surface. Outlying non-contributory points were eliminated through statistical operations. Pre- and post-operative models were analyzed to obtain the resultant changes in the corneal surfaces during PRK.
Model reduction using a posteriori analysis
Whiteley, Jonathan P.
2010-01-01
Mathematical models in biology and physiology are often represented by large systems of non-linear ordinary differential equations. In many cases, an observed behaviour may be written as a linear functional of the solution of this system of equations. A technique is presented in this study for automatically identifying key terms in the system of equations that are responsible for a given linear functional of the solution. This technique is underpinned by ideas drawn from a posteriori error analysis. This concept has been used in finite element analysis to identify regions of the computational domain and components of the solution where a fine computational mesh should be used to ensure accuracy of the numerical solution. We use this concept to identify regions of the computational domain and components of the solution where accurate representation of the mathematical model is required for accuracy of the functional of interest. The technique presented is demonstrated by application to a model problem, and then to automatically deduce known results from a cell-level cardiac electrophysiology model. © 2010 Elsevier Inc.
Human eyeball model reconstruction and quantitative analysis.
Xing, Qi; Wei, Qi
2014-01-01
Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-Spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the high-resolution resultant models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface respectively. The experiment results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects, which can potentially be used for eye disease diagnosis.
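The Sphere Distance Deviation metric is defined by the paper's authors; as an illustration only, one plausible formalization is to fit a least-squares sphere to the surface points and average the absolute radial deviations. The function names and the exact formula below are assumptions, not the paper's implementation.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: returns (center, radius).

    Uses the linearization |p|^2 = 2 p.c + (r^2 - |c|^2), which turns
    sphere fitting into a single linear solve for c and r.
    """
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = np.sum(p * p, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, t = sol[:3], sol[3]
    radius = np.sqrt(t + center @ center)
    return center, radius

def sphere_distance_deviation(points, center, radius):
    """Mean absolute deviation of point-to-center distances from the radius."""
    d = np.linalg.norm(np.asarray(points, dtype=float) - center, axis=1)
    return float(np.mean(np.abs(d - radius)))
```

For a perfectly spherical surface the deviation is zero; larger values flag departures from sphericity of the reconstructed eyeball surface.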
Davies, Patrick Laurie
2014-01-01
Contents: Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis; Approximation, Randomness, Chaos, Determinism; Approximation; A Concept of Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness; Discrete Models; The Empirical Density; Metrics and Discrepancies; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data; Outliers; Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data; The Location...
Modeling Analysis For Grout Hopper Waste Tank
International Nuclear Information System (INIS)
Lee, S.
2012-01-01
The Saltstone facility at Savannah River Site (SRS) has a grout hopper tank to provide agitator stirring of the Saltstone feed materials. The tank has an approximately 300-gallon capacity to provide a larger working volume for the grout nuclear waste slurry to be held in case of a process upset, and it is equipped with a mechanical agitator, which is intended to keep the grout in motion and agitated so that it won't start to set up. The primary objective of the work was to evaluate the flow performance of mechanical agitators to prevent vortex pull-through for adequate stirring of the feed materials, and to estimate an agitator speed which provides acceptable flow performance with a 45° pitched four-blade agitator. In addition, the power consumption required for the agitator operation was estimated. The modeling calculations were performed in two steps of the Computational Fluid Dynamics (CFD) modeling approach. As a first step, a simple single-stage agitator model with 45° pitched propeller blades was developed for the initial scoping analysis of the flow pattern behaviors for a range of different operating conditions. Based on the initial phase-1 results, a phase-2 model with a two-stage agitator was developed for the final performance evaluations. A series of sensitivity calculations for different designs of agitators and operating conditions was performed to investigate the impact of key parameters on the grout hydraulic performance in a 300-gallon hopper tank. For the analysis, viscous shear was modeled by using the Bingham plastic approximation. Steady-state analyses with a two-equation turbulence model were performed. All analyses were based on three-dimensional results. Recommended operational guidance was developed by using the basic concept that local shear rate profiles and flow patterns can be used as a measure of hydraulic performance and spatial stirring. Flow patterns were estimated by a Lagrangian integration technique along the flow paths.
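The Bingham plastic approximation used for the grout's viscous shear can be illustrated with the standard regularized apparent-viscosity form; the function below is a generic sketch (the name and the cutoff `eps` are illustrative assumptions), not the CFD implementation used in the study.

```python
def bingham_apparent_viscosity(shear_rate, yield_stress, plastic_viscosity,
                               eps=1e-6):
    """Regularized Bingham-plastic apparent viscosity (illustrative sketch).

    A Bingham plastic behaves as (nearly) rigid below its yield stress;
    above it, stress grows linearly:  tau = tau_0 + mu_p * gamma_dot.
    CFD codes typically implement this through an apparent viscosity
        mu_app = mu_p + tau_0 / gamma_dot,
    with the shear rate floored at eps to avoid division by zero.
    """
    return plastic_viscosity + yield_stress / max(shear_rate, eps)
```

At high shear rates the apparent viscosity approaches the plastic viscosity, while at low shear rates it grows very large, mimicking the unyielded (set-up) behavior the agitator is meant to prevent.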
Modeling and Hazard Analysis Using STPA
Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka
2010-09-01
A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness on software-intensive systems, which describes most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analyst than traditional fault tree analysis. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process in a safety-driven design process, where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis
Automating risk analysis of software design models.
Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P
2014-01-01
The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.
Gentrification and models for real estate analysis
Directory of Open Access Journals (Sweden)
Gianfranco Brusa
2013-08-01
This research proposes a deep analysis of the Milanese real estate market, based on data supplied by three real estate organizations; gentrification appears in some neighborhoods, such as Tortona, Porta Genova, Bovisa and Isola Garibaldi: the latter is the subject of the final analysis, via a survey of the physical and social state of the area. The survey took place in two periods (2003 and 2009) to compare the evolution of gentrification. The results of the surveys were employed in a simulation by a multi-agent system model to forecast the long-term evolution of the phenomenon. These neighborhood micro-indicators make it possible to highlight actual trends conditioning a local real estate market, which can translate into phenomena such as gentrification. In the present analysis, the use of cellular automata models applied to a neighborhood in Milan (Isola Garibaldi) produced a dynamic simulation of the gentrification trend over a very long time: the cyclical phenomenon (one loop spans a period of twenty to thirty years) appears several times over a theoretical time of 100-120-150 years. Simulation of long-period scenarios by multi-agent systems and cellular automata provides the estimator with a powerful tool, without limits in implementing it, able to support the appraisal judgment. It also stands to reason that such a tool can sustain urban planning and related evaluation processes.
Dynamical system analysis of interacting models
Carneiro, S.; Borges, H. A.
2018-01-01
We perform a dynamical system analysis of a cosmological model with linear dependence between the vacuum density and the Hubble parameter, with constant-rate creation of dark matter. We show that the de Sitter spacetime is an asymptotically stable critical point, future limit of any expanding solution. Our analysis also shows that the Minkowski spacetime is an unstable critical point, which eventually collapses to a singularity. In this way, such a prescription for the vacuum decay not only predicts the correct future de Sitter limit, but also forbids the existence of a stable Minkowski universe. We also study the effect of matter creation on the growth of structures and their peculiar velocities, showing that it is inside the current errors of redshift space distortions observations.
Analysis of pilgrim dark energy models
Energy Technology Data Exchange (ETDEWEB)
Sharif, M.; Jawad, Abdul [University of the Punjab, Department of Mathematics, Lahore (Pakistan)
2013-04-15
The proposal of pilgrim dark energy is based on the idea that phantom dark energy possesses enough resistive force to preclude black hole formation. We work on this proposal by choosing an interacting framework with cold dark matter and three cutoffs such as Hubble as well as event horizon and conformal age of the universe. We present a graphical analysis and focus our study on the pilgrim dark energy as well as interacting parameters. It is found that these parameters play an effective role on the equation of state parameter for exploring the phantom region of the universe. We also make the analysis of ω-ω′ and point out the freezing region in the ω-ω′ plane. Finally, it turns out that ΛCDM is achieved in the statefinders plane for all models. (orig.)
Model reduction by weighted Component Cost Analysis
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful to compute the modal costs of very high order systems. A numerical example for MINIMAST system is presented.
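The component-cost computation can be sketched for the simplest case: a stable linear system driven by white noise with a state-weighted quadratic cost. This is a simplified illustration under stated assumptions (per-state components, costs V_i = (QX)_ii); the function name is hypothetical, and the paper's full theory additionally covers colored noise, actuator dynamics and modal costs.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def component_costs(A, B, Q):
    """Component costs V_i for the stable system  x' = A x + B w,  w white noise.

    The steady-state covariance X solves the Lyapunov equation
        A X + X A^T + B B^T = 0,
    and the total cost J = E[x^T Q x] = trace(Q X) decomposes into
    per-state contributions V_i = (Q X)_{ii}. States with the smallest
    V_i are candidates for deletion in a reduced-order model.
    """
    X = solve_continuous_lyapunov(A, -B @ B.T)  # steady-state covariance
    return np.diag(Q @ X)                        # per-component costs

# Example: a fast, lightly excited mode contributes little cost.
A = np.diag([-1.0, -10.0])
B = np.eye(2)
Q = np.eye(2)
V = component_costs(A, B, Q)  # [0.5, 0.05]; state 1 would be deleted first
```

The component costs sum to the total cost trace(QX), so deleting the smallest-cost components removes the least possible contribution to the predefined quadratic cost.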
Plasma brake model for preliminary mission analysis
Orsini, Leonardo; Niccolai, Lorenzo; Mengali, Giovanni; Quarta, Alessandro A.
2018-03-01
Plasma brake is an innovative propellantless propulsion system concept that exploits the Coulomb collisions between a charged tether and the ions in the surrounding environment (typically, the ionosphere) to generate an electrostatic force orthogonal to the tether direction. Previous studies on the plasma brake effect have emphasized the existence of a number of different parameters necessary to obtain an accurate description of the propulsive acceleration from a physical viewpoint. The aim of this work is to discuss an analytical model capable of estimating, with the accuracy required by a preliminary mission analysis, the performance of a spacecraft equipped with a plasma brake in a (near-circular) low Earth orbit. The simplified mathematical model is first validated through numerical simulations, and is then used to evaluate the plasma brake performance in some typical mission scenarios, in order to quantify the influence of the system parameters on the mission performance index.
Coletta, Vincent P.; Evans, Jonathan
2008-10-01
We analyze the motion of a gravity powered model race car on a downhill track of variable slope. Using a simple algebraic function to approximate the height of the track as a function of the distance along the track, and taking account of the rotational energy of the wheels, rolling friction, and air resistance, we obtain analytic expressions for the velocity and time of the car as functions of the distance traveled along the track. Photogates are used to measure the time at selected points along the track, and the measured values are in excellent agreement with the values predicted from theory. The design and analysis of model race cars provides a good application of principles of mechanics and suggests interesting projects for classes in introductory and intermediate mechanics.
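The energy balance in the abstract (gravity drive minus rolling friction and air drag, with the wheels' rotational energy folded into an effective mass) can be sketched as a numerical integration over track length. The code below assumes a straight ramp of constant slope for simplicity, whereas the paper treats a variable-slope track and derives analytic expressions; the function name and default parameters are illustrative assumptions.

```python
import math

def run_downhill(height, length, m=0.05, wheel_inertia_factor=0.4,
                 mu_roll=0.0, drag_coeff=0.0, n=10000, g=9.81):
    """Energy-balance integration of a model car on a straight ramp (sketch).

    wheel_inertia_factor f adds the wheels' rotational energy through an
    effective mass m_eff = (1 + f) m, since (1/2)(I/r^2) v^2 adds to the
    translational kinetic energy. Returns (final speed, elapsed time).
    """
    theta = math.asin(height / length)       # constant slope assumed
    m_eff = (1.0 + wheel_inertia_factor) * m
    ds = length / n
    E, t, v = 0.0, 0.0, 0.0                  # kinetic energy, time, speed
    for _ in range(n):
        drive = m * g * math.sin(theta)                       # gravity along track
        resist = mu_roll * m * g * math.cos(theta) \
                 + 0.5 * drag_coeff * v * v                   # rolling + air drag
        E = max(E + (drive - resist) * ds, 0.0)
        v_new = math.sqrt(2.0 * E / m_eff)
        # time step from average speed over the interval (guard against v = 0)
        t += ds / max(0.5 * (v + v_new), 1e-9)
        v = v_new
    return v, t
```

With the friction and drag terms set to zero the loop reproduces the closed-form result v = sqrt(2gh/(1 + f)), and with nonzero mu_roll and drag_coeff it reproduces the qualitative slowing the paper models, which could then be compared against photogate timings.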
Stability Analysis of the Embankment Model
Directory of Open Access Journals (Sweden)
G.S. Gopalakrishna
2009-01-01
In the analysis of an embankment model affected by dynamic forces, the use of a shaking table is a scientific way to assess earthquake behavior. This work focused on a saturated loose sandy foundation and embankment. The results generated through the pore pressure sensors indicated that pore water pressure plays the main role in the creation of liquefaction and in the stability of the system, and also revealed that deformation, settlement, liquefaction intensity and time stability of the system are in direct correlation with the strength and characteristics of the soil. One economical method of stabilizing a soil foundation is the improvement of some part of the soil foundation.
Non standard analysis, polymer models, quantum fields
International Nuclear Information System (INIS)
Albeverio, S.
1984-01-01
We give an elementary introduction to non-standard analysis and its applications to the theory of stochastic processes. This is based on a joint book with J.E. Fenstad, R. Høegh-Krohn and T. Lindstrøm. In particular we give a discussion of a hyperfinite theory of Dirichlet forms with applications to the study of the Hamiltonian for a quantum mechanical particle in the potential created by a polymer. We also discuss new results on the existence of attractive polymer measures in dimension d = 1, 2 and on the (φ²)²sub(d) model of interacting quantum fields. (orig.)
Constraints based analysis of extended cybernetic models.
Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M
2015-11-01
The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints-based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints-based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
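The matching-law control at the core of cybernetic models (enzyme synthesis allocated as u_i = r_i / Σ r_j, pathway activity v_i = r_i / max r_j) can be sketched for batch growth on two substitutable substrates. This is a simplified Kompala-style variant with invented kinetic parameters, not the authors' extended model:

```python
def step(state, dt, mu_max=(1.0, 0.5), K=(0.1, 0.1), Y=0.5, alpha=0.1, beta=0.05):
    """One Euler step of a two-substrate cybernetic growth sketch:
    c = biomass, s = substrates, e = key enzymes for each pathway."""
    c, s, e = state["c"], state["s"], state["e"]
    r = [mu_max[i] * e[i] * s[i] / (K[i] + s[i]) for i in range(2)]
    u = [ri / (sum(r) or 1.0) for ri in r]     # resource allocation (matching law)
    v = [ri / (max(r) or 1.0) for ri in r]     # relative pathway activity
    growth = sum(v[i] * r[i] for i in range(2))
    return {
        "c": c + dt * growth * c,
        "s": [max(s[i] - dt * v[i] * r[i] * c / Y, 0.0) for i in range(2)],
        "e": [e[i] + dt * (u[i] * alpha * s[i] / (K[i] + s[i]) - beta * e[i])
              for i in range(2)],
    }

# batch growth on a glucose-like (fast) and a galactose-like (slow) substrate
state = {"c": 0.01, "s": [1.0, 1.0], "e": [0.01, 0.001]}
for _ in range(2000):          # integrate to t = 20
    state = step(state, 0.01)
```

The faster-return substrate is drawn down first, reproducing the sequential, diauxic consumption pattern the abstract discusses.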
Fluctuation microscopy analysis of amorphous silicon models
Energy Technology Data Exchange (ETDEWEB)
Gibson, J.M., E-mail: jmgibson@fsu.edu [Northeastern University, Department of Physics, Boston MA 02115 (United States); FAMU/FSU Joint College of Engineering, 225 Pottsdamer Street, Tallahassee, FL 32310 (United States); Treacy, M.M.J. [Arizona State University, Department of Physics, Tempe AZ 85287 (United States)
2017-05-15
Highlights: • Studied competing computer models for amorphous silicon and simulated fluctuation microscopy data. • Show that only paracrystalline/random network composite can fit published data. • Specifically show that pure random network or random network with void models do not fit available data. • Identify a new means to measure volume fraction of ordered material. • Identify unreported limitations of the Debye model for simulating fluctuation microscopy data. - Abstract: Using computer-generated models we discuss the use of fluctuation electron microscopy (FEM) to identify the structure of amorphous silicon. We show that a combination of variable resolution FEM to measure the correlation length, with correlograph analysis to obtain the structural motif, can pin down structural correlations. We introduce the method of correlograph variance as a promising means of independently measuring the volume fraction of a paracrystalline composite. From comparisons with published data, we affirm that only a composite material of paracrystalline and continuous random network that is substantially paracrystalline could explain the existing experimental data, and point the way to more precise measurements on amorphous semiconductors. The results are of general interest for other classes of disordered materials.
Modeling and analysis of a resonant nanosystem
Calvert, Scott L.
The majority of investigations into nanoelectromechanical resonators focus on a single area of the resonator's function. This focus varies from the development of a model for a beam's vibration, to the modeling of electrostatic forces, to a qualitative explanation of experimentally-obtained currents. Despite these efforts, there remains a gap between these works, and the level of sophistication needed to truly design nanoresonant systems for efficient commercial use. Towards this end, a comprehensive system model for both a nanobeam resonator and its related experimental setup is proposed. Furthermore, a simulation arrangement is suggested as a method for facilitating the study of the system-level behavior of these devices in a variety of cases that could not be easily obtained experimentally or analytically. The dynamics driving the nanoresonator's motion, as well as the electrical interactions influencing the forcing and output of the system, are modeled, experimentally validated, and studied. The model seeks to develop both a simple circuit representation of the nanoresonator, and to create a mathematical system that can be used to predict and interpret the observed behavior. Due to the assumptions used to simplify the model to a point of reasonable comprehension, the model is most accurate for small beam deflections near the first eigenmode of the beam. The process and results of an experimental investigation are documented, and compared with a circuit simulation modeling the full test system. The comparison qualitatively proves the functionality of the model, while a numerical analysis serves to validate the functionality and setup of the circuit simulation. The use of the simulation enables a much broader investigation of both the electrical behavior and the physical device's dynamics. It is used to complement an assessment of the tuning behavior of the system's linear natural frequency by demonstrating the tuning behavior of the full nonlinear response. The
Modelling and analysis of global coal markets
International Nuclear Information System (INIS)
Trueby, Johannes
2013-01-01
The thesis comprises four interrelated essays featuring modelling and analysis of coal markets. Each of the four essays has a dedicated chapter in this thesis. Chapters 2 to 4 have, from a topical perspective, a backward-looking focus and deal with explaining recent market outcomes in the international coal trade. The findings of those essays may serve as guidance for assessing current coal market outcomes as well as expected market outcomes in the near to medium-term future. Chapter 5 has a forward-looking focus and builds a bridge between explaining recent market outcomes and projecting long-term market equilibria. Chapter 2, Strategic Behaviour in International Metallurgical Coal Markets, deals with market conduct of large exporters in the market of coals used in steel-making in the period 2008 to 2010. In this essay I analyse whether prices and trade-flows in the international market for metallurgical coals were subject to non-competitive conduct in the period 2008 to 2010. To do so, I develop mathematical programming models - a Stackelberg model, two varieties of a Cournot model, and a perfect competition model - for computing spatial equilibria in international resource markets. Results are analysed with various statistical measures to assess the prediction accuracy of the models. The results show that real market equilibria cannot be reproduced with a competitive model. However, real market outcomes can be accurately simulated with the non-competitive models, suggesting that market equilibria in the international metallurgical coal trade were subject to the strategic behaviour of coal exporters. Chapter 3 and chapter 4 deal with market power issues in the steam coal trade in the period 2006 to 2008. Steam coals are typically used to produce steam either for electricity generation or for heating purposes. In Chapter 3 we analyse market behaviour of key exporting countries in the steam coal trade. This chapter features the essay Market Structure Scenarios in
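The competitive-vs-strategic comparison underlying the essay's conclusion can be illustrated with a one-market linear sketch; the demand and cost figures are hypothetical, and the thesis' spatial multi-market programming models are far richer than this:

```python
def cournot(a, b, c, n):
    """Symmetric Cournot-Nash equilibrium for inverse demand P = a - b*Q
    and n firms with marginal cost c: each firm best-responds to rivals."""
    q = (a - c) / (b * (n + 1))
    Q = n * q
    return Q, a - b * Q

def competitive(a, b, c):
    """Perfect competition: price is driven down to marginal cost."""
    return (a - c) / b, c

# hypothetical market: demand P = 100 - 0.5*Q, marginal cost 30, 3 exporters
Qc, Pc = cournot(100, 0.5, 30, n=3)
Qp, Pp = competitive(100, 0.5, 30)
```

Strategic (Cournot) conduct yields a higher price and lower traded volume than the competitive benchmark, which is the qualitative signature the thesis tests real trade-flow data against.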
On contact modelling in isogeometric analysis
Cardoso, R. P. R.; Adetoro, O. B.
2017-11-01
IsoGeometric Analysis (IGA) has proved to be a reliable numerical tool for the simulation of structural behaviour and fluid mechanics. The main reasons for this popularity are essentially: (i) the possibility of using higher order polynomials for the basis functions; (ii) the high convergence rates possible to achieve; (iii) the possibility to operate directly on CAD geometry without the need to resort to a mesh of elements. The major drawback of IGA is the non-interpolatory characteristic of the basis functions, which complicates the handling of essential boundary conditions and makes contact analysis particularly challenging. In this work, IGA is expanded to include frictionless contact procedures for sheet metal forming analyses. Non-Uniform Rational B-Splines (NURBS) are used to model the rigid tools as well as the deformable blank sheet. The contact methods developed are based on a two-step contact search scheme, where during the first step a global search algorithm is used for the allocation of contact knots into potential contact faces, and a second (local) contact search scheme where point inversion techniques are used for the calculation of the contact penetration gap. For completeness, elastoplastic procedures are also included for a proper description of the entire IGA of sheet metal forming processes.
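The B-spline basis underlying NURBS (and the source of the non-interpolatory behaviour noted above) follows the Cox-de Boor recursion; a minimal sketch with an illustrative quadratic open knot vector:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline basis at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    val = 0.0
    d = knots[i + p] - knots[i]
    if d > 0.0:
        val += (u - knots[i]) / d * bspline_basis(i, p - 1, u, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0.0:
        val += (knots[i + p + 1] - u) / d * bspline_basis(i + 1, p - 1, u, knots)
    return val

# open knot vector, quadratic (p = 2) basis on [0, 1]
knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
n = len(knots) - 2 - 1                      # 4 basis functions
vals = [bspline_basis(i, 2, 0.25, knots) for i in range(n)]
```

The values sum to one (partition of unity), yet no single basis function equals one at an interior parameter, so control points are generally not interpolated; that is exactly the property that makes essential boundary conditions and contact constraints harder to enforce than with Lagrange finite elements. NURBS add rational weights on top of these functions.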
Modeling human reliability analysis using MIDAS
International Nuclear Information System (INIS)
Boring, R. L.
2006-01-01
This paper documents current efforts to infuse human reliability analysis (HRA) into human performance simulation. The Idaho National Laboratory is teamed with NASA Ames Research Center to bridge the SPAR-H HRA method with NASA's Man-machine Integration Design and Analysis System (MIDAS) for use in simulating and modeling the human contribution to risk in nuclear power plant control room operations. It is anticipated that the union of MIDAS and SPAR-H will pave the path for cost-effective, timely, and valid simulated control room operators for studying current and next generation control room configurations. This paper highlights considerations for creating the dynamic HRA framework necessary for simulation, including event dependency and granularity. This paper also highlights how the SPAR-H performance shaping factors can be modeled in MIDAS across static, dynamic, and initiator conditions common to control room scenarios. This paper concludes with a discussion of the relationship of the workload factors currently in MIDAS and the performance shaping factors in SPAR-H. (authors)
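The SPAR-H quantification step referenced above can be sketched in a few lines; the adjustment formula below is the commonly documented SPAR-H composite-PSF correction, and the nominal HEP and multiplier values are illustrative assumptions, not values from the paper:

```python
def spar_h_hep(nhep, psf_multipliers):
    """Scale a nominal human error probability (NHEP) by performance shaping
    factor (PSF) multipliers, using the SPAR-H adjustment that keeps the
    result a valid probability when several negative PSFs compound."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    return (nhep * composite) / (nhep * (composite - 1.0) + 1.0)

# illustrative: nominal diagnosis HEP of 0.01 under three degraded PSFs
hep_nominal = spar_h_hep(0.01, [1, 1, 1])      # all PSFs nominal: unchanged
hep_degraded = spar_h_hep(0.01, [10, 10, 5])   # compounded, but bounded below 1
```

Naive multiplication (0.01 x 500 = 5) would exceed 1; the adjustment keeps the compounded HEP a probability, which matters in a dynamic simulation where PSF states change over the course of a scenario.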
ANALYSIS MODEL FOR RETURN ON CAPITAL EMPLOYED
Directory of Open Access Journals (Sweden)
BURJA CAMELIA
2013-02-01
At the microeconomic level, the appreciation of capital profitability is a very complex action which is of interest for stakeholders. The main purpose of this study is to extend the traditional analysis model for capital profitability, based on the ratio "Return on capital employed". In line with it, the objectives of this work aim at the identification of factors that influence the profitability of the capital utilized by a company and the measurement of their contribution in the manifestation of the phenomenon. The proposed analysis model is validated on the use case of a representative company from the agricultural sector. The results obtained reveal that in a company there are some factors which can act positively on capital profitability: capital turnover, sales efficiency, an increase in the share of sales in total revenues, and improvement of expense efficiency. The findings are useful both for decision-making factors in substantiating economic strategies and for capital owners who are interested in the efficiency of their investments.
DEFF Research Database (Denmark)
Raket, Lars Lau
We propose a direction in the field of statistics which we will call functional object analysis. This subfield considers the analysis of functional objects defined on continuous domains. In this setting we will focus on model-based statistics, with a particular emphasis on mixed-effect formulations, where the observed functional signal is assumed to consist of both fixed and random functional effects. This thesis takes the initial steps toward the development of likelihood-based methodology for functional objects. We first consider analysis of functional data defined on high...
Data analysis and source modelling for LISA
International Nuclear Information System (INIS)
Shang, Yu
2014-01-01
Gravitational waves (GWs) are one of the most important predictions of general relativity. Besides the indirect proof of the existence of GWs, there are already several ground-based detectors (such as LIGO, GEO, etc.) and planned future space missions (such as LISA) which aim to detect GWs directly. GWs contain a large amount of information about their source; extracting this information can help us uncover the physical properties of the source, and even open a new window for understanding the Universe. Hence, GW data analysis will be a challenging task in seeking GWs. In this thesis, I present two works on data analysis for LISA. In the first work, we introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze data from the third round of the mock LISA data challenge. We found all five sources present in the data and recovered the coalescence time, chirp mass, mass ratio and sky location with reasonable accuracy. As for the orbital angular momentum and the two spins of the black holes, we found a large number of widely separated modes in the parameter space with similar maximum likelihood values. The performance of this method is comparable, if not superior, to already existing algorithms. In the second work, we introduce a new phenomenological waveform model for the extreme mass ratio inspiral (EMRI) system. This waveform consists of a set of harmonics with constant amplitude and slowly evolving phase, which we decompose in a Taylor series. We use these phenomenological templates to detect the signal in the simulated data and then, assuming a particular EMRI model, estimate the physical parameters of the binary with high precision. The results show that our phenomenological waveform is well suited to the data analysis of EMRI signals.
Sensitivity analysis of Smith's AMRV model
International Nuclear Information System (INIS)
Ho, Chih-Hsiang
1995-01-01
Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited on the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge. The model using the prior Beta(8, 2) yields the highest hazard (= 2.97 x 10^-2). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10^-3, 10^-2, and 10^-1). The resulting estimate of the hazard is 1.39 x 10^-3, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years.
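The averaging over the prior π(p) can be illustrated with the prior means themselves, since E[p] = r/(r+s) for a Beta(r, s) prior; as the abstract reports, the convenience priors place far more mass on large p than the expert-motivated discrete prior does:

```python
def beta_mean(r, s):
    """Mean of a Beta(r, s) prior on the disruption parameter p: E[p] = r/(r+s)."""
    return r / (r + s)

# the seven "mathematical convenience" priors listed in the abstract
priors = [(2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), (1, 1)]
means = {rs: beta_mean(*rs) for rs in priors}

# discrete "three-expert" prior: p equally likely at 1e-3, 1e-2, 1e-1
expert_mean = (1e-3 + 1e-2 + 1e-1) / 3.0
```

Beta(8, 2) has the largest mean (0.8), consistent with it producing the highest hazard estimate, while the three-expert prior's mean (0.037) sits below even the smallest convenience-prior mean, Beta(2, 8) at 0.2.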
Growth models and analysis of development
Energy Technology Data Exchange (ETDEWEB)
Mathur, G
1979-10-01
This paper deals with remnants of neoclassical elements in Keynesian and post-Keynesian thought, and attempts to demonstrate that the elimination of these elements from our modes of thinking would not impoverish economic analysis as a means of solving real problems. In the Keynesian analysis the causation from investment to savings is exhibited in terms of income determination. When put in terms of a capital-theory model, the vector of savings is represented in two ways: real savings and counterpart real savings. The former coincides with the investment vector and the latter with the vector of consumption goods foregone for diverting resources towards equipment making. Thus the Keynesian causation in capital theory terms makes the concept of national savings as an independent variable redundant. The Robinsonian causation in a golden age with full employment and its reversal of direction in a steady state with non-employment are then considered. But in each of these, variables like the rate of savings and the output/capital ratio are found to be dormant variables. They are termed null variables which, being of no account in both full-employment and unemployment situations, could, without loss, be deleted from the repertory of analytical tools. The Harrod formula of the warranted rate of growth, when put in causal form, thus becomes a redundant portion of the economics of growth. The real determinants of the growth rate and real wage rate, on which the analysis of growth or of development should be based, are also depicted.
Berg, Vivian; Nøst, Therese Haugdahl; Hansen, Solrunn; Elverland, Astrid; Veyhe, Anna-Sofía; Jorde, Rolf; Odland, Jon Øyvind; Sandanger, Torkjel Manning
2015-04-01
The mechanisms involved in thyroid homeostasis are complex, and perfluoroalkyl substances (PFASs) have been indicated to interfere at several levels in this endocrine system. Disruption of the maternal thyroid homeostasis during early pregnancy is of particular concern, where subclinical changes in maternal thyroid hormones (THs) may affect embryonic and foetal development. The present study investigated associations between THs, thyroid binding proteins (TH-BPs) and PFAS concentrations in pregnant women from Northern Norway. Women participating in The Northern Norway Mother-and-Child contaminant Cohort Study (MISA) donated a blood sample at three visits related to their pregnancy and postpartum period (during the second trimester, 3 days and 6 weeks after delivery) in the period 2007-2009. Participants were assigned to quartiles according to PFAS concentrations during the second trimester and mixed effects linear models were used to investigate potential associations between PFASs and repeated measurements of THs, TH-BPs, thyroxin binding capacity and thyroid peroxidase antibodies (anti-TPOs). Women within the highest perfluorooctane sulfonate (PFOS) quartile had 24% higher mean concentrations of thyroid stimulating hormone (TSH) compared to the first quartile at all sampling points. Women within the highest quartiles of perfluorodecanoate (PFDA) had 4% lower mean concentrations of triiodothyronine (T3) and women within the highest quartile of perfluoroundecanoate (PFUnDA) had 3% lower mean concentrations of free triiodothyronine (FT3). Further, the difference in concentrations and the changes between three time points were the same for the PFAS quartiles. Thyroxin binding capacity was associated with all the THs and TH-BPs, and was selected as a holistic adjustment for individual changes in TH homeostasis during pregnancy. Finally, adjusting for maternal iodine status did not influence the model predictions. Findings in the present study suggest modifications of
National Research Council Canada - National Science Library
Piskator, Gene
1998-01-01
...) model and to develop a Data Envelopment Analysis (DEA) modeling strategy. First, the FAARR model was verified using a simulation of a known production function and validated using sensitivity analysis and ex-post forecasts...
A catalog of automated analysis methods for enterprise models.
Florez, Hector; Sánchez, Mario; Villalobos, Jorge
2016-01-01
Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated; omissions or miscalculations are very likely. This situation has fostered research on automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.
Visual behaviour analysis and driver cognitive model
Energy Technology Data Exchange (ETDEWEB)
Baujon, J.; Basset, M.; Gissinger, G.L. [Mulhouse Univ., (France). MIPS/MIAM Lab.
2001-07-01
Recent studies on driver behaviour have shown that perception - mainly visual but also proprioceptive perception - plays a key role in the "driver-vehicle-road" system and so considerably affects the driver's decision making. Within the framework of the behaviour analysis and studies low-cost system (BASIL), this paper presents a correlative, qualitative and quantitative study, comparing the information given by visual perception and by the trajectory followed. This information will help to obtain a cognitive model of the Rasmussen type according to different driver classes. Many experiments in real driving situations have been carried out for different driver classes and for a given trajectory profile, using a test vehicle and innovative, specially designed, real-time tools, such as the vision system or the positioning module. (orig.)
PCA: Principal Component Analysis for spectra modeling
Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas
2012-07-01
The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra for the four classifications used to classify objects are made available with the code.
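The PCA step itself can be sketched on mock spectra (here in Python rather than the package's IDL); the wavelength grid, the two Gaussian "features", and the noise level are invented for illustration only:

```python
import numpy as np

def pca(spectra, n_components):
    """PCA of a spectra matrix (rows = objects, cols = wavelength bins) via SVD
    of the mean-centred data; returns mean, components, and per-object scores."""
    mean = spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    return mean, Vt[:n_components], scores

rng = np.random.default_rng(0)
wav = np.linspace(5, 15, 200)                      # illustrative micron grid
comp1 = np.exp(-0.5 * ((wav - 7.7) / 0.3) ** 2)    # emission-bump-like feature
comp2 = -np.exp(-0.5 * ((wav - 9.7) / 0.5) ** 2)   # absorption-like feature
weights = rng.random((50, 2))
spectra = weights @ np.vstack([comp1, comp2]) + 0.01 * rng.normal(size=(50, 200))

mean, components, scores = pca(spectra, 2)
recon = mean + scores @ components
err = np.abs(recon - spectra).max()
```

Because the whole spectrum enters the decomposition, correlated features separate into components without any prior choice of diagnostic bands; here two components reconstruct the mock data to within the noise.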
Structured analysis and modeling of complex systems
Strome, David R.; Dalrymple, Mathieu A.
1992-01-01
The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.
Directory of Open Access Journals (Sweden)
X. Tang
2016-05-01
This study investigates a cross-variable ozone data assimilation (DA) method based on an ensemble Kalman filter (EnKF) that has been used in the companion study to improve ozone forecasts over Beijing and surrounding areas. The main purpose is to delve into the impacts of the cross-variable adjustment of nitrogen oxide (NOx) emissions on the nitrogen dioxide (NO2) forecasts over this region during the 2008 Beijing Olympic Games. A mixed effect on the NO2 forecasts was observed through application of the cross-variable assimilation approach in the real-data assimilation (RDA) experiments. The method improved the NO2 forecasts over almost half of the urban sites, with reductions of the root mean square errors (RMSEs) by 15–36 %, in contrast to big increases of the RMSEs over other urban stations by 56–239 %. Over the urban stations with negative DA impacts, improvement of the NO2 forecasts (with 7 % reduction of the RMSEs) was noticed at night and in the morning, versus significant deterioration during daytime (with 190 % increase of the RMSEs), suggesting that the negative data assimilation impacts mainly occurred during daytime. Ideal-data assimilation (IDA) experiments with a box model and the same cross-variable assimilation method confirmed the mixed effects found in the RDA experiments. In the same way, NOx emission estimation was improved at night and in the morning even under large biases in the prior emission, while it deteriorated during daytime (except for the case of minor errors in the prior emission). The mixed effects observed in the cross-variable data assimilation, i.e., positive data assimilation impacts on NO2 forecasts over some urban sites, negative data assimilation impacts over the other urban sites, and weak data assimilation impacts over suburban sites, highlighted the limitations of the EnKF under strong nonlinear relationships between chemical variables. Under strong nonlinearity between daytime ozone concentrations and
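The cross-variable mechanism, in which observations of one species correct another state variable through the ensemble covariance, can be sketched with a toy two-variable EnKF analysis step; the state layout, correlation structure, and all numbers below are illustrative, not taken from the study:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H):
    """Perturbed-observation EnKF analysis step for a scalar observation of
    H @ x. The gain K = P H^T / (H P H^T + R) has nonzero entries for the
    unobserved variables too, via the sampled cross-covariances in P."""
    mean = ensemble.mean(axis=0)
    A = ensemble - mean
    P = A.T @ A / (len(ensemble) - 1)             # sample covariance
    K = P @ H / (H @ P @ H + obs_var)             # Kalman gain, all variables
    perturb = np.sqrt(obs_var) * np.random.default_rng(1).normal(size=len(ensemble))
    innov = obs + perturb - ensemble @ H
    return ensemble + np.outer(innov, K)

# hypothetical 2-variable state [ozone, NOx emission scale]; only ozone observed
rng = np.random.default_rng(0)
# prior ensemble with built-in ozone/NOx-emission correlation
ens = np.array([60.0, 1.0]) + rng.normal(size=(100, 2)) @ np.array([[10.0, 0.4],
                                                                    [0.0, 0.1]])
H = np.array([1.0, 0.0])
post = enkf_update(ens, obs=80.0, obs_var=4.0, H=H)
```

The ozone observation pulls the ozone members toward 80, and, through the sampled covariance, also shifts the unobserved NOx emission scale; when the true chemistry linking the two is strongly nonlinear (as during daytime), that linear-regression shift can go wrong, which is the limitation the abstract highlights.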
Early Start Denver Model: A Meta-analysis
Directory of Open Access Journals (Sweden)
Jane P. Canoy
2015-11-01
Each child with Autism Spectrum Disorder has different symptoms, skills and types of impairment compared with other children; this is why the word "spectrum" is included in the name of this disorder. Eapen, Crncec, and Walter (2013) claimed that there is emerging evidence that early intervention gives the greatest capacity for a child's development during the first years of life, as "brain plasticity" is high during this period. To date, the only intervention program model for children as young as 18 months that has been validated in a randomized clinical trial is the Early Start Denver Model (ESDM). This study aimed to determine the effectiveness of the Early Start Denver Model for young children with Autism Spectrum Disorders. The study used the meta-analysis method: the researcher utilized studies related to the ESDM published in refereed journals and available online. Five studies were included, totaling 149 children exposed to the ESDM. To examine the "pooled effects" of the ESDM on a variety of outcomes, a meta-analytic procedure was performed after extraction of the outcome data. Comprehensive Meta-Analysis Version 3.3.070 was used to analyze the data. The effectiveness of the Early Start Denver Model for young children with Autism Spectrum Disorder (ASD) depends highly on the intensity of the intervention and on a younger child age. This study provides a basis for effectively implementing an early intervention, such as the ESDM, that shows strong outcome effects for children with Autism Spectrum Disorder.
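The "pooled effects" computation at the heart of such a meta-analysis is, in its simplest fixed-effect form, an inverse-variance weighted mean; the five effect sizes and variances below are hypothetical placeholders, not the values extracted in the study:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect meta-analysis: weight each study by the inverse of its
    variance, so precise studies dominate the pooled estimate."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# hypothetical per-study effect sizes (e.g. Hedges g) and variances
g = [0.8, 0.5, 1.1, 0.3, 0.6]
v = [0.10, 0.08, 0.20, 0.05, 0.12]
est, se = pooled_effect(g, v)
```

The pooled standard error is smaller than any single study's, which is what lets a meta-analysis of five modest trials make a sharper statement than each trial alone; tools such as Comprehensive Meta-Analysis add random-effects weighting and heterogeneity statistics on top of this.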
Modelling structural systems for transient response analysis
International Nuclear Information System (INIS)
Melosh, R.J.
1975-01-01
This paper introduces and reports the success of a direct means of determining the time periods in which a structural system behaves as a linear system. Numerical results are based on post-fracture transient analyses of simplified nuclear piping systems. Knowledge of the linear response ranges will lead to improved analysis-test correlation and more efficient analyses. It permits direct use of data from physical tests in analysis and simplification of the analytical model and interpretation of its behavior. The paper presents a procedure for deducing linearity based on transient responses. Given the forcing functions and responses of discrete points of the system at various times, the process produces evidence of linearity and quantifies an adequate set of equations of motion. Results of use of the process with linear and nonlinear analyses of piping systems with damping illustrate its success. Results cover the application to data from mathematical system responses. The process is successful with mathematical models. In loading ranges in which all modes are excited, eight-digit accuracy of predictions is obtained from the equations of motion deduced. Small changes (less than 0.01%) in the norm of the transfer matrices are produced by manipulation errors for linear systems, yielding evidence that nonlinearity is easily distinguished. Significant changes (greater than 5%) are coincident with relatively large norms of the equilibrium correction vector in nonlinear analyses. The paper shows that deducing linearity and, when admissible, quantifying linear equations of motion from transient response data for piping systems can be achieved with accuracy comparable to that of the response data.
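The idea of deducing linearity from transient response data can be sketched by fitting a one-step linear operator to response snapshots and watching the fit residual; the toy two-state oscillator and its cubic perturbation below are illustrative, not the paper's piping models:

```python
import numpy as np

def linearity_residual(X):
    """Fit x_{k+1} ≈ A x_k by least squares over response snapshots X
    (rows = time samples) and return the worst one-step prediction error:
    it stays near zero only while the system behaves linearly."""
    A, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
    return np.abs(X[:-1] @ A - X[1:]).max()

# damped linear oscillator, state = [displacement, velocity]
A_true = np.array([[0.99, 0.10], [-0.10, 0.98]])
X_lin = np.empty((200, 2)); X_lin[0] = [1.0, 0.0]
for k in range(199):
    X_lin[k + 1] = A_true @ X_lin[k]

# the same oscillator with a cubic stiffness term: no linear fit is adequate
X_nl = np.empty((200, 2)); X_nl[0] = [1.0, 0.0]
for k in range(199):
    x, v = X_nl[k]
    X_nl[k + 1] = [0.99 * x + 0.10 * v, -0.10 * x - 0.10 * x**3 + 0.98 * v]

r_lin = linearity_residual(X_lin)
r_nl = linearity_residual(X_nl)
```

For the linear trajectory the deduced operator reproduces the data to machine precision, mirroring the paper's eight-digit accuracy; for the nonlinear trajectory the residual jumps by many orders of magnitude, which is the kind of norm change used as evidence of nonlinearity.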
Energy Technology Data Exchange (ETDEWEB)
Thornton, Peter E [ORNL; Wang, Weile [ORNL; Law, Beverly E. [Oregon State University; Nemani, Ramakrishna R [NASA Ames Research Center
2009-01-01
The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations can estimate the carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrating and analyzing Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
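The advantage of solving equilibrium (mass-balance) equations over spinning up a model can be illustrated with a toy one-pool carbon model; the pool dynamics, parameter values, and function names below are illustrative assumptions, not Biome-BGC algorithms:

```python
# Toy one-pool carbon model: dC/dt = input - k * C.
# Spin-up integrates forward for many "years"; the equilibrium
# (mass-balance) approach solves input - k*C = 0 directly.
def spin_up(inp, k, years, dt=1.0):
    """Integrate the pool forward in time until `years` have elapsed."""
    c = 0.0
    for _ in range(int(years / dt)):
        c += dt * (inp - k * c)
    return c

def equilibrium(inp, k):
    """Steady-state stock from mass balance: input = k * C."""
    return inp / k

c_spin = spin_up(inp=50.0, k=0.02, years=1000)
c_eq = equilibrium(inp=50.0, k=0.02)
```

The direct solve gives the steady-state stock in one step, whereas the spin-up only approaches it after on the order of 1/k simulated years.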
Development of hydrogen combustion analysis model
Energy Technology Data Exchange (ETDEWEB)
Lim, Tae Jin; Lee, K. D.; Kim, S. N. [Soongsil University, Seoul (Korea, Republic of); Hong, J. S.; Kwon, H. Y. [Seoul National Polytechnic University, Seoul (Korea, Republic of); Kim, Y. B.; Kim, J. S. [Seoul National University, Seoul (Korea, Republic of)
1997-07-01
The objectives of this project are to construct a credible DB for component reliability by developing methodologies and computer codes for assessing component independent-failure and common cause failure probability, incorporating the applicability and dependency of the data. In addition, the ultimate goal is to systematize all the analysis procedures so as to provide plans for preventing component failures, by employing flexible tools for the change of specific plant or data sources. For the first subject, we construct a DB for the similarity index and dependence matrix and propose a systematic procedure for data analysis by investigating the similarity and redundancy of the generic data sources. Next, we develop a computer code for this procedure and construct a reliability database for major components. The second subject focuses on developing a CCF procedure for assessing plant-specific defense ability, rather than on developing another CCF model. We propose a procedure and computer code for estimating CCF event probability by incorporating plant-specific defensive measures. 116 refs., 25 tabs., 24 figs. (author)
Talking Cure Models: A Framework of Analysis
Directory of Open Access Journals (Sweden)
Christopher Marx
2017-09-01
Full Text Available Psychotherapy is commonly described as a “talking cure,” a treatment method that operates through linguistic action and interaction. The operative specifics of therapeutic language use, however, are insufficiently understood, mainly due to a multitude of disparate approaches that advance different notions of what “talking” means and what “cure” implies in the respective context. Accordingly, a clarification of the basic theoretical structure of “talking cure models,” i.e., models that describe therapeutic processes with a focus on language use, is a desideratum of language-oriented psychotherapy research. Against this background, the present paper suggests a theoretical framework of analysis which distinguishes four basic components of “talking cure models”: (1) a foundational theory (which suggests how linguistic activity can affect and transform human experience), (2) an experiential problem state (which defines the problem or pathology of the patient), (3) a curative linguistic activity (which defines linguistic activities that are supposed to effectuate a curative transformation of the experiential problem state), and (4) a change mechanism (which defines the processes and effects involved in such transformations). The purpose of the framework is to establish a terminological foundation that allows for systematically reconstructing basic properties and operative mechanisms of “talking cure models.” To demonstrate the applicability and utility of the framework, five distinct “talking cure models,” which spell out the details of curative “talking” processes in terms of (1) catharsis, (2) symbolization, (3) narrative, (4) metaphor, and (5) neurocognitive inhibition, are introduced and discussed in terms of the framework components. In summary, we hope that our framework will prove useful for the objective of clarifying the theoretical underpinnings of language-oriented psychotherapy research and help to establish a more
Statistical Analysis and Modelling of Olkiluoto Structures
International Nuclear Information System (INIS)
Hellae, P.; Vaittinen, T.; Saksa, P.; Nummela, J.
2004-11-01
Posiva Oy is carrying out investigations for the disposal of spent nuclear fuel at the Olkiluoto site in SW Finland. The investigations have focused on the central part of the island. The layout design of the entire repository requires characterization of notably larger areas and must rely, at least at the current stage, on borehole information from a rather sparse network and on geophysical soundings providing information outside and between the holes. In this work, the structural data according to the current version of the Olkiluoto bedrock model are analyzed. The bedrock model relies heavily on the borehole data, although results of the seismic surveys and, for example, pumping tests are used in determining the orientation and continuation of the structures. The analysis discusses in particular questions related to the frequency and size of the structures. The structures observed in the boreholes mainly dip gently to the southeast. About 9% of the sample length belongs to structures. The proportion is higher in the upper parts of the rock. The number of fracture and crushed zones seems not to depend greatly on depth, whereas the hydraulic features are concentrated in the depth range above -100 m. Below level -300 m, hydraulic conductivity occurs in connection with fractured zones. The hydraulic features especially, but also the fracture and crushed zones, often occur in groups. The frequency of the structures (area of structures per total volume) is estimated to be of the order of 1/100 m. The size of the local structures was estimated by calculating the intersection of the zone with the nearest borehole where the zone has not been detected. Stochastic models using the FracMan software by Golder Associates were generated based on the bedrock model data complemented with the magnetic ground survey data. The seismic surveys (from boreholes KR5, KR13, KR14, and KR19) were used as alternative input data. The generated models were tested by
Linking advanced fracture models to structural analysis
Energy Technology Data Exchange (ETDEWEB)
Chiesa, Matteo
2001-07-01
Shell structures with defects occur in many situations. The defects are usually introduced during the welding process necessary for joining different parts of the structure. Higher utilization of structural materials leads to a need for accurate numerical tools for reliable prediction of structural response. The direct discretization of the cracked shell structure with solid finite elements, in order to perform an integrity assessment of the structure in question, leads to large problems and makes such analysis infeasible in structural application. In this study a link between local material models and structural analysis is outlined. An "ad hoc" element formulation is used in order to connect complex material models to the finite element framework used for structural analysis. An improved elasto-plastic line spring finite element formulation, used in order to take cracks into account, is linked to shell elements, which are further linked to beam elements. In this way one obtains a global model of the shell structure that also accounts for local flexibilities and fractures due to defects. An important advantage of such an approach is a direct fracture mechanics assessment, e.g. via the computed J-integral or CTOD. A recent development in this approach is the notion of two-parameter fracture assessment. This means that the crack tip stress tri-axiality (constraint) is employed in determining the corresponding fracture toughness, giving a much more realistic capacity of cracked structures. The present thesis is organized in six research articles and an introductory chapter that reviews important background literature related to this work. Papers I and II address the performance of shell and line spring finite elements as a cost-effective tool for performing the numerical calculation needed to perform a fracture assessment. In Paper II a failure assessment, based on the testing of a constraint-corrected fracture mechanics specimen under tension, is
Modeling and analysis of solar distributed generation
Ortiz Rivera, Eduardo Ivan
Recent changes in the global economy are creating a big impact on our daily life. The price of oil is increasing and reserves grow smaller every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons that many countries are looking to alternative energy to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer's data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracker (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and prior knowledge of the load or load-matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are the monitoring of climate conditions, the elimination of temperature and solar irradiance sensors, cost reductions for a photovoltaic inverter system, and the development of new algorithms to be integrated with maximum
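A maximum power point can be located numerically on any PVM I-V curve by scanning the operating voltage; the sketch below uses a simplified illustrative diode-style curve with made-up parameters, not the PVM model or the MPPT algorithms proposed in this work:

```python
import math

# Simplified illustrative PV module I-V curve (parameters are made up):
# I(V) = Isc - I0 * (exp(V / a) - 1), and power P = V * I(V).
def pv_current(v, isc=5.0, i0=1e-6, a=1.5):
    return isc - i0 * (math.exp(v / a) - 1.0)

def max_power_point(v_oc, steps=10000):
    """Scan the voltage range [0, v_oc] and return (V, P) at maximum power."""
    best_v, best_p = 0.0, 0.0
    for n in range(steps + 1):
        v = v_oc * n / steps
        p = v * pv_current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

# Open-circuit voltage from I(V) = 0:  V_oc = a * ln(Isc/I0 + 1).
v_oc = 1.5 * math.log(5.0 / 1e-6 + 1.0)
v_mp, p_mp = max_power_point(v_oc)
```

A real MPPT controller would adjust the converter duty cycle online rather than scan the whole curve, but the scan makes the location of the knee of the P-V curve easy to see.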
Aircraft vulnerability analysis by modeling and simulation
Willers, Cornelius J.; Willers, Maria S.; de Waal, Alta
2014-10-01
Infrared missiles pose a significant threat to civilian and military aviation. MANPADS missiles are especially dangerous in the hands of rogue and undisciplined forces. Yet not all launched missiles hit their targets; the miss is attributable either to misuse of the weapon or to missile performance restrictions. This paper analyses some of the factors affecting aircraft vulnerability and demonstrates a structured analysis of the risk and aircraft vulnerability problem. The aircraft-missile engagement is a complex series of events, many of which are only partially understood. Aircraft and missile designers focus on the optimal design and performance of their respective systems, often testing only in a limited set of scenarios. Most missiles react to the contrast intensity, but the variability of the background is rarely considered. Finally, the vulnerability of the aircraft depends jointly on the missile's performance and the doctrine governing the missile's launch. These factors are considered in a holistic investigation. The view direction, altitude, time of day, sun position, latitude/longitude and terrain determine the background against which the aircraft is observed. Especially high gradients in sky radiance occur around the sun and on the horizon. This paper considers uncluttered background scenes (uniform terrain and clear sky) and presents examples of background radiance at all view angles across a sphere around the sensor. A detailed geometrical and spatially distributed radiometric model is used to model the aircraft. This model provides the signature at all possible view angles across the sphere around the aircraft. The signature is determined in absolute terms (no background) and in contrast terms (with background). It is shown that the background significantly affects the contrast signature as observed by the missile sensor. A simplified missile model is constructed by defining the thrust and mass profiles, maximum seeker tracking rate, maximum
Model Based Analysis and Test Generation for Flight Software
Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep
2009-01-01
We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms, and it leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.
Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models
CSIR Research Space (South Africa)
Kruger, FJ
1985-03-01
Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...
Analysis of an innovative business model
Picquendaele, Laetitia
2016-01-01
This master thesis investigates the freemium business model, raising the questions: “Why is the freemium business model innovative, and what are its success factors?” The aim is to analyse this business model by confronting theory and practice. The document therefore begins with a description and discussion of the freemium business model. The literature review concludes by determining the success factors of business model innovation and of the freemium model. The theory in this first p...
Model performance analysis and model validation in logistic regression
Directory of Open Access Journals (Sweden)
Rosa Arboretti Giancristofaro
2007-10-01
Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. At first, we illustrate a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model by using an example taken from a management study.
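Holdout-style assessment of an already-fitted logistic model can be sketched by scoring it on reserved data with quantitative performance measures such as accuracy and AUC; the coefficients and the toy validation set below are illustrative assumptions, not the management-study example:

```python
import math

# Score a fixed (already-fitted) logistic model on toy validation data.
def predict_prob(x, b0=-1.0, b1=2.0):
    """P(y=1 | x) under the logistic model with illustrative coefficients."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

validation = [(0.0, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (1.0, 1)]
probs = [predict_prob(x) for x, _ in validation]
labels = [y for _, y in validation]

# Accuracy at the conventional 0.5 threshold.
accuracy = sum((p >= 0.5) == bool(y) for p, y in zip(probs, labels)) / len(labels)

# AUC as the probability that a random positive outranks a random negative.
pos = [p for p, y in zip(probs, labels) if y == 1]
neg = [p for p, y in zip(probs, labels) if y == 0]
auc = sum(pp > pn for pp in pos for pn in neg) / (len(pos) * len(neg))
```

In a full validation procedure these measures would be computed on data not used for fitting, and compared against the pre-defined thresholds that qualify a model as "good".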
Likelihood analysis of the minimal AMSB model
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)
2017-04-15
We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ{sup 0}{sub 1}, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m{sub χ{sup 0}{sub 1}}
Analysis of deregulation models; Denryoku shijo jiyuka model no bunseki
Energy Technology Data Exchange (ETDEWEB)
Yajima, M. [Central Research Institute of Electric Power Industry, Tokyo (Japan)
1996-04-01
Trends toward power market deregulation were investigated in Japan and 16 other countries, and various deregulation models were examined and evaluated for their merits and demerits. There are four basic models: the franchise bidding model, the competitive bidding in power generation model, the wholesale or retail wheeling model, and the mandatory or voluntary pool model. Power market deregulation has been a global tendency since the second half of the 1970s, with different countries adopting different models. Of these, it is the retail wheeling model and the pool models (open access models) that allow the final customer to select power suppliers, and the number of countries adopting these models is increasing. These models are characterized in that the disintegration of vertical transmission-distribution integration (separation of distribution service and retail supply service) and the liberalization of the retail market are accomplished simultaneously. The pool models, in particular, are enjoying favor because conditions for fair competition have already been prepared and because they are believed to be highly efficient. In Japan and France, where importance is attached to atomic power generation, the competitive bidding model is adopted as a means to harmonize the introduction of competition into the source development and power generation sectors. 7 refs., 4 tabs.
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to the study of biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights into how robust the biological responses are with respect to changes in biological parameters, and into which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.
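Local sensitivity analysis can be sketched as a finite-difference perturbation of one parameter at a time, reporting the normalised (logarithmic) sensitivity coefficient; the toy saturation model below is an illustrative assumption, not any specific pathway model:

```python
# Local sensitivity analysis by finite differences: perturb one parameter
# at a time and measure the relative change in the model output.
def model(params):
    # Toy steady-state output, e.g. of a signalling module: y = k1*S/(k2+S).
    k1, k2, s = params["k1"], params["k2"], params["S"]
    return k1 * s / (k2 + s)

def local_sensitivity(params, name, rel_step=1e-6):
    """Normalised sensitivity d(ln y)/d(ln p) for parameter `name`."""
    base = model(params)
    bumped = dict(params)
    bumped[name] = params[name] * (1.0 + rel_step)
    return (model(bumped) - base) / (base * rel_step)

p = {"k1": 2.0, "k2": 0.5, "S": 1.0}
s_k1 = local_sensitivity(p, "k1")   # y is proportional to k1, so ~1
s_k2 = local_sensitivity(p, "k2")   # analytically -k2/(k2+S) = -1/3 here
```

Global approaches instead sample the whole parameter space (e.g. by Latin hypercube or Sobol sampling) rather than perturbing around a single nominal point.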
ERM model analysis for adaptation to hydrological model errors
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously, and these changes generate errors in flood forecasting models that lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved, due to lack of knowledge about the future state of the catchment under study. Basically, in the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to control these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error, in timing, shape and volume, as the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study, to evaluate the use of its parameters in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
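The three error types discussed (timing, shape and volume) can each be quantified by a simple measure comparing simulated and observed hydrographs; the toy series below are illustrative, not ERM output:

```python
# Three common error measures between observed and simulated hydrographs:
# volume error (total flow bias), timing error (peak shift in time steps),
# and shape error (root-mean-square deviation).
obs = [0.0, 1.0, 3.0, 6.0, 4.0, 2.0, 1.0, 0.5]
sim = [0.0, 0.5, 2.0, 5.0, 6.0, 3.0, 1.5, 0.5]   # delayed, reshaped toy run

volume_error = sum(sim) - sum(obs)
timing_error = sim.index(max(sim)) - obs.index(max(obs))   # in time steps
shape_error = (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5
```

A model-updating scheme would adjust the rainfall-runoff parameters to drive all three measures toward zero, rather than recalibrating the whole model.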
Model Theory in Algebra, Analysis and Arithmetic
Dries, Lou; Macpherson, H Dugald; Pillay, Anand; Toffalori, Carlo; Wilkie, Alex J
2014-01-01
Presenting recent developments and applications, the book focuses on four main topics in current model theory: 1) the model theory of valued fields; 2) undecidability in arithmetic; 3) NIP theories; and 4) the model theory of real and complex exponentiation. Young researchers in model theory will particularly benefit from the book, as will more senior researchers in other branches of mathematics.
Applied data analysis and modeling for energy engineers and scientists
Reddy, T Agami
2011-01-01
"Applied Data Analysis and Modeling for Energy Engineers and Scientists" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and
Experimental Design for Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2001-01-01
This introductory tutorial gives a survey of the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation implied by the simulation model; the resulting regression model is also known as
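The regression-metamodel idea can be sketched for one input factor: run the simulation at designed input points and fit a first-order polynomial by ordinary least squares; the toy "simulation" below is an illustrative stand-in, not any model from the tutorial:

```python
# Fit a first-order regression metamodel y ~ b0 + b1*x to simulation
# input/output data via ordinary least squares (closed form, one factor).
def simulate(x):
    """Toy deterministic 'simulation' standing in for an expensive model."""
    return 3.0 + 2.0 * x

xs = [0.0, 1.0, 2.0, 3.0, 4.0]          # experimental design points
ys = [simulate(x) for x in xs]          # simulation responses

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
b0 = ybar - b1 * xbar
```

The fitted coefficients then serve as cheap surrogates for the simulation when exploring what-if questions; with several factors, a designed experiment (e.g. a factorial design) chooses the input points.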
Evaluation of RCAS Inflow Models for Wind Turbine Analysis
Energy Technology Data Exchange (ETDEWEB)
Tangler, J.; Bir, G.
2004-02-01
The finite element structural modeling in the Rotorcraft Comprehensive Analysis System (RCAS) provides a state-of-the-art approach to aeroelastic analysis. This, coupled with its ability to model all turbine components, results in a methodology that can simulate the complex system interactions characteristic of large wind turbines. In addition, RCAS is uniquely capable of modeling advanced control algorithms and the resulting dynamic responses.
[Model-based biofuels system analysis: a review].
Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin
2011-03-01
Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.
Economic analysis model for total energy and economic systems
International Nuclear Information System (INIS)
Shoji, Katsuhiko; Yasukawa, Shigeru; Sato, Osamu
1980-09-01
This report describes the framing of an economic analysis model developed as a tool for total energy systems. To project and analyze future energy systems, it is important to analyze the relation between the energy system and the economic structure. We prepared an economic analysis model suited for this purpose. A distinguishing feature of our model is that it can analyze energy-related matters in more detail than other economic models, and can forecast long-term economic progress rather than short-term economic fluctuation. From the viewpoint of economics, our model is a long-term multi-sectoral economic analysis model of open Leontief type. The model gave appropriate results in fitting tests and forecasting estimation. (author)
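The open Leontief structure mentioned above can be sketched for two sectors: gross output x must satisfy x = Ax + d, so x = (I - A)^(-1) d. The technology matrix and final-demand values below are illustrative, not the report's data:

```python
# Open Leontief model: gross output x satisfies x = A x + d,
# i.e. (I - A) x = d, with A the technology (input-output) matrix.
A = [[0.2, 0.3],
     [0.1, 0.4]]
d = [100.0, 50.0]   # final demand per sector

# Solve the 2x2 system (I - A) x = d by direct inversion.
a, b = 1 - A[0][0], -A[0][1]
c, e = -A[1][0], 1 - A[1][1]
det = a * e - b * c
x = [(e * d[0] - b * d[1]) / det,
     (-c * d[0] + a * d[1]) / det]

# Mass balance check: x - A x should reproduce the final demand d.
residual = [x[0] - (A[0][0] * x[0] + A[0][1] * x[1]) - d[0],
            x[1] - (A[1][0] * x[0] + A[1][1] * x[1]) - d[1]]
```

In a multi-sectoral energy-economy model the same linear solve links sectoral final demand (including energy demand) to the gross output required of every sector.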
Sensitivity Analysis of a Physiochemical Interaction Model ...
African Journals Online (AJOL)
In this analysis, we study the sensitivity due to variation of the initial condition and of the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.
Linear and Generalized Linear Mixed Models and Their Applications
Jiang, Jiming
2007-01-01
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested
Statistical models for competing risk analysis
International Nuclear Information System (INIS)
Sather, H.N.
1976-08-01
Research results are reported on three new models for potential application to competing risks problems. One section covers the basic statistical relationships underlying the subsequent competing risks model development. Another discusses the problem of comparing cause-specific risk structures, by competing risks theory, in two homogeneous populations, P1 and P2. Weibull models, which allow more generality than the Berkson and Elveback models, are studied for the effect of time on the hazard function. The use of concomitant information for modeling single-risk survival is extended to the multiple-failure-mode domain of competing risks. The model used to illustrate this methodology is a life table model which has constant hazards within pre-designated intervals of the time scale. Two parametric models for bivariate dependent competing risks, which provide interesting alternatives, are proposed and examined
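The life-table setting with constant hazards within intervals admits closed-form competing-risks quantities. The sketch below covers a single interval with two constant cause-specific hazards; the hazard values are illustrative, not the report's data:

```python
import math

# Competing risks with constant cause-specific hazards l1, l2 (the
# piecewise-constant life-table setting).  Cumulative incidence of
# cause 1 by time t:  F1(t) = l1/(l1+l2) * (1 - exp(-(l1+l2)*t)).
def cumulative_incidence(l1, l2, t):
    """Probability of failing from the first cause by time t."""
    tot = l1 + l2
    return (l1 / tot) * (1.0 - math.exp(-tot * t))

def overall_survival(l1, l2, t):
    """Probability of surviving both risks to time t."""
    return math.exp(-(l1 + l2) * t)

l1, l2, t = 0.03, 0.01, 10.0
f1 = cumulative_incidence(l1, l2, t)   # cause 1
f2 = cumulative_incidence(l2, l1, t)   # cause 2 (arguments swapped)
s = overall_survival(l1, l2, t)
```

The three probabilities partition the sample space (F1 + F2 + S = 1), and the cause-specific incidences stay in the ratio of the hazards, which is what makes the constant-hazard life-table model tractable.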
Qualitative Analysis of Integration Adapter Modeling
Ritter, Daniel; Holzleitner, Manuel
2015-01-01
Integration Adapters are a fundamental part of an integration system, since they provide (business) applications with access to its messaging channel. However, their modeling and configuration remain under-represented. In previous work, the integration control and data flow syntax and semantics were expressed in the Business Process Model and Notation (BPMN) as a semantic model for message-based integration, while adapter modeling and the related quality-of-service modeling were left for further studi...
COMPARATIVE ANALYSIS OF SOFTWARE DEVELOPMENT MODELS
Sandeep Kaur*
2017-01-01
No geek is unfamiliar with the concept of the software development life cycle (SDLC). This research deals with the various SDLC models, covering the waterfall, spiral, iterative, agile, V-shaped, and prototype models. In the modern era, all software systems are fallible, as none can stand with certainty. Hence, all aspects of the various models, with their pros and cons, are compared so that it is easy to choose a particular model at the time of need
Application of parameters space analysis tools for empirical model validation
Energy Technology Data Exchange (ETDEWEB)
Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)
2004-01-01
A new methodology for empirical model validation has been proposed in the framework of the Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied for testing modelling hypothesis in the framework of the thermal analysis of an actual building. Sensitivity analysis tools have been first used to identify the parts of the model that can be really tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement has been finally obtained by optimisation techniques. This example of application shows how model parameters space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)
Biomass Scenario Model | Energy Analysis | NREL
The Biomass Scenario Model (BSM) is a unique systems model of the supply chain that converts a range of lignocellulosic biomass feedstocks into biofuels, building on the growth of the corn ethanol industry over the past 25 years and on processes that break down plant matter (lignocellulosic biomass) into fermentable sugars for the production of fuel ethanol.
EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS
LUIJBEN, TCW
1991-01-01
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank
Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program
Moore, Berrien, III; Sahagian, Dork
1997-01-01
The Goal of the GAIM is: To advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.
Analysis of uncertainty in modeling perceived risks
International Nuclear Information System (INIS)
Melnyk, R.; Sandquist, G.M.
2005-01-01
Expanding on a mathematical model developed for quantifying and assessing perceived risks, the distribution functions, variances, and uncertainties associated with estimating the model parameters are quantified. The analytical model permits the identification and assignment of any number of quantifiable risk perception factors that can be incorporated within standard risk methodology. Those risk perception factors associated with major technical issues are modeled using lognormal probability density functions to span the potentially large uncertainty variations associated with these risk perceptions. The model quantifies the logic of public risk perception and provides an effective means for measuring and responding to perceived risks. (authors)
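The abstract's use of lognormal densities to span wide uncertainty ranges in perception factors can be sketched as follows. This is a hedged illustration only: the factor names, medians, and geometric standard deviations below are invented, not taken from the paper.

```python
import numpy as np

# Hypothetical illustration: perceived risk modeled as a base (actuarial)
# risk scaled by independent lognormal "perception factor" multipliers, in
# the spirit of the abstract's lognormal probability density functions.
rng = np.random.default_rng(42)
base_risk = 1e-6  # nominal annual probability (assumed)

# Each factor: (median multiplier, geometric standard deviation) -- invented
factors = {"dread": (5.0, 2.0), "familiarity": (0.5, 1.5), "media": (3.0, 2.5)}

n = 100_000
perceived = np.full(n, base_risk)
for median, gsd in factors.values():
    # lognormal with the given median and geometric standard deviation
    perceived *= rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=n)

# The product of independent lognormals is lognormal: medians multiply.
expected_median = base_risk * np.prod([m for m, _ in factors.values()])
print(f"sample median  : {np.median(perceived):.3e}")
print(f"analytic median: {expected_median:.3e}")
```

Because each factor only rescales the risk, the sample median should track the product of the factor medians, which gives a quick sanity check on the propagation.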
Advances in power system modelling, control and stability analysis
Milano, Federico
2016-01-01
Advances in Power System Modelling, Control and Stability Analysis captures the variety of new methodologies and technologies that are changing the way modern electric power systems are modelled, simulated and operated.
TIPPtool: Compositional Specification and Analysis of Markovian Performance Models
Hermanns, H.; Halbwachs, N.; Peled, D.; Mertsiotakis, V.; Siegle, M.
1999-01-01
In this short paper we briefly describe a tool based on a Markovian stochastic process algebra. The tool offers both model specification and quantitative model analysis in a compositional fashion, wrapped in a user-friendly graphical front-end.
EXPOSURE ANALYSIS MODELING SYSTEM (EXAMS): USER MANUAL AND SYSTEM DOCUMENTATION
The Exposure Analysis Modeling System, first published in 1982 (EPA-600/3-82-023), provides interactive computer software for formulating aquatic ecosystem models and rapidly evaluating the fate, transport, and exposure concentrations of synthetic organic chemicals - pesticides, ...
Modeling Toolkit and Workbook for Defense Analysis Students
National Research Council Canada - National Science Library
Riden, Chad A; Drake, Douglass M
2006-01-01
Topics covered include difference equations, systems of difference equations, Lanchester equations, graphical analysis, proportionality, geometric similarity, model fitting, Monte Carlo simulation...
A Hierarchical Visualization Analysis Model of Power Big Data
Li, Yongjie; Wang, Zheng; Hao, Yang
2018-01-01
Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstract modules such as transaction, engine, computation, control, and storage. The normally separate modules for power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.
[Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].
Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping
2014-06-01
The stability and adaptability of a qualitative analysis model for near-infrared spectra were studied. Separate modeling can significantly improve the stability and adaptability of a model, but its ability to improve adaptability is limited. Joint modeling can improve not only the adaptability but also the stability of the model; at the same time, compared with separate modeling, it shortens the modeling time, reduces the modeling workload, extends the model's term of validity, and improves modeling efficiency. The adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet application requirements, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances the recognition effect. The stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, and the method has good practical value.
Heavy traffic analysis of polling models by mean value analysis
Mei, van der R.D.; Winands, E.M.M.
2008-01-01
In this paper we present a new approach to derive heavy-traffic asymptotics for polling models. We consider the classical cyclic polling model with exhaustive or gated service at each queue, and with general service-time and switch-over time distributions, and study its behavior when the load tends
Stochastic Wake Modelling Based on POD Analysis
Directory of Open Access Journals (Sweden)
David Bastine
2018-03-01
In this work, large eddy simulation data are analysed to investigate a new stochastic modelling approach for the wake of a wind turbine. The data are generated by the large eddy simulation (LES) model PALM combined with an actuator disk with rotation representing the turbine. After applying a proper orthogonal decomposition (POD), three different stochastic models for the weighting coefficients of the POD modes are deduced, resulting in three different wake models. Their performance is investigated mainly on the basis of aeroelastic simulations of a wind turbine in the wake. Three different load cases and their statistical characteristics are compared for the original LES, truncated PODs, and the stochastic wake models with different numbers of POD modes. It is shown that approximately six POD modes are enough to capture the load dynamics on large temporal scales. Modelling the weighting coefficients as independent stochastic processes leads to load characteristics similar to those of the truncated POD. To complete this simplified wake description, we show evidence that the small-scale dynamics can be captured by adding a homogeneous turbulent field to our model. In this way, we present a procedure to derive stochastic wake models from costly computational fluid dynamics (CFD) calculations or elaborate experimental investigations. These numerically efficient models provide the added value of enabling long-term studies. Depending on the aspects of interest, different minimalized models may be obtained.
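The POD step this abstract relies on can be sketched with a singular value decomposition of a snapshot matrix. A synthetic low-rank field stands in for the LES wake data; the retained-mode count of six follows the abstract's finding.

```python
import numpy as np

# Sketch of POD via SVD: snapshots of a (synthetic) wake field are
# decomposed, and the field is reconstructed from a truncated set of modes.
rng = np.random.default_rng(0)
n_space, n_snap = 200, 60

# synthetic snapshot matrix: a few coherent structures plus weak noise
modes_true = rng.standard_normal((n_space, 3))
coeffs_true = rng.standard_normal((3, n_snap))
snapshots = modes_true @ coeffs_true + 0.01 * rng.standard_normal((n_space, n_snap))

mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field

# POD modes = left singular vectors; weighting coefficients = rows of S @ Vt
U, S, Vt = np.linalg.svd(fluct, full_matrices=False)

k = 6  # number of retained modes (the abstract finds roughly six suffice)
recon = mean_field + U[:, :k] @ (np.diag(S[:k]) @ Vt[:k, :])

energy = np.cumsum(S**2) / np.sum(S**2)
print(f"energy captured by {k} modes: {energy[k - 1]:.4f}")
print(f"relative reconstruction error: "
      f"{np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots):.2e}")
```

In the paper's setting the columns of U would be the spatial wake modes and the weighting coefficients would then be modelled as stochastic processes; here the truncation step alone is shown.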
Mineralogic Model (MM3.0) Analysis Model Report
Energy Technology Data Exchange (ETDEWEB)
C. Lum
2002-02-12
The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M&O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M&O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1
The uncertainty analysis of model results a practical guide
Hofer, Eduard
2018-01-01
This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.
Multifractal modelling and 3D lacunarity analysis
International Nuclear Information System (INIS)
Hanen, Akkari; Imen, Bhouri; Asma, Ben Abdallah; Patrick, Dubois; Hedi, Bedoui Mohamed
2009-01-01
This study presents a comparative evaluation of lacunarity of 3D grey level models with different types of inhomogeneity. A new method based on the 'Relative Differential Box Counting' was developed to estimate the lacunarity features of grey level volumes. To validate our method, we generated a set of 3D grey level multifractal models with random, anisotropic and hierarchical properties. Our method gives a lacunarity measurement correlated with the theoretical one and allows a better model classification compared with a classical approach.
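The paper's "Relative Differential Box Counting" estimator is specific to that work; as a point of reference, the classical gliding-box lacunarity (the kind of approach such methods are compared against) can be sketched for a 2D grey-level array. The test fields below are synthetic.

```python
import numpy as np

def gliding_box_lacunarity(img, box):
    """Classical gliding-box lacunarity: E[M^2] / E[M]^2 over box masses."""
    h, w = img.shape
    # mass of every position of a box-by-box gliding window
    masses = np.array([
        img[i:i + box, j:j + box].sum()
        for i in range(h - box + 1)
        for j in range(w - box + 1)
    ], dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0  # var/mean^2 + 1

rng = np.random.default_rng(1)
homogeneous = np.ones((64, 64))                      # perfectly uniform field
clumpy = (rng.random((64, 64)) < 0.1).astype(float)  # sparse random mass

for name, field in [("homogeneous", homogeneous), ("clumpy", clumpy)]:
    print(name, [round(gliding_box_lacunarity(field, b), 3) for b in (2, 4, 8)])
```

A homogeneous field has lacunarity exactly 1 at every scale, while a gappy field scores higher, which is the contrast lacunarity is designed to quantify; the 3D grey-level case in the paper generalises the same idea.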
On constitutive modelling in finite element analysis
International Nuclear Information System (INIS)
Bathe, K.J.; Snyder, M.D.; Cleary, M.P.
1979-01-01
This compact contains a brief introduction to the problems involved in constitutive modeling as well as an outline of the final paper to be submitted. Attention is focussed on three important areas: (1) the need for using theoretically sound material models and the importance of recognizing the limitations of the models, (2) the problem of developing stable and effective numerical representations of the models, and (3) the necessity for selection of an appropriate finite element mesh that can capture the actual physical response of the complete structure. In the final paper, we will be presenting our recent research results pertaining to each of these problem areas. (orig.)
Hierarchical regression analysis in structural Equation Modeling
de Jong, P.F.
1999-01-01
In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main
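The fixed-order entry of predictors described above can be sketched with two sequential least-squares fits: the increment in R-squared at each step is the extra variance attributed to the newly entered predictor. The data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + 0.8 * rng.standard_normal(n)   # correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.standard_normal(n)

def r_squared(predictors, y):
    # ordinary least squares with an intercept column
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_step1 = r_squared([x1], y)        # block 1: x1 entered first
r2_step2 = r_squared([x1, x2], y)    # block 2: x2 entered second
print(f"R2 after x1: {r2_step1:.3f}")
print(f"Delta R2 for x2 entered second: {r2_step2 - r2_step1:.3f}")
```

Because x1 and x2 are correlated, the increment credited to x2 depends on it being entered second, which is exactly why the entry order must be prespecified in a hierarchical analysis.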
Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis
Layton, William J
2012-01-01
This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.
Multi-model analysis in hydrological prediction
Lanthier, M.; Arsenault, R.; Brissette, F.
2017-12-01
Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. They are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
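The PIT histogram diagnostic used in this study can be sketched as follows: for each forecast case, record the percentile of the observation within the ensemble; a flat histogram indicates a well-dispersed ensemble. The Gaussian ensembles below are synthetic placeholders for streamflow forecasts.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cases, n_members = 2000, 20

obs = rng.standard_normal(n_cases)
# a calibrated ensemble: members drawn from the same distribution as obs
ensemble = rng.standard_normal((n_cases, n_members))

# PIT value per case: fraction of ensemble members below the observation
pit = (ensemble < obs[:, None]).mean(axis=1)

counts, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print("PIT histogram counts:", counts.tolist())
print("a flat histogram would put roughly", n_cases // 10, "cases per bin")
```

An under-dispersed ensemble (the problem the abstract targets) would instead pile PIT values into the outer bins, because observations fall outside the ensemble range too often.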
Corporate prediction models, ratios or regression analysis?
Bijnen, E.J.; Wijn, M.F.C.M.
1994-01-01
The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in
Model analysis of adaptive car driving behavior
Wewerinke, P.H.
1996-01-01
This paper deals with two modeling approaches to car driving. The first one is a system theoretic approach to describe adaptive human driving behavior. The second approach utilizes neural networks. As an illustrative example the overtaking task is considered and modeled in system theoretic terms.
Wellness Model of Supervision: A Comparative Analysis
Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.
2012-01-01
This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…
Analysis of a Thin Optical Lens Model
Ivchenko, Vladimir V.
2011-01-01
In this article a thin optical lens model is considered. It is shown that the limits of its applicability are determined not only by the ratio between the thickness of the lens and the moduli of the radii of curvature, but above all by its geometric type. We have derived analytical criteria for the applicability of the model for different types…
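The thin-lens limit under discussion can be illustrated with the two standard relations: the lensmaker's equation for the focal length (thickness term neglected) and the Gaussian imaging equation 1/f = 1/s_o + 1/s_i. The usual sign conventions for signed radii are assumed.

```python
def focal_length(n, r1, r2):
    """Thin-lens lensmaker's equation, thickness term neglected."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

def image_distance(f, s_o):
    """Gaussian thin-lens imaging equation: 1/f = 1/s_o + 1/s_i."""
    return 1.0 / (1.0 / f - 1.0 / s_o)

# symmetric biconvex lens, n = 1.5, |R| = 100 mm on both faces
f = focal_length(1.5, 100.0, -100.0)
s_i = image_distance(f, 300.0)          # object 300 mm in front of the lens
print(f"f = {f:.1f} mm, image at {s_i:.1f} mm")  # → f = 100.0 mm, image at 150.0 mm
```

The article's point is precisely when these thickness-free formulas stop being adequate, depending on the lens's geometric type rather than on thickness alone.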
Extendable linearised adjustment model for deformation analysis
Hiddo Velsink
2015-01-01
Author supplied: "This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices
Reusable launch vehicle model uncertainties impact analysis
Chen, Jiaye; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng
2018-03-01
The reusable launch vehicle (RLV) has the typical characteristics of complex aerodynamic shape and propulsion-system coupling, and its flight environment is highly complicated and intensely changeable. Its model therefore has large uncertainty, which makes the nominal system quite different from the real system. Studying the influence of the uncertainties on the stability of the control system is thus of great significance for controller design. In order to improve the performance of the RLV, this paper proposes an approach for analyzing the influence of the model uncertainties. For a typical RLV, the coupled dynamic and kinematic models are built. Then the different factors that introduce uncertainties during model building are analyzed and summed up, and the model uncertainties are expressed according to the additive uncertainty model. The maximum singular value of the uncertainty matrix is chosen as the boundary model, and the norm of the uncertainty matrix is used to show how much the uncertainty factors influence the stability of the control system. The simulation results illustrate that the inertial factors have the largest influence on the stability of the system, and that it is necessary and important to take the model uncertainties into consideration before designing the controller of this kind of aircraft (like the RLV).
Analysis of Modeling Parameters on Threaded Screws.
Energy Technology Data Exchange (ETDEWEB)
Vigil, Miquela S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vangoethem, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-06-01
Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to model these bolted joints appropriately. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influences of these parameters and will aid in finding the optimal method to model bolted connections.
Analytic uncertainty and sensitivity analysis of models with input correlations
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
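A Monte Carlo counterpart to the analytic point above: for an output y = f(x1, x2), the output variance depends on the input correlation, so assuming independence can misstate the uncertainty. The additive model and the numbers below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sigma = np.array([1.0, 2.0])   # input standard deviations (assumed)

def model(x):
    # simple additive model: y = x1 + x2
    return x[:, 0] + x[:, 1]

def output_std(rho):
    # draw correlated Gaussian inputs via a Cholesky factor of the covariance
    cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
    x = rng.standard_normal((n, 2)) @ np.linalg.cholesky(cov).T
    return model(x).std()

# analytic check: Var(y) = s1^2 + s2^2 + 2*rho*s1*s2
print(f"independent (rho=0.0): {output_std(0.0):.3f}  (analytic {np.sqrt(5.0):.3f})")
print(f"correlated (rho=0.8):  {output_std(0.8):.3f}  (analytic {np.sqrt(8.2):.3f})")
```

Even for this linear model, ignoring a correlation of 0.8 understates the output standard deviation by roughly a quarter, which is the kind of effect the paper's analytic method quantifies without sampling.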
Analysis of nonlinear systems using ARMA [autoregressive moving average] models
International Nuclear Information System (INIS)
Hunter, N.F. Jr.
1990-01-01
While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and modal testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed, and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs
Application of autoregressive moving average model in reactor noise analysis
International Nuclear Information System (INIS)
Tran Dinh Tri
1993-01-01
The application of the autoregressive (AR) model to estimating noise measurements has achieved many successes in reactor noise analysis over the last ten years. The physical processes that take place in a nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently, more correct results can be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of generalised Yule-Walker equations is derived from the equation of an ARMA model, and a method for its solution is given. Numerical results illustrate the application of the proposed method. (author)
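The ordinary Yule-Walker system that this paper generalises to the ARMA case can be sketched directly: AR coefficients are recovered from sample autocovariances by solving a small Toeplitz linear system. A simulated AR(2) series stands in for reactor noise data.

```python
import numpy as np

rng = np.random.default_rng(5)
phi = np.array([0.6, -0.3])    # true AR(2) coefficients (stationary choice)
n = 50_000
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + eps[t]

def autocov(x, lag):
    # biased sample autocovariance at the given lag
    xc = x - x.mean()
    return (xc[: len(xc) - lag] * xc[lag:]).mean()

p = 2
gamma = np.array([autocov(x, k) for k in range(p + 1)])
# Yule-Walker: [gamma_1, gamma_2] = R @ phi with R the Toeplitz autocovariance matrix
R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
phi_hat = np.linalg.solve(R, gamma[1: p + 1])
print("estimated AR coefficients:", np.round(phi_hat, 3))
```

The ARMA case replaces this system with the generalised Yule-Walker equations derived in the paper, where moving-average terms enter the right-hand side; the pure-AR sketch above shows the structure being generalised.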
Evaluation of Thermal Margin Analysis Models for SMART
International Nuclear Information System (INIS)
Seo, Kyong Won; Kwon, Hyuk; Hwang, Dae Hyun
2011-01-01
The thermal margin of SMART is analyzed by three different methods. The first is subchannel analysis with the MATRA-S code, which serves as reference data for the other two methods. The second is an on-line few-channel analysis with the FAST code, which will be integrated into SCOPS/SCOMS. The last one is a single-channel module analysis used in safety analysis. Several thermal margin analysis models for the SMART reactor core based on subchannel analysis were set up and tested. We adopted a single-stage analysis strategy for the thermal analysis of the SMART reactor core. The model should represent the characteristics of the SMART reactor core, including the hot channel, and should be as simple as possible so that it can be evaluated within reasonable time and cost.
Probabilistic forward model for electroencephalography source analysis
International Nuclear Information System (INIS)
Plis, Sergey M; George, John S; Jun, Sung C; Ranken, Doug M; Volegov, Petr L; Schmidt, David M
2007-01-01
Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates
Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R
2015-08-28
Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant difference in bias and in inter-method agreement between CBCT and KVX in the Z-axis (both p-value < 0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p-value < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed
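COM3PARE itself fits Roy's LME with a Kronecker-product covariance in SAS; as a much simpler, hedged illustration of the variance components involved (and of the ICC discussed in the lead abstract of this collection), the between-subject and within-subject components can be estimated on synthetic replicated shift data with the classical one-way random-effects ANOVA estimators.

```python
import numpy as np

# Simplified illustration, NOT the paper's Kronecker-structured LME: one
# method's replicated shift measurements split into between-subject and
# within-subject variance components, with the resulting ICC.
rng = np.random.default_rng(9)
n_subj, n_rep = 500, 3
sigma_between, sigma_within = 1.5, 0.5   # true standard deviations (assumed)

subj_effect = rng.normal(0.0, sigma_between, size=n_subj)
shifts = subj_effect[:, None] + rng.normal(0.0, sigma_within, (n_subj, n_rep))

subj_means = shifts.mean(axis=1)
# mean squares within and between subjects (one-way random-effects ANOVA)
msw = ((shifts - subj_means[:, None]) ** 2).sum() / (n_subj * (n_rep - 1))
msb = n_rep * ((subj_means - shifts.mean()) ** 2).sum() / (n_subj - 1)

var_within = msw                        # within-subject component
var_between = (msb - msw) / n_rep       # between-subject component
icc = var_between / (var_between + var_within)
print(f"within: {var_within:.3f}  between: {var_between:.3f}  ICC: {icc:.3f}")
```

The full COM3PARE model additionally ties the X/Y/Z axes and the two methods together through the Kronecker covariance, which this single-method, single-axis sketch deliberately omits.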
Aircraft vulnerability analysis by modelling and simulation
CSIR Research Space (South Africa)
Willers, CJ
2014-09-01
... attributable to misuse of the weapon or to missile performance restrictions. This paper analyses some of the factors affecting aircraft vulnerability and demonstrates a structured analysis of the risk and aircraft vulnerability problem. The aircraft...
Multifractal modelling and 3D lacunarity analysis
Energy Technology Data Exchange (ETDEWEB)
Hanen, Akkari, E-mail: bettaieb.hanen@topnet.t [Laboratoire de biophysique, TIM, Faculte de Medecine (Tunisia); Imen, Bhouri, E-mail: bhouri_imen@yahoo.f [Unite de recherche ondelettes et multifractals, Faculte des sciences (Tunisia); Asma, Ben Abdallah, E-mail: asma.babdallah@cristal.rnu.t [Laboratoire de biophysique, TIM, Faculte de Medecine (Tunisia); Patrick, Dubois, E-mail: pdubois@chru-lille.f [INSERM, U 703, Lille (France); Hedi, Bedoui Mohamed, E-mail: medhedi.bedoui@fmm.rnu.t [Laboratoire de biophysique, TIM, Faculte de Medecine (Tunisia)
2009-09-28
This study presents a comparative evaluation of lacunarity of 3D grey level models with different types of inhomogeneity. A new method based on the 'Relative Differential Box Counting' was developed to estimate the lacunarity features of grey level volumes. To validate our method, we generated a set of 3D grey level multifractal models with random, anisotropic and hierarchical properties. Our method gives a lacunarity measurement correlated with the theoretical one and allows a better model classification compared with a classical approach.
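The authors' Relative Differential Box Counting variant is not reproduced here, but the underlying lacunarity statistic can be sketched with the classical gliding-box definition of Allain and Cloitre, Λ(r) = E[M²]/E[M]², where M is the mass inside each r-sized box. A minimal numpy sketch for grey-level arrays of any dimension, offered as an illustration of the classical approach the study compares against:

```python
import numpy as np
from itertools import product

def gliding_box_lacunarity(volume, box):
    """Classical gliding-box lacunarity E[M^2]/E[M]^2 for a grey-level
    array; M is the summed intensity in each box-sized window."""
    v = np.asarray(volume, dtype=float)
    # Shape of the set of valid (fully inside) box positions
    shape = tuple(s - box + 1 for s in v.shape)
    masses = np.zeros(shape)
    # Accumulate the box sums by shifting the array over all offsets
    for offset in product(range(box), repeat=v.ndim):
        sl = tuple(slice(o, o + s) for o, s in zip(offset, shape))
        masses += v[sl]
    m1 = masses.mean()
    m2 = (masses ** 2).mean()
    return m2 / (m1 ** 2)
```

A homogeneous volume gives Λ = 1; gappier, more heterogeneous volumes give larger values, which is what makes lacunarity useful for classifying the random, anisotropic and hierarchical models described above.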
Analysis of Brown camera distortion model
Nowakowski, Artur; Skarbek, Władysław
2013-10-01
Contemporary image acquisition devices introduce optical distortion into the image. This results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze orthogonality with regard to radius for its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of distortion parameter estimation is evaluated.
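The Brown model combines radial and decentering (tangential) terms. A minimal sketch of the forward mapping on normalized image coordinates, following the coefficient convention (k1, k2, k3, p1, p2) used by OpenCV's calibration routines:

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Apply the Brown distortion model to normalized coordinates:
    radial terms k1..k3, decentering (tangential) terms p1, p2."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd
```

With all coefficients zero the mapping is the identity; calibration inverts this mapping numerically, since no closed-form inverse exists in general.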
Geometrical analysis of the interacting boson model
International Nuclear Information System (INIS)
Dieperink, A.E.L.
1983-01-01
The Interacting Boson Model is considered, in relation with geometrical models and the application of mean field techniques to algebraic models, in three lectures. In the first, several methods are reviewed to establish a connection between the algebraic formulation of collective nuclear properties in terms of the group SU(6) and the geometric approach. In the second lecture the geometric interpretation of new degrees of freedom that arise in the neutron-proton IBA is discussed, and in the third one some further applications of algebraic techniques to the calculation of static and dynamic collective properties are presented. (U.K.)
2018-03-29
From: Dajun Tang, Principal Investigator. Subj: ONR Grant# N00014-14-1-0239 & N00014-16-1-2371, “TREX 13 Data Analysis/Modeling”. Encl: (1) Final Performance/Technical Report with SF298. The attached enclosures constitute the final technical report for ONR Grant# N00014-14-1-0239 & N00014-16-1-2371, “TREX 13 Data Analysis/Modeling”. cc: Grant & Contract Administrator, APL-UW Office of Sponsor Programs, UW; ONR Seattle – Robert Rice and
Economic and Power System Modeling and Analysis | Water Power | NREL
Economic and Power System Modeling and Analysis covers water power technologies, their possible deployment scenarios, and the economic impacts of this deployment, including research approaches used to estimate the direct and indirect economic impacts of offshore renewable energy projects.
a finite element model for the analysis of bridge decks
African Journals Online (AJOL)
Dr Obe
A Finite Element Model for the Analysis of Bridge Decks. Nigerian Journal of Technology, Vol. 27, No. 1, March 2008. [Fig. 1: Beam-plate structure idealisations: (a) beam-plate system; (b) T-beam structural model.] The matrix displacement method of analysis is used. The continuum structure is
The effect of S-adenosylmethionine on cognitive performance in mice: an animal model meta-analysis.
Directory of Open Access Journals (Sweden)
Sarah E Montgomery
Full Text Available Alzheimer's disease (AD) is the most frequently diagnosed form of dementia resulting in cognitive impairment. Many AD mouse studies, using the methyl donor S-adenosylmethionine (SAM), report improved cognitive ability, but conflicting results between and within studies currently exist. To address this, we conducted a meta-analysis to evaluate the effect of SAM on cognitive ability as measured by Y maze performance. As supporting evidence, we include further discussion of improvements in cognitive ability, by SAM, as measured by the Morris water maze (MWM). We conducted a comprehensive literature review up to April 2014 based on searches querying MEDLINE, EMBASE, Web of Science, the Cochrane Library and Proquest Theses and Dissertation databases. We identified three studies containing a total of 12 experiments that met our inclusion criteria and one study for qualitative review. The data from these studies were used to evaluate the effect of SAM on cognitive performance according to two scenarios: 1. SAM supplemented folate deficient (SFD) diet compared to a folate deficient (FD) diet and 2. SFD diet compared to a nutrient complete (NC) diet. Hedges' g was used to calculate effect sizes and a mixed effects model meta-regression was used to evaluate moderating factors. Our findings showed that the SFD diet was associated with improvements in cognitive performance. SFD diet mice also had superior cognitive performance compared to mice on an NC diet. Further to this, meta-regression analyses indicated a significant positive effect of study quality score and treatment duration on the effect size estimate for both the FD vs SFD analysis and the SFD vs NC analysis. The findings of this meta-analysis demonstrate efficacy of SAM in acting as a cognitive performance-enhancing agent. As a corollary, SAM may be useful in improving spatial memory in patients suffering from many forms of dementia including AD.
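The Hedges' g effect size used in this meta-analysis is Cohen's d with a small-sample bias correction. A minimal sketch, assuming only summary statistics (mean, SD, n) per diet group are available:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: bias-corrected standardized mean difference
    (Cohen's d multiplied by the small-sample correction J)."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                         / (n1 + n2 - 2))
    d = (mean1 - mean2) / s_pooled
    # Small-sample bias correction (Hedges, 1981)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d
```

For two groups of 10 mice with means 10 and 8 and SD 2, d = 1.0 and g shrinks slightly to about 0.958; the correction matters most for the small per-experiment sample sizes typical of animal studies.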
Multiattribute shopping models and ridge regression analysis
Timmermans, H.J.P.
1981-01-01
Policy decisions regarding retailing facilities essentially involve multiple attributes of shopping centres. If mathematical shopping models are to contribute to these decision processes, their structure should reflect the multiattribute character of retailing planning. Examination of existing
Automatic terrain modeling using transfinite element analysis
Collier, Nathan; Calo, Victor M.
2010-01-01
An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques
A Petri Nets Model for Blockchain Analysis
Pinna, Andrea; Tonelli, Roberto; Orrú, Matteo; Marchesi, Michele
2017-01-01
A Blockchain is a global shared infrastructure where cryptocurrency transactions among addresses are recorded, validated and made publicly available in a peer-to-peer network. To date, the best-known and most important cryptocurrency is Bitcoin. In this paper we focus on this cryptocurrency and in particular on modeling the Bitcoin Blockchain using the Petri Nets formalism. The proposed model allows us to quickly collect information about identities owning Bitcoin addresses and to re...
Structural dynamic analysis with generalized damping models analysis
Adhikari , Sondipon
2013-01-01
Since Lord Rayleigh introduced the idea of viscous damping in his classic work "The Theory of Sound" in 1877, it has become standard practice to use this approach in dynamics, covering a wide range of applications from aerospace to civil engineering. However, in the majority of practical cases this approach is adopted more for mathematical convenience than for modeling the physics of vibration damping. Over the past decade, extensive research has been undertaken on more general "non-viscous" damping models and vibration of non-viscously damped systems. This book, along with a related book
Nonlinear Analysis and Modeling of Tires
Noor, Ahmed K.
1996-01-01
The objective of the study was to develop efficient modeling techniques and computational strategies for: (1) predicting the nonlinear response of tires subjected to inflation pressure, mechanical and thermal loads; (2) determining the footprint region, and analyzing the tire pavement contact problem, including the effect of friction; and (3) determining the sensitivity of the tire response (displacements, stresses, strain energy, contact pressures and contact area) to variations in the different material and geometric parameters. Two computational strategies were developed. In the first strategy the tire was modeled by using either a two-dimensional shear flexible mixed shell finite elements or a quasi-three-dimensional solid model. The contact conditions were incorporated into the formulation by using a perturbed Lagrangian approach. A number of model reduction techniques were applied to substantially reduce the number of degrees of freedom used in describing the response outside the contact region. The second strategy exploited the axial symmetry of the undeformed tire, and uses cylindrical coordinates in the development of three-dimensional elements for modeling each of the different parts of the tire cross section. Model reduction techniques are also used with this strategy.
Analysis, Analysis Practices and Implications for Modeling and Simulation
2007-01-01
the Somme, New York: Penguin, 1983. Kent, Glenn A., “Looking Back: Four Decades of Analysis,” Operations Research, Vol. 50, No. 1, 2002, pp. 122–224...to many sources is http://www.saunalahti.fi/fta/EBO.htm (as of December 18, 2006). Effects-based operations are controversial in some respects (Davis
Atomic mixing effects on high fluence Ge implantation into Si at 40 keV
International Nuclear Information System (INIS)
Gras-Marti, A.; Jimenez-Rodriguez, J.J.; Peon-Fernandez, J.; Rodriguez-Vidal, M.; Tognetti, N.P.; Carter, G.; Nobes, M.J.; Armour, D.G.
1982-01-01
Ion implanted profiles of 40 keV Ge+ into Si at fluences ranging from approximately 10^15 ions/cm^2 up to saturation have been measured using the RBS technique. The profiles compare well with the predictions of an analytical model encompassing sputter erosion plus atomic relocation. (orig.)
Analysis and Comparison of Typical Models within Distribution Network Design
DEFF Research Database (Denmark)
Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.
This paper investigates the characteristics of typical optimisation models within Distribution Network Design. During the paper fourteen models known from the literature will be thoroughly analysed. Through this analysis a schematic approach to categorisation of distribution network design models...... for educational purposes. Furthermore, the paper can be seen as a practical introduction to network design modelling as well as a manual or recipe for constructing such a model....
Models as Tools of Analysis of a Network Organisation
Directory of Open Access Journals (Sweden)
Wojciech Pająk
2013-06-01
Full Text Available The paper presents models which may be applied as tools of analysis of a network organisation. The starting point of the discussion is defining the following terms: supply chain and network organisation. Further parts of the paper present the basic assumptions of the analysis of a network organisation and characterise the best known models utilised in such analysis. The purpose of the article is to define the notion and the essence of network organizations and to present the models used for their analysis.
Meta-analysis a structural equation modeling approach
Cheung, Mike W-L
2015-01-01
Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo
Environmental Modeling Center / Marine Modeling and Analysis Branch
Products cover ocean winds (satellite remote sensing), ocean models, ocean waves, sea ice, SST and marine meteorology, including the Real Time Ocean Forecasting System (RTOFS) and Global RTOFS, a hybrid
Lovejoy, Andrew E.; Hilburger, Mark W.
2013-01-01
This document outlines a Modeling and Analysis Plan (MAP) to be followed by the SBKF analysts. It includes instructions on modeling and analysis formulation and execution, model verification and validation, identifying sources of error and uncertainty, and documentation. The goal of this MAP is to provide a standardized procedure that ensures uniformity and quality of the results produced by the project and corresponding documentation.
Tank 5 Model for Sludge Removal Analysis
International Nuclear Information System (INIS)
LEE, SI
2004-01-01
Computational fluid dynamics methods have been used to develop and provide slurry pump operational guidance for sludge heel removal in Tank 5. Flow patterns calculated by the model were used to evaluate the performance of various combinations of operating pumps and their orientation under steady-state indexed and transient oscillation modes. A model used for previous analyses has been updated to add the valve housing distribution piping and pipe clusters of the cooling coil supply system near pump no. 8 to the previous tank Type-I model. In addition, the updated model included twelve concrete support columns. This model would provide a more accurate assessment of sludge removal capabilities. The model focused on removal of the sludge heel located near the wall of Tank 5 using the two new slurry pumps. The models and calculations were based on prototypic tank geometry and expected normal operating conditions as defined by Tank Closure Project Engineering. Computational fluid dynamics models of Tank 5 with different operating conditions were developed using the FLUENT (trademark) code. The modeling results were used to assess the efficiency of sludge suspension and removal operations in the 75-ft tank. The models employed a three-dimensional approach, a two-equation turbulence model, and an approximate representation of flow obstructions. The calculated local velocity was used as a measure of sludge removal and mixing capability. For the simulations, modeling calculations were performed with indexed pump orientations until an optimum flow pattern near the potential location of the sludge heel was established for sludge removal. The calculated results demonstrated that the existing slurry pumps running at 3801 gpm flowrate per nozzle could remove the sludge from the tank with a 101 in liquid level, based on a historical minimum sludge suspension velocity of 2.27 ft/sec. The only exception is the region within maximum 4.5 ft distance from the tank wall boundary at the
Discretization model for nonlinear dynamic analysis of three dimensional structures
International Nuclear Information System (INIS)
Hayashi, Y.
1982-12-01
A discretization model for nonlinear dynamic analysis of three dimensional structures is presented. The discretization is achieved through a three dimensional spring-mass system, and the dynamic response is obtained by direct integration of the equations of motion using central differences. First the viability of the model is verified through the analysis of homogeneous linear structures, and then its performance in the analysis of structures subjected to impulsive or impact loads, taking into account both geometrical and physical nonlinearities, is evaluated. (Author) [pt
Environmental modeling and health risk analysis (ACTS/RISK)
National Research Council Canada - National Science Library
Aral, M. M
2010-01-01
... presents a review of the topics of exposure and health risk analysis. The Analytical Contaminant Transport Analysis System (ACTS) and Health RISK Analysis (RISK) software tools are an integral part of the book and provide computational platforms for all the models discussed herein. The most recent versions of these two softwa...
European Climate - Energy Security Nexus. A model based scenario analysis
International Nuclear Information System (INIS)
Criqui, Patrick; Mima, Silvana
2011-01-01
In this research, we provide an overview of the climate-energy security nexus in the European sector through a model-based scenario analysis with the POLES model. The analysis underlines that under stringent climate policies, Europe takes advantage of a double dividend: in its capacity to develop a new, cleaner energy model, and in lower vulnerability to potential shocks on the international energy markets. (authors)
Comparative analysis of Goodwin's business cycle models
Antonova, A. O.; Reznik, S.; Todorov, M. D.
2016-10-01
We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) are proposed by N. Dharmaraj and K. Vela Velupillai [6] for investigation of the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not the sawtooth modes. If the order of the Taylor series expansion exceeds 5, the approximate ODE becomes unstable independently of the time lag θ.
MODELING HUMAN RELIABILITY ANALYSIS USING MIDAS
Energy Technology Data Exchange (ETDEWEB)
Ronald L. Boring; Donald D. Dudenhoeffer; Bruce P. Hallbert; Brian F. Gore
2006-05-01
This paper summarizes an emerging collaboration between Idaho National Laboratory and NASA Ames Research Center regarding the utilization of high-fidelity MIDAS simulations for modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error with novel control room equipment and configurations, (ii) the investigative determination of risk significance in recreating past event scenarios involving control room operating crews, and (iii) the certification of novel staffing levels in control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of risk in next generation control rooms.
Applications of model theory to functional analysis
Iovino, Jose
2014-01-01
During the last two decades, methods that originated within mathematical logic, particularly set theory and model theory, have exhibited powerful applications to Banach space theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the
CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE
Directory of Open Access Journals (Sweden)
José Luis Bernal Agudo
2015-06-01
Full Text Available The evaluation model that the LOMCE projects is rooted in neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation at the center of the teaching-learning processes. The model reflects poor planning, since the theory that justifies it is not developed into coherent proposals; there is an excessive concern for excellence, while diversity is left out. A comprehensive way of understanding education should be recovered.
Computational Models for Analysis of Illicit Activities
DEFF Research Database (Denmark)
Nizamani, Sarwat
been explored in this thesis by considering them as epidemic-like processes. A mathematical model has been developed based on differential equations, which studies the dynamics of the issues from the very beginning until the issues cease. This study extends classical models of the spread of epidemics...... to describe the phenomenon of contagious public outrage, which eventually leads to the spread of violence following a disclosure of some unpopular political decisions and/or activity. The results shed a new light on terror activity and provide some hint on how to curb the spreading of violence within...
Modeling and analysis of offshore jacket platform
Digital Repository Service at National Institute of Oceanography (India)
Mohan, P.; Sidhaarth, K.R.A.; SanilKumar, V.
analysis is also able to replicate the details of observed behavior of the designed structure. Based on the axial tension/compression and the bending stresses obtained through simulation, the tubular members are designed and the code unity check is done...
Mathematical modeling and analysis of WEDM machining ...
Indian Academy of Sciences (India)
M P GARG
analysis and optimization of the WEDM process parameters of Inconel 625. The four ... fields such as aerospace, missile, and nuclear industry, very complex and ... piping, the alloy continues to find new application and ... automobile, chemical processing, oil refining, marine, ... Inconel 625 is a material of choice for gas.
Horizontal crash testing and analysis of model flatrols
International Nuclear Information System (INIS)
Dowler, H.J.; Soanes, T.P.T.
1985-01-01
To assess the behaviour of a full scale flask and flatrol during a proposed demonstration impact into a tunnel abutment, a mathematical modelling technique was developed and validated. The work was performed at quarter scale and comprised both scale model tests and mathematical analysis in one and two dimensions. Good agreement between the model test results of the 26.8 m/s (60 mph) abutment impacts and the mathematical analysis validated the modelling techniques. The modelling method may be used with confidence to predict the outcome of the proposed full scale demonstration. (author)
Modeling issues in nuclear plant fire risk analysis
International Nuclear Information System (INIS)
Siu, N.
1989-01-01
This paper discusses various issues associated with current models for analyzing the risk due to fires in nuclear power plants. Particular emphasis is placed on the fire growth and suppression models, these being unique to the fire portion of the overall risk analysis. Potentially significant modeling improvements are identified; also discussed are a variety of modeling issues where improvements will help the credibility of the analysis, without necessarily changing the computed risk significantly. The mechanistic modeling of fire initiation is identified as a particularly promising improvement for reducing the uncertainties in the predicted risk. 17 refs., 5 figs. 2 tabs
Integrative Analysis of Metabolic Models – from Structure to Dynamics
Energy Technology Data Exchange (ETDEWEB)
Hartmann, Anja, E-mail: hartmann@ipk-gatersleben.de [Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), Gatersleben (Germany); Schreiber, Falk [Monash University, Melbourne, VIC (Australia); Martin-Luther-University Halle-Wittenberg, Halle (Germany)
2015-01-26
The characterization of biological systems with respect to their behavior and functionality based on versatile biochemical interactions is a major challenge. To understand these complex mechanisms at the systems level, modeling approaches are investigated. Different modeling formalisms allow metabolic models to be analyzed depending on the question to be solved, the biochemical knowledge and the availability of experimental data. Here, we describe a method for an integrative analysis of the structure and dynamics represented by qualitative and quantitative metabolic models. Using various formalisms, the metabolic model is analyzed from different perspectives. Determined structural and dynamic properties are visualized in the context of the metabolic model. Interaction techniques allow the exploration and visual analysis, thereby leading to a broader understanding of the behavior and functionality of the underlying biological system. The System Biology Metabolic Model Framework (SBM² Framework) implements the developed method and, as an example, is applied for the integrative analysis of the crop plant potato.
Global sensitivity analysis of computer models with functional inputs
International Nuclear Information System (INIS)
Iooss, Bertrand; Ribatet, Mathieu
2009-01-01
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on CPU-intensive computer codes which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
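When the code is cheap enough to run directly, the Sobol indices above can be estimated without a metamodel. A minimal pick-freeze (Saltelli-type) Monte Carlo sketch for the first-order index of one scalar input, assuming independent inputs uniform on [0, 1]; this is an illustration of the general technique, not the joint GLM/GAM method of the paper:

```python
import numpy as np

def sobol_first_order(model, n, dim, index, rng):
    """Monte Carlo pick-freeze estimate of the first-order Sobol index
    S_i = Var(E[Y|X_i]) / Var(Y), inputs i.i.d. uniform on [0, 1].

    model: callable mapping an (n, dim) array to n outputs."""
    a = rng.random((n, dim))
    b = rng.random((n, dim))
    # "Freeze" input `index`: copy A, replacing that column from B
    ab = a.copy()
    ab[:, index] = b[:, index]
    ya, yb, yab = model(a), model(b), model(ab)
    var_y = np.var(np.concatenate([ya, yb]))
    # Saltelli (2010) estimator of Var(E[Y|X_i])
    return np.mean(yb * (yab - ya)) / var_y
```

For the linear test function Y = X1 + 2·X2, the analytic indices are S1 = 0.2 and S2 = 0.8, which the estimator recovers to within Monte Carlo error.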
Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis
Luo, Wen; Azen, Razia
2013-01-01
Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
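For the original linear-regression setting, dominance analysis averages each predictor's incremental R² over all subsets of the remaining predictors; the resulting general dominance weights sum exactly to the full-model R². A minimal numpy sketch of that baseline (not the HLM extension proposed in the article), feasible only for a modest number of predictors since all subsets are visited:

```python
import numpy as np
from itertools import combinations

def r_squared(X, y):
    # OLS R-squared with an intercept column
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def general_dominance(X, y):
    """General dominance weights: each predictor's incremental R-squared
    averaged over subsets of the other predictors, by subset size."""
    p = X.shape[1]
    weights = np.zeros(p)
    for i in range(p):
        others = [j for j in range(p) if j != i]
        level_means = []
        for k in range(p):          # subset sizes 0 .. p-1
            incs = []
            for subset in combinations(others, k):
                cols = list(subset)
                base = r_squared(X[:, cols], y) if cols else 0.0
                full = r_squared(X[:, cols + [i]], y)
                incs.append(full - base)
            level_means.append(np.mean(incs))
        weights[i] = np.mean(level_means)
    return weights
```

The weights coincide with the Shapley-value decomposition of R², which is why they add up to the full-model fit and give an order-free measure of relative importance.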
Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis
Young, Cristobal; Holsteen, Katherine
2017-01-01
Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…
System Reliability Analysis Capability and Surrogate Model Application in RAVEN
Energy Technology Data Exchange (ETDEWEB)
Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-11-01
This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and explores new opportunities in the usage of surrogate models by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.
The genetic analysis of repeated measures I: Simplex models
Molenaar, P.C.M.; Boomsma, D.I.
1987-01-01
Extends the simplex model to a model that may be used for the genetic and environmental analysis of covariance structures. This "double" simplex structure can be specified as a linear structural relationships model. It is shown that data that give rise to a simplex correlation structure,
Integration of Design and Control Through Model Analysis
DEFF Research Database (Denmark)
Russel, Boris Mariboe; Henriksen, Jens Peter; Jørgensen, Sten Bay
2000-01-01
of the phenomena models representing the process model identify the relationships between the important process and design variables, which help to understand, define and address some of the issues related to integration of design and control issues. The model analysis is highlighted through examples involving...... processes with mass and/or energy recycle. (C) 2000 Elsevier Science Ltd. All rights reserved....
ANALYSIS/MODEL COVER SHEET, MULTISCALE THERMOHYDROLOGIC MODEL
International Nuclear Information System (INIS)
Buscheck, T.A.
2001-01-01
The purpose of the Multiscale Thermohydrologic Model (MSTHM) is to describe the thermohydrologic evolution of the near-field environment (NFE) and engineered barrier system (EBS) throughout the potential high-level nuclear waste repository at Yucca Mountain for a particular engineering design (CRWMS M&O 2000c). The process-level model will provide thermohydrologic (TH) information and data (such as in-drift temperature, relative humidity, liquid saturation, etc.) for use in other technical products. This data is provided throughout the entire repository area as a function of time. The MSTHM couples the Smeared-heat-source Drift-scale Thermal-conduction (SDT), Line-average-heat-source Drift-scale Thermohydrologic (LDTH), Discrete-heat-source Drift-scale Thermal-conduction (DDT), and Smeared-heat-source Mountain-scale Thermal-conduction (SMT) submodels such that the flow of water and water vapor through partially-saturated fractured rock is considered. The MSTHM accounts for 3-D drift-scale and mountain-scale heat flow, repository-scale variability of stratigraphy and infiltration flux, and waste package (WP)-to-WP variability in heat output from WPs. All submodels use the nonisothermal unsaturated-saturated flow and transport (NUFT) simulation code. The MSTHM is implemented in several data-processing steps. The four major steps are: (1) submodel input-file preparation, (2) execution of the four submodel families with the use of the NUFT code, (3) execution of the multiscale thermohydrologic abstraction code (MSTHAC), and (4) binning and post-processing (i.e., graphics preparation) of the output from MSTHAC. Section 6 describes the MSTHM in detail. The objectives of this Analyses and Model Report (AMR) are to investigate near field (NF) and EBS thermohydrologic environments throughout the repository area at various evolution periods, and to provide TH data that may be used in other process model reports
Electromagnetic modeling method for eddy current signal analysis
International Nuclear Information System (INIS)
Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.
2004-10-01
An electromagnetic modeling method for eddy current signal analysis is necessary before an experiment is performed. Electromagnetic modeling methods consist of analytical methods and numerical methods. The numerical methods can be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM) and the Volume Integral Method (VIM). Each modeling method has merits and demerits, so a suitable method can be chosen by considering the characteristics of each. This report explains the principle and application of each modeling method and compares the modeling programs.
An Analysis of Student Model Portability
Valdés Aguirre, Benjamín; Ramírez Uresti, Jorge A.; du Boulay, Benedict
2016-01-01
Sharing user information between systems is an area of interest for every field involving personalization. Recommender Systems are more advanced in this aspect than Intelligent Tutoring Systems (ITSs) and Intelligent Learning Environments (ILEs). A reason for this is that the user models of Intelligent Tutoring Systems and Intelligent Learning…
Feature Analysis for Modeling Game Content Quality
DEFF Research Database (Denmark)
Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian
2011-01-01
’ preferences, and by defining the smallest game session size for which the model can still predict reported emotion with acceptable accuracy. Neuroevolutionary preference learning is used to approximate the function from game content to reported emotional preferences. The experiments are based on a modified...
Flood Progression Modelling and Impact Analysis
DEFF Research Database (Denmark)
Mioc, Darka; Anton, François; Nickerson, B.
People living in the lower valley of the St. John River, New Brunswick, Canada, frequently experience flooding when the river overflows its banks during spring ice melt and rain. To better prepare the population of New Brunswick for extreme flooding, we developed a new flood prediction model...
Transient analysis models for nuclear power plants
International Nuclear Information System (INIS)
Agapito, J.R.
1981-01-01
The modelling used for the simulation of the Angra-1 start-up reactor tests with the RETRAN computer code is presented. Three tests are simulated: a) nuclear power plant trip from 100% power; b) large power excursion tests; and c) 'load swing' tests.
Social Ecological Model Analysis for ICT Integration
Zagami, Jason
2013-01-01
ICT integration of teacher preparation programmes was undertaken by the Australian Teaching Teachers for the Future (TTF) project in all 39 Australian teacher education institutions and highlighted the need for guidelines to inform systemic ICT integration approaches. A Social Ecological Model (SEM) was used to positively inform integration…
Quantitative Models and Analysis for Reactive Systems
DEFF Research Database (Denmark)
Thrane, Claus
phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided...
Financial Markets Analysis by Probabilistic Fuzzy Modelling
J.H. van den Berg (Jan); W.-M. van den Bergh (Willem-Max); U. Kaymak (Uzay)
2003-01-01
For successful trading in financial markets, it is important to develop financial models where one can identify different states of the market for modifying one's actions. In this paper, we propose to use probabilistic fuzzy systems for this purpose. We concentrate on Takagi–Sugeno
Curricular Change: A Model for Analysis.
Axelrod, Joseph
This interim report on 1 project at the Berkeley Center for Research and Development in Higher Education deals with the construction of a theoretical model of the curricular-instructional subsystem. The relationship between student unrest and the poor quality of education in American colleges has long been evident to educational researchers. The…
Premium adjustment: actuarial analysis on epidemiological models ...
African Journals Online (AJOL)
In this paper, we analyse insurance premium adjustment in the context of an epidemiological model where the insurer's future financial liability is greater than the premium from patients. In this situation, it becomes extremely difficult for the insurer since a negative reserve would severely increase its risk of insolvency, ...
TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION
Directory of Open Access Journals (Sweden)
Goran Klepac
2007-12-01
The REFII model is an original mathematical model for time series data mining. Its main purpose is to automate time series analysis through a unique transformation model of time series. An advantage of this approach is that it links different methods for time series analysis, connects traditional data mining tools to time series, and supports the construction of new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, which means that it is not restricted to a finite set of methods. At its core, it is a model for transforming the values of a time series, which prepares data to be used by different sets of methods based on the same transformation model in the problem domain. The REFII model thus offers a new approach to time series analysis based on a unique transformation model, which serves as a basis for all kinds of time series analysis. Its advantage is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.
The case for repeatable analysis with energy economy optimization models
International Nuclear Information System (INIS)
DeCarolis, Joseph F.; Hunter, Kevin; Sreepathi, Sarat
2012-01-01
Energy economy optimization (EEO) models employ formal search techniques to explore the future decision space over several decades in order to deliver policy-relevant insights. EEO models are a critical tool for decision-makers who must make near-term decisions with long-term effects in the face of large future uncertainties. While the number of model-based analyses proliferates, insufficient attention is paid to transparency in model development and application. Given the complex, data-intensive nature of EEO models and the general lack of access to source code and data, many of the assumptions underlying model-based analysis are hidden from external observers. This paper discusses the simplifications and subjective judgments involved in the model building process, which cannot be fully articulated in journal papers, reports, or model documentation. In addition, we argue that for all practical purposes, EEO model-based insights cannot be validated through comparison to real world outcomes. As a result, modelers are left without credible metrics to assess a model's ability to deliver reliable insight. We assert that EEO models should be discoverable through interrogation of publicly available source code and data. In addition, third parties should be able to run a specific model instance in order to independently verify published results. Yet a review of twelve EEO models suggests that in most cases, replication of model results is currently impossible. We provide several recommendations to help develop and sustain a software framework for repeatable model analysis.
Transducer Analysis and ATILA++ Model Development
2016-10-10
the behavior of single crystal piezoelectric devices. OBJECTIVES To carry out this effort, a team consisting of ISEN...recording the vibration displacements on the opposite surface of the sample with a second receiving transducer. A software program is used to curve-fit the ...analysis in a loop until the desired quality of fit is achieved. The technique has, in the past, been successfully used to determine
Three dimensional mathematical model of tooth for finite element analysis
Directory of Open Access Journals (Sweden)
Puškar Tatjana
2010-01-01
Introduction. The mathematical model of the abutment tooth is the starting point of the finite element analysis of stress and deformation of dental structures. The simplest and easiest way is to form a model according to literature data on the dimensions and morphological characteristics of teeth. Our method is based on forming 3D models using standard geometrical forms (objects) in programmes for solid modeling. Objective. Forming the mathematical model of the abutment of the second upper premolar for finite element analysis of stress and deformation of dental structures. Methods. The abutment tooth has the form of a complex geometric object. It is suitable for modeling in the solid modeling programme SolidWorks. After analyzing the literature data on the morphological characteristics of teeth, we started the modeling by dividing the tooth (a complex geometric body) into simple geometric bodies (cylinder, cone, pyramid, ...). By joining simple geometric bodies together or subtracting bodies from the basic body, we formed the complex geometric body, the tooth. The model was then transferred into Abaqus, a computational programme for finite element analysis. The data were transferred using the standard file format for 3D models, ACIS SAT. Results. Using the solid modeling programme SolidWorks, we developed three models of the abutment of the second maxillary premolar: the model of the intact abutment, the model of the endodontically treated tooth with two remaining cavity walls, and the model of the endodontically treated tooth with two remaining walls and an inserted post. Conclusion. Mathematical models of the abutment made according to literature data are very similar to the real abutment, and the simplifications are minimal. These models enable calculations of stress and deformation of dental structures. The finite element analysis provides useful information for understanding biomechanical problems and gives guidance for clinical research.
Traffic analysis toolbox volume XI : weather and traffic analysis, modeling and simulation.
2010-12-01
This document presents a weather module for the traffic analysis tools program. It provides traffic engineers, transportation modelers and decision makers with a guide for incorporating weather impacts into transportation system analysis and mode...
Moderation analysis using a two-level regression model.
Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott
2014-10-01
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
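The baseline MMR model described in the abstract above is easy to sketch numerically. The following is a minimal illustration of least-squares estimation of a moderated regression with a product term, using synthetic data; it is not the paper's two-level model or its NML estimator, and all variable names and coefficient values are invented for the demo:

```python
import numpy as np

# Synthetic data: z moderates the effect of x on y via the product term x*z.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # predictor
z = rng.normal(size=n)                      # moderator
y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(scale=0.5, size=n)

# MMR design matrix: intercept, predictor, moderator, and the product term.
X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # LS estimates; should be close to (1.0, 0.5, 0.3, 0.4)
```

A significant coefficient on the product term is the usual evidence for moderation in the MMR framework; the paper's contribution is to model the coefficients themselves as functions of the moderator at a second level.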
Formal Analysis of Graphical Security Models
DEFF Research Database (Denmark)
Aslanyan, Zaruhi
, software components and human actors interacting with each other to form so-called socio-technical systems. The importance of socio-technical systems to modern societies requires verifying their security properties formally, while their inherent complexity makes manual analyses impracticable. Graphical models for security offer an unrivalled opportunity to describe socio-technical systems, for they allow one to represent different aspects like human behaviour, computation and physical phenomena in an abstract yet uniform manner. Moreover, these models can be assigned a formal semantics, thereby allowing formal verification of their properties. Finally, their appealing graphical notations make it possible to communicate security concerns in an understandable way also to non-experts, who are often in charge of decision making. This dissertation argues that automated techniques can be developed on graphical security...
Modeling, analysis, and visualization of anisotropy
Özarslan, Evren; Hotz, Ingrid
2017-01-01
This book focuses on the modeling, processing and visualization of anisotropy, irrespective of the context in which it emerges, using state-of-the-art mathematical tools. As such, it differs substantially from conventional reference works, which are centered on a particular application. It covers the following topics: (i) the geometric structure of tensors, (ii) statistical methods for tensor field processing, (iii) challenges in mapping neural connectivity and structural mechanics, (iv) processing of uncertainty, and (v) visualizing higher-order representations. In addition to original research contributions, it provides insightful reviews. This multidisciplinary book is the sixth in a series that aims to foster scientific exchange between communities employing tensors and other higher-order representations of directionally dependent data. A significant number of the chapters were co-authored by the participants of the workshop titled Multidisciplinary Approaches to Multivalued Data: Modeling, Visualization,...
Analysis of mathematical modelling on potentiometric biosensors.
Mehala, N; Rajendran, L
2014-01-01
A mathematical model of potentiometric enzyme electrodes for a nonsteady condition has been developed. The model is based on the system of two coupled nonlinear time-dependent reaction diffusion equations for Michaelis-Menten formalism that describes the concentrations of substrate and product within the enzymatic layer. Analytical expressions for the concentration of substrate and product and the corresponding flux response have been derived for all values of parameters using the new homotopy perturbation method. Furthermore, the complex inversion formula is employed in this work to solve the boundary value problem. The analytical solutions obtained allow a full description of the response curves for only two kinetic parameters (unsaturation/saturation parameter and reaction/diffusion parameter). Theoretical descriptions are given for the two limiting cases (zero and first order kinetics) and relatively simple approaches for general cases are presented. All the analytical results are compared with simulation results using Scilab/Matlab program. The numerical results agree with the appropriate theories.
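As a rough numerical companion to the abstract above, the sketch below time-steps a single substrate equation with Michaelis-Menten consumption inside an enzyme layer by explicit finite differences. The geometry, boundary conditions, and all parameter values are illustrative assumptions; the authors' model couples substrate and product and is solved analytically via homotopy perturbation, which is not reproduced here:

```python
import numpy as np

# Assumed 1D enzyme layer [0, L]: no-flux electrode at x=0, bulk substrate at x=L.
# Equation sketched: dS/dt = D * d2S/dx2 - Vmax * S / (Km + S).
D, Vmax, Km, L = 1e-9, 1e-4, 1e-3, 1e-4   # illustrative parameter values
S_bulk = 1e-3                              # assumed bulk substrate concentration
N = 50
dx = L / (N - 1)
S = np.zeros(N)
S[-1] = S_bulk
dt = 0.4 * dx**2 / D                       # below the explicit stability limit
for _ in range(20000):
    lap = np.zeros(N)
    lap[1:-1] = (S[2:] - 2 * S[1:-1] + S[:-2]) / dx**2
    S = S + dt * (D * lap - Vmax * S / (Km + S))
    S[0] = S[1]                            # no-flux condition at the electrode
    S[-1] = S_bulk                         # fixed bulk boundary
print(S[0])  # near-steady substrate concentration at the electrode surface
```

The electrode-surface value ends up below the bulk concentration because the enzymatic reaction depletes substrate across the layer; the ratio of reaction to diffusion rates (the paper's reaction/diffusion parameter) controls how strong that depletion is.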
Modelling Analysis of Sewage Sludge Amended Soil
DEFF Research Database (Denmark)
Sørensen, P. B.; Carlsen, L.; Vikelsøe, J.
The topic is risk assessment of sludge supply to agricultural soil in relation to xenobiotics. A large variety of xenobiotics arrive at the wastewater treatment plant in the wastewater. Many of these components are hydrophobic and thus will accumulate in the sludge solids and are removed from the plant effluent. The focus in this work is the top soil, as this layer is important for the fate of a xenobiotic substance due to the high biological activity. A simple model for the top soil is used where the substance is assumed homogeneously distributed, as suggested in the European Union System for the Evaluation of Substances (EUSES). It is shown how the fraction of substance mass which is leached from the top soil is a simple function of the ratio between the degradation half lifetime and the adsorption coefficient. This model can be used in probabilistic risk assessment of agricultural soils...
Simulation modeling and analysis in safety. II
International Nuclear Information System (INIS)
Ayoub, M.A.
1981-01-01
The paper introduces and illustrates simulation modeling as a viable approach for dealing with complex issues and decisions in safety and health. The author details two studies: evaluation of employee exposure to airborne radioactive materials and effectiveness of the safety organization. The first study seeks to define a policy to manage a facility used in testing employees for radiation contamination. An acceptable policy is one that would permit the testing of all employees as defined under regulatory requirements, while not exceeding available resources. The second study evaluates the relationship between safety performance and the characteristics of the organization, its management, its policy, and communication patterns among various functions and levels. Both studies use models where decisions are reached based on the prevailing conditions and occurrence of key events within the simulation environment. Finally, several problem areas suitable for simulation studies are highlighted. (Auth.)
Asymptotic analysis of an ion extraction model
International Nuclear Information System (INIS)
Ben Abdallah, N.; Mas-Gallic, S.; Raviart, P.A.
1993-01-01
A simple model for ion extraction from a plasma is analyzed. The order of magnitude of the plasma parameters leads to a singular perturbation problem for a semilinear elliptic equation. We first prove existence of solutions for the perturbed problem and uniqueness under certain conditions. Then we prove the convergence of these solutions, when the parameters go to zero, towards the solution of a Child-Langmuir problem
Spatial analysis and modelling based on activities
CSIR Research Space (South Africa)
Conradie, Dirk CU
2010-01-01
(deliberative attitudes) (Pokahr, 2005). The BDI model does not cover emotional and other 'higher' human attitudes. KRONOS is a generic Computational Building Simulation (CBS) tool that was developed over the past three years to work on advanced... featured, stable, mature and platform independent with an easy-to-use C/C++ Application Program Interface (API). It has advanced joint types and integrated collision detection with friction. ODE is particularly useful for simulating vehicles, objects...
Stochastic modeling and analysis of telecoms networks
Decreusefond, Laurent
2012-01-01
This book addresses the stochastic modeling of telecommunication networks, introducing the main mathematical tools for that purpose, such as Markov processes, real and spatial point processes and stochastic recursions, and presenting a wide range of results on stability, performance and comparison of systems. The authors propose a comprehensive mathematical construction of the foundations of stochastic network theory: Markov chains and continuous-time Markov chains are extensively studied using an original martingale-based approach. A complete presentation of stochastic recursions from an
Automatic terrain modeling using transfinite element analysis
Collier, Nathan
2010-05-31
An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.
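A one-dimensional analogue of the adaptive idea above (project scattered samples onto a coarse basis by least squares, then add degrees of freedom where the error is largest) can be sketched as follows. The hat-function basis, synthetic terrain, and refinement rule are simplified stand-ins for the paper's transfinite elements and image-processing-based error detection:

```python
import numpy as np

def hat_basis(xs, nodes):
    """Evaluate piecewise-linear nodal ("hat") basis functions at sample points."""
    B = np.empty((len(xs), len(nodes)))
    for j in range(len(nodes)):
        e = np.zeros(len(nodes))
        e[j] = 1.0
        B[:, j] = np.interp(xs, nodes, e)   # j-th hat function
    return B

rng = np.random.default_rng(1)
xs = np.sort(rng.uniform(0.0, 1.0, 400))           # scattered "survey" points
terrain = np.exp(-((xs - 0.37) / 0.05) ** 2)       # synthetic sharp ridge
nodes = np.linspace(0.0, 1.0, 5)                   # coarse initial mesh

errs = []
for _ in range(25):
    B = hat_basis(xs, nodes)
    coeffs, *_ = np.linalg.lstsq(B, terrain, rcond=None)   # discrete L2 projection
    err = np.abs(B @ coeffs - terrain)
    errs.append(err.max())
    # Add a degree of freedom where the fit is worst (adaptive refinement).
    nodes = np.unique(np.append(nodes, xs[np.argmax(err)]))
print(len(nodes), errs[0], errs[-1])
```

Because new nodes cluster around the ridge, the worst-case fit error falls as the mesh refines, mirroring how the paper's procedure concentrates degrees of freedom in high-error regions such as canyon walls.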
PROJECT ACTIVITY ANALYSIS WITHOUT THE NETWORK MODEL
Directory of Open Access Journals (Sweden)
S. Munapo
2012-01-01
ENGLISH ABSTRACT: This paper presents a new procedure for analysing and managing activity sequences in projects. The new procedure determines critical activities, critical path, start times, free floats, crash limits, and other useful information without the use of the network model. Even though network models have been used successfully in project management so far, there are weaknesses associated with their use. A network is not easy to generate, and the dummies usually associated with it make the network diagram complex, even though dummy activities have no meaning in the original project management problem. The network model for projects can be avoided while still obtaining all the useful information required for project management. All that is required are the activities, their accurate durations, and their predecessors.
AFRIKAANSE OPSOMMING (in English): The research describes a new method for analysing and managing the sequential activities of projects. The proposed method determines critical activities, the critical path, start times, float, crashing, and other quantities without the use of a network model. The method performs satisfactorily in practice, and bypasses the administrative burden of traditional network models.
Dynamic analysis of a parasite population model
Sibona, G. J.; Condat, C. A.
2002-03-01
We study the dynamics of a model that describes the competitive interaction between an invading species (a parasite) and its antibodies in a living host. This model was recently used to examine the dynamical competition between Trypanosoma cruzi and its antibodies during the acute phase of Chagas' disease. Depending on the antibody properties, the model yields three types of outcomes, corresponding, respectively, to healing, chronic disease, and host death. Here, we study the dynamics of the parasite-antibody interaction with the help of simulations, obtaining phase trajectories and phase diagrams for the system. We show that, under certain conditions, the size of the parasite inoculation can be crucial for the infection outcome, and that a delay in the stimulated production of an antibody species may give the parasite a definitive advantage. We also find a criterion for the relative sizes of the parameters required if parasite-generated decoys are indeed to help the invasion. Decoys may also induce a qualitatively different outcome: a limit cycle in the antibody-parasite population phase trajectories.
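A hypothetical two-equation caricature of such a parasite-antibody race can be integrated directly; this is not the authors' model (which includes decoys and multiple antibody species), and all parameter values and the functional forms are invented for illustration:

```python
import numpy as np

def simulate(n0, r=1.0, alpha=0.02, k=0.5, mu=0.1, dt=1e-3, steps=20000):
    """Forward-Euler integration of a toy parasite (n) / antibody (a) race:
       dn/dt = r*n - alpha*a*n   (growth minus antibody-mediated removal)
       da/dt = k*n - mu*a        (stimulated production minus natural decay)"""
    n, a = float(n0), 0.0
    for _ in range(steps):
        dn = r * n - alpha * a * n
        da = k * n - mu * a
        n = max(n + dt * dn, 0.0)      # populations clipped at zero
        a = max(a + dt * da, 0.0)
    return n, a

# Phase-plane behaviour can be compared for a small vs. a large inoculation.
n_small, a_small = simulate(1.0)
n_large, a_large = simulate(100.0)
print(n_small, a_small)
print(n_large, a_large)
```

In this toy version the trajectories spiral toward a coexistence point (a chronic-infection analogue); richer outcomes like clearance, host death, or limit cycles require the additional mechanisms (delays, decoys) discussed in the abstract.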
Modeling and Analysis of Wrinkled Membranes: An Overview
Yang, B.; Ding, H.; Lou, M.; Fang, H.; Broduer, Steve (Technical Monitor)
2001-01-01
Thin-film membranes are basic elements of a variety of space inflatable/deployable structures. Wrinkling degrades the performance and reliability of these membrane structures, and hence has been a topic of continued interest. Wrinkling analysis of membranes of general geometry and arbitrary boundary conditions is quite challenging. The objective of this presentation is two-fold. Firstly, the existing models of wrinkled membranes and related numerical solution methods are reviewed. The important issues to be discussed are the capability of a membrane model to characterize taut, wrinkled and slack states of membranes in a consistent and physically reasonable manner; the ability of a wrinkling analysis method to predict the formation and growth of wrinkled regions, and to determine out-of-plane deformation and wrinkled waves; the convergence of a numerical solution method for wrinkling analysis; and the compatibility of a wrinkling analysis with general-purpose finite element codes. Based on this review, several open issues in modeling and analysis of wrinkled membranes that are to be addressed in future research are summarized. The second objective of this presentation is to introduce a newly developed membrane model of two viable parameters (2-VP model) and an associated parametric finite element method (PFEM) for wrinkling analysis. The innovations and advantages of the proposed membrane model and PFEM-based wrinkling analysis are: (1) via a unified stress-strain relation, the 2-VP model treats the taut, wrinkled, and slack states of membranes consistently; (2) the PFEM-based wrinkling analysis has guaranteed convergence; (3) the 2-VP model along with PFEM is capable of predicting membrane out-of-plane deformations; and (4) the PFEM can be integrated into any existing finite element code. Preliminary numerical examples are also included in this presentation to demonstrate the 2-VP model and the PFEM-based wrinkling analysis approach.
Green, Christopher T.; Böhlke, John Karl; Bekins, Barbara A.; Phillips, Steven P.
2010-01-01
Gradients in contaminant concentrations and isotopic compositions commonly are used to derive reaction parameters for natural attenuation in aquifers. Differences between field-scale (apparent) estimated reaction rates and isotopic fractionations and local-scale (intrinsic) effects are poorly understood for complex natural systems. For a heterogeneous alluvial fan aquifer, numerical models and field observations were used to study the effects of physical heterogeneity on reaction parameter estimates. Field measurements included major ions, age tracers, stable isotopes, and dissolved gases. Parameters were estimated for the O2 reduction rate, denitrification rate, O2 threshold for denitrification, and stable N isotope fractionation during denitrification. For multiple geostatistical realizations of the aquifer, inverse modeling was used to establish reactive transport simulations that were consistent with field observations and served as a basis for numerical experiments to compare sample-based estimates of "apparent" parameters with "true" (intrinsic) values. For this aquifer, non-Gaussian dispersion reduced the magnitudes of apparent reaction rates and isotope fractionations to a greater extent than Gaussian mixing alone. Apparent and true rate constants and fractionation parameters can differ by an order of magnitude or more, especially for samples subject to slow transport, long travel times, or rapid reactions. The effect of mixing on apparent N isotope fractionation potentially explains differences between previous laboratory and field estimates. Similarly, predicted effects on apparent O2 threshold values for denitrification are consistent with previous reports of higher values in aquifers than in the laboratory. These results show that hydrogeological complexity substantially influences the interpretation and prediction of reactive transport.
Directory of Open Access Journals (Sweden)
Léa Houpert
2018-02-01
Although mixing tree species is considered an efficient risk-reduction strategy in the face of climate change, the conditions under which mixtures are more productive than monocultures are under ongoing debate. Generalizations have been difficult because of the variety of methods used and due to contradictory findings regarding the effects of the species investigated, mixing proportions, and many site and stand conditions. Using data from 960 plots of the Swiss National Forest Inventory, we assessed whether Picea abies (L.) Karst.–Fagus sylvatica L. mixed stands are more productive than pure stands, and whether the mixing effect depends on site or stand characteristics. The species proportions were estimated using species proportion by area, which depends on the maximum stand basal area of an unmanaged stand (BAmax). Four different alternatives were used to estimate BAmax and to investigate the effect of these differing alternatives on the estimated mixture effect. On average, the mixture had a negative effect on the growth of Picea abies; however, this effect decreased as moisture availability increased. Fagus sylvatica grew better in mixtures, and this effect increased with site quality. A significant interaction between species proportions and quadratic mean diameter, a proxy for stand age, was found for both species: the older the stand, the better the growth of Fagus sylvatica and the lower the growth of Picea abies. Overyielding was predicted for 80% of the investigated sites. The choice of alternative for estimating BAmax only weakly modulated the estimated mixture effect, and it did not affect the way mixing effects changed with site characteristics.
Human Modeling for Ground Processing Human Factors Engineering Analysis
Stambolian, Damon B.; Lawrence, Brad A.; Stelges, Katrine S.; Steady, Marie-Jeanne O.; Ridgwell, Lora C.; Mills, Robert E.; Henderson, Gena; Tran, Donald; Barth, Tim
2011-01-01
There have been many advancements and accomplishments over the last few years in the use of human modeling for human factors engineering analysis in spacecraft design. The key methods used are motion capture and computer-generated human models. The focus of this paper is to explain the human modeling currently used at Kennedy Space Center (KSC) and to describe the plans for human modeling in future spacecraft designs.
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T
Continuum methods of physical modeling continuum mechanics, dimensional analysis, turbulence
Hutter, Kolumban
2004-01-01
The book unifies classical continuum mechanics and turbulence modeling, i.e. the same fundamental concepts are used to derive model equations for material behaviour and turbulence closure and complements these with methods of dimensional analysis. The intention is to equip the reader with the ability to understand the complex nonlinear modeling in material behaviour and turbulence closure as well as to derive or invent his own models. Examples are mostly taken from environmental physics and geophysics.
Integration of Design and Control through Model Analysis
DEFF Research Database (Denmark)
Russel, Boris Mariboe; Henriksen, Jens Peter; Jørgensen, Sten Bay
2002-01-01
A systematic computer aided analysis of the process model is proposed as a pre-solution step for integration of design and control problems. The process model equations are classified in terms of balance equations, constitutive equations and conditional equations. Analysis of the phenomena models representing the constitutive equations identifies the relationships between the important process and design variables, which helps to understand, define and address some of the issues related to integration of design and control. Furthermore, the analysis is able to identify a set of process (control) variables, and (structure selection) issues for the integrated problems are considered.
Analysis of Jingdong Mall Logistics Distribution Model
Shao, Kang; Cheng, Feng
In recent years, the development of electronic commerce in China has accelerated. The role of logistics has been highlighted, and more and more electronic commerce enterprises are beginning to realize the importance of logistics to the success or failure of the enterprise. In this paper, the author takes Jingdong Mall as an example, performs a SWOT analysis of the current state of its self-built logistics system, identifies the problems in Jingdong Mall's current logistics distribution, and gives appropriate recommendations.
AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS
Energy Technology Data Exchange (ETDEWEB)
Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang
2010-08-01
The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data is a very high priority. Following a loss of coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structure. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics models developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe is compared with experimental results.
Risk analysis: divergent models and convergent interpretations
Carnes, B. A.; Gavrilova, N.
2001-01-01
Material presented at a NASA-sponsored workshop on risk models for exposure conditions relevant to prolonged space flight is described in this paper. Analyses used mortality data from experiments conducted at Argonne National Laboratory on the long-term effects of external whole-body irradiation on B6CF1 mice by 60Co gamma rays and fission neutrons delivered as a single exposure or protracted over either 24 or 60 once-weekly exposures. The maximum dose considered was restricted to 1 Gy for neutrons and 10 Gy for gamma rays. Proportional hazard models were used to investigate the shape of the dose response at these lower doses for deaths caused by solid-tissue tumors and tumors of either connective or epithelial tissue origin. For protracted exposures, a significant mortality effect was detected at a neutron dose of 14 cGy and a gamma-ray dose of 3 Gy. For single exposures, radiation-induced mortality for neutrons also occurred within the range of 10-20 cGy, but dropped to 86 cGy for gamma rays. Plots of risk relative to control estimated for each observed dose gave a visual impression of nonlinearity for both neutrons and gamma rays. At least for solid-tissue tumors, male and female mortality was nearly identical for gamma-ray exposures, but mortality risks for females were higher than for males for neutron exposures. As expected, protracting the gamma-ray dose reduced mortality risks. Although curvature consistent with that observed visually could be detected by a model parameterized to detect curvature, a relative risk term containing only a simple term for total dose was usually sufficient to describe the dose response. Although detectable mortality for the three pathology end points considered typically occurred at the same level of dose, the highest risks were almost always associated with deaths caused by tumors of epithelial tissue origin.
Analysis and modeling of "focus" in context
DEFF Research Database (Denmark)
Hovy, Dirk; Anumanchipalli, Gopala; Parlikar, Alok
2013-01-01
This paper uses a crowd-sourced definition of a speech phenomenon we have called focus. Given sentences, text and speech, in isolation and in context, we asked annotators to identify what we term the focus word. We present their consistency in identifying the focused word, when presented with text… or speech stimuli. We then build models to show how well we predict that focus word from lexical (and higher) level features. Also, using spectral and prosodic information, we show the differences in these focus words when spoken with and without context. Finally, we show how we can improve speech synthesis…
Early Start DENVER Model: A Meta - analysis
Jane P. Canoy; Helen B. Boholano
2015-01-01
Each child with Autism Spectrum Disorder has different symptoms, skills, and types of impairment from other children; this is why the word “spectrum” is included in this disorder's name. Eapen, Crncec, and Walter (2013) claimed that there is emerging evidence that early intervention yields the greatest gains in a child's development during the first years of life, as “brain plasticity” is high during this period. With this, the only intervention program model for children as young a...
Compartmentalization analysis using discrete fracture network models
Energy Technology Data Exchange (ETDEWEB)
La Pointe, P.R.; Eiben, T.; Dershowitz, W. [Golder Associates, Redmond, WA (United States)]; Wadleigh, E. [Marathon Oil Co., Midland, TX (United States)]
1997-08-01
This paper illustrates how Discrete Fracture Network (DFN) technology can serve as a basis for the calculation of reservoir engineering parameters for the development of fractured reservoirs. It describes the development of quantitative techniques for defining the geometry and volume of structurally controlled compartments. These techniques are based on a combination of stochastic geometry, computational geometry, and graph theory. The parameters addressed are compartment size, matrix block size and tributary drainage volume. The concept of DFN models is explained and methodologies to compute these parameters are demonstrated.
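The compartment calculation described above rests on graph theory: fractures that intersect are connected, and a connected cluster of fractures bounds a compartment. A minimal 2-D sketch of that idea follows; it is not the DFN software used in the paper, and the segment coordinates are purely illustrative:

```python
import numpy as np

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4 (2-D)."""
    def orient(a, b, c):
        return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return d1 != d2 and d3 != d4  # ignores collinear touching for brevity

def compartments(segs):
    """Group fractures into connected clusters via union-find."""
    parent = list(range(len(segs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            if segments_intersect(*segs[i], *segs[j]):
                parent[find(i)] = find(j)   # union the two clusters
    groups = {}
    for i in range(len(segs)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two crossing fractures form one compartment; the third is isolated.
segs = [((0.0, 0.0), (2.0, 2.0)), ((0.0, 2.0), (2.0, 0.0)), ((5.0, 5.0), (6.0, 6.0))]
print(compartments(segs))  # → [[0, 1], [2]]
```

A production DFN model works with 3-D polygonal fractures and stochastic geometry, but the connectivity step reduces to the same connected-components computation.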
Directory of Open Access Journals (Sweden)
Ilker Ercanli
2015-01-01
Full Text Available Nonlinear mixed-effects statistical models were used to predict the relationships between total height and diameter at breast height (DBH) in stands of oriental beech trees (Fagus orientalis Lipsky) in Kestel, Bursa, northwestern Turkey. A total of 124 sample plots were selected to represent site quality, age, and stand density. Nine generalized nonlinear height-diameter models were fitted and evaluated on the basis of the Akaike information criterion, Schwarz's Bayesian information criterion, root mean square error (RMSE), absolute bias, and the adjusted coefficient of determination (R2adj). The Schnute nonlinear model was selected as the best predictive model. The height-diameter model based on the nonlinear mixed-effects approach accounted for 90.6% of the total variance in the height-diameter relationships, with an RMSE value of 1.48 m. Several scenarios differing in sampling design and the number of subsample trees, selected from the validation data set, revealed that four randomly selected subsample trees produced the best predictive results (reductions of 43.3% in the sum of squared errors, 98.4% in absolute bias, and 36.9% in RMSE relative to the fixed-effects predictions).
Park, Hyeshin; Schweighofer, Nicolas
2017-01-01
Background We recently showed that individuals with chronic stroke who completed two sessions of intensive unassisted arm reach training exhibited improvements in movement times up to one month post-training. Here, we study whether changes in movement times during training can predict long-term changes. Methods Sixteen participants with chronic stroke and ten non-disabled age-matched participants performed two sessions of reach training with 600 movements per session. Movement time data durin...
New rheological model for concrete structural analysis
International Nuclear Information System (INIS)
Chern, J.C.
1984-01-01
Long time deformation is of interest in estimating stresses of the prestressed concrete reactor vessel, in predicting cracking due to shrinkage or thermal dilatation, and in the design of leak-tight structures. Many interacting influences exist among creep, shrinkage and cracking for concrete. An interaction which researchers have long observed is that, at simultaneous drying and loading, the deformation of a concrete structure under the combined effect is larger than the sum of the shrinkage deformation of the structure at no load and the deformation of the sealed structure. The excess deformation due to the difference between observed test data and conventional analysis is regarded as the Pickett Effect. A constitutive relation explaining the Pickett Effect and other similar superposition problems, which includes creep, shrinkage (or thermal dilatation), cracking, and aging, was developed with an efficient time-step numerical algorithm. The total deformation in the analysis is the sum of the elastic and creep strain, the cracking strain, and the shrinkage strain with thermal dilatation. Instead of a sudden stress reduction to zero after the attainment of the strength limit, the gradual strain-softening of concrete (a gradual decline of stress at increasing strain) is considered.
A Conceptual Model for Multidimensional Analysis of Documents
Ravat, Franck; Teste, Olivier; Tournier, Ronan; Zurfluh, Gilles
Data warehousing and OLAP are mainly used for the analysis of transactional data. Nowadays, with the evolution of the Internet and the development of semi-structured data exchange formats (such as XML), it is possible to consider entire fragments of data, such as documents, as analysis sources. As a consequence, an adapted multidimensional analysis framework needs to be provided. In this paper, we introduce an OLAP multidimensional conceptual model without facts. This model is based on the unique concept of dimensions and is adapted for multidimensional document analysis. We also provide a set of manipulation operations.
Aspects of uncertainty analysis in accident consequence modeling
International Nuclear Information System (INIS)
Travis, C.C.; Hoffman, F.O.
1981-01-01
Mathematical models are frequently used to determine the probable dose to man from an accidental release of radionuclides by a nuclear facility. With increased emphasis on the accuracy of these models, the incorporation of uncertainty analysis has become one of the most crucial and sensitive components in evaluating the significance of model predictions. In the present paper, we address three aspects of uncertainty in models used to assess the radiological impact on humans: uncertainties resulting from the natural variability in human biological parameters; the propagation of parameter variability by mathematical models; and comparison of model predictions to observational data.
Analysis of a classical chiral bag model
International Nuclear Information System (INIS)
Nadeau, H.
1985-01-01
The author studies a classical chiral bag model with a Mexican hat-type potential for the self-coupling of the pion fields. He assumes a static spherical bag of radius R, the hedgehog ansatz for the chiral fields, and that the quarks are all in the lowest lying s state. The author has considered three classes of models: the cloudy or pantopionic bags, the little or exopionic bags, and the endopionic bags, where the pions are allowed all through space, only outside the bag, and only inside the bag, respectively. In all cases, the quarks are confined in the interior. He calculates the bag radius R, the bag constant B, and the total ground state energy E for wide ranges of the two free parameters of the theory, namely the coupling constant λ and the quark frequency ω. The author focuses the study on the endopionic bags, the least known class, and compares the results with the familiar ones of the other classes.
The Spectrophotometric Analysis and Modeling of Sunscreens
Walters, Christina; Keeney, Allen; Wigal, Carl T.; Johnston, Cynthia R.; Cornelius, Richard D.
1997-01-01
Sunscreens and their SPF (Sun Protection Factor) values are the focus of this experiment that includes spectrophotometric measurements and molecular modeling. Students suspend weighed amounts of sunscreen lotions graded SPF 4, 6, 8, 15, 30, and 45 in water and dissolve aliquots of the aqueous suspensions in propanol. The expected relationship of absorbance proportional to log10(SPF) applies at 312 nm, where a maximum in absorbance occurs for the sunscreen solutions. Measurements at 330 nm give similar results and are more accessible using spectrometers routinely available in the introductory laboratory. Sunscreens constitute a suitable class of compounds to use for modeling electronic spectra, and computational modeling of the active ingredients ethylhexyl para-methoxycinnamate, oxybenzone, 2-ethylhexyl salicylate, and octocrylene found in commercially available formulations typically predicts the absorption maxima to within 10 nm. This experiment lets students explore which compounds have the potential to function as sunscreen agents and thereby see the importance of a knowledge of chemistry to the formulation of household items.
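The working relation above, absorbance proportional to log10(SPF), can be checked with a short least-squares fit. The absorbance readings below are illustrative placeholders, not data from the experiment:

```python
import numpy as np

# Hypothetical absorbance readings at 312 nm for the six graded lotions
# (illustrative values only, chosen to mimic the expected log relationship).
spf = np.array([4, 6, 8, 15, 30, 45])
absorbance = np.array([0.31, 0.40, 0.46, 0.60, 0.75, 0.84])

# Fit absorbance = m * log10(SPF) + b and check linearity.
x = np.log10(spf)
m, b = np.polyfit(x, absorbance, 1)
pred = m * x + b
ss_res = np.sum((absorbance - pred) ** 2)
ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"slope={m:.3f}, intercept={b:.3f}, R^2={r_squared:.4f}")
```

A strongly linear fit of absorbance against log10(SPF) is what the abstract's proportionality predicts; real student data would carry more scatter than these placeholder values.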
Two models of minimalist, incremental syntactic analysis.
Stabler, Edward P
2013-07-01
Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.
Model based analysis of piezoelectric transformers.
Hemsel, T; Priya, S
2006-12-22
Piezoelectric transformers are increasingly getting popular in electrical devices owing to several advantages such as small size, high efficiency, absence of electromagnetic noise, and non-flammability. In addition to conventional applications such as ballast for back light inverters in notebook computers, camera flash, and fuel ignition, several new applications have emerged such as AC/DC converters, battery chargers and automobile lighting. These new applications demand high power density and a wide range of voltage gain. Currently, the transformer power density is limited to 40 W/cm³ obtained at low voltage gain. The purpose of this study was to investigate a transformer design that has the potential of providing higher power density and a wider range of voltage gain. The new transformer design utilizes the radial mode both at the input and output port and has unidirectional polarization in the ceramics. This design was found to provide 30 W power with an efficiency of 98% and a 30 degrees C temperature rise from room temperature. An electro-mechanical equivalent circuit model was developed to describe the characteristics of the piezoelectric transformer. The model was found to successfully predict the characteristics of the transformer. Excellent matching was found between the computed and experimental results. The results of this study will allow deterministic design of unipoled piezoelectric transformers with specified performance. It is expected that in the near future the unipoled transformer will gain significant importance in various electrical components.
Online Statistical Modeling (Regression Analysis) for Independent Responses
Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus
2017-06-01
Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to handle various types of complex relationships in data. A rich variety of advanced and recent statistical models is available in open source software (one of which is R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and shiny), so that the most recent and advanced statistical models are readily available, accessible, and applicable on the web. We have previously made interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov Chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and makes it easier to compare models in order to find the most appropriate one for the data.
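As a minimal sketch of the GLM machinery such an interface exposes, the iteratively reweighted least squares (IRLS) loop for a logistic model can be written in a few lines of numpy. The data are synthetic and this is not the authors' R/shiny implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-response data: y ~ Bernoulli(logit^-1(1 + 2x)).
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])          # design matrix with intercept
true_beta = np.array([1.0, 2.0])
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

# IRLS: the generic fitting loop behind GLMs, here with a logit link.
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = 1 / (1 + np.exp(-eta))               # mean function
    W = mu * (1 - mu)                         # iterative weights
    z = eta + (y - mu) / W                    # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(beta)  # should approach [1, 2]
```

Swapping the link and variance functions in the same loop yields the other GLM families; GAM and GAMLSS extend this scheme with smooth terms and additional distribution parameters.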
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
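The abstract's central warning, that ignoring inter-eye correlation biases standard errors, can be reproduced in a small simulation: with a subject-level covariate measured on both eyes, the naive OLS standard error understates the true sampling variability of the slope. All parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def one_trial(n_sub=200, beta=0.5, sd_subj=1.0, sd_eye=0.5):
    """Two eyes per subject; covariate x is subject-level (e.g. age)."""
    x = rng.normal(size=n_sub)                  # one value per subject
    u = rng.normal(scale=sd_subj, size=n_sub)   # shared subject effect
    e = rng.normal(scale=sd_eye, size=(n_sub, 2))
    y = (beta * x + u)[:, None] + e             # both eyes, correlated
    X = np.column_stack([np.ones(2 * n_sub), np.repeat(x, 2)])
    yy = y.ravel()
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ yy                      # OLS fit, eyes pooled
    resid = yy - X @ b
    sigma2 = resid @ resid / (len(yy) - 2)
    naive_se = np.sqrt(sigma2 * XtX_inv[1, 1])  # ignores inter-eye correlation
    return b[1], naive_se

reps = 500
slopes, naive_ses = zip(*(one_trial() for _ in range(reps)))
empirical_sd = np.std(slopes)   # true sampling variability of the slope
print(np.mean(naive_ses), empirical_sd)
```

The mean naive standard error comes out well below the empirical standard deviation of the slope estimates, which is the underestimation the abstract describes; a mixed effects model with a per-subject random intercept, or a marginal model with a working covariance, recovers honest standard errors.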
Numerical simulations of gas mixing effect in electron cyclotron resonance ion sources
Directory of Open Access Journals (Sweden)
V. Mironov
2017-01-01
Full Text Available The particle-in-cell Monte Carlo collisions code nam-ecris is used to simulate the electron cyclotron resonance ion source (ECRIS) plasma sustained in a mixture of Kr with O_{2}, N_{2}, Ar, Ne, and He. The model assumes that ions are electrostatically confined in the ECR zone by a dip in the plasma potential. A gain in the extracted krypton ion currents is seen for the highest charge states; the gain is maximized when oxygen is used as a mixing gas. The special feature of oxygen is that most of the singly charged oxygen ions are produced after the dissociative ionization of oxygen molecules with a large kinetic energy release of around 5 eV per ion. The increased loss rate of energetic lowly charged ions of the mixing element requires a building up of the retarding potential barrier close to the ECR surface to equilibrate electron and ion losses out of the plasma. In the mixed plasmas, the barrier value is large (∼1 V) compared to pure Kr plasma (∼0.01 V), with longer confinement times of krypton ions and with much higher ion temperatures. The temperature of the krypton ions is increased because of extra heating by the energetic oxygen ions and a longer time of ion confinement. In calculations, a drop of the highly charged ion currents of lighter elements is observed when adding small fluxes of krypton into the source. This drop is caused by the accumulation of the krypton ions inside the plasma, which decreases the electron and ion confinement times.
Structural model analysis of multiple quantitative traits.
Directory of Open Access Journals (Sweden)
Renhua Li
2006-07-01
Full Text Available We introduce a method for the analysis of multilocus, multitrait genetic data that provides an intuitive and precise characterization of genetic architecture. We show that it is possible to infer the magnitude and direction of causal relationships among multiple correlated phenotypes and illustrate the technique using body composition and bone density data from mouse intercross populations. Using these techniques we are able to distinguish genetic loci that affect adiposity from those that affect overall body size and thus reveal a shortcoming of standardized measures such as body mass index that are widely used in obesity research. The identification of causal networks sheds light on the nature of genetic heterogeneity and pleiotropy in complex genetic systems.
Latent class models in financial data analysis
Directory of Open Access Journals (Sweden)
Attilio Gardini
2007-10-01
Full Text Available This paper deals with optimal international portfolio choice by developing a latent class approach based on the distinction between international and non-international investors. On the basis of micro data, we analyze the effects of many social, demographic, economic and financial characteristics on the probability of being an international investor. Traditional measures of equity home bias do not allow for the existence of international investment rationing operators. On the contrary, by resorting to latent class analysis it is possible to detect the unobservable distinction between international investors and investors who are precluded from operating in international financial markets and, therefore, to evaluate the role of these unobservable constraints on equity home bias.
Sensitivity of SBLOCA analysis to model nodalization
International Nuclear Information System (INIS)
Lee, C.; Ito, T.; Abramson, P.B.
1983-01-01
The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analysis of Small Break Loss of Coolant Accidents with RELAP5, it is found that resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery
Modeling and Exergy Analysis of District Cooling
DEFF Research Database (Denmark)
Nguyen, Chan
in the gas cooler, pinch temperature in the evaporator and effectiveness of the IHX. These results are complemented by the exergy analysis, where the exergy destruction ratio of the CO2 system's component is found. Heat recovery from vapour compression heat pumps has been investigated. The heat is to be used… consists of a combined heat and power (CHP) plant with a separate refrigeration plant, where its condenser heat is rejected to the environment. The recovery system consists of the same CHP plant but with a heat pump, where the condensation heat is recovered. Five different refrigerants (R717, R600a, R290… and surrounding temperature has been carried out. It has been demonstrated that the two methods yield significantly different results. Energy costing prices the unit cost of heating and cooling equally independent of the quality of the heat transfer, and it tends to overprice the cost of cooling in an irrational…
Versatile Micromechanics Model for Multiscale Analysis of Composite Structures
Kwon, Y. W.; Park, M. S.
2013-08-01
A general-purpose micromechanics model was developed so that it could be applied to various composite materials, such as those reinforced by particles, long fibers, and short fibers, as well as those containing micro voids. Additionally, the model can be used with hierarchical composite materials. The micromechanics model can be used to compute effective material properties like elastic moduli, shear moduli, Poisson's ratios, and coefficients of thermal expansion for the various composite materials. The model can also calculate the strains and stresses at the constituent material level, such as fibers, particles, and whiskers, from the composite level stresses and strains. The model was implemented into ABAQUS using the UMAT option for multiscale analysis. An extensive set of examples is presented to demonstrate the reliability and accuracy of the developed micromechanics model for different kinds of composite materials. Another set of examples is provided to study the multiscale analysis of composite structures.
Analysis of A Virus Dynamics Model
Zhang, Baolin; Li, Jianquan; Li, Jia; Zhao, Xin
2018-03-01
In order to more accurately characterize the virus infection in the host, a virus dynamics model with latency and virulence is established and analyzed in this paper. The positivity and boundedness of the solution are proved. After obtaining the basic reproduction number and the existence of infected equilibrium, the Lyapunov method and the LaSalle invariance principle are used to determine the stability of the uninfected equilibrium and infected equilibrium by constructing appropriate Lyapunov functions. We prove that, when the basic reproduction number does not exceed 1, the uninfected equilibrium is globally stable, the virus can be cleared eventually; when the basic reproduction number is more than 1, the infected equilibrium is globally stable, the virus will persist in the host at a certain level. The effect of virulence and latency on infection is also discussed.
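The threshold behavior proved in the abstract can be illustrated numerically. The equations below are a generic target-cell model with a latent compartment, not necessarily the paper's exact formulation, and all parameter values are illustrative; for this form the basic reproduction number is R0 = (s/d)·β·p/(δ·c):

```python
import numpy as np

def simulate(beta, t_end=400.0, dt=0.02):
    """Target cells T, latent L, infected I, virus V; illustrative parameters."""
    s, d = 100.0, 0.1               # target-cell supply and death rate
    k = 1.0                         # latent -> productive progression rate
    delta, p, c = 0.5, 10.0, 5.0    # infected death, virion production, clearance
    def f(y):
        T, L, I, V = y
        return np.array([s - d*T - beta*T*V,
                         beta*T*V - k*L,
                         k*L - delta*I,
                         p*I - c*V])
    y = np.array([s/d, 0.0, 0.0, 1.0])   # uninfected steady state plus one virion
    for _ in range(int(t_end / dt)):     # classic fixed-step RK4
        k1 = f(y); k2 = f(y + dt/2*k1)
        k3 = f(y + dt/2*k2); k4 = f(y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    R0 = (s/d) * beta * p / (delta * c)  # basic reproduction number
    return R0, y[3]                      # final virus load

# R0 < 1: the infection dies out; R0 > 1: the virus persists at a positive level.
print(simulate(1e-4))   # R0 = 0.4
print(simulate(1e-3))   # R0 = 4.0
```

This reproduces the dichotomy established via the Lyapunov/LaSalle analysis: below the threshold the virus is cleared, above it the trajectory settles at the infected equilibrium.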
Analysis and modeling of rail maintenance costs
Directory of Open Access Journals (Sweden)
Amir Ali Bakhshi
2012-01-01
Full Text Available Railroad maintenance engineering plays an important role in the availability of roads and in reducing the cost of railroad incidents. Rail is one of the most important parts of the railroad industry and needs regular maintenance, since it accounts for a significant part of total maintenance cost. Any attempt at optimizing the total cost of maintenance could substantially reduce the cost of the railroad system and of the industry as a whole. The paper presents a new method to estimate the cost of rail failure using different cost components, such as the cost of inspection and the cost of risk associated with possible accidents. The proposed model of this paper is applied to a real-world case study of railroad transportation in the Tehran region and the results have been analyzed.
Expatriates Selection: An Essay of Model Analysis
Directory of Open Access Journals (Sweden)
Rui Bártolo-Ribeiro
2015-03-01
Full Text Available The business expansion to geographical areas with cultures different from those in which organizations were created and developed leads to the expatriation of employees to these destinations. Recruitment and selection procedures for expatriates do not always have the intended success, leading to an early return of these professionals with consequent organizational disorders. In this study, several articles published in the last five years were analyzed in order to identify the dimensions most frequently mentioned in the selection of expatriates in terms of success and failure. The characteristics in the selection process that may improve prediction of expatriates' adaptation to the new cultural contexts of the same organization were studied according to the KSAOs model. Few references were found concerning the Knowledge, Skills, and Abilities dimensions in the analyzed papers. There was a strong predominance of the evaluation of Other Characteristics, and more importance was given to dispositional factors than to situational factors in promoting the integration of the expatriates.
Parametric analysis of fire model CFAST
International Nuclear Information System (INIS)
Lee, Y. H.; Yang, J. Y.; Kim, J. H.
2004-01-01
This paper describes a pump room fire of a nuclear power plant analyzed using the CFAST fire modeling code developed by NIST. The analysis is determined by the constrained or unconstrained fire, the Lower Oxygen Limit (LOL), the Radiative Fraction (RF), and the times to open doors, which are input parameters of CFAST. According to the results, the pump room fire is a ventilation-controlled fire, so the LOL value of 10%, which is also the default value, is adequate. It appears that the RF does not change the temperature of the upper gas layer. However, the degree of opening of the penetrating area and the times to open it have an effect on the temperature of the upper layer, so those results should be carefully analyzed
Modeling and analysis with induction generators
Simões, M Godoy
2014-01-01
Foreword. Preface. Acknowledgments. Authors. Principles of Alternative Sources of Energy and Electric Generation: Scope of This Chapter; Legal Definitions; Principles of Electrical Conversion; Basic Definitions of Electrical Power; Characteristics of Primary Sources; Characteristics of Remote Industrial, Commercial, and Residential Sites and Rural Energy; Selection of the Electric Generator; Interfacing Primary Source, Generator, and Load; Example of a Simple Integrated Generating and Energy-Storing System; Solved Problems; Suggested Problems; References. Steady-State Model of Induction Generators: Scope of This Chapter; Interconnection and Disconnection of the Electric Distribution Network; Robustness of Induction Generators; Classical Steady-State Representation of the Asynchronous Machine; Generated Power; Induced Torque; Representation of Induction Generator Losses; Measurement of Induction Generator Parameters; Blocked Rotor Test (s = 1); No-Load Test (s = 0); Features of Induction Machines Working as Generators Interconnected to the Distribution Network; High-...
Materials Analysis and Modeling of Underfill Materials.
Energy Technology Data Exchange (ETDEWEB)
Wyatt, Nicholas B [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Chambers, Robert S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]
2015-08-01
The thermal-mechanical properties of three potential underfill candidate materials for PBGA applications are characterized and reported. Two of the materials are formulations developed at Sandia for underfill applications, while the third is a commercial product that utilizes a snap-cure chemistry to drastically reduce cure time. Viscoelastic models were calibrated and fit using the property data collected for one of the Sandia formulated materials. Along with the thermal-mechanical analyses performed, a series of simple bi-material strip tests were conducted to comparatively analyze the relative effects of cure and thermal shrinkage among the materials under consideration. Finally, current knowledge gaps as well as questions arising from the present study are identified and a path forward is presented.
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.; Navarro Jimenez, Maria I.; Merks, Roeland M. H.; Blom, Joke G.
2015-11-21
Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
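A compact sketch of the variance-based machinery behind such a workflow: first-order Sobol indices estimated with a pick-and-freeze scheme, demonstrated on the standard Ishigami benchmark function rather than the paper's cellular Potts model:

```python
import numpy as np

rng = np.random.default_rng(42)

def ishigami(X, a=7.0, b=0.1):
    """Standard sensitivity-analysis benchmark on [-pi, pi]^3."""
    x1, x2, x3 = X.T
    return np.sin(x1) + a * np.sin(x2)**2 + b * x3**4 * np.sin(x1)

def first_order_sobol(func, d, n=2**13):
    """Pick-and-freeze Monte Carlo estimator of first-order Sobol indices."""
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = func(A), func(B)
    var = np.var(np.concatenate([fA, fB]))   # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # swap in coordinate i from B
        S[i] = np.mean(fB * (func(ABi) - fA)) / var
    return S

S = first_order_sobol(ishigami, 3)
print(S)  # analytic values: S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0
```

Ranking the indices identifies the dominant parameters, as in the paper's workflow; the gap between first-order and total-order indices (not computed here) is what reveals the parameter interactions the abstract highlights.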
How Many Separable Sources? Model Selection In Independent Components Analysis
DEFF Research Database (Denmark)
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though … might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
Comparative analysis of some existing kinetic models with proposed
African Journals Online (AJOL)
IGNATIUS NWIDI
two statistical parameters, namely linear regression coefficient of correlation (R2) and ... Keywords: Heavy metals, Biosorption, Kinetics Models, Comparative analysis, Average Relative Error. 1. ... If the flow rate is low, a simple manual batch.
Sensitivity analysis on flexible road pavement life cycle cost model
African Journals Online (AJOL)
user
of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...
Stability Analysis of a Reaction-Diffusion System Modeling Atherogenesis
Ibragimov, Akif; Ritter, Laura; Walton, Jay R.
2010-01-01
This paper presents a linear, asymptotic stability analysis for a reaction-diffusion-convection system modeling atherogenesis, the initiation of atherosclerosis, as an inflammatory instability. Motivated by the disease paradigm articulated by Ross
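The linear stability analysis of a reaction-diffusion system proceeds mode by mode: perturbations of wavenumber k grow or decay according to the eigenvalues of the reaction Jacobian shifted by the diffusion term. A minimal sketch, using Schnakenberg kinetics as a stand-in for the paper's atherogenesis model:

```python
import numpy as np

# Schnakenberg kinetics (assumption: the paper's reaction terms are not reproduced)
a, b = 0.1, 0.9
u = a + b                  # steady state u* = a + b
v = b / u**2               # steady state v* = b / (a + b)^2
J = np.array([[-1 + 2*u*v, u**2],
              [-2*u*v,    -u**2]])   # reaction Jacobian at the steady state
D = np.diag([1.0, 40.0])             # diffusion coefficients (activator, inhibitor)

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 D for wavenumber k."""
    return np.linalg.eigvals(J - k**2 * D).real.max()

ks = np.linspace(0.0, 2.0, 400)
rates = [growth_rate(k) for k in ks]
```

Here the homogeneous state is stable (growth rate negative at k = 0) but a band of wavenumbers grows, the classic diffusion-driven instability.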
Modeling, Analysis, Simulation, and Synthesis of Biomolecular Networks
National Research Council Canada - National Science Library
Ruben, Harvey; Kumar, Vijay; Sokolsky, Oleg
2006-01-01
...) a first example of reachability analysis applied to a biomolecular system (lactose induction), 4) a model of tetracycline resistance that discriminates between two possible mechanisms for tetracycline diffusion through the cell membrane, and 5...
Space-Time Aquatic Resources Modeling and Analysis Program (STARMAP)
Federal Laboratory Consortium — Colorado State University has received funding from the U.S. Environmental Protection Agency (EPA) for its Space-Time Aquatic Resources Modeling and Analysis Program...
Analysis of the resolution processes of three modeling tasks
Directory of Open Access Journals (Sweden)
Cèsar Gallart Palau
2017-08-01
In this paper we present a comparative analysis of the resolution processes of three modeling tasks performed by secondary education students (13-14 years), designed from three different points of view: Model-Eliciting Activities, the LEMA project, and Realistic Mathematical Problems. The purpose of this analysis is to obtain a methodological characterization of the tasks in order to provide secondary education teachers with a proper selection and sequencing of tasks for their implementation in the classroom.
Neutrosophic Logic for Mental Model Elicitation and Analysis
Directory of Open Access Journals (Sweden)
Karina Pérez-Teruel
2014-03-01
Mental models are personal, internal representations of external reality that people use to interact with the world around them. They are useful in multiple situations such as multicriteria decision making, knowledge management, complex system learning and analysis. In this paper a framework for mental model elicitation and analysis based on neutrosophic logic is presented. An illustrative example is provided to show the applicability of the proposal. The paper ends with conclusions and future research directions.
Trojan detection model based on network behavior analysis
International Nuclear Information System (INIS)
Liu Junrong; Liu Baoxu; Wang Wenjin
2012-01-01
Based on the analysis of existing Trojan detection technology, this paper presents a Trojan detection model based on network behavior analysis. First, we abstract a description of Trojan network behavior; then, according to certain rules, we establish a library of characteristic behaviors; finally, a support vector machine algorithm is used to determine whether a Trojan intrusion has occurred. Intrusion detection experiments show that this model can effectively detect Trojans. (authors)
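The classification step can be sketched with a linear soft-margin SVM trained by hinge-loss sub-gradient descent; the behavioral feature names below are hypothetical, not from the paper:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Linear soft-margin SVM via sub-gradient descent on the hinge loss."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:        # inside margin: hinge gradient
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                                 # only the regularization term
                w -= lr * lam * w
    return w, b

# toy behavioral features: [connection rate, upload/download ratio, beacon regularity]
benign = np.array([[0.2, 0.1, 0.1], [0.3, 0.2, 0.0], [0.1, 0.1, 0.2]])
trojan = np.array([[0.9, 0.8, 0.9], [0.8, 0.9, 0.7], [0.7, 0.8, 0.8]])
X = np.vstack([benign, trojan])
y = np.array([-1, -1, -1, 1, 1, 1])   # -1 = benign, +1 = Trojan
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```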
Integrated dynamic modeling and management system mission analysis
Energy Technology Data Exchange (ETDEWEB)
Lee, A.K.
1994-12-28
This document summarizes the mission analysis performed on the Integrated Dynamic Modeling and Management System (IDMMS). The IDMMS will be developed to provide the modeling and analysis capability required to understand the TWRS system behavior in terms of the identified TWRS performance measures. The IDMMS will be used to demonstrate in a verified and validated manner the satisfactory performance of the TWRS system configuration and assurance that the requirements have been satisfied.
Building Information Modeling (BIM) for Indoor Environmental Performance Analysis
DEFF Research Database (Denmark)
The report is part of a research assignment carried out by students in the 5 ECTS course “Project Byggeri – [entitled as: Building Information Modeling (BIM) – Modeling & Analysis]”, during the 3rd semester of the master's degree in Civil and Architectural Engineering, Department of Engineering, Aarhus University. It includes seven papers describing BIM for Sustainability, concentrating specifically on individual topics regarding Indoor Environment Performance Analysis.
Coping with Complexity Model Reduction and Data Analysis
Gorban, Alexander N
2011-01-01
This volume contains the extended version of selected talks given at the international research workshop 'Coping with Complexity: Model Reduction and Data Analysis', Ambleside, UK, August 31 - September 4, 2009. This book is deliberately broad in scope and aims at promoting new ideas and methodological perspectives. The topics of the chapters range from theoretical analysis of complex and multiscale mathematical models to applications in e.g., fluid dynamics and chemical kinetics.
Evaluation of Cost Models and Needs & Gaps Analysis
DEFF Research Database (Denmark)
Kejser, Ulla Bøgvad
2014-01-01
This report, ‘D3.1—Evaluation of Cost Models and Needs & Gaps Analysis’, provides an analysis of existing research related to the economics of digital curation and cost & benefit modelling. It reports upon the investigation of how well current models and tools meet stakeholders’ needs for calculating and comparing financial information, and how they break down costs. This is followed by an in-depth analysis of stakeholders’ needs for financial information derived from the 4C project stakeholder consultation. The stakeholders’ needs analysis indicated that models should: • support accounting, but more importantly they should enable budgeting; • be able … Based on this evaluation, it aims to point out gaps that need to be bridged in order to increase the uptake of cost & benefit modelling and good practices that will enable costing and comparison of the costs of alternative scenarios—which in turn provides a starting point …
Similar words analysis based on POS-CBOW language model
Directory of Open Access Journals (Sweden)
Dongru RUAN
2015-10-01
Similar words analysis is one of the important aspects in the field of natural language processing, and it has important research and application values in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, a continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust the word vectors' similarity according to the cosine similarity and the word vectors' part-of-speech metrics. It can also filter the set of similar words on the basis of the statistical analysis model. The experimental results show that the similar words analysis algorithm based on the proposed POS-CBOW language model is better than that based on the traditional CBOW language model.
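The core similarity computation with a part-of-speech filtering step can be sketched as follows; the tiny embedding table and POS tags are invented for illustration:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical embedding table: word -> (POS tag, vector)
vocab = {
    "happy": ("ADJ",  np.array([0.9, 0.1, 0.3])),
    "glad":  ("ADJ",  np.array([0.8, 0.2, 0.35])),
    "run":   ("VERB", np.array([0.85, 0.15, 0.3])),
    "sad":   ("ADJ",  np.array([-0.7, 0.4, 0.1])),
}

def similar_words(word, k=2):
    """Rank candidates by cosine similarity, keeping only same-POS words."""
    pos, vec = vocab[word]
    cands = [(w, cosine(vec, v)) for w, (p, v) in vocab.items()
             if w != word and p == pos]          # part-of-speech filter layer
    return sorted(cands, key=lambda t: -t[1])[:k]
```

Note that "run" is vector-close to "happy" here but is excluded by the POS filter, which is the point of the tagging layer.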
Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model
International Nuclear Information System (INIS)
Otis, M.D.
1983-01-01
Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
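The two procedures described here (Monte Carlo propagation of parameter uncertainty, then correlation of parameters with model output for sensitivity) can be sketched on a toy multiplicative food-chain model; the distributions below are hypothetical, not PATHWAY's:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000

# hypothetical lognormal parameter uncertainties (illustrative values only)
params = {
    "deposition":    rng.lognormal(mean=0.0,  sigma=0.5, size=n),
    "feed_transfer": rng.lognormal(mean=-1.0, sigma=0.3, size=n),
    "milk_transfer": rng.lognormal(mean=-2.0, sigma=0.1, size=n),
}

# simple multiplicative food-chain model: radionuclide concentration in milk
milk = params["deposition"] * params["feed_transfer"] * params["milk_transfer"]

# uncertainty: spread of the Monte Carlo output distribution
lo, hi = np.percentile(milk, [2.5, 97.5])

# sensitivity: correlation of each parameter with the model output (log scale),
# reusing the same random samples as in the uncertainty analysis
sens = {name: np.corrcoef(np.log(p), np.log(milk))[0, 1]
        for name, p in params.items()}
```

With a multiplicative model, the parameter with the widest (log) uncertainty dominates the output correlation, mirroring the paper's use of the same random values for both analyses.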
Image Analysis for Modelling Shear Behaviour
Directory of Open Access Journals (Sweden)
Philippe Lopez
2011-05-01
Through laboratory research performed over the past ten years, many of the critical links between fracture characteristics and hydromechanical and mechanical behaviour have been made for individual fractures. One of the remaining challenges at the laboratory scale is to directly link fracture morphology to shear behaviour under changes in stress and shear direction. A series of laboratory experiments were performed on cement mortar replicas of a granite sample with a natural fracture perpendicular to the axis of the core. Results show that there is a strong relationship between the fracture's geometry and its mechanical behaviour under shear stress and the resulting damage. Image analysis, geostatistical, stereological and directional data techniques are applied in combination to the experimental data. The results highlight the role of the geometric characteristics of the fracture surfaces (surface roughness, size, shape, locations and orientations of asperities to be damaged) in shear behaviour. A notable improvement in shear understanding is that shear behaviour is controlled by the apparent dip in the shear direction of the elementary facets forming the fracture.
Model Construction and Analysis of Respiration in Halobacterium salinarum.
Directory of Open Access Journals (Sweden)
Talaue, Cherryl O; del Rosario, Ricardo C H; Pfeiffer, Friedhelm; Mendoza, Eduardo R; Oesterhelt, Dieter
2016-01-01
The archaeon Halobacterium salinarum can produce energy using three different processes, namely photosynthesis, oxidative phosphorylation and fermentation of arginine, and is thus a model organism in bioenergetics. Compared to its bacteriorhodopsin-driven photosynthesis, less attention has been devoted to modeling its respiratory pathway. We created a system of ordinary differential equations that models its oxidative phosphorylation. The model consists of the electron transport chain, the ATP synthase, the potassium uniport and the sodium-proton antiport. By fitting the model parameters to experimental data, we show that the model can explain data on proton motive force generation, ATP production, and the charge balancing of ions between the sodium-proton antiporter and the potassium uniport. We performed sensitivity analysis of the model parameters to determine how the model will respond to perturbations in parameter values. The model and the parameters we derived provide a resource that can be used for analytical studies of the bioenergetics of H. salinarum.
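A minimal sketch of the modeling approach: a system of ordinary differential equations integrated numerically (here with classical RK4). The two-state model and rate constants are hypothetical stand-ins, not the paper's fitted respiratory-chain parameters:

```python
import numpy as np

def rk4(f, y0, t0, t1, steps):
    """Classical 4th-order Runge-Kutta integrator for y' = f(t, y)."""
    y, t = np.asarray(y0, dtype=float), t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

# toy two-state sketch: proton motive force p drives ATP pool a
# (hypothetical rate constants, purely illustrative)
def bioenergetics(t, y, pump=1.0, leak=0.5, synth=0.8):
    p, atp = y
    return np.array([pump - leak*p - synth*p,   # pumping vs leak and synthase use
                     synth*p - 0.1*atp])        # ATP made from p, slowly consumed

steady = rk4(bioenergetics, [0.0, 0.0], 0.0, 50.0, 5000)
```

Parameter sensitivity in this setting amounts to re-integrating with perturbed rate constants and comparing the steady states.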
Folding model analysis of alpha radioactivity
International Nuclear Information System (INIS)
Basu, D N
2003-01-01
Radioactive decay of nuclei via emission of α-particles has been studied theoretically in the framework of a superasymmetric fission model using the double folding (DF) procedure for obtaining the α-nucleus interaction potential. The DF nuclear potential has been obtained by folding in the density distribution functions of the α nucleus and the daughter nucleus with a realistic effective interaction. The M3Y effective interaction has been used for calculating the nuclear interaction potential which has been supplemented by a zero-range pseudo-potential for exchange along with the density dependence. The nuclear microscopic α-nucleus potential thus obtained has been used along with the Coulomb interaction potential to calculate the action integral within the WKB approximation. This subsequently yields calculations for the half-lives of α decays of nuclei. The density dependence and the exchange effects have not been found to be very significant. These calculations provide reasonable estimates for the lifetimes of α-radioactivity of nuclei
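The WKB action integral for barrier penetration can be sketched with a bare point-Coulomb barrier; the paper's double-folded M3Y nuclear potential is not reproduced here, so the numbers are only order-of-magnitude illustrations:

```python
import numpy as np

HBARC = 197.327        # MeV*fm
ALPHA_EM = 1 / 137.036
M_ALPHA = 3727.38      # alpha-particle rest mass, MeV/c^2

def gamow_factor(z_daughter, q_value, radius):
    """WKB action through a point-Coulomb barrier, from `radius` to the outer turning point."""
    coul = 2 * z_daughter * ALPHA_EM * HBARC   # 2*Z_d*e^2 in MeV*fm
    b = coul / q_value                          # outer classical turning point (fm)
    n = 200000
    dr = (b - radius) / n
    r = radius + dr * (np.arange(n) + 0.5)      # midpoint rule over the barrier
    integrand = np.sqrt(np.clip(2 * M_ALPHA * (coul / r - q_value), 0.0, None))
    return 2.0 * integrand.sum() * dr / HBARC

G = gamow_factor(90, 4.27, 9.3)   # roughly 238U -> 234Th + alpha (illustrative inputs)
P = np.exp(-G)                    # barrier penetration probability
```

The half-life then follows from the penetration probability times an assault frequency; a higher Q-value thins the barrier and shrinks the action integral.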
Kleijnen, J.P.C.
1995-01-01
This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
Topic Modeling in Sentiment Analysis: A Systematic Review
Directory of Open Access Journals (Sweden)
Toqir Ahmad Rana
2016-06-01
With the expansion and acceptance of the World Wide Web, sentiment analysis has become an increasingly popular research area in information retrieval and web data analysis. Due to the huge amount of user-generated content on blogs, forums, social media, etc., sentiment analysis has attracted researchers both in academia and industry, since it deals with the extraction of opinions and sentiments. In this paper, we present a review of topic modeling, especially LDA-based techniques, in sentiment analysis. We present a detailed analysis of diverse approaches and techniques, and compare the accuracy of different systems among them. The results of different approaches are summarized, analyzed and presented in a sophisticated fashion. This is a real effort to explore different topic modeling techniques in the context of sentiment analysis and to impart a comprehensive comparison among them.
Mixed waste treatment model: Basis and analysis
International Nuclear Information System (INIS)
Palmer, B.A.
1995-09-01
The Department of Energy's Programmatic Environmental Impact Statement (PEIS) required treatment system capacities for risk and cost calculation. Los Alamos was tasked with providing these capacities to the PEIS team. This involved understanding the Department of Energy (DOE) Complex waste, making the necessary changes to correct for problems, categorizing the waste for treatment, and determining the treatment system requirements. The treatment system requirements depended on the incoming waste, which varied for each PEIS case. The treatment system requirements also depended on the type of treatment that was desired. Because different groups contributing to the PEIS needed specific types of results, we provided the treatment system requirements in a variety of forms. In total, some 40 data files were created for the TRU cases, and for the MLLW case, there were 105 separate data files. Each data file represents one treatment case consisting of the selected waste from various sites, a selected treatment system, and the reporting requirements for such a case. The treatment system requirements in their most basic form are the treatment process rates for unit operations in the desired treatment system, based on a 10-year working life and 20-year accumulation of the waste. These results were reported in cubic meters and for the MLLW case, in kilograms as well. The treatment system model consisted of unit operations that are linked together. Each unit operation's function depended on the input waste streams, waste matrix, and contaminants. Each unit operation outputs one or more waste streams whose matrix, contaminants, and volume/mass may have changed as a result of the treatment. These output streams are then routed to the appropriate unit operation for additional treatment until the output waste stream meets the treatment requirements for disposal. The total waste for each unit operation was calculated as well as the waste for each matrix treated by the unit
Domain Endurants: An Analysis and Description Process Model
DEFF Research Database (Denmark)
Bjørner, Dines
2014-01-01
We present a summary, Sect. 2, of a structure of domain analysis and description concepts: techniques and tools. And we link, in Sect. 3, these concepts, embodied in domain analysis prompts and domain description prompts, in a model of how a diligent domain analyser cum describer would use them. We...
Sensitivity Analysis of a Simplified Fire Dynamic Model
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt; Nielsen, Anker
2015-01-01
This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...
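A sketch of such a parameter sensitivity analysis on a simplified upper-layer temperature model. The MQH (McCaffrey-Quintiere-Harkleroad) correlation is assumed here as the simplified fire model, which may differ from the model actually used in the paper:

```python
import math

def mqh_delta_T(Q, A_o, H_o, h_k, A_T):
    """MQH upper-layer temperature rise: Q in kW, openings/areas in m, m^2; h_k in kW/m^2K."""
    return 6.85 * (Q**2 / (A_o * math.sqrt(H_o) * h_k * A_T)) ** (1 / 3)

def rel_sensitivity(f, kwargs, name, eps=1e-6):
    """Normalized finite-difference sensitivity d(ln f)/d(ln x) for parameter `name`."""
    base = f(**kwargs)
    bumped = dict(kwargs, **{name: kwargs[name] * (1 + eps)})
    return (f(**bumped) - base) / (base * eps)

# illustrative parameter values, not from the paper
params = dict(Q=500.0, A_o=2.0, H_o=2.0, h_k=0.03, A_T=60.0)
sens = {k: rel_sensitivity(mqh_delta_T, params, k) for k in params}
```

The normalized sensitivities recover the correlation's exponents (2/3 for the heat release rate, -1/3 for the boundary terms), so the fire load dominates the temperature estimate.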
Representing the Past by Solid Modeling + Golden Ratio Analysis
Ding, Suining
2008-01-01
This paper describes the procedures of reconstructing ancient architecture using solid modeling with geometric analysis, especially the Golden Ratio analysis. In the past the recovery and reconstruction of ruins required bringing together fragments of evidence and vast amount of measurements from archaeological site. Although researchers and…
NMR and modelling techniques in structural and conformation analysis
Energy Technology Data Exchange (ETDEWEB)
Abraham, R J [Liverpool Univ. (United Kingdom)]
1994-12-31
The use of Lanthanide Induced Shifts (L.I.S.) and modelling techniques in conformational analysis is presented. The use of Co(III) porphyrins as shift reagents is discussed, with examples of their use in the conformational analysis of some heterocyclic amines. (author) 13 refs., 9 figs.
Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.
Energy Technology Data Exchange (ETDEWEB)
Noonan, Nicholas James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-07-01
This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
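The default discrimination criteria named in this record can be computed directly from a least-squares fit. A sketch for two hypothetical calibrated models of the same observations (the RSS values are invented for illustration):

```python
import math

def criteria(rss, n, k):
    """AIC, AICc and BIC for a least-squares model with k parameters fit to n points."""
    ll_term = n * math.log(rss / n)              # -2 log-likelihood up to a constant
    aic = ll_term + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # second-order bias correction
    bic = ll_term + k * math.log(n)
    return aic, aicc, bic

n = 30
models = {"simple (k=3)": (12.0, 3), "complex (k=7)": (10.5, 7)}
scores = {name: criteria(rss, n, k) for name, (rss, k) in models.items()}
```

Here the complex model fits slightly better (lower RSS) but the criteria still rank the simpler model first, which is exactly the trade-off the report's model-averaging weights are built on.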
Stability Analysis for Car Following Model Based on Control Theory
International Nuclear Information System (INIS)
Meng Xiang-Pei; Li Zhi-Peng; Ge Hong-Xia
2014-01-01
Stability analysis is one of the key issues in car-following theory. A stability analysis with a Lyapunov function for the two velocity difference car-following model (for short, TVDM) is conducted, and a control method to suppress traffic congestion is introduced. Numerical simulations are given and the results are consistent with the theoretical analysis.
Development of local TDC model in core thermal hydraulic analysis
International Nuclear Information System (INIS)
Kwon, H.S.; Park, J.R.; Hwang, D.H.; Lee, S.K.
2004-01-01
The local TDC model consisting of natural mixing and forced mixing part was developed to obtain more realistic local fluid properties in the core subchannel analysis. To evaluate the performance of local TDC model, the CHF prediction capability was tested with the various CHF correlations and local fluid properties at CHF location which are based on the local TDC model. The results show that the standard deviation of measured to predicted CHF ratio (M/P) based on local TDC model can be reduced by about 7% compared to those based on global TDC model when the CHF correlation has no term to account for distance from the spacer grid. (author)
Practical Soil-Shallow Foundation Model for Nonlinear Structural Analysis
Directory of Open Access Journals (Sweden)
Moussa Leblouba
2016-01-01
Soil-shallow foundation interaction models that are incorporated into most structural analysis programs generally lack accuracy and efficiency or neglect some aspects of foundation behavior. For instance, soil-shallow foundation systems have been observed to show both small and large loops under increasing-amplitude load reversals. This paper presents a practical macroelement model for the soil-shallow foundation system and its stability under simultaneous horizontal and vertical loads. The model comprises three spring elements: nonlinear horizontal, nonlinear rotational, and linear vertical springs. The proposed macroelement model was verified using experimental test results from large-scale model foundations subjected to small and large cyclic loading cases.
Development of interpretation models for PFN uranium log analysis
International Nuclear Information System (INIS)
Barnard, R.W.
1980-11-01
This report presents the models for interpretation of borehole logs for the PFN (Prompt Fission Neutron) uranium logging system. Two models have been developed, the counts-ratio model and the counts/dieaway model. Both are empirically developed, but can be related to the theoretical bases for PFN analysis. The models try to correct for the effects of external factors (such as probe or formation parameters) in the calculation of uranium grade. The theoretical bases and calculational techniques for estimating uranium concentration from raw PFN data and other parameters are discussed. Examples and discussions of borehole logs are included
Hidden-Markov-Model Analysis Of Telemanipulator Data
Hannaford, Blake; Lee, Paul
1991-01-01
Mathematical model and procedure based on hidden-Markov-model concept undergoing development for use in analysis and prediction of outputs of force and torque sensors of telerobotic manipulators. In model, overall task broken down into subgoals, and transition probabilities encode ease with which operator completes each subgoal. Process portion of model encodes task-sequence/subgoal structure, and probability-density functions for forces and torques associated with each state of manipulation encode sensor signals that one expects to observe at subgoal. Parameters of model constructed from engineering knowledge of task.
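The decoding step of such a hidden-Markov-model analysis (recovering the most likely subgoal sequence from sensor observations) is typically done with the Viterbi algorithm. A sketch with an invented two-subgoal task; the states, transition probabilities, and force-level emissions are all hypothetical:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for a discrete-emission HMM."""
    # each column maps state -> (best path probability, predecessor state)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max(((V[-1][r][0] * trans_p[r][s] * emit_p[s][o], r)
                          for r in states), key=lambda t: t[0])
                  for s in states})
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for col in reversed(V[1:]):
        path.append(col[path[-1]][1])
    return path[::-1]

# hypothetical two-subgoal task: 'reach' then 'insert', observed force levels
states = ("reach", "insert")
obs = ("low", "low", "high", "high")
start = {"reach": 0.9, "insert": 0.1}
trans = {"reach": {"reach": 0.7, "insert": 0.3},
         "insert": {"reach": 0.1, "insert": 0.9}}
emit = {"reach": {"low": 0.8, "high": 0.2},
        "insert": {"low": 0.2, "high": 0.8}}
```

In the telemanipulation setting, the emission distributions play the role of the force/torque probability-density functions and the transitions encode the subgoal structure.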
Standard model for safety analysis report of fuel fabrication plants
International Nuclear Information System (INIS)
1980-09-01
A standard model for a safety analysis report of fuel fabrication plants is established. This model shows the presentation format, the origin, and the details of the minimal information required by CNEN (Comissao Nacional de Energia Nuclear) aiming to evaluate the requests of construction permits and operation licenses made according to the legislation in force. (E.G.) [pt
Standard model for safety analysis report of fuel reprocessing plants
International Nuclear Information System (INIS)
1979-12-01
A standard model for a safety analysis report of fuel reprocessing plants is established. This model shows the presentation format, the origin, and the details of the minimal information required by CNEN (Comissao Nacional de Energia Nuclear) aiming to evaluate the requests of construction permits and operation licenses made according to the legislation in force. (E.G.) [pt
Stochastic processes analysis in nuclear reactor using ARMA models
International Nuclear Information System (INIS)
Zavaljevski, N.
1990-01-01
An analysis of the ARMA model derived from the general stochastic state equations of a nuclear reactor is given. The dependence of the ARMA model parameters on the main physical characteristics of the RB nuclear reactor in Vinca is presented. Preliminary identification results are presented, observed discrepancies between theory and experiment are explained, and possibilities for improving the identification are anticipated. (author)
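The identification idea can be sketched on the simplest member of the ARMA family: simulate an AR(1) process and recover its coefficient from the Yule-Walker relation. This is illustrative only, not the reactor model of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
phi_true = 0.7
n = 5000

# simulate a stationary AR(1) process x_t = phi * x_{t-1} + e_t
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + e[t]

# Yule-Walker identification: lag-1 autocovariance / lag-0 autocovariance
x0 = x - x.mean()
phi_hat = (x0[1:] @ x0[:-1]) / (x0 @ x0)
```

Discrepancies between the identified and theoretical coefficients, as in the abstract, would show up here as a gap between `phi_hat` and the value predicted from the physical state equations.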
Numerical equilibrium analysis for structured consumer resource models
de Roos, A.M.; Diekmann, O.; Getto, P.; Kirkilionis, M.A.
2010-01-01
In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, and thus existence boundaries for equilibria and stability boundaries
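For an unstructured analogue, the numerical equilibrium and stability analysis reduces to solving the nullclines and checking Jacobian eigenvalues. A sketch with a Lotka-Volterra consumer-resource model, used here as a stand-in for the structured model of the paper:

```python
import numpy as np

# consumer-resource model with logistic resource growth (illustrative parameters)
r, K, a, e, m = 1.0, 10.0, 0.5, 0.2, 0.4

Rs = m / (e * a)                 # interior equilibrium from the consumer nullcline
Cs = r * (1 - Rs / K) / a        # and from the resource nullcline

def rhs(y):
    R, C = y
    return np.array([r*R*(1 - R/K) - a*R*C,   # resource dynamics
                     e*a*R*C - m*C])          # consumer dynamics

def jacobian(y, h=1e-6):
    """Central finite-difference Jacobian of rhs at the point y."""
    J = np.zeros((2, 2))
    for j in range(2):
        dy = np.zeros(2)
        dy[j] = h
        J[:, j] = (rhs(y + dy) - rhs(y - dy)) / (2 * h)
    return J

eigs = np.linalg.eigvals(jacobian([Rs, Cs]))
```

Tracing how the sign of the leading eigenvalue's real part changes as two parameters vary is what produces the existence and stability boundaries discussed in the abstract.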
A Modeling Framework for Schedulability Analysis of Distributed Avionics Systems
DEFF Research Database (Denmark)
Han, Pujie; Zhai, Zhengjun; Nielsen, Brian
2018-01-01
This paper presents a modeling framework for schedulability analysis of distributed integrated modular avionics (DIMA) systems that consist of spatially distributed ARINC-653 modules connected by a unified AFDX network. We model a DIMA system as a set of stopwatch automata (SWA) in UPPAAL...