WorldWideScience

Sample records for ANOVA models application

  1. On testing variance components in ANOVA models

    OpenAIRE

    Hartung, Joachim; Knapp, Guido

    2000-01-01

    In this paper we derive asymptotic χ²-tests for general linear hypotheses on variance components using repeated variance components models. In two examples, the two-way nested classification model and the two-way crossed classification model with interaction, we explicitly investigate the properties of the asymptotic tests for small sample sizes.

  2. Factor selection and structural identification in the interaction ANOVA model.

    Science.gov (United States)

    Post, Justin B; Bondell, Howard D

    2013-03-01

    When faced with categorical predictors and a continuous response, the objective of an analysis often consists of two tasks: finding which factors are important and determining which levels of the factors differ significantly from one another. Often, these tasks are done separately using Analysis of Variance (ANOVA) followed by a post hoc hypothesis testing procedure such as Tukey's Honestly Significant Difference test. When interactions between factors are included in the model, the collapsing of levels of a factor becomes a more difficult problem. When testing for differences between two levels of a factor, claiming no difference would refer not only to equality of main effects, but also to equality of each interaction involving those levels. This structure between the main effects and interactions in a model is similar to the idea of heredity used in regression models. This article introduces a new method for accomplishing both of the common analysis tasks simultaneously in an interaction model while also adhering to the heredity-type constraint on the model. An appropriate penalization is constructed that encourages levels of factors to collapse and entire factors to be set to zero. It is shown that the procedure has the oracle property, implying that asymptotically it performs as well as if the exact structure were known beforehand. We also discuss the application to estimating interactions in the unreplicated case. Simulation studies show the procedure outperforms post hoc hypothesis testing procedures as well as similar methods that do not include a structural constraint. The method is also illustrated using a real data example.

  3. Smoothing spline ANOVA frailty model for recurrent event data.

    Science.gov (United States)

    Du, Pang; Jiang, Yihua; Wang, Yuedong

    2011-12-01

    Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data.

  4. Analysis of variance (ANOVA) models in lower extremity wounds.

    Science.gov (United States)

    Reed, James F

    2003-06-01

    Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control and the 2 treatments against each other using 3 Student t tests (t test). If we were to compare 4 treatment groups, then we would need to use 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so will the likelihood of finding a difference between any pair of groups simply by chance when no real difference exists, which is by definition a Type I error. If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14. As the number of multiple t tests increases, the experiment-wise error rate increases rather rapidly. The solution to the experiment-wise error rate problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
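
    The .14 figure follows from the usual family-wise error calculation: if k independent tests are each run at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^k. A minimal sketch of that arithmetic in Python, using the group counts from the example above (the function name is ours, and the independence assumption is the same approximation the abstract relies on):

        # Family-wise (experiment-wise) error rate for k independent tests at level alpha:
        # P(at least one Type I error) = 1 - (1 - alpha)^k
        def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
            return 1.0 - (1.0 - alpha) ** k

        print(familywise_error_rate(3))  # 3 t tests (2 treatments + control): ~0.14
        print(familywise_error_rate(6))  # 6 t tests (4 treatment groups): ~0.26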

  5. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    Science.gov (United States)

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry, including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design are discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.

  6. An introduction to (smoothing spline) ANOVA models in RKHS with examples in geographical data, medicine, atmospheric science and machine learning

    OpenAIRE

    Wahba, Grace

    2004-01-01

    Smoothing Spline ANOVA (SS-ANOVA) models in reproducing kernel Hilbert spaces (RKHS) provide a very general framework for data analysis, modeling and learning in a variety of fields. Discrete, noisy scattered, direct and indirect observations can be accommodated with multiple inputs and multiple possibly correlated outputs and a variety of meaningful structures. The purpose of this paper is to give a brief overview of the approach and describe and contrast a series of applications, while noti...

  7. Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA

    Science.gov (United States)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-11-01

    This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes-like' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach has been illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of the proposed approach as the nature of the random variable changes. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a microchannel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.

  8. Predicting Reading Proficiency in Multilevel Models: An ANOVA-Like Approach of Interpreting Effects

    Science.gov (United States)

    Subedi, Bidya Raj

    2007-01-01

    This study used an analysis of variance (ANOVA)-like approach to predict reading proficiency with student, teacher, and school-level predictors based on a 3-level hierarchical generalized linear model (HGLM) analysis. National Assessment of Educational Progress (NAEP) 2000 reading data for 4th graders sampled from 46 states of the United States of…

  9. Biomarker Detection in Association Studies: Modeling SNPs Simultaneously via Logistic ANOVA

    KAUST Repository

    Jung, Yoonsuh

    2014-10-02

    In genome-wide association studies, the primary task is to detect biomarkers in the form of Single Nucleotide Polymorphisms (SNPs) that have nontrivial associations with a disease phenotype and some other important clinical/environmental factors. However, the extremely large number of SNPs compared to the sample size inhibits application of classical methods such as multiple logistic regression. Currently the most commonly used approach is still to analyze one SNP at a time. In this paper, we propose to consider the genotypes of the SNPs simultaneously via a logistic analysis of variance (ANOVA) model, which expresses the logit-transformed mean of SNP genotypes as the summation of the SNP effects, effects of the disease phenotype and/or other clinical variables, and the interaction effects. We use a reduced-rank representation of the interaction-effect matrix for dimensionality reduction, and employ the L1-penalty in a penalized likelihood framework to filter out the SNPs that have no associations. We develop a Majorization-Minimization algorithm for computational implementation. In addition, we propose a modified BIC criterion to select the penalty parameters and determine the rank number. The proposed method is applied to a Multiple Sclerosis data set and simulated data sets and shows promise in biomarker detection.

  10. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.

    Science.gov (United States)

    Rohlfs, Rori V; Nielsen, Rasmus

    2015-09-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson-Kreitman-Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity of a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence…

  11. Visualizing Experimental Designs for Balanced ANOVA Models using Lisp-Stat

    Directory of Open Access Journals (Sweden)

    Philip W. Iversen

    2004-12-01

    The structure, or Hasse, diagram described by Taylor and Hilton (1981, The American Statistician) provides a visual display of the relationships between factors for balanced complete experimental designs. Using the Hasse diagram, one can apply rules to determine the appropriate linear model, ANOVA table, expected mean squares, and F-tests in the case of balanced designs. This procedure has been implemented in Lisp-Stat using a software representation of the experimental design. The user can interact with the Hasse diagram to add, change, or delete factors and see the effect on the proposed analysis. The system has potential uses in teaching and consulting.

  12. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers have demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters, along with the stochastic nature of this technology, leads to complex process control, which requires work focused on process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling of Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of the process parameters. Combining both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.

  13. Assessing and Evaluating UBT Model of Student Management Information System using ANOVA

    Directory of Open Access Journals (Sweden)

    Bekim Fetaji

    2016-08-01

    The research study focuses on assessing the efficiency and impact of a Student Management Information System in facilitating students during registration and examination periods in terms of efficiency, performance and usability. The University for Business and Technology (UBT), where the system has been designed, tailored, implemented, tested, evaluated and re-engineered, was chosen as the case study. The analysis tries to identify whether the developed UBT model shows improvement in student services, data centralization, data security, and the entire process of student e-services. Several impacting factors were investigated through the case study. Afterwards, the usability and user-friendliness of the developed UBT model and Management Information System solutions were evaluated, and ANOVA regression analyses were used to determine the impact. Insights and recommendations are provided.

  14. Generalized F test and generalized deviance test in two-way ANOVA models for randomized trials.

    Science.gov (United States)

    Shen, Juan; He, Xuming

    2014-01-01

    We consider the problem of detecting treatment effects in a randomized trial in the presence of an additional covariate. By reexpressing a two-way analysis of variance (ANOVA) model in a logistic regression framework, we derive generalized F tests and generalized deviance tests, which provide better power in detecting common location-scale changes of treatment outcomes than the classical F test. The null distributions of the test statistics are independent of the nuisance parameters in the models, so the critical values can be easily determined by Monte Carlo methods. We use simulation studies to demonstrate how the proposed tests perform compared with the classical F test. We also use data from a clinical study to illustrate possible savings in sample sizes.

  15. Smoothing spline ANOVA decomposition of arbitrary splines: an application to eye movements in reading.

    Science.gov (United States)

    Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias

    2015-01-01

    The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading.

  16. Smoothing spline ANOVA decomposition of arbitrary splines: an application to eye movements in reading.

    Directory of Open Access Journals (Sweden)

    Hannes Matuschek

    The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is given. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading.

  17. Tests for ANOVA models with a combination of crossed and nested designs under heteroscedasticity

    Science.gov (United States)

    Xu, Liwen; Tian, Maozai

    2016-06-01

    In this article we consider unbalanced ANOVA models with a combination of crossed and nested designs under heteroscedasticity. For the problem of testing no nested interaction effects, we propose two tests, based on a parametric bootstrap (PB) approach and a generalized p-value approach, respectively. The PB test does not depend on the chosen weights used to define the parameters uniquely. These two tests are compared through their simulated Type I error rates and powers. The simulations indicate that the PB test outperforms the generalized p-value test. The PB test performs very satisfactorily over an extensive range of cases, while the generalized p-value test has Type I error rates much less than the nominal level most of the time. Both tests exhibit similar power properties provided the Type I error rates are close to each other. In some cases, the generalized F (GF) test appears to be more powerful than the PB test because of its inflated Type I error rates.

  18. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    Science.gov (United States)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) truncation of the dimensionality of the ANOVA component functions, 2) an active dimension technique, especially for second- and higher-order parameter interactions, and 3) a stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  19. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    R. Wetzels; R.P.P.P. Grasman; E.J. Wagenmakers

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA designs…

  20. Application of Anova on Fly Ash Leaching Kinetics for Value Addition

    Science.gov (United States)

    Swain, Ranjita; Mohapatro, Rudra Narayana; Bhima Rao, Raghupatruni

    2016-04-01

    Fly ash is a major problem for the power plant sector as it is dumped at the plant site. Fly ash generation increases day by day due to the rapid growth of steel industries. Ceramic/refractory industries are growing rapidly because of the increasing number of steel industries. The natural resources of ceramic/refractory raw materials are depleting with time due to consumption. In view of this, fly ash from a thermal power plant has been identified for use in the ceramic/refractory industries after suitable beneficiation. In this paper, a sample was collected from the ash pond of Vedanta. The particle size (d80 passing size) of the sample is around 150 microns. The chemical analysis of the sample shows 3.9 % Fe2O3 and more than 10 % CaO. XRD patterns show that the fly ash samples consist predominantly of the crystalline phases of quartz, hematite and magnetite in a matrix of aluminosilicate glass. Leaching of iron oxide is 98.3 % at 3 M HCl concentration at 90 °C for 270 min of leaching time. A kinetic study of the leaching experiment was carried out. ANOVA software is utilized for curve fitting and the process is optimized using MATLAB 7.1. The properties relevant to ceramic materials are studied in detail and compared with those of standard ceramic materials. The product contains 0.3 % iron. The other properties of the product have established that it can be a raw material for ceramic industries.

  1. Detecting variable responses in time-series using repeated measures ANOVA: Application to physiologic challenges.

    Science.gov (United States)

    Macey, Paul M; Schluter, Philip J; Macey, Katherine E; Harper, Ronald M

    2016-01-01

    We present an approach to analyzing physiologic timetrends recorded during a stimulus by comparing means at each time point using repeated measures analysis of variance (RMANOVA). The approach allows temporal patterns to be examined without an a priori model of expected timing or pattern of response. The approach was originally applied to signals recorded from functional magnetic resonance imaging (fMRI) volumes-of-interest (VOI) during a physiologic challenge, but we have used the same technique to analyze continuous recordings of other physiological signals such as heart rate, breathing rate, and pulse oximetry. For fMRI, the method serves as a complement to whole-brain voxel-based analyses, and is useful for detecting complex responses within pre-determined brain regions, or as a post-hoc analysis of regions of interest identified by whole-brain assessments. We illustrate an implementation of the technique in the statistical software packages R and SAS. VOI timetrends are extracted from conventionally preprocessed fMRI images. A timetrend of average signal intensity across the VOI during the scanning period is calculated for each subject. The values are scaled relative to baseline periods, and time points are binned. In SAS, the procedure PROC MIXED implements the RMANOVA in a single step. In R, we present one option for implementing RMANOVA with the mixed model function "lme". Model diagnostics, and predicted means and differences are best performed with additional libraries and commands in R; we present one example. The ensuing results allow determination of significant overall effects, and time-point specific within- and between-group responses relative to baseline. We illustrate the technique using fMRI data from two groups of subjects who underwent a respiratory challenge. RMANOVA allows insight into the timing of responses and response differences between groups, and so is suited to physiologic testing paradigms eliciting complex response patterns.
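
    For readers who prefer Python to R or SAS, the same style of analysis can be approximated with the mixed-effects interface in statsmodels; a random intercept per subject induces the compound-symmetry correlation that classical RMANOVA assumes. This is a hedged analogue of the PROC MIXED/"lme" calls described above, not the authors' code, and the file and column names (signal, time_bin, group, subject) are hypothetical:

        import pandas as pd
        import statsmodels.formula.api as smf

        # One row per subject x time bin; columns are hypothetical:
        #   signal   - baseline-scaled average VOI intensity
        #   time_bin - binned time point (treated as categorical)
        #   group    - subject group (treated as categorical)
        #   subject  - subject identifier (grouping factor for random effects)
        df = pd.read_csv("voi_timetrends.csv")

        # Random-intercept mixed model, analogous in spirit to
        # R's lme(signal ~ time_bin * group, random = ~1 | subject)
        fit = smf.mixedlm("signal ~ C(time_bin) * C(group)",
                          data=df, groups=df["subject"]).fit()
        print(fit.summary())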

  2. Simultaneous Optimality of LSE and ANOVA Estimate in General Mixed Models

    Institute of Scientific and Technical Information of China (English)

    Mi Xia WU; Song Gui WANG; Kai Fun YU

    2008-01-01

    Problems of simultaneous optimal estimation and optimal tests in general mixed models are considered. A necessary and sufficient condition is presented for the least squares estimate of the fixed effects and the analysis of variance (Henderson III) estimate of the variance components to be uniformly minimum variance unbiased estimates simultaneously. This result can be applied to the problems of finding uniformly optimal unbiased tests and uniformly most accurate unbiased confidence intervals on parameters of interest, and of finding equivalences of several common estimates of variance components.

  3. Introducing ANOVA and ANCOVA a GLM approach

    CERN Document Server

    Rutherford, Andrew

    2000-01-01

    Traditional approaches to ANOVA and ANCOVA are now being replaced by a General Linear Modeling (GLM) approach. This book begins with a brief history of the separate development of ANOVA and regression analyses and demonstrates how both analysis forms are subsumed by the General Linear Model. A simple single-independent-factor ANOVA is analysed first in conventional terms and then again in GLM terms to illustrate the two approaches. The text then goes on to cover the main designs, both independent and related ANOVA and ANCOVA, single- and multi-factor designs. The conventional statistical assumptions underlying ANOVA and ANCOVA are detailed and given expression in GLM terms. Alternatives to traditional ANCOVA…

  4. Why we should use simpler models if the data allow this: relevance for ANOVA designs in experimental biology

    OpenAIRE

    Lazic Stanley E

    2008-01-01

    Background: Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables such as dose, time, or age, can be treated as a continuous, rather than a categorical variable during analysis – even if subjects were randomly assigned to treatment groups. While this is not common, there are a number of advantages of such an approach, including greater statistical power due to increased precision, a simple…

  5. Are droughts occurrence and severity aggravating? A study on SPI drought class transitions using loglinear models and ANOVA-like inference

    Directory of Open Access Journals (Sweden)

    E. E. Moreira

    2011-12-01

    Long time series (95 to 135 yr) of the Standardized Precipitation Index (SPI) computed with the 12-month time scale relative to 10 locations across Portugal were studied with the aim of investigating whether drought frequency and severity are changing through time. Considering four drought severity classes, time series of drought class transitions were computed and later divided into 4 or 5 sub-periods according to the length of the time series. Drought class transitions were calculated to form a 2-dimensional contingency table for each period. Two-dimensional loglinear models were fitted to these contingency tables and an ANOVA-like inference was then performed in order to investigate differences relative to drought class transitions among those sub-periods, which were considered as treatments of only one factor. The application of ANOVA-like inference to these data allowed comparison of the four or five sub-periods in terms of probabilities of transition between drought classes, which were used to detect a possible trend in the time evolution of drought frequency and severity that could be related to climate change. Results for a number of locations show some similarity between the first, third and fifth periods (or the second and the fourth if there were only 4 sub-periods) regarding the persistence of severe/extreme and sometimes moderate droughts. In global terms, the results do not support the assumption of a trend for progressive aggravation of drought occurrence during the last century, but rather suggest the existence of long-duration cycles.

  6. ANOVA and ANCOVA A GLM Approach

    CERN Document Server

    Rutherford, Andrew

    2012-01-01

    Provides an in-depth treatment of ANOVA and ANCOVA techniques from a linear model perspective. ANOVA and ANCOVA: A GLM Approach provides a contemporary look at the general linear model (GLM) approach to the analysis of variance (ANOVA) of one- and two-factor psychological experiments. With its organized and comprehensive presentation, the book successfully guides readers through conventional statistical concepts and how to interpret them in GLM terms, treating the main single- and multi-factor designs as they relate to ANOVA and ANCOVA. The book begins with a brief history of the separate development…

  7. Why we should use simpler models if the data allow this: relevance for ANOVA designs in experimental biology

    Directory of Open Access Journals (Sweden)

    Lazic Stanley E

    2008-07-01

    Background: Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables such as dose, time, or age, can be treated as a continuous, rather than a categorical variable during analysis – even if subjects were randomly assigned to treatment groups. While this is not common, there are a number of advantages of such an approach, including greater statistical power due to increased precision, a simpler and more informative interpretation of the results, greater parsimony, and transformation of the predictor variable is possible. Results: An example is given from an experiment where rats were randomly assigned to receive either 0, 60, 180, or 240 mg/L of fluoxetine in their drinking water, with performance on the forced swim test as the outcome measure. Dose was treated as either a categorical or continuous variable during analysis, with the latter analysis leading to a more powerful test (p = 0.021 vs. p = 0.159). This will be true in general, and the reasons for this are discussed. Conclusion: There are many advantages to treating variables as continuous numeric variables if the data allow this, and this should be employed more often in experimental biology. Failure to use the optimal analysis runs the risk of missing significant effects or relationships.
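
    The categorical-versus-continuous contrast is easy to reproduce in outline with any linear-model routine. A hedged Python sketch (the file and column names are hypothetical; the doses are those quoted above):

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical columns: dose in {0, 60, 180, 240} mg/L,
        # immobility = forced swim test outcome
        df = pd.read_csv("fluoxetine_fst.csv")

        # Dose as a categorical factor: one-way ANOVA spending 3 df on dose
        cat_fit = smf.ols("immobility ~ C(dose)", data=df).fit()

        # Dose as a continuous covariate: a single 1-df linear trend,
        # typically more powerful when the response is roughly monotonic in dose
        lin_fit = smf.ols("immobility ~ dose", data=df).fit()

        print(cat_fit.f_pvalue, lin_fit.f_pvalue)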

  8. Detecting variable responses in time-series using repeated measures ANOVA: Application to physiologic challenges [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Paul M. Macey

    2016-07-01

    We present an approach to analyzing physiologic timetrends recorded during a stimulus by comparing means at each time point using repeated measures analysis of variance (RMANOVA). The approach allows temporal patterns to be examined without an a priori model of expected timing or pattern of response. The approach was originally applied to signals recorded from functional magnetic resonance imaging (fMRI) volumes-of-interest (VOI) during a physiologic challenge, but we have used the same technique to analyze continuous recordings of other physiological signals such as heart rate, breathing rate, and pulse oximetry. For fMRI, the method serves as a complement to whole-brain voxel-based analyses, and is useful for detecting complex responses within pre-determined brain regions, or as a post-hoc analysis of regions of interest identified by whole-brain assessments. We illustrate an implementation of the technique in the statistical software packages R and SAS. VOI timetrends are extracted from conventionally preprocessed fMRI images. A timetrend of average signal intensity across the VOI during the scanning period is calculated for each subject. The values are scaled relative to baseline periods, and time points are binned. In SAS, the procedure PROC MIXED implements the RMANOVA in a single step. In R, we present one option for implementing RMANOVA with the mixed model function "lme". Model diagnostics, and predicted means and differences are best performed with additional libraries and commands in R; we present one example. The ensuing results allow determination of significant overall effects, and time-point specific within- and between-group responses relative to baseline. We illustrate the technique using fMRI data from two groups of subjects who underwent a respiratory challenge. RMANOVA allows insight into the timing of responses and response differences between groups, and so is suited to physiologic testing paradigms eliciting complex…

  9. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
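
    The underlying idea, separating between-input-set variance from within-run (patient-level) variance using a modest number of runs, can be illustrated with the textbook one-way random-effects ANOVA estimators. The following toy sketch shows that algebra on simulated data; it is an illustration under simplifying assumptions, not the authors' formulae or sample-size rules:

        import numpy as np

        rng = np.random.default_rng(0)
        N, n = 20, 500               # N sampled input sets, n simulated patients per run
        sigma_b, sigma_w = 2.0, 5.0  # toy between-run / within-run standard deviations

        # Patient-level outputs: each run's mean shifts with its sampled inputs
        run_means = rng.normal(0.0, sigma_b, size=N)
        outputs = run_means[:, None] + rng.normal(0.0, sigma_w, size=(N, n))

        group_means = outputs.mean(axis=1)
        grand_mean = group_means.mean()

        # Classic one-way ANOVA mean squares
        msb = n * np.sum((group_means - grand_mean) ** 2) / (N - 1)          # between runs
        msw = np.sum((outputs - group_means[:, None]) ** 2) / (N * (n - 1))  # within runs

        # Method-of-moments variance components, using
        # E[MSB] = sigma_w^2 + n * sigma_b^2 and E[MSW] = sigma_w^2
        var_within = msw
        var_between = max((msb - msw) / n, 0.0)
        print(grand_mean, var_between, var_within)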

  10. Teaching Principles of Inference with ANOVA

    Science.gov (United States)

    Tarlow, Kevin R.

    2016-01-01

    Analysis of variance (ANOVA) is a test of "mean" differences, but the reference to "variances" in the name is often overlooked. Classroom activities are presented to illustrate how ANOVA works with emphasis on how to think critically about inferential reasoning.
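
    One way to make this point concrete in class is to compute F directly as a ratio of two variance estimates and check it against a library routine; a short Python sketch with made-up numbers:

        import numpy as np
        from scipy.stats import f_oneway

        # Three small groups (made-up classroom data)
        groups = [np.array([4.0, 5.0, 6.0]),
                  np.array([6.0, 7.0, 8.0]),
                  np.array([9.0, 10.0, 11.0])]

        k = len(groups)
        n_total = sum(len(g) for g in groups)
        grand = np.concatenate(groups).mean()

        # Between-group mean square: spread of the group means around the grand mean
        ms_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)

        # Within-group mean square: pooled spread inside each group
        ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - k)

        print(ms_between / ms_within)       # F computed "by hand"
        print(f_oneway(*groups).statistic)  # the same value from scipy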

  11. Some Sufficient Conditions for the Identity of the ANOVA Estimator and the SD Estimator in Mixed-Effects Models

    Institute of Scientific and Technical Information of China (English)

    吴密霞; 赵延

    2014-01-01

    Mixed-effects models are a very important class of statistical models and are widely applied in many fields. This paper compares two estimators of the variance components under such models: the analysis of variance (ANOVA) estimator and the spectral decomposition (SD) estimator. Using the spectral decomposition of the covariance matrix from Wu and Wang [A new method of spectral decomposition of covariance matrix in mixed effects models and its applications, Sci. China, Ser. A, 2005, 48: 1451-1464], two sufficient conditions under which the ANOVA estimator and the SD estimator coincide, together with the corresponding statistical properties, are given, and the above results are applied to a circular component data model and a mixed-effects ANOVA model.

  12. Foliar Sprays of Citric Acid and Malic Acid Modify Growth, Flowering, and Root to Shoot Ratio of Gazania (Gazania rigens L.): A Comparative Analysis by ANOVA and Structural Equations Modeling

    Directory of Open Access Journals (Sweden)

    Majid Talebi

    2014-01-01

    Foliar application of two levels of citric acid and malic acid (100 or 300 mg L−1) was investigated on flower stem height, plant height, flower performance and yield indices (fresh yield, dry yield and root to shoot ratio) of Gazania. Distilled water was applied as the control treatment. Multivariate analysis revealed that while the experimental treatments had no significant effect on fresh weight and flower count, plant dry weight was significantly increased by 300 mg L−1 malic acid. Citric acid at 100 and 300 mg L−1 and malic acid at 300 mg L−1 increased root fresh weight significantly. Both plant height and peduncle length were significantly increased at all applied levels of citric acid and malic acid. The display time of flowers on the plant increased in all treatments compared to the control treatment. The root to shoot ratio was increased significantly by 300 mg L−1 citric acid compared to all other treatments. These findings confirm earlier reports that citric acid and malic acid, as environmentally sound chemicals, are effective on various aspects of growth and development of crops. Structural equations modeling is used in parallel with ANOVA to infer the factor effects and the possible paths of effect.

  13. Sequential experimental design based generalised ANOVA

    Science.gov (United States)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  14. On the Comparison of the Spectral Decomposition Estimate and the ANOVA Estimate in Linear Mixed Models

    Institute of Scientific and Technical Information of China (English)

    史建红

    2003-01-01

    The spectral decomposition estimate (SDE) proposed by Wang and Yin [1] is a new method for simultaneously estimating the fixed effects and the variance components in linear mixed models. In this paper, we compare the SDE with the ANOVA estimate (ANOVAE) in linear mixed models with two variance components. Our results show that these two estimates of the variance components have equal variance under some conditions, and that the SDE shares some of the optimality properties of the ANOVAE.

  15. ANOVA for the behavioral sciences researcher

    CERN Document Server

    Cardinal, Rudolf N

    2013-01-01

    This new book provides a theoretical and practical guide to analysis of variance (ANOVA) for those who have not had a formal course in this technique but need to use this analysis as part of their research. From their experience in teaching this material and applying it to research problems, the authors have created a summary of the statistical theory underlying ANOVA, together with important issues, guidance, practical methods, references, and hints about using statistical software. These have been organized so that the student can learn the logic of the analytical techniques but also use the…

  16. Permutation Tests for Stochastic Ordering and ANOVA

    CERN Document Server

    Basso, Dario; Salmaso, Luigi; Solari, Aldo

    2009-01-01

    Permutation testing for multivariate stochastic ordering and ANOVA designs is a fundamental issue in many scientific fields such as medicine, biology, pharmaceutical studies, engineering, economics, psychology, and social sciences. This book presents advanced methods and related R codes to perform complex multivariate analyses

  17. Multiple comparison analysis testing in ANOVA.

    Science.gov (United States)

    McHugh, Mary L

    2011-01-01

    The Analysis of Variance (ANOVA) test has long been an important tool for researchers conducting studies on multiple experimental groups and one or more control groups. However, ANOVA cannot provide detailed information on differences among the various study groups, or on complex combinations of study groups. To fully understand group differences in an ANOVA, researchers must conduct tests of the differences between particular pairs of experimental and control groups. Tests conducted on subsets of data tested previously in another analysis are called post hoc tests. A class of post hoc tests that provide this type of detailed information for ANOVA results is called "multiple comparison analysis" tests. The most commonly used multiple comparison analysis statistics include the following tests: Tukey, Newman-Keuls, Scheffé, Bonferroni and Dunnett. These statistical tools each have specific uses, advantages and disadvantages. Some are best used for testing theory while others are useful in generating new theory. Selection of the appropriate post hoc test will provide researchers with the most detailed information while limiting Type I errors due to alpha inflation.
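
    As a concrete illustration, one of the listed procedures, Tukey's test, is available in statsmodels; a minimal sketch with made-up scores and group labels:

        import numpy as np
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        # Made-up outcomes for one control and two experimental groups
        scores = np.array([23, 25, 21, 30, 32, 29, 28, 40, 41, 38, 42])
        labels = np.array(["ctrl"] * 3 + ["exp1"] * 4 + ["exp2"] * 4)

        # All pairwise comparisons with the family-wise error held at alpha = 0.05
        print(pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05))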

  18. ANOVA like analysis of cancer death age

    Science.gov (United States)

    Areia, Aníbal; Mexia, João T.

    2016-06-01

    We use ANOVA to study the influence of year, sex, country and location on the average cancer death age. The data used was from the World Health Organization (WHO) files for 1999, 2003, 2007 and 2011. The locations considered were: kidney, leukaemia, melanoma of skin and oesophagus and the countries: Portugal, Norway, Greece and Romania.

  19. Detection of epigenetic changes using ANOVA with spatially varying coefficients.

    Science.gov (United States)

    Xiao, Guanghua; Wang, Xinlei; LaPlant, Quincey; Nestler, Eric J; Xie, Yang

    2013-03-13

    Identification of genome-wide epigenetic changes, the stable changes in gene function without a change in DNA sequence, under various conditions plays an important role in biomedical research. High-throughput epigenetic experiments are useful tools to measure genome-wide epigenetic changes, but the measured intensity levels from these high-resolution genome-wide epigenetic profiling data are often spatially correlated with high noise levels. In addition, it is challenging to detect genome-wide epigenetic changes across multiple conditions, so efficient statistical methodology development is needed for this purpose. In this study, we consider ANOVA models with spatially varying coefficients, combined with a hierarchical Bayesian approach, to explicitly model spatial correlation caused by location-dependent biological effects (i.e., epigenetic changes) and borrow strength among neighboring probes to compare epigenetic changes across multiple conditions. Through simulation studies and applications in drug addiction and depression datasets, we find that our approach compares favorably with competing methods; it is more efficient in estimation and more effective in detecting epigenetic changes. In addition, it can provide biologically meaningful results.

  20. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    G. Zwanenburg; H.C.J. Hoefsloot; J.A. Westerhuis; J.J. Jansen; A.K. Smilde

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.

  1. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Science.gov (United States)

    Liao, Qifeng; Lin, Guang

    2016-07-01

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  2. Linear Bayes estimator of parameters and its superiorities for the one-way ANOVA model

    Institute of Scientific and Technical Information of China (English)

    童楠; 韦来生

    2008-01-01

    For the balanced one-way classification analysis of variance (ANOVA) model, the linear Bayes unbiased estimator (LBUE) of estimable functions of the effect parameter vector is derived, and its superiority over the least squares estimator (LSE) is discussed under the mean squared error matrix (MSEM) criterion, the predictive Pitman closeness (PRPC) criterion, and the posterior Pitman closeness (PPC) criterion.

  3. A marginal-mean ANOVA approach for analyzing multireader multicase radiological imaging data.

    Science.gov (United States)

    Hillis, Stephen L

    2014-01-30

    The correlated-error ANOVA method proposed by Obuchowski and Rockette (OR) has been a useful procedure for analyzing reader-performance outcomes, such as the area under the receiver-operating-characteristic curve, resulting from multireader multicase radiological imaging data. This approach, however, has only been formally derived for the test-by-reader-by-case factorial study design. In this paper, I show that the OR model can be viewed as a marginal-mean ANOVA model. Viewing the OR model within this marginal-mean ANOVA framework is the basis for the marginal-mean ANOVA approach, the topic of this paper. This approach (1) provides an intuitive motivation for the OR model, including its covariance-parameter constraints; (2) provides easy derivations of OR test statistics and parameter estimates, as well as their distributions and confidence intervals; and (3) allows for easy generalization of the OR procedure to other study designs. In particular, I show how one can easily derive OR-type analysis formulas for any balanced study design by following an algorithm that only requires an understanding of conventional ANOVA methods.

  4. Tests of Linear Hypotheses in the ANOVA under Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    Jin-Ting Zhang

    2013-05-01

    It is often of interest to undertake a general linear hypothesis testing (GLHT) problem in the one-way ANOVA without assuming the equality of the group variances. When the equality of the group variances is valid, it is well known that the GLHT problem can be solved by the classical F-test. The classical F-test, however, may lead to misleading conclusions when the variance homogeneity assumption is seriously violated, since it does not take the group variance heteroscedasticity into account. To our knowledge, little work has been done for this heteroscedastic GLHT problem except for some special cases. In this paper, we propose a simple approximate Hotelling T² (AHT) test. We show that the AHT test is invariant under affine transformations, different choices of the coefficient matrix used to define the same hypothesis, and different labeling schemes of the group means. Simulations and real data applications indicate that the AHT test is comparable with or outperforms some well-known approximate solutions proposed for the k-sample Behrens-Fisher problem, which is a special case of the heteroscedastic GLHT problem.

  5. Discovering gene expression patterns in time course microarray experiments by ANOVA-SCA.

    NARCIS (Netherlands)

    M.J. Nueda; A. Conessa; J.A. Westerhuis; H.C.J. Hoefsloot; A.K. Smilde; M. Talon; A. Ferrer

    2007-01-01

    In this work, we develop the application of analysis of variance-simultaneous component analysis (ANOVA-SCA; Smilde et al., Bioinformatics, 2005) to the analysis of multiple-series time course microarray data as an example of multifactorial gene expression profiling experiments. We denoted this…

  6. Are multiple contrast tests superior to the ANOVA?

    Science.gov (United States)

    Konietschke, Frank; Bösiger, Sandra; Brunner, Edgar; Hothorn, Ludwig A

    2013-08-01

    Multiple contrast tests can be used to test arbitrary linear hypotheses by providing local and global test decisions as well as simultaneous confidence intervals. The ANOVA F-test, on the contrary, can be used to test the global null hypothesis of no treatment effect. Thus, multiple contrast tests provide more information than the analysis of variance (ANOVA) by indicating which levels cause the significance. We compare the exact powers of the ANOVA F-test and multiple contrast tests to reject the global null hypothesis. Hereby, we compute their least favorable configurations (LFCs). It turns out that both procedures have the same LFCs under certain conditions. Exact power investigations show that their powers are equal for detecting their LFCs.

  7. Foliar Sprays of Citric Acid and Malic Acid Modify Growth, Flowering, and Root to Shoot Ratio of Gazania (Gazania rigens L.): A Comparative Analysis by ANOVA and Structural Equations Modeling

    OpenAIRE

    Majid Talebi; Ebrahim Hadavi; Nima Jaafari

    2014-01-01

    Foliar application of two levels of citric acid and malic acid (100 or 300 mg L−1) was investigated on flower stem height, plant height, flower performance and yield indices (fresh yield, dry yield and root to shoot ratio) of Gazania. Distilled water was applied as the control treatment. Multivariate analysis revealed that while the experimental treatments had no significant effect on fresh weight and flower count, plant dry weight was significantly increased by 300 mg L−1 malic acid. Cit…

  8. GENERALIZED CONFIDENCE REGIONS OF FIXED EFFECTS IN THE TWO-WAY ANOVA

    Institute of Scientific and Technical Information of China (English)

    Weiyan MU; Shifeng XIONG; Xingzhong XU

    2008-01-01

    The authors discuss the unbalanced two-way ANOVA model under heteroscedasticity. By taking the generalized approach, the authors derive the generalized p-values for testing the equality of fixed effects and the generalized confidence regions for these effects. The authors also provide their frequentist properties in large-sample cases. Simulation studies show that the generalized confidence regions have good coverage probabilities.

  9. Empirical Likelihood-Based ANOVA for Trimmed Means

    Science.gov (United States)

    Velina, Mara; Valeinis, Janis; Greco, Luca; Luta, George

    2016-01-01

    In this paper, we introduce an alternative to Yuen’s test for the comparison of several population trimmed means. This nonparametric ANOVA type test is based on the empirical likelihood (EL) approach and extends the results for one population trimmed mean from Qin and Tsao (2002). The results of our simulation study indicate that for skewed distributions, with and without variance heterogeneity, Yuen’s test performs better than the new EL ANOVA test for trimmed means with respect to control over the probability of a type I error. This finding is in contrast with our simulation results for the comparison of means, where the EL ANOVA test for means performs better than Welch’s heteroscedastic F test. The analysis of a real data example illustrates the use of Yuen’s test and the new EL ANOVA test for trimmed means for different trimming levels. Based on the results of our study, we recommend the use of Yuen’s test for situations involving the comparison of population trimmed means between groups of interest.

  10. Model Theory and Applications

    CERN Document Server

    Mangani, P

    2011-01-01

    This title includes: Lectures - G.E. Sacks - Model theory and applications, and H.J. Keisler - Constructions in model theory; and Seminars - M. Servi - SH formulas and generalized exponential, and J.A. Makowski - Topological model theory.

  11. Characterization of near-infrared spectral variance in the authentication of skim and nonfat dry milk powder collection using ANOVA-PCA, pooled-ANOVA, and partial least-squares regression.

    Science.gov (United States)

    Harnly, James M; Harrington, Peter B; Botros, Lucy L; Jablonski, Joseph; Chang, Claire; Bergana, Marti Mamula; Wehling, Paul; Downey, Gerard; Potts, Alan R; Moore, Jeffrey C

    2014-08-13

    Forty-one samples of skim milk powder (SMP) and nonfat dry milk (NFDM) from 8 suppliers, 13 production sites, and 3 processing temperatures were analyzed by NIR diffuse reflectance spectrometry over a period of 3 days. NIR reflectance spectra (1700-2500 nm) were converted to pseudoabsorbance and examined using (a) analysis of variance-principal component analysis (ANOVA-PCA), (b) pooled-ANOVA based on data submatrices, and (c) partial least-squares regression (PLSR) coupled with pooled-ANOVA. ANOVA-PCA score plots showed clear separation of the samples with respect to milk class (SMP or NFDM), day of analysis, production site, processing temperature, and individual samples. Pooled-ANOVA provided statistical levels of significance for the separation of the averages, some of which were many orders of magnitude below 10⁻³. PLSR showed that the correlation with Certificate of Analysis (COA) concentrations varied from a weak coefficient of determination (R²) of 0.32 for moisture to moderate R² values of 0.61 for fat and 0.78 for protein for this multinational study. In this study, pooled-ANOVA was applied for the first time to PLS modeling and demonstrated that even though the calibration models may not be precise, the contribution of the protein peaks in the NIR spectra accounted for the largest proportion of the variation despite the inherent imprecision of the COA values.

  12. One-way ANOVA based on interval information

    Science.gov (United States)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the assumption of homogeneity of interval variances. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for use when the null hypothesis of equal means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic with the related interval critical value as a criterion for accepting or rejecting the null interval hypothesis. Finally, the decision procedure yields degrees of acceptance or rejection for the interval hypotheses. An applied example illustrates the performance of this method.

  13. Modelling Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 8th European Conference on Modelling Foundations and Applications, held in Kgs. Lyngby, Denmark, in July 2012. The 20 revised full foundations track papers and 10 revised full applications track papers presented were carefully reviewed and sel...

  14. An introduction to analysis of variance (ANOVA) with special reference to data from clinical experiments in optometry.

    Science.gov (United States)

    Armstrong, R A; Slade, S V; Eperjesi, F

    2000-05-01

    This article is aimed primarily at eye care practitioners who are undertaking advanced clinical research, and who wish to apply analysis of variance (ANOVA) to their data. ANOVA is a data analysis method of great utility and flexibility. This article describes why and how ANOVA was developed, the basic logic which underlies the method and the assumptions that the method makes for it to be validly applied to data from clinical experiments in optometry. The application of the method to the analysis of a simple data set is then described. In addition, the methods available for making planned comparisons between treatment means and for making post hoc tests are evaluated. The problem of determining the number of replicates or patients required in a given experimental situation is also discussed.
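    As a minimal illustration of the workflow described above (toy data, not the article's clinical examples), a one-way ANOVA followed by a Tukey HSD post hoc test can be sketched as:

    ```python
    # One-way ANOVA with a Tukey HSD post hoc test on simulated data.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(2)
    a = rng.normal(10.0, 1.0, 12)   # e.g. treatment A
    b = rng.normal(10.5, 1.0, 12)   # treatment B
    c = rng.normal(12.0, 1.0, 12)   # treatment C

    F, p = stats.f_oneway(a, b, c)          # omnibus test across groups
    print(f"F = {F:.2f}, p = {p:.4f}")

    # Post hoc pairwise comparisons, only meaningful if the F-test rejects.
    values = np.concatenate([a, b, c])
    groups = np.repeat(["A", "B", "C"], 12)
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
    ```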

  15. A hybrid anchored-ANOVA - POD/Kriging method for uncertainty quantification in unsteady high-fidelity CFD simulations

    Science.gov (United States)

    Margheri, Luca; Sagaut, Pierre

    2016-11-01

    To significantly increase the contribution of numerical computational fluid dynamics (CFD) simulation for risk assessment and decision making, it is important to quantitatively measure the impact of uncertainties to assess the reliability and robustness of the results. As unsteady high-fidelity CFD simulations are becoming the standard for industrial applications, reducing the number of samples required to perform sensitivity analysis (SA) and uncertainty quantification (UQ) is a real engineering challenge. The novel approach presented in this paper is based on an efficient hybridization of the anchored-ANOVA and POD/Kriging methods, which have already been used in realistic CFD-UQ applications, and on the definition of best practices to achieve global accuracy. The anchored-ANOVA method is used to efficiently reduce the dimension of the UQ space, while POD/Kriging is used to smooth and interpolate each anchored-ANOVA term. The main advantages of the proposed method are illustrated through four applications of increasing complexity, most of them based on Large-Eddy Simulation as a high-fidelity CFD tool: the turbulent channel flow, the flow around an isolated bluff body, a pedestrian wind comfort study in a full-scale urban area, and an application to toxic gas dispersion in a full-scale city area. The proposed c-APK method (anchored-ANOVA-POD/Kriging) inherits the advantages of each key element: interpolation through POD/Kriging precludes the use of quadrature schemes, thereby allowing a more flexible sampling strategy, while the ANOVA decomposition allows for better domain exploration. A comparison of the three methods is given for each application. In addition, the importance of adding flexibility to the control parameters and the choice of the quantity of interest (QoI) are discussed. As a result, global accuracy can be achieved with a reasonable number of samples, allowing computationally expensive CFD-UQ analysis.

  16. Fatigue of NiTi SMA-pulley system using Taguchi and ANOVA

    Science.gov (United States)

    Mohd Jani, Jaronie; Leary, Martin; Subic, Aleksandar

    2016-05-01

    Shape memory alloy (SMA) actuators can be integrated with a pulley system to provide mechanical advantage and to reduce packaging space; however, there appears to be no formal investigation of the effect of a pulley system on SMA structural or functional fatigue. In this work, cyclic testing was conducted on nickel-titanium (NiTi) SMA actuators on a pulley system and in a control experiment (without pulley). Both structural and functional fatigue were monitored until fracture, or until a maximum of 1E5 cycles was reached, for each experimental condition. The Taguchi method and analysis of variance (ANOVA) were used to optimise the SMA-pulley system configurations. In general, one-way ANOVA at the 95% confidence level showed no significant difference between the structural or functional fatigue of SMA-pulley actuators and SMA actuators without a pulley. Within the sample of SMA-pulley actuators, activation duration had the greatest significance for both structural and functional fatigue, and the pulley configuration (angle of wrap and sheave diameter) had greater statistical significance than load magnitude for functional fatigue. This work identified that the structural and functional fatigue performance of SMA-pulley systems is optimised by maximising sheave diameter and using an intermediate wrap angle, with minimal load and activation duration. However, these parameters may not be compatible with commercial imperatives, so a test was also completed for a commercially optimal SMA-pulley configuration. This novel observation will be applicable to many areas of SMA-pulley system development.

  17. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order or direction of the parameters in their statistical model. For example, a researcher might expect regression coefficient beta1 to be larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, known as an order-constrained hypothesis. A major advantage of testing such a hypothesis is that power is gained, so a smaller sample size is needed. This article discusses this reduction in required sample size as an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses; such a table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain the tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the required sample size decreases by 30% to 50% when the complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results; with fewer constraints, however, ordering the parameters (e.g., beta1 > beta2) results in higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).
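    An illustrative simulation of the basic mechanism, not the article's tables: imposing a direction on a hypothesis raises power at a fixed sample size, which is where the sample-size savings come from. All settings below are assumptions for the toy example.

    ```python
    # Power of an unconstrained (two-sided) vs a direction-constrained
    # (one-sided) two-sample test at the same n, by Monte Carlo.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, delta, reps, alpha = 20, 0.5, 5000, 0.05
    two_sided = one_sided = 0
    for _ in range(reps):
        a = rng.normal(delta, 1.0, n)   # group with the larger true mean
        b = rng.normal(0.0, 1.0, n)
        t, p2 = stats.ttest_ind(a, b)
        two_sided += p2 < alpha
        # One-sided p-value for the constrained hypothesis mu_a > mu_b.
        p1 = p2 / 2 if t > 0 else 1 - p2 / 2
        one_sided += p1 < alpha
    print(f"power, unconstrained (two-sided): {two_sided / reps:.3f}")
    print(f"power, constrained (one-sided):   {one_sided / reps:.3f}")
    ```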

  18. Statistically significant contrasts between EMG waveforms revealed using wavelet-based functional ANOVA.

    Science.gov (United States)

    McKay, J Lucas; Welch, Torrence D J; Vidakovic, Brani; Ting, Lena H

    2013-01-01

    We developed wavelet-based functional ANOVA (wfANOVA) as a novel approach for comparing neurophysiological signals that are functions of time. Temporal resolution is often sacrificed by analyzing such data in large time bins, increasing statistical power by reducing the number of comparisons. We performed ANOVA in the wavelet domain because differences between curves tend to be represented by a few temporally localized wavelets, which we transformed back to the time domain for visualization. We compared wfANOVA and ANOVA performed in the time domain (tANOVA) on both experimental electromyographic (EMG) signals from responses to perturbation during standing balance across changes in peak perturbation acceleration (3 levels) and velocity (4 levels) and on simulated data with known contrasts. In experimental EMG data, wfANOVA revealed the continuous shape and magnitude of significant differences over time without a priori selection of time bins. However, tANOVA revealed only the largest differences at discontinuous time points, resulting in features with later onsets and shorter durations than those identified using wfANOVA (P < 0.02). Furthermore, wfANOVA required significantly fewer significant F tests (about one quarter as many; P < 0.015) than tANOVA, resulting in post hoc tests with increased power. In simulated EMG data, wfANOVA identified known contrast curves with a high level of precision (r² = 0.94 ± 0.08) and performed better than tANOVA across noise levels (P << 0.01). Therefore, wfANOVA may be useful for revealing differences in the shape and magnitude of neurophysiological signals (e.g., EMG, firing rates) across multiple conditions with both high temporal resolution and high statistical power. PMID:23100136
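    The sketch below is a rough, simplified analogue of the wfANOVA idea on synthetic signals, assuming the PyWavelets package (two conditions only, with a Bonferroni threshold standing in for the paper's selection procedure; this is not the authors' code).

    ```python
    # Simplified wfANOVA-style contrast: transform trials to the wavelet
    # domain, F-test each coefficient across conditions, and reconstruct
    # only the significant part of the between-condition difference.
    import numpy as np
    import pywt
    from scipy import stats

    rng = np.random.default_rng(4)
    t = np.linspace(0, 1, 256)
    burst = np.exp(-((t - 0.4) ** 2) / 0.002)            # localized difference
    make = lambda extra: np.array(
        [np.sin(2 * np.pi * 3 * t) + extra + rng.normal(0, 0.3, t.size)
         for _ in range(20)])
    cond_a, cond_b = make(0.0), make(burst)

    def to_array(trials):
        # Flatten each trial's wavelet decomposition into one vector.
        rows, slices = [], None
        for x in trials:
            arr, slices = pywt.coeffs_to_array(pywt.wavedec(x, "db4"))
            rows.append(arr)
        return np.array(rows), slices

    A, slices = to_array(cond_a)
    B, _ = to_array(cond_b)

    _, p = stats.f_oneway(A, B)                          # per-coefficient ANOVA
    contrast = np.where(p < 0.05 / A.shape[1], B.mean(0) - A.mean(0), 0.0)
    diff = pywt.waverec(
        pywt.array_to_coeffs(contrast, slices, output_format="wavedec"), "db4")
    print(f"{np.count_nonzero(contrast)} significant wavelet coefficients")
    print(f"contrast peaks near t = {t[np.argmax(np.abs(diff[:t.size]))]:.2f}")
    ```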

  19. Prediction and Control of Cutting Tool Vibration in Cnc Lathe with Anova and Ann

    Directory of Open Access Journals (Sweden)

    S. S. Abuthakeer

    2011-06-01

    Machining is a complex process in which many variables can degrade the desired results. Among them, cutting tool vibration is the most critical phenomenon influencing the dimensional precision of the machined components, the functional behavior of the machine tools, and the life of the cutting tool. In a machining operation, cutting tool vibrations are mainly influenced by cutting parameters such as cutting speed, depth of cut and tool feed rate. In this work, the cutting tool vibrations are controlled using a damping pad made of Neoprene. Experiments were conducted in a CNC lathe in which the tool holder was supported with and without the damping pad. The cutting tool vibration signals were collected through a data acquisition system supported by LabVIEW software. To increase the reliability of the experiments, a full factorial experimental design was used. The experimental data collected were tested with analysis of variance (ANOVA) to understand the influences of the cutting parameters, and empirical models were developed using ANOVA. Experimental studies and data analysis were performed to validate the proposed damping system. A multilayer perceptron neural network model was constructed with a feed-forward back-propagation algorithm using the acquired data. On completion of the experimental tests, the ANN was used to validate the results obtained and to predict the behavior of the system under any cutting condition within the operating range. The onsite tests show that the proposed system reduces the vibration of the cutting tool to a great extent.

  20. ANOVA-like differential expression (ALDEx) analysis for mixed population RNA-Seq.

    Science.gov (United States)

    Fernandes, Andrew D; Macklaim, Jean M; Linn, Thomas G; Reid, Gregor; Gloor, Gregory B

    2013-01-01

    Experimental variance is a major challenge when dealing with high-throughput sequencing data. This variance has several sources: sampling replication, technical replication, variability within biological conditions, and variability between biological conditions. The high per-sample cost of RNA-Seq often precludes the large number of experiments needed to partition observed variance into these categories as per standard ANOVA models. We show that the partitioning of within-condition to between-condition variation cannot reasonably be ignored, whether in single-organism RNA-Seq or in Meta-RNA-Seq experiments, and further find that commonly-used RNA-Seq analysis tools, as described in the literature, do not enforce the constraint that the sum of relative expression levels must be one, and thus report expression levels that are systematically distorted. These two factors lead to misleading inferences if not properly accommodated. As it is usually only the biological between-condition and within-condition differences that are of interest, we developed ALDEx, an ANOVA-like differential expression procedure, to identify genes with greater between- to within-condition differences. We show that the presence of differential expression and the magnitude of these comparative differences can be reasonably estimated with even very small sample sizes.
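    A brief sketch of the compositional point made above, not the ALDEx code itself: read counts carry only relative information, so ALDEx-style analyses draw Monte Carlo instances from a Dirichlet posterior over the proportions and apply a centered log-ratio (CLR) transform before comparing conditions. The counts below are assumed toy data.

    ```python
    # Dirichlet Monte Carlo instances plus a CLR transform for count data.
    import numpy as np

    rng = np.random.default_rng(5)
    counts = rng.poisson([100, 50, 25, 10], size=(6, 4)) + 1  # toy gene counts

    def clr(x):
        # Centered log-ratio: removes the arbitrary per-sample scale,
        # enforcing the unit-sum constraint discussed above.
        logx = np.log(x)
        return logx - logx.mean(axis=1, keepdims=True)

    # One Dirichlet instance per sample propagates the sampling
    # uncertainty of the counts, as in the ALDEx approach.
    instances = np.array([rng.dirichlet(row) for row in counts])
    print(clr(instances).round(2))
    ```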

  1. Application of nuclear models

    International Nuclear Information System (INIS)

    The development of extensive experimental nuclear data base over the past three decades has been accompanied by parallel advancement of nuclear theory and models used to describe and interpret the measurements. This theoretical capability is important because of many nuclear data requirements that are still difficult, impractical, or even impossible to meet with present experimental techniques. Examples of such data needs are neutron cross sections for unstable fission products, which are required for neutron absorption corrections in reactor calculations; cross sections for transactinide nuclei that control production of long-lived nuclear wastes; and the extensive dosimetry, activation, and neutronic data requirements to 40 MeV that must accompany development of the Fusion Materials Irradiation Test (FMIT) facility. In recent years systematic improvements have been made in the nuclear models and codes used in data evaluation and, most importantly, in the methods used to derive physically based parameters for model calculations. The newly issued ENDF/B-V evaluated data library relies in many cases on nuclear reaction theory based on compound-nucleus Hauser-Feshbach, preequilibrium and direct reaction mechanisms as well as spherical and deformed optical-model theories. The development and applications of nuclear models for data evaluation are discussed with emphasis on the 1 to 40 MeV neutron energy range

  2. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    and the development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums, heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil...... sands processing. The fertilizer granulation model considers the dynamics of MAP-DAP (mono and diammonium phosphates) production within an industrial granulator, that involves complex crystallisation, chemical reaction and particle growth, captured through population balances. A final example considers...

  3. ANOVA parameters influence in LCF experimental data and simulation results

    Directory of Open Access Journals (Sweden)

    Vercelli A.

    2010-06-01

    The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method provides a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The definition of the constitutive model plays an important role in the effectiveness of the numerical simulation [1, 2], as shown, for example, in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. Component life estimation is the subsequent phase and needs complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper, the main topic of the research activity is to investigate whether the parameters that prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in capturing all the phenomena actually influencing the life of the component. To this end, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be simple and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity was developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests and low cycle fatigue (LCF) tests. The numerical structural FEM simulations were run on a commercial nonlinear solver, ABAQUS® 6.8, and replicated the experimental tests. The stress, strain, thermal results from the thermo

  4. Concrete fracture models and applications

    CERN Document Server

    Kumar, Shailendra

    2011-01-01

    Concrete-Fracture Models and Applications provides a basic introduction to nonlinear concrete fracture models. Readers will find a state-of-the-art review on various aspects of the material behavior and development of different concrete fracture models.

  5. Mathematical modeling with multidisciplinary applications

    CERN Document Server

    Yang, Xin-She

    2013-01-01

    Features mathematical modeling techniques and real-world processes with applications in diverse fields Mathematical Modeling with Multidisciplinary Applications details the interdisciplinary nature of mathematical modeling and numerical algorithms. The book combines a variety of applications from diverse fields to illustrate how the methods can be used to model physical processes, design new products, find solutions to challenging problems, and increase competitiveness in international markets. Written by leading scholars and international experts in the field, the

  6. A Robust Design Applicability Model

    DEFF Research Database (Denmark)

    Ebro, Martin; Lars, Krogstie; Howard, Thomas J.

    2015-01-01

    This paper introduces a model for assessing the applicability of Robust Design (RD) in a project or organisation. The intention of the Robust Design Applicability Model (RDAM) is to provide support for decisions by engineering management considering the relevant level of RD activities. The applic...

  7. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
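    A small randomization version of the ANOVA F-test, mirroring the kind of classroom activity described above (toy data): the group labels are shuffled many times and the observed F statistic is compared with its randomization distribution.

    ```python
    # Randomization (permutation) p-value for a one-way ANOVA F statistic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    groups = [rng.normal(m, 1.0, 15) for m in (0.0, 0.4, 0.9)]
    values = np.concatenate(groups)
    sizes = [len(g) for g in groups]

    F_obs, _ = stats.f_oneway(*groups)
    count = 0
    for _ in range(10000):
        perm = rng.permutation(values)                  # shuffle labels
        parts = np.split(perm, np.cumsum(sizes)[:-1])   # re-form the groups
        F_perm, _ = stats.f_oneway(*parts)
        count += F_perm >= F_obs
    print(f"randomization p-value: {count / 10000:.4f}")
    ```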

  8. Finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    Features step-by-step examples based on actual data and connects fundamental mathematical modeling skills and decision making concepts to everyday applicability Featuring key linear programming, matrix, and probability concepts, Finite Mathematics: Models and Applications emphasizes cross-disciplinary applications that relate mathematics to everyday life. The book provides a unique combination of practical mathematical applications to illustrate the wide use of mathematics in fields ranging from business, economics, finance, management, operations research, and the life and social sciences.

  9. Modelling Foundations and Applications

    DEFF Research Database (Denmark)

    and selected from 81 submissions. Papers on all aspects of MDE were received, including topics such as architectural modelling and product lines, code generation, domain-specic modeling, metamodeling, model analysis and verication, model management, model transformation and simulation. The breadth of topics...

  10. MorePower 6.0 for ANOVA with relational confidence intervals and Bayesian analysis.

    Science.gov (United States)

    Campbell, Jamie I D; Thompson, Valerie A

    2012-12-01

    MorePower 6.0 is a flexible freeware statistical calculator that computes sample size, effect size, and power statistics for factorial ANOVA designs. It also calculates relational confidence intervals for ANOVA effects based on formulas from Jarmasz and Hollands (Canadian Journal of Experimental Psychology 63:124-138, 2009), as well as Bayesian posterior probabilities for the null and alternative hypotheses based on formulas in Masson (Behavior Research Methods 43:679-690, 2011). The program is unique in affording direct comparison of these three approaches to the interpretation of ANOVA tests. Its high numerical precision and ability to work with complex ANOVA designs could facilitate researchers' attention to issues of statistical power, Bayesian analysis, and the use of confidence intervals for data interpretation. MorePower 6.0 is available at https://wiki.usask.ca/pages/viewpageattachments.action?pageId=420413544 .
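    For comparison only, and not MorePower itself: a similar ANOVA sample-size computation can be sketched in Python with statsmodels, here for an assumed medium effect (Cohen's f = 0.25) in a four-group design.

    ```python
    # Total sample size for a one-way ANOVA at 80% power via statsmodels.
    from statsmodels.stats.power import FTestAnovaPower

    solver = FTestAnovaPower()
    n_total = solver.solve_power(effect_size=0.25, k_groups=4,
                                 alpha=0.05, power=0.80)
    print(f"total N required: {n_total:.1f}")
    ```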

  11. Engine Modelling for Control Applications

    DEFF Research Database (Denmark)

    Hendricks, Elbert

    1997-01-01

    engine data for this purpose. It is especially well suited to embedded model applications in engine controllers, such as nonlinear observer based air/fuel ratio and advanced idle speed control. After a brief review of this model, it will be compared with other similar models which can be found......In earlier work published by the author and co-authors, a dynamic engine model called a Mean Value Engine Model (MVEM) was developed. This model is physically based and is intended mainly for control applications. In its newer form, it is easy to fit to many different engines and requires little...

  12. Multilevel Models Applications Using SAS

    CERN Document Server

    Wang, Jichuan; Fisher, James

    2011-01-01

    This book covers a broad range of topics about multilevel modeling. The goal is to help readers to understand the basic concepts, theoretical frameworks, and application methods of multilevel modeling. It is at a level also accessible to non-mathematicians, focusing on the methods and applications of various multilevel models and using the widely used statistical software SAS®. Examples are drawn from analysis of real-world research data.

  13. Models for Dynamic Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Morales Rodriguez, Ricardo; Heitzig, Martina;

    2011-01-01

    This chapter covers aspects of the dynamic modelling and simulation of several complex operations that include a controlled blending tank, a direct methanol fuel cell that incorporates a multiscale model, a fluidised bed reactor, a standard chemical reactor and finally a polymerisation reactor. T...

  14. Modelling Gesture Based Ubiquitous Applications

    CERN Document Server

    Zacharia, Kurien; Varghese, Surekha Mariam

    2011-01-01

    A cost-effective, gesture-based modelling technique called Virtual Interactive Prototyping (VIP) is described in this paper. Prototyping is implemented by projecting a virtual model of the equipment to be prototyped, and users can interact with the virtual model like the original working equipment. Image and sound processing techniques are used for capturing and tracking the user's interactions with the model. VIP is a flexible and interactive prototyping method that has many applications in ubiquitous computing environments. Various commercial and socio-economic applications of VIP, as well as its extension to interactive advertising, are also discussed.

  15. Association models for petroleum applications

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios

    2013-01-01

    Thermodynamics plays an important role in many applications in the petroleum industry, both upstream and downstream, ranging from flow assurance, (enhanced) oil recovery and control of chemicals to meet production and environmental regulations. There are many different applications in the oil & gas...... thermodynamic models like cubic equations of state have been the dominating tools in the petroleum industry, the focus of this review is on the association models. Association models are defined as the models of SAFT/CPA family (and others) which incorporate hydrogen bonding and other complex interactions....... Such association models have been, especially over the last 20 years, proved to be very successful in predicting many thermodynamic properties in the oil & gas industry. They have not so far replaced cubic equations of state, but the results obtained by using these models are very impressive in many cases, e...

  16. Geophysical Applications of Vegetation Modeling

    OpenAIRE

    J. O. Kaplan

    2001-01-01

    This thesis describes the development and selected applications of a global vegetation model, BIOME4. The model is applied to problems in high-latitude vegetation distribution and climate, trace gas production, and isotope biogeochemistry. It demonstrates how a modeling approach, based on principles of plant physiology and ecology, can be applied to interdisciplinary problems that cannot be adequately addressed by direct observations or experiments. The work is relevant to understanding the p...

  17. Sznajd model and its applications

    CERN Document Server

    Sznajd-Weron, K

    2005-01-01

    In 2000 we proposed a sociophysics model of opinion formation based on the trade union maxim "United we Stand, Divided we Fall" (USDF), which later, due to Dietrich Stauffer, became known as the Sznajd model (SM). The main difference between the SM and voter or Ising-type models is that information flows outward. In this paper we review the modifications and applications of the SM that have been proposed in the literature.

  18. Applications of the Statistical Data Analysis Software SPSS (IV) -- General Factorial ANOVA (GLM)

    Institute of Scientific and Technical Information of China (English)

    张苏江; 陈庆波

    2003-01-01

    The general factorial ANOVA procedure is a submodule of the General Linear Models (GLM) module, used to analyse the influence of several factors (variables) on a single response variable. It covers the standard analysis of variance designs, such as one-way ANOVA for completely randomized designs, two-way ANOVA for randomized block designs, three-way ANOVA for Latin square designs, factorial analysis, cross-over designs, orthogonal designs, ANOVA for split-plot designs, and analysis of covariance

  19. Parametric study of the biopotential equation for breast tumour identification using ANOVA and Taguchi method.

    Science.gov (United States)

    Ng, Eddie Y K; Ng, W Kee

    2006-03-01

    The extensive literature has shown a significant trend of progressive electrical changes according to the proliferative characteristics of breast epithelial cells. Physiologists have further postulated that malignant transformation results from sustained depolarization and a failure of the cell to repolarize after cell division, making the area where cancer develops relatively depolarized compared with its non-dividing or resting counterparts. In this paper, we present a new approach, the Biofield Diagnostic System (BDS), which might have the potential to augment the process of diagnosing breast cancer. The technique is based on the efficacy of analysing skin surface electrical potentials for the differential diagnosis of breast abnormalities. We developed a female breast model, close to the actual anatomy, by considering the breast as a hemisphere in the supine position with various layers of unequal thickness. Isotropic homogeneous conductivity was assigned to each of these compartments, and the volume conductor problem was solved using the finite element method to determine the potential distribution developed due to a dipole source. Furthermore, four important parameters were identified and analysis of variance (ANOVA, Yates' method) was performed using a factorial design in the n = 4 parameters. The effect and importance of these parameters were analysed. The Taguchi method was further used to optimise the parameters in order to ensure that the signal from the tumour is maximal compared with the noise from other factors. The Taguchi method proved that the probes' source strength, tumour size and tumour location have a great effect on the surface potential field. For best results on the breast surface, while having the biggest possible tumour size, low amplitudes of current should be applied nearest to the breast surface. PMID:16929931


  1. Survival analysis models and applications

    CERN Document Server

    Liu, Xian

    2012-01-01

    Survival analysis concerns sequential occurrences of events governed by probabilistic laws.  Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading onto some of the most advanced topics in survival analysis.Assumes only a minimal knowledge of SAS whilst enablin

  2. Cautionary Note on Reporting Eta-Squared Values from Multifactor ANOVA Designs

    Science.gov (United States)

    Pierce, Charles A.; Block, Richard A.; Aguinis, Herman

    2004-01-01

    The authors provide a cautionary note on reporting accurate eta-squared values from multifactor analysis of variance (ANOVA) designs. They reinforce the distinction between classical and partial eta-squared as measures of strength of association. They provide examples from articles published in premier psychology journals in which the authors…
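    The distinction can be made concrete with a toy two-way design: classical eta-squared divides an effect's sum of squares by the total sum of squares, while partial eta-squared divides it by the effect plus error sums of squares only, so partial values are never smaller. The data below are simulated, not drawn from the reviewed articles.

    ```python
    # Classical vs partial eta-squared from a two-way ANOVA table.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(7)
    df = pd.DataFrame({
        "a": np.repeat(["a1", "a2"], 20),
        "b": np.tile(np.repeat(["b1", "b2"], 10), 2),
    })
    df["y"] = (rng.normal(0, 1, 40) + (df["a"] == "a2") * 1.0
               + (df["b"] == "b2") * 0.5)

    table = sm.stats.anova_lm(ols("y ~ a * b", data=df).fit(), typ=2)
    ss_total = table["sum_sq"].sum()
    ss_error = table.loc["Residual", "sum_sq"]
    for eff in ["a", "b", "a:b"]:
        ss = table.loc[eff, "sum_sq"]
        print(f"{eff}: eta2 = {ss / ss_total:.3f}, "
              f"partial eta2 = {ss / (ss + ss_error):.3f}")
    ```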

  3. Use of "t"-Test and ANOVA in Career-Technical Education Research

    Science.gov (United States)

    Rojewski, Jay W.; Lee, In Heok; Gemici, Sinan

    2012-01-01

    Use of t-tests and analysis of variance (ANOVA) procedures in published research from three scholarly journals in career and technical education (CTE) during a recent 5-year period was examined. Information on post hoc analyses, reporting of effect size, alpha adjustments to account for multiple tests, power, and examination of assumptions…

  4. Behavior Modeling -- Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes revised selected papers from the six International Workshops on Behavior Modelling - Foundations and Applications, BM-FA, which took place annually between 2009 and 2014. The 9 papers presented in this volume were carefully reviewed and selected from a total of 58 papers...

  5. Applications of Continuum Shell Model

    OpenAIRE

    Volya, Alexander

    2006-01-01

    The nuclear many-body problem at the limits of stability is considered in the framework of the Continuum Shell Model that allows a unified description of intrinsic structure and reactions. Technical details behind the method are highlighted and practical applications combining the reaction and structure pictures are presented.

  6. Multi-level Clustering of Wear Particles Based on ANOVA-KW Test

    Institute of Scientific and Technical Information of China (English)

    黄成; 王仲君; 潘岚; 吕植勇

    2010-01-01

    Morphological parameter data from 46 abnormal wear-particle samples were analysed, and a multi-level clustering scheme, together with its concrete implementation steps, was proposed. ANOVA-KW (Kruskal-Wallis) tests on the particle morphological parameters were then used to identify the parameter variables that influence particle classification. Multi-level clustering on the selected variables achieved a recognition rate of 95.6% for spherical and cutting particles, and 82.6% for fatigue, laminar and flake particles.

  7. APPLICATION OF TAGUCHI AND ANOVA IN OPTIMIZATION OF PROCESS PARAMETERS OF LAPPING OPERATION FOR CAST IRON

    Directory of Open Access Journals (Sweden)

    P.R. Parate

    2013-06-01

    Lapping appears like a miraculous process, because it can produce surfaces that are perfectly flat, perfectly round, perfectly smooth, perfectly sharp, or perfectly accurate. Under the correct circumstances, it can impart or improve precise geometry (flatness, roundness, etc.), improve surface finish, improve surface quality, achieve high dimensional accuracy (length, diameter, etc.), improve angular accuracy (worm gears, couplings, etc.), improve fit, and, above all, sharpen tools. This paper presents research on calculating the material removal rate for a machined component in the lapping process. A cast iron sample with an outer diameter of 50 mm and an inner diameter of 45 mm was tested on a single-plate tabletop lapping machine. Experiments based on design of experiments were conducted by varying the lapping load, lapping time, paste concentration and lapping fluid, and by using different types of abrasives. The Taguchi statistical method has been used in this work. Optimum machining parameters for the material removal rate are estimated, verified against experimental results, and found to be in good agreement. The confirmation test exhibits a high material removal rate when using Al2O3 abrasive particles together with oil as a carrier fluid under high load. Furthermore, the material removal rate increases with an increase in lapping load and time.
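    A hedged illustration of the Taguchi signal-to-noise step used in such studies (the MRR values below are made up, not the paper's measurements): material removal rate is a larger-the-better response, so S/N = -10 log10(mean(1/y²)).

    ```python
    # Larger-the-better Taguchi S/N ratios for replicated runs.
    import numpy as np

    # 4 runs x 3 replicates of material removal rate (assumed units).
    mrr = np.array([[4.1, 4.3, 3.9],
                    [5.2, 5.0, 5.4],
                    [3.2, 3.4, 3.1],
                    [6.0, 5.8, 6.1]])

    # S/N = -10 log10( mean(1 / y^2) ); higher is better.
    sn = -10 * np.log10(np.mean(1.0 / mrr**2, axis=1))
    for i, s in enumerate(sn, 1):
        print(f"run {i}: S/N = {s:.2f} dB")
    print(f"best run: {np.argmax(sn) + 1}")
    ```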

  8. Control performance assessment based on ANOVA-like variance decomposition for nonlinear systems

    Institute of Scientific and Technical Information of China (English)

    王志国; 刘飞

    2014-01-01

    In practice, many industrial control loops inevitably include nonlinearities, so estimates of the control performance may be biased. First, the existence of the minimum variance performance lower bound (MVPLB) for a class of nonlinear systems is analyzed and its relation to the disturbance terms of the system is derived. Then, the closed-loop system model is identified using an orthogonal least squares algorithm. Based on the identified model, the contribution to the output variance from the most recent disturbance terms is estimated according to an ANOVA-like decomposition formula, so that the control performance of the nonlinear system is obtained. Finally, the proposed method is compared with the traditional method through simulation examples; the results show that the proposed method is feasible and effective.

  9. Non-parametric three-way mixed ANOVA with aligned rank tests.

    Science.gov (United States)

    Oliver-Rodríguez, Juan C; Wang, X T

    2015-02-01

    Research problems that require a non-parametric analysis of multifactor designs with repeated measures arise in the behavioural sciences. There is, however, a lack of available procedures in commonly used statistical packages. In the present study, a generalization of the aligned rank test for the two-way interaction is proposed for the analysis of the typical sources of variation in a three-way analysis of variance (ANOVA) with repeated measures. It can be implemented in the usual statistical packages. Its statistical properties are tested by using simulation methods with two sample sizes (n = 30 and n = 10) and three distributions (normal, exponential and double exponential). Results indicate substantial increases in power for non-normal distributions in comparison with the usual parametric tests. Similar levels of Type I error for both parametric and aligned rank ANOVA were obtained with non-normal distributions and large sample sizes. Degrees-of-freedom adjustments for Type I error control in small samples are proposed. The procedure is applied to a case study with 30 participants per group where it detects gender differences in linguistic abilities in blind children not shown previously by other methods. PMID:24303958
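    A bare-bones aligned rank test for a two-way interaction on toy between-subjects data (the article's three-way repeated-measures generalization is more involved): the observations are aligned by removing the estimated main effects, ranked, and a standard ANOVA is run on the ranks.

    ```python
    # Aligned rank test for the A x B interaction.
    import numpy as np
    import pandas as pd
    from scipy import stats
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(8)
    df = pd.DataFrame({
        "a": np.repeat(["a1", "a2"], 30),
        "b": np.tile(np.repeat(["b1", "b2"], 15), 2),
    })
    df["y"] = rng.exponential(1.0, 60) + (
        (df["a"] == "a2") & (df["b"] == "b2")) * 1.5   # interaction effect

    # Align: subtract the estimated main effects so that only the
    # interaction (plus error) remains, then rank the aligned values.
    grand = df["y"].mean()
    a_eff = df.groupby("a")["y"].transform("mean") - grand
    b_eff = df.groupby("b")["y"].transform("mean") - grand
    df["r"] = stats.rankdata(df["y"] - a_eff - b_eff)

    table = sm.stats.anova_lm(ols("r ~ a * b", data=df).fit(), typ=2)
    print(table.loc["a:b"])   # test the interaction on the ranks
    ```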

  10. System identification application using Hammerstein model

    Indian Academy of Sciences (India)

    SABAN OZER; HASAN ZORLU; SELCUK METE

    2016-06-01

    Generally, a memoryless polynomial nonlinear model for the nonlinear part and a finite impulse response (FIR) or infinite impulse response model for the linear part are preferred in Hammerstein models in the literature. In this paper, system identification applications of a Hammerstein model that is a cascade of a nonlinear second-order Volterra model and a linear FIR model are studied. A recursive least squares algorithm is used to identify the parameters of the proposed Hammerstein model. Furthermore, the results are compared to evaluate the success of the proposed Hammerstein model against different types of models.

  11. Analysis of Aluminium Nano Composites using Anova in CNC Machining Process

    Directory of Open Access Journals (Sweden)

    Maria Joe Christopher Poonthota Irudaya Raj

    2013-08-01

    The objective of this work is to reinforce an aluminium alloy with CNT by the stir casting method. Different weight percentages of CNT were added to aluminium to make composites, and their physical and thermal properties were investigated using tests such as tensile, hardness, microstructure and XRD. The improvement of the mechanical, physical and thermal properties in both cases has been compared with pure aluminium. The Taguchi orthogonal-array experimental technique is used to optimize the machining parameters. The predicted surface roughness was estimated using the S/N ratio and compared with actual values. ANOVA analysis is used to find the significant factors affecting the machining process in order to improve the surface characteristics of the Al material.

  12. Geophysical data integration, stochastic simulation and significance analysis of groundwater responses using ANOVA in the Chicot Aquifer system, Louisiana, USA

    Science.gov (United States)

    Rahman, A.; Tsai, F.T.-C.; White, C.D.; Carlson, D.A.; Willson, C.S.

    2008-01-01

    Data integration is challenging where there are different levels of support between primary and secondary data that need to be correlated in various ways. A geostatistical method is described, which integrates the hydraulic conductivity (K) measurements and electrical resistivity data to better estimate the K distribution in the Upper Chicot Aquifer of southwestern Louisiana, USA. The K measurements were obtained from pumping tests and represent the primary (hard) data. Borehole electrical resistivity data from electrical logs were regarded as the secondary (soft) data, and were used to infer K values through Archie's law and the Kozeny-Carman equation. A pseudo cross-semivariogram was developed to cope with the resistivity data non-collocation. Uncertainties in the auto-semivariograms and pseudo cross-semivariogram were quantified. The groundwater flow model responses by the regionalized and coregionalized models of K were compared using analysis of variance (ANOVA). The results indicate that non-collocated secondary data may improve estimates of K and affect groundwater flow responses of practical interest, including specific capacity and drawdown. © Springer-Verlag 2007.

  13. A Unified Architecture Model of Web Applications

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    With the increasing popularity, scale and complexity of web applications, their design and development are becoming more and more difficult. However, the current state of web application design and development is characterized by anarchy and ad hoc methodologies. One cause of this chaotic situation is that different researchers and designers have different understandings of web applications. In this paper, based on an explicit understanding of web applications, we present a unified architecture model of web applications, the four-view model, which addresses the analysis and design issues of web applications from four perspectives: the logical view, data view, navigation view and presentation view, each addressing a specific set of concerns. The purpose of the model is to provide a clear picture of web applications, to alleviate the chaotic situation and to facilitate their analysis, design and implementation.

  14. Model validation, science and application

    NARCIS (Netherlands)

    Builtjes, P.J.H.; Flossmann, A.

    1998-01-01

    In recent years there has been growing interest in establishing a proper validation of atmospheric chemistry-transport (ATC) models. Model validation deals with the comparison of model results with experimental data, and in this way addresses both model uncertainty and uncertainty in, and adequac

  15. Monte Carlo evaluation of the ANOVA's F and Kruskal-Wallis tests under binomial distribution

    Directory of Open Access Journals (Sweden)

    Eric Batista Ferreira

    2012-12-01

    To verify the equality of more than two levels of a factor of interest in experiments conducted under a completely randomized design (CRD), it is common to use the ANOVA F-test, which is considered the most powerful test for this purpose. However, the reliability of its results depends on the following assumptions: additivity of effects, independence, homoscedasticity and normality of the errors. The nonparametric Kruskal-Wallis test requires more moderate assumptions and is therefore an alternative when the assumptions of the F-test are not met. However, the stronger the assumptions of a test, the better its performance, and when the fundamental assumptions are met the F-test is the best option. In this work, the normality of the errors is violated: binomial response variables are simulated in order to compare the performance of the F and Kruskal-Wallis tests when one of the analysis of variance assumptions is not satisfied. Through Monte Carlo simulation, 3,150,000 experiments were simulated to evaluate the type I error rate and the power of the tests. In most situations, the power of the F-test was superior to that of the Kruskal-Wallis test, and the F-test also controlled the type I error rates.
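    A much-reduced version of this kind of simulation (far fewer replicates, assumed settings): generate binomial responses for three equal groups and track the empirical type I error of both tests.

    ```python
    # Empirical type I error of the F and Kruskal-Wallis tests under
    # binomial (non-normal) errors, equal groups, by Monte Carlo.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    reps, n, n_trials, p_success, alpha = 2000, 10, 5, 0.3, 0.05
    rej_F = rej_KW = 0
    for _ in range(reps):
        g = [rng.binomial(n_trials, p_success, n) for _ in range(3)]
        rej_F += stats.f_oneway(*g)[1] < alpha
        rej_KW += stats.kruskal(*g)[1] < alpha
    print(f"type I error, F test:         {rej_F / reps:.3f}")
    print(f"type I error, Kruskal-Wallis: {rej_KW / reps:.3f}")
    ```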

  16. Value added analysis and its distribution: a study on BOVESPA-listed banks using ANOVA

    Directory of Open Access Journals (Sweden)

    Leonardo José Seixas Pinto

    2013-05-01

    The value added generated by the financial institutions listed on BOVESPA, and its distribution in the years 2007 to 2011, are the subject of this research, which shows how banks divided their wealth among people, government, third parties and shareholders. Using ANOVA tests of means for the companies in the sample, the study concludes that: (a) the average value added of foreign banks differs from that of national banks; (b) the equity remuneration policy of foreign banks differs from that of national banks; (c) the policy of distributing value added to employees at the foreign banks Santander and HSBC differs from the other banks; (d) taxes paid to the government have equal means, with the exception of Santander; (e) curiously, Banco Itau and Banco do Brasil are equal in all analyses of the distribution of value added, even though one is private and the other public. This points to an unequal wealth-distribution policy between foreign banks and the national public and private banks.

  17. Identification of bacteriophage virion proteins by the ANOVA feature selection and analysis.

    Science.gov (United States)

    Ding, Hui; Feng, Peng-Mian; Chen, Wei; Lin, Hao

    2014-08-01

    The bacteriophage virion proteins play extremely important roles in the fate of host bacterial cells. Accurate identification of bacteriophage virion proteins is very important for understanding their functions and clarifying the lysis mechanism of bacterial cells. In this study, a new sequence-based method was developed to identify phage virion proteins. In the new method, the protein sequences were initially formulated by the g-gap dipeptide compositions. Subsequently, the analysis of variance (ANOVA) with incremental feature selection (IFS) was used to search for the optimal feature set. It was observed that, in jackknife cross-validation, the optimal feature set including 160 optimized features can produce the maximum accuracy of 85.02%. By performing feature analysis, we found that the correlation between two amino acids with one gap was more important than other correlations for phage virion protein prediction and that some of the 1-gap dipeptides were important and mainly contributed to the virion protein prediction. This analysis will provide novel insights into the function of phage virion proteins. On the basis of the proposed method, an online web-server, PVPred, was established and can be freely accessed from the website (http://lin.uestc.edu.cn/server/PVPred). We believe that the PVPred will become a powerful tool to study phage virion proteins and to guide the related experimental validations.
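    A schematic ANOVA-plus-incremental-feature-selection loop on random stand-in data (not the PVPred features or g-gap encodings): features are ranked by their ANOVA F-score and the feature set is grown until cross-validated accuracy stops improving.

    ```python
    # ANOVA F-score ranking with incremental feature selection (IFS).
    import numpy as np
    from sklearn.feature_selection import f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(10)
    X = rng.normal(size=(120, 50))          # 120 "proteins" x 50 features
    y = rng.integers(0, 2, 120)
    X[y == 1, :5] += 1.0                    # make the first 5 features useful

    order = np.argsort(f_classif(X, y)[0])[::-1]   # rank by ANOVA F-score
    best_k, best_acc = 0, 0.0
    for k in range(1, 21):
        acc = cross_val_score(SVC(), X[:, order[:k]], y, cv=5).mean()
        if acc > best_acc:
            best_k, best_acc = k, acc
    print(f"optimal feature count: {best_k} (CV accuracy {best_acc:.2f})")
    ```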

  18. Application of numerical models and codes

    OpenAIRE

    Vyzikas, Thomas

    2014-01-01

    This report indicates the importance of numerical modelling in the modelling process, gradually builds the essential background theory in the fields of fluid mechanics, wave mechanics and numerical modelling, discusses a list of commonly used software and finally recommends which models are more suitable for different engineering applications in a marine renewable energy project.

  19. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  20. Business model concept and application

    OpenAIRE

    Ogonowska, Kinga

    2010-01-01

    In this thesis I would like to clarify the major approaches to business models, define business model innovation, identify the types of business models and innovations applied in the companies under research, indicate the strengths and weaknesses of the business models studied, and determine their innovative value. The sources of data include secondary data from literature review, reports and corporate web pages, and primary data from interviews with employees of the Polish companies under ...

  1. Applications and extensions of degradation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, F.; Subudhi, M.; Samanta, P.K. (Brookhaven National Lab., Upton, NY (United States)); Vesely, W.E. (Science Applications International Corp., Columbus, OH (United States))

    1991-01-01

    Component degradation modeling, being developed to understand the aging process, can have many applications with potential advantages. Previous work has focused on developing the basic concepts and mathematical development of a simple degradation model. Using this simple model, the times of degradation and failure occurrences were analyzed for standby components to detect indications of aging and to infer the effectiveness of maintenance in preventing age-related degradations from transforming into failures. Degradation modeling approaches can have broader applications in aging studies, and in this paper we discuss some of the extensions and applications of degradation modeling. The applications and extensions of degradation modeling approaches presented in this paper cover two aspects: (1) application to a continuously operating component, and (2) extension of the approach to analyze the degradation-failure rate relationship. The application of the modeling approach to a continuously operating component (namely, air compressors) shows the usefulness of this approach in studying aging effects and the role of maintenance in this type of component. In this case, aging effects in air compressors are demonstrated by the increase in both the degradation and failure rates, and the faster increase in the failure rate compared with the degradation rate shows the ineffectiveness of the existing maintenance practices. The degradation-failure rate relationship was analyzed using data from residual heat removal system pumps. A simple linear model with a time lag between these two parameters was studied. The application in this case showed a time lag of 2 years for degradations to affect failure occurrences. 2 refs.


  3. PEM Fuel Cells - Fundamentals, Modeling and Applications

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Part I: Fundamentals. Chapter 1: Introduction. Chapter 2: PEM fuel cell thermodynamics, electrochemistry, and performance. Chapter 3: PEM fuel cell components. Chapter 4: PEM fuel cell failure modes. Part II: Modeling and Simulation. Chapter 5: PEM fuel cell models based on semi-empirical simulation. Chapter 6: PEM fuel cell models based on computational fluid dynamics. Part III: Applications. Chapter 7: PEM fuel cell system design and applications.


  5. Investigation of flood pattern using ANOVA statistic and remote sensing in Malaysia

    International Nuclear Information System (INIS)

    Flood is an overflow or inundation from a river or other body of water that causes or threatens damage. In Malaysia there is no formal categorization of floods, but they are often broadly categorized as monsoonal, flash or tidal floods. This project focuses on floods caused by the monsoon. Over the last few years a number of extreme floods have occurred, with great economic impact; the extreme weather pattern is the main contributing factor. In 2010, several districts in the state of Kedah and neighbouring states were hit by floods caused by tremendous weather patterns. During this event the volume of rainfall was not uniform across regions, and flooding occurred where the amount of water increased rapidly and began to overflow. This motivated the project, and the data were analysed for August to October 2010. The investigation sought possible correlation patterns among the parameters related to the flood. ANOVA statistics were used to quantify the contribution of each parameter, regression and correlation were used to measure the strength of the association among flood-related parameters, and a remote sensing image was used to validate the accuracy of the calculations. According to the results, the prediction is successful, as the correlation coefficient for the flood event is 0.912, confirmed by a Terra-SAR image on 4th November 2010. The rate of change in the weather pattern has a direct impact on the flood.

  6. Photocell modelling for thermophotovoltaic applications

    Energy Technology Data Exchange (ETDEWEB)

    Mayor, J.-C.; Durisch, W.; Grob, B.; Panitz, J.-C. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    Goal of the modelling described here is the extrapolation of the performance characteristics of solar photocells to TPV working conditions. The model accounts for higher flux of radiation and for the higher temperatures reached in TPV converters. (author) 4 figs., 1 tab., 2 refs.

  7. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  8. Dual Security Testing Model for Web Applications

    Directory of Open Access Journals (Sweden)

    Singh Garima

    2016-02-01

    Full Text Available In recent years, web applications have evolved from small websites into large multi-tiered applications. The quality of a web application depends on the richness of its content, well-structured navigation and, most importantly, its security. Web application testing is a new field of research aimed at ensuring the consistency and quality of web applications. Different approaches have emerged over the last ten years: models have been developed for testing web applications, but only a few focus on content testing, a few on navigation testing, and very few on security testing. There is a need to test the content, navigation and security of an application in one go. The objective of this paper is to propose a Dual Security Testing Model to test the security of web applications using a UML modeling technique that includes a web socket interface. In this paper we describe how the security testing model is implemented using activity diagrams and an activity graph, and how test cases are generated from these.

  9. Dynamic programming models and applications

    CERN Document Server

    Denardo, Eric V

    2003-01-01

    Introduction to sequential decision processes covers use of dynamic programming in studying models of resource allocation, methods for approximating solutions of control problems in continuous time, production control, more. 1982 edition.

  10. A model for assessment of telemedicine applications

    DEFF Research Database (Denmark)

    Kidholm, Kristian; Ekeland, Anne Granstrøm; Jensen, Lise Kvistgaard;

    2012-01-01

    Telemedicine applications could potentially solve many of the challenges faced by the healthcare sectors in Europe. However, a framework for assessment of these technologies is needed by decision makers to assist them in choosing the most efficient and cost-effective technologies. Therefore, in 2009 the European Commission initiated the development of a framework for assessing telemedicine applications, based on the users' need for information for decision making. This article presents the Model for ASsessment of Telemedicine applications (MAST) developed in this study.

  11. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  12. Registry of EPA Applications, Models, and Databases

    Data.gov (United States)

    U.S. Environmental Protection Agency — READ is EPA's authoritative source for information about Agency information resources, including applications/systems, datasets and models. READ is one component of...

  13. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

    Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, "Measurement Error: Models, Methods, and Applications" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regression.

  14. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics techniques and describes the technologies with an emphasis on how they work and their key benefits.

  15. An Application on Multinomial Logistic Regression Model

    OpenAIRE

    Abdalla M El-Habil

    2012-01-01

    This study aims to identify an application of the Multinomial Logistic Regression model, which is one of the important methods for categorical data analysis. This model deals with one nominal/ordinal response variable that has more than two categories. It has been applied in data analysis in many areas, for example health, social, behavioral, and educational. To identify the model in a practical way, we used real data on physical violence against children...

  16. Distance Education Instructional Model Applications.

    Science.gov (United States)

    Jackman, Diane H.; Swan, Michael K.

    1995-01-01

    A survey of graduate students involved in distance education on North Dakota State University's Interactive Video Network included 80 on campus and 13 off. The instructional models rated most effective were role playing, simulation, jurisprudential (Socratic method), memorization, synectics, and inquiry. Direct instruction was rated least…

  17. Formal models, languages and applications

    CERN Document Server

    Rangarajan, K; Mukund, M

    2006-01-01

    A collection of articles by leading experts in theoretical computer science, this volume commemorates the 75th birthday of Professor Rani Siromoney, one of the pioneers in the field in India. The articles span the vast range of areas that Professor Siromoney has worked in or influenced, including grammar systems, picture languages and new models of computation. Sample Chapter(s). Chapter 1: Finite Array Automata and Regular Array Grammars (150 KB). Contents: Finite Array Automata and Regular Array Grammars (A Atanasiu et al.); Hexagonal Contextual Array P Systems (K S Dersanambika et al.); Con

  18. Vacation queueing models theory and applications

    CERN Document Server

    Tian, Naishuo

    2006-01-01

    A classical queueing model consists of three parts - arrival process, service process, and queue discipline. However, a vacation queueing model has an additional part - the vacation process, which is governed by a vacation policy - that can be characterized by three aspects: 1) the vacation start-up rule; 2) the vacation termination rule; and 3) the vacation duration distribution. Hence, vacation queueing models are an extension of classical queueing theory. Vacation Queueing Models: Theory and Applications discusses systematically and in detail the many variations of vacation policy. Allowing servers to take vacations makes queueing models more realistic and flexible in studying real-world waiting line systems. Integrated in the book's discussion are a variety of typical vacation model applications that include call centers with multi-task employees, customized manufacturing, telecommunication networks, maintenance activities, etc. Finally, contents are presented in a "theorem and proof" format and it is invaluabl...

  19. Thermoviscoplastic model with application to copper

    Science.gov (United States)

    Freed, Alan D.

    1988-01-01

    A viscoplastic model is developed which is applicable to anisothermal, cyclic, and multiaxial loading conditions. Three internal state variables are used in the model; one to account for kinematic effects, and the other two to account for isotropic effects. One of the isotropic variables is a measure of yield strength, while the other is a measure of limit strength. Each internal state variable evolves through a process of competition between strain hardening and recovery. There is no explicit coupling between dynamic and thermal recovery in any evolutionary equation, which is a useful simplification in the development of the model. The thermodynamic condition of intrinsic dissipation constrains the thermal recovery function of the model. Application of the model is made to copper, and cyclic experiments under isothermal, thermomechanical, and nonproportional loading conditions are considered. Correlations and predictions of the model are representative of observed material behavior.
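
    The competition between strain hardening and recovery in each evolution equation can be illustrated with a one-variable toy integration. The sketch below assumes a generic form dk/dt = h * (plastic strain rate) - r * k with made-up constants; it is not the copper calibration used in the paper.

        # Toy evolution of one internal state variable (a strength measure)
        # under the hardening-recovery competition described above:
        #   dk/dt = h * pdot - r * k
        # h (hardening modulus), r (recovery rate) and pdot (plastic strain
        # rate) are hypothetical constants, not values calibrated for copper.
        h, r, pdot = 500.0, 2.0, 1e-3
        dt, n_steps = 0.1, 2000

        k = 10.0  # initial strength (MPa)
        for _ in range(n_steps):
            k += dt * (h * pdot - r * k)   # explicit Euler step

        # The variable relaxes toward the point where hardening balances
        # recovery, i.e. the saturation value h * pdot / r.
        print(f"k after {n_steps * dt:.0f} s: {k:.3f} MPa "
              f"(saturation value h*pdot/r = {h * pdot / r:.3f})")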

  20. Degenerate RFID Channel Modeling for Positioning Applications

    Directory of Open Access Journals (Sweden)

    A. Povalac

    2012-12-01

    Full Text Available This paper introduces the theory of channel modeling for positioning applications in UHF RFID. It explains basic parameters for channel characterization from both the narrowband and wideband point of view. More details are given about ranging and direction finding. Finally, several positioning scenarios are analyzed with developed channel models. All the described models use a degenerate channel, i.e. combined signal propagation from the transmitter to the tag and from the tag to the receiver.

  1. Application of Substitutional Model in Oxide Systems

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The application of the substitutional model in oxide systems, in comparison with that of the sublattice model, is discussed. The results show that, in the case of crystalline phases and of liquid phases without molecular-like associates or a shortage of an element in a sublattice, the two models are consistent in the description of the formalism of the Gibbs free energies of phases and give the same phase diagram calculation results when the valences of the cations remain the same.

  2. Review of models applicable to accident aerosols

    International Nuclear Information System (INIS)

    Estimations of potential airborne-particle releases are essential in safety assessments of nuclear-fuel facilities. This report is a review of aerosol behavior models that have potential applications for predicting aerosol characteristics in compartments containing accident-generated aerosol sources. Such characterization of the accident-generated aerosols is a necessary step toward estimating their eventual release in any accident scenario. Existing aerosol models can predict the size distribution, concentration, and composition of aerosols as they are acted on by ventilation, diffusion, gravity, coagulation, and other phenomena. Models developed in the fields of fluid mechanics, indoor air pollution, and nuclear-reactor accidents are reviewed with this nuclear fuel facility application in mind. The various capabilities of modeling aerosol behavior are tabulated and discussed, and recommendations are made for applying the models to problems of differing complexity

  3. Benchmark of tyre models for mechatronic application

    OpenAIRE

    Carulla Castellví, Marina

    2010-01-01

    In this paper a comparison matrix is developed in order to examine three tyre models against nine criteria. These criteria are obtained from a study of the requirements of the main vehicle-dynamics mechatronic applications, such as ABS, ESP, TCS and EPAS. The present study proposes a weight for each criterion related to its importance for the mentioned applications. These weights take both practical and theoretical judgement into account: the former was collected through experts'...

  4. Parallel Computing Applications and Financial Modelling

    OpenAIRE

    Liddell, Heather M.; Parkinson, D.; Hodgson, G S; Dzwig, P.

    2004-01-01

    At Queen Mary, University of London, we have over twenty years of experience in Parallel Computing Applications, mostly on "massively parallel systems", such as the Distributed Array Processors (DAPs). The applications in which we were involved included design of numerical subroutine libraries, Finite Element software, graphics tools, the physics of organic materials, medical imaging, computer vision and more recently, Financial modelling. Two of the projects related to the latter are describ...

  5. Application Note: Power Grid Modeling With Xyce.

    Energy Technology Data Exchange (ETDEWEB)

    Sholander, Peter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce TM Parallel Electronic Simulator developed at Sandia National Labs. It provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and the assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.

  6. Models of organometallic complexes for optoelectronic applications

    CERN Document Server

    Jacko, A C; Powell, B J

    2010-01-01

    Organometallic complexes have potential applications as the optically active components of organic light emitting diodes (OLEDs) and organic photovoltaics (OPV). Development of more effective complexes may be aided by understanding their excited state properties. Here we discuss two key theoretical approaches for investigating these complexes: first-principles atomistic models and effective Hamiltonian models. We review applications of these methods, such as determining the nature of the emitting state, predicting the fraction of injected charges that form triplet excitations, and explaining the sensitivity of device performance to small changes in the molecular structure of the organometallic complexes.

  7. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA)

    OpenAIRE

    Rakesh,; Satish Kumar

    2015-01-01

    The aim of the present study is to determine the most significant input parameters, such as welding current, arc voltage and root gap, during Metal Inert Gas (MIG) welding of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of the weld specimens are investigated in this study. The three selected input parameters were varied at three levels. Accordingly, nine experiments were performed based on the L9 orthogonal array of Taguchi's methodology, which c...

  8. Incomplete quality of life data in lung transplant research: comparing cross sectional, repeated measures ANOVA, and multi-level analysis

    Directory of Open Access Journals (Sweden)

    van der Bij Wim

    2005-09-01

    Full Text Available Abstract Background In longitudinal studies on Health Related Quality of Life (HRQL) it frequently occurs that patients have one or more missing forms, which may cause bias and reduce the sample size. The aims of the present study were to address the problem of missing data in the field of lung transplantation (LgTX) and HRQL, to compare results obtained with different methods of analysis, and to show the value of each type of statistical method used to summarize data. Methods Results from cross-sectional analysis, repeated measures ANOVA on complete cases, and multi-level analysis were compared. The scores on the dimension 'energy' of the Nottingham Health Profile (NHP) after transplantation were used to illustrate the differences between methods. Results Compared to repeated measures ANOVA, the cross-sectional and multi-level analyses included more patients and allowed for a longer period of follow-up. In contrast to the cross-sectional analyses, the complete-case analysis and the multi-level analysis took the correlation between different time points into account. Patterns over time were comparable across the three methods. In general, results from repeated measures ANOVA showed the most favorable energy scores, and results from the multi-level analysis the least favorable. Due to the separate subgroups per time point in the cross-sectional analysis, and the relatively small number of patients in the repeated measures ANOVA, inclusion of predictors was only possible in the multi-level analysis. Conclusion Results obtained with the various methods of analysis differed, indicating that some reduction of bias took place. Multi-level analysis is a useful approach for studying changes over time in a data set with missing data: it reduces bias, makes efficient use of the available data, and allows inclusion of predictors in studies concerning the effects of LgTX on HRQL.
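
    A minimal sketch of the multi-level (mixed) model the study favors, using statsmodels on hypothetical long-format HRQL data with missing follow-up forms. The variable names (patient, month, energy) and all numbers are invented for illustration.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format HRQL data: one row per patient per visit,
        # with some follow-up visits missing (as in the lung transplant study).
        rng = np.random.default_rng(0)
        rows = []
        for pid in range(40):
            base = rng.normal(60, 10)
            for month in (0, 6, 12, 24):
                if rng.random() < 0.8:          # roughly 20% of forms missing
                    rows.append({"patient": pid, "month": month,
                                 "energy": base - 0.5 * month + rng.normal(0, 5)})
        df = pd.DataFrame(rows)

        # Multi-level model: fixed time effect, random intercept per patient.
        # Unlike complete-case repeated measures ANOVA, it uses every
        # available observation rather than dropping incomplete patients.
        fit = smf.mixedlm("energy ~ month", df, groups=df["patient"]).fit()
        print(fit.summary())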

  9. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of drug development in the pharmaceutical industry, and continued research in statistical methodology within these areas is thus important. PK models are concerned with describing the concentration profile of a drug in both humans and animals after drug intake, whereas PD models are used to describe the effect of a drug in relation to the drug concentration. PK models for an individual are usually described… the individual in almost any thinkable way. This project focuses on measuring the effects on sleep in both humans and animals. The sleep process is usually analyzed by categorizing small time segments into a number of sleep states, and this can be modelled using a Markov process. For this purpose new methods…

  10. Deformation Models Tracking, Animation and Applications

    CERN Document Server

    Torres, Arnau; Gómez, Javier

    2013-01-01

    The computational modelling of deformations has been actively studied for the last thirty years. This is mainly due to its large range of applications that include computer animation, medical imaging, shape estimation, deformation of faces and other parts of the human body, and object tracking. In addition, these advances have been supported by the evolution of computer processing capabilities, enabling realism in a more sophisticated way. This book encompasses relevant works of expert researchers in the field of deformation models and their applications. The book is divided into two main parts. The first part presents recent object deformation techniques from the point of view of computer graphics and computer animation. The second part of this book presents six works that study deformations from a computer vision point of view with a common characteristic: deformations are applied in real world applications. The primary audience for this work is researchers from different multidisciplinary fields, s...

  11. Ionospheric Modeling for Precise GNSS Applications

    NARCIS (Netherlands)

    Memarzadeh, Y.

    2009-01-01

    The main objective of this thesis is to develop a procedure for modeling and predicting ionospheric Total Electron Content (TEC) for high precision differential GNSS applications. As the ionosphere is a highly dynamic medium, we believe that to have a reliable procedure it is necessary to transfer t

  12. Mixed models theory and applications with R

    CERN Document Server

    Demidenko, Eugene

    2013-01-01

    Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g

  13. Application of RBAC Model in System Kernel

    Directory of Open Access Journals (Sweden)

    Guan Keqing

    2012-11-01

    Full Text Available With the development of technologies for ubiquitous computing, the application of embedded intelligent devices is booming. Meanwhile, information security faces more serious threats than before. To improve the security of an information terminal's operating system, this paper analyzes the threats to the system's information security that come from abnormal operation by processes, and applies the RBAC model to the safety management mechanism of the operating system's kernel. We built an access control model for the system's processes and proposed an implementation framework, and the methods of implementing the model in operating systems are illustrated.

  14. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  15. PERFORMANCE MEASUREMENT IN A PUBLIC SECTOR PASSENGER BUS TRANSPORT COMPANY USING FUZZY TOPSIS, FUZZY AHP AND ANOVA – A CASE STUDY

    Directory of Open Access Journals (Sweden)

    M.VETRIVEL SEZHIAN,

    2011-02-01

    Full Text Available This paper aims to assess the performance of three depots of a public sector passenger bus transport company. The performance data have been collected from the real users. The feedback obtained from the depot managers is predominantly quantitative, whereas the feedback obtained from the regular passengers is purely qualitative. These quantitative and qualitative data have been analyzed with a multi-criteria decision-making model. The Technique for Order Preference by Similarity to Ideal Solution for decision-making problems with fuzzy data (FTOPSIS) and the Fuzzy Analytical Hierarchy Process (FAHP) have been used for the managers' feedback, and one-way Analysis of Variance (ANOVA) has been used for the passengers' information. The values obtained have been combined to obtain the final results. The overall systematic algorithm for determining the best performing depot is illustrated step by step for continuous improvement.
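
    For the passengers' qualitative feedback, the one-way ANOVA step could look like the following sketch; the ratings are invented placeholders, not the study's questionnaire data.

        from scipy import stats

        # Hypothetical passenger satisfaction ratings (1-10) for three depots.
        depot_1 = [7, 8, 6, 7, 9, 8, 7]
        depot_2 = [5, 6, 6, 4, 5, 7, 5]
        depot_3 = [8, 9, 7, 8, 8, 9, 7]

        f_stat, p_val = stats.f_oneway(depot_1, depot_2, depot_3)
        print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
        # A small p-value indicates the depots differ in mean rating; the
        # fuzzy TOPSIS/AHP scores for the managers' feedback would then be
        # combined with this result to rank the depots.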

  16. Link mining models, algorithms, and applications

    CERN Document Server

    Yu, Philip S; Faloutsos, Christos

    2010-01-01

    This book presents in-depth surveys and systematic discussions on models, algorithms and applications for link mining. Link mining is an important field of data mining. Traditional data mining focuses on 'flat' data in which each data object is represented as a fixed-length attribute vector. However, many real-world data sets are much richer in structure, involving objects of multiple types that are related to each other. Hence, recently link mining has become an emerging field of data mining, which has a high impact in various important applications such as text mining, social network analysi

  17. Parallel Computing Applications and Financial Modelling

    Directory of Open Access Journals (Sweden)

    Heather M. Liddell

    2004-01-01

    Full Text Available At Queen Mary, University of London, we have over twenty years of experience in Parallel Computing Applications, mostly on "massively parallel systems", such as the Distributed Array Processors (DAPs). The applications in which we were involved included design of numerical subroutine libraries, Finite Element software, graphics tools, the physics of organic materials, medical imaging, computer vision and more recently, Financial modelling. Two of the projects related to the latter are described in this paper, namely Portfolio Optimisation and Financial Risk Assessment.

  18. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  19. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  20. Auditory model inversion and its application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Heming; WANG Yongqi; CHEN Xueqin

    2005-01-01

    Auditory models have been applied to several aspects of the speech signal processing field and appear to be effective in performance. This paper presents the inverse transform of each stage of one widely used auditory model. First it is necessary to invert the correlogram and reconstruct phase information by repeated iterations in order to obtain the auditory-nerve firing rate. The next step is to obtain the negative parts of the signal via the reverse process of the HWR (Half Wave Rectification). Finally the functions of the inner hair cell/synapse model and the Gammatone filters have to be inverted. Thus the whole auditory model inversion has been achieved. An application to noisy speech enhancement based on the auditory model inversion algorithm is proposed. Many experiments show that this method is effective in reducing noise, especially when the SNR of the noisy speech is low, where it outperforms other methods. Thus the auditory model inversion method given in this paper is applicable to the speech enhancement field.

  1. Comparison of ANOVA, McSweeney, Bradley, Harwell-Serlin, and Blair-Sawilowsky Tests in the Balanced 2x2x2 Layout.

    Science.gov (United States)

    Kelley, D. Lynn; And Others

    The Type I error and power properties of the 2x2x2 analysis of variance (ANOVA) and tests developed by McSweeney (1967), Bradley (1979), Harwell-Serlin (1989; Harwell, 1991), and Blair-Sawilowsky (1990) were compared using Monte Carlo methods. The ANOVA was superior under the Gaussian and uniform distributions. The Blair-Sawilowsky test was…

  2. Is the ANOVA F-Test Robust to Variance Heterogeneity When Sample Sizes are Equal?: An Investigation via a Coefficient of Variation

    Science.gov (United States)

    Rogan, Joanne C.; Keselman, H. J.

    1977-01-01

    The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
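
    The finding can be reproduced in miniature with a Monte Carlo check like the sketch below: equal group sizes, equal means, but unequal variances, so every rejection is a Type I error. The design constants are illustrative choices, not the paper's exact design.

        import numpy as np
        from scipy import stats

        # Monte Carlo estimate of the ANOVA F-test Type I error rate when
        # group variances differ but sample sizes are equal.
        rng = np.random.default_rng(1)
        n, sds, n_sims, alpha = 10, (1.0, 2.0, 4.0), 20_000, 0.05

        rejections = 0
        for _ in range(n_sims):
            groups = [rng.normal(0.0, sd, size=n) for sd in sds]
            _, p = stats.f_oneway(*groups)
            rejections += p < alpha

        print(f"empirical Type I error: {rejections / n_sims:.4f} "
              f"(nominal {alpha})")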

  3. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  4. An ANOVA-type test for multiple change points in the mean of a heavy-tailed dependent sequence

    Institute of Scientific and Technical Information of China (English)

    吕会琴; 赵文芝; 赵蕊

    2016-01-01

    In order to study the detection of multiple mean changes in a heavy-tailed dependent sequence, an ANOVA-type test statistic is proposed for testing the null hypothesis of no change against the alternative of multiple change points. The limiting distribution of the test statistic under the null hypothesis and the consistency of the test are then obtained. Finally, Monte Carlo results are shown to support the argument.
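
    The flavor of an ANOVA-type statistic for mean changes can be sketched as below: treat blocks of the sequence as groups and compare between-block to within-block variability. This is only a schematic stand-in; the paper's statistic and its heavy-tail/dependence adjustments are not reproduced here.

        import numpy as np

        def anova_type_stat(x, n_blocks):
            """Generic ANOVA-type statistic: between-block variability of the
            block means, scaled by within-block variability. Large values
            suggest the mean is not constant over the sequence."""
            blocks = np.array_split(np.asarray(x, dtype=float), n_blocks)
            grand = np.concatenate(blocks).mean()
            ss_between = sum(len(b) * (b.mean() - grand) ** 2 for b in blocks)
            ss_within = sum(((b - b.mean()) ** 2).sum() for b in blocks)
            return ss_between / ss_within

        rng = np.random.default_rng(2)
        flat = rng.standard_t(df=3, size=300)     # heavy-tailed, no change
        shifted = flat.copy()
        shifted[100:200] += 2.0                   # two mean changes
        print(anova_type_stat(flat, 6), anova_type_stat(shifted, 6))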

  5. Monotonicity Properties of Dirichlet Integrals with Applications to the Multinomial Distribution and the Anova Test; A Draft.

    Science.gov (United States)

    Olkin, Ingram

    Bounds for the tails of Dirichlet integrals are established by showing that each integral as a function of the limits is a Schur function. In particular, it is shown how these bounds apply to the simultaneous analysis of variance test and to the multinomial distribution. (Author)

  6. THE CONNECTION IDENTIFICATION BETWEEN THE NET INVESTMENTS IN HOTELS AND RESTORANTS AND TOURISTIC ACCOMODATION CAPACITY BY USING THE ANOVA METHOD

    Directory of Open Access Journals (Sweden)

    Elena STAN

    2009-12-01

    Full Text Available For the purpose of answering customers' exacting demands, the development of Romanian tourism has to take into account especially the "accommodation" component. The dimension of the technical and material base of accommodation can be expressed through the number of units, the number of rooms, and the number of places; the most used is the "number of places" indicator. Nowadays there are special concerns regarding Romanian tourism investments, caused by particular determinants. The aim of the study is to identify the existence of a connection between net investments in hotels and restaurants and tourist accommodation capacity, registered over the 2002-2007 period in Romania, by using the ANOVA method of dispersion analysis.

  7. Managing Event Information Modeling, Retrieval, and Applications

    CERN Document Server

    Gupta, Amarnath

    2011-01-01

    With the proliferation of citizen reporting, smart mobile devices, and social media, an increasing number of people are beginning to generate information about events they observe and participate in. A significant fraction of this information contains multimedia data to share the experience with their audience. A systematic information modeling and management framework is necessary to capture this widely heterogeneous, schemaless, potentially humongous information produced by many different people. This book is an attempt to examine the modeling, storage, querying, and applications of such an

  8. Applications of model theory to functional analysis

    CERN Document Server

    Iovino, Jose

    2014-01-01

    During the last two decades, methods that originated within mathematical logic have exhibited powerful applications to Banach space theory, particularly set theory and model theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the

  9. Systems Evaluation Methods, Models, and Applications

    CERN Document Server

    Liu, Siefeng; Xie, Naiming; Yuan, Chaoqing

    2011-01-01

    A book in the Systems Evaluation, Prediction, and Decision-Making Series, Systems Evaluation: Methods, Models, and Applications covers the evolutionary course of systems evaluation methods, clearly and concisely. Outlining a wide range of methods and models, it begins by examining the method of qualitative assessment. Next, it describes the process and methods for building an index system of evaluation and considers the compared evaluation and the logical framework approach, analytic hierarchy process (AHP), and the data envelopment analysis (DEA) relative efficiency evaluation method. Unique

  10. Application software development via model based design

    OpenAIRE

    Haapala, O. (Olli)

    2015-01-01

    This thesis was set up to study the utilization of the MathWorks' Simulink® program in model-based application software development and its compatibility with the Vacon 100 inverter. The target was to identify all the problems related to everyday usage of this method and to create a white paper on how to execute a model-based design to create Vacon 100 compatible system software. Before this thesis was started, there was very little knowledge of the compatibility of this method. However durin...

  11. Application of RBAC Model in System Kernel

    OpenAIRE

    Guan Keqing; Li Hongxin; Kong Xianli

    2012-01-01

    With the development of technologies for ubiquitous computing, the application of embedded intelligent devices is booming. Meanwhile, information security faces more serious threats than before. To improve the security of an information terminal's operating system, this paper analyzes the threats to the system's information security that come from abnormal operation by processes, and applies the RBAC model to the safety management mechanism of the operating system's kernel. We bu...

  12. Sticker DNA computer model--PartⅡ:Application

    Institute of Scientific and Technical Information of China (English)

    XU Jin; LI Sanping; DONG Yafei; WEI Xiaopeng

    2004-01-01

    The sticker model is one of the basic models of DNA computing. This model is coded with single-double stranded DNA molecules, and it has the advantages that the operations require no strand extension and use no enzymes; what's more, the materials are reusable. It has therefore aroused the attention and interest of scientists in many fields. In this paper we extend and improve the sticker model, which will be beneficial to the construction of DNA computers. This paper is the second part of our series paper and mainly focuses on the application of the sticker model. It consists of the following three sections: first, the matrix representation of the sticker model is presented; then a brief review is given of past research on graph and combinatorial optimization problems, such as the minimal set covering problem, the vertex covering problem, the Hamiltonian path or cycle problem, the maximal clique problem, the maximal independent set problem and the Steiner spanning tree problem; finally, a DNA algorithm for the graph isomorphism problem based on the sticker model is given.

  13. A Novel Feature Selection Based on One-Way ANOVA F-Test for E-Mail Spam Classification

    Directory of Open Access Journals (Sweden)

    Nadir Omer Fadl Elssied

    2014-01-01

    Full Text Available Spam is commonly defined as unwanted e-mail, and it has become a global threat to e-mail users. Although the Support Vector Machine (SVM) has been commonly used in e-mail spam classification, the problem of high dimensionality of the feature space, due to the massive number of e-mails and features, still exists. To address this limitation of SVM, reducing the computational complexity (efficiency) and enhancing the classification accuracy (effectiveness), in this study a feature selection scheme based on one-way ANOVA F-test statistics was applied to determine the most important features contributing to e-mail spam classification. This feature selection based on the one-way ANOVA F-test is used to reduce the high dimensionality of the feature space before the classification process. The experiment with the proposed scheme was carried out using the well-known Spambase benchmark dataset to evaluate the feasibility of the proposed method. The comparison covers different datasets, categorization algorithms and success measures. Experimental results on the Spambase English dataset showed that the enhanced SVM (FSSVM) significantly outperforms SVM and many other recent spam classification methods in terms of computational complexity and dimension reduction.
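
    In scikit-learn terms, the described pipeline corresponds closely to SelectKBest with the ANOVA F-test score function (f_classif) followed by an SVM, as in this sketch. Synthetic data stands in for Spambase, and k = 20 is an arbitrary choice:

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # Stand-in data with the same shape of problem (57 features, two
        # classes); the paper used the Spambase benchmark dataset.
        X, y = make_classification(n_samples=1000, n_features=57,
                                   n_informative=15, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # One-way ANOVA F-test scores each feature against the class label;
        # keeping only the top-k features reduces dimensionality before SVM.
        clf = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="rbf"))
        clf.fit(X_tr, y_tr)
        print(f"accuracy with top-20 ANOVA-selected features: "
              f"{clf.score(X_te, y_te):.3f}")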

  14. The natural emissions model (NEMO): Description, application and model evaluation

    Science.gov (United States)

    Liora, Natalia; Markakis, Konstantinos; Poupkou, Anastasia; Giannaros, Theodore M.; Melas, Dimitrios

    2015-12-01

    The aim of this study is the application and evaluation of a new computer model used for the quantification of emissions coming from natural sources. The Natural Emissions Model (NEMO) is driven by the meteorological data of the mesoscale numerical Weather Research and Forecasting (WRF) model and it estimates particulate matter (PM) emissions from windblown dust, sea salt aerosols (SSA) and primary biological aerosol particles (PBAPs). It also includes emissions from Biogenic Volatile Organic Compounds (BVOCs) from vegetation; however, this study focuses only on particle emissions. An application and evaluation of NEMO at European scale are presented. NEMO and the modelling system consisted of WRF model and the Comprehensive Air Quality Model with extensions (CAMx) were applied in a 30 km European domain for the year 2009. The computed domain-wide annual PM10 emissions from windblown dust, sea salt and PBAPs were 0.57 Tg, 20 Tg and 0.12 Tg, respectively. PM2.5 represented 6% and 33% of emitted windblown dust and sea salt, respectively. Natural emissions are characterized by high geographical and seasonal variations; windblown dust emissions were the highest during summer in the southern Europe and SSA production was the highest in Atlantic Ocean during the cold season while in Mediterranean Sea the highest SSA emissions were found over the Aegean Sea during summer. Modelled concentrations were compared with surface station measurements and showed that the model captured fairly well the contribution of the natural sources to PM levels over Europe. Dust concentrations correlated better when dust transport events from Sahara desert were absent while the simulation of sea salt episodes led to an improvement of model performance during the cold season.

  15. Intelligent Model for Traffic Safety Applications

    Directory of Open Access Journals (Sweden)

    C. Chellappan

    2012-01-01

    Full Text Available Problem statement: This study presents an analysis of a road traffic system focused on the use of communications to detect dangerous vehicles on roads and highways and how this could be used to enhance driver safety. Approach: The intelligent traffic safety application model is based on the traffic flow theories developed in recent years, leading to reliable representations of road traffic, which is of major importance in achieving the attenuation of traffic problems. The model also includes the decision-making process of the driver in accelerating, decelerating and changing lanes. Results: The individuality of each of these processes arises from the model parameters that are randomly generated from statistical distributions introduced as input parameters. Conclusion: This allows the integration of the individuality factor of the population elements, yielding knowledge of various driving modes in a wide variety of situations.

  16. Development of a Mobile Application for Building Energy Prediction Using Performance Prediction Model

    Directory of Open Access Journals (Sweden)

    Yu-Ri Kim

    2016-03-01

    Full Text Available Recently, the Korean government has enforced disclosure of building energy performance, so that such information can help owners and prospective buyers to make suitable investment plans. This building energy performance policy makes it mandatory for building owners to obtain engineering audits and thereby evaluate the energy performance levels of their buildings. However, to calculate energy performance levels (i.e., by an asset rating methodology), a qualified expert needs access to at least the full project documentation and/or an on-site inspection of the building. Energy performance certification costs a great deal of time and money, and the database of certified buildings is still quite small. The need is therefore increasing for a simplified and user-friendly energy performance prediction tool for non-specialists, along with a database that allows building owners and users to compare best practices. In this regard, the current study developed a simplified performance prediction model through experimental design, energy simulations and ANOVA (analysis of variance). Furthermore, using the new prediction model, a related mobile application was also developed.

  17. Determining Application Runtimes Using Queueing Network Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, Michael L. [Univ. of San Francisco, CA (United States)

    2006-12-14

    Determination of application times-to-solution for large-scale clustered computers continues to be a difficult problem in high-end computing, which will only become more challenging as multi-core consumer machines become more prevalent in the market. Both researchers and consumers of these multi-core systems desire reasonable estimates of how long their programs will take to run (time-to-solution, or TTS), and how many resources will be consumed in the execution. Currently there are few methods of determining these values, and those that do exist are either overly simplistic in their assumptions or require great amounts of effort to parameterize and understand. One previously untried method is queuing network modeling (QNM), which is easy to parameterize and solve, and produces results that typically fall within 10 to 30% of the actual TTS for our test cases. Using characteristics of the computer network (bandwidth, latency) and communication patterns (number of messages, message length, time spent in communication), the QNM model of the NAS-PB CG application was applied to MCR and ALC, supercomputers at LLNL, and the Keck Cluster at USF, with average errors of 2.41%, 3.61%, and -10.73%, respectively, compared to the actual TTS observed. While additional work is necessary to improve the predictive capabilities of QNM, current results show that QNM has a great deal of promise for determining application TTS for multi-processor computer systems.
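
    One standard way to solve such closed queueing network models is exact Mean Value Analysis (MVA); the sketch below implements it for a two-center network. The service demands are hypothetical, not measurements of MCR, ALC or the Keck Cluster.

        # Exact Mean Value Analysis (MVA) for a closed queueing network --
        # a classic QNM solution algorithm.
        def mva(demands, n_jobs):
            """Throughput and per-center queue lengths for a closed network
            of single-server queueing centers with the given service demands."""
            q = [0.0] * len(demands)              # queue lengths at n = 0
            x = 0.0
            for n in range(1, n_jobs + 1):
                # Residence time per center: service demand inflated by the
                # customers already queued there (arrival theorem).
                r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
                x = n / sum(r)                    # system throughput
                q = [x * rk for rk in r]          # Little's law per center
            return x, q

        # Two centers (CPU: 0.02 s/visit, network: 0.05 s/visit), 16 processes.
        throughput, queues = mva([0.02, 0.05], 16)
        print(f"throughput = {throughput:.1f} jobs/s, "
              f"queues = {[round(qk, 2) for qk in queues]}")
        # Time-to-solution for a batch of J jobs is then roughly J / throughput.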

  18. Regional hyperthermia applicator design using FDTD modelling.

    Science.gov (United States)

    Kroeze, H; Van de Kamer, J B; De Leeuw, A A; Lagendijk, J J

    2001-07-01

    Recently published results confirm the positive effect of regional hyperthermia combined with external radiotherapy on pelvic tumours. Several studies have been published on the improvement of RF annular array applicator systems with dipoles and a closed water bolus. This study investigates the performance of a next-generation applicator system for regional hyperthermia with a multi-ring annular array of antennas and an open water bolus. A cavity slot antenna is introduced to enhance the directivity and reduce mutual coupling between the antennas. Several design parameters, i.e. dimensions, number of antennas and operating frequency, have been evaluated using several patient models. Performance indices have been defined to evaluate the effect of parameter variation on the specific absorption rate (SAR) distribution. The performance of the new applicator type is compared with the Coaxial TEM. Operating frequency appears to be the main parameter with a positive influence on the performance. A SAR increase in tumour of 1.7 relative to the Coaxial TEM system can be obtained with a three-ring, six-antenna per ring cavity slot applicator operating at 150 MHz.

  19. Modelling and application of stochastic processes

    CERN Document Server

    1986-01-01

    The subject of modelling and application of stochastic processes is too vast to be exhausted in a single volume. In this book, attention is focused on a small subset of this vast subject. The primary emphasis is on realization and approximation of stochastic systems. Recently there has been considerable interest in the stochastic realization problem, and hence, an attempt has been made here to collect in one place some of the more recent approaches and algorithms for solving the stochastic realization problem. Various different approaches for realizing linear minimum-phase systems, linear nonminimum-phase systems, and bilinear systems are presented. These approaches range from time-domain methods to spectral-domain methods. An overview of the chapter contents briefly describes these approaches. Also, in most of these chapters special attention is given to the problem of developing numerically efficient algorithms for obtaining reduced-order (approximate) stochastic realizations. On the application side,...

  20. Application of Chaos Theory to Psychological Models

    Science.gov (United States)

    Blackerby, Rae Fortunato

    This dissertation shows that an alternative theoretical approach from physics--chaos theory--offers a viable basis for improved understanding of human beings and their behavior. Chaos theory provides achievable frameworks for potential identification, assessment, and adjustment of human behavior patterns. Most current psychological models fail to address the metaphysical conditions inherent in the human system, thus bringing deep errors to psychological practice and empirical research. Freudian, Jungian and behavioristic perspectives are inadequate psychological models because they assume, either implicitly or explicitly, that the human psychological system is a closed, linear system. On the other hand, Adlerian models that require open systems are likely to be empirically tenable. Logically, models will hold only if the model's assumptions hold. The innovative application of chaotic dynamics to psychological behavior is a promising theoretical development because the application asserts that human systems are open, nonlinear and self-organizing. Chaotic dynamics use nonlinear mathematical relationships among factors that influence human systems. This dissertation explores these mathematical relationships in the context of a sample model of moral behavior using simulated data. Mathematical equations with nonlinear feedback loops describe chaotic systems. Feedback loops govern the equations' value in subsequent calculation iterations. For example, changes in moral behavior are affected by an individual's own self-centeredness, family and community influences, and previous moral behavior choices that feed back to influence future choices. When applying these factors to the chaos equations, the model behaves like other chaotic systems. For example, changes in moral behavior fluctuate in regular patterns, as determined by the values of the individual, family and community factors. In some cases, these fluctuations converge to one value; in other cases, they diverge in
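
    The nonlinear-feedback behavior described here can be illustrated with the logistic map, a textbook chaotic system. This is a generic example of feedback-driven convergence versus sensitive dependence, not the dissertation's moral-behavior model.

        # Each value feeds back into the next iteration, x -> r * x * (1 - x).
        def iterate(r, x0, n=50):
            x = x0
            for _ in range(n):
                x = r * x * (1 - x)   # nonlinear feedback loop
            return x

        print(iterate(2.8, 0.3), iterate(2.8, 0.3001))  # converges to one value
        print(iterate(3.9, 0.3), iterate(3.9, 0.3001))  # tiny change, big drift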

  1. An Application on Multinomial Logistic Regression Model

    Directory of Open Access Journals (Sweden)

    Abdalla M El-Habil

    2012-03-01

    Full Text Available This study aims to identify an application of the Multinomial Logistic Regression model, which is one of the important methods for categorical data analysis. This model deals with one nominal/ordinal response variable that has more than two categories. It has been applied in data analysis in many areas, for example health, social, behavioral, and educational. To identify the model in a practical way, we used real data on physical violence against children from a survey of Youth 2003 which was conducted by the Palestinian Central Bureau of Statistics (PCBS). A segment of the population of children in the age group (10-14 years) resident in Gaza governorate, of size 66,935, had been selected, and the response variable consisted of four categories. Eighteen explanatory variables were used for building the primary multinomial logistic regression model. The model was tested through a set of statistical tests to ensure its appropriateness for the data. It was also tested by randomly selecting two observations from the data and predicting the group in which each observation would be classified, given the values of its explanatory variables. We concluded that, using the multinomial logistic regression model, we were able to define accurately the relationship between the group of explanatory variables and the response variable, to identify the effect of each of the variables, and to predict the classification of any individual case.
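
    A minimal sketch of fitting a multinomial logit to a four-category response with statsmodels, on synthetic data; the PCBS survey microdata are not reproduced here, and all coefficients are invented.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 600
        X = rng.normal(size=(n, 3))                  # explanatory variables
        # Hypothetical coefficients generate a 4-category response.
        logits = X @ rng.normal(size=(3, 4))
        y = np.array([rng.choice(4, p=np.exp(l) / np.exp(l).sum())
                      for l in logits])

        model = sm.MNLogit(y, sm.add_constant(X)).fit(disp=0)
        print(model.summary())
        # Predicted category probabilities for the first two cases:
        print(np.round(model.predict(sm.add_constant(X))[:2], 3))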

  2. Web Application for Modeling Global Antineutrinos

    CERN Document Server

    Barna, Andrew

    2015-01-01

    Electron antineutrinos stream freely from rapidly decaying fission products within nuclear reactors and from long-lived radioactivity within Earth. Those with energy greater than 1.8 MeV are regularly observed by several kiloton-scale underground detectors. These observations estimate the amount of terrestrial radiogenic heating, monitor the operation of nuclear reactors, and measure the fundamental properties of neutrinos. The analysis of antineutrino observations at operating detectors or the planning of projects with new detectors requires information on the expected signal and background rates. We present a web application for modeling global antineutrino energy spectra and detection rates for any surface location. Antineutrino sources include all registered nuclear reactors as well as the crust and mantle of Earth. Visitors to the website may model the location and power of a hypothetical nuclear reactor, copy energy spectra, and analyze the significance of a selected signal relative to background.

  3. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA

    Directory of Open Access Journals (Sweden)

    Rakesh

    2015-11-01

    Full Text Available The aim of the present study is to determine the most significant input parameters, such as welding current, arc voltage and root gap, during Metal Inert Gas (MIG) welding of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of the weld specimens are investigated in this study. The three selected input parameters were varied at three levels. Accordingly, nine experiments were performed based on the L9 orthogonal array of Taguchi's methodology, which accommodates the three input parameters. Root gap has the greatest effect on tensile strength, followed by welding current and arc voltage. Arc voltage has the greatest effect on hardness, followed by root gap and welding current. The weld metal consists of fine grains of ferrite and pearlite.
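
    The L9/ANOVA computation can be sketched as below: for each factor, the sum of squares comes from its three level means, and the percentage contribution identifies the most significant parameter. The response values are invented placeholders, not the paper's measurements.

        import numpy as np

        # L9 orthogonal array (three factors at three levels) and hypothetical
        # tensile-strength responses (MPa) for the nine runs.
        L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                       [1, 0, 1], [1, 1, 2], [1, 2, 0],
                       [2, 0, 2], [2, 1, 0], [2, 2, 1]])
        y = np.array([410., 428., 415., 432., 445., 426., 430., 452., 441.])

        grand = y.mean()
        ss_total = ((y - grand) ** 2).sum()
        for j, name in enumerate(["current", "voltage", "root gap"]):
            # Factor sum of squares from its three level means (3 runs each).
            ss = sum(3 * (y[L9[:, j] == lvl].mean() - grand) ** 2
                     for lvl in range(3))
            print(f"{name}: SS = {ss:.1f}, "
                  f"contribution = {100 * ss / ss_total:.1f}%")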

  4. Stability and adaptability analysis of rice cultivars using environment-centered yield in two-way ANOVA model

    Directory of Open Access Journals (Sweden)

    D. Sumith De. Z. Abeysiriwardena

    2011-12-01

    Full Text Available Identification of rice varieties with wider adaptability and stability is an important aspect of varietal recommendation to achieve better economic benefits for farmers. Multi-locational trials are conducted in different locations and seasons to test and identify varieties that perform consistently across wide environments, as well as location-specific high-performing varieties. The interaction of varieties with the environment is complex and highly variable across locations, which makes identifying varieties for recommendation difficult. Several methods with complex computational requirements have been proposed in the recent past, but statistical software and related programs now ease the complexity to a large extent. In this study we employed one of the established techniques, variance component analysis (VCA), to make varietal recommendations for wide adaptability across many varying environments as well as location-specific recommendations. In this method the variety × environment interaction is partitioned into components for individual varieties using a yield deviation approach. The average effect of each variety (the environment-centered yield deviation, Dk) and the stability measure of each variety (the variety interaction variance, Sk2) are used to make the recommendations. The yield data of rice cultivars of three-month maturity duration, cultivated across diverse environments during the 2002/03 wet season in Sri Lanka, were analyzed to make recommendations. Based on the results, the variety At581 gave the highest Dk value with wide adaptability and was selected for general recommendation. Varieties Bg305 and At303 also had relatively high Dk values and thus can also be selected for general cultivation.
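
    The Dk/Sk2 computation reduces to a few lines; the sketch below uses an invented yield matrix (the 2002/03 Sri Lanka data are not reproduced) to show the environment-centered deviations, average effects and interaction variances.

        import numpy as np

        # Hypothetical yield matrix (t/ha): rows = varieties, cols = environments.
        yields = np.array([[4.2, 5.1, 3.8, 4.9],
                           [3.9, 4.8, 3.5, 4.4],
                           [4.5, 5.0, 4.1, 5.2]])
        varieties = ["At581", "Bg305", "At303"]

        # Environment-centered yield deviations: subtract each environment mean.
        dev = yields - yields.mean(axis=0)

        d_k = dev.mean(axis=1)            # average effect of each variety
        s2_k = dev.var(axis=1, ddof=1)    # stability: interaction variance

        for v, d, s2 in zip(varieties, d_k, s2_k):
            print(f"{v}: Dk = {d:+.3f}, Sk^2 = {s2:.4f}")
        # A high Dk with small Sk^2 suggests wide adaptability; a high mean
        # at one site with large Sk^2 points to a location-specific choice.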

  5. A conceptual holding model for veterinary applications

    Directory of Open Access Journals (Sweden)

    Nicola Ferrè

    2014-05-01

    Spatial references are required when geographical information systems (GIS) are used for the collection, storage and management of data. In the veterinary domain, the spatial component of a holding (of animals) is usually defined by coordinates, and no other relevant information needs to be interpreted or used for manipulation of the data in the GIS environment provided. Users trying to integrate or reuse spatial data organised in such a way frequently face the problem of data incompatibility and inconsistency. The root of the problem lies in differences with respect to syntax as well as variations in the semantic, spatial and temporal representations of the geographic features. To overcome these problems and to facilitate the inter-operability of different GIS, spatial data must be defined according to a “schema” that includes the definition, acquisition, analysis, access, presentation and transfer of such data between different users and systems. We propose an application “schema” of holdings for GIS applications in the veterinary domain according to the European directive framework (directive 2007/2/EC - INSPIRE). The conceptual model put forward has been developed at two specific levels to produce the essential and the abstract model, respectively. The former establishes the conceptual linkage of the system design to the real world, while the latter describes how the system or software works. The result is an application “schema” that formalises and unifies the information-theoretic foundations of how to spatially represent a holding in order to ensure straightforward information-sharing within the veterinary community.

  6. Validation and application of the SCALP model

    Science.gov (United States)

    Smith, D. A. J.; Martin, C. E.; Saunders, C. J.; Smith, D. A.; Stokes, P. H.

    The Satellite Collision Assessment for the UK Licensing Process (SCALP) model was first introduced in a paper presented at IAC 2003. As a follow-on, this paper details the steps taken to validate the model and describes some of its applications. SCALP was developed for the British National Space Centre (BNSC) to support liability assessments as part of the UK's satellite license application process. Specifically, the model determines the collision risk that a satellite will pose to other orbiting objects during both its operational and post-mission phases. To date SCALP has been used to assess several LEO and GEO satellites for BNSC, and subsequently to provide the necessary technical basis for licenses to be issued. SCALP utilises the current population of operational satellites residing in LEO and GEO (extracted from ESA's DISCOS database) as a starting point. Realistic orbital dynamics, including the approximate simulation of generic GEO station-keeping strategies, are used to propagate the objects over time. The method takes into account all of the appropriate orbit perturbations for LEO and GEO altitudes and allows rapid run times for multiple objects over time periods of many years. The orbit of a target satellite is also propagated in a similar fashion. During these orbital evolutions, a collision prediction and close approach algorithm assesses the collision risk posed to the satellite population. To validate SCALP, specific cases were set up to enable the comparison of collision risk results with other established models, such as the ESA MASTER model. Additionally, the propagation of operational GEO satellites within SCALP was compared with the expected behaviour of controlled GEO objects. The sensitivity of the model to changing the initial conditions of the target satellite, such as semi-major axis and inclination, has also been demonstrated. A further study shows the effect of including extra objects from the GTO population (which can pass through the LEO region).

  7. Acceptance of health information technology in health professionals: an application of the revised technology acceptance model.

    Science.gov (United States)

    Ketikidis, Panayiotis; Dimitrovski, Tomislav; Lazuras, Lambros; Bath, Peter A

    2012-06-01

    The response of health professionals to the use of health information technology (HIT) is an important research topic that can partly explain the success or failure of any HIT application. The present study applied a modified version of the revised technology acceptance model (TAM) to assess the relevant beliefs and acceptance of HIT systems in a sample of health professionals (n = 133). Structured anonymous questionnaires were used and a cross-sectional design was employed. The main outcome measure was the intention to use HIT systems. ANOVA was employed to examine differences in TAM-related variables between nurses and medical doctors, and no significant differences were found. Multiple linear regression analysis was used to assess the predictors of HIT usage intentions. The findings showed that perceived ease of use (but not perceived usefulness), relevance, and subjective norms directly predicted HIT usage intentions. The present findings suggest that a modification of the original TAM approach is needed to better understand health professionals' support and endorsement of HIT. Perceived ease of use, relevance of HIT to the medical and nursing professions, as well as social influences, should be tapped by information campaigns aiming to enhance support for HIT in healthcare settings.
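
    The regression step of such a study can be sketched as follows with statsmodels OLS; the variable names and the simulated responses below are our assumptions, not the study's data.

```python
# Hedged sketch: OLS prediction of HIT usage intentions from TAM variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 133  # same sample size as the study, data entirely simulated
df = pd.DataFrame({
    "ease_of_use": rng.normal(3.5, 0.8, n),
    "usefulness": rng.normal(3.8, 0.7, n),
    "relevance": rng.normal(4.0, 0.6, n),
    "subjective_norm": rng.normal(3.2, 0.9, n),
})
df["intention"] = (1 + 0.6 * df.ease_of_use + 0.3 * df.relevance
                   + rng.normal(0, 0.5, n))

fit = smf.ols("intention ~ ease_of_use + usefulness + relevance"
              " + subjective_norm", data=df).fit()
print(fit.params.round(2))   # which beliefs predict intention in this toy data
```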

  8. Ethyl chitosan synthesis and quantification of the effects acquired after grafting it on a cotton fabric, using ANOVA statistical analysis.

    Science.gov (United States)

    Popescu, Vasilica; Muresan, Augustin; Popescu, Gabriel; Balan, Mihaela; Dobromir, Marius

    2016-03-15

    Three ethyl chitosans (ECSs) have been prepared using ethyl chloride (AA) obtained in situ. Each ECS was applied on a 100% cotton fabric through a pad-dry-cure technology. Using ANOVA as the statistical method, the wrinkle-proofing effects were determined while varying the concentrations of AA (0.1-2.1 mmol) and chitosan (CS) (0.1-2.1 mmol). The alkylation and grafting mechanisms were confirmed by the results of FTIR, (1)H NMR, XPS, SEM, DSC and thermogravimetric analyses. The performance of each ECS as a wrinkle-proofing agent was revealed through quantitative methods (take-up degree, wrinkle-recovery angle, tensile strength and durability of the effect). The ECSs confer a wrinkle-recovery angle and tensile strength higher than those of the control sample. The durability of the ECSs grafted on cotton was demonstrated by a good capacity for dyeing with non-specific (acid/anionic and cationic) dyes under severe working conditions (100°C, 60 min) and a good antimicrobial capacity.

  9. Research on Travel Agency After-Sales Service Based on ANOVA-IPA: A Case Study of City H

    Institute of Scientific and Technical Information of China (English)

    赵静; 龚荷

    2013-01-01

    Travel agency after-sales service plays a very important role in enhancing tourists' perception of delivered value and is a strong guarantee for cultivating loyal customers. On the basis of a scientific understanding of the connotation, significance and approaches of after-sales service, relevant data were obtained through a questionnaire survey, and the current state of after-sales service in the travel agency industry as well as tourist expectations were explored from the tourists' perspective. Based on information asymmetry theory, the 80/20 rule and incentive theory, and combined with interview material from travel agencies, an ANOVA-IPA model was constructed through qualitative and quantitative analysis to determine the advantage, maintenance, weakness and attention zones, so that targeted measures can be taken to expand after-sales service channels and to move competition in the travel agency industry toward a virtuous cycle.
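
    The IPA step of such a model can be sketched as a quadrant classification of service attributes by mean importance and performance; the attributes, scores, and zone labels below are invented for illustration.

```python
# Minimal IPA sketch: classify attributes into four zones by comparing each
# attribute's mean importance/performance with the grand means. Data invented.
import numpy as np

attrs = ["complaint handling", "follow-up visits", "refunds", "loyalty perks"]
importance = np.array([4.5, 3.2, 4.1, 2.8])    # invented survey means (1-5)
performance = np.array([3.1, 3.5, 4.3, 2.5])
imp_c, perf_c = importance.mean(), performance.mean()

zones = {(True, True): "advantage zone (keep up)",
         (True, False): "weakness zone (concentrate here)",
         (False, True): "maintenance zone (possible overkill)",
         (False, False): "attention zone (low priority)"}
for a, i, p in zip(attrs, importance, performance):
    print(f"{a:18s} -> {zones[(bool(i >= imp_c), bool(p >= perf_c))]}")
```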

  10. Seismic Physical Modeling Technology and Its Applications

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper introduces the seismic physical modeling technology in the CNPC Key Lab of Geophysical Exploration. It includes the seismic physical model positioning system, the data acquisition system, sources, transducers, model materials, model building techniques, precision measurements of model geometry, the basic principles of seismic physical modeling and experimental methods, and two physical model examples.

  11. Modeling of polymer networks for application to solid propellant formulating

    Science.gov (United States)

    Marsh, H. E.

    1979-01-01

    Methods for predicting the network structural characteristics formed by the curing of pourable elastomers are presented, along with the logic applied in developing the mathematical models. A universal approach to modeling was developed and verified by comparison with other methods in application to a complex system. Several applications of network models to practical problems are described.

  12. Chemistry Teachers' Knowledge and Application of Models

    Science.gov (United States)

    Wang, Zuhao; Chi, Shaohui; Hu, Kaiyan; Chen, Wenting

    2014-01-01

    Teachers' knowledge and application of models play an important role in students' development of modeling ability and scientific literacy. In this study, we investigated Chinese chemistry teachers' knowledge and application of models. Data were collected through a test questionnaire and analyzed quantitatively and qualitatively. The…

  13. Novel applications of the dispersive optical model

    CERN Document Server

    Dickhoff, W H; Mahzoon, M H

    2016-01-01

    A review of recent developments of the dispersive optical model (DOM) is presented. Starting from the original work of Mahaux and Sartor, several necessary steps are developed and illustrated which increase the scope of the DOM, allowing its interpretation as generating an experimentally constrained functional form of the nucleon self-energy. The method could therefore be renamed as the dispersive self-energy method. The aforementioned steps include the introduction of simultaneous fits of data for chains of isotopes or isotones allowing a data-driven extrapolation for the prediction of scattering cross sections and level properties in the direction of the respective drip lines. In addition, the energy domain for data was enlarged to include results up to 200 MeV where available. An important application of this work was implemented by employing these DOM potentials to the analysis of the (d,p) transfer reaction using the adiabatic distorted wave approximation (ADWA). We review the fully non-local DOM...

  14. Unsteady aerodynamics modeling for flight dynamics application

    Science.gov (United States)

    Wang, Qing; He, Kai-Feng; Qian, Wei-Qi; Zhang, Tian-Jiao; Cheng, Yan-Qing; Wu, Kai-Yuan

    2012-02-01

    In view of engineering application, it is practicable to decompose the aerodynamics into three components: the static aerodynamics, the aerodynamic increment due to steady rotations, and the aerodynamic increment due to unsteady separated and vortical flow. The first and the second components can be presented in conventional forms, while the third is described using a one-order differential equation and a radial-basis-function (RBF) network. For an aircraft configuration, the mathematical models of 6-component aerodynamic coefficients are set up from the wind tunnel test data of pitch, yaw, roll, and coupled yaw-roll large-amplitude oscillations. The flight dynamics of an aircraft is studied by the bifurcation analysis technique in the case of quasi-steady aerodynamics and unsteady aerodynamics, respectively. The results show that: (1) unsteady aerodynamics has no effect upon the existence of trim points, but affects their stability; (2) unsteady aerodynamics has great effects upon the existence, stability, and amplitudes of periodic solutions; and (3) unsteady aerodynamics changes the stable regions of trim points obviously. Furthermore, the dynamic responses of the aircraft to elevator deflections are inspected. It is shown that the unsteady aerodynamics is beneficial to dynamic stability for the present aircraft. Finally, the effects of unsteady aerodynamics on the post-stall maneuverability are analyzed by numerical simulation.

  15. Unsteady aerodynamics modeling for flight dynamics application

    Institute of Scientific and Technical Information of China (English)

    Qing Wang; Kai-Feng He; Wei-Qi Qian; Tian-Jiao Zhang; Yan-Qing Cheng; Kai-Yuan Wu

    2012-01-01

    In view of engineering application, it is practicable to decompose the aerodynamics into three components: the static aerodynamics, the aerodynamic increment due to steady rotations, and the aerodynamic increment due to unsteady separated and vortical flow. The first and the second components can be presented in conventional forms, while the third is described using a one-order differential equation and a radial-basis-function (RBF) network. For an aircraft configuration, the mathematical models of 6-component aerodynamic coefficients are set up from the wind tunnel test data of pitch, yaw, roll, and coupled yaw-roll large-amplitude oscillations. The flight dynamics of an aircraft is studied by the bifurcation analysis technique in the case of quasi-steady aerodynamics and unsteady aerodynamics, respectively. The results show that: (1) unsteady aerodynamics has no effect upon the existence of trim points, but affects their stability; (2) unsteady aerodynamics has great effects upon the existence, stability, and amplitudes of periodic solutions; and (3) unsteady aerodynamics changes the stable regions of trim points obviously. Furthermore, the dynamic responses of the aircraft to elevator deflections are inspected. It is shown that the unsteady aerodynamics is beneficial to dynamic stability for the present aircraft. Finally, the effects of unsteady aerodynamics on the post-stall maneuverability are analyzed by numerical simulation.

  16. GOCE Exploitation for Moho Modeling and Applications

    Science.gov (United States)

    Sampietro, D.

    2011-07-01

    New ESA missions dedicated to the observation of the Earth from space, like the gravity-gradiometry mission GOCE and the radar altimetry mission CRYOSAT 2, foster research, among other subjects, also on inverse gravimetric problems and on the description of the nature and the geographical location of gravimetric signals. In this framework the GEMMA project (GOCE Exploitation for Moho Modeling and Applications), funded by the European Space Agency and Politecnico di Milano, aims at estimating the boundary between Earth's crust and mantle (the so-called Mohorovičić discontinuity or Moho) from GOCE data in key regions of the world. In the project a solution based on a simple two-layer model in spherical approximation is proposed. This inversion problem, based on the linearization of Newton's gravitational law around an approximate mean Moho surface, will be solved by exploiting Wiener-Kolmogorov theory in the frequency domain, where the depth of the Moho discontinuity is treated as a random signal with zero mean and its own covariance function. The algorithm can be applied in a numerically efficient way by using the Fast Fourier Transform. As for the gravity observations, we will consider grids of the anomalous gravitational potential and its second radial derivative at satellite altitude. In particular this will require first of all processing GOCE data to obtain a local grid of the gravitational potential field and its second radial derivative, and after that separating the gravimetric signal due to the considered discontinuity from the gravitational effects of other geological structures present in the observations. The first problem can be solved by applying the so-called space-wise approach to GOCE observations, while the second one can be achieved by considering a priori models and geophysical information by means of an appropriate Bayesian technique. Moreover other data such as ground gravity anomalies or seismic profiles can be combined, in an

  17. Alternatives to F-Test in One Way ANOVA in case of heterogeneity of variances (a simulation study)

    Directory of Open Access Journals (Sweden)

    Karl Moder

    2010-12-01

    Several articles deal with the effects of inhomogeneous variances in one-way analysis of variance (ANOVA). A very early investigation of this topic was done by Box (1954). He supposed that in balanced designs with moderate heterogeneity of variances, deviations of the empirical type I error rate (the realized α based on experiments) from the nominal one (the predefined α) for H0 are small. Similar conclusions are drawn by Wellek (2003). For less moderate heterogeneity (e.g. σ1:σ2:... = 3:1:...), Moder (2007) showed that the empirical type I error rate is far beyond the nominal one, even with balanced designs. In unbalanced designs the difficulties get bigger. Several attempts have been made to get over this problem. One proposal is to use a more stringent α level (e.g. 2.5% instead of 5%) (Keppel & Wickens, 2004). Another recommended remedy is to transform the original scores by square root, log, and other variance-reducing functions (Keppel & Wickens, 2004; Heiberger & Holland, 2004). Some authors suggest the use of rank-based alternatives to the F-test in analysis of variance (Vargha & Delaney, 1998). Only a few articles deal with two- or multi-factorial designs. There is some evidence that in a two- or multi-factorial design the type I error rate is approximately met if the number of levels tends to infinity for a certain factor while the number of levels is fixed for the other factors (Akritas & S., 2000; Bathke, 2004). The goal of this article is to find an appropriate location test for a one-way analysis of variance situation with inhomogeneous variances, for balanced and unbalanced designs, based on a simulation study.
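
    In the spirit of the simulations cited above, the following sketch estimates the empirical type I error rate of the classical F-test under H0 with a 3:1:1 standard deviation ratio, for one balanced and one unbalanced layout; the sample sizes and replication count are illustrative choices, not the article's settings.

```python
# Monte Carlo estimate of the F-test's realized alpha under variance
# heterogeneity (sigma ratio 3:1:1); all simulation settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def type1_rate(ns, sigmas, reps=5000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        groups = [rng.normal(0.0, s, n) for n, s in zip(ns, sigmas)]
        if stats.f_oneway(*groups).pvalue < alpha:   # false rejection of H0
            hits += 1
    return hits / reps

print("balanced:  ", type1_rate([10, 10, 10], [3, 1, 1]))
print("unbalanced:", type1_rate([5, 10, 15], [3, 1, 1]))  # typically worse
```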

  18. Plant growth and architectural modelling and its applications

    OpenAIRE

    Guo, Yan; Fourcaud, Thierry; Jaeger, Marc; Zhang, Xiaopeng; Li, Baoguo

    2011-01-01

    Over the last decade, a growing number of scientists around the world have invested in research on plant growth and architectural modelling and applications (often abbreviated to plant modelling and applications, PMA). By combining physical and biological processes, spatially explicit models have shown their ability to help in understanding plant–environment interactions. This Special Issue on plant growth modelling presents new information on this topic, which is summarized in this preface.

  19. Application of simulation models for the optimization of business processes

    Science.gov (United States)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with applications of modeling and simulation tools to the optimization of business processes, especially the optimization of signal flow in a security company. Simul8 was selected as the modeling tool; it supports process modeling based on discrete-event simulation and enables the creation of a visual model of production and distribution processes.
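
    A comparable discrete-event model can be sketched in Python with the open-source SimPy package instead of Simul8; the signal arrival and handling rates below are invented for illustration.

```python
# Minimal discrete-event sketch: incoming security signals queue for a
# single operator. Rates are invented; SimPy stands in for Simul8 here.
import random
import simpy

random.seed(1)

def signal_source(env, operator):
    for i in range(8):
        yield env.timeout(random.expovariate(1 / 4.0))   # ~4 min between signals
        env.process(handle(env, operator, f"signal-{i}"))

def handle(env, operator, name):
    arrived = env.now
    with operator.request() as req:                      # queue for the operator
        yield req                                        # granted when free
        print(f"{name}: waited {env.now - arrived:.1f} min")
        yield env.timeout(random.expovariate(1 / 3.0))   # ~3 min handling time
    # resource is released automatically on leaving the 'with' block

env = simpy.Environment()
operator = simpy.Resource(env, capacity=1)
env.process(signal_source(env, operator))
env.run()
```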

  20. Surface Flux Modeling for Air Quality Applications

    Directory of Open Access Journals (Sweden)

    Limei Ran

    2011-08-01

    For many gases and aerosols, dry deposition is an important sink of atmospheric mass. Dry deposition fluxes are also important sources of pollutants to terrestrial and aquatic ecosystems. The surface fluxes of some gases, such as ammonia, mercury, and certain volatile organic compounds, can be upward into the air as well as downward to the surface and therefore should be modeled as bi-directional fluxes. Model parameterizations of dry deposition in air quality models have been represented by simple electrical resistance analogs for almost 30 years. Uncertainties in surface flux modeling in global to mesoscale models are being slowly reduced as more field measurements provide constraints on parameterizations. However, at the same time, more chemical species are being added to surface flux models as air quality models are expanded to include more complex chemistry and are being applied to a wider array of environmental issues. Since surface flux measurements of many of these chemicals are still lacking, resistances are usually parameterized using simple scaling by water or lipid solubility and reactivity. Advances in recent years have included bi-directional flux algorithms that require a shift from pre-computation of deposition velocities to fully integrated surface flux calculations within air quality models. Improved modeling of the stomatal component of chemical surface fluxes has resulted from improved evapotranspiration modeling in land surface models and closer integration between meteorology and air quality models. Satellite-derived land use characterization and vegetation products and indices are improving model representation of spatial and temporal variations in surface flux processes. This review describes the current state of chemical dry deposition modeling, recent progress in bi-directional flux modeling, synergistic model development research with field measurements, and coupling with meteorological land surface models.
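
    The resistance analogy mentioned above can be stated in a few lines: the deposition velocity is the reciprocal of the sum of the serial resistances. The resistance values below are illustrative only, not taken from any particular parameterization.

```python
# Worked example of the electrical-resistance analogy: deposition velocity
# as the inverse of serial resistances. All values are illustrative.
def deposition_velocity(r_a, r_b, r_c):
    """v_d = 1 / (Ra + Rb + Rc); resistances in s/m, result in m/s."""
    return 1.0 / (r_a + r_b + r_c)

# aerodynamic, quasi-laminar boundary-layer, and surface (canopy) resistances
print(deposition_velocity(r_a=30.0, r_b=10.0, r_c=100.0))  # ~0.007 m/s
```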

  1. Photonic crystal fiber modelling and applications

    DEFF Research Database (Denmark)

    Bjarklev, Anders Overgaard; Broeng, Jes; Libori, Stig E. Barkou;

    2001-01-01

    Photonic crystal fibers having a microstructured air-silica cross section offer new optical properties compared to conventional fibers for telecommunication, sensor, and other applications. Recent advances within research and development of these fibers are presented.

  2. Spectral and chromatographic fingerprinting with analysis of variance-principal component analysis (ANOVA-PCA): a useful tool for differentiating botanicals and characterizing sources of variance

    Science.gov (United States)

    Objectives: Spectral fingerprints, acquired by direct injection (no separation) mass spectrometry (DI-MS) or liquid chromatography with UV detection (HPLC), in combination with ANOVA-PCA, were used to differentiate 15 powders of botanical materials. Materials and Methods: Powders of 15 botanical mat...
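
    A rough sketch of the ANOVA-PCA idea under synthetic data: the centered data matrix is split into a factor-means part plus residuals, and PCA is run on the factor effect with the residual error added back. The group structure, dimensions, and effect size below are invented; with several factors, each factor-mean matrix would get its own PCA in the same way.

```python
# Hedged ANOVA-PCA sketch on synthetic spectral fingerprints.
import numpy as np

rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 10)                  # 3 botanicals, 10 spectra each
X = rng.normal(size=(30, 40))
X[labels == 2] += 0.8                              # one group differs (invented)

Xc = X - X.mean(axis=0)                            # centre on the grand mean
means = np.vstack([Xc[labels == g].mean(axis=0) for g in (0, 1, 2)])[labels]
resid = Xc - means                                 # within-group variation
print("factor SS / residual SS:", (means**2).sum() / (resid**2).sum())

# PCA (via SVD) of factor effect + residuals; for this single factor it
# coincides with PCA of the centred data, but the decomposition generalizes.
U, s, Vt = np.linalg.svd(means + resid, full_matrices=False)
pc1 = (means + resid) @ Vt[0]
for g in (0, 1, 2):
    print(f"group {g}: mean PC1 score = {pc1[labels == g].mean():+.2f}")
```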

  3. Pinna Model for Hearing Instrument Applications

    DEFF Research Database (Denmark)

    Kammersgaard, Nikolaj Peter Iversen; Kvist, Søren Helstrup; Thaysen, Jesper;

    2014-01-01

    A novel model of the pinna (outer ear) is presented. This is to increase the understanding of the effect of the pinna on the on-body radiation pattern of an antenna placed inside the ear. Simulations of the model and of a realistically shaped ear are compared to validate the model. The radiation ...

  4. The Nomad Model: Theory, Developments and Applications

    NARCIS (Netherlands)

    Campanella, M.; Hoogendoorn, S.P.; Daamen, W.

    2014-01-01

    This paper presents details of the developments of the Nomad model after being introduced more than 12 years ago. The model is derived from a normative theory of pedestrian behavior making it unique under microscopic models. Nomad has been successfully applied in several cases indicating that it ful

  5. Model Checking-Based Testing of Web Applications

    Institute of Scientific and Technical Information of China (English)

    ZENG Hongwei; MIAO Huaikou

    2007-01-01

    A formal model representing the navigation behavior of a Web application as a Kripke structure is proposed, and an approach that applies model checking to test case generation is presented. The Object Relation Diagram, as the object model, is employed to describe the object structure of a Web application design and can be translated into the behavior model. A key problem of model checking-based test generation for a Web application is how to construct a set of trap properties intended to cause violations of model checking against the behavior model, so that the counterexamples output can be used to construct the test sequences. We give an algorithm that derives trap properties from the object model with respect to node and edge coverage criteria.

  6. Wealth distribution models: analysis and applications

    Directory of Open Access Journals (Sweden)

    Camilo Dagum

    2008-03-01

    After Pareto developed his Type I model in 1895, a large number of income distribution models were specified. However, the important issue of wealth distribution attracted the attention of researchers more than sixty years later. It started with the contributions by Wold and Whittle, and by Sargan, both published in 1957. The former authors proposed the Pareto Type I model and the latter the lognormal distribution, but they did not empirically validate them. Afterward, other models were proposed: in 1969 the Pareto Types I and II by Stiglitz; in 1975, the loglogistic by Atkinson and the Pearson Type V by Vaughan. In 1990 and 1994, Dagum developed a general model and his Type II as models of wealth distribution. They were validated with real-life data from the U.S.A., Canada, Italy and the U.K. In 1999, Dagum further developed his general model of net wealth distribution with support (−∞, +∞), which contains, as particular cases, his Types I and II models of income and wealth distributions. This study presents and analyzes the proposed models of wealth distribution and their properties. The only model with the flexibility, power, and economic and stochastic foundations to accurately fit net and total wealth distributions is the Dagum general model and its particular cases, as validated with the case studies of Ireland, the U.K., Italy and the U.S.A.

  7. Nonlinear dynamics new directions models and applications

    CERN Document Server

    Ugalde, Edgardo

    2015-01-01

    This book, along with its companion volume, Nonlinear Dynamics New Directions: Theoretical Aspects, covers topics ranging from fractal analysis to very specific applications of the theory of dynamical systems to biology. This second volume contains mostly new applications of the theory of dynamical systems to both engineering and biology. The first volume is devoted to fundamental aspects and includes a number of important new contributions as well as some review articles that emphasize new development prospects. The topics addressed in the two volumes include a rigorous treatment of fluctuations in dynamical systems, topics in fractal analysis, studies of the transient dynamics in biological networks, synchronization in lasers, and control of chaotic systems, among others. This book also develops applications of nonlinear dynamics on a diversity of topics such as patterns of synchrony in neuronal networks, laser synchronization, control of chaotic systems, and the study of transient dynam...

  8. Optical Coherence Tomography: Modeling and Applications

    DEFF Research Database (Denmark)

    Thrane, Lars

    An analytical model is presented that is able to describe the performance of OCT systems in both the single and multiple scattering regimes simultaneously. This model inherently includes the shower curtain effect, well known for light propagation through the atmosphere. This effect has been omitted in previous theoretical models of OCT systems. It is demonstrated that the shower curtain effect is of utmost importance in the theoretical description of an OCT system. The analytical model, together with proper noise analysis of the OCT system, enables calculation of the SNR, where the optical properties of the tissue are taken into account. Furthermore, by using the model, it is possible to determine the lateral resolution of OCT systems at arbitrary depths in the scattering tissue. During the Ph.D. thesis project, an OCT system has been constructed, and the theoretical model is verified experimentally using...

  9. Non-linear models: applications in economics

    OpenAIRE

    Albu, Lucian-Liviu

    2006-01-01

    The study concentrates on demonstrating how non-linear modelling can be useful for investigating the behaviour of dynamic economic systems. Using adequate non-linear models could be a good way to find more refined solutions to currently unsolved problems or ambiguities in economics. Beginning with a short presentation of the simplest non-linear models, we then demonstrate how the dynamics of complex systems, such as the economic system, could be explained on the basis of some more advan...

  10. New advances in statistical modeling and applications

    CERN Document Server

    Santos, Rui; Oliveira, Maria; Paulino, Carlos

    2014-01-01

    This volume presents selected papers from the XIXth Congress of the Portuguese Statistical Society, held in the town of Nazaré, Portugal, from September 28 to October 1, 2011. All contributions were selected after a thorough peer-review process. It covers a broad range of papers in the areas of statistical science, probability and stochastic processes, extremes and statistical applications.

  11. Nuclear reaction modeling, verification experiments, and applications

    Energy Technology Data Exchange (ETDEWEB)

    Dietrich, F.S.

    1995-10-01

    This presentation summarized the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scatter Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of the spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

  12. Human hand modelling: kinematics, dynamics, applications

    NARCIS (Netherlands)

    Gustus, A.; Stillfried, G.; Visser, J.; Jörntell, H.; Van der Smagt, P.

    2012-01-01

    An overview of mathematical modelling of the human hand is given. We consider hand models from a specific background: rather than studying hands for surgical or similar goals, we aim to provide a set of tools with which human grasping and manipulation capabilities can be studied, and hand funct

  13. The DES-Model and Its Applications

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik

    This report describes the use of the Danish Energy System (DES) Model, which has been used for several years as the most comprehensive model for the energy planning. The structure of the Danish energy system is described, and a number of energy system parameters are explained, in particular the efficiencies and marginal costs of combined heat and power (CHP).

  14. Sparse Multivariate Modeling: Priors and Applications

    DEFF Research Database (Denmark)

    Henao, Ricardo

    to use them as hypothesis generating tools. All of our models start from a family of structures, for instance factor models, directed acyclic graphs, classifiers, etc. Then we let them be selectively sparse as a way to provide them with structural flexibility and interpretability. Finally, we complement

  15. Modeling of Nuclear Electric Propulsion System for Naval Application

    Energy Technology Data Exchange (ETDEWEB)

    Halimi, B.; Suh, K. Y. [Seoul National University, Seoul (Korea, Republic of)]

    2009-10-15

    In a number of applications it is required to work for long periods of time on the ocean, where the supply of fuel is complicated and sometimes impossible. Moreover, high efficiency and compactness are other important requirements in naval applications. Therefore, an integrated nuclear electric propulsion system is the best choice to meet all of these requirements. In this paper, a model of nuclear electric propulsion for naval application is presented. The model adopts a long-term power system dynamics model to represent the dynamics of the nuclear power part.

  16. The Geometric Modelling of Furniture Parts and Its Application

    Institute of Scientific and Technical Information of China (English)

    张福炎; 蔡士杰; 王玉兰; 居正文

    1989-01-01

    In this paper, a 3-D solid modelling method appropriate for the design of furniture parts, which has been used in the FCAD (Computer Aided Design for Furniture Structure) system, is introduced. Some interactive functions for modifying part models and deriving a variety of practical parts are described. Finally, the prospects for applying the modelling method to computer-aided manufacturing of furniture parts are discussed.

  17. Modeling Students' Memory for Application in Adaptive Educational Systems

    Science.gov (United States)

    Pelánek, Radek

    2015-01-01

    Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory settings under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…

  18. Advanced Applications of Structural Equation Modeling in Counseling Psychology Research

    Science.gov (United States)

    Martens, Matthew P.; Haase, Richard F.

    2006-01-01

    Structural equation modeling (SEM) is a data-analytic technique that allows researchers to test complex theoretical models. Most published applications of SEM involve analyses of cross-sectional recursive (i.e., unidirectional) models, but it is possible for researchers to test more complex designs that involve variables observed at multiple…

  19. Multilevel Models: Conceptual Framework and Applicability

    Directory of Open Access Journals (Sweden)

    Roxana-Otilia-Sonia Hrițcu

    2015-10-01

    Individuals and the social or organizational groups they belong to can be viewed as a hierarchical system situated on different levels. Individuals are situated on the first level of the hierarchy and are nested together on the higher levels. Individuals interact with the social groups they belong to and are influenced by these groups. Traditional methods that study relationships in data, like simple regression, do not take into account the hierarchical structure of the data and the effects of group membership, and hence results may be invalidated. Unlike standard regression modelling, the multilevel approach takes into account the individuals as well as the groups to which they belong. To take advantage of multilevel analysis it is important to recognize the multilevel characteristics of the data. In this article we introduce the outlines of multilevel data and we describe the models that work with such data. We introduce the basic multilevel model, the two-level model: students can be nested into classes, individuals into countries, and the general two-level model can be extended very easily to several levels. Multilevel analysis has begun to be extensively used in many research areas. We present the most frequent study areas where multilevel models are used, such as sociological studies, education, psychological research, health studies, demography, epidemiology, biology, environmental studies and entrepreneurship. We support the idea that, since hierarchies exist everywhere, multilevel data should be recognized and analyzed properly by using multilevel modelling.
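
    A minimal two-level example along these lines, assuming simulated students-in-classes data, can be fitted as a random-intercept model with statsmodels MixedLM:

```python
# Hedged sketch: random-intercept model for students nested in classes.
# All data are simulated; only the modelling pattern is the point here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
classes = np.repeat(np.arange(20), 25)            # 20 classes x 25 students
class_effect = rng.normal(0, 2, 20)[classes]      # level-2 (group) variation
hours = rng.uniform(0, 10, classes.size)          # level-1 predictor
score = 50 + 3 * hours + class_effect + rng.normal(0, 5, classes.size)

df = pd.DataFrame({"score": score, "hours": hours, "cls": classes})
fit = smf.mixedlm("score ~ hours", df, groups=df["cls"]).fit()
print(fit.summary())   # fixed slope for hours plus class-level variance
```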

  20. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered (section 1.2), and the solution for obtaining the steady-state response when using lumped-parameter models is given (section 1.2).

  1. Modelling of Tape Casting for Ceramic Applications

    DEFF Research Database (Denmark)

    Jabbari, Masoud

    Functional ceramics find use in many different applications of great interest, e.g. thermal barrier coatings, piezoactuators, capacitors, solid oxide fuel cells and electrolysis cells, membranes, and filters. It is often the case that the performance of a ceramic component can be increased markedly ... of functional ceramics research. Advances in ceramic forming have enabled low cost shaping techniques such as tape casting and extrusion to be used in some of the most challenging technologies. These advances allow the design of complex components adapted to desired specific properties and applications. However, there is still only very limited insight into the processes determining the final properties of such components. Hence, the aim of the present PhD project is to obtain the required knowledge basis for the optimized processing of multi-material functional ceramics components. Recent efforts in the domain...

  2. Computational modeling of nanomaterials for biomedical applications

    OpenAIRE

    Verkhovtsev, Alexey

    2016-01-01

    Nanomaterials, i.e., materials that are manufactured at a very small spatial scale, can possess unique physical and chemical properties and exhibit novel characteristics as compared to the same material without nanoscale features. The reduction of size down to the nanometer scale leads to the abundance of potential applications in different fields of technology. For instance, tailoring the physicochemical properties of nanomaterials for modification of their interaction with a biological envi...

  3. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  4. Weighted Semiparameter Model and Its Application

    Directory of Open Access Journals (Sweden)

    Zhengqing Fu

    2014-01-01

    A weighted semiparameter estimate model is proposed. The parameter components and nonparameter components are weighted, and the weights are determined by the characteristics of the data. Simulated data and real GPS data are both processed with the new model and with the least squares, ridge, and semiparameter estimates. The main research method is to combine qualitative analysis and quantitative analysis. The deviations between the estimated values and the true values, together with the fluctuation of the estimated residuals of the different methods, are used for the qualitative analysis; the mean square error is used for the quantitative analysis. The experimental results show that the model has the smallest residual error and the minimum mean square error, so the weighted semiparameter estimate model is effective and highly precise.

  5. The DES-model and its applications

    International Nuclear Information System (INIS)

    This report describes the use of the Danish Energy System (DES) Model, which has been used for several years as the most comprehensive model for the energy planning. The structure of the Danish energy system is described, and a number of energy system parameters are explained, in particular the efficiencies and marginal costs of combined heat and power (CHP). Some associated models are briefly outlined, and the use of the model is described by examples concerning scenarios for the primary energy requirements and energy system costs up to the year 2000, planned development of the power and heating systems, assessment of nuclear power, and effects of changes in the energy supply system on the emissions of SO2 and NOx. (author)

  6. Mathematical modeling and applications in nonlinear dynamics

    CERN Document Server

    Merdan, Hüseyin

    2016-01-01

    The book covers nonlinear physical problems and mathematical modeling, including molecular biology, genetics, neurosciences, artificial intelligence with classical problems in mechanics and astronomy and physics. The chapters present nonlinear mathematical modeling in life science and physics through nonlinear differential equations, nonlinear discrete equations and hybrid equations. Such modeling can be effectively applied to the wide spectrum of nonlinear physical problems, including the KAM (Kolmogorov-Arnold-Moser) theory, singular differential equations, impulsive dichotomous linear systems, analytical bifurcation trees of periodic motions, and almost or pseudo-almost periodic solutions in nonlinear dynamical systems. Provides methods for mathematical models with switching, thresholds, and impulses, each of particular importance for discontinuous processes. Includes qualitative analysis of behaviors of tumor-immune systems and methods of analysis for DNA, neural networks and epidemiology. Introduces...

  7. Stochastic properties of generalised Yule models, with biodiversity applications.

    Science.gov (United States)

    Gernhard, Tanja; Hartmann, Klaas; Steel, Mike

    2008-11-01

    The Yule model is a widely used speciation model in evolutionary biology. Despite its simplicity many aspects of the Yule model have not been explored mathematically. In this paper, we formalise two analytic approaches for obtaining probability densities of individual branch lengths of phylogenetic trees generated by the Yule model. These methods are flexible and permit various aspects of the trees produced by Yule models to be investigated. One of our methods is applicable to a broader class of evolutionary processes, namely the Bellman-Harris models. Our methods have many practical applications including biodiversity and conservation related problems. In this setting the methods can be used to characterise the expected rate of biodiversity loss for Yule trees, as well as the expected gain of including the phylogeny in conservation management. We briefly explore these applications.
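
    A Monte Carlo sketch complementing such analytic results: grow Yule trees lineage by lineage and collect the pendant (leaf) branch lengths at the moment the tree reaches n tips. The rate, tree size, and stopping convention here are our illustrative choices, not the paper's.

```python
# Simulate pendant branch lengths of Yule (pure-birth) trees.
import numpy as np

rng = np.random.default_rng(7)

def yule_pendant_lengths(n_tips, lam=1.0):
    t = 0.0
    birth = [0.0]                        # time each current lineage began
    while len(birth) < n_tips:
        t += rng.exponential(1 / (lam * len(birth)))  # next speciation event
        i = rng.integers(len(birth))     # uniformly chosen lineage splits
        birth[i] = t                     # both daughters start now
        birth.append(t)
    # pendant lengths at the stopping instant (the two newest are zero;
    # finer conditioning choices are what the analytic treatment addresses)
    return t - np.asarray(birth)

samples = np.concatenate([yule_pendant_lengths(20) for _ in range(2000)])
print("mean pendant branch length:", samples.mean())
```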

  8. HTGR Application Economic Model Users' Manual

    Energy Technology Data Exchange (ETDEWEB)

    A.M. Gandrik

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR) or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the model. The model was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
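
    The core IRR calculation that such a model automates can be sketched as a bisection search for the rate at which the net present value of a cash-flow series vanishes; the cash flows below are invented, not the model's inputs.

```python
# Illustrative IRR solver: find the discount rate at which NPV = 0.
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.9, hi=1.0, tol=1e-8):
    while hi - lo > tol:             # bisection: NPV decreases in the rate
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid                 # root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-4000] + [350] * 30         # invented: plant cost, then 30 years of sales
print(f"IRR = {irr(flows):.2%}")
```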

  9. Application of Chebyshev Polynomial to simulated modeling

    Institute of Scientific and Technical Information of China (English)

    CHI Hai-hong; LI Dian-pu

    2006-01-01

    Chebyshev polynomials are widely used in many fields, usually as function approximations in numerical calculation. In this paper, a Chebyshev polynomial expression for the propeller properties across four quadrants is given first; then the Chebyshev expression is transformed into an ordinary polynomial, as needed for the simulation of propeller dynamics. On this basis, the dynamical models of the propeller across four quadrants are given. The simulation results show the efficiency of the mathematical model.
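
    The two steps described above, fitting a Chebyshev series and converting it to an ordinary (power-basis) polynomial, can be sketched with numpy.polynomial on a synthetic propeller curve; the test function and degree are our assumptions.

```python
# Fit a Chebyshev series, then convert to ordinary polynomial coefficients.
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

beta = np.linspace(0, 2 * np.pi, 200)               # advance angle, 4 quadrants
ct = 0.4 * np.cos(beta) + 0.1 * np.sin(3 * beta)    # synthetic thrust coefficient

cheb = Chebyshev.fit(beta, ct, deg=9)               # Chebyshev approximation
poly = cheb.convert(kind=Polynomial)                # power-basis polynomial
print("leading power-basis coefficients:", poly.coef[:4])
print("max fit error:", abs(cheb(beta) - ct).max())
```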

  10. Application of DARLAM to Regional Haze Modeling

    Science.gov (United States)

    Koe, L. C. C.; Arellano, A. F., Jr.; McGregor, J. L.

    The CSIRO Division of Atmospheric Research limited area model (DARLAM) is applied to atmospheric transport modeling of haze in southeast Asia. The 1998 haze episode is simulated using an emission inventory derived from hotspot information and adopting removal processes based on SO2. Results show that the model is able to simulate the transport of haze in the region. The model images closely resemble the plumes in NASA Total Ozone Mapping Spectrometer and Meteorological Service Singapore haze maps. Despite the limitations of the input data, particularly for haze emissions, the three-month average pattern correlation obtained for the whole episode is 0.61. The model has also been able to reproduce the general features of transboundary air pollution over a long period of time. The predicted total particulate matter concentration also agrees reasonably well with observation. Differences between the model results and the satellite images may be attributed to the large uncertainties in emissions, the simplification of haze deposition and transformation mechanisms, and the relatively coarse horizontal and vertical resolution adopted for this particular simulation.

  11. Model evaluation methodology applicable to environmental assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
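
    The Latin hypercube suggestion above can be sketched with scipy for a three-factor multiplicative chain with statistically independent lognormal inputs; the distribution parameters below are invented, not the report's.

```python
# LHS sensitivity sketch for a multiplicative chain model y = x1 * x2 * x3.
import numpy as np
from scipy.stats import qmc, lognorm

sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=1000)                 # stratified uniform [0,1)^3 design
sigmas = np.array([0.3, 0.5, 0.2])         # log-scale spread per input (invented)
x = lognorm.ppf(u, s=sigmas)               # map columns to lognormal marginals
y = x.prod(axis=1)                         # multiplicative chain output
print("median =", np.median(y), " 95th percentile =", np.percentile(y, 95))
```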

  12. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes

  13. Advances in Application of Models in Soil Quality Evaluation

    Institute of Scientific and Technical Information of China (English)

    SI Zhi-guo; WANG Ji-jie; YU Yuan-chun; LIANG Guan-feng; CHEN Chang-ren; SHU Hong-lan

    2012-01-01

    Soil quality is a comprehensive reflection of soil properties. Since the soil quality concept was put forward in the 1970s, the quality of different soil types in different regions has been evaluated through a variety of evaluation methods, but universal soil quality evaluation models and methods are still lacking. In this paper, the applications and prospects of the grey relevancy comprehensive evaluation model, the attribute hierarchical model, the fuzzy comprehensive evaluation model, the matter-element model, the RAGA-based PPC/PPE model and GIS models in soil quality evaluation are reviewed.

  14. Recent Applications of Hidden Markov Models in Computational Biology

    Institute of Scientific and Technical Information of China (English)

    Khar Heng Choo; Joo Chuan Tong; Louxin Zhang

    2004-01-01

    This paper examines recent developments and applications of Hidden Markov Models (HMMs) to various problems in computational biology, including multiple sequence alignment, homology detection, protein sequences classification, and genomic annotation.
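
    The recursion at the core of many such applications is Viterbi decoding; below is a compact log-space implementation with an invented two-state "gene/intergenic" toy model, not a model from the paper.

```python
# Compact Viterbi decoder for a discrete-observation HMM.
import numpy as np

def viterbi(obs, start, trans, emit):
    n, k = len(obs), len(start)
    delta = np.zeros((n, k))
    psi = np.zeros((n, k), dtype=int)
    delta[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, n):
        scores = delta[t - 1][:, None] + np.log(trans)  # [i, j]: from i to j
        psi[t] = scores.argmax(axis=0)                  # best predecessor
        delta[t] = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):                       # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# toy model: states 0 = intergenic, 1 = gene; observations 0..3 = A, C, G, T
start = np.array([0.7, 0.3])
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
emit = np.array([[0.30, 0.20, 0.20, 0.30],
                 [0.15, 0.35, 0.35, 0.15]])             # gene is GC-rich here
print(viterbi([1, 2, 2, 1, 0, 3, 0], start, trans, emit))
```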

  15. Expansion of the USDA ARS Aerial Application spray atomization models

    Science.gov (United States)

    An effort is underway to update the USDA ARS aerial spray nozzle models using new droplet sizing instrumentation and measurement techniques. As part of this effort, the applicable maximum airspeed is being increased from 72 to 80 m/s to provide guidance to applicators when using new high speed air...

  16. Using models to determine irrigation applications for water management

    Science.gov (United States)

    Simple models are used by field researchers and production agriculture to estimate crop water use for the purpose of scheduling irrigation applications. These generally take a simple volume-balance approach relying on estimates of soil water holding capacity, irrigation application amounts, pr...
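
    A minimal volume-balance scheduler of the kind described might look as follows; the soil, crop, and weather numbers are purely illustrative assumptions.

```python
# Toy volume-balance irrigation scheduler: track root-zone depletion from
# daily crop ET and rainfall, irrigate past the allowable depletion.
AWC_MM = 60.0        # plant-available water in the root zone (mm), invented
MAD = 0.5            # management allowed depletion fraction
NET_APP_MM = 25.0    # net irrigation application (mm)

depletion = 0.0
daily = [(6, 0), (7, 0), (8, 0), (8, 0), (7, 0), (6, 0)]  # (ET, rain) in mm
for day, (et, rain) in enumerate(daily, start=1):
    depletion = max(0.0, depletion + et - rain)   # daily water balance
    if depletion > MAD * AWC_MM:                  # trigger at 30 mm depleted
        print(f"day {day}: irrigate {NET_APP_MM} mm")
        depletion -= NET_APP_MM
```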

  17. Handbook of mixed membership models and their applications

    CERN Document Server

    Airoldi, Edoardo M; Erosheva, Elena A; Fienberg, Stephen E

    2014-01-01

    In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology.Through examples using real data sets, yo

  18. The Application Model of Moving Objects in Cargo Delivery System

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng-li; ZHOU Ming-tian; XU Bo

    2004-01-01

    The development of spatio-temporal database systems is primarily motivated by applications that track and present mobile objects. In this paper, solutions for establishing a moving object database based on a GPS/GIS environment are presented, and a data model for moving objects is given, using temporal logic to extend the query language. Finally, the application model in a cargo delivery system is shown.

  19. Network models in optimization and their applications in practice

    CERN Document Server

    Glover, Fred; Phillips, Nancy V

    2011-01-01

    Unique in that it focuses on formulation and case studies rather than solution procedures, covering applications for pure, generalized and integer networks, equivalent formulations, plus successful techniques of network models. Every chapter contains a simple model which is expanded to handle more complicated developments, a synopsis of existing applications, one or more case studies, at least 20 exercises and invaluable references. An Instructor's Manual presenting detailed solutions to all the problems in the book is available upon request from the Wiley editorial department.

  1. Recognizing textual entailment models and applications

    CERN Document Server

    Dagan, Ido; Sammons, Mark

    2013-01-01

    In the last few years, a number of NLP researchers have developed and participated in the task of Recognizing Textual Entailment (RTE). This task encapsulates Natural Language Understanding capabilities within a very simple interface: recognizing when the meaning of a text snippet is contained in the meaning of a second piece of text. This simple abstraction of an exceedingly complex problem has broad appeal partly because it can be conceived also as a component in other NLP applications, from Machine Translation to Semantic Search to Information Extraction. It also avoids commitment to any sp

  2. Novel grey forecast model and its application

    Institute of Scientific and Technical Information of China (English)

    丁洪发; 舒双焰; 段献忠

    2003-01-01

    The advancement of grey system theory provides an effective analytic tool for power system load forecasting. All kinds of presently available grey forecast models can be used well for short-term load forecasts. However, they make big errors for medium- or long-term load forecasts, in particular for loads that do not satisfy the approximate exponential increasing law. A novel grey forecast model that is capable of distinguishing the increasing law of the load is adopted to forecast the electric power consumption (EPC) of Shanghai. The results show that this model can greatly improve the forecast precision of EPC for a secondary industry or the whole society.
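
    The classical GM(1,1) model that grey forecast models of this kind extend can be sketched in a few lines; the EPC series below is invented, not Shanghai's data.

```python
# Standard GM(1,1) grey forecast: accumulate, fit grey parameters by least
# squares, evaluate the time-response function, restore by differencing.
import numpy as np

def gm11_forecast(x0, horizon=3):
    x1 = np.cumsum(x0)                                  # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # mean background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey parameters
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    return np.diff(x1_hat, prepend=0.0)                 # back to yearly values

epc = np.array([426, 459, 497, 533, 574], dtype=float)  # invented yearly EPC
print(gm11_forecast(epc)[-3:])                           # next three years
```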

  3. Atmospheric dispersion models for application in relation to radionuclide releases

    International Nuclear Information System (INIS)

    In this document, a state-of-the-art review of dispersion models relevant to local, regional and global scales and applicable to radionuclide discharges of a continuous and discontinuous nature is presented. The theoretical basis of the models is described in chapter 2, while the uncertainty inherent in model predictions is considered in chapter 6. Chapters 3 to 5 of this report describe a number of models for calculating atmospheric dispersion on local, regional and global scales respectively.
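
    A textbook Gaussian plume formula of the kind used for local-scale continuous releases can serve as a worked example; the Briggs-style dispersion coefficients and source parameters below are generic assumptions, not this document's parameterisations.

```python
# Gaussian plume with ground reflection for a continuous elevated release.
import numpy as np

def plume_concentration(x, y, z, Q=1.0, u=5.0, H=50.0):
    """Q release rate (Bq/s), u wind speed (m/s), H effective stack height (m)."""
    sig_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # rough neutral-stability fits
    sig_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    lateral = np.exp(-y**2 / (2 * sig_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sig_z**2))
                + np.exp(-(z + H)**2 / (2 * sig_z**2)))  # image-source reflection
    return Q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

print(plume_concentration(x=1000.0, y=0.0, z=0.0))  # Bq/m^3 at 1 km downwind
```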

  4. Modelling and Generating Ajax Applications: A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008. AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction betw

  5. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture

  6. A cutting force model for micromilling applications

    DEFF Research Database (Denmark)

    Bissacco, Giuliano; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2006-01-01

    In micro milling the maximum uncut chip thickness is often smaller than the cutting edge radius. This paper introduces a new cutting force model for ball nose micro milling that is capable of taking into account the effect of the edge radius.

  7. Sparse modeling theory, algorithms, and applications

    CERN Document Server

    Rish, Irina

    2014-01-01

    ""A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come.""-Ricardo Vilalta, Department of Computer Science, University of Houston""This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course.""-Francis Bach, INRIA - École Normale Supřieure, Paris

  8. Model castings with composite surface layer - application

    Directory of Open Access Journals (Sweden)

    J. Szajnar

    2008-10-01

    Full Text Available The paper presents a method for improving the usable properties of surface layers of cast carbon steel 200–450 by applying, directly in the founding process, a composite surface layer based on a Fe-Cr-C alloy. The composite surface layer technology mainly guarantees an increase in hardness and abrasive wear resistance of cast steel castings used as machine elements. This technology can compete with the generally applied welding technologies (surfacing by welding and thermal spraying). Within the study, cast steel test castings with a composite surface layer were made, and their usability for industrial applications was estimated by the criteria of hardness, abrasive wear resistance of the metal-mineral type, and quality of the cast steel–(Fe-Cr-C) joint. Based on the conducted studies, the thesis that the composite surface layer arises from the liquid state was formulated. Moreover, the thickness and hardness of the composite layer can be controlled by suitable selection of parameters, i.e. insert thickness, pouring temperature and solidification modulus of the casting. The possibility of applying the composite surface layer technology in the manufacture of cast steel slide bushes for combined cutter loaders is presented.

  9. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

    Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous time optimization models, much more. Problems and references at chapter ends. "Excellent introduction." - Journal of the American Statistical Association. Bibliography. 1970 edition.

  10. A universal throw model and its applications

    NARCIS (Netherlands)

    Voort, M.M. van der; Doormaal, J.C.A.M. van; Verolme, E.K.; Weerheijm, J.

    2008-01-01

    A deterministic model has been developed that describes the throw of debris or fragments from a source with an arbitrary geometry and for arbitrary initial conditions. The initial conditions are defined by the distributions of mass, launch velocity and launch direction. The item density in an expose

  11. A marketing model: applications for dietetic professionals.

    Science.gov (United States)

    Parks, S C; Moody, D L

    1986-01-01

    Traditionally, dietitians have communicated the availability of their services to the "public at large." The expectation was that the public would respond favorably to nutrition programs simply because there was a consumer need for them. Recently, however, both societal and consumer needs have changed dramatically, making old communication strategies ineffective and obsolete. The marketing discipline has provided a new model and new decision-making tools for many health professionals to use to more effectively make their services known to multiple consumer groups. This article provides one such model as applied to the dietetic profession. The model explores a definition of the business of dietetics, how to conduct an analysis of the environment, and, finally, the use of both in the choice of new target markets. Further, the model discusses the major components of developing a marketing strategy that will help the practitioner to be competitive in the marketplace. Presented are strategies for defining and re-evaluating the mission of the profession, for using future trends to identify new markets and roles for the profession, and for developing services that make the profession more competitive by better meeting the needs of the consumer.

  12. Deposit 3D modeling and application

    Institute of Scientific and Technical Information of China (English)

    LUO Zhou-quan; LIU Xiao-ming; SU Jia-hong; WU Ya-bin; LIU Wang-ping

    2007-01-01

    By the aid of the international mining software SURPAC, a geologic database for a multi-metal mine was established; 3D models of the surface, geologic faults, ore bodies, cavities and the underground openings were built; and the volume of the mine's cavity was calculated based on the cavity 3D model. In order to compute the reserves, a grade block model was built and the grade of each metal element was estimated using Ordinary Kriging. Then, the reserve of each metal element in every sublevel of the mine was worked out. Finally, the calculated reserve of each metal was compared with its actual prospecting reserve, and the results show that they are almost equal: the errors for the Sn, Pb and Zn reserves are only 1.45%, 1.59% and 1.62%, respectively. The built models are thus reliable and the calculated reserves correct; they can be used to assist the geologic and mining engineers of the mine in reserves estimation, mining design, plan making and so on.
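
    As a rough illustration of the grade-estimation step, here is a minimal ordinary-kriging sketch; the spherical variogram, its parameters, and the sample coordinates and grades are all assumptions for the example, not data from this mine.

        import numpy as np

        def spherical(h, nugget=0.1, sill=1.0, rng=150.0):
            """Spherical variogram model (parameters are illustrative)."""
            h = np.asarray(h, dtype=float)
            g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
            return np.where(h < rng, g, sill)

        def krige(coords, values, target):
            """Ordinary kriging of one target point from n samples."""
            n = len(values)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
            A = np.ones((n + 1, n + 1)); A[:n, :n] = spherical(d); A[n, n] = 0.0
            b = np.ones(n + 1); b[:n] = spherical(np.linalg.norm(coords - target, axis=1))
            w = np.linalg.solve(A, b)      # kriging weights + Lagrange multiplier
            return w[:n] @ values          # estimated grade at the target

        coords = np.array([[0.0, 0.0], [100.0, 20.0], [40.0, 80.0], [90.0, 90.0]])
        sn_grade = np.array([1.2, 0.8, 1.0, 0.6])   # % Sn at the samples (made up)
        print(krige(coords, sn_grade, target=np.array([50.0, 50.0])))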

  13. Adaptable Multivariate Calibration Models for Spectral Applications

    Energy Technology Data Exchange (ETDEWEB)

    THOMAS,EDWARD V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.
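
    The generic intra-object idea can be sketched as follows: center each object's spectra on its own mean so inter-object variation drops out, then fit one pooled model. The use of scikit-learn's PLSRegression, the array shapes, and the random stand-in data are assumptions for illustration, not the paper's protocol.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_objects, n_reps, n_wavelengths = 5, 20, 50
        spectra = rng.normal(size=(n_objects, n_reps, n_wavelengths))  # stand-in spectra
        analyte = rng.normal(size=(n_objects, n_reps))                 # stand-in analyte levels

        # Center within each object so inter-object variation drops out
        Xc = spectra - spectra.mean(axis=1, keepdims=True)
        yc = analyte - analyte.mean(axis=1, keepdims=True)

        pls = PLSRegression(n_components=3)
        pls.fit(Xc.reshape(-1, n_wavelengths), yc.ravel())
        # Adapting the generic model to a new object then reduces to estimating
        # that object's own mean offset from a few reference measurements.
        print(pls.coef_.shape)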

  14. Integrated Safety Culture Model and Application

    Institute of Scientific and Technical Information of China (English)

    汪磊; 孙瑞山; 刘汉辉

    2009-01-01

    A new safety culture model is constructed and applied to analyze the correlations between safety culture and SMS. On the basis of previous typical definitions, models and theories of safety culture, an in-depth analysis of safety culture's structure, composing elements and their correlations was conducted. A new definition of safety culture was proposed from the perspective of sub-culture. Seven types of safety sub-culture were then defined: safety priority culture, standardizing culture, flexible culture, learning culture, teamwork culture, reporting culture and justice culture. An integrated safety culture model (ISCM) was put forward based on this definition. The model divides safety culture into an intrinsic latency level and an extrinsic indication level and explains the potential relationship between the safety sub-cultures and all safety culture dimensions. Finally, in analyzing safety culture and SMS, it is concluded that a positive safety culture is the basis for implementing SMS effectively and that an advanced SMS will improve safety culture all around.

  15. The application of an empowerment model

    NARCIS (Netherlands)

    Molleman, E; van Delft, B; Slomp, J

    2001-01-01

    In this study we applied an empowerment model that focuses on (a) the need for empowerment in light of organizational strategy, (b) job design issues such as job enlargement and job enrichment that facilitate empowerment, and (c) the abilities, and (d) the attitudes of workers that make empowerment

  16. An introduction to queueing theory modeling and analysis in applications

    CERN Document Server

    Bhat, U Narayan

    2015-01-01

    This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a wide interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features: • An introductory chapter including a historical account of the growth of queueing theory over more than 100 years. • A modeling-based approach with emphasis on identification of models. • Rigorous treatment of the foundations of basic models commonly used in applications with appropriate references for advanced topics. • Applications in manufacturing and in computer and communication systems. • A chapter on ...
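
    As a small numerical companion to the kind of basic model such a course starts from, here are the standard steady-state M/M/1 formulas; the arrival and service rates are arbitrary examples, not drawn from the book.

        def mm1_metrics(lam, mu):
            """Steady-state M/M/1 metrics for arrival rate lam < service rate mu."""
            rho = lam / mu                  # server utilization
            L = rho / (1 - rho)             # mean number in system (Little's law: L = lam*W)
            W = 1 / (mu - lam)              # mean time in system
            Wq = rho / (mu - lam)           # mean waiting time in queue
            return {"utilization": rho, "L": L, "W": W, "Wq": Wq}

        print(mm1_metrics(lam=8.0, mu=10.0))   # e.g. 8 arrivals/hr vs 10 served/hr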

  17. Model-Driven Approach for Body Area Network Application Development.

    Science.gov (United States)

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-01-01

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables the adequate measure of QoS to be obtained efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application. PMID:27187394

  18. Model-Driven Approach for Body Area Network Application Development

    Directory of Open Access Journals (Sweden)

    Algimantas Venčkauskas

    2016-05-01

    Full Text Available This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables the adequate measure of QoS to be obtained efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.

  19. Molecular modeling and multiscaling issues for electronic material applications

    CERN Document Server

    Iwamoto, Nancy; Yuen, Matthew; Fan, Haibo

    Volume 1: Molecular Modeling and Multiscaling Issues for Electronic Material Applications provides a snapshot of the progression of molecular modeling in the electronics industry and of how molecular modeling is currently being used to understand material performance and solve relevant issues in this field. This book is intended to introduce the reader to the evolving role of molecular modeling, especially as seen through the eyes of the IEEE community involved in material modeling for electronic applications. Part I presents the role that quantum mechanics can play in performance prediction, such as properties dependent upon electronic structure, but also shows examples of how molecular models may be used in performance diagnostics, especially when chemistry is part of the performance issue. Part II gives examples of large-scale atomistic methods in material failure and shows several examples of transitioning between grain boundary simulations (on the atomistic level) and large-scale models including an example ...

  20. Functional Modelling for Fault Diagnosis and its application for NPP

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined, and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed... The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFM Suite. MFM applications in nuclear power systems are described by two examples, a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about... and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions, which is represented in functional modelling. Multilevel flow modeling (MFM), which is a method for functional modeling, is introduced briefly and illustrated with a cooling system example...

  1. Potential model application and planning issues

    Directory of Open Access Journals (Sweden)

    Christiane Weber

    2000-03-01

    Full Text Available The potential model has been, and remains, a spatial interaction model used for a variety of problems in the human sciences; however, the way Donnay (1997, 1995, 1994) and Binard (1995) used it, introducing image-processing results as the application support, opened the way to innovative applications, for example for delimiting the urban boundary or local hinterlands. The possible articulations between applying the potential model to imagery and using Geographic Information System layers have allowed the temporal evaluation of urban development trends (Weber, 1998). Taking up this idea, the proposed study attempts to identify the forms of urban development of the Urban Community of Strasbourg (CUS), taking into account land use, the characteristics of the communication networks, urban regulations and the environmental constraints that weigh on the study area. The initial land-use state, obtained by statistical processing, is used as input to the potential model in order to obtain potential surfaces associated with specific spatial characteristics: the extension of the urban form, the preservation of natural or agricultural zones, or the regulations. The results are then combined and classified. This application was carried out to confront the method with the actual development of the CUS as determined by a diachronic study comparing satellite images (SPOT 1986 - SPOT 1998). To verify the interest and accuracy of the method, the satellite-derived results were compared with those obtained from the classification of the potential surfaces. The development zones identified with the potential model were confirmed by the results of the temporal analysis of the images. A differentiation of zones into

  2. Adaptive Networks Theory, Models and Applications

    CERN Document Server

    Gross, Thilo

    2009-01-01

    With adaptive, complex networks, the evolution of the network topology and the dynamical processes on the network are equally important and often fundamentally entangled. Recent research has shown that such networks can exhibit a plethora of new phenomena which are ultimately required to describe many real-world networks. Some of those phenomena include robust self-organization towards dynamical criticality, formation of complex global topologies based on simple, local rules, and the spontaneous division of "labor" in which an initially homogeneous population of network nodes self-organizes into functionally distinct classes. These are just a few. This book is a state-of-the-art survey of those unique networks. In it, leading researchers set out to define the future scope and direction of some of the most advanced developments in the vast field of complex network science and its applications.

  3. Mobile Cloud Application Models Facilitated by the CPA

    Directory of Open Access Journals (Sweden)

    Michael J. O’Sullivan

    2015-02-01

    Full Text Available This paper describes implementations of three mobile cloud applications, file synchronisation, intensive data processing, and group-based collaboration, using the Context Aware Mobile Cloud Services middleware and the Cloud Personal Assistant. Both are part of the same mobile cloud project, actively developed and currently in their second version. We describe recent changes to the middleware, along with our experimental results for the three application models. We discuss challenges faced during the development of the middleware and their implications. The paper includes performance analysis of the CPA support for applications with respect to existing solutions where appropriate, and highlights the advantages of these applications with use-cases.

  4. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  5. Fuzzy Stochastic Optimization Theory, Models and Applications

    CERN Document Server

    Wang, Shuming

    2012-01-01

    Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies.   The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...

  6. Automatic Queuing Model for Banking Applications

    Directory of Open Access Journals (Sweden)

    Dr. Ahmed S. A. AL-Jumaily

    2011-08-01

    Full Text Available Queuing is the process of moving customers in a specific sequence to a specific service according to customer need. The term scheduling stands for the process of computing a schedule, which may be done by a queuing-based scheduler. This paper focuses on bank line systems, the different queuing algorithms that banks use to serve their customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing a bank's queue that can analyse the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the test results and the average waiting time factor. The main innovation of this work concerns modeling the average waiting time as part of the processing, together with switching to the scheduling algorithm that gives the best average waiting time.
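
    To make the role of the switching criterion concrete, the toy sketch below compares the average waiting time of first-come-first-served with shortest-job-first on the same hypothetical teller-service durations; it is not the paper's scheduler.

        def avg_wait(service_times):
            """Average waiting time when customers are served in the given order."""
            wait, elapsed = 0.0, 0.0
            for s in service_times:
                wait += elapsed            # each customer waits for all previous service
                elapsed += s
            return wait / len(service_times)

        durations = [12.0, 3.0, 7.0, 2.0, 9.0]         # minutes per customer (made up)
        print("FCFS:", avg_wait(durations))
        print("SJF :", avg_wait(sorted(durations)))    # SJF minimizes the average wait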

  7. Application of an analytical phase transformation model

    Institute of Scientific and Technical Information of China (English)

    LIU Feng; WANG Hai-feng; YANG Chang-lin; CHEN Zheng; YANG Wei; YANG Gen-cang

    2006-01-01

    Employing isothermal and isochronal differential scanning calorimetry, an analytical phase transformation model was used to study the kinetics of crystallization of amorphous Mg82.3Cu17.7 and Pd40Cu30P20Ni10 alloys. The analytical model comprised different combinations of various nucleation and growth mechanisms for a single transformation. Applying different combinations of nucleation and growth mechanisms, the nucleation and growth modes and the corresponding kinetic and thermodynamic parameters have been determined. The influence of isothermal pre-annealing on the subsequent isochronal crystallization kinetics can be analyzed as the amount of pre-annealing increases. The results show that changes of the growth exponent, n, and the effective overall activation energy, Q, occurring as functions of the degree of transformation do not necessarily imply a change of nucleation and growth mechanisms, i.e. such changes can occur while the transformation is isokinetic.
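
    As a hedged illustration of the kinetic quantities involved (the growth exponent n and a rate constant k), this sketch extracts n from synthetic isothermal data via the classical JMAK relation X(t) = 1 - exp(-(kt)^n); the paper's analytical model is more general than this textbook form, and the data points below are invented.

        import numpy as np

        t = np.array([60.0, 120.0, 240.0, 480.0, 960.0])   # s, hypothetical
        X = np.array([0.05, 0.18, 0.52, 0.88, 0.995])      # transformed fraction

        # Avrami plot: ln(-ln(1-X)) = n*ln(t) + n*ln(k)  ->  slope gives n
        y = np.log(-np.log(1.0 - X))
        n, intercept = np.polyfit(np.log(t), y, 1)
        k = np.exp(intercept / n)
        print(f"growth exponent n = {n:.2f}, rate constant k = {k:.2e} 1/s")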

  8. Application of Digital Terrain Model to volcanology

    Directory of Open Access Journals (Sweden)

    V. Achilli

    2006-06-01

    Full Text Available Three-dimensional reconstruction of the ground surface (Digital Terrain Model, DTM), derived from airborne GPS photogrammetric surveys, is a powerful tool for implementing morphological analysis in remote areas. Highly accurate 3D models, with submeter elevation accuracy, can be obtained from images acquired at photo scales between 1:5000 and 1:20000. Multitemporal DTMs acquired periodically over volcanic areas allow the monitoring of areas affected by crustal deformation and the evaluation of mass balance when large instability phenomena or lava flows have occurred. The work describes the results obtained from the analysis of photogrammetric data collected over Vulcano Island from 1971 to 2001. The data, processed by means of the Digital Photogrammetry Workstation DPW 770, provided DTMs with accuracies ranging from a few centimeters to a few decimeters depending on the geometric image resolution, terrain configuration and quality of the photographs.

  9. [Watershed water environment pollution models and their applications: a review].

    Science.gov (United States)

    Zhu, Yao; Liang, Zhi-Wei; Li, Wei; Yang, Yi; Yang, Mu-Yi; Mao, Wei; Xu, Han-Li; Wu, Wei-Xiang

    2013-10-01

    Watershed water environment pollution models are important tools for studying watershed environmental problems. Through quantitative description of the complicated pollution processes of the whole watershed system and its parts, a model can identify the main sources and migration pathways of pollutants, estimate pollutant loadings, and evaluate their impacts on the water environment, providing a basis for watershed planning and management. This paper reviewed the watershed water environment models widely applied at home and abroad, with a focus on models of pollutant loading (GWLF and PLOAD), water quality of receiving water bodies (QUAL2E and WASP), and watershed models integrating pollutant loadings and water quality (HSPF, SWAT, AGNPS, AnnAGNPS, and SWMM), and introduced the structures, principles, and main characteristics of these models as well as their limitations in practical applications. Other water quality models (CE-QUAL-W2, EFDC, and AQUATOX) and watershed models (GLEAMS and MIKE SHE) were also briefly introduced. Through case analysis of the applications of single and integrated models, the development trends and application prospects of the watershed water environment pollution models were discussed.

  10. Nonlinear Inertia Classification Model and Application

    Directory of Open Access Journals (Sweden)

    Mei Wang

    2014-01-01

    Full Text Available The classification model based on the support vector machine (SVM) copes well with large numbers of samples, but the kernel parameter and the punishment factor have great influence on the quality of the SVM model. Particle swarm optimization (PSO) is an evolutionary search algorithm based on swarm intelligence, which is suitable for parameter optimization. Accordingly, a nonlinear inertia convergence classification model (NICCM) is proposed after the nonlinear inertia convergence PSO (NICPSO) is developed in this paper. The velocity of NICPSO is first defined as the weighted velocity of the inertia PSO, and the inertia factor is selected to be a nonlinear function. NICPSO is used to optimize the kernel parameter and the punishment factor of the SVM. The NICCM classifier is then trained using the optimal punishment factor and the optimal kernel parameter that come from the optimal particle. Finally, NICCM is applied to the classification of the normal state and fault states of an online power cable. It is experimentally proved that the number of iterations for the proposed NICPSO to reach the optimal position decreases from 15 to 5 compared with PSO; the training duration is decreased by 0.0052 s and the recognition precision is increased by 4.12% compared with SVM.
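
    A compact sketch of the general idea: a PSO loop tuning the SVM punishment factor C and RBF kernel parameter gamma. The quadratic inertia decay, the search ranges, and the scikit-learn toy dataset are assumptions for illustration; the paper's specific nonlinear inertia function and its cable-fault data are not reproduced.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=200, n_features=8, random_state=0)

        def fitness(p):   # p = [log10(C), log10(gamma)]
            return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()

        rng = np.random.default_rng(0)
        lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 0.0])
        pos = rng.uniform(lo, hi, size=(12, 2))          # 12 particles
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_f.argmax()]

        for it in range(15):
            w = 0.4 + 0.5 * (1.0 - it / 14.0) ** 2       # nonlinear (quadratic) inertia decay
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            f = np.array([fitness(p) for p in pos])
            better = f > pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[pbest_f.argmax()]

        print("best C=%.3g, gamma=%.3g, CV accuracy=%.3f"
              % (10 ** gbest[0], 10 ** gbest[1], pbest_f.max()))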

  11. The Parton Model and its Applications

    CERN Document Server

    Yan, Tung-Mow

    2014-01-01

    This is a review of the program we started in 1968 to understand and generalize Bjorken scaling and Feynman's parton model in a canonical quantum field theory. It is shown that the parton model proposed for deep inelastic electron scatterings can be derived if a transverse momentum cutoff is imposed on all particles in the theory so that the impulse approximation holds. The deep inelastic electron-positron annihilation into a nucleon plus anything else is related by the crossing symmetry of quantum field theory to the deep inelastic electron-nucleon scattering. We have investigated the implication of crossing symmetry and found that the structure functions satisfy a scaling behavior analogous to the Bjorken limit for deep inelastic electron scattering. We then find that massive lepton pair production in collisions of two high energy hadrons can be treated by the parton model with an interesting scaling behavior for the differential cross sections. This turns out to be the first example of a class of hard proc...

  12. Voronoi cell patterns: Theoretical model and applications

    Science.gov (United States)

    González, Diego Luis; Einstein, T. L.

    2011-11-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
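
    The 1D case is easy to simulate directly, which gives a reference size distribution to compare against such a model; the sketch below is standard NumPy on synthetic points, not the authors' fragmentation code.

        import numpy as np

        rng = np.random.default_rng(1)
        pts = np.sort(rng.uniform(0.0, 1.0, 100_000))   # homogeneous points on a line
        mids = 0.5 * (pts[1:] + pts[:-1])               # Voronoi cell boundaries
        sizes = np.diff(mids)                           # interior cell sizes
        sizes = sizes / sizes.mean()                    # normalize to unit mean

        hist, edges = np.histogram(sizes, bins=50, range=(0.0, 4.0), density=True)
        print("variance of normalized cell size:", sizes.var())
        print("density near zero:", hist[0])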

  13. Applications of GARCH models to energy commodities

    Science.gov (United States)

    Humphreys, H. Brett

    This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that while the electric
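
    For readers who want to experiment, a minimal univariate GARCH(1,1) fit in the spirit of the first part is sketched below using the third-party arch package on simulated data; the seasonal-variance and multivariate spillover extensions described above would require custom code, and no thesis data are reproduced.

        import numpy as np
        from arch import arch_model

        # Simulate a GARCH(1,1) return series as stand-in data
        rng = np.random.default_rng(42)
        omega, alpha, beta = 0.05, 0.08, 0.90
        n = 1500
        eps = np.empty(n)
        sigma2 = omega / (1.0 - alpha - beta)            # unconditional variance
        for t in range(n):
            eps[t] = np.sqrt(sigma2) * rng.standard_normal()
            sigma2 = omega + alpha * eps[t] ** 2 + beta * sigma2

        res = arch_model(eps, vol="GARCH", p=1, q=1).fit(disp="off")
        print(res.params)                                # mu, omega, alpha[1], beta[1]
        print(res.forecast(horizon=5).variance.iloc[-1]) # 5-step variance forecast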

  14. Ocean modelling aspects for drift applications

    Science.gov (United States)

    Stephane, L.; Pierre, D.

    2010-12-01

    Nowadays, many authorities in charge of rescue-at-sea operations lean on operational oceanography products to outline search perimeters. Moreover, current fields estimated with sophisticated ocean forecasting systems can be used as input data for oil spill/adrift object fate models. This emphasises the necessity of an accurate sea state forecast with a mastered level of reliability. This work focuses on several problems inherent to drift modeling, dealing in the first place with the efficiency of the oceanic current field representation. As we want to discriminate the relevance of a particular physical process or modeling option, the idea is to generate series of current fields with different characteristics and then qualify them in terms of drift prediction efficiency. Benchmark drift scenarios were set up from real surface drifter data collected in the Mediterranean Sea and off the coasts of Angola. The time and space scales of interest are 72 hr forecasts (the typical timescale communicated in case of crisis), with distance errors that we hope to keep within a few dozen kilometres around the forecast (acceptable for reconnaissance by aircraft). For the ocean prediction, we used regional oceanic configurations based on the NEMO 2.3 code, nested in the Mercator 1/12° operational system. Drift forecasts were computed offline with Mothy (Météo France's oil spill modeling system) and Ariane (B. Blanke, 1997), a Lagrangian diagnostic tool. We were particularly interested in the importance of the horizontal resolution, vertical mixing schemes, and any processes that may impact the surface layer. The aim of the study is ultimately to point at the most suitable set of parameters for drift forecast use inside operational oceanic systems. We are also interested in assessing the relevance of ensemble forecasts compared with deterministic predictions. Several tests showed that mis-described observed trajectories can finally be modelled statistically by using uncertainties

  15. Measurement-based load modeling: Theory and application

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Load model is one of the most important elements in power system operation and control. However, owing to its complexity, load modeling is still an open and very difficult problem. Summarizing our work on measurement-based load modeling in China for more than twenty years, this paper systematically introduces the mathematical theory and applications regarding the load modeling. The flow chart and algorithms for measurement-based load modeling are presented. A composite load model structure with 13 parameters is also proposed. Analysis results based on the trajectory sensitivity theory indicate the importance of the load model parameters for the identification. Case studies show the accuracy of the presented measurement-based load model. The load model thus built has been validated by field measurements all over China. Future working directions on measurement-based load modeling are also discussed in the paper.

  16. Computer-aided modelling template: Concept and application

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    2015-01-01

    ... information and comments on model construction, storage and future use/reuse. The application of the tool is highlighted with a multi-scale modelling case study involving a catalytic membrane fixed bed reactor and a two-phase system for oxidation of unsaturated acid with hydrogen peroxide. Both case studies...

  17. Modelling for Bio-, Agro- and Pharma-Applications

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Singh, Ravendra; Cameron, Ian;

    2011-01-01

    ... approach for meso and microscale partial models. The specific case study of codeine release is examined. As a bio-application, a batch fermentation process is modelled. This involves the generation of a pre-cursor compound for insulin production. The plant involves a number of coupled unit operations...

  18. Models for Decision Making: From Applications to Mathematics... and Back

    OpenAIRE

    Crama, Yves

    2010-01-01

    In this inaugural lecture, I describe some facets of the interplay between mathematics and management science, economics, or engineering, as they come together in operations research models. I intend to illustrate, in particular, the complex and fruitful process through which fundamental combinatorial models find applications in management science, which in turn foster the development of new and challenging mathematical questions.

  19. MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS

    Institute of Scientific and Technical Information of China (English)

    CHAHINE Georges L.; HSIAO Chao-Tsung

    2012-01-01

    Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary, and to avoid deleterious effects, requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to study the dynamics, combining viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models.

  20. Terahertz metamaterials: design, implementation, modeling and applications

    Science.gov (United States)

    Hokmabadi, Mohammad P.; Balci, Soner; Kim, Juhyung; Philip, Elizabath; Rivera, Elmer; Zhu, Muliang; Kung, Patrick; Kim, Seongsin M.

    2016-04-01

    Sub-wavelength metamaterial structures are of great fundamental and practical interest because of their ability to manipulate the propagation of electromagnetic waves. We review here our recent work on the design, simulation, implementation and equivalent circuit modeling of metamaterial devices operating at Terahertz frequencies. THz metamaterials exhibiting plasmon-induced transparency are realized through the hybridization of double split ring resonators on either silicon or flexible polymer substrates and exhibiting slow light properties. THz metamaterials perfect absorbers and stereometamaterials are realized with multifunctional specifications such as broadband absorbing, switching, and incident light polarization selectivity.

  1. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition: "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities." -Technometrics. Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  2. Functional Modelling for Fault Diagnosis and its application for NPP

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined, and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed... The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFM Suite. MFM applications in nuclear power systems are described by two examples, a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about...

  3. Solutions manual to accompany finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    A solutions manual to accompany Finite Mathematics: Models and Applications In order to emphasize the main concepts of each chapter, Finite Mathematics: Models and Applications features plentiful pedagogical elements throughout such as special exercises, end notes, hints, select solutions, biographies of key mathematicians, boxed key principles, a glossary of important terms and topics, and an overview of use of technology. The book encourages the modeling of linear programs and their solutions and uses common computer software programs such as LINDO. In addition to extensive chapters on pr

  4. Simulation and Modeling Application in Agricultural Mechanization

    Directory of Open Access Journals (Sweden)

    R. M. Hudzari

    2012-01-01

    Full Text Available This experiment was conducted to determine the equations relating the Hue digital values of the oil palm fruit surface to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated using the highest-frequency values of the R, G, and B color components from histogram analysis software. A new procedure for monitoring the image pixel values of the oil palm fruit surface color during real-time growth to maturity was developed. The predicted harvesting day was estimated from the developed model relating Hue values to mesocarp oil content. The simulation model is regressed and predicts the day of harvesting, or the number of days before harvest, of FFB. The results of the mesocarp oil content experiments can be used for real-time oil content determination with the MPOB color meter. The graph for determining the day of harvesting the FFB is presented in this research. The oil was found to start developing in the mesocarp at 65 days before the fruit reaches the ripe maturity stage of 75% oil to dry mesocarp.
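
    A sketch of the colour step described above: take the modal R, G and B values from the image histograms and convert them to Hue. The function name and the random stand-in image are hypothetical; real use would load an FFB photograph instead.

        import colorsys
        import numpy as np

        def modal_hue(image):
            """image: (H, W, 3) uint8 RGB array; returns Hue in degrees [0, 360)."""
            # Mode of each channel's histogram, following the described procedure
            r, g, b = (np.bincount(image[..., c].ravel(), minlength=256).argmax()
                       for c in range(3))
            h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            return 360.0 * h

        fake_fruit = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
        print(modal_hue(fake_fruit))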

  5. NUMERICAL MODEL APPLICATION IN ROWING SIMULATOR DESIGN

    Directory of Open Access Journals (Sweden)

    Petr Chmátal

    2016-04-01

    Full Text Available The aim of the research was to carry out a hydraulic design of a rowing/sculling and paddling simulator. Nowadays there are two main approaches in simulator design. The first one uses static water with no artificial movement and counts on specially cut oars to provide the same resistance in the water. The second approach, on the other hand, uses pumps or similar devices to force the water to circulate, but both designs share many problems. Such problems affect already built facilities and can be summarized as an unrealistic feeling, unwanted turbulent flow and a bad velocity profile. Therefore, the goal was to design a new rowing simulator that would provide nature-like conditions for the racers and an unmatched experience. In order to accomplish this challenge, it was decided to use in-depth numerical modeling to solve the hydraulic problems. The general measures for the design were taken in accordance with the space available in the simulator's housing. The entire research was coordinated with other stages of the construction using BIM. The detailed geometry was designed using a numerical model in Ansys Fluent and parametric auto-optimization tools, which led to minimal negative hydraulic phenomena and decreased investment and operational costs due to the decreased hydraulic losses in the system.

  6. Application of Generic Disposal System Models

    Energy Technology Data Exchange (ETDEWEB)

    Mariner, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Glenn Edward [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sevougian, S. David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Emily [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    This report describes specific GDSA activities in fiscal year 2015 (FY2015) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code (Hammond et al., 2011) and the Dakota uncertainty sampling and propagation code (Adams et al., 2013). Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through the engineered barriers and natural geologic barriers to a well location in an overlying or underlying aquifer. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.

  7. Applications of species distribution modeling to paleobiology

    DEFF Research Database (Denmark)

    Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine Ann;

    2011-01-01

    Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i) quantitative and potentially high-resolution predictions of the past organism distributions, (ii) statistically formulated, testable ecological hypotheses regarding past distributions and communities, and (iii) statistical assessment of range determinants. In this article, we provide an overview... the number of studies using SDM to address paleobiology-related questions has increased considerably. While some of these studies only use SDM (23%), most combine them with genetically inferred patterns (49%), paleoecological records (22%), or both (6%). A large number of SDM-based studies have addressed...

  8. An overview of the optimization modelling applications

    Science.gov (United States)

    Singh, Ajay

    2012-10-01

    The optimal use of available resources is of paramount importance against the backdrop of the increasing food, fiber, and other demands of the burgeoning global population and the shrinking resources. The optimal use of these resources can be determined by employing an optimization technique. Comprehensive reviews of the use of various programming techniques for the solution of different optimization problems are provided in this paper. The past reviews are grouped into nine sections based on the solutions of theme-based real-world problems. The sections include: use of optimization modelling for conjunctive use planning, groundwater management, seawater intrusion management, irrigation management, achieving optimal cropping patterns, management of reservoir systems operation, management of resources in arid and semi-arid regions, solid waste management, and miscellaneous uses comprising problems of hydropower generation and the sugar industry. Conclusions are drawn about where gaps exist and where more research needs to be focused.

  9. Top-down enterprise application integration with reference models

    OpenAIRE

    Willem-Jan van den Heuvel; Wilhelm Hasselbring; Mike Papazoglou

    2000-01-01

    For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference ...

  10. Numerical modeling of complex heat transfer phenomena in cooling applications

    OpenAIRE

    Hou, Xiaofei

    2015-01-01

    Multiphase and multicomponent flows are frequently encountered in the cooling applications due to combined heat transfer and phase change phenomena. Two-fluid and homogeneous mixture models are chosen to numerically study these flows in the cooling phenomena. Therefore this work is divided in two main parts. In the first part, a two-fluid model algorithm for free surface flows is presented. The two fluid model is usually used as a tool to simulate dispersed flow. With its extension, it may al...

  11. Copula bivariate probit models: with an application to medical expenditures

    OpenAIRE

    Winkelmann, Rainer

    2011-01-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the "treatment") on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank ...

  12. Dependent Risk Modelling and Ruin Probability: Numerical Computation and Applications

    OpenAIRE

    Zhao, Shouqi

    2014-01-01

    In this thesis, we are concerned with the finite-time ruin probabilities in two alternative dependent risk models, the insurance risk model and the dual risk model, including the numerical evaluation of the explicit expressions for these quantities and the application of the probabilistic results obtained. We first investigate the numerical properties of the formulas for the finite-time ruin probability derived by Ignatov and Kaishev (2000, 2004) and Ignatov et al. (2001) for a generalized in...

  13. Structural Equation Modeling: Theory and Applications in Forest Management

    OpenAIRE

    Tzeng Yih Lam; Douglas A. Maguire

    2012-01-01

    Forest ecosystem dynamics are driven by a complex array of simultaneous cause-and-effect relationships. Understanding this complex web requires specialized analytical techniques such as Structural Equation Modeling (SEM). The SEM framework and implementation steps are outlined in this study, and we then demonstrate the technique by application to overstory-understory relationships in mature Douglas-fir forests in the northwestern USA. A SEM model was formulated with (1) a path model repres...

  14. Dynamic reactor modeling with applications to SPR and ZEDNA.

    Energy Technology Data Exchange (ETDEWEB)

    Suo-Anttila, Ahti Jorma

    2011-12-01

    A dynamic reactor model has been developed for pulse-type reactor applications. The model predicts reactor power, axial and radial fuel expansion, prompt and delayed neutron population, and prompt and delayed gamma population. All model predictions are made as a function of time. The model includes the reactivity effect of fuel expansion on a dynamic timescale as a feedback mechanism for reactor power. All inputs to the model are calculated from first principles, either directly by solving systems of equations, or indirectly from Monte Carlo N-Particle Transport Code (MCNP) derived results. The model does not include any empirical parameters that can be adjusted to match experimental data. Comparisons of model predictions to actual Sandia Pulse Reactor SPR-III pulses show very good agreement for a full range of pulse magnitudes. The model is also applied to Z-pinch externally driven neutron assembly (ZEDNA) type reactor designs to model both normal and off-normal ZEDNA operations.
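
    A reactor model of this kind builds on the standard point-kinetics equations; the sketch below integrates them with one delayed-neutron group and a simple energy-deposition feedback term. All constants are illustrative assumptions, not SPR-III or ZEDNA values, and the real model's first-principles inputs are not reproduced.

        import numpy as np

        beta, Lam, lam = 0.0066, 1e-8, 0.08   # delayed fraction, generation time (s), decay (1/s)
        alpha_E = -1e-5                       # expansion feedback per unit deposited energy

        def pulse(rho0=0.008, dt=1e-7, steps=200_000):
            n = 1.0                           # relative power
            c = beta * n / (Lam * lam)        # equilibrium precursor population
            E = 0.0                           # deposited energy (relative units)
            history = np.empty(steps)
            for i in range(steps):
                rho = rho0 + alpha_E * E      # reactivity with fuel-expansion feedback
                dn = ((rho - beta) / Lam) * n + lam * c
                dc = (beta / Lam) * n - lam * c
                n += dt * dn
                c += dt * dc
                E += n * dt
                history[i] = n
            return history

        power = pulse()
        print("relative peak power:", power.max())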

  15. On Modeling CPU Utilization of MapReduce Applications

    CERN Document Server

    Rizvandi, Nikzad Babaii; Zomaya, Albert Y

    2012-01-01

    In this paper, we present an approach to predict the total CPU utilization, in terms of CPU clock ticks, of applications running on the MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile the total CPU clock ticks of the application on a given platform. In the modeling phase, multiple linear regression is used to map the sets of MapReduce configuration parameters (number of Mappers, number of Reducers, size of File System (HDFS) and the size of input file) to the total CPU clock ticks of the application. This derived model can be used for predicting the total CPU requirements of the same application when using the MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our modeling technique on pseu...
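
    The modeling phase can be sketched directly: an ordinary multiple linear regression from the four configuration parameters to total CPU clock ticks. The profiling rows below are fabricated stand-ins for real profiling runs, not the paper's measurements.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # columns: mappers, reducers, HDFS block size (MB), input size (GB)
        configs = np.array([
            [4,  2,  64, 1.0],
            [8,  2,  64, 2.0],
            [8,  4, 128, 4.0],
            [16, 4, 128, 8.0],
            [16, 8, 256, 8.0],
        ])
        cpu_ticks = np.array([2.1e9, 3.8e9, 7.2e9, 13.9e9, 14.5e9])  # profiled totals

        model = LinearRegression().fit(configs, cpu_ticks)
        print(model.predict([[12, 4, 128, 6.0]]))   # predict a new configuration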

  16. Generalized Additive Modelling of Mixed Distribution Markov Models with Application to Melbourne's Rainfall.

    OpenAIRE

    Hyndman, R. J.; Grunwald, G. K.

    1999-01-01

    We consider modelling time series using a generalized additive model with first-order Markov structure and mixed transition density having a discrete component at zero and a continuous component with positive sample space. Such models have application, for example, in modelling daily occurrence and intensity of rainfall, and in modelling the number and size of insurance claims. We show how these methods extend the usual sinusoidal seasonal assumption in standard chain-dependent models by as...

  17. Solar radiation practical modeling for renewable energy applications

    CERN Document Server

    Myers, Daryl Ronald

    2013-01-01

    Written by a leading scientist with over 35 years of experience working at the National Renewable Energy Laboratory (NREL), Solar Radiation: Practical Modeling for Renewable Energy Applications brings together the most widely used, easily implemented concepts and models for estimating broadband and spectral solar radiation data. The author addresses various technical and practical questions about the accuracy of solar radiation measurements and modeling. While the focus is on engineering models and results, the book does review the fundamentals of solar radiation modeling and solar radiation m

  18. Top-Down Enterprise Application Integration with Reference Models

    Directory of Open Access Journals (Sweden)

    Willem-Jan van den Heuvel

    2000-11-01

    Full Text Available For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference model's specification. The actual linking of reference and legacy models is done with a methodology for connecting (new) business objects with (old) legacy systems.

  19. Web Applications Security : A security model for client-side web applications

    OpenAIRE

    Prabhakara, Deepak

    2009-01-01

    The Web has evolved to support sophisticated web applications. These web applications are exposed to a number of attacks and vulnerabilities. The existing security model is unable to cope with these increasing attacks and there is a need for a new security model that not only provides the required security but also supports recent advances like AJAX and mashups. The attacks on client-side Web Applications can be attributed to four main reasons – 1) lack of a security context for Web Browsers...

  20. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used for evaluating companies by finance practitioners, through simple regression models of cross-section type which estimate the parameters associated with each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis with segregation of companies in the sample by sectors. Extrapolating simple multiples evaluation standards from analysts of the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing error reduction. The results found, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given the samples' limitations.

  1. Property Modelling for Applications in Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical-chemical properties of pure chemicals and their mixtures play an important role in the design of chemicals based products and the processes that manufacture them. Although the use of experimental data in design and analysis of chemicals based products and their processes is desirable ... Where a group interaction parameter is missing, the atom connectivity based model is employed to predict the missing group interaction. In this way, a wide application range of the property modeling tool is ensured. Based on the property models, targeted computer-aided techniques have been developed for design and analysis of organic chemicals, polymers, mixtures as well as separation processes. The presentation will highlight the framework (ICAS software) for property modeling, the property models and issues such as prediction accuracy, flexibility, maintenance and updating of the database. Also, application issues related to the use of property...

  2. Hydraulic modeling development and application in water resources engineering

    Science.gov (United States)

    Simoes, Francisco J.; Yang, Chih Ted; Wang, Lawrence K.

    2015-01-01

    The use of modeling has become widespread in water resources engineering and science to study rivers, lakes, estuaries, and coastal regions. For example, computer models are commonly used to forecast anthropogenic effects on the environment, and to help provide advanced mitigation measures against catastrophic events such as natural and dam-break floods. Linking hydraulic models to vegetation and habitat models has expanded their use in multidisciplinary applications to the riparian corridor. Implementation of these models in software packages on personal desktop computers has made them accessible to the general engineering community, and their use has been popularized by the minimal training they require, thanks to intuitive graphical user interface front ends. Models are, however, complex and nontrivial, to the extent that even common terminology is sometimes ambiguous and often applied incorrectly. In fact, many efforts are currently under way to standardize terminology and offer guidelines for good practice, but none has yet reached unanimous acceptance. This chapter provides a view of the elements involved in modeling surface flows for application in environmental water resources engineering. It presents the concepts and steps necessary for rational model development and use, starting with an exploration of the ideas involved in defining a model. Tangible form of those ideas is provided by the development of a mathematical and corresponding numerical hydraulic model, which is given with a substantial amount of detail. The issues of model deployment in a practical and productive work environment are also addressed. The chapter ends by presenting a few model applications highlighting the need for good quality control in model validation.

  3. A complex autoregressive model and application to monthly temperature forecasts

    Directory of Open Access Journals (Sweden)

    X. Gu

    2005-11-01

    Full Text Available A complex autoregressive model was established based on the mathematical derivation of least squares for the complex number domain, referred to as complex least squares. The model differs from the conventional approach in which the real and imaginary parts are calculated separately. An application of this new model shows a better forecast than those of other conventional statistical models in predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
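
    A minimal sketch of the complex least squares idea, assuming an AR(1) structure: the regression is solved directly over complex numbers rather than by fitting the real and imaginary parts separately. The series, coefficient and noise levels are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic complex-valued series generated from a known complex AR(1) process
        phi_true = 0.7 + 0.2j
        z = np.zeros(200, dtype=complex)
        for t in range(1, 200):
            z[t] = phi_true * z[t - 1] + (rng.normal(scale=0.3) + 1j * rng.normal(scale=0.3))

        # Complex least squares: solve z[t] = c + phi * z[t-1] jointly over the
        # complex domain instead of fitting real and imaginary parts separately
        A = np.column_stack([np.ones(199, dtype=complex), z[:-1]])
        (c_hat, phi_hat), *_ = np.linalg.lstsq(A, z[1:], rcond=None)
        print("estimated phi:", np.round(phi_hat, 3))

        # One-step forecast from the last observation
        print("forecast:", np.round(c_hat + phi_hat * z[-1], 3))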

  4. Theory and application of experimental model analysis in earthquake engineering

    Science.gov (United States)

    Moncarz, P. D.

    The feasibility and limitations of small-scale model studies in earthquake engineering research and practice are considered, with emphasis on dynamic modeling theory, a study of the mechanical properties of model materials, the development of suitable model construction techniques and an evaluation of the accuracy of prototype response prediction through model case studies on components and simple steel and reinforced concrete structures. It is demonstrated that model analysis can be used in many cases to obtain quantitative information on the seismic behavior of complex structures which cannot be analyzed confidently by conventional techniques. Methodologies for model testing and response evaluation are developed in the project and applications of model analysis in seismic response studies on various types of civil engineering structures (buildings, bridges, dams, etc.) are evaluated.

  5. TASS Model Application for Testing the TDWAP Model

    Science.gov (United States)

    Switzer, George F.

    2009-01-01

    One of the operational modes of the Terminal Area Simulation System (TASS) model simulates the three-dimensional interaction of wake vortices within turbulent domains in the presence of thermal stratification. The model allows the investigation of the effects of turbulence and stratification on vortex transport and decay. The model simulations for this work all assumed fully periodic boundary conditions to remove the effects of any surface interaction. During the Base Period of this contract, NWRA completed generation of these datasets but only presented analysis for the neutral stratification runs of that set (Task 3.4.1). Phase 1 work began with the analysis of the remaining stratification datasets, and in the analysis we discovered discrepancies with the vortex time-to-link predictions. This finding necessitated investigating the source of the anomaly, and we found a problem with the background turbulence. Using the most up-to-date version of TASS with some important defect fixes, we regenerated a larger turbulence domain and verified the vortex time to link for a few cases before proceeding to regenerate the entire 25-case set (Task 3.4.2). The effort of Phase 2 (Task 3.4.3) concentrated on the analysis of several scenarios investigating the effects of closely spaced aircraft. The objective was to quantify the minimum aircraft separations necessary to avoid vortex interactions between neighboring aircraft. The results consist of spreadsheets of wake data and presentation figures prepared for NASA technical exchanges. For these formation cases, NASA carried out the actual TASS simulations and NWRA performed the analysis of the results by making animations, line plots, and other presentation figures. This report contains the description of the work performed during this final phase of the contract, the analysis procedures adopted, and sample plots of the results from the analysis performed.

  6. Studying and modelling variable density turbulent flows for industrial applications

    International Nuclear Information System (INIS)

    Industrial applications are presented in the various fields of interest for EDF. A first example deals with transferred electric arcs, coupling flow and thermal transfer in the arc and in the metal bath, and is related to applications of electricity. The second is combustion modelling in burners of fossil-fuel power plants. The last comes from nuclear power plants and concerns stratified flows in a nuclear reactor building. (K.A.)

  7. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with applications in several fields of engineering, such as automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, in a multidisciplinary approach. Authors from five countries and 16 different research centers contribute their expertise in both fundamentals and real-world applications, based upon their strong background in modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools such as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  8. Applications of Nonlinear Dynamics Model and Design of Complex Systems

    CERN Document Server

    In, Visarath; Palacios, Antonio

    2009-01-01

    This edited book is aimed at interdisciplinary, device-oriented applications of nonlinear science theory and methods in complex systems, in particular applications directed to nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. "applications of nonlinear dynamics: model and design of complex systems" brings together the work of scientists and engineers that are applying ideas and methods from nonlinear dynamics to design and fabricate complex systems.

  9. Neural network models: Insights and prescriptions from practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Samad, T. [Honeywell Technology Center, Minneapolis, MN (United States)

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  10. Modelling of dynamical systems in transportation using the modyfit application

    Directory of Open Access Journals (Sweden)

    S. Żółkiewski

    2008-05-01

    Full Text Available Purpose: This paper presents a numerical application for the analysis and modelling of dynamical flexible systems in transportation. The application enables control and regulation of rotating systems with interaction between the working motion and local vibrations of elements. Design/methodology/approach: Numerical calculations are based on mathematical models derived in other publications. The objectives in creating this application were connected with the emerging need to analyse and model rotating systems while taking into consideration the relation between main and local motions. Theoretical considerations were made by classical methods and by the Galerkin method. Findings: As the value of angular velocity increases, additional poles appear in the dynamical flexibility characteristic; with further increases the created modes propagate symmetrically from the original mode and, in place of modes, zeros are created. Research limitations/implications: The analysed systems were limited to simple linear beams and rods, with the main motion being plane motion. Future research should consider complex systems and nonlinearity. Practical implications: The application supports numerical analysis of beam and rod systems, both free-free and fixed. Thanks to this application, engineers can derive the stability zones of the analysed systems and observe eigenfrequencies and zeros as the value of angular velocity changes. In practice, more adequate models such as those presented in this paper should be implemented. Originality/value: This paper contains the description of the application called Modyfit. Modyfit is an implementation of the derived models in a numerical environment; those models are rotating flexible systems that take the transportation effect into consideration.

  11. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges in characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and they quantify utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization and empirical/analytical modeling.
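
    A hedged arithmetic sketch of the CPI decomposition mentioned above; the counter names and the stall-cycle decomposition are illustrative assumptions, since the paper derives its exact formulas per architecture.

        # Hypothetical hardware counter readings for one application run
        cycles       = 9.6e9   # total CPU cycles
        instructions = 6.0e9   # retired instructions
        mem_stalls   = 2.4e9   # cycles stalled on memory accesses (assumed counter)

        cpi = cycles / instructions
        # CPI_0: CPI with the memory effect removed, in the spirit of the paper's
        # decomposition (the exact per-architecture formulas are not reproduced here)
        cpi0 = (cycles - mem_stalls) / instructions
        print(f"CPI = {cpi:.2f}, CPI_0 (no memory effect) = {cpi0:.2f}")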

  12. Animal models of enterovirus 71 infection: applications and limitations.

    Science.gov (United States)

    Wang, Ya-Fang; Yu, Chun-Keung

    2014-01-01

    Human enterovirus 71 (EV71) has emerged as a neuroinvasive virus that is responsible for several outbreaks in the Asia-Pacific region over the past 15 years. Appropriate animal models are needed to understand EV71 neuropathogenesis better and to facilitate the development of effective vaccines and drugs. Non-human primate models have been used to characterize and evaluate the neurovirulence of EV71 after the early outbreaks in late 1990s. However, these models were not suitable for assessing the neurovirulence level of the virus and were associated with ethical and economic difficulties in terms of broad application. Several strategies have been applied to develop mouse models of EV71 infection, including strategies that employ virus adaption and immunodeficient hosts. Although these mouse models do not closely mimic human disease, they have been applied to determine the pathogenesis of and treatment and prevention of the disease. EV71 receptor-transgenic mouse models have recently been developed and have significantly advanced our understanding of the biological features of the virus and the host-parasite interactions. Overall, each of these models has advantages and disadvantages, and these models are differentially suited for studies of EV71 pathogenesis and/or the pre-clinical testing of antiviral drugs and vaccines. In this paper, we review the characteristics, applications and limitation of these EV71 animal models, including non-human primate and mouse models. PMID:24742252

  13. Recent Applications of Mesoscale Modeling to Nanotechnology and Drug Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Maiti, A; Wescott, J; Kung, P; Goldbeck-Wood, G

    2005-02-11

    Mesoscale simulations have traditionally been used to investigate the structural morphology of polymers in solution, melts, and blends. Recently we have been pushing such modeling methods into important areas of nanotechnology and drug delivery that are well out of reach of classical molecular dynamics. This paper summarizes our efforts in three important emerging areas: (1) polymer-nanotube composites; (2) drug diffusivity through cell membranes; and (3) solvent exchange in nanoporous membranes. The first two applications are based on a bead-spring approach as encoded in the Dissipative Particle Dynamics (DPD) module. The last application used density-based mesoscale modeling as implemented in the Mesodyn module.

  14. A simple application of FIC to model selection

    CERN Document Server

    Wiggins, Paul A

    2015-01-01

    We have recently proposed a new information-based approach to model selection, the Frequentist Information Criterion (FIC), that reconciles information-based and frequentist inference. The purpose of this current paper is to provide a simple example of the application of this criterion and a demonstration of the natural emergence of model complexities with both AIC-like ($N^0$) and BIC-like ($\log N$) scaling with observation number $N$. The application developed is deliberately simplified to make the analysis analytically tractable.
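
    FIC itself is not reproduced here, but the two penalty scalings the abstract contrasts are easy to see numerically. The sketch below computes AIC (penalty constant in $N$) and BIC (penalty growing like $\log N$) for two nested Gaussian mean models on synthetic data of increasing size.

        import numpy as np

        def aic(loglik, k):
            # AIC penalty is constant in N (N^0 scaling): 2k - 2 ln L
            return 2 * k - 2 * loglik

        def bic(loglik, k, n):
            # BIC penalty grows like log N: k ln N - 2 ln L
            return k * np.log(n) - 2 * loglik

        # Toy comparison: two nested Gaussian mean models on synthetic data
        rng = np.random.default_rng(0)
        for n in (50, 500, 5000):
            x = rng.normal(loc=0.1, scale=1.0, size=n)
            # Model 0: zero mean; Model 1: fitted mean (one extra parameter)
            ll0 = np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * x**2)
            ll1 = np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (x - x.mean())**2)
            print(n, "dAIC:", round(aic(ll1, 1) - aic(ll0, 0), 2),
                     "dBIC:", round(bic(ll1, 1, n) - bic(ll0, 0, n), 2))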

  15. WWW Business Applications Based on the Cellular Model

    Institute of Scientific and Technical Information of China (English)

    Toshio Kodama; Tosiyasu L. Kunii; Yoichi Seki

    2008-01-01

    A cellular model based on the Incrementally Modular Abstraction Hierarchy (IMAH) is a novel model that can represent the architecture of and changes in cyberworlds, preserving invariants from a general level to a specific one. We have developed a data processing system called the Cellular Data System (CDS). In the development of business applications, combinatorial explosion in the process of business design and testing can be prevented by using CDS. In this paper, we first design and implement a wide-use algebra at the presentation level. Next, we develop and verify the effectiveness of two general business applications using CDS: 1) a customer information management system, and 2) an estimate system.

  16. Copula bivariate probit models: with an application to medical expenditures.

    Science.gov (United States)

    Winkelmann, Rainer

    2012-12-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the 'treatment') on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank copula outperforms the standard bivariate probit model. PMID:22025413
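
    A minimal sketch of the construction described above: probit marginals coupled by a Frank copula to give the joint probability of both binary outcomes. The linear indices and dependence parameter are assumed values, not estimates from the paper's expenditure data.

        import numpy as np
        from scipy.stats import norm

        def frank_copula(u, v, theta):
            """Frank copula C(u, v; theta), theta != 0."""
            num = (np.exp(-theta * u) - 1) * (np.exp(-theta * v) - 1)
            return -np.log(1 + num / (np.exp(-theta) - 1)) / theta

        # Hypothetical linear indices for the treatment and outcome equations
        xb_treat, xb_outcome, theta = 0.4, -0.2, 3.0

        # Probit marginal success probabilities
        u = norm.cdf(xb_treat)
        v = norm.cdf(xb_outcome)

        # Joint probability of both binary outcomes equal to one under the copula;
        # the standard bivariate probit would use a bivariate normal CDF instead
        p11 = frank_copula(u, v, theta)
        print("P(y1=1, y2=1) =", round(p11, 4))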

  17. Handbook of Real-World Applications in Modeling and Simulation

    CERN Document Server

    Sokolowski, John A

    2012-01-01

    This handbook provides a thorough explanation of modeling and simulation in the most useful, current, and predominant applied areas, such as transportation, homeland security, medicine, operational research, military science, and business modeling.  The authors offer a concise look at the key concepts and techniques of modeling and simulation and then discuss how and why the presented domains have become leading applications.  The book begins with an introduction of why modeling and simulation is a reliable analysis assessment tool for complex syste

  18. Applicability of cooperative learning model in gastronomy education

    OpenAIRE

    SARIOĞLAN, Mehmet; CEVİZKAYA, Gülhan

    2016-01-01

    The purpose of the study is to reveal the applicability of the cooperative learning model, one of the vital models in gastronomy education. A learning model based on cooperativism is important for students' success in their group work in gastronomy education. This study divides into two parts, one being "literature" and the other a "model proposal". The literature review focuses on descriptions of the cooperative learning model in gastronomy education. In the se...

  19. Life cycle Prognostic Model Development and Initial Application Results

    Energy Technology Data Exchange (ETDEWEB)

    Jeffries, Brien; Hines, Wesley; Nam, Alan; Sharp, Michael; Upadhyaya, Belle [The University of Tennessee, Knoxville (United States)

    2014-08-15

    In order to obtain more accurate Remaining Useful Life (RUL) estimates based on empirical modeling, a Lifecycle Prognostics algorithm was developed that integrates various prognostic models. These models can be categorized into three types based on the type of data they process. The application of multiple models takes advantage of the most useful information available as the system or component operates through its lifecycle. The Lifecycle Prognostics is applied to an impeller test bed, and the initial results serve as a proof of concept.

  20. Optimization of Process Parameters During Drilling of Glass-Fiber Polyester Reinforced Composites Using DOE and ANOVA

    Directory of Open Access Journals (Sweden)

    N.S. Mohan

    2010-09-01

    Full Text Available Polymer-based composite materials possess superior properties such as high strength-to-weight ratio, high stiffness-to-weight ratio and good corrosion resistance, and are therefore attractive for high-performance applications in the aerospace, defense and sporting goods industries. Drilling is one of the indispensable methods for building products with composite panels. Surface quality and dimensional accuracy play an important role in the performance of a machined component. In machining processes, however, the quality of the component is greatly influenced by the cutting conditions, tool geometry, tool material, machining process, chip formation, workpiece material, tool wear and vibration during cutting. Drilling tests were conducted on glass fiber reinforced plastic composite [GFRP] laminates using an instrumented CNC milling center. A series of experiments was conducted using a TRIAC VMC CNC machining center to correlate the cutting and material parameters with the cutting thrust, torque and surface roughness. The measured results were collected and analyzed with the help of the commercial software packages MINITAB14 and Taly Profile. The surface roughness of the drilled holes was measured using a Rank Taylor Hobson Surtronic 3+ instrument. The method could be useful in predicting thrust, torque and surface roughness parameters as a function of process variables. The main objective is to optimize the process parameters to achieve low cutting thrust, low torque and good surface roughness. From the analysis it is evident that, among all the significant parameters, speed and drill size have a significant influence on cutting thrust, while drill size and specimen thickness have a significant influence on torque and surface roughness. It was also found that feed rate does not have a significant influence on the characteristic output of the drilling process.
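
    A hedged sketch of the kind of DOE/ANOVA analysis described above, using a synthetic full-factorial drilling data set; the factor levels and the response model are invented for illustration, and the statsmodels package stands in for the MINITAB analysis used in the paper.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(0)

        # Synthetic full-factorial drilling experiment (values are illustrative only)
        speeds, feeds, drills = [500, 1000, 1500], [0.05, 0.10], [4, 6, 8]
        rows = []
        for s in speeds:
            for f in feeds:
                for d in drills:
                    for _ in range(2):  # two replicates per cell
                        thrust = 50 + 0.02 * s + 300 * f + 8 * d + rng.normal(0, 5)
                        rows.append({"speed": s, "feed": f, "drill": d, "thrust": thrust})
        df = pd.DataFrame(rows)

        # Type-II ANOVA: which process parameters significantly affect thrust?
        model = ols("thrust ~ C(speed) + C(feed) + C(drill)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))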

  1. Constitutive Modeling of Geomaterials Advances and New Applications

    CERN Document Server

    Zhang, Jian-Min; Zheng, Hong; Yao, Yangping

    2013-01-01

    The Second International Symposium on Constitutive Modeling of Geomaterials: Advances and New Applications (IS-Model 2012), is to be held in Beijing, China, during October 15-16, 2012. The symposium is organized by Tsinghua University, the International Association for Computer Methods and Advances in Geomechanics (IACMAG), the Committee of Numerical and Physical Modeling of Rock Mass, Chinese Society for Rock Mechanics and Engineering, and the Committee of Constitutive Relations and Strength Theory, China Institution of Soil Mechanics and Geotechnical Engineering, China Civil Engineering Society. This Symposium follows the first successful International Workshop on Constitutive Modeling held in Hong Kong, which was organized by Prof. JH Yin in 2007.   Constitutive modeling of geomaterials has been an active research area for a long period of time. Different approaches have been used in the development of various constitutive models. A number of models have been implemented in the numerical analyses of geote...

  2. A PROPOSED HYBRID AGILE FRAMEWORK MODEL FOR MOBILE APPLICATIONS DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Ammar Khader Almasri

    2016-03-01

    Full Text Available The increase in mobile application systems and the high competition between companies have led to an increase in the number of mobile application projects. Mobile software development is a group of processes for creating software for mobile devices with limited resources such as small screens and low power. The development of mobile applications is a big challenge because of rapidly changing business requirements and technical constraints for mobile systems, so developers face a dynamic environment and changing mobile application requirements. Mobile applications should therefore adopt appropriate software development methods that respond efficiently to these challenges. However, at the moment, there is limited knowledge about the suitability of different software practices for the development of mobile applications. According to many researchers, agile methodologies have been found to be most suitable for mobile development projects as they have short cycles, require flexibility, and reduce waste and time to market. Finally, in this research we look for a suitable process model that conforms to the requirements of mobile applications; we investigate agile development methods to find a way to make the development of mobile applications easy and compatible with mobile device features.

  3. 12th Workshop on Stochastic Models, Statistics and Their Applications

    CERN Document Server

    Rafajłowicz, Ewaryst; Szajowski, Krzysztof

    2015-01-01

    This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.

  4. Mathematical and numerical foundations of turbulence models and applications

    CERN Document Server

    Chacón Rebollo, Tomás

    2014-01-01

    With applications to climate, technology, and industry, the modeling and numerical simulation of turbulent flows are rich with history and modern relevance. The complexity of the problems that arise in the study of turbulence requires tools from various scientific disciplines, including mathematics, physics, engineering, and computer science. Authored by two experts in the area with a long history of collaboration, this monograph provides a current, detailed look at several turbulence models from both the theoretical and numerical perspectives. The k-epsilon, large-eddy simulation, and other models are rigorously derived and their performance is analyzed using benchmark simulations for real-world turbulent flows. Mathematical and Numerical Foundations of Turbulence Models and Applications is an ideal reference for students in applied mathematics and engineering, as well as researchers in mathematical and numerical fluid dynamics. It is also a valuable resource for advanced graduate students in fluid dynamics,...

  5. A Comparison of Three Programming Models for Adaptive Applications

    Science.gov (United States)

    Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the fact of using explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to the performance gain. However, it may also suffer from the poor spatial locality of physically distributed shared data on a large number of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates a more balanced result for our application.

  6. Improved dual sided doped memristor: modelling and applications

    OpenAIRE

    Anup Shrivastava; Muhammad Khalid; Komal Singh; Jawar Singh

    2014-01-01

    The memristor, a novel and emerging electronic device with a vast range of applications, suffers from poor frequency response and saturation length. In this paper, the authors present a novel and innovative device structure for the memristor with two active layers, together with a non-linear ionic drift model, for improved frequency response and saturation length. The authors investigated and compared the I–V characteristics of the proposed model with those of conventional memristors and found better res...

  7. Monte Carlo methods and applications for the nuclear shell model

    OpenAIRE

    Dean, D. J.; White, J A

    1998-01-01

    The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sdpf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.

  8. Application of dimensional analysis in systems modeling and control design

    CERN Document Server

    Balaguer, Pedro

    2013-01-01

    Dimensional analysis is an engineering tool that is widely applied to numerous engineering problems, but has only recently been applied to control theory and problems such as identification and model reduction, robust control, adaptive control, and PID control. Application of Dimensional Analysis in Systems Modeling and Control Design provides an introduction to the fundamentals of dimensional analysis for control engineers, and shows how they can exploit the benefits of the technique to theoretical and practical control problems.

  9. Nonlinear Mathematical Modeling in Pneumatic Servo Position Applications

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Valdiero

    2011-01-01

    Full Text Available This paper addresses a new methodology for the mathematical modeling and selection of servo pneumatic actuators based on a study of their dynamic behavior in engineering applications. The pneumatic actuator is very common in industrial applications because it has the following advantages: easy and simple maintenance, relatively low cost, self-cooling properties, good power density (power/dimension ratio), fast acting with high accelerations, and installation flexibility. The proposed fifth-order nonlinear mathematical model represents the main characteristics of this nonlinear dynamic system, such as the servo valve dead zone, the air flow-pressure relationship through the valve orifice, air compressibility, and friction effects between contact surfaces in the actuator seals. Simulation results show the dynamic performance for different pneumatic cylinders in order to see which features contribute to a better behavior of the system. Knowledge of this behavior allows an appropriate choice of pneumatic actuator, contributing to the success of its precise control in several applications.

  10. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  11. On the applicability of models for outdoor sound

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as Parabolic Equation methods, FFP methods and Boundary Element Methods all have advantages in certain situations. These approaches are accurate and predict...

  12. On the applicability of models for outdoor sound (A)

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as parabolic equation methods, FFP methods, and boundary element methods all have advantages in certain situations. These approaches are accurate and predict...

  13. Application of Active Contour Model in Tracking Sequential Nearshore Waves

    Institute of Scientific and Technical Information of China (English)

    Yu-Hung HSIAO; Min-Chih HUANG

    2009-01-01

    In the present study, a generalized active contour model of gradient vector flow is combined with the video techniques of the Argus system to delineate and track sequential nearshore wave crest profiles in the shoaling process, up to their breaking on the shoreline. Previous applications of active contour models to water wave problems are limited to controllable wave tank experiments. By contrast, our application in this study is in a nearshore field environment where oblique images obtained under natural and varying conditions of ambient light are employed. Existing Argus techniques produce plane image data or time series data from a selected small subset of discrete pixels. By contrast, the active contour model produces line image data along continuous visible curves such as wave crest profiles. The combination of these two existing techniques, the active contour model and Argus methodologies, facilitates the estimation of the directional wave field and phase speeds within the whole area covered by the camera. These estimates are useful for the purpose of inverse calculation of the water depth. Applications of the present techniques to Hsi-tzu Bay, where a beach restoration program is currently under way, are illustrated. This extension of Argus video techniques provides a new application of optical remote sensing to study the hydrodynamics and morphology of a nearshore environment.

  14. Risk Measurement and Risk Modelling using Applications of Vine Copulas

    NARCIS (Netherlands)

    D.E. Allen (David); M.J. McAleer (Michael); A.K. Singh (Abhay)

    2014-01-01

    This paper features an application of Regular Vine copulas, which are a novel and recently developed statistical and mathematical tool that can be applied in the assessment of composite financial risk. Copula-based dependence modelling is a popular tool in financial applica...

  15. Application of nonlinear tyre models to analyse shimmy

    Science.gov (United States)

    Ran, Shenhai; Besselink, I. J. M.; Nijmeijer, H.

    2014-05-01

    This paper focuses on the application of different tyre models to analyse the shimmy phenomenon. Tyre models with the Magic Formula and a non-constant relaxation length are introduced. The energy flow method is applied to compare these tyre models. A trailing wheel suspension is used to analyse shimmy stability and to evaluate the differences between tyre models. Linearisation and nonlinear techniques, including bifurcation analysis, are applied to analyse this system. Extending the suspension model with lateral flexibility and structural damping reveals more information on shimmy stability. Although the nonlinear tyre models do not change the stability of equilibria, they determine the magnitude of the oscillation. It is concluded that the non-constant relaxation length should be included in the shimmy analysis for more accurate results at large amplitude.
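
    For reference, a minimal sketch of the Magic Formula lateral force characteristic mentioned above, in its standard Pacejka form; the B, C, D, E coefficients are assumed values, not those used in the paper.

        import numpy as np

        def magic_formula(slip_angle, B=10.0, C=1.5, D=3500.0, E=0.97):
            """Pacejka Magic Formula for lateral tyre force (coefficients assumed)."""
            Bx = B * slip_angle
            return D * np.sin(C * np.arctan(Bx - E * (Bx - np.arctan(Bx))))

        # Lateral force (N) over a sweep of slip angles from -10 to +10 degrees
        alpha = np.radians(np.linspace(-10, 10, 5))
        print(np.round(magic_formula(alpha), 1))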

  16. Analysis and Application for Integrity Model on Trusted Platform

    Institute of Scientific and Technical Information of China (English)

    TU Guo-qing; ZHANG Huan-guo; WANG Li-na; YU Dan-dan

    2005-01-01

    To build a trusted platform based on the Trusted Computing Platform Alliance (TCPA)'s recommendation, we analyze the integrity mechanism for such a PC platform in this paper. By combining an access control model with an information flow model, we put forward a combined process-based lattice model to enforce security. This model creates a trust chain by which we can manage a series of processes from a core root of trust module to other application modules. In the model, once the trust chain is created and managed correctly, the integrity of the computer's hardware and software is maintained, as are confidentiality and authenticity. Moreover, a relevant implementation of the model is explained.

  17. The Application of a Small Strain Model in Excavations

    Institute of Scientific and Technical Information of China (English)

    XUAN Feng; XIA Xiao-he; WANG Jian-hua

    2009-01-01

    The importance of the soil small-strain effect on soil-structure behavior has been investigated by researchers in recent decades. The finite element method (FEM) is often used to predict excavation behavior, but not many soil models are available that consider this effect in analysis. This paper introduces a simple small-strain soil model, hardening small-strain (HSS), in PLAXIS 8.5 and exhibits its application to excavation problems via the study of two case histories. The analyses also use two familiar soil models: the hardening-soil (HS) model and the Mohr-Coulomb (MC) model. Results show that the HSS model predicts more reasonable magnitudes and profiles of wall deflections and surface settlements than the other models. It also indicates that the small-strain effect relies on the strain level induced by excavation.

  18. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra;

    2016-01-01

    We utilize the close link between Cohen's d, the effect size in an ANOVA framework, and the Thurstonian (signal detection) d-prime to suggest better visualizations and interpretations of standard sensory and consumer data mixed model ANOVA results. The basic and straightforward idea is to interpret ... inherently challenging effect size measure estimates in ANOVA settings.
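
    A small sketch of the underlying link, assuming two hypothetical sensory panels: Cohen's d is a mean difference scaled by a pooled standard deviation, which is the same standardized-difference reading that a Thurstonian d-prime gives.

        import numpy as np

        def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
            """Cohen's d with pooled standard deviation."""
            pooled = np.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                             / (n_a + n_b - 2))
            return (mean_a - mean_b) / pooled

        # Hypothetical sensory panel scores for two products
        d = cohens_d(6.1, 5.4, 1.2, 1.1, 40, 40)
        # Read the standardized difference as a Thurstonian-style d-prime: both
        # express the mean difference in units of perceptual/error standard deviation
        print(f"effect size (d / d-prime scale): {d:.2f}")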

  19. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement error, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
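
    The attenuation effect on correlations can be demonstrated in a few lines. The sketch below simulates the classical result that the observed correlation is roughly the true correlation times the square root of the product of the two reliabilities; the reliability values are assumed.

        import numpy as np

        rng = np.random.default_rng(0)
        n, r_true = 100_000, 0.6

        # Latent true scores with correlation r_true
        x_true = rng.normal(size=n)
        y_true = r_true * x_true + np.sqrt(1 - r_true**2) * rng.normal(size=n)

        # Observed scores = true scores + random measurement error
        # (reliability 0.7 means 70% of observed variance is true-score variance)
        rel_x = rel_y = 0.7
        x_obs = np.sqrt(rel_x) * x_true + np.sqrt(1 - rel_x) * rng.normal(size=n)
        y_obs = np.sqrt(rel_y) * y_true + np.sqrt(1 - rel_y) * rng.normal(size=n)

        # Classical attenuation: r_obs ~ r_true * sqrt(rel_x * rel_y)
        print("observed r:", np.corrcoef(x_obs, y_obs)[0, 1].round(3))
        print("predicted :", (r_true * np.sqrt(rel_x * rel_y)).round(3))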

  20. Modelling and simulation of diffusive processes methods and applications

    CERN Document Server

    Basu, SK

    2014-01-01

    This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport

  1. Application of Simple CFD Models in Smoke Ventilation Design

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter Vilhelm; la Cour-Harbo, Hans;

    2004-01-01

    The paper examines the possibilities of using simple CFD models in practical smoke ventilation design. The aim is to assess if it is possible with a reasonable accuracy to predict the behaviour of smoke transport in case of a fire. A CFD code mainly applicable for "ordinary" ventilation design ... uses a standard k-ε turbulence model. Simulations comprise both steady-state and dynamic approaches. Several boundary conditions are tested. Finally, the paper discusses the prospects of simple CFD models in smoke ventilation design including the inherent limitations.

  2. Model-Driven Approach for Body Area Network Application Development

    OpenAIRE

    Algimantas Venčkauskas; Vytautas Štuikys; Nerijus Jusas; Renata Burbaitė

    2016-01-01

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earlie...

  3. Costs equations for cost modeling: application of ABC Matrix

    Directory of Open Access Journals (Sweden)

    Alex Fabiano Bertollo Santana

    2016-03-01

    Full Text Available This article aimed to provide an application of the ABC Matrix model, a management tool that models processes and activities. The ABC Matrix is based on matrix multiplication, using a fast algorithm for the development of costing systems and the subsequent translation of the costs into cost equations and systems. The research methodology is classified as a case study, using simulation data to validate the model. The conclusion of the research is that the algorithm presented is an important development, because it is an effective approach to calculating product cost and because it provides simple and flexible algorithm design software for controlling the cost of products.
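
    A minimal sketch of the matrix-multiplication idea behind the ABC Matrix, assuming a two-stage allocation (resources to activities, then activities to products); all cost figures and driver shares are invented for illustration.

        import numpy as np

        # Resource costs (e.g. labour, machine time, energy), in currency units
        resource_cost = np.array([10000.0, 6000.0, 2000.0])

        # Resource-to-activity driver matrix: share of each resource consumed
        # by each activity (rows: resources, columns: activities); rows sum to 1
        R2A = np.array([
            [0.5, 0.3, 0.2],
            [0.2, 0.6, 0.2],
            [0.1, 0.4, 0.5],
        ])

        # Activity-to-product driver matrix (rows: activities, columns: products)
        A2P = np.array([
            [0.7, 0.3],
            [0.4, 0.6],
            [0.5, 0.5],
        ])

        activity_cost = resource_cost @ R2A   # cost pooled per activity
        product_cost = activity_cost @ A2P    # cost traced to each product
        print("activity costs:", activity_cost)
        print("product costs :", product_cost)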

  4. Application of thermospheric general circulation models for space weather operations

    Science.gov (United States)

    Fuller-Rowell, T.; Minter, C.; Codrescu, M.

    Solar irradiance is the dominant source of heat, ionization, and dissociation of the thermosphere, and to a large extent drives the global dynamics, and controls the neutral composition and density structure. Neutral composition is important for space weather applications because of its impact on ionospheric loss rates, and neutral density is critical for satellite drag prediction. The future for thermospheric general circulation models for space weather operations lies in their use as state propagators in data assimilation techniques. The physical models can match empirical models in accuracy provided accurate drivers are available, but their true value comes when combined with data in an optimal way. Two such applications have recently been developed. The first utilizes a Kalman filter to combine space-based observation of airglow with physical model predictions to produce global maps of neutral composition. The output of the filter will be used within the GAIM (Global Assimilation of Ionospheric Measurement) model developed under a parallel effort. The second filter uses satellite tracking and remote sensing data for specification of neutral density. Both applications rely on accurate estimates of the solar EUV and magnetospheric drivers.
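
    As a hedged illustration of the data assimilation step described above, here is a minimal scalar Kalman filter that blends a model forecast with a noisy observation; the state, propagator and noise variances are toy stand-ins for the neutral composition/density application.

        # Minimal scalar Kalman filter: blend a physical-model forecast of a state
        # (e.g. neutral density at one grid point) with a noisy observation
        def kalman_step(x_prev, P_prev, z_obs, F=1.0, Q=0.01, H=1.0, R=0.04):
            # Predict with the (here trivial) model propagator F
            x_pred = F * x_prev
            P_pred = F * P_prev * F + Q
            # Update: weight model vs observation by their error covariances
            K = P_pred * H / (H * P_pred * H + R)
            x_new = x_pred + K * (z_obs - H * x_pred)
            P_new = (1 - K * H) * P_pred
            return x_new, P_new

        x, P = 1.0, 0.1
        for z in [1.10, 0.95, 1.05, 1.20]:
            x, P = kalman_step(x, P, z)
            print(f"state estimate: {x:.3f}  variance: {P:.4f}")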

  5. The Application of the Jerome Model and the Horace Model in Translation Practice

    Institute of Scientific and Technical Information of China (English)

    WU Jiong

    2015-01-01

    The Jerome model and the Horace model have had a great influence on translation theories and practice since ancient times. This paper starts from a comparative study of the two models, mainly discussing their similarities, differences and weaknesses. Then, through a case study, it analyzes the application of the two models to English-Chinese translation. In the end, it draws the conclusion that a generally accepted translation criterion does not exist; different types of texts require different translation criteria.

  6. Systems and models with anticipation in physics and its applications

    Science.gov (United States)

    Makarenko, A.

    2012-11-01

    Investigations of recent physical processes and real applications of models require new, increasingly refined models which should involve new properties. One such property is anticipation (that is, taking into account some advanced effects). We consider a special kind of advanced system, namely the strong anticipatory systems introduced by D. Dubois. Some definitions, examples and peculiarities of solutions are described. The main feature is the presumable multivaluedness of the solutions. Possible physical examples of such systems are proposed: self-organization problems; dynamical chaos; synchronization; advanced potentials; structures at the micro-, meso- and macro-levels; cellular automata; computing; neural network theory. Some applications for modeling social, economical, technical and natural systems are also described.

  7. Model Oriented Application Generation for Industrial Control Systems

    CERN Document Server

    Copy, B; Blanco Vinuela, E; Fernandez Adiego, B; Nogueira Ferandes, R; Prieto Barreiro, I

    2011-01-01

    The CERN Unified Industrial Control Systems framework (UNICOS) is a software generation methodology and a collection of development tools that standardizes the design of industrial control applications [1]. A Software Factory, named the UNICOS Application Builder (UAB) [2], was introduced to ease extensibility and maintenance of the framework, introducing a stable metamodel, a set of platform-independent models and platform-specific configurations against which code generation plugins and configuration generation plugins can be written. Such plugins currently target PLC programming environments (Schneider and SIEMENS PLCs) as well as SIEMENS WinCC Open Architecture SCADA (previously known as ETM PVSS) but are being expanded to cover more and more aspects of process control systems. We present what constitutes the UNICOS metamodel and the models in use, how these models can be used to capture knowledge about industrial control systems and how this knowledge can be leveraged to generate both code and configuratio...

  8. Genome Editing and Its Applications in Model Organisms

    Directory of Open Access Journals (Sweden)

    Dongyuan Ma

    2015-12-01

    Full Text Available Technological advances are important for innovative biological research. The development of molecular tools for DNA manipulation, such as zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly-interspaced short palindromic repeat (CRISPR)/CRISPR-associated (Cas) system, has revolutionized genome editing. These approaches can be used to develop potential therapeutic strategies to effectively treat heritable diseases. In the last few years, substantial progress has been made in CRISPR/Cas technology, including technical improvements and wide application in many model systems. This review describes recent advancements in genome editing with a particular focus on CRISPR/Cas, covering the underlying principles, technological optimization, and its application in zebrafish and other model organisms, disease modeling, and gene therapy used for personalized medicine.

  9. APPLICATION OF REGRESSION MODELLING TECHNIQUES IN DESALINATION OF SEA WATER BY MEMBRANE DISTILLATION

    Directory of Open Access Journals (Sweden)

    SELVI S. R

    2015-08-01

    Full Text Available The objective of this work is to gain an idea of the statistical significance of the experimental parameters on the performance of membrane distillation. In this work, a raw sea water sample without pretreatment was collected from Puducherry and desalinated using the direct contact membrane distillation method. Experimental data analysis was carried out using statistical methods. The experimental data involve the effects of feed temperature, feed flow rate and feed concentration on the permeate flux. A regression model was developed to correlate the significance of input parameters such as feed temperature, feed concentration and feed flow rate with the output parameter, the permeate flux, in the membrane distillation process. Since the performance of membrane distillation in the desalination of water is characterised by the permeate flux, regression modelling using the simple linear method was carried out. The goodness of the model fit must always be validated, and the regression model was validated using ANOVA. ANOVA estimates for the parameter study are given, the coefficients obtained by regression analysis are specified in the regression equation, and it is concluded that the input parameter with the highest coefficient is significant and highly influences the response. Feed flow rate and feed temperature have a higher influence on permeate flux than feed concentration. The coefficient of feed concentration was found to be negative, which indicates a less significant factor for permeate flux. The chemical composition of the sea water was given by water quality analysis. The TDS of the membrane-distilled water was found to be 18 ppm, compared with the initial feed TDS of 27,720 ppm for the sea water. From the experimental work, the salt rejection was found to be 99%, and the water analysis report confirms the quality of the distillate obtained by this desalination process as potable water.
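
    A small sketch of the regression-plus-ANOVA workflow described above, on synthetic direct contact membrane distillation runs; the coefficients and operating ranges are invented for illustration, not the paper's measurements.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(1)

        # Synthetic DCMD runs: feed temperature (C), flow rate (L/min), concentration (g/L)
        n = 30
        df = pd.DataFrame({
            "temp": rng.uniform(50, 80, n),
            "flow": rng.uniform(1, 5, n),
            "conc": rng.uniform(20, 35, n),
        })
        # Illustrative response: flux rises with temperature and flow, falls with concentration
        df["flux"] = 0.30 * df.temp + 1.5 * df.flow - 0.10 * df.conc + rng.normal(0, 1, n)

        model = ols("flux ~ temp + flow + conc", data=df).fit()
        print(model.params)                     # regression coefficients
        print(sm.stats.anova_lm(model, typ=2))  # ANOVA validation of each term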

  10. Model-Driven Development of Automation and Control Applications: Modeling and Simulation of Control Sequences

    Directory of Open Access Journals (Sweden)

    Timo Vepsäläinen

    2014-01-01

    Full Text Available The scope and responsibilities of control applications are increasing due to, for example, the emergence of the industrial internet. To meet the challenge, model-driven development techniques have been an active research topic in the application domain. Simulations, which have traditionally been used in the domain, have not yet been sufficiently integrated into model-driven control application development. In this paper, a model-driven development process that includes support for design-time simulations is complemented with support for simulating sequential control functions. The approach is implemented with open source tools and demonstrated by creating and simulating a control system model in closed loop with a large and complex model of a paper industry process.

  11. Application distribution model and related security attacks in VANET

    Science.gov (United States)

    Nikaein, Navid; Kanti Datta, Soumya; Marecar, Irshad; Bonnet, Christian

    2013-03-01

    In this paper, we present a model for application distribution and related security attacks in dense vehicular ad hoc networks (VANET) and sparse VANET which forms a delay tolerant network (DTN). We study the vulnerabilities of VANET to evaluate the attack scenarios and introduce a new attacker's model as an extension to the work done in [6]. Then a VANET model has been proposed that supports the application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution have been studied in detail. We have identified key attacks (e.g. malware, spamming and phishing, software attack and threat to location privacy) for dense VANET and two attack scenarios for sparse VANET. It has been shown that attacks can be launched by distributing malicious applications and injecting malicious codes to On Board Unit (OBU) by exploiting OBU software security holes. Consequences of such security attacks have been described. Finally, countermeasures including the concepts of sandbox have also been presented in depth.

  12. Medical applications of model-based dynamic thermography

    Science.gov (United States)

    Nowakowski, Antoni; Kaczmarek, Mariusz; Ruminski, Jacek; Hryciuk, Marcin; Renkielska, Alicja; Grudzinski, Jacek; Siebert, Janusz; Jagielak, Dariusz; Rogowski, Jan; Roszak, Krzysztof; Stojek, Wojciech

    2001-03-01

    The proposal to use active thermography in medical diagnostics is promising in some applications concerning investigation of directly accessible parts of the human body. Combining dynamic thermograms with thermal models of the investigated structures offers the attractive possibility of reconstructing internal structure based on the different thermal properties of biological tissues. Measurements of temperature distribution synchronized with external light excitation allow registration of dynamic changes of local temperature dependent on heat exchange conditions. Preliminary results of active thermography applications in medicine are discussed. For skin and under-skin tissues an equivalent thermal model may be determined. For the assumed model, its effective parameters may be reconstructed from the results of transient thermal processes. For known thermal diffusivity and conductivity of specific tissues, the local thickness of a two- or three-layer structure may be calculated. Results of some medical cases as well as reference data from in vivo studies on animals are presented. The method was also applied to evaluate the state of the human heart during open-chest cardio-surgical interventions. Reference studies of induced heart infarction in pigs are also reported. We see the proposed technique, new to medical applications, as a promising diagnostic tool. It is a fully non-invasive, clean, handy, fast and affordable method giving not only a qualitative view of investigated surfaces but also an objective quantitative measurement result, accurate enough for many applications including fast screening of affected tissues.

  13. Monte Carlo modelling of positron transport in real world applications

    International Nuclear Information System (INIS)

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.

  14. Monte Carlo modelling of positron transport in real world applications

    Science.gov (United States)

    Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj

    2014-05-01

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
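
    The swarm approach the two records above describe can be illustrated with a toy calculation: follow individual positrons through exponentially distributed free flights and energy-losing collisions until they thermalize. A sketch with invented cross-section and energy-loss constants, not data from the paper:

```python
# Toy swarm Monte Carlo: track positrons from keV energies down to thermal
# energy through random free flights and fixed-fraction inelastic losses.
# Mean free path and loss fraction are invented constants.
import numpy as np

rng = np.random.default_rng(0)
KT = 0.025  # thermal energy at room temperature, eV

def thermalization_path(e0=1000.0, mean_free_path=1.0, loss=0.05):
    """Path length travelled before a positron of initial energy e0 (eV)
    cools to thermal energy, losing a fixed fraction per collision."""
    energy, path = e0, 0.0
    while energy > KT:
        path += rng.exponential(mean_free_path)  # free flight to next collision
        energy *= 1.0 - loss                     # inelastic energy loss
    return path

paths = [thermalization_path() for _ in range(1000)]
print("mean thermalization path (arb. units):", np.mean(paths))
```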

  15. Application of Metamodels to Identification of Metallic Materials Models

    Directory of Open Access Journals (Sweden)

    Maciej Pietrzyk

    2016-01-01

    Full Text Available Improving the efficiency of inverse analysis (IA) for various material tests was the objective of this paper. Flow stress models and microstructure evolution models of varying complexity of mathematical formulation were considered. Different types of experiments were performed and the results were used for the identification of the models. Sensitivity analysis was performed for all the models and the importance of their parameters was evaluated. Metamodels based on artificial neural networks were proposed to simulate the experiments in the inverse solution. The analysis showed that a significant decrease in computing time can be achieved when metamodels substitute for the finite element model in the inverse analysis, as is the case in the identification of flow stress models. Application of metamodels gave good results for flow stress models based on closed-form equations accounting for the influence of temperature, strain, and strain rate (4 coefficients), additionally for softening due to recrystallization (5 coefficients), and for softening and saturation (7 coefficients). Good accuracy and high efficiency of the IA were confirmed. On the contrary, identification of microstructure evolution models, including phase transformation models, did not give a noticeable reduction of the computing time.
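
    A minimal sketch of the metamodel idea, with a cheap analytic function standing in for the finite element model and hypothetical parameter ranges:

```python
# Sketch: train a neural-network metamodel on precomputed "FE" results, then
# run the inverse (least-squares) identification against the cheap surrogate.
# The fe_model below is a stand-in analytic flow stress curve, not a real FE run.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def fe_model(params):                  # stand-in for the expensive simulation
    a, b = params
    strain = np.linspace(0.05, 0.5, 20)
    return a * strain**b               # hypothetical flow stress curve

# Offline: sample the parameter space and train the metamodel.
X = rng.uniform([100.0, 0.05], [400.0, 0.4], size=(500, 2))
Y = np.array([fe_model(p) for p in X])
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, Y)

# Online: inverse analysis against a "measured" curve uses only the surrogate.
measured = fe_model([250.0, 0.2])
obj = lambda p: np.sum((surrogate.predict([p])[0] - measured) ** 2)
res = minimize(obj, x0=[200.0, 0.1], method="Nelder-Mead")
print("identified parameters:", res.x)  # near (250, 0.2) if training went well
```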

  16. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    Transport models are becoming more and more disaggregate to facilitate a realistic representation of individuals and their travel patterns. In line with this development, the PhD study focuses on facilitating the deployment of traffic assignment models in fully disaggregate activity-based model...... focuses on large-scale applications and contributes with methods to actualise the true potential of disaggregate models. To achieve this target, contributions are given to several components of traffic assignment modelling, by (i) enabling the utilisation of the increasingly available data sources......-perceptions in the choice set generation for complex multi-modal networks, and (iv) addressing the difficulty of choice set generation by making available a theoretical framework, and corresponding operational solution methods, which consistently distinguishes between used and unused paths. The availability of data...

  17. Models of pressure compaction and their application for wheat meal

    Science.gov (United States)

    Skonecki, Stanisław; Kulig, Ryszard; Łysiak, Grzegorz

    2014-03-01

    Processes of compaction of granular materials were described using selected models. The analysis of their accuracy on the example of wheat was the basis for the discussion on their applicability to the processing of plant-origin materials. Parameters of the model equations for wheat, compressed at 10-18% moisture content were calculated, and the relations between these parameters and wheat moisture were determined. It was found that the analyzed models described the pressure compaction of granular plant material with different accuracy, and were highly dependent on moisture. The study also indicated that the model of Ferrero et al. fits the experimental results well. The parameters of this model reflected very well the physical phenomena which occur during compression.
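
    Since the abstract does not reproduce the model equations, the sketch below fits a generic exponential compaction law (a stand-in, not the Ferrero et al. formula) to hypothetical density-pressure data:

```python
# Fitting a compaction model to pressure-density data. The model form and the
# data points are illustrative stand-ins; the paper's actual equations differ.
import numpy as np
from scipy.optimize import curve_fit

def compaction(P, rho0, rho_max, k):
    """Density approaches rho_max exponentially with pressure."""
    return rho_max - (rho_max - rho0) * np.exp(-k * P)

P = np.array([10, 20, 40, 60, 80, 100.0])             # pressure, MPa (made up)
rho = np.array([0.85, 1.00, 1.18, 1.27, 1.32, 1.35])  # bulk density, g/cm^3

popt, _ = curve_fit(compaction, P, rho, p0=[0.7, 1.4, 0.03])
print("rho0=%.3f  rho_max=%.3f  k=%.4f" % tuple(popt))
```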

  18. Structural Equation Modeling: Theory and Applications in Forest Management

    Directory of Open Access Journals (Sweden)

    Tzeng Yih Lam

    2012-01-01

    Full Text Available Forest ecosystem dynamics are driven by a complex array of simultaneous cause-and-effect relationships. Understanding this complex web requires specialized analytical techniques such as Structural Equation Modeling (SEM. The SEM framework and implementation steps are outlined in this study, and we then demonstrate the technique by application to overstory-understory relationships in mature Douglas-fir forests in the northwestern USA. A SEM model was formulated with (1 a path model representing the effects of successively higher layers of vegetation on late-seral herbs through processes such as light attenuation and (2 a measurement model accounting for measurement errors. The fitted SEM model suggested a direct negative effect of light attenuation on late-seral herbs cover but a direct positive effect of northern aspect. Moreover, many processes have indirect effects mediated through midstory vegetation. SEM is recommended as a forest management tool for designing silvicultural treatments and systems for attaining complex arrays of management objectives.

  19. Modelling of a cross flow evaporator for CSP application

    DEFF Research Database (Denmark)

    Sørensen, Kim; Franco, Alessandro; Pelagotti, Leonardo;

    2016-01-01

    Heat exchangers consisting of bundles of horizontal plain tubes with boiling on the shell side are widely used in industrial and energy systems applications. A particular recent interest in this special heat exchanger is in connection with Concentrated Solar Power (CSP...... for a coil type steam generator specifically designed for solar applications, this paper analyzes the use of several heat transfer, void fraction and pressure drop correlations for modelling the operation of such a type of steam generator. The paper after a brief review of the literature about...

  20. Overview on available animal models for application in leukemia research

    International Nuclear Information System (INIS)

    The term "leukemia" encompasses a group of diseases with a variable clinical and pathological presentation. Its cellular origin, its biology and the underlying molecular genetic alterations determine the very variable and individual disease phenotype. The focus of this review is to discuss the most important guidelines to be taken into account when aiming to develop an "ideal" animal model to study leukemia. The animal model should mimic all the clinical, histological and molecular genetic characteristics of the human phenotype and should be applicable as a clinically predictive model. It should meet all the requirements for use as a standardized model adaptable to basic research as well as to pharmaceutical practice. Furthermore, it should fulfill all the criteria to investigate environmental risk factors and the role of genomic mutations, and be applicable for therapeutic testing. These constraints limit the usefulness of some existing animal models, which are however very valuable for basic research. Hence, in this review we primarily focus on genetically engineered mouse models (GEMMs) to study the most frequent types of childhood leukemia. GEMMs are robust models with relatively low site-specific variability which can, with the help of the latest gene-modulating tools, be adapted to individual clinical and research questions. Moreover, they offer the possibility to restrict oncogene expression to a defined target population and to regulate its expression level as well as the timing of its activity. Until recently it was only possible in individual cases to develop a murine model which fulfills the above-mentioned requirements. Hence, the development of new regulatory elements to control targeted oncogene expression should be a priority. Tightly controlled and cell-specific oncogene expression can then be combined with a knock-in approach and will yield a robust murine model, which enables almost physiologic oncogene

  1. Modelling of transport processes in porous media for energy applications

    Energy Technology Data Exchange (ETDEWEB)

    Kangas, M.

    1996-12-31

    Flows in porous media are encountered in many branches of technology. In these phenomena, a fluid of some sort is flowing through the porous matrix of a solid medium. Examples of the fluid are water, air, gas and oil. The solid matrix can be soil, fissured rock, ceramics, filter paper, etc. The flow is in many cases accompanied by transfer of heat or solute within the fluid or between the fluid and the surrounding solid matrix. Chemical reactions or microbiological processes may also be taking place in the system. In this thesis, a 3-dimensional computer simulation model THETA for the coupled transport of fluid, heat, and solute in porous media has been developed and applied to various problems in the field of energy research. Although also applicable to porous medium applications in general, the version of the model described and used in this work is intended for studying the transport processes in aquifers, which are geological formations containing groundwater. The model highlights include versatile input and output routines, as well as modularity which, for example, enables an easy adaptation of the model for use as a subroutine in large energy system simulations. Special attention in the model development has been paid to high flow conditions, which may be present in Nordic esker aquifers located close to the ground surface. The simulation model has been written in the FORTRAN 77 programming language, enabling seamless operation in both PC and mainframe environments. For PC simulation, a special graphic user interface has been developed. The model has been used with success in a wide variety of applications, ranging from basic thermal analyses to thermal energy storage system evaluations and nuclear waste disposal simulations. The studies have shown that thermal energy storage is feasible also in Nordic high flow aquifers, although at the cost of a lower recovery temperature level, usually necessitating the use of heat pumps. In the nuclear waste studies, it
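
    A one-dimensional sketch of the kind of coupled transport THETA solves in three dimensions — heat carried by groundwater flow plus conduction through the porous matrix — with invented grid and material constants:

```python
# 1-D advection-diffusion of temperature in an aquifer: warm water injected at
# the left boundary is carried by flow and spread by conduction. Explicit
# upwind scheme; grid, velocity and diffusivity are illustrative values.
import numpy as np

nx, dx, dt = 200, 1.0, 100.0        # grid cells, cell size (m), time step (s)
v = 1e-4                            # thermal front velocity, m/s
alpha = 1e-6                        # effective thermal diffusivity, m^2/s

T = np.full(nx, 10.0)               # initial aquifer temperature, deg C
T[0] = 60.0                         # warm water injected at the left boundary

for _ in range(5000):               # explicit upwind advection + diffusion
    adv = -v * (T[1:-1] - T[:-2]) / dx
    dif = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (adv + dif)
    T[0], T[-1] = 60.0, T[-2]       # fixed inlet, free outflow

print("thermal front position (m):", np.argmax(T < 35.0) * dx)
```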

  2. High-fidelity geometric modeling for biomedical applications

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zeyun [Univ. of California, San Diego, CA (United States). Dept. of Mathematics; Holst, Michael J. [Univ. of California, San Diego, CA (United States). Dept. of Mathematics; Andrew McCammon, J. [Univ. of California, San Diego, CA (United States). Dept. of Chemistry and Biochemistry; Univ. of California, San Diego, CA (United States). Dept. of Pharmacology

    2008-05-19

    In this paper, we describe a combination of algorithms for high-fidelity geometric modeling and mesh generation. Although our methods and implementations are application-neutral, our primary target application is multiscale biomedical models that range in scales across the molecular, cellular, and organ levels. Our software toolchain implementing these algorithms is general in the sense that it can take as input a molecule in PDB/PQR form, a 3D scalar volume, or a user-defined triangular surface mesh that may have very low quality. The main goal of the work presented here is to generate high-quality, smooth surface triangulations from the aforementioned inputs, and to reduce mesh sizes by mesh coarsening. Tetrahedral meshes are also generated for finite element analysis in biomedical applications. Experiments on a number of bio-structures are demonstrated, showing that our approach possesses several desirable properties: feature preservation, local adaptivity, high quality, and smoothness (for surface meshes). Finally, the availability of this software toolchain will give researchers in computational biomedicine and other modeling areas access to higher-fidelity geometric models.

  3. Language Model Applications to Spelling with Brain-Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Anderson Mora-Cortes

    2014-03-01

    Full Text Available Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems, particularly in conjunction with other AAL technologies.
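
    A toy version of the predictive-spelling idea the review discusses: rank candidate completions of the typed prefix by corpus frequency, so the speller can offer whole words as single selections. The mini-corpus and ranking rule are invented for the sketch:

```python
# Toy word-completion model for a BCI speller: given the letters selected so
# far, propose the most frequent corpus words that start with that prefix.
from collections import Counter

corpus = ("the patient can spell words with the speller "
          "the system predicts the word the user wants").split()
freq = Counter(corpus)

def complete(prefix, k=3):
    """Top-k most frequent corpus words starting with the typed prefix."""
    cands = [(w, c) for w, c in freq.items() if w.startswith(prefix)]
    return [w for w, _ in sorted(cands, key=lambda wc: -wc[1])[:k]]

print(complete("th"))   # -> ['the']
print(complete("s"))    # -> e.g. ['spell', 'speller', 'system']
```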

  4. Intelligent control based on intelligent characteristic model and its application

    Institute of Scientific and Technical Information of China (English)

    吴宏鑫; 王迎春; 邢琰

    2003-01-01

    This paper presents a new intelligent control method based on intelligent characteristic model for a kind of complicated plant with nonlinearities and uncertainties, whose controlled output variables cannot be measured on line continuously. The basic idea of this method is to utilize intelligent techniques to form the characteristic model of the controlled plant according to the principle of combining the char-acteristics of the plant with the control requirements, and then to present a new design method of intelli-gent controller based on this characteristic model. First, the modeling principles and expression of the intelligent characteristic model are presented. Then based on description of the intelligent characteristic model, the design principles and methods of the intelligent controller composed of several open-loops and closed-loops sub controllers with qualitative and quantitative information are given. Finally, the ap-plication of this method in alumina concentration control in the real aluminum electrolytic process is in-troduced. It is proved in practice that the above methods not only are easy to implement in engineering design but also avoid the trial-and-error of general intelligent controllers. It has taken better effect in the following application: achieving long-term stable control of low alumina concentration and increasing the controlled ratio of anode effect greatly from 60% to 80%.

  5. A Multi-Model Approach for Uncertainty Propagation and Model Calibration in CFD Applications

    CERN Document Server

    Wang, Jian-xun; Xiao, Heng

    2015-01-01

    Proper quantification and propagation of uncertainties in computational simulations are of critical importance. This issue is especially challenging for CFD applications. A particular obstacle for uncertainty quantification in CFD problems is the large model discrepancy associated with the CFD models used for uncertainty propagation. Neglecting or improperly representing the model discrepancies leads to inaccurate and distorted uncertainty distributions for the Quantities of Interest. High-fidelity models, being accurate yet expensive, can accommodate only a small ensemble of simulations and thus lead to large interpolation errors and/or sampling errors; low-fidelity models can propagate a large ensemble, but can introduce large modeling errors. In this work, we propose a multi-model strategy to account for the influences of model discrepancies in uncertainty propagation and to reduce their impact on the predictions. Specifically, we take advantage of CFD models of multiple fidelities to estimate the model ...
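
    One standard way to combine fidelities, shown purely as an illustration (the paper's actual strategy may differ), is a two-level control-variate estimator: correct a small high-fidelity sample mean with a large low-fidelity ensemble:

```python
# Two-level control-variate (multi-fidelity) Monte Carlo estimator. The "high"
# and "low" fidelity models are stand-in functions of a random input.
import numpy as np

rng = np.random.default_rng(2)
high = lambda x: np.sin(x) + 0.05 * x**2      # expensive, accurate model
low  = lambda x: x - x**3 / 6                 # cheap approximation of sin(x)

x_small = rng.normal(0.0, 1.0, 50)            # few affordable high-fi runs
x_large = rng.normal(0.0, 1.0, 50000)         # many cheap low-fi runs

# Control variate: shift the small high-fi sample mean by the difference
# between the large and small low-fi sample means.
estimate = high(x_small).mean() + (low(x_large).mean() - low(x_small).mean())
print("multi-fidelity estimate:", estimate)
print("high-fi only estimate:  ", high(x_small).mean())
```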

  6. MOGO: Model-Oriented Global Optimization of Petascale Applications

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D.; Shende, Sameer S.

    2012-09-14

    The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale Systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework where empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO made a reasonable impact on existing DOE applications and systems. New tools and techniques were developed which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.

  7. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5 or 3D. Generally, three main Geomatics approaches are used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images, applying close range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic and manual methods), and another based on data input techniques (photogrammetry and laser techniques). This paper gives an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques, the applications of virtual 3D city models, and the present trend in 3D city modeling. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  8. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto;

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen...

  9. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    Science.gov (United States)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty expressed as a percent of full scale. However, in some applications we seek enhanced performance at the low end of the range, and therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to want to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in transducer applications where low-range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
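
    The core idea — weighting calibration residuals by the reciprocal of the reading so the fit targets percent-of-reading rather than percent-of-full-scale error — can be sketched as a weighted least-squares problem on synthetic data:

```python
# Weighted least-squares calibration: weighting each equation by 1/reading
# makes the fit minimize relative (percent-of-reading) error. Data synthetic.
import numpy as np

rng = np.random.default_rng(3)
raw = np.linspace(0.1, 10.0, 30)                         # transducer output (mV)
truth = 5.0 * raw + 0.02 * raw**2                        # true applied load
meas = truth * (1.0 + 0.01 * rng.normal(size=raw.size))  # calibration readings

A = np.vander(raw, 3)                                    # quadratic model

# Unweighted least squares targets absolute (percent-of-full-scale) error;
# scaling each row by 1/reading targets percent-of-reading error instead.
w = 1.0 / meas
coef, *_ = np.linalg.lstsq(A * w[:, None], meas * w, rcond=None)

fit = A @ coef
print("max percent-of-reading error: %.2f%%"
      % (np.max(np.abs(fit - meas) / meas) * 100))
```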

  10. Data Warehouse Model For Mobile-Based Applications

    Directory of Open Access Journals (Sweden)

    Muhammad Shahbani Abu Bakar

    2016-06-01

    Full Text Available Analysis and design play very important roles in Data Warehouse (DW) system development and form the backbone of the success or failure of any DW project. Emerging trends in analytics-based applications require the DW system to be implemented in the mobile environment. However, current analysis and design approaches are based on existing DW environments and focus on the deployment of the DW system in traditional web-based applications. This creates limitations on user access and on the use of analytical information by decision makers, and consequently prolongs the adoption of analytics-based applications by users and organizations. This research aims to suggest an approach for modeling the DW and designing the DW system in mobile environments. A variant dimension modeling technique was used to enhance the DW schemas in order to accommodate the requirements of mobile characteristics in the DW design. The proposed mobile DW system was evaluated by expert review and supports the successful implementation of mobile DW-based applications.

  11. Force modeling for incision surgery into tissue with haptic application

    Science.gov (United States)

    Kim, Pyunghwa; Kim, Soomin; Choi, Seung-Hyun; Oh, Jong-Seok; Choi, Seung-Bok

    2015-04-01

    This paper presents a novel force model for incision surgery into tissue and its haptic application for surgeons. During robot-assisted incision surgery, it is urgent to develop a haptic system for realizing the sense of touch in the surgical area, because surgeons cannot feel the tissue directly. To achieve this goal, a force model for the reaction force of biological tissue is proposed from an energy perspective. The model describes the reaction force during incision, focusing on the elastic behaviour of tissue. Furthermore, the force calculated from the model is realized by a haptic device using magnetorheological fluid (MRF). The performance of the realized force, controlled by a PID controller with open-loop control, is evaluated.

  12. Modern methodology and applications in spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

    This book provides a modern introductory tutorial on specialized methodological and applied aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter deals with non-parametric Bayesian inference via a recently developed framework known as kernel mean embedding which has had a significant influence in machine learning disciplines. The second chapter takes up non-parametric statistical methods for spatial field reconstruction and exceedance probability estimation based on Gaussian process-based models in the context of wireless sensor network data. The third chapter presents signal-processing methods applied to acoustic mood analysis based on music signal analysis. The fourth chapter covers models that are applicable to time series modeling in the domain of speech and language processing. This includes aspects of factor analysis, independent component an...

  13. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strength, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanically built-up dual parameters, the model forms a dual method of multi-functional prediction, which has far more versatile and exceptional applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  14. APPLICATION OF VARIABLE-FIDELITY MODELS TO AERODYNAMIC OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    XIA Lu; GAO Zheng-hong

    2006-01-01

    For aerodynamic shape optimization, the approximation management framework (AMF) method is used to organize and manage the variable-fidelity models. The method can take full advantage of cheaper, low-fidelity models to concentrate the main workload on them in the iterative optimization procedure, while using more expensive, high-fidelity models to monitor the procedure and make the method globally convergent to a solution of the high-fidelity problem. Finally, a zero-order variable-fidelity aerodynamic optimization management framework and search algorithm are demonstrated on an airfoil optimization for a flying-wing UAV. Compared to the original shape, the aerodynamic performance of the optimal shape is improved. The results show the method has good feasibility and applicability.

  15. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  16. Systems Engineering Model and Training Application for Desktop Environment

    Science.gov (United States)

    May, Jeffrey T.

    2010-01-01

    Provide a graphical user interface based simulator for desktop training, operations and procedure development and system reference. This simulator allows for engineers to train and further understand the dynamics of their system from their local desktops. It allows the users to train and evaluate their system at a pace and skill level based on the user's competency and from a perspective based on the user's need. The simulator will not require any special resources to execute and should generally be available for use. The interface is based on a concept of presenting the model of the system in ways that best suits the user's application or training needs. The three levels of views are Component View, the System View (overall system), and the Console View (monitor). These views are portals into a single model, so changing the model from one view or from a model manager Graphical User Interface will be reflected on all other views.

  17. Application of mesoscale modeling optimization to development of advanced materials

    Institute of Scientific and Technical Information of China (English)

    SONG Xiaoyan

    2004-01-01

    The rapid development of computer modeling in recent years offers opportunities for materials preparation in a more economic and efficient way. In the present paper, a practicable route for research and development of advanced materials by applying the visual and quantitative modeling technique on the mesoscale is introduced. A 3D simulation model is developed to describe the microstructure evolution during the whole process of deformation, recrystallization and grain growth in a material containing particles. In the light of simulation optimization, the long-term stabilized fine grain structures ideal for high-temperature applications are designed and produced. In addition, the feasibility, reliability and prospects of material development based on mesoscale modeling are discussed.

  18. Spatial extended hazard model with application to prostate cancer survival.

    Science.gov (United States)

    Li, Li; Hanson, Timothy; Zhang, Jiajia

    2015-06-01

    This article develops a Bayesian semiparametric approach to the extended hazard model, with generalization to high-dimensional spatially grouped data. County-level spatial correlation is accommodated marginally through the normal transformation model of Li and Lin (2006, Journal of the American Statistical Association 101, 591-603), using a correlation structure implied by an intrinsic conditionally autoregressive prior. Efficient Markov chain Monte Carlo algorithms are developed, especially applicable to fitting very large, highly censored areal survival data sets. Per-variable tests for proportional hazards, accelerated failure time, and accelerated hazards are efficiently carried out with and without spatial correlation through Bayes factors. The resulting reduced, interpretable spatial models can fit significantly better than a standard additive Cox model with spatial frailties. PMID:25521422

  19. Von Neumann's growth model: Statistical mechanics and biological applications

    Science.gov (United States)

    De Martino, A.; Marinari, E.; Romualdi, A.

    2012-09-01

    We review recent work on the statistical mechanics of Von Neumann's growth model and discuss its application to cellular metabolic networks. In this context, we present a detailed analysis of the physiological scenario underlying optimality à la Von Neumann in the metabolism of the bacterium E. coli, showing that optimal solutions are characterized by a considerable microscopic flexibility accompanied by a robust emergent picture for the key physiological functions. This suggests that the ideas behind optimal economic growth in Von Neumann's model can be helpful in uncovering functional organization principles of cell energetics.

  20. Impact of Two Realistic Mobility Models for Vehicular Safety Applications

    OpenAIRE

    RAHMAN, Md. Habibur; Nasiruddin, Mohammad

    2014-01-01

    Vehicular safety applications are intended for VANETs and rely on inter-vehicle communication: a vehicle travelling safely at high velocity must interconnect quickly and dependably. In this work we examined the impact of the IDM-IM and IDM-LC mobility models on the AODV, AOMDV, DSDV and OLSR routing protocols, using the Nakagami propagation model and the IEEE 802.11p MAC protocol, in a particular urban scenario of Dhaka city. The periodic broadcast (PBC) agent is employed to transmit...

  1. Modelling application for cognitive reliability and error analysis method

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2013-10-01

    Full Text Available The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the fully automatic factory model has led to partially automated configurations of production systems. In this scenario, the centrality and responsibility of the role entrusted to human operators are heightened, because the role requires problem-solving and decision-making ability. Thus, the human operator is the core of a cognitive process that leads to decisions and influences the safety of the whole system as a function of his or her reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  2. Proposed Bilingual Model for Right to Left Language Applications

    Directory of Open Access Journals (Sweden)

    Farhan M Al Obisat

    2016-09-01

    Full Text Available Using right-to-left languages (RLL) in software programming requires switching the direction of many components in the interface. Preserving the original interface layout and only changing the language may result in different semantics or interpretations of the content. However, this aspect is often dismissed in the field. This research therefore proposes a Bilingual Model (BL) to check and correct the directions in social media applications. Moreover, test-driven development (TDD) for RLL, such as Arabic, is considered in the testing methodologies. Similarly, the bilingual analysis has to follow both the TDD and BL models.

  3. Modeling lifetime of high power IGBTs in wind power applications

    DEFF Research Database (Denmark)

    Busca, Cristian

    2011-01-01

    The wind power industry is continuously developing bringing to the market larger and larger wind turbines. Nowadays reliability is more of a concern than in the past especially for the offshore wind turbines since the access to offshore wind turbines in case of failures is both costly and difficult...... an overview of the different aspects of lifetime modeling of high power IGBTs in wind power applications. In the beginning, wind turbine reliability survey results are briefly reviewed in order to gain an insight into wind turbine subassembly failure rates and associated downtimes. After that the...... most common high power IGBT failure mechanisms and lifetime prediction models are reviewed in more detail....
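
    One of the most common lifetime prediction forms reviewed in this literature is a Coffin-Manson law with an Arrhenius term for the mean junction temperature. A sketch with placeholder constants (not values from the thesis or any module datasheet):

```python
# Coffin-Manson-Arrhenius bond-wire lifetime form for power cycling:
#   Nf = A * dTj^-n * exp(Ea / (k * Tjm))
# The constants A, n and Ea below are placeholders chosen to give a
# plausible order of magnitude, not fitted or datasheet values.
import math

def cycles_to_failure(delta_Tj, Tj_mean_K, A=1.5e5, n=5.0, Ea=9.9e-20):
    """Predicted thermal cycles to failure for a junction temperature swing
    delta_Tj (K) around a mean junction temperature Tj_mean_K (K)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    return A * delta_Tj ** (-n) * math.exp(Ea / (k * Tj_mean_K))

# A 40 K junction temperature swing around an 80 degC mean:
print("cycles to failure: %.2e" % cycles_to_failure(40.0, 353.0))
```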

  4. Sensors advancements in modeling, design issues, fabrication and practical applications

    CERN Document Server

    Mukhopadhyay, Subhash Chandra

    2008-01-01

    Sensors are the most important component in any system and engineers in any field need to understand the fundamentals of how these components work, how to select them properly and how to integrate them into an overall system. This book has outlined the fundamentals, analytical concepts, modelling and design issues, technical details and practical applications of different types of sensors, electromagnetic, capacitive, ultrasonic, vision, Terahertz, displacement, fibre-optic and so on. The book: addresses the identification, modeling, selection, operation and integration of a wide variety of se

  5. Models and applications of chaos theory in modern sciences

    CERN Document Server

    Zeraoulia, Elhadj

    2011-01-01

    This book presents a select group of papers that provide a comprehensive view of the models and applications of chaos theory in medicine, biology, ecology, economy, electronics, mechanical, and the human sciences. Covering both the experimental and theoretical aspects of the subject, it examines a range of current topics of interest. It considers the problems arising in the study of discrete and continuous time chaotic dynamical systems modeling the several phenomena in nature and society-highlighting powerful techniques being developed to meet these challenges that stem from the area of nonli

  6. Fired Models of Air-gun Source and Its Application

    Institute of Scientific and Technical Information of China (English)

    Luo Guichun; Ge Hongkui; Wang Baoshan; Hu Ping; Mu Hongwang; Chen Yong

    2008-01-01

    The air-gun is an important active seismic source. With the development of the theory of air-gun arrays, the technique of air-gun array design has matured and is widely used in petroleum exploration and geophysics. To adapt it to different research domains, different combinations and firing models are needed. At present, there are two firing models for an air-gun source, namely a reinforced initial pulse and a reinforced first bubble pulse. The firing time, spacing between single guns, frequency and resolution of the two models are different. This comparison can supply the basis for their wider application.

  7. Application of Kalman Filter on modelling interest rates

    Directory of Open Access Journals (Sweden)

    Long H. Vo

    2014-03-01

    Full Text Available This study aims to test the feasibility of using a data set of 90-day bank bill forward rates from the Australian market to predict spot interest rates. To achieve this goal, I apply the Kalman filter to a state space model with a time-varying state variable. It is documented that in the case of short-term interest rates, the state space model yields robust predictive power. In addition, this predictive power of the implied forward rate is heavily affected by the existence of a time-varying risk premium in the term structure.
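
    A compact sketch of the filtering step on simulated data: a local-level state space model in which the latent short rate follows a random walk and the forward rate is its noisy observation. The noise variances and the series itself are illustrative, not the paper's estimates:

```python
# Scalar Kalman filter for a local-level state space model: latent short rate
# as a random walk, forward rate as its noisy observation. Data simulated.
import numpy as np

rng = np.random.default_rng(4)
n = 200
state = np.cumsum(rng.normal(0, 0.02, n)) + 5.0   # latent short rate (%)
forward = state + rng.normal(0, 0.10, n)          # observed forward rate

q, r = 0.02**2, 0.10**2                           # state / observation noise
x, p = forward[0], 1.0                            # filter initialisation
preds = []
for y in forward:
    p += q                                        # predict (random-walk state)
    k_gain = p / (p + r)                          # Kalman gain
    x += k_gain * (y - x)                         # update with new observation
    p *= 1.0 - k_gain
    preds.append(x)

print("RMSE vs latent rate:", np.sqrt(np.mean((np.array(preds) - state) ** 2)))
```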

  8. Numerical modeling in electroporation-based biomedical applications

    OpenAIRE

    Pavšelj, Nataša; Miklavčič, Damijan

    2015-01-01

    Background. Numerous experiments have to be performed before a biomedical application is put to practical use in a clinical environment. As a complementary work to in vitro, in vivo and medical experiments, we can use analytical and numerical models to represent, as realistically as possible, real biological phenomena of, in our case, electroporation. In this way we can evaluate different electrical parameters in advance, such as pulse amplitude, duration, number of pulses, or different electrod...

  9. Numerical modeling in electroporation-based biomedical applications:

    OpenAIRE

    Miklavčič, Damijan; Pavšelj, Nataša

    2008-01-01

    Background. Numerous experiments have to be performed before a biomedical application is put to practical use in a clinical environment. As a complementary work to in vitro, in vivo and medical experiments, we can use analytical and numerical models to represent, as realistically as possible, real biological phenomena of, in our case, electroporation. In this way we can evaluate different electrical parameters in advance, such as pulse amplitude, duration, number of pulses, or different electrod...

  10. Defined Contribution Model: Definition, Theory and an Application for Turkey

    OpenAIRE

    Metin Ercen; Deniz Gokce

    1998-01-01

    Based on a numerical application that employs social and economic parameters of the Turkish economy, this study attempts to demonstrate that the current collapse in the Turkish social security system is not unavoidable. The present social security system in Turkey is based on the defined benefit model of pension provision. On the other hand, recent proposals for reform in the social security system are based on a multipillar system, where one of the alternatives is a defined contribution pens...

  11. Generalisation of geographic information cartographic modelling and applications

    CERN Document Server

    Mackaness, William A; Sarjakoski, L Tiina

    2011-01-01

    Theoretical and Applied Solutions in Multi-Scale Mapping. Users have come to expect instant access to up-to-date geographical information, with global coverage, presented at widely varying levels of detail as digital and paper products, and customisable data that can readily be combined with other geographic information. These requirements present an immense challenge to those supporting the delivery of such services (National Mapping Agencies (NMAs), government departments, and private businesses). Generalisation of Geographic Information: Cartographic Modelling and Applications provides a detailed review

  12. Ozone modeling within plasmas for ozone sensor applications

    OpenAIRE

    Arshak, Khalil; Forde, Edward; Guiney, Ivor

    2007-01-01

    Ozone (O3) is potentially hazardous to human health, and accurate prediction and measurement of this gas are essential in addressing its associated health risks. This paper presents theory to predict the levels of ozone concentration emitted from a dielectric barrier discharge (DBD) plasma for ozone sensing applications. This is done by postulating the kinetic model for ozone generation, with a DBD plasma at atmospheric pressure in air, in the form of a set of rate equations....
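
    The rate-equation approach can be illustrated with a deliberately simplified three-reaction scheme (O2 dissociation, three-body ozone formation, ozone loss) integrated numerically; the rate constants below are placeholders, not DBD plasma data:

```python
# Simplified ozone kinetics as coupled rate equations, integrated with scipy.
# Three toy reactions: O2 dissociation, O3 formation, O3 loss back to O2.
# Rate constants and units are placeholders for illustration only.
from scipy.integrate import solve_ivp

k_diss, k_form, k_loss = 1.0e-2, 5.0e-3, 1.0e-4   # placeholder rate constants

def rhs(t, y):
    o2, o, o3 = y
    d_o2 = -k_diss * o2 - k_form * o * o2 + 1.5 * k_loss * o3
    d_o = 2.0 * k_diss * o2 - k_form * o * o2     # O atoms from dissociation
    d_o3 = k_form * o * o2 - k_loss * o3          # ozone formation and loss
    return [d_o2, d_o, d_o3]

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0, 0.0])
print("relative O3 concentration at t = 100:", sol.y[2, -1])
```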

  13. Generalized Bogoliubov Polariton Model: An Application to Stock Exchange Market

    Science.gov (United States)

    Thuy Anh, Chu; Anh, Truong Thi Ngoc; Lan, Nguyen Tri; Viet, Nguyen Ai

    2016-06-01

    A generalized Bogoliubov method for investigating non-simple and complex systems was developed. We take a two-branch polariton Hamiltonian model in the second quantization representation and replace the energies of the quasi-particles by two distribution functions of the objects under study. Application to the stock exchange market is taken as an example, where the change in the form of the return distribution functions from Boltzmann-like to Gaussian-like is studied.

  14. House Price Risk Models for Banking and Insurance Applications

    OpenAIRE

    Katja Hanewald; Michael Sherris

    2011-01-01

    The recent international credit crisis has highlighted the significant exposure that banks and insurers, especially mono-line credit insurers, have to residential house price risk. This paper provides an assessment of risk models for residential property for applications in banking and insurance including pricing, risk management, and portfolio management. Risk factors and heterogeneity of house price returns are assessed at a postcode-level for house prices in the major capital city of Sydne...

  15. Predictive Modeling of Addiction Lapses in a Mobile Health Application

    OpenAIRE

    Chih, Ming-Yuan; Patton, Timothy; McTavish, Fiona M.; Isham, Andrew; Judkins-Fisher, Chris L.; Atwood, Amy K.; Gustafson, David H.

    2013-01-01

    The chronically relapsing nature of alcoholism leads to substantial personal, family, and societal costs. Addiction-Comprehensive Health Enhancement Support System (A-CHESS) is a smartphone application that aims to reduce relapse. To offer targeted support to patients who are at risk of lapses within the coming week, a Bayesian network model to predict such events was constructed using responses on 2,934 weekly surveys (called the Weekly Check-in) from 152 alcohol-dependent individuals who re...

  16. APPLICATION OF LANDUSE CHANGE MODELING FOR PROTECTED AREA MONITORING

    OpenAIRE

    Jaafari, Shirkou; Shabani, Afshin Alizadeh; Danehkar, Afshin; Nazarisamani, Aliakbar

    2014-01-01

    Globally, land use change impacts biodiversity, water and radiation budgets, emission of greenhouse gases, carbon cycling, and livelihoods. The study of LUCC and its dynamics is crucial for environmental management, especially with regard to sustainable agriculture and forestry. Different models, in terms of structure and application, have been used to understand LUCC dynamics. The present study aims to simulate the spatial pattern of land use change in the Varjin protected area, Iran. Land cove...

  17. A Basic Business Model for Commercial Application of Identification Tools

    OpenAIRE

    Kittl, Christian; Schalk, Peter; Dorigo Salamon, Nicola; Martellos, Stefano

    2010-01-01

    Within the three-year EU project KeyToNature various identification tools and applications in formal education for teaching biodiversity have been researched and developed. Building on the competencies of the involved partner organisations and the expertise gained in this domain, the paper outlines a business model which aims at commercially exploiting the project results on a broader scale by describing the value proposition, products & services, value architecture, revenue...

  18. An overview of recent applications of computational modelling in neonatology.

    Science.gov (United States)

    Wrobel, Luiz C; Ginalski, Maciej K; Nowak, Andrzej J; Ingham, Derek B; Fic, Anna M

    2010-06-13

    This paper reviews some of our recent applications of computational fluid dynamics (CFD) to model heat and mass transfer problems in neonatology and investigates the major heat and mass-transfer mechanisms taking place in medical devices, such as incubators, radiant warmers and oxygen hoods. It is shown that CFD simulations are very flexible tools that can take into account all modes of heat transfer in assisting neonatal care and improving the design of medical devices. PMID:20439275

  19. Mathematical modeling of magnetostrictive nanowires for sensor application

    OpenAIRE

    Shankar, Krishnan

    2011-01-01

    Magnetostrictive wires of diameter in the nanometer scale have been proposed for application as acoustic sensors [Downey et al., 2008], [Yang et al., 2006]. The sensing mechanism is expected to operate in the bending regime. In this work we derive a variational theory for the bending of magnetostrictive nanowires starting from a full 3-dimensional continuum theory of magnetostriction. We recover a theory which looks like a typical Euler-Bernoulli bending model but includes an extra term contr...

  20. Applications of aerosol model in the reactor containment

    OpenAIRE

    Mossad Slama; Mohammad Omar Shaker; Ragaa Aly; Magdy Sirwah

    2014-01-01

    The study simulates aerosol dynamics including coagulation, deposition and source reinforcement. Typical applications are nuclear reactor aerosols, aerosol reaction chambers and the production of purified materials. The model determines the aerosol number and volume distributions for an arbitrary number of particle-size classes, called sections. The user specifies the initial aerosol size distribution and the source generation rate of each component in each section. For spatially ho...

  1. Advance in Application of Regional Climate Models in China

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei; YAN Minhua; CHEN Panqin; XU Helan

    2008-01-01

    Regional climate models have become the powerful tools for simulating regional climate and its changeprocess and have been widely used in China. Using regional climate models, some research results have been obtainedon the following aspects: 1) the numerical simulation of East Asian monsoon climate, including exceptional monsoonprecipitation, summer precipitation distribution, East Asian circulation, multi-year climate average condition, summerrain belt and so on; 2) the simulation of arid climate of the western China, including thermal effect of the Qing-hai-Tibet Plateau, the plateau precipitation in the Qilian Mountains; and the impacts of greenhouse effects (CO2 dou-bling) upon climate in the western China; and 3) the simulation of the climate effect of underlying surface changes, in-cluding the effect of soil on climate formation, the influence of terrain on precipitation, the effect of regional soil deg-radation on regional climate, the effect of various underlying surfaces on regional climate, the effect of land-sea con-trast on the climate formulation, the influence of snow cover over the plateau regions on the regional climate, the effectof vegetation changes on the regional climate, etc. In the process of application of regional climate models, the prefer-ences of the models are improved so that better simulation results are gotten. At last, some suggestions are made aboutthe application of regional climate models in regional climate research in the future.

  2. Examination of compensatory model application in site selection.

    Science.gov (United States)

    Zaredar, Narges; Kheirkhah Zarkesh, Mir Masoud

    2012-01-01

    Nowadays, the use of compensatory models in land evaluation and site selection is widespread, with methods such as the analytical hierarchy process used more than others. Whether the use of compensatory model methods in site selection studies is essentially correct is an issue that deserves scrutiny. In such models, the weakness of one factor can be offset by the strength of others: in urban development site selection, if a location has an unsuitable aspect but a suitable slope, the aspect deficiency is compensated by the slope. The troubling implication is that a deficiency such as proximity to a fault can likewise be compensated by the strength of other parameters, so that a place close to a fault may finally be diagnosed as a suitable area for urban development. This research examines the application of compensatory models in urban development site selection for the Taleghan Basin, using the analytical hierarchy process as one of the compensatory model methods. In doing so, the weaknesses and strengths of the compensatory model were analyzed carefully. Results indicate that despite the model's speed, ease, and low cost, it has weaknesses such as very high sensitivity to the decision maker's judgment. PMID:21442190
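
    The compensation problem the authors describe is easy to demonstrate numerically: with an AHP-style weighted sum, a site almost on a fault can still outscore a safer site when its other criteria are strong. The weights and scores below are invented:

```python
# Demonstrating the compensatory-model problem: a weighted linear combination
# lets strong terrain criteria offset a dangerous fault-distance score.
# All weights and suitability scores are hypothetical.
import numpy as np

criteria = ["slope", "aspect", "fault_distance"]
weights = np.array([0.4, 0.3, 0.3])     # AHP-derived priorities (hypothetical)

site_a = np.array([0.9, 0.9, 0.1])      # excellent terrain, almost on a fault
site_b = np.array([0.5, 0.5, 0.9])      # mediocre terrain, far from faults

print("site A:", weights @ site_a)      # 0.66 -- "wins" despite the fault risk
print("site B:", weights @ site_b)      # 0.62
```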

  3. Research on mixed network architecture collaborative application model

    Science.gov (United States)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, the rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (based on the Client/Server or Browser/Server model) does not support this well. Collaborative applications are one good solution. A collaborative application has four main problems to resolve: consistency and co-edit conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM, based on agents and a multi-level cache, is put forward. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment, and has been used in many fields such as computer science and automation. Agents bring new methods for cooperation and for accessing spatial data. The multi-level cache holds a part of the full data set; it reduces the network load and improves the access and handling of spatial data, especially when editing it. With agent technology, we make full use of these intelligent characteristics for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  4. Receptor modeling application framework for particle source apportionment.

    Science.gov (United States)

    Watson, John G; Zhu, Tan; Chow, Judith C; Engelbrecht, Johann; Fujita, Eric M; Wilson, William E

    2002-12-01

    Receptor models infer contributions from particulate matter (PM) source types using multivariate measurements of particle chemical and physical properties. Receptor models complement source models that estimate concentrations from emissions inventories and transport meteorology. Enrichment factor, chemical mass balance, multiple linear regression, eigenvector, edge detection, neural network, aerosol evolution, and aerosol equilibrium models have all been used to solve particulate air quality problems, and more than 500 citations of their theory and application document these uses. While elements, ions, and carbons were often used to apportion TSP, PM10, and PM2.5 among many source types, many of these components have been reduced in source emissions such that more complex measurements of carbon fractions, specific organic compounds, single particle characteristics, and isotopic abundances now need to be measured in source and receptor samples. Compliance monitoring networks are not usually designed to obtain data for the observables, locations, and time periods that allow receptor models to be applied. Measurements from existing networks can be used to form conceptual models that allow the needed monitoring network to be optimized. The framework for using receptor models to solve air quality problems consists of: (1) formulating a conceptual model; (2) identifying potential sources; (3) characterizing source emissions; (4) obtaining and analyzing ambient PM samples for major components and source markers; (5) confirming source types with multivariate receptor models; (6) quantifying source contributions with the chemical mass balance; (7) estimating profile changes and the limiting precursor gases for secondary aerosols; and (8) reconciling receptor modeling results with source models, emissions inventories, and receptor data analyses.
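
    The chemical mass balance step (6) can be sketched, in its simplest unweighted form, as an overdetermined linear system in which ambient species concentrations equal a source-profile matrix times unknown source contributions; the species, profiles, and concentrations below are invented (real CMB applications use effective-variance weighted least squares).

```python
import numpy as np

# Rows: measured species at the receptor; columns: source types (invented).
F = np.array([
    [0.20, 0.01],   # species 1 fraction in source 1 and source 2 emissions
    [0.05, 0.30],   # species 2
    [0.10, 0.02],   # species 3
])
c = np.array([4.1, 6.2, 2.3])   # ambient concentrations (ug/m3)

# Ordinary least-squares estimate of source contributions S in c ~= F @ S.
S, residuals, rank, _ = np.linalg.lstsq(F, c, rcond=None)
print(S)   # estimated contribution of each source type (ug/m3)
```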

  5. A double continuum hydrological model for glacier applications

    Science.gov (United States)

    de Fleurian, B.; Gagliardini, O.; Zwinger, T.; Durand, G.; Le Meur, E.; Mair, D.; Råback, P.

    2014-01-01

    The flow of glaciers and ice streams is strongly influenced by the presence of water at the interface between ice and bed. In this paper, a hydrological model evaluating the subglacial water pressure is developed with the final aim of estimating the sliding velocities of glaciers. The global model fully couples the subglacial hydrology and the ice dynamics through a water-dependent friction law. The hydrological part of the model follows a double continuum approach which relies on the use of porous layers to compute water heads in inefficient and efficient drainage systems. This method has the advantage of a relatively low computational cost that would allow its application to large ice bodies such as Greenland or Antarctica ice streams. The hydrological model has been implemented in the finite element code Elmer/Ice, which simultaneously computes the ice flow. Herein, we present an application to the Haut Glacier d'Arolla for which we have a large number of observations, making it well suited to the purpose of validating both the hydrology and ice flow model components. The selection of hydrological, under-determined parameters from a wide range of values is guided by comparison of the model results with available glacier observations. Once this selection has been performed, the coupling between subglacial hydrology and ice dynamics is undertaken throughout a melt season. Results indicate that this new modelling approach for subglacial hydrology is able to reproduce the broad temporal and spatial patterns of the observed subglacial hydrological system. Furthermore, the coupling with the ice dynamics shows good agreement with the observed spring speed-up.

  6. 4Mx Soil-Plant Model: Applications, Opportunities and Challenges

    Directory of Open Access Journals (Sweden)

    Nándor Fodor

    2012-12-01

    Full Text Available Crop simulation models describe the main processes of the soil-plant system in a dynamic way, usually in a daily time-step. With the help of these models we may monitor the soil- and plant-related processes of the simulated system as they evolve according to the atmospheric and environmental conditions. Crop models can be successfully applied in the following areas: (1) Education: by promoting system-oriented thinking, a comprehensive overview of the interrelations of the soil-plant system, as well as of the environmental-protection aspects of human activities, can be presented. (2) Research: the results of observations as well as of experiments can be extrapolated in time and space; thus, for example, the possible effects of global climate change can be estimated. (3) Practice: model calculations can be used in intelligent irrigation control and decision support systems, as well as for providing scientific background for policy makers. The most spectacular feature of the 4Mx crop model is that its graphical user interface enables the user to alter not only the parameters of the model but also the function types of its governing equations. The applicability of the 4Mx model is presented via several case studies.

  7. Application of data fusion modeling (DFM) to site characterization

    International Nuclear Information System (INIS)

    Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss-Newton method, where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points.
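
    In generic terms (our notation, not necessarily the report's), a Bayesian inverse estimate of this kind balances data misfit against the Markov-random-field prior, and the Gauss-Newton iteration minimizes an objective of the form:

```latex
\hat{x} \;=\; \arg\min_{x}\;
\tfrac{1}{2}\,\bigl[d - h(x)\bigr]^{\mathsf T} R^{-1} \bigl[d - h(x)\bigr]
\;+\;
\tfrac{1}{2}\,(x - \mu)^{\mathsf T} P^{-1} (x - \mu)
```

    where d collects the head and conductivity data, h is the forward (variably saturated flow) model, R is the measurement-error covariance, and (mu, P) encode the Markov random field prior on the heterogeneous parameter field x.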

  8. Numerical modelling of channel migration with application to laboratory rivers

    Institute of Scientific and Technical Information of China (English)

    Jian SUN; Bin-liang LIN; Hong-wei KUANG

    2015-01-01

    The paper presents the development of a morphological model and its application to experimental model rivers. The model takes into account the key processes of channel migration, including bed deformation, bank failure, and wetting and drying. Secondary flows in bends play an important role in lateral sediment transport, which further affects channel migration. A new formula has been derived to predict the near-bed secondary flow speed, in which the magnitude of the speed is linked to the lateral water level gradient. Since only non-cohesive sediment is considered in the current study, the bank failure is modelled based on the concept of the submerged angle of repose. The wetting and drying process is modelled using an existing method. Comparisons between the numerical model predictions and experimental observations for various discharges have been made. It is found that the model-predicted channel planform and cross-sectional shapes agree generally well with the laboratory observations. A scenario analysis is also carried out to investigate the impact of secondary flow on the channel migration process. It shows that if the effect of secondary flow is ignored, the channel size in the lateral direction will be seriously underestimated.

  9. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

    Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computation is implemented, and a data-driven smoothing parameter is incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model for applications outside of yield forecasting.
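
    Schematically (in our notation, not necessarily the authors'), such a functional dynamic factor model represents the yield y_t(tau) at maturity tau through a few smooth loading curves driven by factor time series:

```latex
y_t(\tau) \;=\; \sum_{k=1}^{K} f_k(\tau)\,\beta_{t,k} \;+\; \varepsilon_t(\tau),
\qquad
\beta_t \;=\; A\,\beta_{t-1} + \eta_t
```

    with the functional loading curves f_k estimated jointly with the autoregressive dynamics of the factors beta_t in the single EM step described above.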

  10. New Trends in Model Coupling Theory, Numerics and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Coquel, F. [CMAP Ecole Polytech, CNRS, UMR 7641, F-91128 Palaiseau (France); Godlewski, E. [UPMC Univ Paris 6, UMR 7598, Lab Jacques Louis Lions, F-75005 Paris (France); Herard, J. M. [EDF RD, F-78400 Chatou (France); Segre, J. [CEA Saclay, DEN, DM2S, F-91191 Gif Sur Yvette (France)

    2010-07-01

    This special issue comprises selected papers from the workshop New Trends in Model Coupling, Theory, Numerics and Applications (NTMC'09), which took place in Paris, September 2-4, 2009. The search for optimal technological solutions in a large number of industrial systems requires numerical simulations of complex phenomena, which are often characterized by the coupling of models related to various space and/or time scales. Thus, so-called multi-scale modelling has been a thriving scientific activity connecting applied mathematics with other disciplines such as physics, chemistry, biology, and even the social sciences. To illustrate the variety of fields in which model coupling naturally occurs, we may quote: meteorology, where it is required to take into account several turbulence scales or the interaction between oceans and atmosphere, as well as regional models within a global description; solid mechanics, where a thorough understanding of complex phenomena such as the propagation of cracks requires coupling models from the atomistic level to the macroscopic level; plasma physics, for fusion energy for instance, where dense plasmas and collisionless plasmas coexist; multiphase fluid dynamics, where several types of flow corresponding to several types of models are present simultaneously in complex circuits; and social behaviour analysis, with interaction between individual actions and collective behaviour. (authors)

  11. Current developments in soil organic matter modeling and the expansion of model applications: a review

    International Nuclear Information System (INIS)

    Soil organic matter (SOM) is an important natural resource. It is fundamental to soil and ecosystem functions across a wide range of scales, from site-specific soil fertility and water holding capacity to global biogeochemical cycling. It is also a highly complex material that is sensitive to direct and indirect human impacts. In SOM research, simulation models play an important role by providing a mathematical framework to integrate, examine, and test the understanding of SOM dynamics. Simulation models of SOM are also increasingly used in more ‘applied’ settings to evaluate human impacts on ecosystem function, and to manage SOM for greenhouse gas mitigation, improved soil health, and sustainable use as a natural resource. Within this context, there is a need to maintain a robust connection between scientific developments in SOM modeling approaches and SOM model applications. This need forms the basis of this review. In this review we first provide an overview of SOM modeling, focusing on SOM theory, data-model integration, and model development as evidenced by a quantitative review of SOM literature. Second, we present the landscape of SOM model applications, focusing on examples in climate change policy. We conclude by discussing five areas of recent developments in SOM modeling including: (1) microbial roles in SOM stabilization; (2) modeling SOM saturation kinetics; (3) temperature controls on decomposition; (4) SOM dynamics in deep soil layers; and (5) SOM representation in earth system models. Our aim is to comprehensively connect SOM model development to its applications, revealing knowledge gaps in need of focused interdisciplinary attention and exposing pitfalls that, if avoided, can lead to best use of SOM models to support policy initiatives and sustainable land management solutions. (topical review)

  12. Current developments in soil organic matter modeling and the expansion of model applications: a review

    Science.gov (United States)

    Campbell, Eleanor E.; Paustian, Keith

    2015-12-01

    Soil organic matter (SOM) is an important natural resource. It is fundamental to soil and ecosystem functions across a wide range of scales, from site-specific soil fertility and water holding capacity to global biogeochemical cycling. It is also a highly complex material that is sensitive to direct and indirect human impacts. In SOM research, simulation models play an important role by providing a mathematical framework to integrate, examine, and test the understanding of SOM dynamics. Simulation models of SOM are also increasingly used in more ‘applied’ settings to evaluate human impacts on ecosystem function, and to manage SOM for greenhouse gas mitigation, improved soil health, and sustainable use as a natural resource. Within this context, there is a need to maintain a robust connection between scientific developments in SOM modeling approaches and SOM model applications. This need forms the basis of this review. In this review we first provide an overview of SOM modeling, focusing on SOM theory, data-model integration, and model development as evidenced by a quantitative review of SOM literature. Second, we present the landscape of SOM model applications, focusing on examples in climate change policy. We conclude by discussing five areas of recent developments in SOM modeling including: (1) microbial roles in SOM stabilization; (2) modeling SOM saturation kinetics; (3) temperature controls on decomposition; (4) SOM dynamics in deep soil layers; and (5) SOM representation in earth system models. Our aim is to comprehensively connect SOM model development to its applications, revealing knowledge gaps in need of focused interdisciplinary attention and exposing pitfalls that, if avoided, can lead to best use of SOM models to support policy initiatives and sustainable land management solutions.

  13. Memcapacitor model and its application in a chaotic oscillator

    Science.gov (United States)

    Guang-Yi, Wang; Bo-Zhen, Cai; Pei-Pei, Jin; Ti-Ling, Hu

    2016-01-01

    A memcapacitor is a new type of memory capacitor. Before the advent of practical memcapacitors, prospective studies of their models and potential applications are important. For this purpose, we establish a mathematical memcapacitor model and a corresponding circuit model. As a potential application based on the model, a memcapacitor oscillator is designed, and its basic dynamic characteristics are analyzed theoretically and experimentally. Some circuit variables such as charge, flux, and the integral of charge, which are difficult to measure, are observed and measured via simulations and experiments. Analysis results show that besides the typical period-doubling bifurcations and period-3 windows, sustained chaos with constant Lyapunov exponents occurs. Moreover, this oscillator also exhibits abrupt chaos and some novel bifurcations. In addition, based on digital signal processing (DSP) technology, a scheme for digitally realizing this memcapacitor oscillator is provided. The statistical properties of the chaotic sequences generated from the oscillator are then tested using the test suite of the National Institute of Standards and Technology (NIST). The tested randomness reaches the NIST standards and is better than that of the well-known Lorenz system. Project supported by the National Natural Science Foundation of China (Grant Nos. 61271064, 61401134, and 60971046), the Natural Science Foundation of Zhejiang Province, China (Grant Nos. LZ12F01001 and LQ14F010008), and the Program for Zhejiang Leading Team of S&T Innovation, China (Grant No. 2010R50010).

  14. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.

  15. Towards Industrial Application of Damage Models for Sheet Metal Forming

    Science.gov (United States)

    Doig, M.; Roll, K.

    2011-05-01

    Due to global warming and the financial situation, the demand to reduce CO2 emissions and production costs leads to the continual development of new materials. In the automotive industry, occupant safety is an additional requirement. Bringing these considerations together, the preferred approach for the lightweight design of car components, especially for the body-in-white, is the use of modern steels. Such steel grades, also called advanced high strength steels (AHSS), exhibit high strength as well as high formability. Not only their material behavior but also their damage behavior differs from that of standard steels. Conventional industrial methods for damage prediction, such as the forming limit curve (FLC), are not reliable for AHSS. Physically based damage models are often used in crash and bulk forming simulations; the still-open question is the industrial application of these models to sheet metal forming. This paper evaluates the Gurson-Tvergaard-Needleman (GTN) model and the model of Lemaitre within commercial codes, with the goal of industrial application.
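
    For reference, the GTN model evaluated here extends the von Mises yield surface by an effective void volume fraction f*; a standard statement of its yield function (as found in the general literature, not quoted from this paper) is:

```latex
\Phi \;=\; \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_y}\right)^{2}
+ 2\,q_1 f^{*} \cosh\!\left(\frac{3\,q_2\,\sigma_m}{2\,\sigma_y}\right)
- 1 - q_3 \left(f^{*}\right)^{2} \;=\; 0
```

    where sigma_eq is the von Mises equivalent stress, sigma_m the hydrostatic stress, sigma_y the matrix flow stress, and q1, q2, q3 calibration parameters; damage grows as f* increases and degrades the load-carrying capacity.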

  16. Model-Driven Visual Testing and Debugging of WSN Applications

    Directory of Open Access Journals (Sweden)

    Mohammad Al Saad

    2009-09-01

    Full Text Available We introduce our tool-suite that facilitates automated testing of applications for wireless sensor networks (WSNs). WSNs are highly distributed systems and therefore require a testing infrastructure. We present a general-purpose testing framework which enables component, integration, and system testing. When using our testing framework, test cases have to be implemented as several modules. To coordinate the execution of these modules, synchronization code must be written, which is a complex and time-consuming task. To bypass this task, and thus make the test case implementation process more efficient, we introduce a model-driven approach that delegates it to the code generator. To this end, the test scenario is represented in a model, and the code of the modules is generated from it. This model also eases the task of isolating faults in the code of the application being tested, because the model can be refined to gain insights into the application's behavior and to backtrack the cause of a failure reproduced by the test case.

  17. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration, as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above-described precipitation fields. The simulated hydrographs obtained in the three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well with dense precipitation information, while a model calibrated on dense precipitation information may fail with sparse precipitation information. Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
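
    The gap-filling step can be sketched as an ordinary least-squares regression of the target gauge on its neighbors; the records below are invented, and the sketch only illustrates the multiple-linear-regression idea described above.

```python
import numpy as np

# Invented daily rainfall (mm): three neighbor gauges and one target gauge.
neighbors = np.array([
    [0.0, 0.2, 0.0],
    [1.2, 0.9, 1.5],
    [3.4, 2.8, 4.0],
    [0.0, 0.1, 0.0],
    [7.1, 6.0, 8.2],
])                                        # shape (days, gauges)
target = np.array([0.1, 1.1, 3.3, 0.0, 7.0])

# Fit target ~ intercept + neighbors on days where the target was observed.
X = np.column_stack([np.ones(len(target)), neighbors])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

# Estimate the target gauge on a day when its measurement is missing.
missing_day = np.array([1.0, 0.4, 0.3, 0.5])  # [intercept, g1, g2, g3]
print(missing_day @ coef)
```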

  18. Matrix variate logistic regression model with application to EEG data.

    Science.gov (United States)

    Hung, Hung; Wang, Chen-Chien

    2013-01-01

    Logistic regression has been widely applied in the field of biomedical research for a long time. In some applications, the covariates of interest have a natural structure, such as that of a matrix, at the time of collection. The rows and columns of the covariate matrix then have certain physical meanings, and they must contain useful information regarding the response. If we simply stack the covariate matrix into a vector and fit a conventional logistic regression model, relevant information can be lost, and the problem of inefficiency arises. Motivated by these considerations, we propose in this paper the matrix variate logistic (MV-logistic) regression model. The advantages of the MV-logistic regression model include the preservation of the inherent matrix structure of the covariates and the parsimony of the parameters needed. In the EEG Database Data Set, we successfully extract the structural effects of the covariate matrix, and a high classification accuracy is achieved.
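
    The parsimony argument can be made explicit: for a p x q covariate matrix X, the vectorized model needs pq slope parameters, whereas a bilinear matrix-variate form of the kind described (our notation) needs only p + q:

```latex
\operatorname{logit}\,P(Y = 1 \mid X) \;=\; \alpha + u^{\mathsf T} X\, v,
\qquad u \in \mathbb{R}^{p},\; v \in \mathbb{R}^{q}
```

    so the row effects u and column effects v (e.g., EEG channels and time points) are estimated separately while the matrix structure of X is preserved.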

  19. On the application of copula in modeling maintenance contract

    Science.gov (United States)

    Iskandar, B. P.; Husniah, H.

    2016-02-01

    This paper deals with the application of copulas in maintenance contracts for a repairable item. Failures of the item are modeled using a two-dimensional approach based on the age and usage of the item, which requires a bivariate distribution to model failures. When the item fails, corrective maintenance (CM) is carried out as minimal repair. CM can be outsourced to an external agent or done in house. The decision problem for the owner is to find the maximum total profit, while that of the agent is to determine the optimal price of the contract. We obtain mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation.
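
    The bivariate failure modeling can be sketched by tying marginal distributions for age and usage together with a copula; the sketch below uses a Clayton copula and invented Weibull marginals purely to illustrate the construction, and is not the paper's specific model.

```python
import numpy as np

rng = np.random.default_rng(1)

def clayton_sample(theta, n, rng):
    """Draw (u, v) from a Clayton copula by the conditional-inverse method."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

u, v = clayton_sample(theta=2.0, n=10_000, rng=rng)

# Invented Weibull marginals: age at failure (years), usage at failure (10^3 h).
age = 5.0 * (-np.log1p(-u)) ** (1.0 / 1.5)
usage = 8.0 * (-np.log1p(-v)) ** (1.0 / 2.0)
print(np.corrcoef(age, usage)[0, 1])  # positive age-usage dependence
```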

  20. Clinical application of the five-factor model.

    Science.gov (United States)

    Widiger, Thomas A; Presnall, Jennifer Ruth

    2013-12-01

    The Five-Factor Model (FFM) has become the predominant dimensional model of general personality structure. The purpose of this paper is to suggest a clinical application. A substantial body of research indicates that the personality disorders included within the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) can be understood as extreme and/or maladaptive variants of the FFM (the acronym "DSM" refers to any particular edition of the APA DSM). In addition, the current proposal for the forthcoming fifth edition of the DSM (i.e., DSM-5) is shifting closely toward an FFM dimensional trait model of personality disorder. Advantages of this shifting conceptualization are discussed, including treatment planning. PMID:22924994

  1. Improved dual sided doped memristor: modelling and applications

    Directory of Open Access Journals (Sweden)

    Anup Shrivastava

    2014-05-01

    Full Text Available The memristor, a novel and emerging electronic device with a vast range of applications, suffers from a poor frequency response and limited saturation length. In this paper, the authors present a novel and innovative device structure for the memristor with two active layers, together with a nonlinear ionic drift model, for an improved frequency response and saturation length. The authors investigated and compared the I-V characteristics of the proposed model with those of conventional memristors and found better results in each case (i.e., for different window functions) for the proposed dual-sided doped memristor. For circuit-level simulation, they developed a SPICE model of the proposed memristor and designed some logic gates based on hybrid complementary metal oxide semiconductor memristive logic (memristor-ratioed logic). The proposed memristor yields improved results in terms of noise margin, delay time, and dynamic hazards compared with conventional memristors (single active layer memristors).
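
    The role of the window function in a nonlinear ionic drift model can be seen in the classical single-active-layer formulation (the textbook HP-style model, not the authors' two-layer device): the state variable x drifts with the current, damped near the film boundaries.

```python
import numpy as np

# Classical single-layer memristor with a Joglekar window (illustrative only).
R_ON, R_OFF = 100.0, 16_000.0     # limiting resistances (ohm)
MU, D, P = 1e-14, 1e-8, 2         # ion mobility, film thickness, window order

def window(x, p=P):
    # Suppresses drift as the doped/undoped boundary reaches the film edges.
    return 1.0 - (2.0 * x - 1.0) ** (2 * p)

dt = 1e-5
t = np.arange(0.0, 0.1, dt)
v = np.sin(2.0 * np.pi * 50.0 * t)        # sinusoidal drive
x, current = 0.1, np.empty_like(t)
for k, vk in enumerate(v):
    R = R_ON * x + R_OFF * (1.0 - x)      # state-dependent memristance
    current[k] = vk / R
    x += dt * (MU * R_ON / D**2) * current[k] * window(x)
    x = min(max(x, 0.0), 1.0)
# Plotting current against v traces the pinched hysteresis loop.
```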

  2. Practical Application of Model Checking in Software Verification

    Science.gov (United States)

    Havelund, Klaus; Skakkebaek, Jens Ulrik

    1999-01-01

    This paper presents our experiences in applying JAVA PATHFINDER (JPF), a recently developed JAVA-to-SPIN translator, to finding synchronization bugs in a Chinese Chess game server application written in JAVA. We give an overview of JPF and the subset of JAVA that it supports, and describe the abstraction and verification of the game server. Finally, we analyze the results of the effort. We argue that abstraction by under-approximation is necessary for obtaining sufficiently small models for verification purposes; that user guidance is crucial for effective abstraction; and that current model checkers do not conveniently support the computational models of software in general and JAVA in particular.

  3. Explicit Nonlinear Model Predictive Control Theory and Applications

    CERN Document Server

    Grancharova, Alexandra

    2012-01-01

    Nonlinear Model Predictive Control (NMPC) has become the accepted methodology to solve complex control problems related to process industries. The main motivation behind explicit NMPC is that an explicit state feedback law avoids the need for executing a numerical optimization algorithm in real time. The benefits of an explicit solution, in addition to the efficient on-line computations, include also verifiability of the implementation and the possibility to design embedded control systems with low software and hardware complexity. This book considers the multi-parametric Nonlinear Programming (mp-NLP) approaches to explicit approximate NMPC of constrained nonlinear systems, developed by the authors, as well as their applications to various NMPC problem formulations and several case studies. The following types of nonlinear systems are considered, resulting in different NMPC problem formulations: nonlinear systems described by first-principles models and nonlinear systems described by black-box models; ...

  4. Application of Z-Number Based Modeling in Psychological Research.

    Science.gov (United States)

    Aliev, Rafik; Memmedova, Konul

    2015-01-01

    Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measurement of psychological parameters is performed using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger's Anxiety Test, completed by students. The GPA of students was used as the measure of educational achievement. The application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from the completed questionnaires. The basic steps of Z-number based modeling, with numerical solutions, are presented. PMID:26339231

  5. Application of Z-Number Based Modeling in Psychological Research

    Directory of Open Access Journals (Sweden)

    Rafik Aliev

    2015-01-01

    Full Text Available Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measurement of psychological parameters is performed using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger's Anxiety Test, completed by students. The GPA of students was used as the measure of educational achievement. The application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from the completed questionnaires. The basic steps of Z-number based modeling, with numerical solutions, are presented.

  6. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

    Temporal and spatiotemporal data form an inherent fabric of society, as we are faced with streams of data coming from numerous sensors, data feeds, and recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). The quest for thorough analysis, interpretation, modeling, and prediction of time series comes with an ongoing challenge of developing models that are both accurate and user-friendly (interpretable). The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies, bringing together ideas, algorithms, and numeric studies which convincingly demonstrate their relevance, maturity, and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  7. Modelling of Electrokinetic Processes in Civil and Environmental Engineering Applications

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.;

    2011-01-01

    A mathematical model for electrokinetic phenomena is described. Numerical simulations of different applications of electrokinetic techniques in the fields of civil and environmental engineering are included, showing the versatility and consistency of the model; the modelling also increases the understanding of the main physicochemical aspects affecting the process. Results from simulations of some test examples are presented, showing the versatility of the model. Two types of enhanced methods are compared, including the electrokinetic desalination of a brick sample using carbonated clay. The pore solution of the porous materials undergoes an electroosmotic flow subject to externally applied electric fields; electroosmotic transport makes electrokinetic techniques suitable for the mobilization of non-charged particles within the pore structure, such as the organic contaminants in soil. Chemical equilibrium ...

  8. The Logistic Maturity Model: Application to a Fashion Company

    Directory of Open Access Journals (Sweden)

    Claudia Battista

    2013-08-01

    Full Text Available This paper describes the structure of the logistic maturity model (LMM) in detail and shows the possible improvements that can be achieved by using this model in terms of the identification of the most appropriate actions to be taken in order to increase the performance of the logistics processes in industrial companies. The paper also gives an example of the LMM's application to a famous Italian women's fashion firm, which decided to use the model as a guideline for the optimization of its supply chain. Relying on a 5-level maturity staircase, specific achievement indicators as well as key performance indicators and best practices are defined and related to each logistics area/process/sub-process, allowing any user to easily and rapidly understand the more critical logistical issues in terms of process immaturity.

  9. Animal models of Parkinson's disease and their applications

    Directory of Open Access Journals (Sweden)

    Park HJ

    2016-07-01

    Full Text Available Hyun Jin Park, Ting Ting Zhao, Myung Koo Lee; Department of Pharmacy, Research Center for Bioresource and Health, College of Pharmacy, Chungbuk National University, Cheongju, Republic of Korea. Abstract: Parkinson's disease (PD) is a progressive neurodegenerative disorder that occurs mainly due to the degeneration of dopaminergic neuronal cells in the substantia nigra. l-3,4-Dihydroxyphenylalanine (L-DOPA) is the most effective known therapy for PD. However, chronic L-DOPA administration results in a loss of drug efficacy and irreversible adverse effects, including L-DOPA-induced dyskinesia, affective disorders, and cognitive function disorders. To study the motor and non-motor symptomatic dysfunctions of PD, neurotoxin and genetic animal models of PD have been widely applied. However, these animal models do not exhibit all of the pathophysiological symptoms of PD. Regardless, neurotoxin rat and mouse models of PD have been commonly used in the development of bioactive components from natural herbal medicines. Here, the main animal models of PD and their applications are introduced in order to aid the development of therapeutic and adjuvant agents. Keywords: Parkinson's disease, neurotoxin animal models, genetic animal models, adjuvant therapeutics

  10. Predicting aquifer response time for application in catchment modeling.

    Science.gov (United States)

    Walker, Glen R; Gilfedder, Mat; Dawes, Warrick R; Rassam, David W

    2015-01-01

    It is well established that changes in catchment land use can lead to significant impacts on water resources. Where land-use changes increase evapotranspiration there is a resultant decrease in groundwater recharge, which in turn decreases groundwater discharge to streams. The response time of changes in groundwater discharge to a change in recharge is a key aspect of predicting impacts of land-use change on catchment water yield. Predicting these impacts across the large catchments relevant to water resource planning can require the estimation of groundwater response times from hundreds of aquifers. At this scale, detailed site-specific measured data are often absent, and available spatial data are limited. While numerical models can be applied, there is little advantage if there are no detailed data to parameterize them. Simple analytical methods are useful in this situation, as they allow the variability in groundwater response to be incorporated into catchment hydrological models, with minimal modeling overhead. This paper describes an analytical model which has been developed to capture some of the features of real, sloping aquifer systems. The derived groundwater response timescale can be used to parameterize a groundwater discharge function, allowing groundwater response to be predicted in relation to different broad catchment characteristics at a level of complexity which matches the available data. The results from the analytical model are compared to published field data and numerical model results, and provide an approach with broad application to inform water resource planning in other large, data-scarce catchments. PMID:24842053
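
    For orientation, classical linear-aquifer theory (not the sloping-aquifer result derived in the paper) gives a response timescale that scales with storativity S, aquifer length L, and transmissivity T as:

```latex
t_r \;\propto\; \frac{S\,L^{2}}{T}
```

    with a proportionality constant of order one that depends on geometry and boundary conditions; the paper's analytical model plays the same role for sloping aquifers, yielding a timescale that parameterizes the groundwater discharge response in catchment models.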

  11. Numerical algorithm of distributed TOPKAPI model and its application

    Institute of Scientific and Technical Information of China (English)

    Deng Peng; Li Zhijia; Liu Zhiyu

    2008-01-01

    The TOPKAPI (TOPographic Kinematic APproximation and Integration) model is a physically based rainfall-runoff model derived from the integration in space of the kinematic wave model. In the TOPKAPI model, rainfall-runoff and runoff routing processes are described by three nonlinear reservoir differential equations that are structurally similar and describe different hydrological and hydraulic processes. Equations are integrated over grid cells that describe the geometry of the catchment, leading to a cascade of nonlinear reservoir equations. For the sake of improving the model's computation precision, this paper provides the general form of these equations and describes the solution by means of a numerical algorithm, the variable-step fourth-order Runge-Kutta algorithm. For the purpose of assessing the quality of the comprehensive numerical algorithm, this paper presents a case study application to the Buliu River Basin, which has an area of 3 310 km2, using a DEM (digital elevation model) grid with a resolution of 1 km. The results show that the variable-step fourth-order Runge-Kutta algorithm for nonlinear reservoir equations is a good approximation of subsurface flow in the soil matrix, overland flow over the slopes, and surface flow in the channel network, allowing us to retain the physical properties of the original equations at scales ranging from a few meters to 1 km.
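
    Each cell's computation reduces to integrating a nonlinear storage equation of the generic form dV/dt = a - bV^c; the sketch below integrates one such reservoir with SciPy's adaptive Runge-Kutta solver standing in for the paper's variable-step fourth-order scheme, and all parameter values are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic nonlinear reservoir for one grid cell (invented parameters).
a = 2.0e-3          # lumped inflow term
b = 5.0e-4          # outflow coefficient
c = 5.0 / 3.0       # nonlinearity exponent (kinematic-wave-like)

def reservoir(t, V):
    return a - b * np.maximum(V, 0.0) ** c

sol = solve_ivp(reservoir, t_span=(0.0, 3600.0), y0=[0.5],
                method="RK45", rtol=1e-6, atol=1e-9)
print(sol.y[0, -1])   # cell storage after one hour of simulation
```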

  12. An application-semantics-based relaxed transaction model for internetware

    Institute of Scientific and Technical Information of China (English)

    HUANG Tao; DING Xiaoning; WEI Jun

    2006-01-01

    An internetware application is composed of existing individual services, and transaction processing is a key mechanism for making the composition reliable. Existing research on transactional composite services (TCS) depends on analysis of the composition structure and on exception handling mechanisms in order to guarantee relaxed atomicity. However, this approach cannot handle some application-specific requirements and causes many unnecessary failure recoveries or even aborts. In this paper, we propose a relaxed transaction model, including a system model, a relaxed atomicity criterion, a static checking algorithm, and a dynamic enforcement algorithm. Users are able to define different relaxed atomicity constraints for different TCSs according to application-specific requirements, including acceptable configurations and their preference order. The checking algorithm determines whether the constraint can be guaranteed to be satisfied; the enforcement algorithm monitors the execution and performs the transaction management work according to the constraint. Compared to existing work, our approach can handle complex application requirements, avoid unnecessary failure recoveries, and perform the transaction management work automatically.

  13. Prognostic models in obstetrics: available, but far from applicable.

    Science.gov (United States)

    Kleinrouweler, C Emily; Cheong-See, Fiona M; Collins, Gary S; Kwee, Anneke; Thangaratinam, Shakila; Khan, Khalid S; Mol, Ben Willem J; Pajkrt, Eva; Moons, Karel G M; Schuit, Ewoud

    2016-01-01

    Health care provision is increasingly focused on the prediction of patients' individual risk for developing a particular health outcome in planning further tests and treatments. There has been a steady increase in the development and publication of prognostic models for various maternal and fetal outcomes in obstetrics. We undertook a systematic review to give an overview of the current status of available prognostic models in obstetrics in the context of their potential advantages and the process of developing and validating models. Important aspects to consider when assessing a prognostic model are discussed and recommendations on how to proceed on this within the obstetric domain are given. We searched MEDLINE (up to July 2012) for articles developing prognostic models in obstetrics. We identified 177 papers that reported the development of 263 prognostic models for 40 different outcomes. The most frequently predicted outcomes were preeclampsia (n = 69), preterm delivery (n = 63), mode of delivery (n = 22), gestational hypertension (n = 11), and small-for-gestational-age infants (n = 10). The performance of newer models was generally not better than that of older models predicting the same outcome. The most important measures of predictive accuracy (ie, a model's discrimination and calibration) were often (82.9%, 218/263) not both assessed. Very few developed models were validated in data other than the development data (8.7%, 23/263). Only two-thirds of the papers (62.4%, 164/263) presented the model such that validation in other populations was possible, and the clinical applicability was discussed in only 11.0% (29/263). The impact of developed models on clinical practice was unknown. We identified a large number of prognostic models in obstetrics, but there is relatively little evidence about their performance, impact, and usefulness in clinical practice so that at this point, clinical implementation cannot be recommended. New efforts should be directed

  14. The determination of the most applicable PWV model for Turkey

    Science.gov (United States)

    Deniz, Ilke; Gurbuz, Gokhan; Mekik, Cetin

    2016-07-01

    Water vapor is a key component in atmospheric modelling and climate studies. Moreover, long-term water vapor changes can be an independent source for detecting climate change. Since Global Navigation Satellite Systems (GNSS) use microwaves passing through the atmosphere, atmospheric effects can be modeled with high accuracy. Tropospheric effects on GNSS signals are estimated with the total zenith delay parameter (ZTD), which is the sum of the hydrostatic (ZHD) and wet (ZWD) zenith delays. The first component can be obtained from meteorological observations with high accuracy; the second component, however, can be computed by subtracting ZHD from ZTD (ZWD = ZTD - ZHD). Afterwards, the weighted mean temperature (Tm) or the conversion factor (Q) is used for the conversion between precipitable water vapor (PWV) and ZWD. The parameters Tm and Q are derived from the analysis of radiosonde stations' profile observations. Numerous Q and Tm models have been developed for individual radiosonde stations, radiosonde station groups, countries, and global fields, such as the Bevis Tm model and Emardson and Derks' Q models. Accordingly, PWV models (Tm and Q models) for Turkey have been developed using a year of radiosonde data (2011) from 8 radiosonde stations. In this study the models developed are tested by comparing PWVGNSS, computed by applying the Tm and Q models to the ZTD estimates derived with the Bernese and GAMIT/GLOBK software at GNSS stations established at Istanbul and Ankara, with that from the collocated radiosonde stations (PWVRS) from October 2013 to December 2014, using data obtained from a project (no 112Y350) supported by the Scientific and Technological Research Council of Turkey (TUBITAK). The comparison results show that PWVGNSS and PWVRS are highly correlated (86% for Ankara and 90% for Istanbul). Thus, the most applicable model for Turkey and the accuracy of GNSS meteorology are investigated. In addition, the Tm model was applied to the ZTD estimates of 20 TUSAGA-Active (CORS-TR) stations in
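
    The conversion chain sketched in the abstract (ZWD = ZTD - ZHD, then PWV = Q * ZWD) can be written out as follows; the refractivity constants are standard values from the GNSS-meteorology literature, while the Tm value is a placeholder for whichever site-specific model is being tested.

```python
# Sketch of the ZTD -> PWV conversion used in GNSS meteorology.
RHO_W = 1000.0     # density of liquid water (kg m^-3)
R_V = 461.5        # specific gas constant of water vapor (J kg^-1 K^-1)
K2P = 22.1         # k2' (K hPa^-1), standard literature value
K3 = 3.739e5       # k3  (K^2 hPa^-1), standard literature value

def pwv_from_ztd(ztd_m, zhd_m, tm_k):
    """Return PWV (same length unit as the delays) given ZTD, ZHD and Tm."""
    zwd = ztd_m - zhd_m                                  # wet zenith delay
    q = 1.0e8 / (RHO_W * R_V * (K3 / tm_k + K2P))        # conversion factor ~0.15
    return q * zwd

# Placeholder numbers: ZTD/ZHD in meters, Tm in kelvin.
print(pwv_from_ztd(ztd_m=2.40, zhd_m=2.28, tm_k=275.0))  # ~0.019 m = 19 mm
```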

  15. Building energy modeling for green architecture and intelligent dashboard applications

    Science.gov (United States)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied the passive technique called the roof solar chimney for reducing the cooling load in homes architecturally. Three models of the chimney were created: a zonal building energy model, computational fluid dynamics model, and numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it for building simulation reliably. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high and low efficiency constructions, and three ventilation strategies. The results showed that in four US climates, the roof solar chimney results in significant cooling load energy savings of up to 90%. After developing this new method for the small scale representation of a passive architecture technique in BEM, the study expanded the scope to address a fundamental issue in modeling - the implementation of the uncertainty from and improvement of occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration to the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  16. Bilayer Graphene Application on NO2 Sensor Modelling

    Directory of Open Access Journals (Sweden)

    Elnaz Akbari

    2014-01-01

    Full Text Available Graphene is a carbon allotrope consisting of a single-atom-thick layer of sp2-hybridized carbon with a two-dimensional (2D) honeycomb structure. As an outstanding material exhibiting unique mechanical, electrical, and chemical characteristics, including high strength, high conductivity, and high surface area, graphene has earned a remarkable position in today's experimental and theoretical studies as well as in industrial applications. One such application incorporates the idea of using graphene to achieve higher accuracy and speed in detection devices utilized where gas sensing is required. Although there are plenty of experimental studies in this field, the lack of analytical models is felt deeply. To start the modelling, a field effect transistor (FET) based structure has been chosen to serve as the platform, and the effect of NO2 injection on the bilayer graphene density of states is discussed. The chemical reaction between graphene and the gas creates new carriers in graphene, which causes density changes and eventually changes in the carrier velocity. In the presence of NO2 gas, electrons are donated to the FET channel, which is employed as the sensing mechanism. In order to evaluate the accuracy of the proposed models, the results obtained are compared with existing experimental data, and acceptable agreement is reported.

  17. Application of Interval Predictor Models to Space Radiation Shielding

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

    2016-01-01

    This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which the data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
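
    A stripped-down version of the optimization described — radial-basis upper and lower bounds of minimal total spread that contain all the data — can be posed as a linear program; the sketch below is our simplification (fixed Gaussian centers, no oscillation or centering constraints, synthetic data), not the paper's full formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2.0 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

# Gaussian radial basis evaluated at the data (fixed centers and width).
centers = np.linspace(0.0, 1.0, 7)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * 0.15**2))
n = centers.size

# Variables z = [p, q]: upper(x) = Phi @ p, lower(x) = Phi @ q.
# Minimize total spread sum_j [upper(x_j) - lower(x_j)] s.t. containment.
cost = np.concatenate([Phi.sum(axis=0), -Phi.sum(axis=0)])
A_ub = np.block([[-Phi, np.zeros_like(Phi)],    # y_j <= upper(x_j)
                 [np.zeros_like(Phi), Phi]])    # lower(x_j) <= y_j
b_ub = np.concatenate([-y, y])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))
p, q = res.x[:n], res.x[n:]
print(float(Phi[0] @ q) <= y[0] <= float(Phi[0] @ p))  # True: datum contained
```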

  18. Supply chain management models, applications, and research directions

    CERN Document Server

    Pardalos, Panos; Romeijn, H

    2005-01-01

    This work brings together some of the most up-to-date research in the application of operations research and mathematical modeling techniques to problems arising in supply chain management and e-Commerce. While research in the broad area of supply chain management encompasses a wide range of topics and methodologies, we believe this book provides a good snapshot of current quantitative modeling approaches, issues, and trends within the field. Each chapter is a self-contained study of a timely and relevant research problem in supply chain management. The individual works place a heavy emphasis on the application of modeling techniques to real world management problems. In many instances, the actual results from applying these techniques in practice are highlighted. In addition, each chapter provides important managerial insights that apply to general supply chain management practice. The book is divided into three parts. The first part contains chapters that address the new and rapidly growing role of the inte...

  19. Modeling Phosphorous Losses from Seasonal Manure Application Schemes

    Science.gov (United States)

    Menzies, E.; Walter, M. T.

    2015-12-01

    Excess nutrient loading, especially nitrogen and phosphorus, to surface waters is a common and significant problem throughout the United States. While pollution remediation efforts are continuously improving, the most effective treatment remains to limit the source. Appropriate timing of fertilizer application to reduce nutrient losses is currently a hotly debated topic in the Northeastern United States; winter spreading of manure is under special scrutiny. We plan to evaluate the loss of phosphorous to surface waters from agricultural systems under varying seasonal fertilization schemes in an effort to determine the impacts of fertilizers applied throughout the year. The Cayuga Lake basin, located in the Finger Lakes region of New York State, is a watershed dominated by agriculture where a wide array of land management strategies can be found. The evaluation will be conducted on the Fall Creek Watershed, a large sub basin in the Cayuga Lake Watershed. The Fall Creek Watershed covers approximately 33,000 ha in central New York State with approximately 50% of this land being used for agriculture. We plan to use the Soil and Water Assessment Tool (SWAT) to model a number of seasonal fertilization regimes such as summer only spreading and year round spreading (including winter applications), as well as others. We will use the model to quantify the phosphorous load to surface waters from these different fertilization schemes and determine the impacts of manure applied at different times throughout the year. More detailed knowledge about how seasonal fertilization schemes impact phosphorous losses will provide more information to stakeholders concerning the impacts of agriculture on surface water quality. Our results will help farmers and extensionists make more informed decisions about appropriate timing of manure application for reduced phosphorous losses and surface water degradation as well as aid law makers in improving policy surrounding manure application.

  20. Numerical algorithm of distributed TOPKAPI model and its application

    Directory of Open Access Journals (Sweden)

    Deng Peng

    2008-12-01

    Full Text Available The TOPKAPI (TOPographic Kinematic APproximation and Integration) model is a physically based rainfall-runoff model derived from the integration in space of the kinematic wave model. In the TOPKAPI model, rainfall-runoff and runoff routing processes are described by three nonlinear reservoir differential equations that are structurally similar and describe different hydrological and hydraulic processes. Equations are integrated over grid cells that describe the geometry of the catchment, leading to a cascade of nonlinear reservoir equations. For the sake of improving the model's computation precision, this paper provides the general form of these equations and describes the solution by means of a numerical algorithm, the variable-step fourth-order Runge-Kutta algorithm. For the purpose of assessing the quality of the comprehensive numerical algorithm, this paper presents a case study application to the Buliu River Basin, which has an area of 3 310 km2, using a DEM (digital elevation model) grid with a resolution of 1 km. The results show that the variable-step fourth-order Runge-Kutta algorithm for nonlinear reservoir equations is a good approximation of subsurface flow in the soil matrix, overland flow over the slopes, and surface flow in the channel network, allowing us to retain the physical properties of the original equations at scales ranging from a few meters to 1 km.

  1. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it presents all well-known algorithms in detail, elucidating their merits and weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  2. Cognitive interference modeling with applications in power and admission control

    KAUST Repository

    Mahmood, Nurul Huda

    2012-10-01

    One of the key design challenges in a cognitive radio network is controlling the interference generated at coexisting primary receivers. In order to design efficient cognitive radio systems and to minimize their unwanted consequences, it is therefore necessary to effectively control the secondary interference at the primary receivers. In this paper, a generalized framework is presented for the interference analysis of a cognitive radio network in which the different secondary transmitters may transmit with different powers and transmission probabilities, and various applications of this interference model are demonstrated. The findings of the analytical performance analyses are confirmed through selected Monte Carlo computer simulations. © 2012 IEEE.
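
    A generalized interference framework of this kind lends itself to a simple Monte Carlo check. The sketch below is a hypothetical Python example, not the paper's model: it draws random activity and Rayleigh fading for secondary transmitters with heterogeneous powers and transmission probabilities, then estimates the aggregate interference statistics at a primary receiver. All powers, distances, the path-loss exponent and the threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical secondary network: per-transmitter power (W),
# transmission probability, and distance to the primary receiver (m).
powers = np.array([0.5, 1.0, 0.2])
tx_prob = np.array([0.3, 0.5, 0.8])
dist = np.array([200.0, 350.0, 120.0])
path_loss_exp = 3.5
threshold = 1e-9          # illustrative interference limit (W)
trials = 100_000

active = rng.random((trials, 3)) < tx_prob        # who transmits
fading = rng.exponential(1.0, (trials, 3))        # Rayleigh power fading
rx_power = powers * fading * dist ** (-path_loss_exp)
aggregate = (active * rx_power).sum(axis=1)       # interference per trial

print("mean interference:", aggregate.mean())
print("P(I > threshold): ", (aggregate > threshold).mean())
```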

  3. Modeling & imaging of bioelectrical activity principles and applications

    CERN Document Server

    He, Bin

    2010-01-01

    Over the past several decades, much progress has been made in understanding the mechanisms of electrical activity in biological tissues and systems, and in developing non-invasive functional imaging technologies to aid the clinical diagnosis of dysfunction in the human body. The book will provide full basic coverage of the fundamentals of modeling of electrical activity in various human organs, such as the heart and brain. It will include details of bioelectromagnetic measurements and source imaging technologies, as well as biomedical applications. The book will review the latest trends in

  4. Improving ANOVA estimation in mixed linear models

    Institute of Scientific and Technical Information of China (English)

    范永辉; 王松桂

    2007-01-01

    This paper discusses the improvement, in the mean squared error sense, of the analysis of variance (ANOVA) estimators of the variance components in a linear mixed model containing three variance components. The result is extended to general linear mixed models, yielding a simple method for improving ANOVA estimators.

  5. Application of the ACASA model for urban development studies

    Science.gov (United States)

    Marras, S.; Pyles, R. D.; Falk, M.; Snyder, R. L.; Paw U, K. T.; Blecic, I.; Trunfio, G. A.; Cecchini, A.; Spano, D.

    2012-04-01

    Since the urban population is growing fast and urban areas are recognized as a major source of CO2 emissions, more attention has been dedicated to the topic of urban sustainability and its connection with the climate. Urban flows of energy, water and carbon have an important impact on climate change, and their quantification is pivotal in city design and management. A large effort has been devoted to quantitative estimates of urban metabolism components, and several advanced models have been developed and used at different spatial and temporal scales for this purpose. However, it is necessary to develop suitable tools and indicators to effectively support urban planning and management with the goal of achieving a more sustainable metabolism in the urban environment. In this study, the multilayer model ACASA (Advanced Canopy-Atmosphere-Soil Algorithm) was chosen to simulate the exchanges of heat, water vapour and CO2 within and above the urban canopy. After several calibration and evaluation tests over natural and agricultural ecosystems, the model was recently modified for application in urban and peri-urban areas. New equations to account for the anthropogenic contribution to heat exchange and carbon production, as well as key parameterizations of leaf-facet scale interactions to separate biogenic and anthropogenic flux sources and sinks, were added to test changes in land use or urban planning strategies. The analysis was based on the evaluation of the ACASA model's performance in estimating urban metabolism components at the local scale. Simulated sensible heat, latent heat, and carbon fluxes were compared with in situ eddy covariance measurements collected in the city centre of Florence (Italy). Statistical analysis was performed to test the model's accuracy and reliability. A sensitivity analysis to soil types and increased population density values was conducted to investigate the potential use of ACASA for evaluating the impact of alternative planning scenarios. In

  6. Quantitative Decomposition of Dynamics of Mathematical Cell Models: Method and Application to Ventricular Myocyte Models.

    Science.gov (United States)

    Shimayoshi, Takao; Cha, Chae Young; Amano, Akira

    2015-01-01

    Mathematical cell models are effective tools for understanding cellular physiological functions precisely. For detailed analysis of model dynamics, in order to investigate how much each component affects cellular behaviour, mathematical approaches are essential. This article presents a numerical analysis technique, applicable to any complicated cell model formulated as a system of ordinary differential equations, to quantitatively evaluate the contributions of the respective model components to the model dynamics in the intact situation. The technique employs a novel mathematical index for decomposed dynamics with respect to each differential variable, along with a concept named the instantaneous equilibrium point, which represents the trend of a model variable at some instant. The article also illustrates applications of the method to comprehensive myocardial cell models, gaining insight into the mechanisms of action potential generation and the calcium transient. The analysis results exhibit the quantitative contributions of individual channel gating mechanisms and ion exchanger activities to membrane repolarization, and of calcium fluxes and buffers to the rise and fall of the cytosolic calcium level. These analyses quantitatively explicate the principles of the model, leading to a better understanding of cellular dynamics. PMID:26091413
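
    The idea of decomposed dynamics can be conveyed on a toy example. The Python fragment below evaluates the instantaneous contribution of each current term to dV/dt in a one-variable membrane equation; the channel models and parameter values are illustrative assumptions, not the authors' index or any published myocyte model.

```python
# Toy membrane equation: C_m * dV/dt = -(I_K + I_Na + I_leak).
# The linear "channel" models and parameters below are illustrative.
C_m = 1.0
currents = {
    "I_K":    lambda V: 0.3 * (V + 77.0),
    "I_Na":   lambda V: 0.1 * (V - 50.0),
    "I_leak": lambda V: 0.03 * (V + 54.4),
}

def decompose_dVdt(V):
    """Evaluate each component's instantaneous contribution to dV/dt,
    so its share of depolarization/repolarization at this instant can
    be quantified (in the spirit of decomposed dynamics)."""
    parts = {name: -I(V) / C_m for name, I in currents.items()}
    total = sum(parts.values())
    shares = {name: value / total for name, value in parts.items()}
    return parts, total, shares

parts, total, shares = decompose_dVdt(V=-20.0)
print(total, shares)
```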

  7. Quantitative Decomposition of Dynamics of Mathematical Cell Models: Method and Application to Ventricular Myocyte Models.

    Directory of Open Access Journals (Sweden)

    Takao Shimayoshi

    Full Text Available Mathematical cell models are effective tools for understanding cellular physiological functions precisely. For detailed analysis of model dynamics, in order to investigate how much each component affects cellular behaviour, mathematical approaches are essential. This article presents a numerical analysis technique, applicable to any complicated cell model formulated as a system of ordinary differential equations, to quantitatively evaluate the contributions of the respective model components to the model dynamics in the intact situation. The technique employs a novel mathematical index for decomposed dynamics with respect to each differential variable, along with a concept named the instantaneous equilibrium point, which represents the trend of a model variable at some instant. The article also illustrates applications of the method to comprehensive myocardial cell models, gaining insight into the mechanisms of action potential generation and the calcium transient. The analysis results exhibit the quantitative contributions of individual channel gating mechanisms and ion exchanger activities to membrane repolarization, and of calcium fluxes and buffers to the rise and fall of the cytosolic calcium level. These analyses quantitatively explicate the principles of the model, leading to a better understanding of cellular dynamics.

  8. Chemical kinetic modeling of H2 applications

    Energy Technology Data Exchange (ETDEWEB)

    Marinov, N.M.; Westbrook, C.K.; Cloutman, L.D. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1995-09-01

    Work being carried out at LLNL has concentrated on studies of the role of chemical kinetics in a variety of problems related to hydrogen combustion in practical combustion systems, with an emphasis on vehicle propulsion. Use of hydrogen offers significant advantages over fossil fuels, and computer modeling provides advantages when used in concert with experimental studies. Many numerical "experiments" can be carried out quickly and efficiently, reducing the cost and time of system development, and many new and speculative concepts can be screened to identify those with sufficient promise to pursue experimentally. This project uses chemical kinetic and fluid dynamic computational modeling to examine the combustion characteristics of systems burning hydrogen, either as the only fuel or mixed with natural gas. Oxidation kinetics are combined with pollutant formation kinetics, including the formation of oxides of nitrogen as well as air toxics in natural gas combustion. We have refined many of the elementary kinetic reaction steps in the detailed reaction mechanism for hydrogen oxidation. To extend the model to pressures characteristic of internal combustion engines, it was necessary to apply theoretical pressure falloff formalisms to several key steps in the reaction mechanism. We have continued development of simplified reaction mechanisms for hydrogen oxidation, we have implemented those mechanisms into multidimensional computational fluid dynamics models, and we have used models of chemistry and fluid dynamics to address selected application problems. At present, we are using computed high-pressure flame and autoignition data to further refine the simplified kinetics models that are then used in multidimensional fluid mechanics models. Detailed kinetics studies have investigated hydrogen flames and the ignition of hydrogen behind shock waves, intended to refine the detailed reaction mechanisms.
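
    The pressure falloff treatment mentioned above can be illustrated with the classical Lindemann form (the Troe formalism adds a broadening factor on top of it). The following Python sketch computes a falloff rate constant from low- and high-pressure-limit Arrhenius expressions; the Arrhenius parameters and conditions are placeholders, not values from the LLNL mechanism.

```python
import numpy as np

R = 8.314  # J/(mol K)

def arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate, k = A * T**n * exp(-Ea / (R*T))."""
    return A * T ** n * np.exp(-Ea / (R * T))

def lindemann_falloff(k0, kinf, M):
    """Lindemann form of the pressure-dependent rate constant:
    k = kinf * Pr / (1 + Pr), with reduced pressure Pr = k0*[M]/kinf.
    (The Troe formalism multiplies this by a broadening factor F.)"""
    Pr = k0 * M / kinf
    return kinf * Pr / (1.0 + Pr)

# Illustrative (not evaluated) parameters for a recombination step;
# real mechanisms tabulate k0 and kinf for each falloff reaction.
T = 1200.0                       # K
M = 101325.0 * 10 / (R * T)      # molar density at ~10 atm, mol/m^3
k0 = arrhenius(1e2, -1.2, 0.0, T)
kinf = arrhenius(1e7, 0.4, 0.0, T)
print(lindemann_falloff(k0, kinf, M))
```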

  9. Brookhaven Regional Energy Facility Siting Model (REFS): model development and application

    Energy Technology Data Exchange (ETDEWEB)

    Meier, P.; Hobbs, B.; Ketcham, G.; McCoy, M.; Stern, R.

    1979-06-01

    A siting methodology developed specifically to bridge the gap between regional-energy-system scenarios and environmental transport models is documented. Development of the model is described in Chapter 1. Chapter 2 describes the basic structure of such a model. Additional chapters on model development cover: generation, transmission, demand disaggregation, the interface to other models, computational aspects, the coal sector, water resource considerations, and air quality considerations. These subjects comprise Part I. Part II, Model Applications, covers: analysis of water resource constraints, water resource issues in the New York Power Pool, water resource issues in the New England Power Pool, water resource issues in the Pennsylvania-Jersey-Maryland Power Pool, and a summary of water resource constraint analysis. (MCW)

  10. Application of the GRC Stirling Convertor System Dynamic Model

    Science.gov (United States)

    Regan, Timothy F.; Lewandowski, Edward J.; Schreiber, Jeffrey G. (Technical Monitor)

    2004-01-01

    The GRC Stirling Convertor System Dynamic Model (SDM) has been developed to simulate the dynamic performance of power systems incorporating free-piston Stirling convertors. This paper discusses its use in evaluating system dynamics and other system concerns. Detailed examples are provided showing the use of the model in the evaluation of off-nominal operating conditions. The many degrees of freedom in both the mechanical and electrical domains inherent in the Stirling convertor, together with the nonlinear dynamics, make simulation an attractive analysis tool in conjunction with classical analysis. The application of the SDM to studying the role of the resonant circuit quality factor (commonly referred to as Q) in the various resonant mechanical and electrical sub-systems is discussed.

  11. Soft Computing Models in Industrial and Environmental Applications

    CERN Document Server

    Abraham, Ajith; Corchado, Emilio; 7th International Conference, SOCO’12

    2013-01-01

    This volume of Advances in Intelligent and Soft Computing contains accepted papers presented at SOCO 2012, held in the beautiful and historic city of Ostrava (Czech Republic) in September 2012. Soft computing represents a collection or set of computational techniques in machine learning, computer science and some engineering disciplines, which investigate, simulate, and analyze very complex issues and phenomena. After a thorough peer-review process, the SOCO 2012 International Program Committee selected 75 papers for publication in these conference proceedings, representing an acceptance rate of 38%. In this edition, special emphasis was put on the organization of special sessions. Three special sessions were organized on relevant topics: Soft Computing Models for Control Theory & Applications in Electrical Engineering, Soft Computing Models for Biomedical Signals and Data Processing, and Advanced Soft Computing Methods in Computer Vision and Data Processing. The selecti...

  12. An Application of Finite Element Modelling to Pneumatic Artificial Muscle

    Directory of Open Access Journals (Sweden)

    R. Ramasamy

    2005-01-01

    Full Text Available The purpose of this article is to introduce and give an overview of Pneumatic Artificial Muscles (PAMs) as a whole and to discuss their numerical modelling using the Finite Element (FE) method, providing more information for understanding their behaviour in generating force for actuation. PAMs mainly consist of flexible, inflatable membranes with orthotropic material behaviour. The main properties influencing PAMs are explained in terms of their load-carrying capacity and low weight in assembly. Their designs and their capacity to function as locomotion devices in robotics applications are discussed, followed by FE modelling to represent the PAMs' overall structural behaviour under potential operational conditions.

  13. Solid modeling and applications rapid prototyping, CAD and CAE theory

    CERN Document Server

    Um, Dugan

    2016-01-01

    The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), and the essentials of Rapid Prototyping, as well as the practical skills needed to apply this understanding in real-world design and manufacturing settings. The book covers three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concepts of geometric modeling, Hermite and Bezier spline curve theory, and 3-dimensional surface theories as well as rendering theory. The CAE section explores mesh generation theory, matrix notation for FEM, the stiffness method, and truss equations. And in Rapid Prototyping, the author illustrates stereolithography theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...

  14. Joint Dynamics Modeling and Parameter Identification for Space Robot Applications

    Directory of Open Access Journals (Sweden)

    Adenilson R. da Silva

    2007-01-01

    Full Text Available Long-term mission identification and model validation for in-flight manipulator control systems in almost zero gravity and a hostile space environment are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed in which several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of the least-squares procedure (LS) for obtaining the initial conditions, and a general nonlinear optimization method (MCS, the multilevel coordinate search algorithm) to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ) experiment that was developed at DLR for a utilization opportunity on the International Space Station (ISS). The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.
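
    The two-stage strategy (a robust least-squares pass for initial conditions, then a general nonlinear optimizer) can be sketched as follows. The Python example below identifies inertia, viscous friction and a smoothed Coulomb friction term for a synthetic joint; the model form, the data, and the use of Nelder-Mead as a stand-in for the MCS algorithm are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic joint data: torque tau = J*ddq + b*dq + fc*tanh(dq/vs)
# (inertia, viscous friction, smoothed Coulomb friction). All values
# and the friction form are illustrative stand-ins.
t = np.linspace(0, 10, 2000)
dq, ddq = np.sin(t), np.cos(t)
true = dict(J=0.05, b=0.8, fc=0.3, vs=0.02)
tau = (true["J"] * ddq + true["b"] * dq
       + true["fc"] * np.tanh(dq / true["vs"])
       + 0.01 * rng.standard_normal(t.size))

# Stage 1: linear least squares for J, b, fc with the nonlinearity
# frozen (sign(dq) as a crude Coulomb regressor) -> initial conditions.
X = np.column_stack([ddq, dq, np.sign(dq)])
J0, b0, fc0 = np.linalg.lstsq(X, tau, rcond=None)[0]

# Stage 2: a general nonlinear optimizer (here Nelder-Mead as a
# stand-in for MCS) refines all parameters jointly.
def cost(p):
    J, b, fc, vs = p
    pred = J * ddq + b * dq + fc * np.tanh(dq / max(vs, 1e-6))
    return np.mean((tau - pred) ** 2)

res = minimize(cost, x0=[J0, b0, fc0, 0.1], method="Nelder-Mead")
print(res.x)
```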

  15. Application of Service Quality Model in Education Environment

    Directory of Open Access Journals (Sweden)

    Ting Ding Hooi

    2016-02-01

    Full Text Available Most ideas on service quality stem from the West. The massive developments in research in the West are of undeniable importance, leading to the generation and development of new ideas. These ideas were subsequently channeled to developing countries, where they were formulated and used in order to obtain better approaches to delivering service quality. There is much to be learned from the SERVQUAL service quality model, which has attained high acceptance in the West. Service quality in the education system is important to guarantee the effectiveness and quality of education. Effective and quality education will be able to produce quality graduates, who will contribute to the development of the nation. This paper discusses the application of the SERVQUAL model to the education environment.

  16. Land Surface Modeling Applications for Famine Early Warning

    Science.gov (United States)

    McNally, A.; Verdin, J. P.; Peters-Lidard, C. D.; Arsenault, K. R.; Wang, S.; Kumar, S.; Shukla, S.; Funk, C. C.; Pervez, M. S.; Fall, G. M.; Karsten, L. R.

    2015-12-01

    Famine early warning has traditionally required close monitoring of agro-climatological conditions, putting them in historical context, and projecting them forward to anticipate end-of-season outcomes. In recent years, it has become necessary to factor in the effects of a changing climate as well. There has also been a growing appreciation of the linkage between food security and water availability. In 2009, Famine Early Warning Systems Network (FEWS NET) science partners began developing land surface modeling (LSM) applications to address these needs. With support from the NASA Applied Sciences Program, an instance of the Land Information System (LIS) was developed to specifically support FEWS NET. A simple crop water balance model (GeoWRSI) traditionally used by FEWS NET took its place alongside the Noah land surface model and the latest version of the Variable Infiltration Capacity (VIC) model, and LIS data readers were developed for FEWS NET precipitation forcings (NOAA's RFE and USGS/UCSB's CHIRPS). The resulting system was successfully used to monitor and project soil moisture conditions in the Horn of Africa, foretelling poor crop outcomes in the OND 2013 and MAM 2014 seasons. In parallel, NOAA created another instance of LIS to monitor snow water resources in Afghanistan, which are an early indicator of water availability for irrigation and crop production. These successes have been followed by investment in LSM implementations to track and project water availability in Sub-Saharan Africa and Yemen, work that is now underway. Adoption of

  17. Application of data assimilation to solar wind forecasting models

    Science.gov (United States)

    Innocenti, M.; Lapenta, G.; Vrsnak, B.; Temmer, M.; Veronig, A.; Bettarini, L.; Lee, E.; Markidis, S.; Skender, M.; Crespon, F.; Skandrani, C.; Soteria Space-Weather Forecast; Data Assimilation Team

    2010-12-01

    Data assimilation through Kalman filtering [1,2] is a powerful statistical tool that combines modeling and observations to increase the degree of knowledge of a given system. We apply this technique to the forecast of solar wind parameters (proton speed, proton temperature, absolute value of the magnetic field, and proton density) at 1 AU, using the model described in [3] and ACE data as observations. The model, which relies on GOES 12 observations of the percentage of the meridional slice of the Sun covered by coronal holes, provides forecasts of the aforementioned quantities 1 day and 6 hours in advance during quiet times (CMEs are not taken into account) in the declining phase of the solar cycle, and is tailored to specific time intervals. We show that the application of data assimilation generally improves the quality of the forecasts during quiet times and, more notably, extends the periods of applicability of the model, which can now provide reliable forecasts also in the presence of CMEs and for periods other than the ones it was designed for. Acknowledgement: The research leading to these results has received funding from the European Commission's Seventh Framework Programme (FP7/2007-2013) under grant agreement N. 218816 (SOTERIA project: http://www.soteria-space.eu). References: [1] R. Kalman, J. Basic Eng. 82, 35 (1960); [2] G. Welch and G. Bishop, Technical Report TR 95-041, University of North Carolina, Department of Computer Science (2001); [3] B. Vrsnak, M. Temmer, and A. Veronig, Solar Phys. 240, 315 (2007).
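
    A scalar Kalman filter conveys the mechanics of the assimilation step. In the hypothetical Python sketch below, a persistence forecast of one solar wind parameter is corrected with noisy ACE-like observations; the process and observation noise variances are invented, and the real application of course uses the coronal-hole model of [3] rather than persistence.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar Kalman filter fusing a model forecast of one solar wind
# parameter (e.g. proton speed, km/s) with noisy in situ observations.
# Dynamics x_k = x_{k-1} + w, observation z_k = x_k + v; the noise
# variances Q and R below are illustrative.
Q, R = 25.0, 400.0
x_est, P = 400.0, 1e4          # initial state estimate and variance

truth = 400.0
for k in range(50):
    truth += rng.normal(0, np.sqrt(Q))        # hidden evolution
    z = truth + rng.normal(0, np.sqrt(R))     # ACE-like measurement
    # predict (persistence stands in for the forecast model)
    x_pred, P_pred = x_est, P + Q
    # update with the Kalman gain
    K = P_pred / (P_pred + R)
    x_est = x_pred + K * (z - x_pred)
    P = (1 - K) * P_pred

print(x_est, truth)
```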

  18. Acoustic Propagation Modeling for Marine Hydro-Kinetic Applications

    Science.gov (United States)

    Johnson, C. N.; Johnson, E.

    2014-12-01

    The combination of riverine, tidal, and wave energy has the potential to supply over one third of the United States' annual electricity demand. However, in order to deploy and test prototypes and commercial installations, marine hydrokinetic (MHK) devices must meet strict regulatory guidelines that determine the maximum amount of noise that can be generated and set particular thresholds for determining disturbance and injury caused by noise. An accurate model for predicting the propagation of sound from an MHK source in a real-life hydro-acoustic environment has been established. This model will help promote the growth and viability of marine, water, and hydrokinetic energy by confidently assuring that federal regulations are met and harmful impacts to marine fish and wildlife are minimal. Paracousti, a finite-difference solution to the acoustic equations, was originally developed for sound propagation in atmospheric environments and has been successfully validated for a number of different geophysical activities. The three-dimensional numerical implementation is advantageous over other acoustic propagation techniques for MHK applications, where the domains of interest have complex 3D interactions from the seabed, banks, and other shallow-water effects. A number of different cases for hydro-acoustic environments have been validated against both analytical and numerical results from canonical and benchmark problems. These include a variety of hydrodynamic and physical environments that may be present in a potential MHK application, including shallow and deep water, sloping and canyon-type bottoms, and varying sound speed and density profiles. With the model successfully validated for hydro-acoustic environments, more complex and realistic MHK sources from turbines and/or arrays can be modeled.
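
    The finite-difference approach can be conveyed in one dimension. The following Python sketch advances the linear acoustic equations on a staggered pressure-velocity grid with a CFL-stable time step; it is a didactic stand-in, not Paracousti's three-dimensional scheme, and the medium properties and source are illustrative.

```python
import numpy as np

# Minimal 1-D staggered-grid finite-difference solution of the linear
# acoustic equations: du/dt = -(1/rho) dp/dx, dp/dt = -rho*c^2 du/dx.
c, rho = 1500.0, 1000.0        # sound speed (m/s), density (kg/m^3)
dx = 1.0
dt = 0.5 * dx / c              # CFL-stable time step
n = 500
p = np.zeros(n)
u = np.zeros(n + 1)

for step in range(400):
    # velocity update from the pressure gradient
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # pressure update from the velocity divergence
    p -= dt * rho * c ** 2 / dx * (u[1:] - u[:-1])
    # simple Gaussian point source in the middle of the domain
    p[n // 2] += np.exp(-((step * dt - 0.05) / 0.01) ** 2)

print(p.max())
```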

  19. How to apply the bivariate parametric tests Student's t and ANOVA in SPSS: a practical case

    Directory of Open Access Journals (Sweden)

    María-José Rubio-Hurtado

    2012-07-01

    Full Text Available Parametric tests are a type of statistical significance test that quantify the association or independence between a quantitative variable and a categorical one. Parametric tests demand certain prerequisites for their application: a normal distribution of the quantitative variable in the groups being compared, homogeneity of the variances in the populations from which the groups are drawn, and a sample size n of no fewer than 30. When these requirements are not met, nonparametric statistical tests must be used instead. Parametric tests fall into two classes: the t test (for one sample, or for two related or independent samples) and ANOVA (for more than two independent samples).
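
    Although the practical case in this record uses SPSS, the same workflow (check normality and homogeneity of variances, then run the t test or ANOVA) can be sketched in Python with SciPy; the three synthetic groups below are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Three illustrative groups (n >= 30 each, as the prerequisites ask).
g1 = rng.normal(50, 10, 30)
g2 = rng.normal(55, 10, 30)
g3 = rng.normal(60, 10, 30)

# Prerequisites: normality (Shapiro-Wilk) and homogeneity of
# variances (Levene) before running the parametric tests.
print([stats.shapiro(g).pvalue for g in (g1, g2, g3)])
print(stats.levene(g1, g2, g3).pvalue)

# Student's t test for two independent samples.
print(stats.ttest_ind(g1, g2))

# One-way ANOVA for more than two independent samples.
print(stats.f_oneway(g1, g2, g3))
```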

  20. Comparison between ANOVA estimator and SD estimator under balanced data

    Institute of Scientific and Technical Information of China (English)

    吴密霞; 孙兵

    2013-01-01

    In linear mixed-effects models, the analysis of variance (ANOVA) estimator and the spectral decomposition (SD) estimator play very important roles in constructing exact tests and generalized P-value pivotal quantities. Although the two estimators are based on different methods, they share many similar merits, such as unbiasedness and closed-form expressions. Using existing results on the spectral decomposition of the covariance matrix, this paper reveals the relationship between the ANOVA and SD estimators in general linear mixed-effects models with balanced data, and gives necessary and sufficient conditions for the equivalence of the two estimators under two covariance structures: the nested structure and the multi-way classification random-effects structure.
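
    For the simplest balanced case, the ANOVA estimators can be written down directly from the expected mean squares. The Python sketch below simulates a balanced one-way random-effects model and computes the ANOVA estimators of the two variance components; the design sizes and true variances are arbitrary, and the SD estimator (whose relation to these estimators the paper characterizes) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Balanced one-way random-effects model y_ij = mu + a_i + e_ij,
# i = 1..a groups, j = 1..n replicates.
a, n = 8, 6
sigma_a2, sigma_e2 = 4.0, 1.0
y = (10.0 + rng.normal(0, np.sqrt(sigma_a2), (a, 1))
     + rng.normal(0, np.sqrt(sigma_e2), (a, n)))

group_means = y.mean(axis=1)
grand_mean = y.mean()
msa = n * ((group_means - grand_mean) ** 2).sum() / (a - 1)
mse = ((y - group_means[:, None]) ** 2).sum() / (a * (n - 1))

# ANOVA estimators from E[MSE] = sigma_e^2, E[MSA] = sigma_e^2 + n*sigma_a^2
sigma_e2_hat = mse
sigma_a2_hat = (msa - mse) / n
print(sigma_e2_hat, sigma_a2_hat)
```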

  1. Open Data in Mobile Applications, New Models for Service Information

    Directory of Open Access Journals (Sweden)

    Manuel GÉRTRUDIX BARRIO

    2016-06-01

    Full Text Available The combination of open data generated by government and the proliferation of mobile devices enables the creation of new information services and the improved delivery of existing ones. Significantly, it gives citizens simple, quick and effective access to information. Free applications that use open data provide useful information in real time, tailored to the user's experience and/or geographic location. This changes the concept of "service information". Both the infomediary sector and citizens now have new models for the production and dissemination of this type of information. Starting from the theoretical contextualization of aspects such as the datification of reality, the mobile registration of everyday experience, and the reinterpretation of service information, we analyze the role of open data in the public sector in Spain and its concrete application in building apps based on these data sets. The findings indicate that this phenomenon will continue to grow, because these applications provide useful and efficient information for decision-making in everyday life.

  2. Conceptual study of the application software manager using the Xlet model in the nuclear fields

    International Nuclear Information System (INIS)

    In order to reduce the cost of software maintenance, including software modification, we suggest an object-oriented program that checks the version of the application program using the Java language, and a technique for executing the downloaded application program via the network using an application manager. In order to change the traditional scheduler to the application manager, we have adopted the Xlet concept in the nuclear field using the network. Usually, an Xlet is a Java application that runs on a digital television receiver. The Java TV Application Program Interface (API) defines an application model called the Xlet application lifecycle. Java applications that use this lifecycle model are called Xlets. The Xlet application lifecycle is compatible with the existing application environment and virtual machine technology. The Xlet application lifecycle model defines the dialog (protocol) between an Xlet and its environment

  3. [Application of three compartment model and response surface model to clinical anesthesia using Microsoft Excel].

    Science.gov (United States)

    Abe, Eiji; Abe, Mari

    2011-08-01

    With the spread of total intravenous anesthesia, clinical pharmacology has become more important. We report a Microsoft Excel file applying a three-compartment model and a response surface model to clinical anesthesia. On the Microsoft Excel sheet, propofol, remifentanil and fentanyl effect-site concentrations are predicted (three-compartment model), and the probabilities of no response to prodding, shaking, surrogates of painful stimuli, and laryngoscopy are calculated using the predicted effect-site drug concentrations. Time-dependent changes in these calculated values are shown graphically. Recent developments in anesthetic drug interaction studies are remarkable, and this Excel file makes their application to clinical anesthesia simple and helpful.
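
    The effect-site prediction such a spreadsheet performs can be sketched with a forward-Euler integration of a three-compartment mammillary model plus an effect-site rate constant ke0. The Python example below is a hypothetical sketch: the rate constants, volume and dosing are illustrative placeholders, not a published propofol/remifentanil/fentanyl parameter set, and real target-controlled infusion software uses validated sets.

```python
# Euler integration of a three-compartment mammillary PK model with
# an effect-site compartment (rate constants per minute; all values
# are illustrative, not a validated parameter set).
k10, k12, k21, k13, k31, ke0 = 0.12, 0.11, 0.055, 0.042, 0.0033, 0.46
V1 = 4.3                      # central volume (L)
dt = 1.0 / 60.0               # 1 s step, in minutes
T = 30.0                      # simulate 30 min

a1 = a2 = a3 = 0.0            # compartment amounts (mg)
ce = 0.0                      # effect-site concentration (mg/L)
infusion = 10.0               # mg/min for the first 2 minutes

for step in range(int(T / dt)):
    t = step * dt
    dose_rate = infusion if t < 2.0 else 0.0
    c1 = a1 / V1
    da1 = dose_rate - (k10 + k12 + k13) * a1 + k21 * a2 + k31 * a3
    da2 = k12 * a1 - k21 * a2
    da3 = k13 * a1 - k31 * a3
    dce = ke0 * (c1 - ce)     # effect site equilibrates toward plasma
    a1 += da1 * dt; a2 += da2 * dt; a3 += da3 * dt; ce += dce * dt

print("plasma:", a1 / V1, "effect site:", ce)
```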

  4. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2016-02-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem by means of viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, coupling procedures, model evaluation, and performance comparisons of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structure of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated, and new hybrid approaches are proposed.

  5. The GCE SYGMA (Stellar Yields for Galaxy Modeling Applications) module

    International Nuclear Information System (INIS)

    The Stellar Yields for Galactic Modelling Applications (SYGMA) module combines the NuGrid yields and other stellar feedback in a single Python, Fortran or web-accessible framework. The module provides the time evolution of the abundances of all the chemical elements of 'star particles' that represent single stellar populations (SSPs). It delivers the AGB, SN Ia and massive star contributions of material returned by the SSP after a star-formation burst. Various (including user-supplied) options for standard parameters of chemical evolution, such as the IMF, the SN Ia delay-time distribution and the SFR, are available. An example application of the module would be to model the baryonic feedback in cosmological structure formation simulations. The module can also be used to describe galactic chemical evolution in the simple single-box approximation. Furthermore, we offer a light version of SYGMA as a vehicle to explore the large NuGrid datasets with an online interface. This allows the community to visualize and perform calculations with sets of data in an interactive Python environment. (author)

  6. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. In particular, it presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. The monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; and students at an advanced undergraduate or master's level in information systems, business administration, or the application of computer science.

  7. Hamiltonian realization of power system dynamic models and its applications

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Power systems are typical energy systems. Because Hamiltonian approaches are closely related to the energy of a physical system, they have been widely researched in recent years. The realization of the Hamiltonian structure of a nonlinear dynamic system is the basis for the application of Hamiltonian methods. However, there has been no systematic investigation of the Hamiltonian realization of different power system dynamic models so far. This paper studies Hamiltonian realization in power system dynamics. Starting from the widely used power system dynamic models, the paper reveals the intrinsic Hamiltonian structure of nonlinear power system dynamics and proposes approaches to formulate the power system Hamiltonian structure. Furthermore, this paper shows the application of the Hamiltonian structure of the power system dynamics to the design of non-smooth controllers that take into account the nonlinear ceiling effects arising from real physical limits. The general procedure for designing controllers via the Hamiltonian structure is also summarized in the paper. Controller design based on the Hamiltonian structure is a completely nonlinear method, with no linearization during the controller design process. Thus, the nonlinear characteristics of the dynamic system are completely kept and fully utilized.
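
    As a point of reference, the dissipative (port-controlled) Hamiltonian realization that such work targets is usually written in the standard form below, where J(x) is skew-symmetric, R(x) is positive semi-definite, H(x) is the total energy and u, y are the port variables. This generic form is given here for orientation; it is not quoted from the paper.

```latex
\dot{x} = \bigl[J(x) - R(x)\bigr]\,\nabla H(x) + g(x)\,u,
\qquad
y = g(x)^{\mathsf{T}}\,\nabla H(x),
\qquad
\frac{dH}{dt} = -\nabla H^{\mathsf{T}} R\,\nabla H + y^{\mathsf{T}} u \le y^{\mathsf{T}} u .
```

    The energy-balance inequality on the right is what makes this structure attractive for controller design: the stored energy can only decrease faster than the power supplied through the port.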

  8. Global Modeling of CO2 Discharges with Aerospace Applications

    Directory of Open Access Journals (Sweden)

    Chloe Berenguer

    2014-01-01

    Full Text Available We developed a global model aiming to study discharges in CO2 under various conditions, covering a large range of pressures, absorbed energies, and feed values. Various physical conditions and form factors have been investigated. The model was applied to a case of a radiofrequency discharge and to helicon-type devices functioning in low- and high-feed conditions. In general, the main charged species were found to be CO2+ for sufficiently low-pressure cases and O− for higher-pressure ones, followed by CO2+, CO+, and O2+ in the latter case. The dominant reaction is the dissociation of CO2, resulting in CO production. Electronegativity, important for radiofrequency discharges, increases with pressure, reaching up to 3 at high flow rates for an absorbed power of 250 W, and diminishes with increasing absorbed power. Model results pertaining to radiofrequency-type plasma discharges are found to be in satisfactory agreement with those available from an existing experiment. Application to low and high flow rate feeding cases of a helicon thruster allowed for the evaluation of thruster operating conditions at absorbed powers from 50 W to 1.8 kW. The model allows for a detailed evaluation of the potential of CO2 to be used as a propellant in electric propulsion devices.

  9. Real-time application of the drag based model

    Science.gov (United States)

    Žic, Tomislav; Temmer, Manuela; Vršnak, Bojan

    2016-04-01

    The drag-based model (DBM) is an analytical model usually used for calculating the kinematics of coronal mass ejections (CMEs) in interplanetary space and for predicting CME arrival times and impact speeds at arbitrary targets in the heliosphere. The main assumption of the model is that beyond a distance of about 20 solar radii from the Sun, drag is the dominant force in interplanetary space. The previous version of the DBM relied on the rough assumption of averaged, unperturbed and constant environmental conditions, as well as constant CME properties, throughout the entire interplanetary CME propagation. The continuation of our work consists of enhancing the model into a form that uses a time-dependent and perturbed environment, without constraints on CME properties and forecasting distance. The extension provides the possibility of application in various scenarios, such as automatic least-squares fitting on initial CME kinematic data suitable for real-time forecasting of CME kinematics, or embedding the DBM in pre-calculated interplanetary ambient conditions provided by advanced numerical simulations (for example, ENLIL, EUHFORIA, etc.). A demonstration of the enhanced DBM is available on the web-site: http://www.geof.unizg.hr/~tzic/dbm.html. We acknowledge the support of the European Social Fund under the "PoKRet" project.
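
    For constant drag parameter gamma and solar wind speed w, the drag equation a = -gamma*(v - w)*|v - w| integrates in closed form (Vrsnak et al. 2013), which is what makes the DBM fast enough for real-time use. The Python sketch below evaluates those closed-form kinematics and scans for the 1 AU arrival; the take-off speed, drag parameter and wind speed are illustrative values within typically quoted ranges, not outputs of the web tool.

```python
import numpy as np

Rs = 6.96e8                      # solar radius (m)
au = 1.496e11                    # astronomical unit (m)
gamma = 2e-11                    # drag parameter, ~0.2e-7 km^-1 (1/m)
w = 400e3                        # ambient solar wind speed (m/s)
v0, r0 = 1000e3, 20 * Rs         # CME take-off speed and distance

def dbm(t):
    """Closed-form DBM speed and distance at time t (s), for v0 > w:
    v = w + (v0-w)/(1+gamma*(v0-w)*t), r = r0 + w*t + ln(...)/gamma."""
    u = gamma * (v0 - w) * t
    v = w + (v0 - w) / (1 + u)
    r = r0 + w * t + np.log(1 + u) / gamma
    return v, r

# Find the 1 AU arrival time by simple scanning.
t = np.arange(0.0, 6e5, 60.0)
v, r = dbm(t)
i = np.argmax(r >= au)
print("arrival after %.1f h at %.0f km/s" % (t[i] / 3600, v[i] / 1e3))
```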

  10. Applicative limitations of sediment transport on predictive modeling in geomorphology

    Institute of Scientific and Technical Information of China (English)

    WEI Xiang; LI Zhanbin

    2004-01-01

    Sources of uncertainty or error that arise in attempting to scale up the results of laboratory-scale sediment transport studies for predictive modeling of geomorphic systems include: (i) model imperfection, (ii) omission of important processes, (iii) lack of knowledge of initial conditions, (iv) sensitivity to initial conditions, (v) unresolved heterogeneity, (vi) occurrence of external forcing, and (vii) inapplicability of the factor of safety concept. Sources of uncertainty that are unimportant or that can be controlled at small scales and over short time scales become important in large-scale applications and over long time scales. Control and repeatability, hallmarks of laboratory-scale experiments, are usually lacking at the large scales characteristic of geomorphology. Heterogeneity is an important concomitant of size, and tends to make large systems unique. Uniqueness implies that prediction cannot be based upon first-principles quantitative modeling alone, but must be a function of system history as well. Periodic data collection, feedback, and model updating are essential where site-specific prediction is required.

  11. Applicability of statistical learning algorithms in groundwater quality modeling

    Science.gov (United States)

    Khalil, Abedalrazq; Almasri, Mohammad N.; McKee, Mac; Kaluarachchi, Jagath J.

    2005-05-01

    Four algorithms are outlined, each of which has interesting features for predicting contaminant levels in groundwater. Artificial neural networks (ANN), support vector machines (SVM), locally weighted projection regression (LWPR), and relevance vector machines (RVM) are utilized as surrogates for a relatively complex and time-consuming mathematical model to simulate nitrate concentration in groundwater at specified receptors. Nitrates in the application reported in this paper are due to on-ground nitrogen loadings from fertilizers and manures. The practicability of the four learning machines in this work is demonstrated for an agriculture-dominated watershed where nitrate contamination of groundwater resources exceeds the maximum allowable contaminant level at many locations. Cross-validation and bootstrapping techniques are used for both training and performance evaluation. Prediction results of the four learning machines are rigorously assessed using different efficiency measures to ensure their generalization ability. Prediction results show the ability of learning machines to build accurate models with strong predictive capabilities and hence constitute a valuable means for saving effort in groundwater contamination modeling and improving model performance.

  12. Development and application of a hillslope hydrologic model

    Science.gov (United States)

    Blain, C.A.; Milly, P.C.D.

    1991-01-01

    A vertically integrated two-dimensional lateral flow model of soil moisture has been developed. Derivation of the governing equation is based on a physical interpretation of hillslope processes. The lateral subsurface-flow model permits variability of precipitation and evapotranspiration, and allows arbitrary specification of soil-moisture retention properties. Variable slope, soil thickness, and saturation are all accommodated. The numerical solution method, a Crank-Nicolson, finite-difference, upstream-weighted scheme, is simple and robust. A small catchment in northeastern Kansas is the subject of an application of the lateral subsurface-flow model. Calibration of the model using observed discharge provides estimates of the active porosity (0.1 cm3/cm3) and of the saturated horizontal hydraulic conductivity (40 cm/hr). The latter figure is at least an order of magnitude greater than the vertical hydraulic conductivity associated with the silty clay loam soil matrix. The large value of hydraulic conductivity derived from the calibration is suggestive of macropore-dominated hillslope drainage. The corresponding value of active porosity agrees well with a published average value of the difference between total porosity and field capacity for a silty clay loam. © 1991.

  13. An application of artificial intelligence for rainfall–runoff modeling

    Indian Academy of Sciences (India)

    Ali Aytek; M Asce; Murat Alp

    2008-04-01

    This study proposes the application of two artificial intelligence (AI) techniques for rainfall-runoff modeling: artificial neural networks (ANN) and evolutionary computation (EC). Two different ANN techniques, the feed-forward back propagation (FFBP) and generalized regression neural network (GRNN) methods, are compared with one EC method, Gene Expression Programming (GEP), a new evolutionary algorithm that evolves computer programs. The daily hydrometeorological data of three rainfall stations and one streamflow station for the Juniata River Basin in the state of Pennsylvania, USA, are used in the model development. Statistical parameters such as the average, standard deviation, coefficient of variation, skewness, and minimum and maximum values, as well as criteria such as the mean square error (MSE) and the coefficient of determination (R2), are used to measure the performance of the models. The results indicate that the proposed genetic programming (GP) formulation performs quite well compared with the results obtained by the ANNs and is quite practical for use. It is concluded from the results that GEP can be proposed as an alternative to ANN models.

  14. Development and application of coarse-grained models for lipids

    Science.gov (United States)

    Cui, Qiang

    2013-03-01

    I'll discuss a number of topics that represent our efforts in developing reliable molecular models for describing chemical and physical processes involving biomembranes. This is an exciting yet challenging research area because of the multiple length and time scales that are present in the relevant problems. Accordingly, we attempt to (1) understand the value and limitation of popular coarse-grained (CG) models for lipid membranes with either a particle or continuum representation; (2) develop new CG models that are appropriate for the particular problem of interest. As specific examples, I'll discuss (1) a comparison of atomistic, MARTINI (a particle based CG model) and continuum descriptions of a membrane fusion pore; (2) the development of a modified MARTINI model (BMW-MARTINI) that features a reliable description of membrane/water interfacial electrostatics and its application to cell-penetration peptides and membrane-bending proteins. Motivated specifically by the recent studies of Wong and co-workers, we compare the self-assembly behaviors of lipids with cationic peptides that include either Arg residues or a combination of Lys and hydrophobic residues; in particular, we attempt to reveal factors that stabilize the cubic ``double diamond'' Pn3m phase over the inverted hexagonal HII phase. For example, to explicitly test the importance of the bidentate hydrogen-bonding capability of Arg to the stabilization of negative Gaussian curvature, we also compare results using variants of the BMW-MARTINI model that treat the side chain of Arg with different levels of details. Collectively, the results suggest that both the bidentate feature of Arg and the overall electrostatic properties of cationic peptides are important to the self-assembly behavior of these peptides with lipids. The results are expected to have general implications to the mechanism of peptides and proteins that stimulate pore formation in biomembranes. Work in collaboration with Zhe Wu, Leili Zhang

  15. A priori discretization quality metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    modification. The metrics for the first time provide quantification of the routing-relevant information loss due to discretization, based on the relationship between in-channel routing length and flow velocity. Moreover, they identify and count the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes upon the input data and accumulating variable changes in an area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model whose grid scales are greater than the input data resolution. The discretization metrics and decision-making approach are applied to the Grand River watershed located in southwestern Ontario, Canada, where discretization decisions are required for a semi-distributed modelling application. Results show that discretization-induced information loss monotonically increases as the discretization gets coarser. With regard to routing information loss in subbasin discretization, multiple points of interest, rather than just the watershed outlet, should be considered. Moreover, subbasin and HRU discretization decisions should not be considered independently, since the subbasin input significantly influences the complexity of the HRU discretization result. Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity under the same model complexity (number of computational units).

  16. Performance Comparison of Two Meta-Model for the Application to Finite Element Model Updating of Structures

    Institute of Scientific and Technical Information of China (English)

    Yang Liu; DeJun Wang; Jun Ma; Yang Li

    2014-01-01

    To investigate the application of meta-models to finite element (FE) model updating of structures, the performance of two popular meta-models, i.e., the Kriging model and the response surface model (RSM), was compared in detail. First, the two kinds of meta-model are introduced briefly. Second, some key issues in the application of meta-models to FE model updating of structures are proposed and discussed, and advice is presented for selecting a reasonable meta-model for the purpose of updating the FE model of a structure. Finally, the procedure of FE model updating based on a meta-model was implemented by updating the FE model of a truss bridge model with measured modal parameters. The results showed that the Kriging model is more suitable for FE model updating of complex structures.

  17. Soil erosion by water - model concepts and application

    Science.gov (United States)

    Schmidt, Juergen

    2010-05-01

    approaches will be discussed with reference to the models WEPP, EUROSEM, LISEM and EROSION 3D. In order to provide a better representation of spatially heterogeneous catchments in terms of land use, soil, slope, and rainfall, most recently developed models operate on a grid-cell basis or other kinds of sub-units, each having uniform characteristics. These so-called "distributed models" accept inputs from raster-based geographic information systems (GIS). The cell-based structure of the models also allows drainage paths to be generated, by which water and sediment can be routed from the top to the bottom of the respective watershed. One of the open problems in soil erosion modelling concerns the spontaneous generation of erosion rills without the need for pre-existing morphological contours. A promising approach to handling this problem was first realized in the RILLGROW model, which uses a cellular automaton system to generate realistic rill patterns. With respect to the above-mentioned models, selected applications will be presented and discussed regarding their usability for soil and water conservation purposes.

  18. Using the object modeling system for hydrological model development and application

    Directory of Open Access Journals (Sweden)

    S. Kralisch

    2005-01-01

    Full Text Available State-of-the-art challenges in the sustainable management of water resources have created demand for integrated, flexible and easy-to-use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus cannot be easily adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS GPSRU (Fort Collins, CO, USA), the USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows single process components to be implemented, compiled and applied as custom-tailored model assemblies. This paper describes the basic principles of the OMS and its main components and explains in more detail how the problems arising during the coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example, the implementation of the hydrological model J2000 is discussed.

  19. A review of modeling applications using ROMS model and COAWST system in the Adriatic sea region

    CERN Document Server

    Carniel, Sandro

    2013-01-01

    From its first implementation in a purely hydrodynamic configuration to the latest configuration under the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) system, several specific modelling applications of the Regional Ocean Modelling System (ROMS, www.myroms.org) have been put forward within the Adriatic Sea (Italy) region. Now covering a wide range of spatial and temporal scales, they have developed in a growing number of fields supporting Integrated Coastal Zone Management (ICZM) and Marine Spatial Planning (MSP) activities in this semi-enclosed sea of paramount importance, including the Gulf of Venice. Presently, one ROMS operational implementation provides daily 3-day forecasts of hydrodynamics and sea level, a second one models the most relevant biogeochemical properties, and a third one (two-way coupled with the Simulating Waves Nearshore (SWAN) model) deals with extreme wave forecasts. Such operational models provide support to civil and environmental protection activities (e.g., driving su...

  20. GSTARS computer models and their applications, part I: theoretical development

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3.

  1. Investigation of 1H NMR Profile of Vegetarian Human Urine Using ANOVA-based Multi-factor Analysis

    Institute of Scientific and Technical Information of China (English)

    董继扬; 邓伶莉; CHENG Kian-Kai; GRIFFIN Julian L.; 陈忠

    2011-01-01

    In this study, a technique combining analysis of variance (ANOVA) and partial least squares discriminant analysis (PLS-DA) was used to compare the urine 1H NMR spectra of healthy people from vegetarian and omnivorous populations. In ANOVA/PLS-DA, the variation in the data is first decomposed into different variance components, each containing a single source of variation; after interfering factors are filtered out, each of the resulting variance components is analyzed using PLS-DA. The experimental results showed that ANOVA/PLS-DA is efficient in disentangling the effects of diet and gender on the metabolic profile, and the method can be used to extract biologically relevant information for result interpretation.
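
    The ANOVA/PLS-DA idea (closely related to what is often called ASCA) can be sketched as follows: project out each factor's mean-effect matrix, then model one effect (plus residual) with PLS-DA. In the Python sketch below, scikit-learn's PLSRegression on a dummy-coded class serves as a stand-in for PLS-DA, and the synthetic "spectra" with diet- and gender-linked bins are invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)

# Synthetic "spectra": 40 subjects x 100 bins, with additive diet
# and gender effects baked into different bins.
n, p = 40, 100
diet = rng.integers(0, 2, n)      # 0 = omnivore, 1 = vegetarian
gender = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, p))
X[diet == 1, :10] += 1.0          # diet-linked bins
X[gender == 1, 10:20] += 1.5      # gender-linked bins

# ANOVA step: split X into factor mean-effect matrices + residual.
Xc = X - X.mean(axis=0)
def effect(levels):
    E = np.zeros_like(Xc)
    for lv in np.unique(levels):
        E[levels == lv] = Xc[levels == lv].mean(axis=0)
    return E
X_diet, X_gender = effect(diet), effect(gender)
X_res = Xc - X_diet - X_gender

# PLS-DA on the diet effect (plus residual), with gender filtered out.
pls = PLSRegression(n_components=2)
pls.fit(X_diet + X_res, diet.astype(float))
print("R2 on diet effect:", pls.score(X_diet + X_res, diet.astype(float)))
```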

  2. Management model application at nested spatial levels in Mediterranean Basins

    Science.gov (United States)

    Lo Porto, Antonio; De Girolamo, Anna Maria; Froebrich, Jochen

    2014-05-01

    In the EU Water Framework Directive (WFD) implementation process, hydrological and water quality models can be powerful tools that allow alternative management strategies to be designed and tested, and their general feasibility and acceptance to be judged. Although several models have been developed in recent decades, their use in Mediterranean basins, where rivers have a temporary character, is quite complex, and there is limited information in the literature to facilitate model application and result evaluation in this region. The high spatial variability that characterizes the rainfall events, soil hydrological properties and land uses of Mediterranean basins makes it more difficult to simulate hydrology and water quality in this region than in other countries. This variability also has several implications for model simulation results, especially when simulations at different spatial scales are needed for watershed management purposes. It is well known that environmental processes operating at different spatial scales determine diverse impacts on water quality status (hydrological, chemical, ecological). Hence, the development of management strategies has to include both large (watershed) and local spatial scales (e.g. stream reach). This paper presents the results of a study that analyzes how spatial scale affects the results of hydrologic process and water quality model simulations in a Mediterranean watershed. Several aspects involved in modeling hydrological and water quality processes at different spatial scales for river basin management are investigated, including model data requirements, data availability, model results and uncertainty. A hydrologic and water quality model (SWAT) was used to simulate hydrologic processes and water quality at different spatial scales in the Candelaro river basin (Puglia, SE Italy) and to design management strategies to reach WFD goals as far as possible. When studying a basin to assess its current status

  3. Testing simulation and structural models with applications to energy demand

    Science.gov (United States)

    Wolff, Hendrik

    2007-12-01

    This dissertation deals with energy demand and consists of two parts. Part one proposes a unified econometric framework for modeling energy demand and examples illustrate the benefits of the technique by estimating the elasticity of substitution between energy and capital. Part two assesses the energy conservation policy of Daylight Saving Time and empirically tests the performance of electricity simulation. In particular, the chapter "Imposing Monotonicity and Curvature on Flexible Functional Forms" proposes an estimator for inference using structural models derived from economic theory. This is motivated by the fact that in many areas of economic analysis theory restricts the shape as well as other characteristics of functions used to represent economic constructs. Specific contributions are (a) to increase the computational speed and tractability of imposing regularity conditions, (b) to provide regularity preserving point estimates, (c) to avoid biases existent in previous applications, and (d) to illustrate the benefits of our approach via numerical simulation results. The chapter "Can We Close the Gap between the Empirical Model and Economic Theory" discusses the more fundamental question of whether the imposition of a particular theory to a dataset is justified. I propose a hypothesis test to examine whether the estimated empirical model is consistent with the assumed economic theory. Although the proposed methodology could be applied to a wide set of economic models, this is particularly relevant for estimating policy parameters that affect energy markets. This is demonstrated by estimating the Slutsky matrix and the elasticity of substitution between energy and capital, which are crucial parameters used in computable general equilibrium models analyzing energy demand and the impacts of environmental regulations. Using the Berndt and Wood dataset, I find that capital and energy are complements and that the data are significantly consistent with duality

  4. Inverse Problems in Complex Models and Applications to Earth Sciences

    Science.gov (United States)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). Probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized, assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
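
    As a compact illustration of the factorizations described above (the notation is mine, not the author's), a two-layer hierarchical posterior with two independent datasets can be written as:

```latex
% Posterior over a primary layer m1 (e.g. lithology) and a secondary layer
% m2 (e.g. physical properties), given independent datasets d1 and d2:
\[
  \pi(m_1, m_2 \mid d_1, d_2) \;\propto\;
  \underbrace{L_1(d_1 \mid m_2)\, L_2(d_2 \mid m_2)}_{\text{factorized likelihoods}}\;
  \underbrace{\rho(m_2 \mid m_1)\,\rho(m_1)}_{\text{hierarchical prior}}
\]
```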

  5. Simulation Modeling in Plant Breeding: Principles and Applications

    Institute of Scientific and Technical Information of China (English)

    WANG Jian-kang; Wolfgang H Pfeiffer

    2007-01-01

    Conventional plant breeding largely depends on phenotypic selection and the breeder's experience; breeding efficiency is therefore low and predictions are inaccurate. Along with the fast development of molecular biology and biotechnology, a large amount of biological data is available for genetic studies of important breeding traits in plants, which in turn allows genotypic selection in the breeding process. However, gene information has not been used effectively in crop improvement because of the lack of appropriate tools. The simulation approach can utilize this vast and diverse genetic information, predict cross performance, and compare different selection methods. Thus, the best-performing crosses and effective breeding strategies can be identified. QuLine is a computer tool capable of defining genetic models ranging from simple to complex and of simulating breeding processes for developing final advanced lines. On the basis of results from simulation experiments, breeders can optimize their breeding methodology and greatly improve breeding efficiency. In this article, the underlying principles of simulation modeling in crop improvement are first introduced, after which several applications of QuLine are summarized: comparing different selection strategies, precise parental selection using known gene information, and the design approach in breeding. Breeding simulation allows the definition of complicated genetic models consisting of multiple alleles, pleiotropy, epistasis, and genotype-by-environment interaction, and provides a useful tool for breeders to efficiently use the wide spectrum of genetic data and information available.

  6. Application of modeling to local chemistry in PWR steam generators

    International Nuclear Information System (INIS)

    Localized corrosion of the SG tubes and other components is due to the presence of an aggressive environment in local crevices and occluded regions. In crevices and on vertical and horizontal tube surfaces, corrosion products and particulate matter can accumulate in the form of porous deposits. The SG water contains impurities at extremely low levels (ppb). Low levels of non-volatile impurities, however, can be efficiently concentrated in crevices and sludge piles by a thermal hydraulic mechanism. The temperature gradient across the SG tube, coupled with local flow starvation, produces local boiling in the sludge and crevices. Since mass transfer processes are inhibited in these geometries, the residual liquid becomes enriched in many of the species present in the SG water. The resulting concentrated solutions have been shown to be aggressive and can corrode the SG materials. This corrosion may occur under various conditions which result in different types of attack such as pitting, stress corrosion cracking, wastage and denting. A major goal of EPRI's research program has been the development of models of the concentration process and the resulting chemistry. An improved understanding should eventually allow utilities to reduce or eliminate the corrosion by appropriate manipulation of the steam generator water chemistry and/or crevice conditions. The application of these models to experimental data obtained for prototypical SG tube support crevices is described in this paper. The models adequately describe the key features of the experimental data, allowing extrapolations to be made to plant conditions. (author)

  7. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes, respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big-data issues when analyzing performance on leadership-class computing systems and to assist the HPC community in making the most effective use of these resources.

  8. Numerical modeling of magnetic moments for UXO applications

    Science.gov (United States)

    Sanchez, V.; Li, Y.; Nabighian, M.; Wright, D.

    2006-01-01

    The surface magnetic anomaly observed in UXO clearance is mainly dipolar and, consequently, the dipole is the only magnetic moment regularly recovered in UXO applications. The dipole moment contains information about intensity of magnetization but lacks information about shape. In contrast, higher-order moments, such as quadrupole and octupole, encode asymmetry properties of the magnetization distribution within the buried targets. In order to improve our understanding of magnetization distribution within UXO and non-UXO objects and its potential utility in UXO clearance, we present a 3D numerical modeling study for highly susceptible metallic objects. The basis for the modeling is the solution of a nonlinear integral equation describing magnetization within isolated objects. A solution for magnetization distribution then allows us to compute magnetic moments of the object, analyze their relationships, and provide a depiction of the surface anomaly produced by different moments within the object. Our modeling results show significant high-order moments for more asymmetric objects situated at depths typical of UXO burial, and suggest that the increased relative contribution to magnetic gradient data from these higher-order moments may provide a practical tool for improved UXO discrimination.
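
    As a rough illustration of the lowest-order quantity involved (a sketch under simplified assumptions, not the authors' nonlinear integral-equation solver), the dipole moment of a voxelized magnetization model is just the volume-weighted sum of the magnetization, and its far field follows the standard point-dipole formula:

```python
import numpy as np

def dipole_moment(M, dV):
    """Dipole moment m = sum_i M_i * dV of a voxelized body [A m^2].
    M: (N, 3) array of voxel magnetizations [A/m]; dV: voxel volume [m^3]."""
    return M.sum(axis=0) * dV

def dipole_field(m, r):
    """Flux density of a point dipole m observed at offset r [T]."""
    mu0 = 4e-7 * np.pi
    rn = np.linalg.norm(r)
    rhat = r / rn
    return mu0 / (4 * np.pi * rn**3) * (3 * np.dot(m, rhat) * rhat - m)

# Illustrative uniformly magnetized object: 1000 voxels of 1 mm^3 at 1e4 A/m.
M = np.tile([0.0, 0.0, 1e4], (1000, 1))
m = dipole_moment(M, dV=1e-9)
print(m, dipole_field(m, np.array([0.0, 0.0, 2.0])))
```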

  9. Predictive modeling of addiction lapses in a mobile health application.

    Science.gov (United States)

    Chih, Ming-Yuan; Patton, Timothy; McTavish, Fiona M; Isham, Andrew J; Judkins-Fisher, Chris L; Atwood, Amy K; Gustafson, David H

    2014-01-01

    The chronically relapsing nature of alcoholism leads to substantial personal, family, and societal costs. The Addiction-Comprehensive Health Enhancement Support System (A-CHESS) is a smartphone application that aims to reduce relapse. To offer targeted support to patients who are at risk of lapses within the coming week, a Bayesian network model to predict such events was constructed using responses on 2,934 weekly surveys (called the Weekly Check-in) from 152 alcohol-dependent individuals who had recently completed residential treatment. The Weekly Check-in is a self-monitoring service, provided in A-CHESS, to track patients' recovery progress. The model showed good predictability, with an area under the receiver operating characteristic curve of 0.829 in the 10-fold cross-validation and 0.912 in the external validation. The sensitivity/specificity table assists in the tradeoff decisions necessary to apply the model in practice. This study moves us closer to the goal of providing lapse prediction so that patients might receive more targeted and timely support. PMID:24035143
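
    The cross-validation step can be reproduced generically with standard tooling. The sketch below uses synthetic stand-in data and a logistic model, not the paper's Bayesian network or the Weekly Check-in data, to show the 10-fold AUC computation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                          # stand-in survey features
y = (X[:, 0] + rng.normal(size=500) > 1).astype(int)   # stand-in lapse labels

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```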

  10. Application of WEAP Simulation Model to Hengshui City Water Planning

    Institute of Scientific and Technical Information of China (English)

    OJEKUNLE Z O; ZHAO Lin; LI Manzhou; YANG Zhen; TAN Xin

    2007-01-01

    As in many river basins in China, water resources in the Fudong Pai River are almost fully allocated. This paper seeks to assess and evaluate water resource problems using the Water Evaluation And Planning (WEAP) model via its application to the Hengshui Basin of the Fudong Pai River. This model allows the simulation and analysis of various water allocation scenarios and, above all, scenarios of users' behavior. Water demand management is one of the options discussed in detail. Simulations are proposed for diverse climatic situations from dry years to normal years, and results are discussed. Within the limits of data availability, it appears that most water users are not able to meet all their requirements from the river, and that even the ecological reserve will not be fully met during certain years. But the adoption of water demand management procedures offers opportunities for remedying this situation during normal hydrological years. However, it appears that demand management alone will not suffice during dry years. Nevertheless, the ease of use of the model and its user-friendly interfaces make it particularly useful for discussions and dialogue on water resources management among stakeholders.

  11. Application of Molecular Modeling to Urokinase Inhibitors Development

    Directory of Open Access Journals (Sweden)

    V. B. Sulimov

    2014-01-01

    Urokinase-type plasminogen activator (uPA) plays an important role in the regulation of diverse physiologic and pathologic processes. Experimental research has shown that elevated uPA expression is associated with cancer progression, metastasis, and shortened survival in patients, whereas suppression of the proteolytic activity of uPA leads to an evident decrease in metastasis. Therefore, uPA has been considered a promising molecular target for the development of anticancer drugs. The present study sets out to develop new selective uPA inhibitors using computer-aided structure-based drug design methods. The investigation involves the following stages: computer modeling of the protein active site; development and validation of computer molecular modeling methods, namely docking (SOL program), postprocessing (DISCORE program), direct generalized docking (FLM program), and the application of quantum chemical calculations (MOPAC package); a search for uPA inhibitors among molecules from databases of ready-made compounds; and the design of new chemical structures, their optimization, and experimental examination. On the basis of known uPA inhibitors and modeling results, 18 new compounds have been designed, evaluated with the programs mentioned above, synthesized, and tested in vitro. Eight of them display inhibitory activity, and two of them display activity of about 10 μM.

  12. Modeling Markov Switching ARMA-GARCH Neural Networks Models and an Application to Forecasting Stock Returns

    Directory of Open Access Journals (Sweden)

    Melike Bildirici

    2014-01-01

    The study has two aims. The first is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties into MS-GARCH processes. The second is to augment the MS-GARCH type models with artificial neural networks, exploiting their universal approximation properties to achieve improved forecasting accuracy. The proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are therefore further augmented with MLP, recurrent NN, and hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are used to model daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray's MS-GARCH model provide promising results, while the best results are obtained for their neural-network-based counterparts. Further, among the models analyzed, those based on the Hybrid-MLP and Recurrent-NN, the MS-ARMA-FIAPGARCH-HybridMLP and MS-ARMA-FIAPGARCH-RNN, provided the best forecast performance over the baseline single-regime GARCH models and, further, over Gray's MS-GARCH model. The models are therefore promising for various economic applications.
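
    The forecast-evaluation step can be sketched generically. The code below (synthetic forecast errors; the MS-GARCH-NN models themselves are far more involved than anything shown here) computes the error criteria named above and a basic Diebold-Mariano statistic without lag correction:

```python
import numpy as np
from scipy import stats

def error_criteria(e):
    """MAE, MSE, RMSE for a vector of forecast errors."""
    return np.mean(np.abs(e)), np.mean(e**2), np.sqrt(np.mean(e**2))

def diebold_mariano(e1, e2):
    """DM test of equal squared-error forecast accuracy (no lag correction)."""
    d = e1**2 - e2**2
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    return dm, 2 * (1 - stats.norm.cdf(abs(dm)))   # statistic, two-sided p-value

rng = np.random.default_rng(1)
e_garch = rng.normal(0, 1.0, 1000)   # stand-in errors, baseline model
e_nn = rng.normal(0, 0.9, 1000)      # stand-in errors, NN-augmented model
print(error_criteria(e_garch), error_criteria(e_nn))
print(diebold_mariano(e_garch, e_nn))
```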

  13. Optimization of friction welding by Taguchi and ANOVA method on commercial aluminium tube to Al 2025 tube plate with backing block using an external tool

    Energy Technology Data Exchange (ETDEWEB)

    Kanna, S.; Kumaraswamidhs, L. A. [Indian Institute of Technology, Dhanbad (India); Kumaran, S. Senthil [RVS School of Engineering and Technology, Dindigul (India)

    2016-05-15

    The aim of the present work is to optimize friction welding of tube to tube plate using an external tool (FWTPET), with clearance fit, for a commercial aluminium tube welded to an Al 2025 tube plate. Conventional friction welding is suitable only for symmetrical joints, either tube to tube or rod to rod, but with the help of an external tool the welding of an unsymmetrical tube-to-tube-plate joint is also possible. In this investigation, welding parameters such as tool rotating speed (rpm), projection of tube (mm) and depth of cut (mm) were arranged according to a Taguchi L9 orthogonal array. Two conditions were examined: condition 1, a flat plate with a plain tube without holes (WOH) on the circumference of the surface, and condition 2, a flat plate with a plain tube with holes (WH) on the circumference of the surface. The Taguchi L9 orthogonal array was used to find the most significant control factors yielding better joint strength, and the most influential process parameter was determined using statistical analysis of variance (ANOVA). Finally, the results for the two conditions were compared by means of percentage contribution and regression analysis. A general regression equation was formulated, better strength was obtained, and the result was validated by means of a confirmation test. The optimal welded joint strengths for the tube without holes and the tube with holes were observed to be 319.485 MPa and 264.825 MPa, respectively.
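
    The L9/ANOVA workflow can be sketched with standard statistical tooling. The design matrix below is a standard L9 array for three factors at three levels, but the strength responses are made-up placeholders, not the study's measurements:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Standard L9 orthogonal array: three factors at three levels, nine runs.
df = pd.DataFrame({
    "speed":      [1, 1, 1, 2, 2, 2, 3, 3, 3],   # tool rotating speed level
    "projection": [1, 2, 3, 1, 2, 3, 1, 2, 3],   # tube projection level
    "depth":      [1, 2, 3, 2, 3, 1, 3, 1, 2],   # depth of cut level
    "strength":   [281, 295, 301, 297, 312, 305, 299, 308, 319],  # illustrative
})

model = smf.ols("strength ~ C(speed) + C(projection) + C(depth)", data=df).fit()
table = anova_lm(model)
# Taguchi-style percentage contribution: each factor's SS over the total SS.
table["contrib_%"] = 100 * table["sum_sq"] / table["sum_sq"].sum()
print(table)
```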

  14. Parametric Model for Astrophysical Proton-Proton Interactions and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Niklas [Royal Inst. Tech., Stockholm]

    2008-01-29

    Observations of gamma-rays have been made from celestial sources such as active galaxies, gamma-ray bursts and supernova remnants as well as the Galactic ridge. The study of gamma rays can provide information about production mechanisms and cosmic-ray acceleration. In the high-energy regime, one of the dominant mechanisms for gamma-ray production is the decay of neutral pions produced in interactions of ultra-relativistic cosmic-ray nuclei and interstellar matter. Presented here is a parametric model for calculations of inclusive cross sections and transverse momentum distributions for secondary particles (gamma rays, e±, νe, ν̄e, νμ and ν̄μ) produced in proton-proton interactions. This parametric model is built on the proton-proton interaction model proposed by Kamae et al.; it includes the diffraction dissociation process, Feynman-scaling violation and the logarithmically rising inelastic proton-proton cross section. To improve fidelity to experimental data at lower energies, two baryon resonance excitation processes were added: one representing the Δ(1232) and the other multiple resonances with masses around 1600 MeV/c². The model predicts the power-law spectral index for all secondary particles to be about 0.05 lower in absolute value than that of the incident proton, and their inclusive cross sections to be larger than those predicted by previous models based on the Feynman-scaling hypothesis. The applications of the presented model in astrophysics are plentiful. It has been implemented into the Galprop code to calculate the contribution due to pion decays in the Galactic plane. The model has also been used to estimate the cosmic-ray flux in the Large Magellanic Cloud based on HI, CO and gamma-ray observations. The transverse momentum distributions enable calculations when the proton distribution is anisotropic. It is shown that the gamma-ray spectrum and flux due to a pencil beam of

  15. Modifications and Applications of the HERMES model: June - October 2010

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, J E

    2010-11-16

    The HERMES (High Explosive Response to MEchanical Stimulus) model has been developed to describe the response of energetic materials to low-velocity mechanical stimulus, referred to as HEVR (High Explosive Violent Response) or BVR (Burn to Violent Reaction). For tests performed with an HMX-based UK explosive, at sample sizes less than 200 g, the response was sometimes an explosion, but was not observed to be a detonation. The distinction between explosion and detonation can be important in assessing the effects of the HE response on nearby structures. A detonation proceeds as a supersonic shock wave supported by the release of energy that accompanies the transition from solid to high-pressure gas. For military high explosives, the shock wave velocity generally exceeds 7 km/s, and the pressure behind the shock wave generally exceeds 30 GPa. A kilogram of explosive would be converted to gas in 10 to 15 microseconds. An HEVR explosion proceeds much more slowly. Much of the explosive remains unreacted after the event. Peak pressures have been measured and calculated at less than 1 GPa, and the time for the portion of the solid that does react to form gas is about a millisecond. The explosion will, however, launch the confinement to a velocity that depends on the confinement mass, the mass of explosive converted, and the time required to form gas products. In many tests, the air blast signal and confinement velocity are comparable to those measured when an amount of explosive equal to that which is converted in an HEVR is deliberately detonated in the comparable confinement. The number of confinement fragments from an HEVR is much less than from the comparable detonation. The HERMES model comprises several submodels including a constitutive model for strength, a model for damage that includes the creation of porosity and surface area through fragmentation, an ignition model, an ignition front propagation model, and a model for burning after ignition. We have used HERMES

  17. Modelling the application of integrated photonic spectrographs to astronomy

    CERN Document Server

    Harris, R J

    2012-01-01

    One of the well-known problems of producing instruments for Extremely Large Telescopes is that their size (and hence cost) scales rapidly with telescope aperture. To try to break this relation, alternative new technologies have been proposed, such as the Integrated Photonic Spectrograph (IPS). Due to its diffraction-limited nature, the IPS is claimed to defeat the harsh scaling law that applies to conventional instruments. The problem with astronomical applications is that, unlike conventional photonics, they are not usually fed by diffraction-limited sources. This means that, in order to retain throughput and spatial information, the IPS will require multiple Arrayed Waveguide Gratings (AWGs) and a photonic lantern. We investigate the implications of these extra components for the size of the instrument. We also investigate the potential size advantage of using an IPS as opposed to conventional monolithic optics. To do this, we have constructed toy models of IPS and conventional image-sliced spectrographs to c...

  18. Autonomic Model for Self-Configuring C#.NET Applications

    CERN Document Server

    Bassil, Youssef

    2012-01-01

    With the advances in computational technologies over the last decade, large organizations have been investing in Information Technology to automate their internal processes, cut costs, and efficiently support their business projects. However, this comes at a price. Business requirements always change. Likewise, IT systems constantly evolve as developers release new versions of them, which requires endless manual administrative work to customize and configure them, especially if they are being used in different contexts, by different types of users, and for different requirements. Autonomic computing was conceived to provide an answer to these ever-changing requirements. Essentially, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate complex IT processes without human intervention. This paper proposes an autonomic model based on Venn diagrams and set theory for self-configuring C#.NET applications, namely the self-customization of their GUI, ev...

  19. Urban design and modeling: applications and perspectives on GIS

    Directory of Open Access Journals (Sweden)

    Roberto Mingucci

    2013-05-01

    In recent years, GIS systems have evolved because of technological advancements that make possible the simultaneous management of large amounts of information. Interesting aspects of their application concern site documentation at the territorial scale, taking advantage of CAD/BIM systems, which usually work at the building scale instead. In this sense, surveying with sophisticated equipment such as laser scanners or UAV drones quickly captures data that can then be used even through new "mobile" technologies operating in the context of web-based information systems. This paper aims to investigate uses and perspectives pertaining to geographic information technologies and to the analysis and design tools meant for modeling at different scales, referring to the results of research experiences conducted at the University of Bologna.

  20. Nanostructured energetic composites: synthesis, ignition/combustion modeling, and applications.

    Science.gov (United States)

    Zhou, Xiang; Torabi, Mohsen; Lu, Jian; Shen, Ruiqi; Zhang, Kaili

    2014-03-12

    Nanotechnology has stimulated revolutionary advances in many scientific and industrial fields, particularly in energetic materials. Powder mixing is the simplest and most traditional method to prepare nanoenergetic composites, and preliminary findings have shown that these composites perform more effectively than their micro- or macro-sized counterparts in terms of energy release, ignition, and combustion. Powder mixing technology represents only the minimum capability of nanotechnology to boost the development of energetic material research, and it has intrinsic limitations, namely, random distribution of fuel and oxidizer particles, inevitable fuel pre-oxidation, and non-intimate contact between reactants. As an alternative, nanostructured energetic composites can be prepared through a delicately designed process. These composites outperform powder-mixed nanocomposites in numerous ways; therefore, we comprehensively discuss the preparation strategies adopted for nanostructured energetic composites and the research achievements thus far in this review. The latest ignition and reaction models are briefly introduced. Finally, the broad promising applications of nanostructured energetic composites are highlighted.

  1. Application Issues of the Semi-Markov Reliability Model

    Directory of Open Access Journals (Sweden)

    Rudnicki Jacek

    2015-01-01

    Predicting the reliability of marine internal combustion engines, for instance, is of particular importance, as it makes it possible to predict their future reliability states based on information about past states. Correct reliability prediction is a complex process which consists in processing empirical results obtained from operating practice, complemented by analytical considerations. The process of technical state changes of each mechanical device is stochastic and continuous in states and time; hence the need to divide this infinite set of engine states into a finite number of subsets (classes) which can be clearly and permanently identified using the existing diagnosing system. Using the engine piston-crankshaft system as an example, the article presents a proposal for a mathematical model of reliability which, on the one hand, takes into account the random nature of the phenomena leading to damage and, at the same time, offers a degree of application flexibility and the resultant practical usability.
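
    To make the state-classification idea concrete, the sketch below uses a plain discrete-time Markov chain over three technical-state classes. This is an illustration only: the paper argues for a semi-Markov formulation, and all transition probabilities here are assumed:

```python
import numpy as np

# Three technical states: 0 fully operable, 1 degraded, 2 failed (absorbing).
# Transition probabilities per inspection interval are illustrative assumptions.
P = np.array([
    [0.95, 0.04, 0.01],   # from fully operable
    [0.00, 0.90, 0.10],   # from degraded
    [0.00, 0.00, 1.00],   # failed is absorbing
])

p = np.array([1.0, 0.0, 0.0])          # start in the fully operable state
for step in range(1, 11):
    p = p @ P                          # state distribution after `step` intervals
    print(f"t={step:2d}  P(operable)={p[0]:.3f}  P(failed)={p[2]:.3f}")
```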

  2. A Model of Cloud Based Application Environment for Software Testing

    CERN Document Server

    Vengattaraman, T; Baskaran, R

    2010-01-01

    Cloud computing is an emerging service computing platform designed for swift and dynamic delivery of assured computing resources. Cloud computing provides Service-Level Agreements (SLAs) that guarantee uptime availability, enabling convenient, on-demand network access to distributed and shared computing resources. Though the cloud computing paradigm holds considerable promise in the field of distributed computing, cloud platforms have not yet come to the attention of the majority of researchers and practitioners. More specifically, the researcher and practitioner community still has fragmented and imperfect knowledge of cloud computing principles and techniques. In this context, one of the primary motivations of the work presented in this paper is to reveal the versatile merits of the cloud computing paradigm; the objective of this work is accordingly to bring out the remarkable significance of cloud computing through an application environment. In this work, a cloud computing model for sof...

  3. Low Dimensional Semiconductor Structures Characterization, Modeling and Applications

    CERN Document Server

    Horing, Norman

    2013-01-01

    Starting with the first transistor in 1947, the world has experienced a technological revolution which has permeated most aspects of modern life, particularly over the last generation. Yet another such revolution looms before us with the newly developed capability to control matter on the nanometer scale. A truly extraordinary research effort, by scientists, engineers and technologists of all disciplines, in nations large and small throughout the world, is directed and vigorously pressed to develop a full understanding of the properties of matter at the nanoscale and its possible applications, and to bring to fruition the promise of nanostructures to introduce a new generation of electronic and optical devices. The physics of low dimensional semiconductor structures, including heterostructures, superlattices, quantum wells, wires and dots, is reviewed, and their modeling is discussed in detail. The truly exceptional material, graphene, is reviewed; its functionalization and Van der Waals interactions are included h...

  4. Dynamic behaviours of mix-game model and its application

    Institute of Scientific and Technical Information of China (English)

    Gou Cheng-Ling

    2006-01-01

    In this paper a minority game (MG) is modified by adding to it some agents who play a majority game. Such a game is referred to as a mix-game. The highlight of this model is that the two groups of agents in the mix-game have different bounded abilities to deal with historical information and to count their own performance. Through simulations, it is found that the local volatilities change greatly when some agents who play the majority game are added to the MG, and that the change of local volatilities depends strongly on the combination of historical memories of the two groups. Furthermore, the underlying mechanisms for this finding are analysed. An example application of the mix-game model is also given.
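
    A minimal mix-game simulation might look like the sketch below. This is my own toy implementation: the parameters, scoring and memory handling are illustrative and simpler than the paper's setup, in which the two groups also differ in memory length:

```python
import numpy as np

rng = np.random.default_rng(42)
N1, N2, m, T = 51, 20, 3, 500   # minority players, majority players, memory, steps
n_strats = 2

# Each agent holds `n_strats` random strategies: a map from 2^m histories to {0,1}.
strat1 = rng.integers(0, 2, (N1, n_strats, 2**m))
strat2 = rng.integers(0, 2, (N2, n_strats, 2**m))
score1 = np.zeros((N1, n_strats))
score2 = np.zeros((N2, n_strats))
history = int(rng.integers(0, 2**m))
attendance = []

for t in range(T):
    a1 = strat1[np.arange(N1), score1.argmax(axis=1), history]
    a2 = strat2[np.arange(N2), score2.argmax(axis=1), history]
    buyers = a1.sum() + a2.sum()
    outcome = int(buyers > (N1 + N2) / 2)      # 1 if buyers form the majority
    # Minority players score by avoiding the majority; majority players by joining it.
    score1 += (strat1[:, :, history] != outcome)
    score2 += (strat2[:, :, history] == outcome)
    attendance.append(buyers)
    history = ((history << 1) | outcome) % (2**m)

print("volatility of attendance:", np.var(attendance))
```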

  5. Application and improvement of Raupach's shear stress partitioning model

    Science.gov (United States)

    Walter, B. A.; Lehning, M.; Gromke, C.

    2012-12-01

    Aeolian processes such as the entrainment, transport and redeposition of sand, soil or snow are able to significantly reshape the earth's surface. In times of increasing desertification and land degradation, often driven by wind erosion, investigations of aeolian processes become more and more important in the environmental sciences. The reliable prediction of the sheltering effect of vegetation canopies against sediment erosion, for instance, is a clear practical application of such investigations, used to identify suitable and sustainable countermeasures against wind erosion. This study presents an application and improvement of a theoretical model presented by Raupach (Boundary-Layer Meteorology, 1992, Vol. 60, 375-395 and Journal of Geophysical Research, 1993, Vol. 98, 3023-3029) which allows the sheltering effect of vegetation against sediment erosion to be quantified. The model predicts the shear stress ratios τS'/τ and τS''/τ. Here, τS is the part of the total shear stress τ that acts on the ground beneath the plants. The spatial peak τS'' of the surface shear stress is responsible for the onset of particle entrainment, whereas the spatial mean τS' can be used to quantify particle mass fluxes. The precise and accurate prediction of these quantities is essential when modeling wind erosion. Measurements of the surface shear stress distributions τS(x,y) on the ground beneath live vegetation canopies (plant species: Lolium perenne) were performed in a controlled wind tunnel environment to determine the model parameters and to evaluate the model performance. Rigid, non-porous wooden blocks were additionally tested in place of the plants for the purpose of comparison, since previous wind tunnel studies on shear stress partitioning used exclusively artificial plant imitations. The model constant c, which is needed to determine the total stress τ for a canopy of interest and which has remained rather unspecified to date, was found to be c ≈ 0
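
    For orientation, the partition is commonly written in terms of the roughness density λ, the drag-coefficient ratio β = C_R/C_S, the basal-to-frontal area ratio σ, and an empirical peak-stress parameter m. The sketch below uses this commonly cited form with placeholder parameter values; readers should consult the 1992/1993 papers for the exact definitions:

```python
import numpy as np

def stress_ratios(lam, beta=100.0, sigma=1.0, m=0.5):
    """
    Raupach-style shear stress partition (commonly cited form; values assumed).
    lam   : roughness density (frontal area index) of the canopy
    beta  : element-to-surface drag coefficient ratio, C_R / C_S
    sigma : basal-to-frontal area ratio of the roughness elements
    m     : empirical factor relating mean to peak surface stress
    Returns (tauS'/tau, tauS''/tau): spatial mean and peak surface stress ratios.
    """
    mean_ratio = 1.0 / ((1.0 - sigma * lam) * (1.0 + beta * lam))
    peak_ratio = 1.0 / ((1.0 - m * sigma * lam) * (1.0 + m * beta * lam))
    return mean_ratio, peak_ratio

for lam in [0.01, 0.05, 0.1]:
    print(lam, stress_ratios(lam))   # sheltering grows quickly with lam
```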

  6. The modelling of a capacitive microsensor for biosensing applications

    Science.gov (United States)

    Bezuidenhout, P. H.; Schoeman, J.; Joubert, T. H.

    2014-06-01

    Microsensing is a leading field in technology due to its wide application potential, not only in bio-engineering but in other fields as well. Microsensors have potentially low-cost manufacturing processes, while a single device type can have various uses, which consequently helps with the ever-growing need to provide better health conditions in rural parts of the world. Capacitive biosensors detect a change in the permittivity (or dielectric constant) of a biological material, usually within a parallel plate capacitor structure, which is often implemented with integrated electrodes of an inert metal such as gold or platinum on a microfluidic substrate, typically one with a high dielectric constant. Parasitic capacitance components exist in these capacitive sensors and have a large influence on the capacitive measurement; therefore, they should be considered in the development of sensitive and accurate sensing devices. An analytical model of a capacitive sensor device is discussed which accounts for these parasitic factors. The model is validated with a laboratory device of fixed geometry, consisting of two parallel gold electrodes on an alumina (Al2O3) substrate mounted on a glass microscope slide, with a windowed cover layer of poly-dimethyl-siloxane (PDMS). The thickness of the gold layer is 1 μm and the electrode spacing is 300 μm. The alumina substrate has a thickness of 200 μm, and its high relative permittivity of 11.5 is expected to be a significantly contributing factor to the total device capacitance. The 155 μm thick PDMS layer is also expected to contribute substantially to the total device capacitance, since the relative permittivity of PDMS is 2.7. The wideband impedance analyser evaluation of the laboratory device gives a measurement result of 2 pF, which coincides with the model results, while the handheld RLC meter readout of 4 pF at a frequency of 10 kHz is acceptable within the measurement accuracy of the instrument. This validated model will
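
    For a feel for the magnitudes involved, the sketch below combines an ideal parallel-plate term with a lumped parasitic contribution. This is a simplification of the coplanar-electrode geometry above, and the numbers are my own placeholders, not the paper's analytical model:

```python
EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def plate_capacitance(eps_r, area_m2, gap_m):
    """C = eps0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

# Illustrative electrode pair: 1 mm x 1 mm plates, 300 um apart, PDMS (eps_r 2.7).
c_ideal = plate_capacitance(2.7, 1e-3 * 1e-3, 300e-6)
c_parasitic = 1.9e-12   # assumed substrate/fringing contribution [F]
print(f"ideal: {c_ideal*1e15:.1f} fF, total: {(c_ideal + c_parasitic)*1e12:.2f} pF")
```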

  7. CFD Modeling in Development of Renewable Energy Applications

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Contents:
    Chapter 1: A Multi-fluid Model to Simulate Heat and Mass Transfer in a PEM Fuel Cell. Torsten Berning, Madeleine Odgaard, Søren K. Kær.
    Chapter 2: CFD Modeling of a Planar Solid Oxide Fuel Cell (SOFC) for Clean Power Generation. Meng Ni.
    Chapter 3: Hydrodynamics and Hydropower in the New Paradigm for a Sustainable Engineering. Helena M. Ramos, Petra A. López-Jiménez.
    Chapter 4: Opportunities for CFD in Ejector Solar Cooling. M. Dennis.
    Chapter 5: Three Dimensional Modelling of Flow Field Around a Horizontal Axis Wind Turbine (HAWT). Chaouki Ghenai, Armen Sargsyan, Isam Janajreh.
    Chapter 6: Scaling Rules for Hydrodynamics and Heat Transfer in Jetting Fluidized-Bed Biomass Gasifiers. K. Zhang, J. Chang, P. Pei, H. Chen, Y. Yang.
    Chapter 7: Investigation of Low Reynolds Number Unsteady Flow around Airfoils in Pitching, Plunging and Flapping Motions. M.R. Amiralaei, H. Alighanbari, S.M. Hashemi.
    Chapter 8: Justification of Computational Fluid Dynamics Simulation for Flat Plate Solar Energy Collector. Mohamed Selmi, Mohammed J. Al-Khawaja, Abdulhamid Marafia.
    Chapter 9: Comparative Performance of a 3-Bladed Airfoil Chord H-Darrieus and a 3-Bladed Straight Chord H-Darrieus Turbines using CFD. R. Gupta, Agnimitra Biswas.
    Chapter 10: Computational Fluid Dynamics for PEM Fuel Cell Modelling. A. Iranzo, F. Rosa.
    Chapter 11: Analysis of the Performance of PEM Fuel Cells: Tutorial of Major Functional and Constructive Characteristics using CFD Analysis. P.J. Costa Branco, J.A. Dente.
    Chapter 12: Application of Techniques of Computational Fluid Dynamics in the Design of Bipolar Plates for PEM Fuel Cells. A.P. Manso, F.F. Marzo, J. Barranco, M. Garmendia Mujika.

  8. X-ray ablation measurements and modeling for ICF applications

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, A.T.

    1996-09-01

    X-ray ablation of material from the first wall and other components of an ICF (Inertial Confinement Fusion) chamber is a major threat to the laser final optics. Material condensing on these optics after a shot may cause damage with subsequent laser shots. To ensure the successful operation of the ICF facility, removal rates must be predicted accurately. The goal for this dissertation is to develop an experimentally validated x-ray response model, with particular application to the National Ignition Facility (NIF). Accurate knowledge of the x-ray and debris emissions from ICF targets is a critical first step in the process of predicting the performance of the target chamber system. A number of 1-D numerical simulations of NIF targets have been run to characterize target output in terms of energy, angular distribution, spectrum, and pulse shape. Scaling of output characteristics with variations of both target yield and hohlraum wall thickness is also described. Experiments have been conducted at the Nova laser on the effects of relevant x-ray fluences on various materials. The response was diagnosed using post-shot examinations of the surfaces with scanning electron microscope and atomic force microscope instruments. Judgments were made about the dominant removal mechanisms for each material. Measurements of removal depths were made to provide data for the modeling. The finite difference ablation code developed here (ABLATOR) combines the thermomechanical response of materials to x-rays with models of various removal mechanisms. The former aspect refers to energy deposition in such small characteristic depths (on the order of a micron) that thermal conduction and hydrodynamic motion are significant effects on the nanosecond time scale. The material removal models use the resulting time histories of temperature and pressure profiles, along with ancillary local conditions, to predict rates of surface vaporization and the onset of conditions that would lead to spallation.
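
    A heavily simplified sketch of the kind of response estimate involved is shown below: exponential energy deposition plus an enthalpy-based removal criterion. These assumptions are mine and are far cruder than the ABLATOR code described above:

```python
import math

def ablation_depth_cm(fluence, mu, rho, h_rem):
    """Toy removal-depth estimate.
    fluence : x-ray fluence [J/cm^2], deposited as dose(z) = mu*F*exp(-mu*z)
    mu      : attenuation coefficient [1/cm]
    rho     : density [g/cm^3]; h_rem: assumed removal enthalpy [J/g]
    Material is removed wherever the volumetric dose exceeds rho*h_rem."""
    dose0 = mu * fluence                  # surface dose [J/cm^3]
    e_crit = rho * h_rem                  # removal threshold [J/cm^3]
    if dose0 <= e_crit:
        return 0.0                        # below threshold: no removal
    return math.log(dose0 / e_crit) / mu  # depth where dose falls to threshold

# e.g. 10 J/cm^2 of soft x rays, mu = 1e4 1/cm, aluminium-like rho = 2.7 g/cm^3,
# assumed removal enthalpy 10 kJ/g -> roughly a micron of removal:
print(ablation_depth_cm(10.0, 1e4, 2.7, 1e4), "cm")
```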

  9. Enhancements to FDTD modeling for optical metrology applications

    Science.gov (United States)

    Salski, Bartlomiej; Celuch, Malgorzata; Gwarek, Wojciech

    2007-06-01

    This paper presents the Finite Difference Time Domain (FDTD) method - based on discretised Maxwell curl equations and widely used in microwave circuit design - as a promising tool for new optical metrology purposes. We focus on periodic FDTD formulations for scattering problems. The interest in efficient full-wave modelling of periodic structures has arisen due to their increasing applications as slow-wave transmission lines, photonic crystals, and metamaterials. Recently, new efforts have been made to incorporate the FDTD algorithms into the scatterometry overlay technology (SCOL) toolkit. In SCOL, multilayered grating targets on silicon wafers are illuminated with polarised light at a particular angle of incidence; the reflected signal of the 0th diffraction order is processed to extract information about misalignment between grating layers. Since the illumination spot size typically covers tens or even hundreds of grating periods, direct 3D FDTD modelling of such an electrically large problem needs long computing times. The periodic FDTD algorithm discussed herein, built upon the Floquet theorem, allows reduction of the modelling problem to one or just a few periods. As a consequence, it substantially speeds up the simulation. The incident wave is modelled as a plane wave. The reflected wave is extracted via a near-to-far (NTF) transformation as in antenna analysis. We cross-calibrate the FDTD algorithm against other numerical techniques better established in optical metrology, like Rigorous Coupled Wave Analysis (RCWA). For a benchmark of a multilayered rectangular grating composition illuminated with light within the 500 to 700 nm spectrum, we show that the FDTD and RCWA results for the 0th diffraction order reflection coefficient are in excellent agreement. The FDTD approach is more flexible as it further allows quantitative characterisation of non-rectangular periodic structures, higher-order diffraction rays, and periodicity violation. This work was done in the
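
    The core leapfrog update behind any FDTD solver can be shown in one dimension. The sketch below (vacuum, normalized units) is illustrative only; it omits the periodic Floquet boundaries and the NTF transformation of the algorithm described above:

```python
import numpy as np

nz, nt = 400, 800
ez = np.zeros(nz)          # electric field on the primary grid
hy = np.zeros(nz - 1)      # magnetic field on the staggered (half-step) grid
S = 0.5                    # Courant number (stable for S <= 1 in 1D)

for t in range(nt):
    hy += S * (ez[1:] - ez[:-1])             # update H from the curl of E
    ez[1:-1] += S * (hy[1:] - hy[:-1])       # update E from the curl of H
    ez[50] += np.exp(-((t - 60) / 20.0)**2)  # soft Gaussian-pulse source

print("peak |Ez| after propagation:", np.abs(ez).max())
```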

  10. Assembly interruptability robustness model with applications to Space Station Freedom

    Science.gov (United States)

    Wade, James William

    1991-02-01

    Interruptability robustness of a construction project, together with its assembly sequence, may be measured by calculating the probability of its survival and successful completion in the face of unplanned interruptions of the assembly process. Such an interruption may jeopardize the survival of the structure being assembled, the survival of the support equipment, and/or the safety of the members of the construction crew, depending upon the stage in the assembly sequence at which the interruption occurs. The interruption may be due to a number of factors such as machinery breakdowns, environmental damage, worker emergency illness or injury, etc. Each source of interruption has a probability of occurring, and adds an associated probability of loss, schedule delay, and cost to the project. Several options may exist for reducing the consequences of an interruption at a given point in the assembly sequence, including altering the assembly sequence, adding extra components or equipment as interruptability 'insurance', increasing the capability of support facilities, etc. Each option may provide a different overall performance of the project as it relates to success, assembly time, and project cost. The Interruptability Robustness Model was devised to provide a method which allows the overall interruptability robustness of a project design and its assembly sequence to be quantified. In addition, it identifies the susceptibility to interruptions at all points within the assembly sequence. The model is applied to the present problem of quantifying and improving interruptability robustness during the construction of Space Station Freedom. This application was used as a touchstone for devising the Interruptability Robustness Model. However, the model may be utilized to assist in the analysis of interruptability robustness for other space-related construction projects such as the lunar base and orbital assembly of the manned Mars

  11. FEM application for modelling of PVD coatings properties

    Directory of Open Access Journals (Sweden)

    A. Śliwa

    2010-07-01

    Purpose: The general topic of this paper is the problem of determining the internal stresses of composite tool materials with the use of the finite element method (FEM). The chemical composition of the investigated materials' core corresponds to the M2 high-speed steel, reinforced with WC and TiC type hard carbide phases whose portions grow in the outward direction from the core to the surface. The material composed in this way was sintered, heat treated and deposited appropriately with (Ti,Al)N or Ti(C,N) coatings. Design/methodology/approach: Modelling of stresses was performed with the finite element method in the ANSYS environment, and the experimental values of stresses were determined from X-ray diffraction patterns. The computer simulation results were compared with the experimental results. Findings: Computer-aided numerical analysis makes it possible to select optimal parameters for coating deposition in the PVD process and to determine the stresses in coatings, employing the finite element method in the ANSYS software. Research limitations/implications: It was confirmed that using the finite element method to model the stresses occurring in advanced composite materials can be a way of reducing investigation costs. To this end, a simplified model of the composite materials was used, divided into zones with established physical and mechanical properties. Results reached in this way are satisfactory and differ only slightly from results reached experimentally. Originality/value: Computer simulation based on the finite element method is now very popular and allows a better understanding of the interdependence between process parameters and the choice of an optimal solution. Ever faster computing machines and an expanding range of software make possible the creation of more precise models and more adequate ones to

  12. Mesoscale meteorological modelling for Hong Kong-application of the MC2 model

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper describes the set-up and application of a non-hydrostatic Canadian meteorological numerical model (MC2) for mesoscale simulations of wind fields and other meteorological parameters over the complex terrain of Hong Kong. Results of the simulations of one case are presented and compared with the results of radiosonde and aircraft measurements. The model proved capable of predicting high-resolution, three-dimensional fields of wind and other meteorological parameters within the Hong Kong territory, using reasonable computer time and memory resources.

  13. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification.

    Science.gov (United States)

    Sager, Jennifer E; Yu, Jingjing; Ragueneau-Majlessi, Isabelle; Isoherranen, Nina

    2015-11-01

    Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms "PBPK" and "physiologically based pharmacokinetic model" to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines.
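
    For contrast with whole-body PBPK models, the sketch below shows the simplest compartmental building block: a one-compartment model with first-order absorption. The parameter values are illustrative, not drawn from the review:

```python
import numpy as np
from scipy.integrate import solve_ivp

ka, CL, V, dose = 1.0, 5.0, 40.0, 100.0   # 1/h, L/h, L, mg (assumed values)

def rhs(t, y):
    a_gut, c = y                           # amount in gut [mg], plasma conc [mg/L]
    return [-ka * a_gut,                   # first-order absorption from the gut
            ka * a_gut / V - (CL / V) * c] # input to plasma minus clearance

sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0], dense_output=True)
t = np.linspace(0, 24, 7)
print(np.round(sol.sol(t)[1], 3))          # plasma concentration over 24 h
```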

  14. Modeling survival: application of the Andersen-Gill model to Yellowstone grizzly bears

    Science.gov (United States)

    Johnson, Christopher J.; Boyce, Mark S.; Schwartz, Charles C.; Haroldson, Mark A.

    2004-01-01

     Wildlife ecologists often use the Kaplan-Meier procedure or Cox proportional hazards model to estimate survival rates, distributions, and magnitude of risk factors. The Andersen-Gill formulation (A-G) of the Cox proportional hazards model has seen limited application to mark-resight data but has a number of advantages, including the ability to accommodate left-censored data, time-varying covariates, multiple events, and discontinuous intervals of risks. We introduce the A-G model including structure of data, interpretation of results, and assessment of assumptions. We then apply the model to 22 years of radiotelemetry data for grizzly bears (Ursus arctos) of the Greater Yellowstone Grizzly Bear Recovery Zone in Montana, Idaho, and Wyoming, USA. We used Akaike's Information Criterion (AICc) and multi-model inference to assess a number of potentially useful predictive models relative to explanatory covariates for demography, human disturbance, and habitat. Using the most parsimonious models, we generated risk ratios, hypothetical survival curves, and a map of the spatial distribution of high-risk areas across the recovery zone. Our results were in agreement with past studies of mortality factors for Yellowstone grizzly bears. Holding other covariates constant, mortality was highest for bears that were subjected to repeated management actions and inhabited areas with high road densities outside Yellowstone National Park. Hazard models developed with covariates descriptive of foraging habitats were not the most parsimonious, but they suggested that high-elevation areas offered lower risks of mortality when compared to agricultural areas.
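
    A sketch of how an Andersen-Gill-style counting-process model can be fitted with the open-source lifelines package follows. The intervals below are synthetic and toy-sized; the column names and values are mine, not the study's telemetry data, and real analyses need far more records:

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Each row is a counting-process interval (start, stop] for one animal, with
# time-varying covariate values and an event flag at the end of the interval.
df = pd.DataFrame({
    "id":        [1, 1, 2, 2, 3, 4],
    "start":     [0, 4, 0, 6, 0, 0],
    "stop":      [4, 9, 6, 12, 7, 10],
    "road_dens": [0.2, 0.8, 0.1, 0.85, 0.9, 0.5],   # time-varying covariate
    "mgmt_acts": [0, 2, 1, 2, 1, 0],                # repeated management actions
    "event":     [0, 1, 0, 0, 1, 0],                # mortality in this interval?
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", start_col="start", stop_col="stop", event_col="event")
ctv.print_summary()   # hazard ratios: exp(coef) per covariate
```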

  15. DAVE: A plug and play model for distributed multimedia application development

    Energy Technology Data Exchange (ETDEWEB)

    Mines, R.F.; Friesen, J.A.; Yang, C.L.

    1994-07-01

    This paper presents a model being used for the development of distributed multimedia applications. The Distributed Audio Video Environment (DAVE) was designed to support the development of a wide range of distributed applications. The implementation of this model is described. DAVE is unique in that it combines a simple "plug and play" programming interface, supports both centralized and fully distributed applications, provides device and media extensibility, promotes object reusability, and supports interoperability and network independence. This model enables application developers to easily develop distributed multimedia applications and create reusable multimedia toolkits. DAVE was designed for developing applications such as video conferencing, media archival, remote process control, and distance learning.

  16. Political economy models and agricultural policy formation: empirical applicability and relevance for the CAP.

    OpenAIRE

    Zee, van der, J.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy formation in industrialised market economies. Part II (chapters 8-11) focuses on the empirical applicability of political economy models to agricultural policy formation and agricultural policy developments in the...

  17. A GUIDED SWAT MODEL APPLICATION ON SEDIMENT YIELD MODELING IN PANGANI RIVER BASIN: LESSONS LEARNT

    Directory of Open Access Journals (Sweden)

    Preksedis M. Ndomba

    2008-01-01

    The overall objective of this paper is to report on the lessons learnt from applying the Soil and Water Assessment Tool (SWAT) in a well guided sediment yield modelling study. The study area is the upstream part of the Pangani River Basin (PRB), the Nyumba Ya Mungu (NYM) reservoir catchment, located in the north-eastern part of Tanzania. It should be noted that previous modeling exercises in the region applied SWAT with the pre-assumption that inter-rill or sheet erosion was the dominant erosion type. In contrast, in this study the SWAT model application was guided by results of the analysis of high temporal resolution sediment flow data and hydro-meteorological data. The runoff component of the SWAT model was calibrated on six years (1977-1982) of historical daily streamflow data. The sediment component of the model was calibrated using one year (1977-1978) of daily sediment loads estimated from a rating curve based on a one-hydrological-year sampling programme (between March and November, 2005). Long-term simulation results covering 37 years (1969-2005) were validated against downstream NYM reservoir sediment accumulation information. The SWAT model captured 56 percent of the variance (CE) and underestimated the observed daily sediment loads by 0.9 percent according to the Total Mass Control (TMC) performance indices during a normal wet hydrological year, i.e. between November 1, 1977 and October 31, 1978, the calibration period. The SWAT model predicted the long-term sediment catchment yield satisfactorily, with a relative error of 2.6 percent. The model also identified erosion sources spatially and replicated some erosion processes as determined in other studies and field observations in the PRB. These results suggest that for catchments where sheet erosion is dominant the SWAT model may substitute for the sediment rating curve. However, the SWAT model could not capture the dynamics of sediment load delivery to the catchment outlet in some seasons.
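
    The two performance indices quoted above can be computed as follows. These are my implementations of the commonly used definitions; the paper's exact formulas may differ in detail:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Coefficient of efficiency: CE = 1 - SS(obs-sim) / SS(obs-mean(obs))."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def total_mass_error(obs, sim):
    """Relative error of total simulated load versus total observed load [%]."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

obs = [12.0, 30.5, 8.2, 44.0, 19.3]   # illustrative daily sediment loads
sim = [10.8, 28.9, 9.5, 41.2, 21.0]
print(nash_sutcliffe(obs, sim), total_mass_error(obs, sim))
```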

  19. Application of a theoretical model to evaluate COPD disease management

    Directory of Open Access Journals (Sweden)

    Asin Javier D

    2010-03-01

    Background: Disease management programmes are heterogeneous in nature and often lack a theoretical basis. An evaluation model has been developed in which theoretically driven inquiries link disease management interventions to outcomes. The aim of this study is to methodically evaluate the impact of a disease management programme for patients with chronic obstructive pulmonary disease (COPD) on process, intermediate and final outcomes of care in a general practice setting. Methods: A quasi-experimental study was performed with 12 months of follow-up of 189 COPD patients in primary care in the Netherlands. The programme included patient education, protocolised assessment and treatment of COPD, structural follow-up, and coordination by practice nurses at 3, 6 and 12 months. Data on intermediate outcomes (knowledge, psychosocial mediators, self-efficacy and behaviour) and final outcomes (dyspnoea, quality of life as measured by the CRQ and CCQ, and patient experiences) were obtained from questionnaires and electronic registries. Results: Implementation of the programme was associated with significant improvements in dyspnoea (p ...). Conclusions: The application of a theory-driven model enhances the design and evaluation of disease management programmes aimed at improving health outcomes. This study supports the notion that a theoretical approach strengthens the evaluation designs of complex interventions. Moreover, it provides prudent evidence that the implementation of COPD disease management programmes can positively influence outcomes of care.

  20. Mathematical models for foam-diverted acidizing and their applications

    Institute of Scientific and Technical Information of China (English)

    Li Songyan; Li Zhaomin; Lin Riyi

    2008-01-01

    Foam diversion can effectively solve the problem of uneven distribution of acid in layers of different permeabilities during matrix acidizing. Based on gas trapping theory and the mass conservation equation, mathematical models were developed for foam-diverted acidizing, which can be achieved by a foam slug followed by acid injection or by continuous injection of foamed acid. A design method for foam-diverted acidizing is also given. The mathematical models were solved by a computer program. Computed results show that the total formation skin factor, wellhead pressure and bottomhole pressure increase with foam injection but decrease with acid injection. The volume flow rate in a high-permeability layer decreases, while that in a low-permeability layer increases, thus diverting acid to the low-permeability layer from the high-permeability layer. Under the same formation conditions, the foamed acid treatment takes longer, and wellhead and bottomhole pressures are higher. Field application shows that a foam slug can effectively block high-permeability layers and improve the intake profile noticeably.
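
    The diversion effect can be illustrated with textbook radial Darcy inflow, treating the foam bank as an added layer skin. This is a sketch with assumed values, not the paper's foam model:

```python
import numpy as np

def layer_fractions(k_md, h_m, skin, dp_pa=5e6, mu_pa_s=1e-3, re_m=100.0, rw_m=0.1):
    """Fraction of total injection entering each layer, from radial Darcy flow:
    q_i = 2*pi*k_i*h_i*dp / (mu*(ln(re/rw) + s_i)), normalized over layers."""
    k = np.asarray(k_md, float) * 9.869e-16    # millidarcy -> m^2
    q = (2 * np.pi * k * np.asarray(h_m, float) * dp_pa
         / (mu_pa_s * (np.log(re_m / rw_m) + np.asarray(skin, float))))
    return q / q.sum()

# Two layers, 500 md and 50 md. Without foam, most acid enters the high-k layer;
# an assumed foam skin of +20 on the high-k layer shifts acid to the low-k layer.
print(layer_fractions([500, 50], [5, 5], skin=[0, 0]))
print(layer_fractions([500, 50], [5, 5], skin=[20, 0]))
```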