WorldWideScience

Sample records for anova models application

  1. Model comparison in ANOVA.

    Science.gov (United States)

    Rouder, Jeffrey N; Engelhardt, Christopher R; McCabe, Simon; Morey, Richard D

    2016-12-01

    Analysis of variance (ANOVA), the workhorse analysis of experimental designs, consists of F-tests of main effects and interactions. Yet testing, including traditional ANOVA, has recently been critiqued on a number of theoretical and practical grounds. In light of these critiques, model comparison and model selection serve as an attractive alternative. Model comparison differs from testing in that one can support a null or nested model vis-à-vis a more general alternative by penalizing more flexible models. We argue that this ability to support simpler models allows for more nuanced theoretical conclusions than traditional ANOVA F-tests provide. We provide a model comparison strategy and show how ANOVA models may be reparameterized to better address substantive questions in data analysis.
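
    The Bayes-factor machinery of the paper is not reproduced here, but the core idea, supporting a simpler model by penalizing more flexible ones, can be sketched with BIC comparisons (a crude stand-in I am substituting for the authors' Bayes factors; the data and effect sizes below are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical 2x2 dataset; names and effect sizes are made up for illustration.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "a": np.repeat(["a1", "a2"], 40),
    "b": np.tile(np.repeat(["b1", "b2"], 20), 2),
})
df["y"] = rng.normal(size=80) + 0.5 * (df["a"] == "a2")

# Fit a nested hierarchy of ANOVA models and compare penalized fits.
formulas = {"null": "y ~ 1", "A": "y ~ a", "A+B": "y ~ a + b", "A*B": "y ~ a * b"}
for name, f in formulas.items():
    fit = smf.ols(f, data=df).fit()
    print(f"{name:5s} BIC = {fit.bic:8.2f}")
# Lower BIC wins; exp((BIC_i - BIC_j) / 2) crudely approximates the Bayes
# factor of model j over model i, so a null model can be supported.
```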

  2. A Bayesian subgroup analysis using collections of ANOVA models.

    Science.gov (United States)

    Liu, Jinzhong; Sivaganesan, Siva; Laud, Purushottam W; Müller, Peter

    2017-03-20

    We develop a Bayesian approach to subgroup analysis using ANOVA models with multiple covariates, extending an earlier work. We assume a two-arm clinical trial with a normally distributed response variable. We also assume that the covariates for subgroup finding are categorical and a priori specified, and that parsimonious, easy-to-interpret subgroups are preferable. We represent the subgroups of interest by a collection of models and use a model selection approach to find subgroups with heterogeneous effects. We develop suitable priors for the model space and use an objective Bayesian approach that yields multiplicity-adjusted posterior probabilities for the models. We use a structured algorithm based on the posterior probabilities of the models to determine which subgroup effects to report. Frequentist operating characteristics of the approach are evaluated using simulation. While our approach is applicable in more general cases, we mainly focus on the 2 × 2 case of two covariates each at two levels for ease of presentation. The approach is illustrated using a real data example.

  3. Factor selection and structural identification in the interaction ANOVA model.

    Science.gov (United States)

    Post, Justin B; Bondell, Howard D

    2013-03-01

    When faced with categorical predictors and a continuous response, the objective of an analysis often consists of two tasks: finding which factors are important and determining which levels of the factors differ significantly from one another. Often, these tasks are done separately using Analysis of Variance (ANOVA) followed by a post hoc hypothesis testing procedure such as Tukey's Honestly Significant Difference test. When interactions between factors are included in the model, collapsing the levels of a factor becomes a more difficult problem. When testing for differences between two levels of a factor, claiming no difference would refer not only to equality of main effects, but also to equality of each interaction involving those levels. This structure between the main effects and interactions in a model is similar to the idea of heredity used in regression models. This article introduces a new method for accomplishing both of the common analysis tasks simultaneously in an interaction model while also adhering to the heredity-type constraint on the model. An appropriate penalization is constructed that encourages levels of factors to collapse and entire factors to be set to zero. It is shown that the procedure has the oracle property, implying that asymptotically it performs as well as if the exact structure were known beforehand. We also discuss the application to estimating interactions in the unreplicated case. Simulation studies show the procedure outperforms post hoc hypothesis testing procedures as well as similar methods that do not include a structural constraint. The method is also illustrated using a real data example.

  4. Smoothing spline ANOVA frailty model for recurrent event data.

    Science.gov (United States)

    Du, Pang; Jiang, Yihua; Wang, Yuedong

    2011-12-01

    Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and the parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter updates and/or increasing the MCMC sample size across iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data.

  5. Analysis of variance (ANOVA) models in lower extremity wounds.

    Science.gov (United States)

    Reed, James F

    2003-06-01

    Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control and the 2 treatments against each other using 3 Student t tests (t test). If we were to compare 4 treatment groups, then we would need to use 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so will the likelihood of finding a difference between any pair of groups simply by chance when no real difference exists (by definition, a Type I error). If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14. As the number of multiple t tests increases, the experiment-wise error rate increases rather rapidly. The solution to the experiment-wise error rate problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
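
    The .14 figure follows from the independence approximation 1 − (1 − α)^k; a minimal sketch (simulated data, not the author's SPSS/SAS code) reproduces it and runs the single one-way ANOVA that replaces the multiple t tests:

```python
import numpy as np
from scipy import stats

# Familywise error rate for k independent tests at alpha = .05:
# 1 - (1 - alpha)**k gives ~.14 for 3 tests and ~.26 for 6 tests.
alpha = 0.05
for k in (3, 6):
    print(f"{k} t tests -> familywise error rate {1 - (1 - alpha) ** k:.2f}")

# A single one-way ANOVA across all groups avoids the inflation
# (simulated data; group means and sizes are arbitrary).
rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 15)
treat1 = rng.normal(0.4, 1.0, 15)
treat2 = rng.normal(0.8, 1.0, 15)
f, p = stats.f_oneway(control, treat1, treat2)
print(f"F = {f:.2f}, p = {p:.4f}")
```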

  6. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    Science.gov (United States)

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry, including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design are discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.
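
    As one concrete instance of the designs listed, here is a sketch (invented data and statsmodels, not the clinical examples of the article) of a two-way ANOVA in randomised blocks:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical randomized-block design: 4 treatments applied within each of
# 6 blocks (e.g., subjects); all names and effect sizes are invented.
rng = np.random.default_rng(2)
treatments, blocks = list("ABCD"), [f"s{i}" for i in range(6)]
df = pd.DataFrame({"treatment": np.tile(treatments, len(blocks)),
                   "block": np.repeat(blocks, len(treatments))})
treat_eff = {"A": 0.0, "B": 0.3, "C": 0.6, "D": 0.9}
block_eff = {b: rng.normal(0, 1) for b in blocks}
df["y"] = (df["treatment"].map(treat_eff) + df["block"].map(block_eff)
           + rng.normal(0, 0.5, len(df)))

# Two-way ANOVA in randomized blocks: the block term absorbs
# between-block variation before the treatment effect is tested.
fit = smf.ols("y ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
```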

  7. Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA

    Science.gov (United States)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-11-01

    This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE), and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach has been illustrated with two examples. In the first example, a stochastic ordinary differential equation is considered; this example illustrates the performance of the proposed approach as the nature of the random variable changes. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a microchannel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.

  8. Biomarker Detection in Association Studies: Modeling SNPs Simultaneously via Logistic ANOVA

    KAUST Repository

    Jung, Yoonsuh

    2014-10-02

    In genome-wide association studies, the primary task is to detect biomarkers in the form of Single Nucleotide Polymorphisms (SNPs) that have nontrivial associations with a disease phenotype and some other important clinical/environmental factors. However, the extremely large number of SNPs compared to the sample size inhibits application of classical methods such as multiple logistic regression. Currently the most commonly used approach is still to analyze one SNP at a time. In this paper, we propose to consider the genotypes of the SNPs simultaneously via a logistic analysis of variance (ANOVA) model, which expresses the logit-transformed mean of SNP genotypes as the summation of the SNP effects, effects of the disease phenotype and/or other clinical variables, and the interaction effects. We use a reduced-rank representation of the interaction-effect matrix for dimensionality reduction, and employ the L1-penalty in a penalized likelihood framework to filter out the SNPs that have no associations. We develop a Majorization-Minimization algorithm for computational implementation. In addition, we propose a modified BIC criterion to select the penalty parameters and determine the rank number. The proposed method is applied to a Multiple Sclerosis data set and simulated data sets, and shows promise in biomarker detection.

  9. INFLUENCE OF TECHNOLOGICAL PARAMETERS ON AGROTEXTILES WATER ABSORBENCY USING ANOVA MODEL

    Directory of Open Access Journals (Sweden)

    LUPU Iuliana G.

    2016-05-01

    Agrotextiles are nowadays extensively used in horticulture, farming and other agricultural activities. Agriculture and textiles are among the largest industries in the world, providing basic needs such as food and clothing. Agrotextiles play a significant role in helping to control the environment for crop protection, mitigate variations in climate and weather, and generate optimum conditions for plant growth. Water absorptive capacity is a very important property of needle-punched nonwovens used as irrigation substrates in horticulture. Nonwovens used as watering substrates distribute water uniformly and act as a slight water buffer owing to their absorbent capacity. The paper analyzes the influence of needling process parameters on the water absorptive capacity of needle-punched nonwovens by using an ANOVA model. The model allows the identification of optimal process parameters in a shorter time and with less material expense than experimental research. The needle board frequency and needle penetration depth were used as independent variables and the water absorptive capacity as the dependent variable for the ANOVA regression model. Based on the employed ANOVA model, we established that the needling parameters have a significant influence on water absorbent capacity: the higher the needle penetration depth and needle board frequency, the higher the compactness of the fabric, and a less porous structure has a lower water absorptive capacity.

  10. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.

    Science.gov (United States)

    Rohlfs, Rori V; Nielsen, Rasmus

    2015-09-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the Hudson-Kreitman-Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence.

  11. Visualizing Experimental Designs for Balanced ANOVA Models using Lisp-Stat

    Directory of Open Access Journals (Sweden)

    Philip W. Iversen

    2004-12-01

    The structure, or Hasse, diagram described by Taylor and Hilton (1981, American Statistician) provides a visual display of the relationships between factors for balanced complete experimental designs. Using the Hasse diagram, rules exist for determining the appropriate linear model, ANOVA table, expected mean squares, and F-tests in the case of balanced designs. This procedure has been implemented in Lisp-Stat using a software representation of the experimental design. The user can interact with the Hasse diagram to add, change, or delete factors and see the effect on the proposed analysis. The system has potential uses in teaching and consulting.

  12. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers have demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters, along with the stochastic nature of this technology, leads to a complex process control, which requires work focused on process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling of Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed the profile to be characterized through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as functions of the process parameters. The combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.

  13. Generalized F test and generalized deviance test in two-way ANOVA models for randomized trials.

    Science.gov (United States)

    Shen, Juan; He, Xuming

    2014-01-01

    We consider the problem of detecting treatment effects in a randomized trial in the presence of an additional covariate. By reexpressing a two-way analysis of variance (ANOVA) model in a logistic regression framework, we derive generalized F tests and generalized deviance tests, which provide better power in detecting common location-scale changes of treatment outcomes than the classical F test. The null distributions of the test statistics are independent of the nuisance parameters in the models, so the critical values can be easily determined by Monte Carlo methods. We use simulation studies to demonstrate how the proposed tests perform compared with the classical F test. We also use data from a clinical study to illustrate possible savings in sample sizes.

  14. Assessing and Evaluating UBT Model of Student Management Information System using ANOVA

    Directory of Open Access Journals (Sweden)

    Bekim Fetaji

    2016-08-01

    The focus of this research study is to assess the efficiency and impact of a Student Management Information System in facilitating students during registration and examination periods in terms of efficiency, performance and usability. The University for Business and Technology (UBT) was chosen as the case study; the system was designed and tailored to it, implemented, tested, evaluated and re-engineered there. The analysis tries to identify whether the developed UBT model shows improvement in student services, data centralization, data security, and the entire process of student e-services. Several impacting factors were investigated through the case study. Afterwards, the usability and user-friendliness of the developed UBT model and Management Information System solutions were evaluated, and ANOVA regression analysis was used to determine the impact. Insights and recommendations are provided.

  15. Smoothing spline ANOVA decomposition of arbitrary splines: an application to eye movements in reading.

    Science.gov (United States)

    Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias

    2015-01-01

    The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness cannot be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations is available. In this article, we propose an alternative method that allows prior knowledge to be incorporated without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than according to the analysis to be done. The two approaches are compared on an artificial example and on analyses of fixation durations during reading.

  16. Tests for ANOVA models with a combination of crossed and nested designs under heteroscedasticity

    Science.gov (United States)

    Xu, Liwen; Tian, Maozai

    2016-06-01

    In this article we consider unbalanced ANOVA models with a combination of crossed and nested designs under heteroscedasticity. For the problem of testing no nested interaction effects, we propose two tests, based on a parametric bootstrap (PB) approach and a generalized p-value approach, respectively. The PB test does not depend on the chosen weights used to define the parameters uniquely. These two tests are compared through their simulated Type I error rates and powers. The simulations indicate that the PB test outperforms the generalized p-value test: the PB test performs very satisfactorily across the extensive set of cases examined, while the generalized p-value test has Type I error rates much less than the nominal level most of the time. Both tests exhibit similar power properties provided the Type I error rates are close to each other. In some cases, the generalized F (GF) test appears to be more powerful than the PB tests because of its inflated Type I error rates.
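
    The PB idea generalizes beyond this design; as an illustration only (a one-way heteroscedastic statistic, a simplified stand-in for the crossed/nested statistic of the article), a PB test simulates the statistic's null distribution from the fitted variances:

```python
import numpy as np

# Generic parametric-bootstrap (PB) test of equal means for heteroscedastic
# one-way data: compare the observed statistic to its bootstrap null
# distribution generated from the estimated group variances under H0.
rng = np.random.default_rng(4)

def pb_test(samples, n_boot=5000):
    ns = np.array([len(s) for s in samples])
    means = np.array([s.mean() for s in samples])
    varis = np.array([s.var(ddof=1) for s in samples])

    def stat(m, v):
        w = ns / v
        grand = np.sum(w * m) / np.sum(w)
        return np.sum(w * (m - grand) ** 2)

    t_obs = stat(means, varis)
    t_null = np.empty(n_boot)
    for b in range(n_boot):
        m_b = rng.normal(0.0, np.sqrt(varis / ns))        # group means under H0
        v_b = varis * rng.chisquare(ns - 1) / (ns - 1)    # resampled variances
        t_null[b] = stat(m_b, v_b)
    return np.mean(t_null >= t_obs)                       # PB p-value

groups = [rng.normal(0.0, s, n) for s, n in ((1.0, 10), (2.0, 15), (0.5, 8))]
print("PB p-value:", pb_test(groups))
```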

  17. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Kunkun, E-mail: ktg@illinois.edu [The Center for Exascale Simulation of Plasma-Coupled Combustion (XPACC), University of Illinois at Urbana–Champaign, 1308 W Main St, Urbana, IL 61801 (United States); Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Congedo, Pietro M. [Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Abgrall, Rémi [Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  18. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    Science.gov (United States)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
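
    A toy sketch of the PDD-to-Sobol' link these abstracts rely on (first-order components only, plain least squares instead of the paper's adaptive sparse regression; the model and sizes are invented): with an orthonormal polynomial basis, squared regression coefficients are variance contributions.

```python
import numpy as np
from numpy.polynomial import legendre

# Toy PDD/PC surrogate with first-order components: regress on orthonormal
# Legendre polynomials of U(-1, 1) inputs; squared coefficients then sum to
# variance shares, giving first-order Sobol' indices directly.
rng = np.random.default_rng(9)
N, d, P = 4000, 3, 3                       # samples, inputs, max degree
X = rng.uniform(-1.0, 1.0, size=(N, d))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=N)   # invented model

def leg_orthonormal(x, k):
    c = np.zeros(k + 1)
    c[k] = 1.0
    return legendre.legval(x, c) * np.sqrt(2 * k + 1)  # orthonormal on U(-1, 1)

cols, owner = [np.ones(N)], [None]
for j in range(d):
    for k in range(1, P + 1):
        cols.append(leg_orthonormal(X[:, j], k))
        owner.append(j)
coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)

total = np.sum(coef[1:] ** 2)              # total explained variance
for j in range(d):
    s_j = sum(c ** 2 for c, o in zip(coef, owner) if o == j)
    print(f"first-order Sobol' index S_{j} ~ {s_j / total:.3f}")
```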

  19. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    Wetzels, R.; Grasman, R.P.P.P.; Wagenmakers, E.J.

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA designs...

  20. Application of Anova on Fly Ash Leaching Kinetics for Value Addition

    Science.gov (United States)

    Swain, Ranjita; Mohapatro, Rudra Narayana; Bhima Rao, Raghupatruni

    2016-04-01

    Fly ash is a major problem in the power plant sector as it is dumped at the plant site. Fly ash generation increases day by day due to the rapid growth of steel industries. Ceramic/refractory industries are growing rapidly because of the increasing number of steel industries. The natural resources of ceramic/refractory raw materials are depleting with time due to consumption. In view of this, fly ash from a thermal power plant has been identified for use in the ceramic/refractory industries after suitable beneficiation. In this paper, the sample was collected from the ash pond of Vedanta. The particle size (d80 passing size) of the sample is around 150 microns. The chemical analysis of the sample shows 3.9 % Fe2O3 and more than 10 % CaO. XRD patterns show that the fly ash samples consist predominantly of the crystalline phases of quartz, hematite and magnetite in a matrix of aluminosilicate glass. Leaching of iron oxide is 98.3 % at 3 M HCl concentration at 90 °C for 270 min of leaching time. A kinetic study of the leaching experiment was carried out. ANOVA software was utilized for curve fitting and the process was optimized using MATLAB 7.1. The properties of the product relevant to ceramic materials were studied in detail and compared with those of standard ceramic materials. The product contains 0.3 % iron. The other properties of the product have established that it can be a raw material for ceramic industries.

  1. Tests of additivity in mixed and fixed effect two-way ANOVA models with single sub-class numbers

    OpenAIRE

    Rasch, Dieter; Rusch, Thomas; Simeckova, Marie; Kubinger, Klaus D.; Moder, Karl; Simecek, Petr

    2009-01-01

    In variety testing as well as in psychological assessment, the situation occurs that in a two-way ANOVA-type model with only one replication per cell, analysis is done under the assumption of no interaction between the two factors. Tests for this situation are known only for fixed factors and normally distributed outcomes. In the following we present five additivity tests and apply them to fixed and mixed models and to quantitative as well as Bernoulli distributed data.

  2. Structural equation modeling and nested ANOVA: Effects of lead exposure on maternal and fetal growth in rats

    Energy Technology Data Exchange (ETDEWEB)

    Hamilton, J.D. (Rohm and Haas Company, Spring House, PA (United States)); O'Flaherty, E.J.; Shukla, R.; Gartside, P.S. (Univ. of Cincinnati, OH (United States)); Ross, R. (Univ. of Cincinnati Medical Center, Cincinnati, OH (United States))

    1994-01-01

    This study provided an assessment of the effects of lead on early growth in rats based on structural equation modeling and nested analysis of variance (ANOVA). Structural equation modeling showed that lead in drinking water (250, 500, or 1000 ppm) had a direct negative effect on body weight and tail length (i.e., growth) in female rats during the first week of exposure. During the following 2 weeks of exposure, high correlation between growth measurements taken over time resulted in reduced early postnatal growth. By the fourth week of exposure, reduced growth was not evident. Mating began after 8 weeks of exposure, and exposure continued during gestation. Decreased fetal body weight was detected when the effects of litter size, intrauterine position, and sex were controlled in a nested ANOVA. Lead exposure did not appear to affect fetal skeletal development, possibly because lead did not alter maternal serum calcium and phosphorus levels. The effect of lead on individual fetal body weight suggests that additional studies are needed to examine the effect of maternal lead exposure on fetal development and early postnatal growth. 24 refs., 4 figs., 6 tabs.

  3. Simultaneous Optimality of LSE and ANOVA Estimate in General Mixed Models

    Institute of Scientific and Technical Information of China (English)

    Mi Xia WU; Song Gui WANG; Kai Fun YU

    2008-01-01

    Problems of simultaneous optimal estimation and optimal tests in general mixed models are considered. A necessary and sufficient condition is presented for the least squares estimate of the fixed effects and the analysis of variance (Henderson III) estimate of the variance components to be uniformly minimum variance unbiased estimates simultaneously. This result can be applied to the problems of finding uniformly optimal unbiased tests and uniformly most accurate unbiased confidence intervals on parameters of interest, and to finding equivalences of several common estimates of variance components.

  4. Introducing ANOVA and ANCOVA a GLM approach

    CERN Document Server

    Rutherford, Andrew

    2000-01-01

    Traditional approaches to ANOVA and ANCOVA are now being replaced by a General Linear Modeling (GLM) approach. This book begins with a brief history of the separate development of ANOVA and regression analyses and demonstrates how both analysis forms are subsumed by the General Linear Model. A simple single independent factor ANOVA is analysed first in conventional terms and then again in GLM terms to illustrate the two approaches. The text then goes on to cover the main designs, both independent and related ANOVA and ANCOVA, single and multi-factor designs. The conventional statistical assumptions underlying ANOVA and ANCOVA are detailed and given expression in GLM terms. Alternatives to traditional ANCOVA...

  5. Are drought occurrence and severity aggravating? A study on SPI drought class transitions using loglinear models and ANOVA-like inference

    Directory of Open Access Journals (Sweden)

    E. E. Moreira

    2011-12-01

    Long time series (95 to 135 yr) of the Standardized Precipitation Index (SPI) computed on the 12-month time scale for 10 locations across Portugal were studied with the aim of investigating whether drought frequency and severity are changing through time. Considering four drought severity classes, time series of drought class transitions were computed and later divided into 4 or 5 sub-periods according to the length of the time series. Drought class transitions were calculated to form a 2-dimensional contingency table for each period. Two-dimensional loglinear models were fitted to these contingency tables and an ANOVA-like inference was then performed in order to investigate differences in drought class transitions among those sub-periods, which were treated as the treatments of a single factor. The application of ANOVA-like inference to these data allowed the four or five sub-periods to be compared in terms of probabilities of transition between drought classes, which were used to detect a possible trend in the time evolution of drought frequency and severity that could be related to climate change. Results for a number of locations show some similarity between the first, third and fifth periods (or the second and the fourth if there were only 4 sub-periods) regarding the persistence of severe/extreme and sometimes moderate droughts. In global terms, the results do not support the assumption of a trend towards progressive aggravation of drought occurrence during the last century, but rather suggest the existence of long-duration cycles.

  6. Application of ANOVA and Taguchi-based Mutation Particle Swarm Algorithm for Parameters Design of Multi-hole Extrusion Process

    Directory of Open Access Journals (Sweden)

    Wen-Jong Chen

    2013-08-01

    This study presents the Taguchi method and a Particle Swarm Optimization technique with mutation (MPSO) and dynamic inertia weight to determine the best ranges of process parameters (extrusion velocity, eccentricity ratio, billet temperature and friction coefficient at the die interface) for a multi-hole extrusion process. An L18(2¹ × 3⁷) orthogonal array, signal-to-noise (S/N) ratios and analysis of variance (ANOVA) at the 99% confidence level were used to indicate the optimum levels and the effects of the process parameters, with consideration of the mandrel eccentricity angle and exit tube bending angle. Using the Taguchi-based MPSO algorithm with DEFORM™ 3D Finite Element Analysis (FEA) software, the minimum mandrel eccentricity and exit tube bending angles were both calculated to be 0.03°, significantly less than those obtained with a Genetic Algorithm (GA) and the Taguchi method, respectively. This indicates that the Taguchi-based MPSO algorithm can effectively and remarkably reduce the warp angles of Ti-6Al-4V extruded products, and that the billet temperature is the most influential parameter. The results of this study can be extended to multi-hole extrusion beyond four holes and employed as a predictive tool to forecast the optimal parameters of the multi-hole extrusion process.
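
    For reference, the smaller-the-better S/N ratio used to rank parameter levels in such Taguchi studies is S/N = −10·log10(mean(y²)); this is the generic formula, not the study's data:

```python
import numpy as np

# Taguchi smaller-the-better signal-to-noise ratio.
def sn_smaller_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical replicate warp-angle measurements (degrees) at one parameter
# setting; the level with the highest S/N ratio is preferred.
print(sn_smaller_is_better([0.05, 0.03, 0.04]))
```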

  7. ANOVA and ANCOVA A GLM Approach

    CERN Document Server

    Rutherford, Andrew

    2012-01-01

    Provides an in-depth treatment of ANOVA and ANCOVA techniques from a linear model perspective. ANOVA and ANCOVA: A GLM Approach provides a contemporary look at the general linear model (GLM) approach to the analysis of variance (ANOVA) of one- and two-factor psychological experiments. With its organized and comprehensive presentation, the book successfully guides readers through conventional statistical concepts and how to interpret them in GLM terms, treating the main single- and multi-factor designs as they relate to ANOVA and ANCOVA. The book begins with a brief history of the separate development of ANOVA and regression analyses...

  8. Detecting variable responses in time-series using repeated measures ANOVA: Application to physiologic challenges [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Paul M. Macey

    2016-07-01

    We present an approach to analyzing physiologic timetrends recorded during a stimulus by comparing means at each time point using repeated measures analysis of variance (RMANOVA). The approach allows temporal patterns to be examined without an a priori model of expected timing or pattern of response. The approach was originally applied to signals recorded from functional magnetic resonance imaging (fMRI) volumes-of-interest (VOI) during a physiologic challenge, but we have used the same technique to analyze continuous recordings of other physiological signals such as heart rate, breathing rate, and pulse oximetry. For fMRI, the method serves as a complement to whole-brain voxel-based analyses, and is useful for detecting complex responses within pre-determined brain regions, or as a post-hoc analysis of regions of interest identified by whole-brain assessments. We illustrate an implementation of the technique in the statistical software packages R and SAS. VOI timetrends are extracted from conventionally preprocessed fMRI images. A timetrend of average signal intensity across the VOI during the scanning period is calculated for each subject. The values are scaled relative to baseline periods, and time points are binned. In SAS, the procedure PROC MIXED implements the RMANOVA in a single step. In R, we present one option for implementing RMANOVA with the mixed model function "lme". Model diagnostics, and predicted means and differences, are best performed with additional libraries and commands in R; we present one example. The ensuing results allow determination of significant overall effects, and time-point-specific within- and between-group responses relative to baseline. We illustrate the technique using fMRI data from two groups of subjects who underwent a respiratory challenge. RMANOVA allows insight into the timing of responses and response differences between groups, and so is suited to physiologic testing paradigms eliciting complex responses.
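
    A rough Python analogue of the R/SAS models described (an assumed translation using statsmodels, not code from the article; subject counts, bin counts and effect shapes are invented) treats binned time as a categorical fixed effect with a random intercept per subject:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_subj, n_bins = 20, 8
df = pd.DataFrame({"subject": np.repeat(np.arange(n_subj), n_bins),
                   "time": np.tile(np.arange(n_bins), n_subj)})
df["group"] = np.where(df["subject"] < n_subj // 2, "patient", "control")
subj_offset = rng.normal(0.0, 1.0, n_subj)        # between-subject variation
df["signal"] = (subj_offset[df["subject"]]
                + 0.3 * (df["group"] == "patient") * np.sin(df["time"])
                + rng.normal(0.0, 0.5, len(df)))

# RMANOVA via a mixed model: categorical time, group, and their interaction
# as fixed effects; a random intercept per subject handles repeated measures.
fit = smf.mixedlm("signal ~ C(time) * group", df, groups=df["subject"]).fit()
print(fit.summary())
```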

  9. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
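
    The flavour of the correction can be sketched as follows (my own toy model, not the paper's formulae): the variance of the per-run means overstates the PSA variance by the within-run Monte Carlo noise, which is estimated and subtracted exactly as in a one-way random-effects ANOVA.

```python
import numpy as np

rng = np.random.default_rng(6)
K, n = 200, 500           # PSA input samples (runs), patients per run

def patient_level_model(theta, n):
    # hypothetical patient-level simulation: net benefit per patient
    return rng.normal(loc=theta, scale=2.0, size=n)

thetas = rng.normal(1.0, 0.3, size=K)        # sampled model inputs
runs = [patient_level_model(t, n) for t in thetas]
run_means = np.array([r.mean() for r in runs])
within_var = np.array([r.var(ddof=1) for r in runs])

mean_hat = run_means.mean()
# ANOVA correction: remove the within-run Monte Carlo noise (~ s^2 / n)
var_hat = run_means.var(ddof=1) - within_var.mean() / n
print(f"mean {mean_hat:.3f}, PSA variance {var_hat:.4f} (true value 0.0900)")
```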

  10. 3-way ANOVA Interactions: Deconstructed

    OpenAIRE

    Phil Ender

    2008-01-01

    Three approaches to understanding 3-way ANOVA interactions will be presented: 1) a conceptual approach, 2) an ANOVA approach and 3) a regression approach using dummy coding. The three approaches are illustrated through the use of a synthetic dataset with a significant 3-way interaction.

  11. ANOVA like analysis for structured families of stochastic matrices

    Science.gov (United States)

    Dias, Cristina; Santos, Carla; Varadinov, Maria; Mexia, João T.

    2016-12-01

    Symmetric stochastic matrices with a dominant eigenvalue λ and corresponding eigenvector α appear in many applications. Such matrices can be written as M = λααᵀ + Ē, so that β = λα will be the structure vector. When the matrices in such families correspond to the treatments of a base design, we can carry out an ANOVA-like analysis of the action of the treatments in the model on the structure vectors. This analysis can be transversal, when we work with homologous components, and longitudinal, when we consider contrasts on the components of each structure vector. The analysis is briefly considered at the end of our presentation.

  12. Teaching Principles of Inference with ANOVA

    Science.gov (United States)

    Tarlow, Kevin R.

    2016-01-01

    Analysis of variance (ANOVA) is a test of "mean" differences, but the reference to "variances" in the name is often overlooked. Classroom activities are presented to illustrate how ANOVA works with emphasis on how to think critically about inferential reasoning.

  13. Some Sufficient Conditions for the Identity of the ANOVA Estimator and SD Estimator in Mixed-Effects Models

    Institute of Scientific and Technical Information of China (English)

    Wu, Mixia; Zhao, Yan

    2014-01-01

    Mixed-effects models are a very important class of statistical models and are widely applied in many fields. This paper compares two estimators of the variance components under such models: the analysis of variance (ANOVA) estimator and the spectral decomposition (SD) estimator. Using the spectral decomposition of the covariance matrix from Wu and Wang [A new method of spectral decomposition of covariance matrix in mixed effects models and its applications, Sci. China, Ser. A, 2005, 48: 1451-1464], two sufficient conditions under which the ANOVA estimator and the SD estimator coincide are given, together with the corresponding statistical properties, and the results are applied to the circular-parts data model and the mixed-effects ANOVA model.

  14. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  15. Sequential experimental design based generalised ANOVA

    Science.gov (United States)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  16. On the Comparison of the Spectral Decomposition Estimate and the ANOVA Estimate in Linear Mixed Models

    Institute of Scientific and Technical Information of China (English)

    Shi, Jianhong

    2003-01-01

    The spectral decomposition estimate (SDE) proposed by Wang and Yin [1] is a new method of simultaneously estimating fixed effects and variance components in linear mixed models. In this paper, we compare the SDE with the ANOVA estimate (ANOVAE) in linear mixed models with two variance components. Our results show that these two estimates of the variance components have equal variance under some conditions, and thus that the SDE shares some of the optimality properties of the ANOVAE.

  17. The engineering analysis of bioheat equation and penile hemodynamic relationships in the diagnosis of erectile dysfunction: part II: model optimization using the ANOVA and Taguchi method.

    Science.gov (United States)

    Ng, E Y K; Ng, W K; Huang, J; Tan, Y K

    2008-01-01

    The authors aimed to study numerically the skin surface bioheat perfusion model described in part I. The influence of each constituent in the determination of the surface temperature profile was statistically examined. The theoretically derived data will then be benchmarked against clinically measured data to develop an artificial intelligence system for the diagnosis of erectile dysfunction (ED). The new approach is based on the hypothesis that there exists a constitutive relationship between surface temperature profiles and the etiology of ED. By considering the penis model as a group of reservoirs with irregular cavities, we built a numerical model, simplified to save computational cost while still realistic enough to represent the actual geometry in the partial differential equation calculations. Incompressible blood flow was assumed, coupled with the classical bioheat transfer equation, which was solved using the finite element method. Isotropic homogeneous heat diffusivity was assigned to each tissue layer. The results of the simulations were tested by sensitivity analysis and further optimized to obtain the 'best' signal from the simulations using the Taguchi method. Four important parameters were identified and analysis of variance was performed using the 2^n design (n = number of parameters, in this case 4). The implications of these parameters were hypothesized based on physiological observations. Our results show that for an optimum signal-to-noise (S/N) ratio, the noise factors (thermal conductivity of the skin, A, and of the tunica albuginea, B) must be set high and low, respectively. Hence, at this setting, the signal captured based on the perfusion rate of the boundary layer of the sinusoidal space and the blood pressure (perfusion of the sinusoidal space, C, and blood pressure, D) will be optimal, as their S/N ratios (C (low) and D (low)) are larger than the former.

  18. ANOVA for the behavioral sciences researcher

    CERN Document Server

    Cardinal, Rudolf N

    2013-01-01

    This new book provides a theoretical and practical guide to analysis of variance (ANOVA) for those who have not had a formal course in this technique, but need to use this analysis as part of their research. From their experience in teaching this material and applying it to research problems, the authors have created a summary of the statistical theory underlying ANOVA, together with important issues, guidance, practical methods, references, and hints about using statistical software. These have been organized so that the student can learn the logic of the analytical techniques but also use the...

  19. Multiple comparison analysis testing in ANOVA.

    Science.gov (United States)

    McHugh, Mary L

    2011-01-01

    The Analysis of Variance (ANOVA) test has long been an important tool for researchers conducting studies on multiple experimental groups and one or more control groups. However, ANOVA cannot provide detailed information on differences among the various study groups, or on complex combinations of study groups. To fully understand group differences in an ANOVA, researchers must conduct tests of the differences between particular pairs of experimental and control groups. Tests conducted on subsets of data tested previously in another analysis are called post hoc tests. A class of post hoc tests that provide this type of detailed information for ANOVA results are called "multiple comparison analysis" tests. The most commonly used multiple comparison analysis statistics include the following tests: Tukey, Newman-Keuls, Scheffé, Bonferroni and Dunnett. These statistical tools each have specific uses, advantages and disadvantages. Some are best used for testing theory while others are useful in generating new theory. Selection of the appropriate post hoc test will provide researchers with the most detailed information while limiting Type 1 errors due to alpha inflation.
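
    A minimal sketch of one of the named procedures, Tukey's HSD, as implemented in statsmodels (invented data; the other listed tests require other functions):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Tukey HSD post hoc comparisons after an ANOVA.
rng = np.random.default_rng(7)
y = np.concatenate([rng.normal(m, 1.0, 12) for m in (0.0, 0.0, 1.0)])
groups = np.repeat(["control", "treat1", "treat2"], 12)
print(pairwise_tukeyhsd(y, groups, alpha=0.05))
```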

  20. ANOVA like analysis of cancer death age

    Science.gov (United States)

    Areia, Aníbal; Mexia, João T.

    2016-06-01

    We use ANOVA to study the influence of year, sex, country and location on the average cancer death age. The data used were from the World Health Organization (WHO) files for 1999, 2003, 2007 and 2011. The locations considered were kidney, leukaemia, melanoma of skin and oesophagus, and the countries were Portugal, Norway, Greece and Romania.

  1. Detection of epigenetic changes using ANOVA with spatially varying coefficients.

    Science.gov (United States)

    Xiao, Guanghua; Wang, Xinlei; LaPlant, Quincey; Nestler, Eric J; Xie, Yang

    2013-03-13

    Identification of genome-wide epigenetic changes, the stable changes in gene function without a change in DNA sequence, under various conditions plays an important role in biomedical research. High-throughput epigenetic experiments are useful tools to measure genome-wide epigenetic changes, but the measured intensity levels from these high-resolution genome-wide epigenetic profiling data are often spatially correlated with high noise levels. In addition, it is challenging to detect genome-wide epigenetic changes across multiple conditions, so efficient statistical methodology development is needed for this purpose. In this study, we consider ANOVA models with spatially varying coefficients, combined with a hierarchical Bayesian approach, to explicitly model spatial correlation caused by location-dependent biological effects (i.e., epigenetic changes) and borrow strength among neighboring probes to compare epigenetic changes across multiple conditions. Through simulation studies and applications in drug addiction and depression datasets, we find that our approach compares favorably with competing methods; it is more efficient in estimation and more effective in detecting epigenetic changes. In addition, it can provide biologically meaningful results.

  2. Best practices for repeated measures ANOVAs of ERP data: Reference, regional channels, and robust ANOVAs.

    Science.gov (United States)

    Dien, Joseph

    2017-01-01

    Analysis of variance (ANOVA) is a fundamental procedure for event-related potential (ERP) research and yet there is very little guidance for best practices. It is important for the field to develop evidence-based best practices: 1) to minimize the Type II error rate by maximizing statistical power, 2) to minimize the Type I error rate by reducing the latitude for varying procedures, and 3) to identify areas for further methodological improvements. While generic treatments of ANOVA methodology are available, ERP datasets have many unique characteristics that must be considered. In the present report, a novelty oddball dataset was utilized as a test case to determine whether three aspects of ANOVA procedures as applied to ERPs make a real-world difference: the effects of reference site, regional channels, and robust ANOVAs. Recommendations are provided for best practices in each of these areas.

  3. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.

  4. Scaling in ANOVA-simultaneous component analysis.

    Science.gov (United States)

    Timmerman, Marieke E; Hoefsloot, Huub C J; Smilde, Age K; Ceulemans, Eva

    In omics research, high-dimensional data are often collected according to an experimental design. Typically, the manipulations involved yield differential effects on subsets of variables. An effective approach to identify those effects is ANOVA-simultaneous component analysis (ASCA), which combines analysis of variance with principal component analysis. So far, pre-treatment in ASCA has received hardly any attention, whereas its effects can be huge. In this paper, we describe various strategies for scaling, and identify a rational approach. We present the approaches in matrix algebra terms and illustrate them with an insightful simulated example. We show that scaling directly influences which data aspects are stressed in the analysis, and hence become apparent in the solution. Therefore, the cornerstone for proper scaling is to use a scaling factor that is free from the effect of interest. This implies that proper scaling depends on the effect(s) of interest, and that different types of scaling may be proper for different effect matrices. We illustrate that different scaling approaches can greatly affect the ASCA interpretation with a real-life example from nutritional research. The principle that scaling factors should be free from the effect of interest generalizes to other statistical methods that involve scaling, such as classification methods.
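
    The "scaling factor free from the effect of interest" principle can be made concrete with a small sketch (invented two-group data, not from the paper): autoscaling by the overall standard deviation absorbs the group effect, while the pooled within-group standard deviation does not.

```python
import numpy as np

rng = np.random.default_rng(8)
n_per, p = 10, 5
labels = np.repeat([0, 1], n_per)
X = rng.normal(size=(2 * n_per, p))
X[labels == 1, 0] += 3.0                   # effect of interest on variable 0

overall_sd = X.std(axis=0, ddof=1)
centered = np.vstack([X[labels == g] - X[labels == g].mean(axis=0)
                      for g in (0, 1)])
within_sd = centered.std(axis=0, ddof=2)   # pooled, effect-free scale

# Variable 0's overall sd is inflated by the group effect; scaling with it
# would shrink exactly the effect the analysis is meant to reveal.
print("overall:", overall_sd.round(2))
print("within: ", within_sd.round(2))
```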

  5. A Simple and Transparent Alternative to Repeated Measures ANOVA

    Directory of Open Access Journals (Sweden)

    James W. Grice

    2015-09-01

    Observation Oriented Modeling is a novel approach to conceptualizing and analyzing data. Compared with traditional parametric statistics, Observation Oriented Modeling is more intuitive, relatively free of assumptions, and encourages researchers to stay close to their data. Rather than estimating abstract population parameters, the overarching goal of the analysis is to identify and explain distinct patterns within the observations. Selected data from a recent study by Craig et al. were analyzed using Observation Oriented Modeling; this analysis was contrasted with a traditional repeated measures ANOVA assessment. Various pitfalls in traditional parametric analyses, including the presence of outliers and missing data, were avoided when using Observation Oriented Modeling. The differences between Observation Oriented Modeling and various parametric and nonparametric statistical methods are finally discussed.

  6. Understanding one-way ANOVA using conceptual figures.

    Science.gov (United States)

    Kim, Tae Kyun

    2017-02-01

    Analysis of variance (ANOVA) is one of the most frequently used statistical methods in medical research. The need for ANOVA arises from the problem of alpha-level inflation, the increase in Type 1 error probability (false positives) caused by multiple comparisons. ANOVA uses the statistic F, which is the ratio of the between-group to the within-group variance. Although the main interest of an analysis is usually the differences among group means, ANOVA addresses them through a difference of variances. The figures presented serve as a guide to understanding how ANOVA handles mean-difference problems by using between-group and within-group variance differences.
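
    The F ratio in question can be computed by hand and checked against a library routine (illustrative numbers, not data from the article):

```python
import numpy as np
from scipy import stats

groups = [np.array([5.1, 4.9, 5.6, 5.0]),
          np.array([5.9, 6.2, 5.7, 6.0]),
          np.array([5.4, 5.2, 5.5, 5.1])]
k = len(groups)
n = sum(len(g) for g in groups)
grand = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
f_hand = (ss_between / (k - 1)) / (ss_within / (n - k))   # MS_between / MS_within
f_lib, p = stats.f_oneway(*groups)
print(f_hand, f_lib, p)   # the two F statistics agree
```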

  7. Linear Bayes estimator of parameters and its superiorities for one-way ANOVA model

    Institute of Scientific and Technical Information of China (English)

    童楠; 韦来生

    2008-01-01

    For the balanced one-way classification analysis of variance (ANOVA) model, the linear Bayes unbiased estimator (LBUE) of estimable functions of the effect parameter vector is derived, and its superiority over the least squares estimator (LSE) is discussed under the mean squared error matrix (MSEM) criterion, the predictive Pitman closeness (PRPC) criterion, and the posterior Pitman closeness (PPC) criterion.

  8. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  9. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Science.gov (United States)

    Liao, Qifeng; Lin, Guang

    2016-07-01

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  10. A marginal-mean ANOVA approach for analyzing multireader multicase radiological imaging data.

    Science.gov (United States)

    Hillis, Stephen L

    2014-01-30

    The correlated-error ANOVA method proposed by Obuchowski and Rockette (OR) has been a useful procedure for analyzing reader-performance outcomes, such as the area under the receiver-operating-characteristic curve, resulting from multireader multicase radiological imaging data. This approach, however, has only been formally derived for the test-by-reader-by-case factorial study design. In this paper, I show that the OR model can be viewed as a marginal-mean ANOVA model. Viewing the OR model within this marginal-mean ANOVA framework is the basis for the marginal-mean ANOVA approach, the topic of this paper. This approach (1) provides an intuitive motivation for the OR model, including its covariance-parameter constraints; (2) provides easy derivations of OR test statistics and parameter estimates, as well as their distributions and confidence intervals; and (3) allows for easy generalization of the OR procedure to other study designs. In particular, I show how one can easily derive OR-type analysis formulas for any balanced study design by following an algorithm that only requires an understanding of conventional ANOVA methods.

  11. Discovering gene expression patterns in time course microarray experiments by ANOVA-SCA.

    NARCIS (Netherlands)

    M.J. Nueda; A. Conessa; J.A. Westerhuis; H.C.J. Hoefsloot; A.K. Smilde; M. Talon; A. Ferrer

    2007-01-01

    In this work, we develop the application of analysis of variance-simultaneous component analysis (ANOVA-SCA; Smilde et al., Bioinformatics, 2005) to the analysis of multiple-series time course microarray data, as an example of multifactorial gene expression profiling experiments. We denoted this…

  12. Are multiple contrast tests superior to the ANOVA?

    Science.gov (United States)

    Konietschke, Frank; Bösiger, Sandra; Brunner, Edgar; Hothorn, Ludwig A

    2013-08-01

    Multiple contrast tests can be used to test arbitrary linear hypotheses, providing local and global test decisions as well as simultaneous confidence intervals. The ANOVA F-test, by contrast, can only test the global null hypothesis of no treatment effect. Thus, multiple contrast tests provide more information than the analysis of variance (ANOVA) by indicating which levels cause the significance. We compare the exact powers of the ANOVA F-test and multiple contrast tests to reject the global null hypothesis. To this end, we compute their least favorable configurations (LFCs). It turns out that both procedures have the same LFCs under certain conditions. Exact power investigations show that their powers are equal to detect their LFCs.
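
    As a concrete illustration of the extra information a multiple contrast test provides, the sketch below (simulated data; SciPy and statsmodels assumed available) runs the global ANOVA F-test alongside Tukey all-pairs contrasts, which return local decisions and simultaneous confidence intervals:

        import numpy as np
        from scipy import stats
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        rng = np.random.default_rng(2)
        y = np.concatenate([rng.normal(m, 1.0, 20) for m in (0.0, 0.0, 0.9)])
        g = np.repeat(["A", "B", "C"], 20)

        # global decision only
        print(stats.f_oneway(y[g == "A"], y[g == "B"], y[g == "C"]))
        # local decisions plus simultaneous confidence intervals
        print(pairwise_tukeyhsd(y, g))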

  13. Empirical Likelihood-Based ANOVA for Trimmed Means

    Science.gov (United States)

    Velina, Mara; Valeinis, Janis; Greco, Luca; Luta, George

    2016-01-01

    In this paper, we introduce an alternative to Yuen’s test for the comparison of several population trimmed means. This nonparametric ANOVA type test is based on the empirical likelihood (EL) approach and extends the results for one population trimmed mean from Qin and Tsao (2002). The results of our simulation study indicate that for skewed distributions, with and without variance heterogeneity, Yuen’s test performs better than the new EL ANOVA test for trimmed means with respect to control over the probability of a type I error. This finding is in contrast with our simulation results for the comparison of means, where the EL ANOVA test for means performs better than Welch’s heteroscedastic F test. The analysis of a real data example illustrates the use of Yuen’s test and the new EL ANOVA test for trimmed means for different trimming levels. Based on the results of our study, we recommend the use of Yuen’s test for situations involving the comparison of population trimmed means between groups of interest. PMID:27690063
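
    For readers who want to try Yuen's test, recent SciPy versions expose it through the trim argument of ttest_ind (two-sample case; availability depends on the installed version, so treat this as an assumption to verify). The data below are simulated and skewed:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        a = rng.exponential(1.0, size=40)        # skewed samples
        b = rng.exponential(1.4, size=40)

        t, p = stats.ttest_ind(a, b, trim=0.2)   # 20% trimming: Yuen's test
        print(t, p)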

  14. GENERALIZED CONFIDENCE REGIONS OF FIXED EFFECTS IN THE TWO-WAY ANOVA

    Institute of Scientific and Technical Information of China (English)

    Weiyan MU; Shifeng XIONG; Xingzhong XU

    2008-01-01

    The authors discuss the unbalanced two-way ANOVA model under heteroscedasticity. By taking the generalized approach, the authors derive the generalized p-values for testing the equality of fixed effects and the generalized confidence regions for these effects. The authors also provide their frequentist properties in large-sample cases. Simulation studies show that the generalized confidence regions have good coverage probabilities.

  15. Model Theory and Applications

    CERN Document Server

    Mangani, P

    2011-01-01

    This title includes: Lectures - G.E. Sacks - Model theory and applications, and H.J. Keisler - Constructions in model theory; and, Seminars - M. Servi - SH formulas and generalized exponential, and J.A. Makowski - Topological model theory.

  16. Parametric optimization for tumour identification: bioheat equation using ANOVA and the Taguchi method.

    Science.gov (United States)

    Sudharsan, N M; Ng, E Y

    2000-01-01

    Breast cancer is the number one killer disease among women. It is known that early detection of a tumour ensures better prognosis and a higher survival rate. In this paper an intelligent, inexpensive and non-invasive diagnostic tool is developed for aiding breast cancer detection objectively. This tool is based on thermographic scanning of the breast surface in conjunction with numerical simulation of the breast using the bioheat equation. The medical applications of thermographic scanning make use of the skin temperature as an indication of an underlying pathological process. The thermal pattern over a breast tumour reflects the vascular reaction to the abnormality. Hence an abnormal temperature pattern may be an indicator of an underlying tumour. Seven important parameters are identified and analysis of variance (ANOVA) is performed using a 2^n design (n = number of parameters, here 7). The effect and importance of the various parameters are analysed. Based on this 2^7 design, the Taguchi method is used to optimize the parameters in order to ensure that the signal from the tumour is maximized relative to the noise from the other factors. The model predicts that the ideal setting for capturing the signal from the tumour is when the patient is at basal metabolic activity, with a correspondingly lower subcutaneous perfusion, in a low-temperature environment.

  17. Characterization of near-infrared spectral variance in the authentication of skim and nonfat dry milk powder collection using ANOVA-PCA, pooled-ANOVA, and partial least-squares regression.

    Science.gov (United States)

    Harnly, James M; Harrington, Peter B; Botros, Lucy L; Jablonski, Joseph; Chang, Claire; Bergana, Marti Mamula; Wehling, Paul; Downey, Gerard; Potts, Alan R; Moore, Jeffrey C

    2014-08-13

    Forty-one samples of skim milk powder (SMP) and nonfat dry milk (NFDM) from 8 suppliers, 13 production sites, and 3 processing temperatures were analyzed by NIR diffuse reflectance spectrometry over a period of 3 days. NIR reflectance spectra (1700-2500 nm) were converted to pseudoabsorbance and examined using (a) analysis of variance-principal component analysis (ANOVA-PCA), (b) pooled-ANOVA based on data submatrices, and (c) partial least-squares regression (PLSR) coupled with pooled-ANOVA. ANOVA-PCA score plots showed clear separation of the samples with respect to milk class (SMP or NFDM), day of analysis, production site, processing temperature, and individual samples. Pooled-ANOVA provided statistical levels of significance for the separation of the averages, some of which were many orders of magnitude below 10⁻³. PLSR showed that the correlation with Certificate of Analysis (COA) concentrations varied from a weak coefficient of determination (R²) of 0.32 for moisture to moderate R² values of 0.61 for fat and 0.78 for protein for this multinational study. In this study, pooled-ANOVA was applied for the first time to PLS modeling and demonstrated that even though the calibration models may not be precise, the contribution of the protein peaks in the NIR spectra accounted for the largest proportion of the variation despite the inherent imprecision of the COA values.

  18. Modelling Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 8th European Conference on Modelling Foundations and Applications, held in Kgs. Lyngby, Denmark, in July 2012. The 20 revised full foundations track papers and 10 revised full applications track papers presented were carefully reviewed and selected…

  19. One-way ANOVA based on interval information

    Science.gov (United States)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for when the null hypothesis of equal means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic with the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees of acceptance or rejection for the interval hypotheses. An applied example shows the performance of the method.

  20. Modelling Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 8th European Conference on Modelling Foundations and Applications, held in Kgs. Lyngby, Denmark, in July 2012. The 20 revised full foundations track papers and 10 revised full applications track papers presented were carefully reviewed and selected…, as well as the high quality of the results presented in these accepted papers, demonstrate the maturity and vibrancy of the field.

  1. An introduction to analysis of variance (ANOVA) with special reference to data from clinical experiments in optometry.

    Science.gov (United States)

    Armstrong, R A; Slade, S V; Eperjesi, F

    2000-05-01

    This article is aimed primarily at eye care practitioners who are undertaking advanced clinical research, and who wish to apply analysis of variance (ANOVA) to their data. ANOVA is a data analysis method of great utility and flexibility. This article describes why and how ANOVA was developed, the basic logic which underlies the method and the assumptions that the method makes for it to be validly applied to data from clinical experiments in optometry. The application of the method to the analysis of a simple data set is then described. In addition, the methods available for making planned comparisons between treatment means and for making post hoc tests are evaluated. The problem of determining the number of replicates or patients required in a given experimental situation is also discussed.

  2. A hybrid anchored-ANOVA - POD/Kriging method for uncertainty quantification in unsteady high-fidelity CFD simulations

    Science.gov (United States)

    Margheri, Luca; Sagaut, Pierre

    2016-11-01

    To significantly increase the contribution of numerical computational fluid dynamics (CFD) simulation to risk assessment and decision making, it is important to quantitatively measure the impact of uncertainties in order to assess the reliability and robustness of the results. As unsteady high-fidelity CFD simulations are becoming the standard for industrial applications, reducing the number of samples required to perform sensitivity analysis (SA) and uncertainty quantification (UQ) is a pressing engineering challenge. The novel approach presented in this paper is based on an efficient hybridization of the anchored-ANOVA and POD/Kriging methods, which have already been used in realistic CFD-UQ applications, and on the definition of best practices to achieve global accuracy. The anchored-ANOVA method is used to efficiently reduce the dimension of the UQ space, while POD/Kriging is used to smooth and interpolate each anchored-ANOVA term. The main advantages of the proposed method are illustrated through four applications of increasing complexity, most of them based on Large-Eddy Simulation as a high-fidelity CFD tool: a turbulent channel flow, the flow around an isolated bluff body, a pedestrian wind comfort study in a full-scale urban area, and toxic gas dispersion in a full-scale city area. The proposed c-APK method (anchored-ANOVA-POD/Kriging) inherits the advantages of each key element: interpolation through POD/Kriging precludes the use of quadrature schemes, therefore allowing a more flexible sampling strategy, while the ANOVA decomposition allows better domain exploration. A comparison of the three methods is given for each application. In addition, the importance of adding flexibility to the control parameters and the choice of the quantity of interest (QoI) are discussed. As a result, global accuracy can be achieved with a reasonable number of samples, allowing computationally expensive CFD-UQ analysis.
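
    The first-order anchored-ANOVA (cut-HDMR) decomposition at the heart of this dimension reduction is easy to state in code: each one-dimensional term is the model evaluated along one input with all others frozen at an anchor point. The toy function below stands in for an expensive CFD solver; everything here is illustrative, not the authors' implementation:

        import numpy as np

        def f(x):                          # cheap stand-in for a CFD code
            return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[2]

        anchor = np.zeros(3)
        f0 = f(anchor)

        def first_order_term(i, xi):
            # f_i(x_i) = f(anchor with component i set to x_i) - f(anchor)
            x = anchor.copy()
            x[i] = xi
            return f(x) - f0

        def anchored_anova_1(x):
            return f0 + sum(first_order_term(i, x[i]) for i in range(len(x)))

        x = np.array([0.3, -0.2, 0.5])
        print(f(x), anchored_anova_1(x))   # close when interactions are weak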

  3. Fatigue of NiTi SMA-pulley system using Taguchi and ANOVA

    Science.gov (United States)

    Mohd Jani, Jaronie; Leary, Martin; Subic, Aleksandar

    2016-05-01

    Shape memory alloy (SMA) actuators can be integrated with a pulley system to provide mechanical advantage and to reduce packaging space; however, there appears to be no formal investigation of the effect of a pulley system on SMA structural or functional fatigue. In this work, cyclic testing was conducted on nickel-titanium (NiTi) SMA actuators on a pulley system and in a control experiment (without pulley). Both structural and functional fatigue were monitored until fracture, or until a maximum of 1E5 cycles was reached, for each experimental condition. The Taguchi method and analysis of variance (ANOVA) were used to optimise the SMA-pulley system configurations. In general, one-way ANOVA at the 95% confidence level showed no significant difference between the structural or functional fatigue of SMA-pulley actuators and SMA actuators without a pulley. Within the sample of SMA-pulley actuators, the activation duration had the greatest significance for both structural and functional fatigue, and the pulley configuration (angle of wrap and sheave diameter) had greater statistical significance than load magnitude for functional fatigue. This work identified that the structural and functional fatigue performance of SMA-pulley systems is optimised by maximising sheave diameter and using an intermediate wrap angle, with minimal load and activation duration. However, these parameters may not be compatible with commercial imperatives. A test was completed for a commercially optimal SMA-pulley configuration. This novel observation will be applicable to many areas of SMA-pulley system application development.

  4. Constrained statistical inference: sample-size tables for ANOVA and regression.

    Science.gov (United States)

    Vanbrabant, Leonard; Van De Schoot, Rens; Rosseel, Yves

    2014-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3}, known as an (order-)constrained hypothesis. A major advantage of testing such a hypothesis is that power is gained and hence a smaller sample size is needed. This article discusses this gain in sample-size reduction as an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a pre-specified power (say, 0.80) for an increasing number of constraints. To obtain the sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30-50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., β1 > β2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., β1 > 0).
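
    A small Monte Carlo experiment conveys where the power gain comes from: a one-sided test of a contrast that respects the hypothesized ordering is compared with the omnibus F-test. All settings (means, group size, alpha) are illustrative choices, not those of the paper:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        means, n, alpha, reps = np.array([0.0, 0.3, 0.6]), 15, 0.05, 2000
        contrast = np.array([-1.0, 0.0, 1.0])        # respects the ordering

        hits_f = hits_c = 0
        for _ in range(reps):
            data = rng.normal(means[:, None], 1.0, size=(3, n))
            hits_f += stats.f_oneway(*data)[1] < alpha
            s2 = data.var(axis=1, ddof=1).mean()     # pooled variance
            L = contrast @ data.mean(axis=1)         # estimated contrast
            se = np.sqrt(s2 * (contrast ** 2).sum() / n)
            hits_c += stats.t.sf(L / se, df=3 * (n - 1)) < alpha
        print(hits_f / reps, hits_c / reps)          # contrast test wins here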

  5. The Bootstrap and Multiple Comparisons Procedures as Remedy on Doubts about Correctness of ANOVA Results

    Directory of Open Access Journals (Sweden)

    Izabela CHMIEL

    2012-03-01

    Full Text Available Aim: To determine and analyse an alternative methodology for the analysis of a set of Likert responses measured on a common attitudinal scale when the primary focus of interest is the relative importance of items in the set, with primary application to health-related quality of life (HRQOL) measures. HRQOL questionnaires usually generate data that manifest evident departures from the fundamental assumptions of the analysis of variance (ANOVA) approach, not only because of their discrete, bounded and skewed distributions, but also due to significant correlation between mean scores and their variances. Material and Methods: A questionnaire survey with the SF-36 was conducted among 142 convalescents after acute pancreatitis. The estimated HRQOL scores were compared using multiple comparisons procedures under a Bonferroni-like adjustment, and using bootstrap procedures. Results: In the data set studied, with the SF-36 outcome, the use of multiple comparisons and bootstrap procedures for analysing HRQOL data gives results quite similar to the conventional ANOVA and Rasch methods proposed within the frameworks of Classical Test Theory and Item Response Theory. Conclusions: These results suggest that multiple comparisons and the bootstrap are both valid methods for analysing HRQOL outcome data, particularly where there are doubts about the appropriateness of the standard methods. Moreover, from a practical point of view, the multiple comparisons and bootstrap procedures seem much easier for non-statisticians practising evidence-based health care to interpret.

  6. Prediction and Control of Cutting Tool Vibration in Cnc Lathe with Anova and Ann

    Directory of Open Access Journals (Sweden)

    S. S. Abuthakeer

    2011-06-01

    Full Text Available Machining is a complex process in which many variables can adversely affect the desired results. Among them, cutting tool vibration is the most critical phenomenon, influencing the dimensional precision of the machined components, the functional behavior of the machine tools and the life of the cutting tool. In a machining operation, the cutting tool vibrations are mainly influenced by cutting parameters like cutting speed, depth of cut and tool feed rate. In this work, the cutting tool vibrations are controlled using a damping pad made of Neoprene. Experiments were conducted in a CNC lathe where the tool holder is supported with and without the damping pad. The cutting tool vibration signals were collected through a data acquisition system supported by LabVIEW software. To increase the reliability of the experiments, a full factorial experimental design was used. The experimental data collected were tested with analysis of variance (ANOVA) to understand the influences of the cutting parameters, and empirical models were developed from the ANOVA results. Experimental studies and data analysis have been performed to validate the proposed damping system. A multilayer perceptron neural network model has been constructed with a feed-forward back-propagation algorithm using the acquired data. On completion of the experimental tests, the ANN is used to validate the results obtained and also to predict the behavior of the system under any cutting condition within the operating range. The on-site tests show that the proposed system reduces cutting tool vibration to a great extent.

  7. ANOVA-like differential expression (ALDEx) analysis for mixed population RNA-Seq.

    Science.gov (United States)

    Fernandes, Andrew D; Macklaim, Jean M; Linn, Thomas G; Reid, Gregor; Gloor, Gregory B

    2013-01-01

    Experimental variance is a major challenge when dealing with high-throughput sequencing data. This variance has several sources: sampling replication, technical replication, variability within biological conditions, and variability between biological conditions. The high per-sample cost of RNA-Seq often precludes the large number of experiments needed to partition observed variance into these categories as per standard ANOVA models. We show that the partitioning of within-condition to between-condition variation cannot reasonably be ignored, whether in single-organism RNA-Seq or in Meta-RNA-Seq experiments, and further find that commonly-used RNA-Seq analysis tools, as described in the literature, do not enforce the constraint that the sum of relative expression levels must be one, and thus report expression levels that are systematically distorted. These two factors lead to misleading inferences if not properly accommodated. As it is usually only the biological between-condition and within-condition differences that are of interest, we developed ALDEx, an ANOVA-like differential expression procedure, to identify genes with greater between- to within-condition differences. We show that the presence of differential expression and the magnitude of these comparative differences can be reasonably estimated with even very small sample sizes.
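
    A stripped-down sketch of the ALDEx idea follows: treat each sample's counts as draws from a Dirichlet posterior, transform the Monte Carlo instances with the centred log-ratio (CLR) so the unit-sum constraint is respected, then compare between-condition to within-condition differences per gene. Shapes, priors and the effect-size summary are simplified stand-ins for the published procedure:

        import numpy as np

        rng = np.random.default_rng(5)
        counts_a = rng.poisson(50, size=(4, 100))   # 4 samples x 100 genes
        counts_b = rng.poisson(50, size=(4, 100))
        counts_b[:, 0] *= 3                         # gene 0 is differential

        def clr_instances(counts, n_mc=128):
            # Dirichlet(counts + 0.5) Monte Carlo instances, CLR-transformed
            inst = np.stack([rng.dirichlet(row + 0.5, size=n_mc) for row in counts])
            logp = np.log(inst)
            return logp - logp.mean(axis=-1, keepdims=True)

        clr_a, clr_b = clr_instances(counts_a), clr_instances(counts_b)
        between = clr_b.mean(axis=(0, 1)) - clr_a.mean(axis=(0, 1))
        within = np.concatenate([clr_a, clr_b]).std(axis=(0, 1))
        print((between / within)[:3])               # effect-size-like summary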

  8. ANOVA parameters influence in LCF experimental data and simulation results

    OpenAIRE

    2010-01-01

    The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2] as, for example, shown in Figure 1. In this picture it is shown how a good cyclic plasticity constitutive model can simulate a cyclic load experiment…

  9. ANOVA parameters influence in LCF experimental data and simulation results

    Directory of Open Access Journals (Sweden)

    Vercelli A.

    2010-06-01

    Full Text Available The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2] as, for example, shown in Figure 1. This picture shows how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase and it needs complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper the main topic of the research activity is to investigate whether the parameters which prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in taking into account all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be easy and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run with a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests. The stress, strain and thermal results from the thermo…

  10. Mathematical modeling with multidisciplinary applications

    CERN Document Server

    Yang, Xin-She

    2013-01-01

    Features mathematical modeling techniques and real-world processes with applications in diverse fields. Mathematical Modeling with Multidisciplinary Applications details the interdisciplinary nature of mathematical modeling and numerical algorithms. The book combines a variety of applications from diverse fields to illustrate how the methods can be used to model physical processes, design new products, find solutions to challenging problems, and increase competitiveness in international markets. Written by leading scholars and international experts in the field…

  11. A Robust Design Applicability Model

    DEFF Research Database (Denmark)

    Ebro, Martin; Lars, Krogstie; Howard, Thomas J.

    2015-01-01

    This paper introduces a model for assessing the applicability of Robust Design (RD) in a project or organisation. The intention of the Robust Design Applicability Model (RDAM) is to provide support for decisions by engineering management considering the relevant level of RD activities. The applic...

  12. Finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    Features step-by-step examples based on actual data and connects fundamental mathematical modeling skills and decision making concepts to everyday applicability Featuring key linear programming, matrix, and probability concepts, Finite Mathematics: Models and Applications emphasizes cross-disciplinary applications that relate mathematics to everyday life. The book provides a unique combination of practical mathematical applications to illustrate the wide use of mathematics in fields ranging from business, economics, finance, management, operations research, and the life and social sciences.

  13. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  14. Engine Modelling for Control Applications

    DEFF Research Database (Denmark)

    Hendricks, Elbert

    1997-01-01

    In earlier work published by the author and co-authors, a dynamic engine model called a Mean Value Engine Model (MVEM) was developed. This model is physically based and is intended mainly for control applications. In its newer form, it is easy to fit to many different engines and requires little engine data for this purpose. It is especially well suited to embedded model applications in engine controllers, such as nonlinear observer based air/fuel ratio and advanced idle speed control. After a brief review of this model, it will be compared with other similar models which can be found…

  15. Multilevel Models Applications Using SAS

    CERN Document Server

    Wang, Jichuan; Fisher, James

    2011-01-01

    This book covers a broad range of topics about multilevel modeling. The goal is to help readers to understand the basic concepts, theoretical frameworks, and application methods of multilevel modeling. It is at a level also accessible to non-mathematicians, focusing on the methods and applications of various multilevel models and using the widely used statistical software SAS®. Examples are drawn from analysis of real-world research data.

  16. MorePower 6.0 for ANOVA with relational confidence intervals and Bayesian analysis.

    Science.gov (United States)

    Campbell, Jamie I D; Thompson, Valerie A

    2012-12-01

    MorePower 6.0 is a flexible freeware statistical calculator that computes sample size, effect size, and power statistics for factorial ANOVA designs. It also calculates relational confidence intervals for ANOVA effects based on formulas from Jarmasz and Hollands (Canadian Journal of Experimental Psychology 63:124-138, 2009), as well as Bayesian posterior probabilities for the null and alternative hypotheses based on formulas in Masson (Behavior Research Methods 43:679-690, 2011). The program is unique in affording direct comparison of these three approaches to the interpretation of ANOVA tests. Its high numerical precision and ability to work with complex ANOVA designs could facilitate researchers' attention to issues of statistical power, Bayesian analysis, and the use of confidence intervals for data interpretation. MorePower 6.0 is available at https://wiki.usask.ca/pages/viewpageattachments.action?pageId=420413544.
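
    The core power computation behind such a calculator can be sketched with the noncentral F distribution; the mapping from partial eta-squared to the noncentrality parameter below follows one common convention (lambda = f^2 x N) and should not be read as MorePower's own API:

        from scipy.stats import f, ncf

        def anova_power(eta_p2, df_effect, df_error, n_total, alpha=0.05):
            f2 = eta_p2 / (1 - eta_p2)        # Cohen's f^2 from partial eta^2
            nc = f2 * n_total                 # noncentrality (one convention)
            f_crit = f.ppf(1 - alpha, df_effect, df_error)
            return 1 - ncf.cdf(f_crit, df_effect, df_error, nc)

        print(anova_power(eta_p2=0.06, df_effect=1, df_error=36, n_total=40))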

  17. Models for Dynamic Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Morales Rodriguez, Ricardo; Heitzig, Martina;

    2011-01-01

    This chapter covers aspects of the dynamic modelling and simulation of several complex operations that include a controlled blending tank, a direct methanol fuel cell that incorporates a multiscale model, a fluidised bed reactor, a standard chemical reactor and finally a polymerisation reactor...

  18. Global Testing under Sparse Alternatives: ANOVA, Multiple Comparisons and the Higher Criticism

    CERN Document Server

    Arias-Castro, Ery; Plan, Yaniv

    2010-01-01

    Testing for the significance of a subset of regression coefficients in a linear model, a staple of statistical analysis, goes back at least to the work of Fisher who introduced the analysis of variance (ANOVA). We study this problem under the assumption that the coefficient vector is sparse, a common situation in modern high-dimensional settings. Suppose the regression vector is of dimension p with S non-zero coefficients, where S = p^{1-alpha}. Under moderate sparsity levels, i.e. alpha <= 1/2, ANOVA is essentially optimal; this is no longer the case under strong sparsity, i.e. alpha > 1/2. In such settings, a multiple comparison procedure is often preferred and we establish its optimality when alpha >= 3/4. However, these two very popular methods are suboptimal, and sometimes powerless, under moderately strong sparsity where 1/2 < alpha < 3/4; a method based on the Higher Criticism is powerful over the whole range alpha > 1/2. This optimality property is true for a variety of designs, including the classical (balanced) multi-way designs and more modern `p > n' designs arising in genetics and signal processing. In addition to the standard fixed effects model, we establish similar results for a random effects model where the non-zero coefficients are normally distributed.
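
    The Higher Criticism statistic referred to above is short to compute from a vector of p-values. The sketch below simulates a sparse signal and scans the sorted p-values for the largest standardized exceedance over the uniform null (restricting, as is usual, to the smaller half of the p-values):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        z = rng.normal(size=1000)
        z[:20] += 3.0                                 # sparse signal
        p = 2 * stats.norm.sf(np.abs(z))              # two-sided p-values

        n = len(p)
        p_sorted = np.sort(p)
        i = np.arange(1, n + 1)
        hc = np.sqrt(n) * (i / n - p_sorted) / np.sqrt(p_sorted * (1 - p_sorted))
        HC = hc[: n // 2].max()                       # scan the smaller half
        print(HC)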

  19. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    2011-01-01

    …sands processing. The fertilizer granulation model considers the dynamics of MAP-DAP (mono- and diammonium phosphates) production within an industrial granulator, which involves complex crystallisation, chemical reaction and particle growth, captured through population balances. A final example considers…

  20. Modelling Gesture Based Ubiquitous Applications

    CERN Document Server

    Zacharia, Kurien; Varghese, Surekha Mariam

    2011-01-01

    A cost-effective, gesture-based modelling technique called Virtual Interactive Prototyping (VIP) is described in this paper. Prototyping is implemented by projecting a virtual model of the equipment to be prototyped. Users can interact with the virtual model like the original working equipment. For capturing and tracking the user interactions with the model, image and sound processing techniques are used. VIP is a flexible and interactive prototyping method that has many applications in ubiquitous computing environments. Various commercial and socio-economic applications of VIP, as well as its extension to interactive advertising, are also discussed.

  1. Evaluation Theory, Models, and Applications

    Science.gov (United States)

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing…

  2. Survival analysis models and applications

    CERN Document Server

    Liu, Xian

    2012-01-01

    Survival analysis concerns sequential occurrences of events governed by probabilistic laws. Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading onto some of the most advanced topics in survival analysis. Assumes only a minimal knowledge of SAS whilst enabling…

  3. Application of the statistical data analysis software SPSS (Part 4): General Factorial ANOVA (GLM)

    Institute of Scientific and Technical Information of China (English)

    张苏江; 陈庆波

    2003-01-01

    The general factorial ANOVA procedure is a submodule of the General Linear Models (GLM) module. It is used to analyze the influence of several factors (variables) on one response variable, and covers the usual analysis-of-variance designs, such as ANOVA for completely randomized designs (one-way ANOVA), randomized block designs (two-way ANOVA), Latin square designs (three-way ANOVA), factorial analysis, cross-over designs, orthogonal designs and split-plot designs, as well as analysis of covariance…

  4. Behavior Modeling -- Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes revised selected papers from the six International Workshops on Behavior Modelling - Foundations and Applications, BM-FA, which took place annually between 2009 and 2014. The 9 papers presented in this volume were carefully reviewed and selected from a total of 58 papers...

  5. Parametric study of the biopotential equation for breast tumour identification using ANOVA and Taguchi method.

    Science.gov (United States)

    Ng, Eddie Y K; Ng, W Kee

    2006-03-01

    An extensive literature has shown a significant trend of progressive electrical changes according to the proliferative characteristics of breast epithelial cells. Physiologists have further postulated that malignant transformation results from sustained depolarization and a failure of the cell to repolarize after cell division, making the area where cancer develops relatively depolarized compared with its non-dividing or resting counterparts. In this paper, we present a new approach, the Biofield Diagnostic System (BDS), which might have the potential to augment the process of diagnosing breast cancer. This technique is based on the efficacy of analysing skin surface electrical potentials for the differential diagnosis of breast abnormalities. We developed a female breast model, close to the actual anatomy, by considering the breast as a hemisphere in the supine position with various layers of unequal thickness. Isotropic homogeneous conductivity was assigned to each of these compartments and the volume conductor problem was solved using the finite element method to determine the potential distribution developed due to a dipole source. Furthermore, four important parameters were identified and analysis of variance (ANOVA, Yates' method) was performed using a 2^n design (n = number of parameters, 4). The effect and importance of these parameters were analysed. The Taguchi method was further used to optimise the parameters in order to ensure that the signal from the tumour is maximal compared with the noise from other factors. The Taguchi analysis showed that the probes' source strength, tumour size and tumour location have a great effect on the surface potential field. For best results on the breast surface, while having the biggest possible tumour size, low amplitudes of current should be applied nearest to the breast surface.
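
    Yates' method, mentioned in the record, is a classic pencil-and-paper algorithm for extracting all effects of a 2^n factorial. A self-contained sketch follows (made-up response values, standard Yates order assumed):

        def yates(y):
            """Mean and factorial effects of a full 2^n design from responses y
            given in standard order: (1), a, b, ab, c, ac, bc, abc, ..."""
            y = list(map(float, y))
            n = len(y).bit_length() - 1
            assert len(y) == 2 ** n, "length must be a power of two"
            for _ in range(n):
                pairs = list(zip(y[0::2], y[1::2]))
                y = [a + b for a, b in pairs] + [b - a for a, b in pairs]
            mean = y[0] / 2 ** n
            effects = [c / 2 ** (n - 1) for c in y[1:]]   # standard order
            return mean, effects

        print(yates([28, 36, 18, 31, 25, 32, 19, 30]))    # a 2^3 example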

  6. Cautionary Note on Reporting Eta-Squared Values from Multifactor ANOVA Designs

    Science.gov (United States)

    Pierce, Charles A.; Block, Richard A.; Aguinis, Herman

    2004-01-01

    The authors provide a cautionary note on reporting accurate eta-squared values from multifactor analysis of variance (ANOVA) designs. They reinforce the distinction between classical and partial eta-squared as measures of strength of association. They provide examples from articles published in premier psychology journals in which the authors…

  7. Quantitative Comparison of Three Standardization Methods Using a One-Way ANOVA for Multiple Mean Comparisons

    Science.gov (United States)

    Barrows, Russell D.

    2007-01-01

    A one-way ANOVA experiment is performed to determine whether or not the three standardization methods are statistically different in determining the concentration of the three paraffin analytes. The laboratory exercise asks students to combine the three methods in a single analytical procedure of their own design to determine the concentration of…

  8. Use of "t"-Test and ANOVA in Career-Technical Education Research

    Science.gov (United States)

    Rojewski, Jay W.; Lee, In Heok; Gemici, Sinan

    2012-01-01

    Use of t-tests and analysis of variance (ANOVA) procedures in published research from three scholarly journals in career and technical education (CTE) during a recent 5-year period was examined. Information on post hoc analyses, reporting of effect size, alpha adjustments to account for multiple tests, power, and examination of assumptions…

  9. Multi-level Clustering of Wear Particles Based on ANOVA-KW Test

    Institute of Scientific and Technical Information of China (English)

    黄成; 王仲君; 潘岚; 吕植勇

    2010-01-01

    Morphological parameter data from 46 abnormal wear particle samples were analyzed. A multi-level clustering approach is proposed and its implementation steps are specified. An ANOVA-KW test on the wear particle morphological parameters is then used to identify the parameter variables that influence particle classification. Multi-level cluster analysis on the selected variables achieves a recognition rate of 95.6% for spherical and cutting particles, and of 82.6% for fatigue, laminar and flake particles.

  10. Characterization of Near-Infrared Spectral Variance in the Authentication of Skim and Nonfat Dry Milk Powder Collection Using ANOVA-PCA, Pooled-ANOVA, and Partial Least-Squares Regression

    OpenAIRE

    Harnly, James M.; Peter de B. Harrington; Botros, Lucy L.; Jablonski, Joseph; Chang, Claire; Bergana, Marti Mamula; Wehling, Paul; Downey, Gerard; Potts, Alan R.; Moore, Jeffrey C.

    2014-01-01

    Forty-one samples of skim milk powder (SMP) and nonfat dry milk (NFDM) from 8 suppliers, 13 production sites, and 3 processing temperatures were analyzed by NIR diffuse reflectance spectrometry over a period of 3 days. NIR reflectance spectra (1700–2500 nm) were converted to pseudoabsorbance and examined using (a) analysis of variance-principal component analysis (ANOVA-PCA), (b) pooled-ANOVA based on data submatrices, and (c) partial least-squares regression (PLSR) coupled with pooled-ANOVA....

  11. APPLICATION OF TAGUCHI AND ANOVA IN OPTIMIZATION OF PROCESS PARAMETERS OF LAPPING OPERATION FOR CAST IRON

    Directory of Open Access Journals (Sweden)

    P.R. Parate

    2013-06-01

    Full Text Available Lapping appears to be a miraculous process, because it can produce surfaces that are perfectly flat, perfectly round, perfectly smooth, perfectly sharp, or perfectly accurate. Under the correct circumstances, it can impart or improve precise geometry (flatness, roundness, etc.), improve surface finish, improve surface quality, achieve high dimensional accuracy (length, diameter, etc.), improve angular accuracy (worm gears, couplings, etc.), improve fit, and above all, sharpen tools. This paper presents research on the material removal rate of a machined component in the lapping process. A cast iron sample with an outer diameter of 50 mm and an inner diameter of 45 mm was tested on a single-plate tabletop lapping machine. Experiments based on design of experiments were conducted by varying lapping load, lapping time, paste concentration and lapping fluid, and by using different types of abrasives. The Taguchi statistical method has been used in this work. Optimum machining parameters for the material removal rate are estimated and verified with experimental results and are found to be in good agreement. The confirmation test exhibits a high material removal rate when using Al2O3 abrasive particles together with oil as a carrier fluid under high load. Furthermore, the material removal rate increases with an increase in lapping load and time.

  12. Control performance assessment based on ANOVA-like variance decomposition for nonlinear systems

    Institute of Scientific and Technical Information of China (English)

    王志国; 刘飞

    2014-01-01

    In practice, many industrial control loops inevitably include nonlinearities, so estimates of the control performance may not be correct. Firstly, the existence of the minimum variance performance lower bound (MVPLB) for a class of nonlinear systems is analyzed and the relation between the MVPLB and the disturbance terms is determined. Then, the closed-loop system model is identified using an orthogonal least squares algorithm. Based on the identified model, the contribution of the uncertainties in the most recent disturbance terms to the output variance is calculated according to an ANOVA-like decomposition formula, so that the control performance of the nonlinear system is obtained. Finally, simulation results show the effectiveness and feasibility of the proposed algorithm.
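
    A first-order, ANOVA-like variance contribution of a single input can be estimated with the pick-freeze device: evaluate the model on one sample, then again with the input of interest kept and all other inputs redrawn, and take the covariance of the two outputs. The test function below is only a stand-in for the closed-loop output map:

        import numpy as np

        rng = np.random.default_rng(7)

        def model(X):                       # stand-in for the output map
            return X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.2 * X[:, 0] * X[:, 2]

        N, d, i = 100_000, 3, 0
        A = rng.normal(size=(N, d))
        B = rng.normal(size=(N, d))
        B[:, i] = A[:, i]                   # freeze input i, redraw the rest

        yA, yB = model(A), model(B)
        first_order = (np.mean(yA * yB) - yA.mean() * yB.mean()) / yA.var()
        print(first_order)                  # share of output variance from X_i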

  13. Non-parametric three-way mixed ANOVA with aligned rank tests.

    Science.gov (United States)

    Oliver-Rodríguez, Juan C; Wang, X T

    2015-02-01

    Research problems that require a non-parametric analysis of multifactor designs with repeated measures arise in the behavioural sciences. There is, however, a lack of available procedures in commonly used statistical packages. In the present study, a generalization of the aligned rank test for the two-way interaction is proposed for the analysis of the typical sources of variation in a three-way analysis of variance (ANOVA) with repeated measures. It can be implemented in the usual statistical packages. Its statistical properties are tested by using simulation methods with two sample sizes (n = 30 and n = 10) and three distributions (normal, exponential and double exponential). Results indicate substantial increases in power for non-normal distributions in comparison with the usual parametric tests. Similar levels of Type I error for both parametric and aligned rank ANOVA were obtained with non-normal distributions and large sample sizes. Degrees-of-freedom adjustments for Type I error control in small samples are proposed. The procedure is applied to a case study with 30 participants per group where it detects gender differences in linguistic abilities in blind children not shown previously by other methods.
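
    The aligned-rank idea for a two-way interaction, which the paper generalizes to three-way repeated-measures designs, can be sketched as follows: subtract the estimated main effects, rank the aligned responses, and run the usual ANOVA on the ranks. statsmodels' formula API is assumed; data and effect sizes are illustrative:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf
        from scipy.stats import rankdata

        rng = np.random.default_rng(8)
        df = pd.DataFrame({
            "A": np.repeat(["a1", "a2"], 20),
            "B": np.tile(np.repeat(["b1", "b2"], 10), 2),
        })
        df["y"] = rng.normal(size=40) + (df.A.eq("a2") & df.B.eq("b2")) * 1.0

        grand = df.y.mean()
        a_eff = df.groupby("A").y.transform("mean") - grand
        b_eff = df.groupby("B").y.transform("mean") - grand
        df["y_rank"] = rankdata(df.y - a_eff - b_eff)   # align, then rank

        model = smf.ols("y_rank ~ C(A) * C(B)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2).loc["C(A):C(B)"])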

  14. System identification application using Hammerstein model

    Indian Academy of Sciences (India)

    SABAN OZER; HASAN ZORLU; SELCUK METE

    2016-06-01

    In the literature, a memoryless polynomial model is generally preferred for the nonlinear part of a Hammerstein model, with a finite impulse response (FIR) or infinite impulse response model for the linear part. In this paper, system identification applications of a Hammerstein model that is a cascade of a nonlinear second-order Volterra model and a linear FIR model are studied. A recursive least squares algorithm is used to identify the proposed Hammerstein model parameters. Furthermore, the results are compared to assess the success of the proposed Hammerstein model relative to different types of models…
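
    Because the Hammerstein output is linear in the products of the nonlinear-block and linear-block coefficients, identification can be sketched with an over-parameterized least-squares fit (the paper uses recursive least squares; batch least squares keeps the sketch short, and all system coefficients below are invented):

        import numpy as np

        rng = np.random.default_rng(9)
        N, m, deg = 2000, 4, 3                      # samples, FIR taps, degree
        u = rng.uniform(-1, 1, N)
        g = lambda v: v + 0.5 * v ** 2 - 0.3 * v ** 3     # true nonlinearity
        b = np.array([1.0, 0.6, 0.3, 0.1])                # true FIR taps

        x = g(u)
        y = np.convolve(x, b)[:N] + 0.01 * rng.normal(size=N)

        # regressors: u(t-j)^k for j = 0..m-1, k = 1..deg
        Phi = np.array([[u[t - j] ** k
                         for j in range(m) for k in range(1, deg + 1)]
                        for t in range(m, N)])
        theta, *_ = np.linalg.lstsq(Phi, y[m:], rcond=None)
        print(theta.reshape(m, deg))        # approx outer(b, [1, 0.5, -0.3])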

  15. Chemistry Teachers' Knowledge and Application of Models

    Science.gov (United States)

    Wang, Zuhao; Chi, Shaohui; Hu, Kaiyan; Chen, Wenting

    2014-01-01

    Teachers' knowledge and application of models play an important role in students' development of modeling ability and scientific literacy. In this study, we investigated Chinese chemistry teachers' knowledge and application of models. Data were collected through a test questionnaire and analyzed quantitatively and qualitatively. The result indicated…

  16. Analysis of Aluminium Nano Composites using Anova in CNC Machining Process

    Directory of Open Access Journals (Sweden)

    Maria Joe Christopher Poonthota Irudaya Raj

    2013-08-01

    Full Text Available The objective of this work is to reinforce aluminium alloy with CNT by the stir casting method. Different weight percentages of CNT were added to aluminium to make composites, and their physical and thermal properties were investigated using tensile, hardness, microstructure and XRD tests. The improvement of the mechanical, physical and thermal properties in each case has been compared with pure aluminium. The Taguchi orthogonal array experimental technique is used to optimize the machining parameters. The predicted surface roughness was estimated using the S/N ratio and compared with actual values. ANOVA analysis is used to find the significant factors affecting the machining process, in order to improve the surface characteristics of the Al material.
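
    The Taguchi S/N ratios used in analyses like this one are one-liners; the three standard criteria are sketched below with illustrative values:

        import numpy as np

        def sn_smaller_is_better(y):          # e.g. surface roughness
            return -10 * np.log10(np.mean(np.square(y)))

        def sn_larger_is_better(y):           # e.g. material removal rate
            return -10 * np.log10(np.mean(1.0 / np.square(np.asarray(y, float))))

        def sn_nominal_is_best(y):
            y = np.asarray(y, float)
            return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

        print(sn_smaller_is_better([1.2, 1.3, 1.1]))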

  17. A Unified Architecture Model of Web Applications

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    With the increasing popularity, scale and complexity of web applications, the design and development of web applications are becoming more and more difficult. However, the current state of their design and development is characterized by anarchy and ad hoc methodologies. One of the causes of this chaotic situation is that different researchers and designers have different understandings of web applications. In this paper, based on an explicit understanding of web applications, we present a unified architecture model of web applications, the four-view model, which addresses the analysis and design issues of web applications from four perspectives, namely, logical view, data view, navigation view and presentation view, each addressing a specific set of concerns of web applications. The purpose of the model is to provide a clear picture of web applications to alleviate the chaotic situation and facilitate their analysis, design and implementation.

  18. Geophysical data integration, stochastic simulation and significance analysis of groundwater responses using ANOVA in the Chicot Aquifer system, Louisiana, USA

    Science.gov (United States)

    Rahman, A.; Tsai, F.T.-C.; White, C.D.; Carlson, D.A.; Willson, C.S.

    2008-01-01

    Data integration is challenging where there are different levels of support between primary and secondary data that need to be correlated in various ways. A geostatistical method is described, which integrates the hydraulic conductivity (K) measurements and electrical resistivity data to better estimate the K distribution in the Upper Chicot Aquifer of southwestern Louisiana, USA. The K measurements were obtained from pumping tests and represent the primary (hard) data. Borehole electrical resistivity data from electrical logs were regarded as the secondary (soft) data, and were used to infer K values through Archie's law and the Kozeny-Carman equation. A pseudo cross-semivariogram was developed to cope with the resistivity data non-collocation. Uncertainties in the auto-semivariograms and pseudo cross-semivariogram were quantified. The groundwater flow model responses by the regionalized and coregionalized models of K were compared using analysis of variance (ANOVA). The results indicate that non-collocated secondary data may improve estimates of K and affect groundwater flow responses of practical interest, including specific capacity and drawdown. © Springer-Verlag 2007.

  19. Model validation, science and application

    NARCIS (Netherlands)

    Builtjes, P.J.H.; Flossmann, A.

    1998-01-01

    Over recent years there has been growing interest in establishing a proper validation of atmospheric chemistry-transport (ATC) models. Model validation deals with the comparison of model results with experimental data, and in this way addresses both model uncertainty and uncertainty in, and adequacy…

  20. Functional Analysis: Evaluation of Response Intensities - Tailoring ANOVA for Lists of Expression Subsets

    Directory of Open Access Journals (Sweden)

    De Hertogh Benoît

    2010-10-01

    Full Text Available Background: Microarray data are frequently used to characterize the expression profile of a whole genome and to compare the characteristics of that genome under several conditions. Geneset analysis methods have been described previously to analyze the expression values of several genes related by known biological criteria (metabolic pathway, pathology signature, co-regulation by a common factor, etc.) at the same time, and the cost of these methods allows for the use of more values to help discover the underlying biological mechanisms. Results: As several methods assume different null hypotheses, we propose to reformulate the main question that biologists seek to answer. To determine which genesets are associated with expression values that differ between two experiments, we focused on three ad hoc criteria: expression levels, the direction of individual gene expression changes (up- or down-regulation), and correlations between genes. We introduce the FAERI methodology, tailored from a two-way ANOVA to examine these criteria. The significance of the results was evaluated according to the self-contained null hypothesis, using label sampling or by inferring the null distribution from normally distributed random data. Evaluations performed on simulated data revealed that FAERI outperforms currently available methods for each type of set tested. We then applied the FAERI method to analyze three real-world datasets on hypoxia response. FAERI was able to detect more genesets than other methodologies, and the genesets selected were coherent with current knowledge of cellular response to hypoxia. Moreover, the genesets selected by FAERI were confirmed when the analysis was repeated on two additional related datasets. Conclusions: The expression values of genesets are associated with several biological effects. The underlying mathematical structure of the genesets allows for analysis of data from several genes at the same time. Focusing on expression…

  1. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimation for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a…

  2. Applications and extensions of degradation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, F.; Subudhi, M.; Samanta, P.K. (Brookhaven National Lab., Upton, NY (United States)); Vesely, W.E. (Science Applications International Corp., Columbus, OH (United States))

    1991-01-01

    Component degradation modeling, being developed to understand the aging process, can have many applications with potential advantages. Previous work has focused on developing the basic concepts and mathematical development of a simple degradation model. Using this simple model, times of degradation and failure occurrences were analyzed for standby components to detect indications of aging and to infer the effectiveness of maintenance in preventing age-related degradations from transforming into failures. Degradation modeling approaches can have broader applications in aging studies, and in this paper we discuss some of the extensions and applications of degradation modeling. The application and extension of degradation modeling approaches presented in this paper cover two aspects: (1) application to a continuously operating component, and (2) extension of the approach to analyze the degradation-failure rate relationship. The application of the modeling approach to a continuously operating component (namely, air compressors) shows the usefulness of this approach in studying aging effects and the role of maintenance in this type of component. In this case, aging effects in air compressors are demonstrated by the increase in both the degradation and failure rates, and the faster increase in the failure rate compared to the degradation rate shows the ineffectiveness of the existing maintenance practices. The degradation-failure rate relationship was analyzed using data from residual heat removal system pumps. A simple linear model with a time-lag between these two parameters was studied. The application in this case showed a time-lag of 2 years for degradations to affect failure occurrences. 2 refs.

  4. PEM Fuel Cells - Fundamentals, Modeling and Applications

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Full Text Available Part I: Fundamentals Chapter 1: Introduction. Chapter 2: PEM fuel cell thermodynamics, electrochemistry, and performance. Chapter 3: PEM fuel cell components. Chapter 4: PEM fuel cell failure modes. Part II: Modeling and Simulation Chapter 5: PEM fuel cell models based on semi-empirical simulation. Chapter 6: PEM fuel cell models based on computational fluid dynamics. Part III: Applications Chapter 7: PEM fuel cell system design and applications.

  5. Geometric Modeling Application Interface Program

    Science.gov (United States)

    1990-11-01

    Manual IDEF-Extended (IDEFIX) Integrated Information Support System (IISS), ICAM Project 6201, Contract F33615-80-C-5155, December 1985. Interim... Differential Geometry of Curves and Surfaces, M. P. do Carmo, Prentice-Hall, Inc., 1976. IDEFIX Readers Reference, D. Appleton Company, December 1985... Modeling. IDEF1 -- IDEF Information Modeling. IDEFIX -- IDEF Extended Information Modeling. IDEF2 -- IDEF Dynamics Modeling. IDSS -- Integrated Decision

  6. Value added analysis and its distribution: a study on BOVESPA-listed banks using ANOVA

    Directory of Open Access Journals (Sweden)

    Leonardo José Seixas Pinto

    2013-05-01

    Full Text Available The value added generated by the financial institutions listed on BOVESPA and its distribution between 2007 and 2011 are the subject of this research, which shows how banks divided their wealth among employees, government, third parties and shareholders. Using ANOVA tests of means for the companies that took part in this research, it was concluded that: (a) the average value added of foreign banks differs from that of national banks; (b) the equity remuneration policy of foreign banks differs from that of national banks; (c) the policy of distributing value added to employees of the foreign banks Santander and HSBC differs from that of the other banks; (d) taxes paid to the government have equal means, with the exception of Santander; (e) curiously, Banco Itau and Banco do Brasil are equal in all analyses of the distribution of value added, even though one is private and the other public. This points to an unequal wealth-distribution policy of foreign banks compared with the national public and private banks.

  7. Identification of bacteriophage virion proteins by the ANOVA feature selection and analysis.

    Science.gov (United States)

    Ding, Hui; Feng, Peng-Mian; Chen, Wei; Lin, Hao

    2014-08-01

    The bacteriophage virion proteins play extremely important roles in the fate of host bacterial cells. Accurate identification of bacteriophage virion proteins is very important for understanding their functions and clarifying the lysis mechanism of bacterial cells. In this study, a new sequence-based method was developed to identify phage virion proteins. In the new method, the protein sequences were initially formulated by the g-gap dipeptide compositions. Subsequently, the analysis of variance (ANOVA) with incremental feature selection (IFS) was used to search for the optimal feature set. It was observed that, in jackknife cross-validation, the optimal feature set including 160 optimized features can produce the maximum accuracy of 85.02%. By performing feature analysis, we found that the correlation between two amino acids with one gap was more important than other correlations for phage virion protein prediction and that some of the 1-gap dipeptides were important and mainly contributed to the virion protein prediction. This analysis will provide novel insights into the function of phage virion proteins. On the basis of the proposed method, an online web-server, PVPred, was established and can be freely accessed from the website (http://lin.uestc.edu.cn/server/PVPred). We believe that the PVPred will become a powerful tool to study phage virion proteins and to guide the related experimental validations.
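    As a rough illustration of the pipeline just described, the sketch below builds g-gap dipeptide compositions and ranks features by a per-feature ANOVA F score. It is not the PVPred implementation: the toy random sequences, g = 1, and the ranking-only step (IFS would then add features in this order) are all assumptions.

```python
# Toy g-gap dipeptide composition + ANOVA F-score ranking (not PVPred).
import itertools
import numpy as np
from scipy import stats

AA = "ACDEFGHIKLMNPQRSTVWY"
PAIRS = ["".join(p) for p in itertools.product(AA, repeat=2)]  # 400 features

def g_gap_composition(seq, g=1):
    """Frequencies of residue pairs separated by g intervening positions."""
    counts = dict.fromkeys(PAIRS, 0)
    for i in range(len(seq) - g - 1):
        counts[seq[i] + seq[i + g + 1]] += 1
    total = max(len(seq) - g - 1, 1)
    return np.array([counts[p] / total for p in PAIRS])

rng = np.random.default_rng(0)
make = lambda: "".join(rng.choice(list(AA), 80))   # stand-in sequences
X_pos = np.array([g_gap_composition(make()) for _ in range(15)])
X_neg = np.array([g_gap_composition(make()) for _ in range(15)])

F, _ = stats.f_oneway(X_pos, X_neg)   # one F score per dipeptide feature
F = np.nan_to_num(F)                  # constant (all-zero) features -> 0
ranking = np.argsort(F)[::-1]         # IFS would add features in this order
print("top 5 1-gap dipeptides:", [PAIRS[i] for i in ranking[:5]])
```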

  8. Monte Carlo evaluation of the ANOVA's F and Kruskal-Wallis tests under binomial distribution

    Directory of Open Access Journals (Sweden)

    Eric Batista Ferreira

    2012-12-01

    Full Text Available To verify the equality of more than two levels of a factor of interest in experiments conducted under a completely randomized design (CRD), it is common to use the ANOVA F test, which is considered the most powerful test for this purpose. However, the reliability of its results depends on the following assumptions: additivity of effects, independence, homoscedasticity and normality of the errors. The nonparametric Kruskal-Wallis test requires more moderate assumptions and therefore is an alternative when the assumptions required by the F test are not met. However, the stronger the assumptions of a test, the better its performance. When the fundamental assumptions are met, the F test is the best option. In this work, the normality of the errors is violated. Binomial response variables are simulated in order to compare the performances of the F and Kruskal-Wallis tests when one of the analysis of variance assumptions is not satisfied. Through Monte Carlo simulation, 3,150,000 experiments were simulated to evaluate the type I error rates and power of the tests. In most situations, the power of the F test was superior to that of the Kruskal-Wallis test, and the F test also controlled the type I error rates.
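    A scaled-down version of the simulation described above (assumed design: k = 3 treatments, 10 replicates, binomial(20, 0.3) responses, and far fewer runs than the paper's 3,150,000):

```python
# Empirical type I error of ANOVA F vs Kruskal-Wallis under binomial
# responses in a CRD, at the nominal 5% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, r, n_trials, p_success = 3, 10, 20, 0.3
reject_F = reject_KW = 0
n_sim = 2000

for _ in range(n_sim):
    groups = [rng.binomial(n_trials, p_success, size=r) for _ in range(k)]
    if stats.f_oneway(*groups).pvalue < 0.05:
        reject_F += 1
    if stats.kruskal(*groups).pvalue < 0.05:
        reject_KW += 1

print(f"type I error: F = {reject_F / n_sim:.3f}, KW = {reject_KW / n_sim:.3f}")
```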

  9. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  10. Photocell modelling for thermophotovoltaic applications

    Energy Technology Data Exchange (ETDEWEB)

    Mayor, J.-C.; Durisch, W.; Grob, B.; Panitz, J.-C. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    Goal of the modelling described here is the extrapolation of the performance characteristics of solar photocells to TPV working conditions. The model accounts for higher flux of radiation and for the higher temperatures reached in TPV converters. (author) 4 figs., 1 tab., 2 refs.

  11. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  12. Registry of EPA Applications, Models, and Databases

    Data.gov (United States)

    U.S. Environmental Protection Agency — READ is EPA's authoritative source for information about Agency information resources, including applications/systems, datasets and models. READ is one component of...

  13. Dynamic programming models and applications

    CERN Document Server

    Denardo, Eric V

    2003-01-01

    Introduction to sequential decision processes covers use of dynamic programming in studying models of resource allocation, methods for approximating solutions of control problems in continuous time, production control, more. 1982 edition.

  14. Contact modeling for robotics applications

    Energy Technology Data Exchange (ETDEWEB)

    Lafarge, R.A.; Lewis, C.

    1998-08-01

    At Sandia National Laboratories (SNL), the authors are developing the ability to accurately predict motions for arbitrary numbers of bodies of arbitrary shapes experiencing multiple applied forces and intermittent contacts. In particular, the authors are concerned with the simulation of systems such as part feeders or mobile robots operating in realistic environments. Preliminary investigation of commercial dynamics software packages led them to the conclusion that they could use commercial software to provide everything they needed except for the contact model. They found that ADAMS best fit their needs for a simulation package. To simulate intermittent contacts, they need collision detection software that can efficiently compute the distances between non-convex objects and return the associated witness features. They also require a computationally efficient contact model for rapid simulation of impact, sustained contact under load, and transition to and from contact conditions. This paper provides a technical review of a custom hierarchical distance computation engine developed at Sandia, called the C-Space Toolkit (CSTk). In addition, they describe an efficient contact model using a non-linear damping term developed by SNL and Ohio State. Both the CSTk and the non-linear damper have been incorporated in a simplified two-body testbed code, which is used to investigate how to correctly model the contact using these two utilities. They have incorporated this model into the ADAMS software using the callable function interface. An example that illustrates the capabilities of the 9.02 release of ADAMS with their extensions is provided.
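    The SNL/Ohio State non-linear damping term is not reproduced here; the sketch below assumes a contact law of the common Hunt-Crossley form, a widely used model with the same qualitative behaviour: damping scales with penetration, so the contact force is continuous at impact rather than jumping as a linear damper would.

```python
# Hunt-Crossley-style contact force (assumed form, illustrative gains):
# F = k*d**n + lam*d**n * d_dot, zero when the bodies are separated.
def contact_force(d, d_dot, k=1e5, lam=300.0, n=1.5):
    """Penetration depth d (m), penetration rate d_dot (m/s)."""
    if d <= 0.0:
        return 0.0                      # bodies separated: no force
    f = k * d**n + lam * d**n * d_dot   # stiffness + nonlinear damping
    return max(f, 0.0)                  # contact cannot pull bodies together

print(contact_force(1e-3, 0.2))   # loading phase
print(contact_force(1e-3, -0.2))  # restitution phase: smaller force
```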

  15. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics techniques and describes the technologies with an emphasis on how they work and their key benefits.

  16. Distance Education Instructional Model Applications.

    Science.gov (United States)

    Jackman, Diane H.; Swan, Michael K.

    1995-01-01

    A survey of graduate students involved in distance education on North Dakota State University's Interactive Video Network included 80 on campus and 13 off. The instructional models rated most effective were role playing, simulation, jurisprudential (Socratic method), memorization, synectics, and inquiry. Direct instruction was rated least…

  17. Association models for petroleum applications

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios

    2013-01-01

    industry, thus thermodynamic data (phase behaviour, densities, speed of sound, etc.) are needed to study a very diverse range of compounds in addition to the petroleum ones (CO2, H2S, water, alcohols, glycols, mercaptans, mercury, asphaltenes, waxes, polymers, electrolytes, biofuels, etc.) within a very... e.g., for gas hydrate related systems, CO2/H2S mixtures, water/hydrocarbons and others. This review highlights both the major advantages of these association models and some of their limitations, which we believe should be discussed in the future.

  18. Formal models, languages and applications

    CERN Document Server

    Rangarajan, K; Mukund, M

    2006-01-01

    A collection of articles by leading experts in theoretical computer science, this volume commemorates the 75th birthday of Professor Rani Siromoney, one of the pioneers in the field in India. The articles span the vast range of areas that Professor Siromoney has worked in or influenced, including grammar systems, picture languages and new models of computation. Sample Chapter(s). Chapter 1: Finite Array Automata and Regular Array Grammars (150 KB). Contents: Finite Array Automata and Regular Array Grammars (A Atanasiu et al.); Hexagonal Contextual Array P Systems (K S Dersanambika et al.); Con

  19. Vacation queueing models theory and applications

    CERN Document Server

    Tian, Naishuo

    2006-01-01

    A classical queueing model consists of three parts - arrival process, service process, and queue discipline. However, a vacation queueing model has an additional part - the vacation process which is governed by a vacation policy - that can be characterized by three aspects: 1) vacation start-up rule; 2) vacation termination rule, and 3) vacation duration distribution. Hence, vacation queueing models are an extension of classical queueing theory. Vacation Queueing Models: Theory and Applications discusses systematically and in detail the many variations of vacation policy. Allowing servers to take vacations makes the queueing models more realistic and flexible in studying real-world waiting line systems. Integrated in the book's discussion are a variety of typical vacation model applications that include call centers with multi-task employees, customized manufacturing, telecommunication networks, maintenance activities, etc. Finally, contents are presented in a "theorem and proof" format and it is invaluabl...

  20. Degenerate RFID Channel Modeling for Positioning Applications

    Directory of Open Access Journals (Sweden)

    A. Povalac

    2012-12-01

    Full Text Available This paper introduces the theory of channel modeling for positioning applications in UHF RFID. It explains basic parameters for channel characterization from both the narrowband and wideband point of view. More details are given about ranging and direction finding. Finally, several positioning scenarios are analyzed with developed channel models. All the described models use a degenerate channel, i.e. combined signal propagation from the transmitter to the tag and from the tag to the receiver.

  1. Application of the Pareto Principle in Rapid Application Development Model

    Directory of Open Access Journals (Sweden)

    Vishal Pandey

    2013-06-01

    Full Text Available The Pareto principle, most popularly termed the 80/20 rule, is one of the well-known theories in the field of economics. This rule of thumb was named after the great economist Vilfredo Pareto. The Pareto principle was proposed by the renowned management consultant Joseph M. Juran. The rule states that 80% of the required work can be completed in 20% of the time allotted. The idea is to apply this rule of thumb in the Rapid Application Development (RAD) process model of software engineering. The Rapid Application Development model integrates the end-user in the development through iterative prototyping, emphasizing the delivery of a series of fully functional prototypes to designated user experts. During the application of the Pareto principle, other concepts such as the Pareto indifference curve and Pareto efficiency also come into the picture. This enables the development team to invest the major amount of time focusing on the major functionalities of the project as per the requirement prioritization of the customer. The paper involves an extensive study of different unsatisfactory projects in terms of time and financial resources, and the reasons for failure are analyzed. Based on the possible reasons for failure, a customized RAD model is proposed integrating the 80/20 rule and advanced software development strategies to develop and deploy an excellent quality software product in a minimum time duration. The proposed methodology is such that its application will directly affect the quality of the end product for the better.
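    A toy illustration of the 80/20 prioritization step (feature names and values invented): the first RAD prototype is built from the smallest set of features covering roughly 80% of stakeholder value.

```python
# Invented requirement values; pick features until ~80% of total value.
reqs = {"login": 30, "checkout": 25, "search": 15, "reports": 10,
        "themes": 8, "export": 6, "badges": 4, "tooltips": 2}
total, acc, core = sum(reqs.values()), 0, []
for name, value in sorted(reqs.items(), key=lambda kv: -kv[1]):
    if acc >= 0.8 * total:   # enough value covered for the first prototype
        break
    core.append(name)
    acc += value
print(core, f"-> {acc / total:.0%} of stakeholder value")
```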

  2. Advanced Applications for Underwater Acoustic Modeling

    Directory of Open Access Journals (Sweden)

    Paul C. Etter

    2012-01-01

    Full Text Available Changes in the ocean soundscape have been driven by anthropogenic activity (e.g., naval-sonar systems, seismic-exploration activity, maritime shipping and windfarm development and by natural factors (e.g., climate change and ocean acidification. New regulatory initiatives have placed additional restrictions on uses of sound in the ocean: mitigation of marine-mammal endangerment is now an integral consideration in acoustic-system design and operation. Modeling tools traditionally used in underwater acoustics have undergone a necessary transformation to respond to the rapidly changing requirements imposed by this new soundscape. Advanced modeling techniques now include forward and inverse applications, integrated-modeling approaches, nonintrusive measurements, and novel processing methods. A 32-year baseline inventory of modeling techniques has been updated to reflect these new developments including the basic mathematics and references to the key literature. Charts have been provided to guide soundscape practitioners to the most efficient modeling techniques for any given application.

  3. A model for assessment of telemedicine applications

    DEFF Research Database (Denmark)

    Kidholm, Kristian; Ekeland, Anne Granstrøm; Jensen, Lise Kvistgaard;

    2012-01-01

    Telemedicine applications could potentially solve many of the challenges faced by the healthcare sectors in Europe. However, a framework for assessment of these technologies is needed by decision makers to assist them in choosing the most efficient and cost-effective technologies. Therefore, in 2009 the European Commission initiated the development of a framework for assessing telemedicine applications, based on the users' need for information for decision making. This article presents the Model for ASsessment of Telemedicine applications (MAST) developed in this study.

  4. Models of organometallic complexes for optoelectronic applications

    CERN Document Server

    Jacko, A C; Powell, B J

    2010-01-01

    Organometallic complexes have potential applications as the optically active components of organic light emitting diodes (OLEDs) and organic photovoltaics (OPV). Development of more effective complexes may be aided by understanding their excited state properties. Here we discuss two key theoretical approaches to investigate these complexes: first principles atomistic models and effective Hamiltonian models. We review applications of these methods, such as, determining the nature of the emitting state, predicting the fraction of injected charges that form triplet excitations, and explaining the sensitivity of device performance to small changes in the molecular structure of the organometallic complexes.

  5. Application Note: Power Grid Modeling With Xyce.

    Energy Technology Data Exchange (ETDEWEB)

    Sholander, Peter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce™ Parallel Electronic Simulator developed at Sandia National Labs. This application note provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.

  6. Application of SIR epidemiological model: new trends

    CERN Document Server

    Rodrigues, Helena Sofia

    2016-01-01

    The simplest epidemiologic model, composed of mutually exclusive compartments SIR (susceptible-infected-recovered), is presented to describe a reality. From health concerns to situations related with marketing, informatics or even sociology, many fields are using this epidemiological model as a first approach to better understand a situation. In this paper, the basic transmission model is analyzed, as well as simple tools that allow us to extract a great deal of information about possible solutions. A set of applications - traditional and new ones - is described to show the importance of this model.
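    A minimal sketch of the basic transmission model discussed above; the parameter values are illustrative only.

```python
# SIR compartments integrated as ODEs: S' = -b*S*I/N, I' = b*S*I/N - g*I,
# R' = g*I, with illustrative beta = 0.3/day and gamma = 0.1/day.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

t = np.linspace(0, 160, 400)
S, I, R = odeint(sir, [999, 1, 0], t, args=(0.3, 0.1)).T
print(f"peak infected: {I.max():.0f} at day {t[I.argmax()]:.0f}")
```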

  7. Advances and applications of occupancy models

    Science.gov (United States)

    Bailey, Larissa; MacKenzie, Darry I.; Nichols, James D.

    2013-01-01

    Summary: The past decade has seen an explosion in the development and application of models aimed at estimating species occurrence and occupancy dynamics while accounting for possible non-detection or species misidentification. We discuss some recent occupancy estimation methods and the biological systems that motivated their development. Collectively, these models offer tremendous flexibility, but simultaneously place added demands on the investigator. Unlike many mark–recapture scenarios, investigators utilizing occupancy models have the ability, and responsibility, to define their sample units (i.e. sites), replicate sampling occasions, time period over which species occurrence is assumed to be static and even the criteria that constitute ‘detection’ of a target species. Subsequent biological inference and interpretation of model parameters depend on these definitions and the ability to meet model assumptions. We demonstrate the relevance of these definitions by highlighting applications from a single biological system (an amphibian–pathogen system) and discuss situations where the use of occupancy models has been criticized. Finally, we use these applications to suggest future research and model development.

  8. Deformation Models Tracking, Animation and Applications

    CERN Document Server

    Torres, Arnau; Gómez, Javier

    2013-01-01

    The computational modelling of deformations has been actively studied for the last thirty years. This is mainly due to its large range of applications that include computer animation, medical imaging, shape estimation, face deformation as well as other parts of the human body, and object tracking. In addition, these advances have been supported by the evolution of computer processing capabilities, enabling realism in a more sophisticated way. This book encompasses relevant works of expert researchers in the field of deformation models and their applications.  The book is divided into two main parts. The first part presents recent object deformation techniques from the point of view of computer graphics and computer animation. The second part of this book presents six works that study deformations from a computer vision point of view with a common characteristic: deformations are applied in real world applications. The primary audience for this work are researchers from different multidisciplinary fields, s...

  9. Ionospheric Modeling for Precise GNSS Applications

    NARCIS (Netherlands)

    Memarzadeh, Y.

    2009-01-01

    The main objective of this thesis is to develop a procedure for modeling and predicting ionospheric Total Electron Content (TEC) for high precision differential GNSS applications. As the ionosphere is a highly dynamic medium, we believe that to have a reliable procedure it is necessary to transfer t

  10. Large-scale multimedia modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  11. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns

    OpenAIRE

    Mohammad Manir Hossain Mollah; Rahman Jamal; Norfilza Mohd Mokhtar; Roslan Harun; Md. Nurul Haque Mollah

    2015-01-01

    Background Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt ...

  12. Incomplete quality of life data in lung transplant research: comparing cross sectional, repeated measures ANOVA, and multi-level analysis

    Directory of Open Access Journals (Sweden)

    van der Bij Wim

    2005-09-01

    Full Text Available Abstract Background In longitudinal studies on Health Related Quality of Life (HRQL) it frequently occurs that patients have one or more missing forms, which may cause bias and reduce the sample size. The aims of the present study were to address the problem of missing data in the field of lung transplantation (LgTX) and HRQL, to compare results obtained with different methods of analysis, and to show the value of each type of statistical method used to summarize data. Methods Results from cross-sectional analysis, repeated measures on complete cases (ANOVA), and a multi-level analysis were compared. The scores on the dimension 'energy' of the Nottingham Health Profile (NHP) after transplantation were used to illustrate the differences between methods. Results Compared to repeated measures ANOVA, the cross-sectional and multi-level analyses included more patients and allowed for a longer period of follow-up. In contrast to the cross-sectional analysis, the complete case analysis and the multi-level analysis took the correlation between different time points into account. Patterns over time of the three methods were comparable. In general, results from repeated measures ANOVA showed the most favorable energy scores, and results from the multi-level analysis the least favorable. Due to the separate subgroups per time point in the cross-sectional analysis, and the relatively small number of patients in the repeated measures ANOVA, inclusion of predictors was only possible in the multi-level analysis. Conclusion Results obtained with the various methods of analysis differed, indicating that some reduction of bias took place. Multi-level analysis is a useful approach to study changes over time in a data set with missing data, to reduce bias, to make efficient use of available data, and to include predictors in studies concerning the effects of LgTX on HRQL.
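    A minimal sketch of the multi-level analysis the authors favour, on simulated data (not the study's dataset): a random intercept per patient lets every available form contribute, including forms from patients with missing follow-ups.

```python
# Random-intercept mixed model for longitudinal scores with missing forms.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for patient in range(60):
    u = rng.normal(0, 8)                      # patient random intercept
    for month in (1, 4, 7, 13, 25):
        if month > 1 and rng.random() < 0.2:  # some follow-up forms missing
            continue
        energy = 60 - 0.4 * month + u + rng.normal(0, 6)
        rows.append({"patient": patient, "month": month, "energy": energy})
df = pd.DataFrame(rows)

fit = smf.mixedlm("energy ~ month", df, groups=df["patient"]).fit()
print(fit.params)   # fixed-effect slope: mean change in 'energy' per month
```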

  13. Application of RBAC Model in System Kernel

    Directory of Open Access Journals (Sweden)

    Guan Keqing

    2012-11-01

    Full Text Available With the development of technologies for ubiquitous computing, the application of embedded intelligent devices is booming. Meanwhile, information security will face more serious threats than before. To improve the security of an information terminal's operating system, this paper analyzes the threats to the system's information security that come from abnormal operation by processes, and applies the RBAC model to the safety management mechanism of the operating system's kernel. We build an access control model for system processes, propose an implementation framework, and illustrate methods of implementing the model in operating systems.

  14. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  15. Mixed models theory and applications with R

    CERN Document Server

    Demidenko, Eugene

    2013-01-01

    Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g

  16. GSTARS computer models and their applications, Part II: Applications

    Institute of Scientific and Technical Information of China (English)

    Francisco J.M.SIM(O)ES; Chih Ted YANG

    2008-01-01

    In part 1 of this two-paper series, a brief summary of the basic concepts and theories used in developing the Generalized Stream Tube model for Alluvial River Simulation (GSTARS) computer models was presented. Part 2 provides examples that illustrate some of the capabilities of the GSTARS models and how they can be applied to solve a wide range of river and reservoir sedimentation problems. Laboratory and field case studies are used, and the examples show representative applications of the earlier and of the more recent versions of GSTARS. Some of the more recent capabilities implemented in GSTARS3, one of the latest versions of the series, are also discussed here in more detail.

  17. Cosmological applications of the Szekeres model

    CERN Document Server

    Bolejko, K

    2006-01-01

    This paper presents the cosmological applications of the quasispherical Szekeres model. The quasispherical Szekeres model is an exact solution of the Einstein field equations, which represents a time-dependent mass dipole superposed on a monopole and therefore is suitable for modelling double structures such as voids and adjoining galaxy superclusters. Moreover, as the Szekeres model is an exact solution of the Einstein equations, it enables tracing light rays and estimating the impact of cosmic structures on light propagation. This paper presents the evolution of a void and an adjoining supercluster, and also reports on how the Szekeres model might be employed either for the estimation of the mass of galaxy clusters or for the estimation of the luminosity distance.

  18. Link mining models, algorithms, and applications

    CERN Document Server

    Yu, Philip S; Faloutsos, Christos

    2010-01-01

    This book presents in-depth surveys and systematic discussions on models, algorithms and applications for link mining. Link mining is an important field of data mining. Traditional data mining focuses on 'flat' data in which each data object is represented as a fixed-length attribute vector. However, many real-world data sets are much richer in structure, involving objects of multiple types that are related to each other. Hence, recently link mining has become an emerging field of data mining, which has a high impact in various important applications such as text mining, social network analysi

  19. Applications of species distribution modeling to paleobiology

    DEFF Research Database (Denmark)

    Svenning, J.-C.; Fløjgaard, Camilla; A. Marske, Katharine;

    2011-01-01

    Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i...... of applications of SDM to paleobiology, outlining the methodology, reviewing SDM-based studies to paleobiology or at the interface of paleo- and neobiology, discussing assumptions and uncertainties as well as how to handle them, and providing a synthesis and outlook. Key methodological issues for SDM applications...

  20. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  1. Auditory model inversion and its application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Heming; WANG Yongqi; CHEN Xueqin

    2005-01-01

    The auditory model has been applied to several aspects of the speech signal processing field, and appears to be effective in performance. This paper presents the inverse transform of each stage of one widely used auditory model. First of all, it is necessary to invert the correlogram and reconstruct phase information by repeated iterations in order to obtain the auditory-nerve firing rate. The next step is to obtain the negative parts of the signal via the reverse process of the HWR (Half Wave Rectification). Finally, the functions of the inner hair cell/synapse model and the Gammatone filters have to be inverted. Thus the whole auditory model inversion has been achieved. An application of noisy speech enhancement based on the auditory model inversion algorithm is proposed. Many experiments show that this method is effective in reducing noise. Especially when the SNR of the noisy speech is low, it is more effective than other methods. Thus the auditory model inversion method given in this paper is applicable to the speech enhancement field.

  2. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  3. Statistical analysis of relative labeled mass spectrometry data from complex samples using ANOVA

    Science.gov (United States)

    Oberg, Ann L.; Mahoney, Douglas W.; Eckel-Passow, Jeanette E.; Malone, Christopher J.; Wolfinger, Russell D.; Hill, Elizabeth G.; Cooper, Leslie T.; Onuma, Oyere K.; Spiro, Craig; Therneau, Terry M.; Bergen, H. Robert

    2008-01-01

    Statistical tools enable unified analysis of data from multiple global proteomic experiments, producing unbiased estimates of normalization terms despite the missing data problem inherent in these studies. The modeling approach, implementation and useful visualization tools are demonstrated via case study of complex biological samples assessed using the iTRAQ™ relative labeling protocol. PMID:18173221
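    A sketch of the ANOVA modelling idea in the abstract, not the authors' implementation: log-scale reporter intensities are decomposed into normalization terms (run, label) plus protein and treatment-within-protein effects. The simulated design, with label-to-treatment assignment swapped across runs, is an assumption.

```python
# ANOVA decomposition of simulated labeled-MS log intensities.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
rows = []
for run in range(3):                      # relative-labeling experiments
    for label in range(4):                # reporter-ion channels
        trt = (run + label) % 2           # label swapping across runs
        for protein in range(5):
            y = (10 + 0.3 * run + 0.1 * label + protein
                 + (0.8 if (trt and protein == 0) else 0.0)  # one DE protein
                 + rng.normal(0, 0.2))
            rows.append({"run": run, "label": label, "trt": trt,
                         "protein": protein, "log_intensity": y})
df = pd.DataFrame(rows)

fit = smf.ols("log_intensity ~ C(run) + C(label) + C(protein)"
              " + C(protein):C(trt)", data=df).fit()
print(anova_lm(fit, typ=2))   # run/label absorb normalization; the
                              # protein:treatment term carries the biology
```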

  4. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  5. Application of the Pareto Principle in Rapid Application Development Model

    OpenAIRE

    Vishal Pandey; AvinashBairwa; Sweta Bhattacharya

    2013-01-01

    The Pareto principle, most popularly termed the 80/20 rule, is one of the well-known theories in the field of economics. This rule of thumb was named after the great economist Vilfredo Pareto. The Pareto principle was proposed by the renowned management consultant Joseph M. Juran. The rule states that 80% of the required work can be completed in 20% of the time allotted. The idea is to apply this rule of thumb in the Rapid Application Development (RAD) process model of software engineering. ...

  6. A light knowledge model for linguistic applications.

    Science.gov (United States)

    Baud, R H; Lovis, C; Ruch, P; Rassinoux, A M

    2001-01-01

    Content extraction from medical texts is achievable today by linguistic applications, in so far as sufficient domain knowledge is available. Such knowledge represents a model of the domain and is hard to collect with sufficient depth and good coverage, despite numerous attempts. To leverage this task is a priority in order to benefit from the awaited linguistic tools. The light model is designed with this goal in mind. Syntactic and lexical information are generally available with large lexicons. A domain model should add the necessary semantic information. The authors have designed a light knowledge model for the collection of semantic information on the basis of the recognized syntactical and lexical attributes. It has been tailored for the acquisition of enough semantic information in order to retrieve terms of a controlled vocabulary from free texts, as for example, to retrieve Mesh terms from patient records.

  7. Applications of model theory to functional analysis

    CERN Document Server

    Iovino, Jose

    2014-01-01

    During the last two decades, methods that originated within mathematical logic have exhibited powerful applications to Banach space theory, particularly set theory and model theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the

  8. Systems Evaluation Methods, Models, and Applications

    CERN Document Server

    Liu, Siefeng; Xie, Naiming; Yuan, Chaoqing

    2011-01-01

    A book in the Systems Evaluation, Prediction, and Decision-Making Series, Systems Evaluation: Methods, Models, and Applications covers the evolutionary course of systems evaluation methods, clearly and concisely. Outlining a wide range of methods and models, it begins by examining the method of qualitative assessment. Next, it describes the process and methods for building an index system of evaluation and considers the compared evaluation and the logical framework approach, analytic hierarchy process (AHP), and the data envelopment analysis (DEA) relative efficiency evaluation method. Unique

  9. Managing Event Information Modeling, Retrieval, and Applications

    CERN Document Server

    Gupta, Amarnath

    2011-01-01

    With the proliferation of citizen reporting, smart mobile devices, and social media, an increasing number of people are beginning to generate information about events they observe and participate in. A significant fraction of this information contains multimedia data to share the experience with their audience. A systematic information modeling and management framework is necessary to capture this widely heterogeneous, schemaless, potentially humongous information produced by many different people. This book is an attempt to examine the modeling, storage, querying, and applications of such an

  10. PERFORMANCE MEASUREMENT IN A PUBLIC SECTOR PASSENGER BUS TRANSPORT COMPANY USING FUZZY TOPSIS, FUZZY AHP AND ANOVA – A CASE STUDY

    Directory of Open Access Journals (Sweden)

    M.VETRIVEL SEZHIAN,

    2011-02-01

    Full Text Available This paper aims to assess the performance of three depots of a public sector bus passenger transport company. The performance data have been collected from real users. The feedback obtained from the depot managers is predominantly quantitative, whereas the feedback obtained from the regular passengers is of a purely qualitative basis. These quantitative and qualitative data have been analyzed with a multi-criteria decision-making model. The Technique for Order Preference by Similarity to Ideal Solution method for decision-making problems with fuzzy data (FTOPSIS) and the Fuzzy Analytical Hierarchy Process (FAHP) have been used for the managers' feedback, and one-way Analysis of Variance (ANOVA) has been used for the passengers' information. The values obtained have been combined to obtain the final results. The overall systematic algorithm for determining the best performing depot is illustrated on a step-by-step basis for continuous improvement.
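    Only the passengers' ANOVA step is sketched here, with invented satisfaction scores; the FTOPSIS/FAHP side and the combination step are not reproduced.

```python
# One-way ANOVA: do mean passenger ratings differ across the three depots?
from scipy import stats

depot_a = [7.1, 6.8, 7.4, 6.9, 7.2]
depot_b = [6.2, 6.0, 6.5, 6.1, 6.4]
depot_c = [7.0, 7.3, 6.9, 7.1, 7.2]

F, p = stats.f_oneway(depot_a, depot_b, depot_c)
print(f"F = {F:.2f}, p = {p:.4f}")  # small p: depot means differ
```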

  11. Teaching renewable energy using online PBL in investigating its effect on behaviour towards energy conservation among Malaysian students: ANOVA repeated measures approach

    Science.gov (United States)

    Nordin, Norfarah; Samsudin, Mohd Ali; Hadi Harun, Abdul

    2017-01-01

    This research aimed to investigate whether an online problem-based learning (PBL) approach to teaching the renewable energy topic improves students' behaviour towards energy conservation. A renewable energy online problem-based learning (REePBaL) instruction package was developed based on the theory of constructivism and an adaptation of the online learning model. This study employed a single-group quasi-experimental design to ascertain the change in students' behaviour towards energy conservation after undergoing the intervention. The study involved 48 secondary school students in a Malaysian public school. The ANOVA repeated measures technique was employed to compare scores of students' behaviour towards energy conservation before and after the intervention. Based on the findings, students' behaviour towards energy conservation improved after the intervention.
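    A minimal sketch of the repeated-measures comparison, with simulated pre/post scores (the study's instrument and scale are not reproduced):

```python
# Repeated-measures ANOVA on pre/post scores for 48 students.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(7)
n = 48
pre = rng.normal(3.0, 0.4, n)           # behaviour score before PBL
post = pre + rng.normal(0.5, 0.3, n)    # improvement after PBL
df = pd.DataFrame({
    "student": np.tile(np.arange(n), 2),
    "time": np.repeat(["pre", "post"], n),
    "score": np.concatenate([pre, post]),
})
res = AnovaRM(df, depvar="score", subject="student", within=["time"]).fit()
print(res.anova_table)   # F test for the within-subject 'time' effect
```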

  12. Sticker DNA computer model--PartⅡ:Application

    Institute of Scientific and Technical Information of China (English)

    XU Jin; LI Sanping; DONG Yafei; WEI Xiaopeng

    2004-01-01

    The sticker model is one of the basic models in DNA computing. This model is coded with single-double stranded DNA molecules. It has the following advantages: the operations require no strand extension and use no enzymes; what's more, the materials are reusable. Therefore, it has aroused the attention and interest of scientists in many fields. In this paper, we extend and improve the sticker model, which will definitely be beneficial to the construction of the DNA computer. This paper is the second part of our series paper, which mainly focuses on the application of the sticker model. It consists of the following three sections: first, the matrix representation of the sticker model is presented; then, a brief review of past research on graph and combinatorial optimization, such as the minimal set covering problem, the vertex covering problem, the Hamiltonian path or cycle problem, the maximal clique problem, the maximal independent set problem and the Steiner spanning tree problem, is described; finally, a DNA algorithm for the graph isomorphism problem based on the sticker model is given.

  13. An ANOVA-type test for multiple change points in the mean of a heavy-tailed dependent sequence

    Institute of Scientific and Technical Information of China (English)

    吕会琴; 赵文芝; 赵蕊

    2016-01-01

    In order to study the detection of multiple change points in the mean of a heavy-tailed dependent sequence, an ANOVA-type test statistic is proposed for testing the null hypothesis of no change against the alternative hypothesis of multiple change points. The limiting distribution of the test statistic under the null hypothesis and the consistency of the test are then obtained. Finally, Monte Carlo results are shown to support the argument.
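    A generic sketch of an ANOVA-type scan (not the paper's exact statistic): for each candidate break, the between-segment sum of squares around the two sample means; its maximiser locates a single mean shift in a heavy-tailed series.

```python
# ANOVA-type scan statistic over candidate break points.
import numpy as np

def anova_type_scan(x):
    n, xbar = len(x), np.mean(x)
    stat = np.empty(n - 1)
    for k in range(1, n):
        left, right = x[:k], x[k:]
        stat[k - 1] = (k * (left.mean() - xbar) ** 2
                       + (n - k) * (right.mean() - xbar) ** 2)
    return stat

rng = np.random.default_rng(3)
x = np.concatenate([rng.standard_t(df=3, size=100),         # heavy tails
                    rng.standard_t(df=3, size=100) + 2.0])  # mean shift
k_hat = anova_type_scan(x).argmax() + 1
print(f"estimated change point near index {k_hat}")
```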

  14. Comparison of ANOVA, McSweeney, Bradley, Harwell-Serlin, and Blair-Sawilowsky Tests in the Balanced 2x2x2 Layout.

    Science.gov (United States)

    Kelley, D. Lynn; And Others

    The Type I error and power properties of the 2x2x2 analysis of variance (ANOVA) and tests developed by McSweeney (1967), Bradley (1979), Harwell-Serlin (1989; Harwell, 1991), and Blair-Sawilowsky (1990) were compared using Monte Carlo methods. The ANOVA was superior under the Gaussian and uniform distributions. The Blair-Sawilowsky test was…

  15. Is the ANOVA F-Test Robust to Variance Heterogeneity When Sample Sizes are Equal?: An Investigation via a Coefficient of Variation

    Science.gov (United States)

    Rogan, Joanne C.; Keselman, H. J.

    1977-01-01

    The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
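    The question is easy to probe by simulation. The sketch below (assumed group sizes and standard deviations) estimates the empirical type I error of the ANOVA F test with equal n but unequal variances:

```python
# Equal n, heteroscedastic groups, all true means equal: how often does
# the F test reject at the nominal 5% level?
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, sds, n_sim = 10, (1.0, 2.0, 4.0), 5000
rejections = 0
for _ in range(n_sim):
    groups = [rng.normal(0.0, sd, n) for sd in sds]
    if stats.f_oneway(*groups).pvalue < 0.05:
        rejections += 1
print(f"empirical type I error: {rejections / n_sim:.3f}")  # often > 0.05
```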

  16. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns.

    Directory of Open Access Journals (Sweden)

    Mohammad Manir Hossain Mollah

    Full Text Available Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large
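    A univariate sketch of the β-weight idea, with assumed robust location/scale estimates (median/MAD) and an assumed cut-off rule; the paper's exact minimum β-divergence estimators are not reproduced:

```python
# Beta-weights in (0, 1]: near 1 for typical expressions, near 0 for
# outliers; expressions whose weight falls below a cut-off are flagged.
import numpy as np

def beta_weights(x, beta=0.2):
    mu = np.median(x)                                  # robust location
    sigma = 1.4826 * np.median(np.abs(x - mu))         # robust scale (MAD)
    return np.exp(-0.5 * beta * ((x - mu) / sigma) ** 2)

x = np.array([5.1, 4.9, 5.3, 5.0, 5.2, 9.8])   # last value is an outlier
w = beta_weights(x)
cutoff = w.mean() - 2 * w.std()                 # assumed cut-off rule
print(np.round(w, 3), "outliers at index:", np.where(w < cutoff)[0])
```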

  17. THE CONNECTION IDENTIFICATION BETWEEN THE NET INVESTMENTS IN HOTELS AND RESTAURANTS AND TOURISTIC ACCOMMODATION CAPACITY BY USING THE ANOVA METHOD

    Directory of Open Access Journals (Sweden)

    Elena STAN

    2009-12-01

    Full Text Available In order to answer customers' increasingly harsh exigencies, the development of Romanian tourism has to take into account especially the "accommodation" component. The dimension of the technical and material base of accommodation can be expressed through: number of units, number of rooms, and number of places. The most used is the "number of places" indicator. Nowadays there are special concerns regarding Romanian tourism investments, caused by peculiar determinations. The aim of the study is to identify whether a connection exists between net investments in hotels and restaurants and tourist accommodation capacity, registered in Romania over the 2002-2007 period, by using the ANOVA dispersion analysis method.

  18. Intelligent Model for Traffic Safety Applications

    Directory of Open Access Journals (Sweden)

    C. Chellappan

    2012-01-01

    Full Text Available Problem statement: This study presents an analysis of a road traffic system focused on the use of communications to detect dangerous vehicles on roads and highways, and on how it could be used to enhance driver safety. Approach: The intelligent traffic safety application model is based on the traffic flow theories developed in recent years, leading to reliable representations of road traffic, which is of major importance in achieving the attenuation of traffic problems. The model also includes the decision-making process of the driver in accelerating, decelerating and changing lanes. Results: The individuality of each of these processes arises from the model parameters that are randomly generated from statistical distributions introduced as input parameters. Conclusion: This allows the integration of the individuality factor of the population elements, yielding knowledge on various driving modes in a wide variety of situations.

  19. Recent developments in volatility modeling and applications

    Directory of Open Access Journals (Sweden)

    A. Thavaneswaran

    2006-01-01

    Full Text Available In financial modeling, it has been constantly pointed out that volatility clustering and conditional non-normality induce the leptokurtosis observed in high-frequency data. Financial time series data are not adequately modeled by the normal distribution, and empirical evidence on the non-normality assumption is well documented in the financial literature (details are illustrated by Engle (1982) and Bollerslev (1986)). An ARMA representation was used by Thavaneswaran et al. in 2005 to derive the kurtosis of various classes of GARCH models such as power GARCH, non-Gaussian GARCH, nonstationary and random coefficient GARCH. Several empirical studies have shown that mixture distributions are more likely to capture the heteroskedasticity observed in high-frequency data than the normal distribution. In this paper, some results on moment properties are generalized to stationary ARMA processes with GARCH errors. Applications to volatility forecasting and option pricing are also discussed in some detail.
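    The leptokurtosis induced by volatility clustering is easy to demonstrate with a textbook GARCH(1,1), which stands in here for the paper's more general derivations; the parameter values are illustrative.

```python
# Simulate GARCH(1,1) returns with Gaussian innovations and check that
# the sample kurtosis exceeds the Gaussian value of 3.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
omega, alpha, beta, T = 0.05, 0.10, 0.85, 100_000
h = omega / (1 - alpha - beta)          # start at unconditional variance
r = np.empty(T)
for t in range(T):
    r[t] = np.sqrt(h) * rng.standard_normal()
    h = omega + alpha * r[t] ** 2 + beta * h
print(f"sample kurtosis: {stats.kurtosis(r, fisher=False):.2f} (normal = 3)")
```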

  20. Determining Application Runtimes Using Queueing Network Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, Michael L. [Univ. of San Francisco, CA (United States)

    2006-12-14

    Determination of application times-to-solution for large-scale clustered computers continues to be a difficult problem in high-end computing, which will only become more challenging as multi-core consumer machines become more prevalent in the market. Both researchers and consumers of these multi-core systems desire reasonable estimates of how long their programs will take to run (time-to-solution, or TTS), and how many resources will be consumed in the execution. Currently there are few methods of determining these values, and those that do exist are either overly simplistic in their assumptions or require great amounts of effort to parameterize and understand. One previously untried method is queuing network modeling (QNM), which is easy to parameterize and solve, and produces results that typically fall within 10 to 30% of the actual TTS for our test cases. Using characteristics of the computer network (bandwidth, latency) and communication patterns (number of messages, message length, time spent in communication), the QNM model of the NAS-PB CG application was applied to MCR and ALC, supercomputers at LLNL, and the Keck Cluster at USF, with average errors of 2.41%, 3.61%, and -10.73%, respectively, compared to the actual TTS observed. While additional work is necessary to improve the predictive capabilities of QNM, current results show that QNM has a great deal of promise for determining application TTS for multi-processor computer systems.
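    A toy sketch in the spirit of QNM, with invented service demands: compute and network are modelled as M/M/1 stations and the time-to-solution is assembled from their residence times.

```python
# Toy QNM-style TTS estimate from per-iteration station residence times.
def mm1_residence(service_time, arrival_rate):
    """M/M/1 residence time; diverges as utilization approaches 1."""
    rho = arrival_rate * service_time
    assert rho < 1, "station saturated"
    return service_time / (1 - rho)

iters, msgs_per_iter = 500, 64
compute = mm1_residence(service_time=2e-3, arrival_rate=100.0)    # s/iter
network = mm1_residence(service_time=50e-6, arrival_rate=5000.0)  # s/msg
tts = iters * (compute + msgs_per_iter * network)
print(f"estimated TTS: {tts:.2f} s")
```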

  1. Regional hyperthermia applicator design using FDTD modelling.

    Science.gov (United States)

    Kroeze, H; Van de Kamer, J B; De Leeuw, A A; Lagendijk, J J

    2001-07-01

    Recently published results confirm the positive effect of regional hyperthermia combined with external radiotherapy on pelvic tumours. Several studies have been published on the improvement of RF annular-array applicator systems with dipoles and a closed water bolus. This study investigates the performance of a next-generation applicator system for regional hyperthermia with a multi-ring annular array of antennas and an open water bolus. A cavity slot antenna is introduced to enhance directivity and reduce mutual coupling between the antennas. Several design parameters, i.e. dimensions, number of antennas and operating frequency, have been evaluated using several patient models. Performance indices have been defined to evaluate the effect of parameter variation on the specific absorption rate (SAR) distribution. The performance of the new applicator type is compared with that of the Coaxial TEM system. Operating frequency appears to be the main parameter with a positive influence on performance. A SAR increase in the tumour by a factor of 1.7 relative to the Coaxial TEM system can be obtained with a three-ring cavity slot applicator with six antennas per ring operating at 150 MHz.
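
    For orientation, the quantity being mapped can be sketched with the standard point relation SAR = sigma * E_rms^2 / rho; the field magnitudes and tissue properties below are hypothetical, not output of the study's FDTD model:

        import numpy as np

        # Illustrative only: point SAR from an rms electric-field magnitude,
        # the quantity whose distribution the applicator study evaluates.
        def sar(e_rms, sigma, rho):
            """SAR in W/kg for rms field e_rms (V/m), conductivity sigma (S/m),
            tissue density rho (kg/m^3)."""
            return sigma * e_rms ** 2 / rho

        e = np.array([40.0, 60.0, 80.0])       # sample rms field magnitudes, V/m
        print(sar(e, sigma=0.75, rho=1050.0))  # muscle-like placeholder values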

  2. Development of a Mobile Application for Building Energy Prediction Using Performance Prediction Model

    Directory of Open Access Journals (Sweden)

    Yu-Ri Kim

    2016-03-01

    Full Text Available Recently, the Korean government has enforced the disclosure of building energy performance, so that such information can help owners and prospective buyers make suitable investment plans. This building energy performance policy makes it mandatory for building owners to obtain engineering audits and thereby evaluate the energy performance levels of their buildings. However, to calculate energy performance levels (i.e., the asset rating methodology), a qualified expert needs access to at least the full project documentation and/or an on-site inspection of the buildings. Energy performance certification costs a great deal of time and money. Moreover, the database of certified buildings is still quite small. The need for a simplified and user-friendly energy performance prediction tool for non-specialists is therefore increasing, as is the need for a database which allows building owners and users to compare best practices. In this regard, the current study developed a simplified performance prediction model through experimental design, energy simulations and ANOVA (analysis of variance). Furthermore, using the new prediction model, a related mobile application was also developed.

  3. Modelling and application of stochastic processes

    CERN Document Server

    1986-01-01

    The subject of modelling and application of stochastic processes is too vast to be exhausted in a single volume. In this book, attention is focused on a small subset of this vast subject. The primary emphasis is on realization and approximation of stochastic systems. Recently there has been considerable interest in the stochastic realization problem, and hence, an attempt has been made here to collect in one place some of the more recent approaches and algorithms for solving the stochastic realization problem. Various different approaches for realizing linear minimum-phase systems, linear nonminimum-phase systems, and bilinear systems are presented. These approaches range from time-domain methods to spectral-domain methods. An overview of the chapter contents briefly describes these approaches. Also, in most of these chapters special attention is given to the problem of developing numerically efficient algorithms for obtaining reduced-order (approximate) stochastic realizations. On the application side,...

  4. An Application on Multinomial Logistic Regression Model

    Directory of Open Access Journals (Sweden)

    Abdalla M El-Habil

    2012-03-01

    Full Text Available This study aims to present an application of the multinomial logistic regression model, one of the important methods for categorical data analysis. The model deals with a single nominal or ordinal response variable that has more than two categories, and it has been applied to data analysis in many areas, for example health, social, behavioral, and educational research. To illustrate the model in a practical way, we used real data on physical violence against children from the Survey of Youth 2003 conducted by the Palestinian Central Bureau of Statistics (PCBS). A segment of the population of children aged 10-14 years residing in Gaza governorate, of size 66,935, was selected, and the response variable consisted of four categories. Eighteen explanatory variables were used to build the primary multinomial logistic regression model. The model was tested through a set of statistical tests to ensure its appropriateness for the data. It was also tested by randomly selecting two observations from the data and predicting the group to which each belongs, given the values of the explanatory variables used. We conclude that with the multinomial logistic regression model we are able to define accurately the relationship between the group of explanatory variables and the response variable, to identify the effect of each variable, and to predict the classification of any individual case.
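
    A minimal sketch of fitting such a model on synthetic data with a four-category response, mirroring the structure described above; the variables are fabricated, not the PCBS survey data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Generate synthetic data: three explanatory variables, four classes.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 3))
        logits = X @ rng.normal(size=(3, 4))
        y = np.array([rng.choice(4, p=np.exp(l) / np.exp(l).sum()) for l in logits])

        # Multinomial logistic regression fit and per-category probabilities.
        model = LogisticRegression(max_iter=1000).fit(X, y)
        print(model.predict(X[:2]))                 # predicted categories
        print(model.predict_proba(X[:2]).round(3))  # category probabilities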

  5. Hydrodynamic Modeling and Its Application in AUC.

    Science.gov (United States)

    Rocco, Mattia; Byron, Olwyn

    2015-01-01

    The hydrodynamic parameters measured in an AUC experiment, s(20,w) and D(t)(20,w)(0), can be used to gain information on the solution structure of (bio)macromolecules and their assemblies. This entails comparing the measured parameters with those that can be computed from usually "dry" structures by "hydrodynamic modeling." In this chapter, we will first briefly put hydrodynamic modeling in perspective and present the basic physics behind it as implemented in the most commonly used methods. The important "hydration" issue is also touched upon, and the distinction between rigid bodies versus those for which flexibility must be considered in the modeling process is then made. The available hydrodynamic modeling/computation programs, HYDROPRO, BEST, SoMo, AtoB, and Zeno, the latter four all implemented within the US-SOMO suite, are described and their performance evaluated. Finally, some literature examples are presented to illustrate the potential applications of hydrodynamics in the expanding field of multiresolution modeling.

  6. Agent Based Modeling Applications for Geosciences

    Science.gov (United States)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. The technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets where they exist, and simulate the effects of extreme events on system-wide behavior. Challenges associated with this modeling method include significant computational requirements for keeping track of thousands to millions of agents, a lack of methods and strategies for model validation, and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on regional and temporal scales. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and to determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in

  7. Web Application for Modeling Global Antineutrinos

    CERN Document Server

    Barna, Andrew

    2015-01-01

    Electron antineutrinos stream freely from rapidly decaying fission products within nuclear reactors and from long-lived radioactivity within Earth. Those with energy greater than 1.8 MeV are regularly observed by several kiloton-scale underground detectors. These observations estimate the amount of terrestrial radiogenic heating, monitor the operation of nuclear reactors, and measure the fundamental properties of neutrinos. The analysis of antineutrino observations at operating detectors or the planning of projects with new detectors requires information on the expected signal and background rates. We present a web application for modeling global antineutrino energy spectra and detection rates for any surface location. Antineutrino sources include all registered nuclear reactors as well as the crust and mantle of Earth. Visitors to the website may model the location and power of a hypothetical nuclear reactor, copy energy spectra, and analyze the significance of a selected signal relative to background.

  8. Acceptance of health information technology in health professionals: an application of the revised technology acceptance model.

    Science.gov (United States)

    Ketikidis, Panayiotis; Dimitrovski, Tomislav; Lazuras, Lambros; Bath, Peter A

    2012-06-01

    The response of health professionals to the use of health information technology (HIT) is an important research topic that can partly explain the success or failure of any HIT application. The present study applied a modified version of the revised technology acceptance model (TAM) to assess the relevant beliefs and acceptance of HIT systems in a sample of health professionals (n = 133). Structured anonymous questionnaires were used and a cross-sectional design was employed. The main outcome measure was the intention to use HIT systems. ANOVA was employed to examine differences in TAM-related variables between nurses and medical doctors, and no significant differences were found. Multiple linear regression analysis was used to assess the predictors of HIT usage intentions. The findings showed that perceived ease of use (but not usefulness), relevance, and subjective norms directly predicted HIT usage intentions. The present findings suggest that a modification of the original TAM approach is needed to better understand health professionals' support and endorsement of HIT. Perceived ease of use, the relevance of HIT to the medical and nursing professions, and social influences should be tapped by information campaigns aiming to enhance support for HIT in healthcare settings.

  9. Stability and adaptability analysis of rice cultivars using environment-centered yield in two-way ANOVA model

    Directory of Open Access Journals (Sweden)

    D. Sumith De. Z. Abeysiriwardena

    2011-12-01

    Full Text Available Identifying rice varieties with wide adaptability and stability is an important aspect of varietal recommendation aimed at achieving better economic benefits for farmers. Multi-locational trials are conducted in different locations and seasons to identify varieties that perform consistently across a wide range of environments, as well as location-specific high performers. The interaction of varieties with the environment is complex and highly variable across locations, which makes identifying suitable varieties for recommendation difficult. Several methods with complex computational requirements have been proposed in the recent past, but statistical software now eases this complexity to a large extent. In this study, we employed an established technique, variance component analysis (VCA), to make varietal recommendations both for wide adaptability across many varying environments and for specific locations. In this method the variety × environment interaction is partitioned into components for individual varieties using a yield-deviation approach. The average effect of each variety (the environment-centered yield deviation, Dk) and the stability measure of each variety (the variety interaction variance, Sk2) are used to make the recommendations. Yield data for rice cultivars of three-month maturity duration, grown across diverse environments during the 2002/03 wet season in Sri Lanka, were analyzed. Based on the results, the variety At581 gave the highest Dk value with wide adaptability and was selected for general recommendation. Varieties Bg305 and At303 also had relatively high Dk values, and these two can likewise be selected for general cultivation.
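
    One plausible reading of the yield-deviation computation, sketched on a fabricated yield matrix; the paper's exact variance-component formulas may differ:

        import numpy as np

        # Center yields on each environment, average the deviations per
        # variety (Dk), then take the variance of the leftover interaction
        # terms as a stability measure (Sk^2). Data are fabricated.
        rng = np.random.default_rng(2)
        yields = rng.normal(4.0, 0.5, size=(5, 8))   # 5 varieties x 8 environments

        centered = yields - yields.mean(axis=0)       # environment-centered yields
        d_k = centered.mean(axis=1)                   # average effect per variety
        interaction = centered - d_k[:, None]         # variety x environment term
        s2_k = interaction.var(axis=1, ddof=1)        # stability per variety

        for k, (d, s2) in enumerate(zip(d_k, s2_k)):
            print(f"variety {k}: Dk = {d:+.3f}, Sk^2 = {s2:.3f}")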

  10. Python-Based Applications for Hydrogeological Modeling

    Science.gov (United States)

    Khambhammettu, P.

    2013-12-01

    Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a groundwater circulation well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any release at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program, TYPECURVEGRID-Py, was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells, using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The
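
    As a small example in the same spirit, the Theis drawdown mentioned above can be evaluated with scipy's exponential integral; the parameter values are hypothetical, not from the Superfund site:

        import numpy as np
        from scipy.special import exp1   # E1(u) is the Theis well function W(u)

        def theis_drawdown(Q, T, S, r, t):
            """Drawdown (m) at radius r (m) and time t (s) for pumping rate
            Q (m^3/s), transmissivity T (m^2/s) and storativity S (-)."""
            u = r ** 2 * S / (4.0 * T * t)
            return Q / (4.0 * np.pi * T) * exp1(u)

        t = np.array([600.0, 3600.0, 86400.0])            # 10 min, 1 h, 1 day
        print(theis_drawdown(Q=0.01, T=1e-3, S=1e-4, r=50.0, t=t))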

  11. Modeling of polymer networks for application to solid propellant formulating

    Science.gov (United States)

    Marsh, H. E.

    1979-01-01

    Methods for predicting the network structural characteristics formed by the curing of pourable elastomers are presented, together with the logic applied in developing the mathematical models. A universal approach to modeling was developed and verified, by comparison with other methods, in application to a complex system. Several applications of network models to practical problems are described.

  12. THE SYSTEM OF EVALUATION MODELS UI WEB APPLICATION

    OpenAIRE

    Anna A. Pupykina; Anna E. Satunina

    2015-01-01

    Graph theory provides a means for a formalized description of the interaction model and ensures precise mathematical relationships between the components. A scheme for modeling Web applications is proposed, together with a system of models for evaluating the UI of a Web application, taking into account factors such as understandability, predictability and learnability. For each indicator, a matrix is constructed for assignment to a compliance class.

  13. THE SYSTEM OF EVALUATION MODELS UI WEB APPLICATION

    Directory of Open Access Journals (Sweden)

    Anna A. Pupykina

    2015-01-01

    Full Text Available Graph theory provides a means for a formalized description of the interaction model and ensures precise mathematical relationships between the components. A scheme for modeling Web applications is proposed, together with a system of models for evaluating the UI of a Web application, taking into account factors such as understandability, predictability and learnability. For each indicator, a matrix is constructed for assignment to a compliance class.

  14. Chemistry Teachers' Knowledge and Application of Models

    Science.gov (United States)

    Wang, Zuhao; Chi, Shaohui; Hu, Kaiyan; Chen, Wenting

    2014-01-01

    Teachers' knowledge and application of model play an important role in students' development of modeling ability and scientific literacy. In this study, we investigated Chinese chemistry teachers' knowledge and application of models. Data were collected through test questionnaire and analyzed quantitatively and qualitatively. The…

  15. Seismic Physical Modeling Technology and Its Applications

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper introduces the seismic physical modeling technology in the CNPC Key Lab of Geophysical Exploration. It includes the seismic physical model positioning system, the data acquisition system, sources, transducers, model materials, model building techniques, precision measurements of model geometry, the basic principles of the seismic physical modeling and experimental methods, and two physical model examples.

  16. Ethyl chitosan synthesis and quantification of the effects acquired after grafting it on a cotton fabric, using ANOVA statistical analysis.

    Science.gov (United States)

    Popescu, Vasilica; Muresan, Augustin; Popescu, Gabriel; Balan, Mihaela; Dobromir, Marius

    2016-03-15

    Three ethyl chitosans (ECSs) were prepared using ethyl chloride (AA) obtained in situ. Each ECS was applied to a 100% cotton fabric through a pad-dry-cure process. Using ANOVA as the statistical method, the wrinkle-proofing effects were determined while varying the concentrations of AA (0.1-2.1 mmol) and chitosan (CS) (0.1-2.1 mmol). The alkylation and grafting mechanisms were confirmed by the results of FTIR, (1)H NMR, XPS, SEM, DSC and thermogravimetric analyses. The performance of each ECS as a wrinkle-proofing agent was assessed through quantitative methods (take-up degree, wrinkle-recovery angle, tensile strength and durability of the effect). The ECSs confer a wrinkle-recovery angle and tensile strength higher than those of the control sample. The durability of ECSs grafted on cotton was demonstrated by a good capacity for dyeing with non-specific (acid/anionic and cationic) dyes under severe working conditions (100°C, 60 min) and a good antimicrobial capacity.
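
    A minimal sketch of the kind of two-factor ANOVA used here, run on fabricated wrinkle-recovery data with AA and CS concentrations as factors; these are not the paper's measurements:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        # Fabricate recovery angles over a 3 x 3 design, 3 replicates per cell.
        rng = np.random.default_rng(3)
        levels = [0.1, 1.1, 2.1]     # mmol, hypothetical factor levels
        rows = [(aa, cs, 110 + 8 * aa + 6 * cs + rng.normal(0, 2))
                for aa in levels for cs in levels for _ in range(3)]
        df = pd.DataFrame(rows, columns=["AA", "CS", "angle"])

        # Two-way ANOVA with interaction: F-tests for AA, CS and AA:CS.
        fit = smf.ols("angle ~ C(AA) * C(CS)", data=df).fit()
        print(anova_lm(fit, typ=2))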

  17. Research on travel agency after-sales service based on ANOVA-IPA: a case study of City H

    Institute of Scientific and Technical Information of China (English)

    赵静; 龚荷

    2013-01-01

    After-sales service plays a very important role in strengthening tourists' perception of delivered value and is a strong guarantee for cultivating loyal customers. Building on a systematic understanding of the content, significance and delivery channels of after-sales service, relevant data were obtained through a questionnaire survey to explore, from the tourists' perspective, the current state of after-sales service in the travel agency industry and tourists' expectations. Based on information asymmetry theory, the 80/20 rule and incentive theory, and drawing on interviews with travel agencies, an ANOVA-IPA model was constructed through qualitative and quantitative analysis to identify the strength, maintenance, weakness and attention quadrants. Targeted measures can then be taken to expand after-sales service channels and shift competition in the travel agency industry toward a virtuous cycle.

  18. Optimization of tensile strength of friction welded AISI 1040 and AISI 304L steels according to statistics analysis (ANOVA)

    Energy Technology Data Exchange (ETDEWEB)

    Kirik, Ihsan [Batman Univ. (Turkey)]; Ozdemir, Niyazi; Firat, Emrah Hanifi; Caligulu, Ugur [Firat Univ., Elazig (Turkey)]

    2013-06-01

    Materials that are difficult to weld by fusion welding processes can be successfully joined by friction welding. The strength of friction welded joints is strongly affected by the process parameters (rotation speed, friction time, friction pressure, forging time, and forging pressure). In this study, the tensile strength of friction welded AISI 1040 and AISI 304L joints was investigated statistically in terms of rotation speed, friction time, and friction pressure. The tensile test results were then analyzed by analysis of variance (ANOVA) at a confidence level of 95 % to find out whether statistically significant differences occur. As a result, a maximum tensile strength very close to that of the AISI 1040 parent metal (637 MPa) could be obtained for joints fabricated under welding conditions of 1700 rpm rotation speed, 50 MPa friction pressure, 100 MPa forging pressure, 4 s friction time, and 2 s forging time. Rotation speed, friction time, and friction pressure were statistically significant with respect to the tensile strength values in the friction welding of AISI 1040 and AISI 304L alloys. (orig.)

  19. Novel applications of the dispersive optical model

    Science.gov (United States)

    Dickhoff, W. H.; Charity, R. J.; Mahzoon, M. H.

    2017-03-01

    A review of recent developments of the dispersive optical model (DOM) is presented. Starting from the original work of Mahaux and Sartor, several necessary steps are developed and illustrated which increase the scope of the DOM allowing its interpretation as generating an experimentally constrained functional form of the nucleon self-energy. The method could therefore be renamed as the dispersive self-energy method. The aforementioned steps include the introduction of simultaneous fits of data for chains of isotopes or isotones allowing a data-driven extrapolation for the prediction of scattering cross sections and level properties in the direction of the respective drip lines. In addition, the energy domain for data was enlarged to include results up to 200 MeV where available. An important application of this work was implemented by employing these DOM potentials to the analysis of the (d, p) transfer reaction using the adiabatic distorted wave approximation. We review these calculations which suggest that physically meaningful results are easier to obtain by employing DOM ingredients as compared to the traditional approach which relies on a phenomenologically-adjusted bound-state wave function combined with a global (nondispersive) optical-model potential. Application to the exotic 132Sn nucleus also shows great promise for the extrapolation of DOM potentials towards the drip line with attendant relevance for the physics of FRIB. We note that the DOM method combines structure and reaction information on the same footing providing a unique approach to the analysis of exotic nuclei. We illustrate the importance of abandoning the custom of representing the non-local Hartree–Fock (HF) potential in the DOM by an energy-dependent local potential as it impedes the proper normalization of the solution of the Dyson equation. This important step allows for the interpretation of the DOM potential as representing the nucleon self-energy permitting the calculations of

  20. Novel applications of the dispersive optical model

    CERN Document Server

    Dickhoff, W H; Mahzoon, M H

    2016-01-01

    A review of recent developments of the dispersive optical model (DOM) is presented. Starting from the original work of Mahaux and Sartor, several necessary steps are developed and illustrated which increase the scope of the DOM allowing its interpretation as generating an experimentally constrained functional form of the nucleon self-energy. The method could therefore be renamed as the dispersive self-energy method. The aforementioned steps include the introduction of simultaneous fits of data for chains of isotopes or isotones allowing a data-driven extrapolation for the prediction of scattering cross sections and level properties in the direction of the respective drip lines. In addition, the energy domain for data was enlarged to include results up to 200 MeV where available. An important application of this work was implemented by employing these DOM potentials to the analysis of the (d,p) transfer reaction using the adiabatic distorted wave approximation (ADWA). We review the fully non-local DOM...

  1. Unsteady aerodynamics modeling for flight dynamics application

    Institute of Scientific and Technical Information of China (English)

    Qing Wang; Kai-Feng He; Wei-Qi Qian; Tian-Jiao Zhang; Yan-Qing Cheng; Kai-Yuan Wu

    2012-01-01

    In view of engineering applications, it is practicable to decompose the aerodynamics into three components: the static aerodynamics, the aerodynamic increment due to steady rotations, and the aerodynamic increment due to unsteady separated and vortical flow. The first and second components can be presented in conventional forms, while the third is described using a first-order differential equation and a radial-basis-function (RBF) network. For an aircraft configuration, the mathematical models of the 6-component aerodynamic coefficients are set up from wind tunnel test data of pitch, yaw, roll, and coupled yaw-roll large-amplitude oscillations. The flight dynamics of the aircraft are studied by the bifurcation analysis technique in the cases of quasi-steady aerodynamics and unsteady aerodynamics, respectively. The results show that: (1) unsteady aerodynamics has no effect upon the existence of trim points, but affects their stability; (2) unsteady aerodynamics has great effects upon the existence, stability, and amplitudes of periodic solutions; and (3) unsteady aerodynamics obviously changes the stable regions of trim points. Furthermore, the dynamic responses of the aircraft to elevator deflections are inspected. It is shown that the unsteady aerodynamics is beneficial to dynamic stability for the present aircraft. Finally, the effects of unsteady aerodynamics on post-stall maneuverability are analyzed by numerical simulation.

  2. GOCE Exploitation for Moho Modeling and Applications

    Science.gov (United States)

    Sampierto, D.

    2011-07-01

    New ESA missions dedicated to the observation of the Earth from space, like the gravity-gradiometry mission GOCE and the radar altimetry mission CRYOSAT 2, foster research on, among other subjects, inverse gravimetric problems and the description of the nature and geographical location of gravimetric signals. In this framework the GEMMA project (GOCE Exploitation for Moho Modeling and Applications), funded by the European Space Agency and Politecnico di Milano, aims at estimating the boundary between Earth's crust and mantle (the so-called Mohorovičić discontinuity, or Moho) from GOCE data in key regions of the world. In the project a solution based on a simple two-layer model in spherical approximation is proposed. This inversion problem, based on the linearization of Newton's gravitational law around an approximate mean Moho surface, will be solved by exploiting Wiener-Kolmogorov theory in the frequency domain, where the depth of the Moho discontinuity is treated as a random signal with zero mean and its own covariance function. The algorithm can be applied in a numerically efficient way by using the Fast Fourier Transform. As for the gravity observations, we will consider grids of the anomalous gravitational potential and its second radial derivative at satellite altitude. In particular this will require first elaborating GOCE data to obtain a local grid of the gravitational potential field and its second radial derivative, and then separating the gravimetric signal due to the considered discontinuity from the gravitational effects of other geological structures present in the observations. The first problem can be solved by applying the so-called space-wise approach to GOCE observations, while the second can be achieved by considering a priori models and geophysical information by means of an appropriate Bayesian technique. Moreover other data such as ground gravity anomalies or seismic profiles can be combined, in an
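
    A schematic 1-D toy of the Wiener estimation step alluded to above, using a frequency-domain gain S/(S+N); this is not the GEMMA spherical operator, and the spectra are assumed known purely for the demo:

        import numpy as np

        # Recover a smooth signal from noisy data with a Wiener gain.
        rng = np.random.default_rng(4)
        n = 512
        x = np.cumsum(rng.normal(size=n))        # smooth "depth-like" signal
        x -= x.mean()
        obs = x + rng.normal(0.0, 2.0, size=n)   # noisy observation

        s_pow = np.abs(np.fft.rfft(x)) ** 2      # signal power (known for demo)
        n_pow = 2.0 ** 2 * n                     # white-noise power: sigma^2 * n
        gain = s_pow / (s_pow + n_pow)           # Wiener gain per frequency
        est = np.fft.irfft(gain * np.fft.rfft(obs), n)

        print(f"rms error, raw: {np.std(obs - x):.2f}, Wiener: {np.std(est - x):.2f}")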

  3. Application of multidimensional item response theory models to longitudinal data

    NARCIS (Netherlands)

    Marvelde, te Janneke M.; Glas, Cees A.W.; Van Landeghem, Georges; Van Damme, Jan

    2006-01-01

    The application of multidimensional item response theory (IRT) models to longitudinal educational surveys where students are repeatedly measured is discussed and exemplified. A marginal maximum likelihood (MML) method to estimate the parameters of a multidimensional generalized partial credit model

  4. Model-driven semantic integration of service-oriented applications

    NARCIS (Netherlands)

    Pokraev, Stanislav Vassilev

    2009-01-01

    The integration of enterprise applications is an extremely complex problem, since most applications have not been designed to work with other applications. That is, they have different information models, do not share common state, and do not consult each other when updating their states. Unfortu

  5. Alternatives to F-Test in One Way ANOVA in case of heterogeneity of variances (a simulation study

    Directory of Open Access Journals (Sweden)

    Karl Moder

    2010-12-01

    Full Text Available Several articles deal with the effects of inhomogeneous variances in one-way analysis of variance (ANOVA). A very early investigation of this topic was done by Box (1954). He supposed that in balanced designs with moderate heterogeneity of variances, deviations of the empirical type I error rate (the realized α based on experiments) from the nominal one (the predefined α) for H0 are small. Similar conclusions are drawn by Wellek (2003). For less moderate heterogeneity (e.g. σ1:σ2:... = 3:1:...), Moder (2007) showed that the empirical type I error rate is far beyond the nominal one, even with balanced designs. In unbalanced designs the difficulties get bigger. Several attempts have been made to get over this problem. One proposal is to use a more stringent α level (e.g. 2.5% instead of 5%) (Keppel & Wickens, 2004). Another recommended remedy is to transform the original scores by square root, log, and other variance-reducing functions (Keppel & Wickens, 2004; Heiberger & Holland, 2004). Some authors suggest the use of rank-based alternatives to the F-test in analysis of variance (Vargha & Delaney, 1998). Only a few articles deal with two- or multi-factorial designs. There is some evidence that in a two- or multi-factorial design the type I error rate is approximately met if the number of factor levels tends to infinity for a certain factor while the number of levels is fixed for the other factors (Akritas & S., 2000; Bathke, 2004). The goal of this article is to find an appropriate location test for a one-way analysis of variance situation with inhomogeneous variances, for balanced and unbalanced designs, based on a simulation study.
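
    A small version of such a simulation, estimating the empirical type I error of the classical F-test under 3:1 variance heterogeneity for one balanced and one unbalanced layout; sample sizes are chosen for illustration only:

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(5)

        def empirical_alpha(ns, sds, reps=10_000, alpha=0.05):
            """Rejection rate of the F-test when H0 is true but SDs differ."""
            hits = 0
            for _ in range(reps):
                groups = [rng.normal(0.0, sd, n) for n, sd in zip(ns, sds)]
                if f_oneway(*groups).pvalue < alpha:
                    hits += 1
            return hits / reps

        print("balanced:  ", empirical_alpha([10, 10, 10], [3.0, 1.0, 1.0]))
        print("unbalanced:", empirical_alpha([5, 10, 15], [3.0, 1.0, 1.0]))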

  6. Application of simulation models for the optimization of business processes

    Science.gov (United States)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with the applications of modeling and simulation tools in the optimization of business processes, especially in optimizing signal flow in a security company. Simul8 was selected as the modeling tool; it supports process modeling based on discrete event simulation and enables the creation of a visual model of production and distribution processes.

  7. Structural Equation Modeling with Mplus Basic Concepts, Applications, and Programming

    CERN Document Server

    Byrne, Barbara M

    2011-01-01

    Modeled after Barbara Byrne's other best-selling structural equation modeling (SEM) books, this practical guide reviews the basic concepts and applications of SEM using Mplus Versions 5 & 6. The author reviews SEM applications based on actual data taken from her own research. Using non-mathematical language, it is written for the novice SEM user. With each application chapter, the author "walks" the reader through all steps involved in testing the SEM model including: an explanation of the issues addressed illustrated and annotated testing of the hypothesized and post hoc models expl

  8. Photonic crystal fiber modelling and applications

    DEFF Research Database (Denmark)

    Bjarklev, Anders Overgaard; Broeng, Jes; Libori, Stig E. Barkou;

    2001-01-01

    Photonic crystal fibers having a microstructured air-silica cross section offer new optical properties compared to conventional fibers for telecommunication, sensor, and other applications. Recent advances within research and development of these fibers are presented.

  9. Model Checking-Based Testing of Web Applications

    Institute of Scientific and Technical Information of China (English)

    ZENG Hongwei; MIAO Huaikou

    2007-01-01

    A formal model representing the navigation behavior of a Web application as a Kripke structure is proposed, and an approach that applies model checking to test case generation is presented. The Object Relation Diagram, as the object model, is employed to describe the object structure of a Web application design and can be translated into the behavior model. A key problem of model checking-based test generation for a Web application is how to construct a set of trap properties intended to cause violations of model checking against the behavior model and to output counterexamples used to construct the test sequences. We give an algorithm that derives trap properties from the object model with respect to node and edge coverage criteria.

  10. Surface Flux Modeling for Air Quality Applications

    Directory of Open Access Journals (Sweden)

    Limei Ran

    2011-08-01

    Full Text Available For many gases and aerosols, dry deposition is an important sink of atmospheric mass. Dry deposition fluxes are also important sources of pollutants to terrestrial and aquatic ecosystems. The surface fluxes of some gases, such as ammonia, mercury, and certain volatile organic compounds, can be upward into the air as well as downward to the surface and therefore should be modeled as bi-directional fluxes. Model parameterizations of dry deposition in air quality models have been represented by simple electrical resistance analogs for almost 30 years. Uncertainties in surface flux modeling in global to mesoscale models are being slowly reduced as more field measurements provide constraints on parameterizations. However, at the same time, more chemical species are being added to surface flux models as air quality models are expanded to include more complex chemistry and are being applied to a wider array of environmental issues. Since surface flux measurements of many of these chemicals are still lacking, resistances are usually parameterized using simple scaling by water or lipid solubility and reactivity. Advances in recent years have included bi-directional flux algorithms that require a shift from pre-computation of deposition velocities to fully integrated surface flux calculations within air quality models. Improved modeling of the stomatal component of chemical surface fluxes has resulted from improved evapotranspiration modeling in land surface models and closer integration between meteorology and air quality models. Satellite-derived land use characterization and vegetation products and indices are improving model representation of spatial and temporal variations in surface flux processes. This review describes the current state of chemical dry deposition modeling, recent progress in bi-directional flux modeling, synergistic model development research with field measurements, and coupling with meteorological land surface models.
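
    The resistance analogy mentioned above reduces to a one-line formula; a hedged sketch with placeholder resistance values, not a real parameterization:

        # Deposition velocity as the reciprocal of three resistances in
        # series (aerodynamic r_a, quasi-laminar r_b, surface/canopy r_c),
        # all in s/m. Values below are placeholders for illustration.
        def deposition_velocity(r_a, r_b, r_c):
            """v_d in m/s from the series-resistance analogy."""
            return 1.0 / (r_a + r_b + r_c)

        # Downward flux F = -v_d * C for an ambient concentration C.
        flux = -deposition_velocity(r_a=30.0, r_b=10.0, r_c=60.0) * 5.0e-9
        print(f"dry-deposition flux = {flux:.2e} (concentration units * m/s)")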

  11. Applications of mathematical models of road cycling

    OpenAIRE

    Dahmen, Thorsten; Saupe, Dietmar; Wolf, Stefan

    2012-01-01

    This contribution discusses several use cases of mathematical models for road cycling. A mechanical model for the pedaling forces is the basis for an accurate indoor ergometer simulation of road cycling on real-world tracks. Together with a simple physiological model for the exertion of the athlete as a function of his/her accumulated power output, an optimal riding strategy for time trials on mountain ascents is computed. A combination of the two models leads to a mathematical optimization p...

  12. Correlated Data Analysis Modeling, Analytics, and Applications

    CERN Document Server

    Song, Peter X-K

    2007-01-01

    Presents developments in correlated data analysis. This book provides a systematic treatment for the topic of estimating functions. In addition to marginal models and mixed-effects models, it covers topics on joint regression analysis based on Gaussian copulas and generalized state space models for longitudinal data from long time series.

  13. The Nomad Model: Theory, Developments and Applications

    NARCIS (Netherlands)

    Campanella, M.; Hoogendoorn, S.P.; Daamen, W.

    2014-01-01

    This paper presents details of the developments of the Nomad model after being introduced more than 12 years ago. The model is derived from a normative theory of pedestrian behavior making it unique under microscopic models. Nomad has been successfully applied in several cases indicating that it ful

  14. Optimizing Injection Molding Parameters of Different Halloysites Type-Reinforced Thermoplastic Polyurethane Nanocomposites via Taguchi Complemented with ANOVA

    Directory of Open Access Journals (Sweden)

    Tayser Sumer Gaaz

    2016-11-01

    Taguchi and ANOVA approaches. Seemingly, mHNTs have shown a very important role in the resulting product.

  15. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    the individual in almost any thinkable way. This project focuses on measuring the effects on sleep in both humans and animals. The sleep process is usually analyzed by categorizing small time segments into a number of sleep states and this can be modelled using a Markov process. For this purpose new methods...... for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically the Markov models considered for sleep states are closely related to the PK models based on SDEs as both models share the Markov property. When the models...

  16. Computational Modelling in Cancer: Methods and Applications

    Directory of Open Access Journals (Sweden)

    Konstantina Kourou

    2015-01-01

    Full Text Available Computational modelling of diseases is an emerging field, proven valuable for the diagnosis, prognosis and treatment of disease. Cancer is one of the diseases where computational modelling provides enormous advancements, allowing medical professionals to perform in silico experiments and gain insights prior to any in vivo procedure. In this paper, we review the most recent computational models that have been proposed for cancer. Well-known databases used for computational modelling experiments, as well as the various markup language representations, are discussed. In addition, recent state-of-the-art research studies related to tumour growth and angiogenesis modelling are presented.

  17. Wealth distribution models: analysis and applications

    Directory of Open Access Journals (Sweden)

    Camilo Dagum

    2008-03-01

    Full Text Available After Pareto developed his Type I model in 1895, a large number of income distribution models were specified. However, the important issue of wealth distribution attracted the attention of researchers only more than sixty years later, starting with the contributions of Wold and Whittle, and of Sargan, both published in 1957. The former authors proposed the Pareto Type I model and the latter the lognormal distribution, but they did not validate them empirically. Afterward, other models were proposed: in 1969 the Pareto Types I and II by Stiglitz; in 1975 the log-logistic by Atkinson and the Pearson Type V by Vaughan. In 1990 and 1994, Dagum developed a general model and his Type II model as models of wealth distribution. They were validated with real-life data from the U.S.A., Canada, Italy and the U.K. In 1999, Dagum further developed his general model of net wealth distribution with support (-∞, +∞), which contains, as particular cases, his Type I and Type II models of income and wealth distributions. This study presents and analyzes the proposed models of wealth distribution and their properties. The only model with the flexibility, power, and economic and stochastic foundations to accurately fit net and total wealth distributions is the Dagum general model with its particular cases, as validated with the case studies of Ireland, the U.K., Italy and the U.S.A.

  18. Sparse Multivariate Modeling: Priors and Applications

    DEFF Research Database (Denmark)

    Henao, Ricardo

    modeling, a model for peptide-protein/protein-protein interactions called latent protein tree, a framework for sparse Gaussian process classification based on active set selection and a linear multi-category sparse classifier specially targeted to gene expression data. The thesis is organized to provide......This thesis presents a collection of statistical models that attempt to take advantage of every piece of prior knowledge available to provide the models with as much structure as possible. The main motivation for introducing these models is interpretability since in practice we want to be able...... to use them as hypothesis generating tools. All of our models start from a family of structures, for instance factor models, directed acyclic graphs, classifiers, etc. Then we let them be selectively sparse as a way to provide them with structural flexibility and interpretability. Finally, we complement...

  19. Nonlinear dynamics new directions models and applications

    CERN Document Server

    Ugalde, Edgardo

    2015-01-01

    This book, along with its companion volume, Nonlinear Dynamics New Directions: Theoretical Aspects, covers topics ranging from fractal analysis to very specific applications of the theory of dynamical systems to biology. This second volume contains mostly new applications of the theory of dynamical systems to both engineering and biology. The first volume is devoted to fundamental aspects and includes a number of important new contributions as well as some review articles that emphasize new development prospects. The topics addressed in the two volumes include a rigorous treatment of fluctuations in dynamical systems, topics in fractal analysis, studies of the transient dynamics in biological networks, synchronization in lasers, and control of chaotic systems, among others. This book also: ·         Develops applications of nonlinear dynamics on a diversity of topics such as patterns of synchrony in neuronal networks, laser synchronization, control of chaotic systems, and the study of transient dynam...

  20. Application of arrangement theory to unfolding models

    CERN Document Server

    Kamiya, Hidehiko; Tokushige, Norihide

    2010-01-01

    Arrangement theory plays an essential role in the study of the unfolding model used in many fields. This paper describes how arrangement theory can be usefully employed in solving the problems of counting (i) the number of admissible rankings in an unfolding model and (ii) the number of ranking patterns generated by unfolding models. The paper is mostly expository but also contains some new results such as simple upper and lower bounds for the number of ranking patterns in the unidimensional case.

  1. Application of Actuarial Modelling in Insurance Industry

    OpenAIRE

    Burcã Ana-Maria; Bãtrînca Ghiorghe

    2011-01-01

    In insurance industry, the financial stability of insurance companies represents an issue of vital importance. In order to maintain the financial stability and meet minimum regulatory requirements, actuaries apply actuarial modeling. Modeling has been at the center of actuarial science and of all the sciences from the beginning of their journey. In insurance industry, actuarial modeling creates a framework that allows actuaries to identify, understand, quantify and manage a wide range of risk...

  2. Application of lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil. Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered, and the solution for obtaining the steady-state response when using lumped-parameter models is given. (au)

  3. Modeling Students' Memory for Application in Adaptive Educational Systems

    Science.gov (United States)

    Pelánek, Radek

    2015-01-01

    Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory setting under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…

  4. The Geometric Modelling of Furniture Parts and Its Application

    Institute of Scientific and Technical Information of China (English)

    张福炎; 蔡士杰; 王玉兰; 居正文

    1989-01-01

    In this paper, a 3-D solid modelling method appropriate for the design of furniture parts, which has been used in the FCAD (Computer Aided Design for Furniture Structure) system, is introduced. Some interactive functions for modifying part models and deriving a variety of practical parts are described. Finally, prospects for applying the modelling method to computer-aided manufacturing of furniture parts are discussed.

  5. Advanced Applications of Structural Equation Modeling in Counseling Psychology Research

    Science.gov (United States)

    Martens, Matthew P.; Haase, Richard F.

    2006-01-01

    Structural equation modeling (SEM) is a data-analytic technique that allows researchers to test complex theoretical models. Most published applications of SEM involve analyses of cross-sectional recursive (i.e., unidirectional) models, but it is possible for researchers to test more complex designs that involve variables observed at multiple…

  6. A Didactic Experience of Statistical Analysis for the Determination of Glycine in a Nonaqueous Medium Using ANOVA and a Computer Program

    Science.gov (United States)

    Santos-Delgado, M. J.; Larrea-Tarruella, L.

    2004-01-01

    The back-titration methods are compared statistically for the determination of glycine in a nonaqueous medium of acetic acid. Important variations in the mean values of glycine due to interaction effects are observed through the analysis of variance (ANOVA) technique and a statistical study using a computer program.

  7. New advances in statistical modeling and applications

    CERN Document Server

    Santos, Rui; Oliveira, Maria; Paulino, Carlos

    2014-01-01

    This volume presents selected papers from the XIXth Congress of the Portuguese Statistical Society, held in the town of Nazaré, Portugal, from September 28 to October 1, 2011. All contributions were selected after a thorough peer-review process. It covers a broad range of papers in the areas of statistical science, probability and stochastic processes, extremes and statistical applications.

  8. Nuclear reaction modeling, verification experiments, and applications

    Energy Technology Data Exchange (ETDEWEB)

    Dietrich, F.S.

    1995-10-01

    This presentation summarizes the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scattering Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of the spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

  9. Government Contracting Options: A Model and Application.

    Science.gov (United States)

    1996-01-01

    a random number generator based on work by Marsaglia and Zaman (1991) and Feldman (forthcoming).

  10. Human hand modelling: kinematics, dynamics, applications

    NARCIS (Netherlands)

    Gustus, A.; Stillfried, G.; Visser, J.; Jörntell, H.; Van der Smagt, P.

    2012-01-01

    An overview of mathematical modelling of the human hand is given. We consider hand models from a specific background: rather than studying hands for surgical or similar goals, we aim to provide a set of tools with which human grasping and manipulation capabilities can be studied, and hand funct

  11. Training evaluation models: Theory and applications

    OpenAIRE

    Carbone, V.; MORVILLO, A

    2002-01-01

    This chapter has the following aims: 1. Compare the various conceptual models for evaluation, identifying their strengths and weaknesses; 2. Define an evaluation model consistent with the aims and constraints of the fit project; 3. Describe, in critical fashion, operative tools for evaluating training which are reliable, flexible and analytical.

  12. The DES-Model and Its Applications

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik

    This report describes the use of the Danish Energy System (DES) model, which has been used for several years as the most comprehensive model for Danish energy planning. The structure of the Danish energy system is described, and a number of energy system parameters are explained, in particular the e...

  13. Asteroid thermal modeling: recent developments and applications

    NARCIS (Netherlands)

    Harris, A. W.; Mueller, M.

    2006-01-01

    A variety of thermal models are used for the derivation of asteroid physical parameters from thermal-infrared observations. Simple models based on spherical geometry are often adequate for obtaining sizes and albedos when very little information about an object is available. However, sophisticated ther

  14. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). Subse...

  15. Optical Coherence Tomography: Modeling and Applications

    DEFF Research Database (Denmark)

    Thrane, Lars

    in previous theoretical models of OCT systems. It is demonstrated that the shower curtain effect is of utmost importance in the theoretical description of an OCT system. The analytical model, together with proper noise analysis of the OCT system, enables calculation of the SNR, where the optical properties...... geometry, i.e., reflection geometry, is developed. As in the new OCT model, multiply scattered photons have been taken into account together with multiple scattering effects. As an important result, a novel method of creating images based on measurements of the momentum width of the Wigner phase......An analytical model is presented that is able to describe the performance of OCT systems in both the single and multiple scattering regimes simultaneously. This model inherently includes the shower curtain effect, well known for light propagation through the atmosphere. This effect has been omitted...

  16. HTGR Application Economic Model Users' Manual

    Energy Technology Data Exchange (ETDEWEB)

    A.M. Gandrik

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR) or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the model, which was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
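
    A minimal sketch of the IRR calculation such a model performs, on invented cash flows; this is generic root-finding, not the INL spreadsheet logic:

        from scipy.optimize import brentq

        # Net present value of a cash-flow stream at a given discount rate.
        def npv(rate, cash_flows):
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

        # Hypothetical project: year-0 construction cost, then 20 years of
        # net revenue (M$). The IRR is the rate at which NPV crosses zero.
        flows = [-1200.0] + [130.0] * 20
        irr = brentq(lambda r: npv(r, flows), -0.99, 1.0)
        print(f"IRR = {irr:.2%}")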

  17. Stochastic properties of generalised Yule models, with biodiversity applications.

    Science.gov (United States)

    Gernhard, Tanja; Hartmann, Klaas; Steel, Mike

    2008-11-01

    The Yule model is a widely used speciation model in evolutionary biology. Despite its simplicity many aspects of the Yule model have not been explored mathematically. In this paper, we formalise two analytic approaches for obtaining probability densities of individual branch lengths of phylogenetic trees generated by the Yule model. These methods are flexible and permit various aspects of the trees produced by Yule models to be investigated. One of our methods is applicable to a broader class of evolutionary processes, namely the Bellman-Harris models. Our methods have many practical applications including biodiversity and conservation related problems. In this setting the methods can be used to characterise the expected rate of biodiversity loss for Yule trees, as well as the expected gain of including the phylogeny in conservation management. We briefly explore these applications.
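
    A small sketch of one Yule-model property: while there are i lineages, the time to the next speciation is exponential with rate i*lam, so the age of an n-species tree is a sum of independent exponentials; the rate and tree size are arbitrary:

        import numpy as np

        rng = np.random.default_rng(6)
        lam, n_species, reps = 1.0, 10, 50_000

        # Accumulate the exponential waiting times from 1 lineage up to n.
        ages = np.zeros(reps)
        for i in range(1, n_species):            # i lineages -> rate i * lam
            ages += rng.exponential(1.0 / (i * lam), size=reps)

        expected = sum(1.0 / (i * lam) for i in range(1, n_species))
        print(f"simulated mean age = {ages.mean():.3f}, analytic = {expected:.3f}")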

  18. Multilevel Models: Conceptual Framework and Applicability

    Directory of Open Access Journals (Sweden)

    Roxana-Otilia-Sonia Hrițcu

    2015-10-01

    Full Text Available Individuals and the social or organizational groups they belong to can be viewed as a hierarchical system situated on different levels. Individuals are situated on the first level of the hierarchy and they are nested together on the higher levels. Individuals interact with the social groups they belong to and are influenced by these groups. Traditional methods that study the relationships between data, like simple regression, do not take into account the hierarchical structure of the data and the effects of a group membership and, hence, results may be invalidated. Unlike standard regression modelling, the multilevel approach takes into account the individuals as well as the groups to which they belong. To take advantage of the multilevel analysis it is important that we recognize the multilevel characteristics of the data. In this article we introduce the outlines of multilevel data and we describe the models that work with such data. We introduce the basic multilevel model, the two-level model: students can be nested into classes, individuals into countries and the general two-level model can be extended very easily to several levels. Multilevel analysis has begun to be extensively used in many research areas. We present the most frequent study areas where multilevel models are used, such as sociological studies, education, psychological research, health studies, demography, epidemiology, biology, environmental studies and entrepreneurship. We support the idea that since hierarchies exist everywhere, multilevel data should be recognized and analyzed properly by using multilevel modelling.
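
    A minimal sketch of the basic two-level (random-intercept) model on fabricated students-in-classes data; the variable names and effect sizes are hypothetical:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Fabricate 20 classes of 25 students with class-level random
        # intercepts and a student-level predictor x.
        rng = np.random.default_rng(7)
        n_classes, n_students = 20, 25
        cls = np.repeat(np.arange(n_classes), n_students)
        u = rng.normal(0.0, 1.0, n_classes)[cls]   # class random intercepts
        x = rng.normal(size=cls.size)
        y = 50 + 2.0 * x + u + rng.normal(0.0, 2.0, cls.size)
        df = pd.DataFrame({"score": y, "x": x, "klass": cls})

        # Random-intercept model: fixed effect for x, grouping by class.
        fit = smf.mixedlm("score ~ x", df, groups=df["klass"]).fit()
        print(fit.summary())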

  19. Researching the effect of the practical applications performed with cadaver dissection and anatomical models on anatomy education

    Directory of Open Access Journals (Sweden)

    Vedat Sabancıoğulları

    2016-12-01

    Full Text Available Objective: The most important element in providing good quality healthcare is the sound training of healthcare staff, particularly doctors. The way to become a good and successful healthcare professional is to learn human anatomy accurately and permanently. Practical applications, as well as theoretical courses, are highly important in learning human anatomy. In classical anatomy education, the use of cadavers is accepted as indispensable. However, with the opening of new medical faculties, the shortage of cadavers has increased, and anatomy practices have largely come to be carried out on models or mock-ups. We therefore aimed to use anatomic models in practical applications and to research the effect of cadaver dissection on learning anatomy. Method: This study included 120 second-year students with similar achievement levels who attended the theoretical anatomy courses at the medical faculty during the 2015-2016 academic year. The students were divided into four groups of 30: group 1 attended only the theoretical course; group 2, the theoretical course plus practical sessions with anatomic models; group 3, the theoretical course plus cadaver dissection; and group 4, the theoretical course plus both anatomic models and cadaver dissection. Students' understanding of each subject was assessed with written and practical exams of 10 questions after the theoretical and practical courses of each committee. The resulting data were loaded into SPSS 22.0, and one-way ANOVA, Mann-Whitney, and chi-square tests were used for the statistical evaluation. Results: The average scores of students who practiced on both models and cadavers were significantly higher (p < 0.05). Conclusions: The results obtained from our study indicate that carrying out practical anatomy training with cadaver dissection makes anatomy easier to learn.
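
    For readers unfamiliar with the one-way ANOVA used in the study, here is a minimal SciPy sketch of the four-group comparison; the scores are hypothetical, not the study's data.

        from scipy import stats

        # Hypothetical exam scores (out of 10) for the four instruction groups.
        theory_only    = [5, 6, 4, 5, 6, 5, 7, 4]
        theory_models  = [6, 7, 6, 8, 7, 6, 7, 8]
        theory_cadaver = [7, 8, 8, 9, 7, 8, 9, 8]
        models_cadaver = [8, 9, 9, 8, 10, 9, 8, 9]

        f_stat, p_value = stats.f_oneway(theory_only, theory_models,
                                         theory_cadaver, models_cadaver)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
        # p < 0.05 indicates at least one group mean differs; pairwise follow-up
        # tests (e.g. Mann-Whitney, as in the study) locate the differences.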

  20. Modelling of Tape Casting for Ceramic Applications

    DEFF Research Database (Denmark)

    Jabbari, Masoud

    Functional ceramics find use in many different applications of great interest, e.g. thermal barrier coatings, piezoactuators, capacitors, solid oxide fuel cells and electrolysis cells, membranes, and filters. It is often the case that the performance of a ceramic component can be increased markedly if it is possible to vary the relevant properties (e.g. electrical, electrochemical, or magnetic) in a controlled manner along the extent of the component. Such composites, in which ceramic layers of different composition and/or microstructure are combined, provide a new and intriguing dimension to the field of functional ceramics research. Advances in ceramic forming have enabled low-cost shaping techniques such as tape casting and extrusion to be used in some of the most challenging technologies. These advances allow the design of complex components adapted to desired specific properties and applications. However…

  1. Recent Applications of Hidden Markov Models in Computational Biology

    Institute of Scientific and Technical Information of China (English)

    Khar Heng Choo; Joo Chuan Tong; Louxin Zhang

    2004-01-01

    This paper examines recent developments and applications of Hidden Markov Models (HMMs) to various problems in computational biology, including multiple sequence alignment, homology detection, protein sequences classification, and genomic annotation.
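
    A minimal sketch of the forward algorithm, the core likelihood computation behind HMM applications such as sequence annotation; the two-state example and its probabilities are invented for illustration.

        import numpy as np

        def forward(obs, pi, A, B):
            # Forward algorithm: P(observation sequence) under an HMM with
            # initial probs pi (n,), transitions A (n, n), emissions B (n, m).
            alpha = pi * B[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
            return alpha.sum()

        # Toy 2-state example, e.g. CpG island vs background emitting A/C/G/T.
        pi = np.array([0.5, 0.5])
        A = np.array([[0.9, 0.1],
                      [0.2, 0.8]])
        B = np.array([[0.1, 0.4, 0.4, 0.1],    # island state: GC-rich
                      [0.3, 0.2, 0.2, 0.3]])   # background state
        print(forward([1, 2, 2, 1, 0], pi, A, B))   # likelihood of "CGGCA"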

  2. Advances in Application of Models in Soil Quality Evaluation

    Institute of Scientific and Technical Information of China (English)

    SI Zhi-guo; WANG Ji-jie; YU Yuan-chun; LIANG Guan-feng; CHEN Chang-ren; SHU Hong-lan

    2012-01-01

    Soil quality is a comprehensive reflection of soil properties. Since the soil quality concept was put forward in the 1970s, the quality of different types of soils in different regions has been evaluated through a variety of evaluation methods, but universal soil quality evaluation models and methods are still lacking. In this paper, the applications and prospects of the grey relevancy comprehensive evaluation model, attribute hierarchical model, fuzzy comprehensive evaluation model, matter-element model, RAGA-based PPC/PPE model, and GIS model in soil quality evaluation are reviewed.

  3. Mathematical modeling and applications in nonlinear dynamics

    CERN Document Server

    Merdan, Hüseyin

    2016-01-01

    The book covers nonlinear physical problems and mathematical modeling, including molecular biology, genetics, neurosciences, and artificial intelligence, along with classical problems in mechanics, astronomy, and physics. The chapters present nonlinear mathematical modeling in life science and physics through nonlinear differential equations, nonlinear discrete equations, and hybrid equations. Such modeling can be effectively applied to a wide spectrum of nonlinear physical problems, including Kolmogorov-Arnold-Moser (KAM) theory, singular differential equations, impulsive dichotomous linear systems, analytical bifurcation trees of periodic motions, and almost or pseudo-almost periodic solutions in nonlinear dynamical systems. Provides methods for mathematical models with switching, thresholds, and impulses, each of particular importance for discontinuous processes. Includes qualitative analysis of behaviors of tumor-immune systems and methods of analysis for DNA, neural networks, and epidemiology. Introduces…

  4. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  6. Development and application of earth system models

    OpenAIRE

    Prinn, Ronald G.

    2012-01-01

    The global environment is a complex and dynamic system. Earth system modeling is needed to help understand changes in interacting subsystems, elucidate the influence of human activities, and explore possible future changes. Integrated assessment of environment and human development is arguably the most difficult and most important “systems” problem faced. To illustrate this approach, we present results from the integrated global system model (IGSM), which consists of coupled submodels addressing economic development, atmospheric chemistry, climate dynamics, and ecosystem processes.

  7. Application of Chebyshev Polynomial to simulated modeling

    Institute of Scientific and Technical Information of China (English)

    CHI Hai-hong; LI Dian-pu

    2006-01-01

    Chebyshev polynomials are widely used in many fields, usually as function approximations in numerical calculation. In this paper, the Chebyshev polynomial expression of the propeller properties across four quadrants is given first; then the Chebyshev polynomial expression is transformed into an ordinary polynomial, as needed for the simulation of propeller dynamics. On this basis, the dynamical models of the propeller across four quadrants are given. The simulation results show the efficiency of the mathematical model.
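
    The Chebyshev-to-ordinary-polynomial transformation the paper relies on can be illustrated with NumPy; the coefficients below are placeholders, not fitted propeller data.

        import numpy as np
        from numpy.polynomial import chebyshev as C
        from numpy.polynomial import polynomial as P

        # Placeholder Chebyshev coefficients c_k of f(x) ~ sum_k c_k T_k(x);
        # in the paper these would be fitted propeller-property coefficients.
        cheb_coeffs = np.array([0.5, 1.2, -0.3, 0.05])

        # Convert to ordinary power-series coefficients a_k of sum_k a_k x^k,
        # the form needed inside the propeller dynamics simulation.
        poly_coeffs = C.cheb2poly(cheb_coeffs)
        print(poly_coeffs)

        # Both representations evaluate identically:
        x = 0.37
        print(C.chebval(x, cheb_coeffs), P.polyval(x, poly_coeffs))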

  8. Application of DARLAM to Regional Haze Modeling

    Science.gov (United States)

    Koe, L. C. C.; Arellano, A. F., Jr.; McGregor, J. L.

    The CSIRO Division of Atmospheric Research limited area model (DARLAM) is applied to atmospheric transport modeling of haze in southeast Asia. The 1998 haze episode is simulated using an emission inventory derived from hotspot information and adopting removal processes based on SO2. Results show that the model is able to simulate the transport of haze in the region. The model images closely resemble the plumes in the NASA Total Ozone Mapping Spectrometer and Meteorological Service Singapore haze maps. Despite the limitations of the input data, particularly for haze emissions, the three-month average pattern correlation obtained for the whole episode is 0.61. The model has also been able to reproduce the general features of transboundary air pollution over a long period of time. Predicted total particulate matter concentration also agrees reasonably well with observation. Differences between the model results and the satellite images may be attributed to the large uncertainties in the emissions, the simplification of haze deposition and transformation mechanisms, and the relatively coarse horizontal and vertical resolution adopted for this particular simulation.

  9. Handbook of mixed membership models and their applications

    CERN Document Server

    Airoldi, Edoardo M; Erosheva, Elena A; Fienberg, Stephen E

    2014-01-01

    In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology. Through examples using real data sets, you…

  10. Network models in optimization and their applications in practice

    CERN Document Server

    Glover, Fred; Phillips, Nancy V

    2011-01-01

    Unique in that it focuses on formulation and case studies rather than solution procedures, the book covers applications for pure, generalized, and integer networks, equivalent formulations, and successful techniques of network models. Every chapter contains a simple model which is expanded to handle more complicated developments, a synopsis of existing applications, one or more case studies, at least 20 exercises, and invaluable references. An Instructor's Manual presenting detailed solutions to all the problems in the book is available upon request from the Wiley editorial department.

  11. The Application Model of Moving Objects in Cargo Delivery System

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng-li; ZHOU Ming-tian; XU Bo

    2004-01-01

    The development of spatio-temporal database systems is primarily motivated by applications which track and present mobile objects. In this paper, solutions for establishing a moving object database based on a GPS/GIS environment are presented, and a data model of moving objects is given which uses temporal logic to extend the query language; finally, the application model in a cargo delivery system is shown.

  12. Model evaluation methodology applicable to environmental assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
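
    A minimal sketch of two of the report's ingredients, Latin hypercube sampling and a multiplicative chain model with statistically independent lognormal inputs, followed by a rank-correlation screen for the most influential parameters. The input names and distribution parameters are illustrative, not the report's.

        import numpy as np
        from scipy.stats import norm, qmc, spearmanr

        # Toy multiplicative chain model: dose = intake * transfer * exposure.
        sampler = qmc.LatinHypercube(d=3, seed=0)
        u = sampler.random(n=1000)                   # stratified uniforms in (0,1)

        log_medians = np.log([1.0, 0.1, 2.0])
        log_sigmas = np.array([0.5, 0.8, 0.3])
        inputs = np.exp(norm.ppf(u) * log_sigmas + log_medians)   # lognormal inputs
        dose = inputs.prod(axis=1)

        # Rank correlations flag the inputs that matter most for the output.
        for i, name in enumerate(["intake", "transfer", "exposure"]):
            rho, _ = spearmanr(inputs[:, i], dose)
            print(f"{name:9s} Spearman rho = {rho:.2f}")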

  13. Recognizing textual entailment models and applications

    CERN Document Server

    Dagan, Ido; Sammons, Mark

    2013-01-01

    In the last few years, a number of NLP researchers have developed and participated in the task of Recognizing Textual Entailment (RTE). This task encapsulates Natural Language Understanding capabilities within a very simple interface: recognizing when the meaning of a text snippet is contained in the meaning of a second piece of text. This simple abstraction of an exceedingly complex problem has broad appeal partly because it can be conceived also as a component in other NLP applications, from Machine Translation to Semantic Search to Information Extraction. It also avoids commitment to any specific…

  14. Modelling and Generating Ajax Applications: A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008. AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction between…

  15. A generalized sinusoidal model and its applications

    Institute of Scientific and Technical Information of China (English)

    KU Shao-ping; LI Ning

    2009-01-01

    A physical model of the sinusoidal function was established. The classical spring-oscillator system is generalized so that the force is directly proportional to a power function of the distance. The differential equation of the generalized model is given. Simulations were conducted with different power values. The results show that the solution of the generalized equation is a periodic function. The expressions for the amplitude and the period (frequency) of the generalized equation were derived by the physical method. All the simulation results coincide with the calculated results of the derived expressions. A special function was also deduced and proven to be convergent in the theoretical analysis, and its limit value was derived. The generalized model can be used to solve a type of differential equation and to generate periodic waveforms.
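
    A hedged numerical sketch of such a generalized oscillator, with restoring force proportional to a power of the displacement: integrating the equation and locating the zero crossings exhibits the periodic solution. The parameter values are arbitrary, not the paper's.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Oscillator with force proportional to a power of the displacement:
        # x'' = -k * sign(x) * |x|**p   (p = 1 recovers the ordinary sinusoid).
        k, p = 1.0, 3.0

        def rhs(t, y):
            x, v = y
            return [v, -k * np.sign(x) * abs(x) ** p]

        sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], dense_output=True, rtol=1e-9)

        # Equally spaced zero crossings of x(t) confirm the periodic solution.
        t = np.linspace(0.0, 20.0, 20001)
        x = sol.sol(t)[0]
        crossings = t[:-1][np.diff(np.sign(x)) != 0]
        print("half-periods:", np.diff(crossings)[:4])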

  16. Novel grey forecast model and its application

    Institute of Scientific and Technical Information of China (English)

    丁洪发; 舒双焰; 段献忠

    2003-01-01

    The advancement of grey system theory provides an effective analytic tool for power system load forecast. All kinds of presently available grey forecast models can be well used to deal with short-term load forecast. However, they make big errors for medium- or long-term load forecasts, in particular for load that does not satisfy the approximate exponential increasing law. A novel grey forecast model that is capable of distinguishing the increasing law of the load is adopted to forecast the electric power consumption (EPC) of Shanghai. The results show that this model can greatly improve the forecast precision of EPC for a secondary industry or the whole society.
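
    For orientation, here is a sketch of the classical GM(1,1) grey forecast, the baseline that such novel grey models refine; the consumption series is invented, and the code is not the paper's model.

        import numpy as np

        def gm11(x0, horizon=3):
            # Classical GM(1,1): accumulate, fit x0(k) = -a*z1(k) + b by least
            # squares, then forecast via the exponential response and difference.
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                        # accumulated series
            z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
            design = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(design, x0[1:], rcond=None)[0]
            k = np.arange(1, len(x0) + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
            return np.concatenate([[x0[0]], x0_hat])

        epc = [112, 121, 132, 145, 160]        # hypothetical annual EPC values
        print(gm11(epc, horizon=3).round(1))   # fitted series + 3-step forecast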

  17. An introduction to queueing theory modeling and analysis in applications

    CERN Document Server

    Bhat, U Narayan

    2015-01-01

    This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a wide interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features: • An introductory chapter including a historical account of the growth of queueing theory in more than 100 years. • A modeling-based approach with emphasis on identification of models. • Rigorous treatment of the foundations of basic models commonly used in applications with appropriate references for advanced topics. • Applications in manufacturing and in computer and communication systems. • A chapter on …
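
    As a taste of the kind of model such a course builds up to, the standard M/M/1 closed-form performance measures take only a few lines.

        def mm1_metrics(lam, mu):
            # Closed-form M/M/1 measures; requires lam < mu for stability.
            rho = lam / mu                 # server utilization
            L = rho / (1 - rho)            # mean number in system
            W = 1 / (mu - lam)             # mean time in system (Little: L = lam*W)
            Wq = rho / (mu - lam)          # mean waiting time in queue
            Lq = lam * Wq                  # mean queue length
            return {"rho": rho, "L": L, "W": W, "Wq": Wq, "Lq": Lq}

        print(mm1_metrics(lam=8.0, mu=10.0))
        # {'rho': 0.8, 'L': 4.0, 'W': 0.5, 'Wq': 0.4, 'Lq': 3.2}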

  18. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent, methods of co-clustering. The authors mainly deal with two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture models…

  19. Sparse modeling theory, algorithms, and applications

    CERN Document Server

    Rish, Irina

    2014-01-01

    ""A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come.""-Ricardo Vilalta, Department of Computer Science, University of Houston""This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course.""-Francis Bach, INRIA - École Normale Supřieure, Paris

  20. The applicability of the wind compression model

    CERN Document Server

    Cariková, Zuzana

    2014-01-01

    Compression of the stellar winds from rapidly rotating hot stars is described by the wind compression model. However, it was also shown that rapid rotation leads to rotational distortion of the stellar surface, resulting in the appearance of non-radial forces acting against the wind compression. In this note we justify the wind compression model for moderately rotating white dwarfs and slowly rotating giants. The former could be conducive to understanding density/ionization structure of the mass outflow from symbiotic stars and novae, while the latter can represent an effective mass-transfer mode in the wide interacting binaries.

  1. Model castings with composite surface layer - application

    Directory of Open Access Journals (Sweden)

    J. Szajnar

    2008-10-01

    Full Text Available The paper presents a method of improving the usable properties of the surface layers of cast carbon steel 200–450 by applying, directly in the founding process, a composite surface layer based on a Fe-Cr-C alloy. The composite surface layer technology mainly guarantees an increase in the hardness and abrasive wear resistance of cast steel castings for machine elements. This technology can compete with the generally applied welding technologies (surfacing by welding and thermal spraying). In the course of the studies, cast steel test castings with a composite surface layer were made, and their usability for industrial applications was estimated by the criteria of hardness, abrasive wear resistance of the metal-mineral type, and the quality of the joint between the cast steel and the (Fe-Cr-C) layer. Based on the conducted studies, a thesis was formulated that the composite surface layer arises from the liquid state. Moreover, it is possible to control the thickness and hardness of the composite layer by suitable selection of parameters, i.e., the thickness of the insert, the pouring temperature, and the solidification modulus of the casting. The possibility of applying the composite surface layer technology in the manufacture of cast steel slide bushes for combined cutter loaders is presented.

  2. Model-Driven Approach for Body Area Network Application Development.

    Science.gov (United States)

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-05-12

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables an adequate measure of QoS to be obtained efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.

  4. Integrated Safety Culture Model and Application

    Institute of Scientific and Technical Information of China (English)

    汪磊; 孙瑞山; 刘汉辉

    2009-01-01

    A new safety culture model is constructed and applied to analyze the correlations between safety culture and SMS. On the basis of previous typical definitions, models, and theories of safety culture, an in-depth analysis of safety culture's structure, composing elements, and their correlations was conducted. A new definition of safety culture was proposed from the perspective of sub-culture. Seven types of safety sub-culture were then defined: safety priority culture, standardizing culture, flexible culture, learning culture, teamwork culture, reporting culture, and justice culture. The integrated safety culture model (ISCM) was put forward based on this definition. The model divides safety culture into an intrinsic latency level and an extrinsic indication level and explains the potential relationships between the safety sub-cultures and all safety culture dimensions. Finally, in analyzing safety culture and SMS, it is concluded that a positive safety culture is the basis of implementing SMS effectively and that an advanced SMS will in turn improve safety culture all around.

  5. Applications of Molecular and Materials Modeling

    Science.gov (United States)

    2002-01-01

    [Abstract not recoverable from the source record: only fragments of a table of research groups and citations survive, e.g. protein modeling at the Osaka University Institute for Protein Research (Prof. Haruki Nakamura) and a reference on the band structure of YH3, Phys. Rev. B 61, 16491-16496.]

  6. Applications products of aviation forecast models

    Science.gov (United States)

    Garthner, John P.

    1988-01-01

    A service called the Optimum Path Aircraft Routing System (OPARS) supplies products based on output data from the Naval Oceanographic Global Atmospheric Prediction System (NOGAPS), a model run on a Cyber-205 computer. Temperatures and winds are extracted from the surface to 100 mb, approximately 55,000 ft. Forecast winds are available in six-hour time steps.

  7. The application of an empowerment model

    NARCIS (Netherlands)

    Molleman, E; van Delft, B; Slomp, J

    2001-01-01

    In this study we applied an empowerment model that focuses on (a) the need for empowerment in light of organizational strategy, (b) job design issues such as job enlargement and job enrichment that facilitate empowerment, and (c) the abilities and (d) the attitudes of workers that make empowerment possible…

  8. A cutting force model for micromilling applications

    DEFF Research Database (Denmark)

    Bissacco, Giuliano; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2006-01-01

    In micro milling the maximum uncut chip thickness is often smaller than the cutting edge radius. This paper introduces a new cutting force model for ball nose micro milling that is capable of taking into account the effect of the edge radius.

  9. A marketing model: applications for dietetic professionals.

    Science.gov (United States)

    Parks, S C; Moody, D L

    1986-01-01

    Traditionally, dietitians have communicated the availability of their services to the "public at large." The expectation was that the public would respond favorably to nutrition programs simply because there was a consumer need for them. Recently, however, both societal and consumer needs have changed dramatically, making old communication strategies ineffective and obsolete. The marketing discipline has provided a new model and new decision-making tools for many health professionals to use to more effectively make their services known to multiple consumer groups. This article provides one such model as applied to the dietetic profession. The model explores a definition of the business of dietetics, how to conduct an analysis of the environment, and, finally, the use of both in the choice of new target markets. Further, the model discusses the major components of developing a marketing strategy that will help the practitioner to be competitive in the marketplace. Presented are strategies for defining and re-evaluating the mission of the profession, for using future trends to identify new markets and roles for the profession, and for developing services that make the profession more competitive by better meeting the needs of the consumer.

  10. Deposit 3D modeling and application

    Institute of Scientific and Technical Information of China (English)

    LUO Zhou-quan; LIU Xiao-ming; SU Jia-hong; WU Ya-bin; LIU Wang-ping

    2007-01-01

    With the aid of the international mining software SURPAC, a geologic database for a multi-metal mine was established; 3D models of the surface, geologic faults, ore bodies, cavities, and the underground openings were built; and the volume of the mine's cavity was calculated based on the cavity 3D model. To compute the reserves, a grade block model was built and the grade of each metal element was estimated using ordinary kriging. Then, the reserve of each metal element and of every sublevel of the mine was worked out. Finally, the calculated reserve of each metal was compared with its actual prospecting reserve, and the results show that they are almost equal to each other: the absolute errors of the Sn, Pb, and Zn reserves are only 1.45%, 1.59%, and 1.62%, respectively. Evidently, the built models are reliable and the calculated reserves are correct. They can be used to assist the geologic and mining engineers of the mine in reserve estimation, mining design, plan making, and so on.

  11. Adaptable Multivariate Calibration Models for Spectral Applications

    Energy Technology Data Exchange (ETDEWEB)

    THOMAS, EDWARD V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.

  12. Modeling Environmental Concern: Theory and Application.

    Science.gov (United States)

    Hackett, Paul M. W.

    1993-01-01

    Human concern for the quality and protection of the natural environment forms the basis of successful environmental conservation activities. Considers environmental concern research and proposes a model that incorporates the multiple dimensions of research through which environmental concern may be evaluated. (MDH)

  13. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

    Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous time optimization models, much more. Problems and references at chapter ends. ""Excellent introduction."" - Journal of the American Statistical Association. Bibliography. 1970 edition.

  14. Molecular modeling and multiscaling issues for electronic material applications

    CERN Document Server

    Iwamoto, Nancy; Yuen, Matthew; Fan, Haibo

    Volume 1: Molecular Modeling and Multiscaling Issues for Electronic Material Applications provides a snapshot of the progression of molecular modeling in the electronics industry and how molecular modeling is currently being used to understand material performance to solve relevant issues in this field. This book is intended to introduce the reader to the evolving role of molecular modeling, especially seen through the eyes of the IEEE community involved in material modeling for electronic applications. Part I presents the role that quantum mechanics can play in performance prediction, such as properties dependent upon electronic structure, but also shows examples of how molecular models may be used in performance diagnostics, especially when chemistry is part of the performance issue. Part II gives examples of large-scale atomistic methods in material failure and shows several examples of transitioning between grain boundary simulations (on the atomistic level) and large-scale models, including an example…

  15. A review on the application of modified continuum models in modeling and simulation of nanostructures

    Science.gov (United States)

    Wang, K. F.; Wang, B. L.; Kitamura, T.

    2016-02-01

    Analysis of the mechanical behavior of nanostructures has been very challenging. Surface energy and nonlocal elasticity of materials have been incorporated into traditional continuum analysis to create modified continuum mechanics models. This paper reviews recent advances in the application of such modified continuum models to nanostructures such as nanotubes, nanowires, nanobeams, graphenes, and nanoplates. A variety of models for these nanostructures under static and dynamic loadings are mentioned and reviewed. Applications of surface energy and nonlocal elasticity in the analysis of piezoelectric nanomaterials are also mentioned. This paper provides a comprehensive introduction to the development of this area and inspires further applications of modified continuum models to the modeling of nanomaterials and nanostructures.

  16. [Watershed water environment pollution models and their applications: a review].

    Science.gov (United States)

    Zhu, Yao; Liang, Zhi-Wei; Li, Wei; Yang, Yi; Yang, Mu-Yi; Mao, Wei; Xu, Han-Li; Wu, Wei-Xiang

    2013-10-01

    Watershed water environment pollution models are important tools for studying watershed environmental problems. Through the quantitative description of the complicated pollution processes of the whole watershed system and its parts, a model can identify the main sources and migration pathways of pollutants, estimate pollutant loadings, and evaluate their impacts on the water environment, providing a basis for watershed planning and management. This paper reviewed the watershed water environment models widely applied at home and abroad, with a focus on models of pollutant loading (GWLF and PLOAD), models of the water quality of receiving water bodies (QUAL2E and WASP), and watershed models integrating pollutant loadings and water quality (HSPF, SWAT, AGNPS, AnnAGNPS, and SWMM), and introduced their structures, principles, and main characteristics, as well as their limitations in practical applications. Other water quality models (CE-QUAL-W2, EFDC, and AQUATOX) and watershed models (GLEAMS and MIKE SHE) were also briefly introduced. Through case analysis of the applications of single models and integrated models, the development trend and application prospects of watershed water environment pollution models were discussed.

  17. Potential model application and planning issues

    Directory of Open Access Journals (Sweden)

    Christiane Weber

    2000-03-01

    Full Text Available The potential model has been and remains a spatial interaction model used for various problems in the human sciences; however, the use made of it by Donnay (1997, 1995, 1994) and Binard (1995), who introduced image-processing results as an application support, opened the way to innovative applications, for example for determining the urban boundary or local hinterlands. The possible articulations between applying the potential model to imagery and using Geographic Information System layers have allowed the temporal evaluation of urban development trends (Weber, 1998). Taking up this idea, the proposed study attempts to identify the forms of urban development of the Urban Community of Strasbourg (CUS), taking into account land cover, the characteristics of the communication networks, urban regulations, and the environmental constraints that weigh on the study area. The initial land-cover state, obtained by statistical processing, is used as input to the potential model in order to obtain potential surfaces associated with specific spatial characteristics: the extension of the urban form, the preservation of natural or agricultural areas, or the regulations. The results are then combined and classified. This application was carried out to confront the method with the actual development of the CUS as determined by a diachronic study comparing satellite images (SPOT 1986 - SPOT 1998). To verify the interest and accuracy of the method, the satellite results were compared with those derived from the classification of the potential surfaces. The development zones identified with the potential model were confirmed by the results of the temporal analysis performed on the images. A differentiation of zones according to…

  18. Adaptive Networks Theory, Models and Applications

    CERN Document Server

    Gross, Thilo

    2009-01-01

    With adaptive, complex networks, the evolution of the network topology and the dynamical processes on the network are equally important and often fundamentally entangled. Recent research has shown that such networks can exhibit a plethora of new phenomena which are ultimately required to describe many real-world networks. Some of those phenomena include robust self-organization towards dynamical criticality, formation of complex global topologies based on simple, local rules, and the spontaneous division of "labor" in which an initially homogenous population of network nodes self-organizes into functionally distinct classes. These are just a few. This book is a state-of-the-art survey of those unique networks. In it, leading researchers set out to define the future scope and direction of some of the most advanced developments in the vast field of complex network science and its applications.

  19. Application of Digital Terrain Model to volcanology

    Directory of Open Access Journals (Sweden)

    V. Achilli

    2006-06-01

    Full Text Available Three-dimensional reconstruction of the ground surface (the Digital Terrain Model, DTM), derived from airborne GPS photogrammetric surveys, is a powerful tool for performing morphological analysis in remote areas. Highly accurate 3D models, with sub-meter elevation accuracy, can be obtained from images acquired at photo scales between 1:5000 and 1:20000. Multitemporal DTMs acquired periodically over a volcanic area allow the monitoring of areas affected by crustal deformation and the evaluation of the mass balance when large instability phenomena or lava flows have occurred. The work describes the results obtained from the analysis of photogrammetric data collected over Vulcano Island from 1971 to 2001. The data, processed by means of the Digital Photogrammetry Workstation DPW 770, provided DTMs with accuracies ranging from a few centimeters to a few decimeters, depending on the geometric image resolution, terrain configuration, and quality of the photographs.

  20. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system. Where traditional research methodologies in the human cardiovascular system are challenging due to their invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point for each of the individual disciplines involved and attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowledge…

  1. Automatic Queuing Model for Banking Applications

    Directory of Open Access Journals (Sweden)

    Dr. Ahmed S. A. AL-Jumaily

    2011-08-01

    Full Text Available Queuing is the process of moving customers in a specific sequence to a specific service according to the customer's need. The term scheduling stands for the process of computing a schedule; this may be done by a queuing-based scheduler. This paper focuses on bank line systems, the different queuing algorithms that are used in banks to serve customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing a bank's queues that can analyze the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the test results and the average waiting time. The main innovation of this work is that the average waiting time is taken into account in processing, together with the switch to the scheduling algorithm that gives the best average waiting time.
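
    A toy illustration of why switching scheduling algorithms by average waiting time can pay off: with all customers present at once, shortest-job-first (SJF) yields a lower average wait than first-come-first-served (FCFS). The simulation below is a simplification, not the paper's architecture.

        import random

        def average_wait(service_times, policy):
            # Average waiting time when all customers are present at time zero;
            # FCFS keeps arrival order, SJF serves the shortest job first.
            order = sorted(service_times) if policy == "SJF" else list(service_times)
            wait = elapsed = 0.0
            for s in order:
                wait += elapsed            # each customer waits for those ahead
                elapsed += s
            return wait / len(service_times)

        random.seed(3)
        jobs = [random.expovariate(1 / 4.0) for _ in range(50)]   # ~4 min services
        for policy in ("FCFS", "SJF"):
            print(policy, round(average_wait(jobs, policy), 2))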

  2. Fuzzy Stochastic Optimization Theory, Models and Applications

    CERN Document Server

    Wang, Shuming

    2012-01-01

    Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies.   The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...

  3. Application of an analytical phase transformation model

    Institute of Scientific and Technical Information of China (English)

    LIU Feng; WANG Hai-feng; YANG Chang-lin; CHEN Zheng; YANG Wei; YANG Gen-cang

    2006-01-01

    Employing isothermal and isochronal differential scanning calorimetry, an analytical phase transformation model was used to study the kinetics of crystallization of amorphous Mg82.3Cu17.7 and Pd40Cu30P20Ni10 alloys. The analytical model comprises different combinations of various nucleation and growth mechanisms for a single transformation. By applying different combinations of nucleation and growth mechanisms, the nucleation and growth modes and the corresponding kinetic and thermodynamic parameters have been determined. The influence of isothermal pre-annealing on the subsequent isochronal crystallization kinetics can be analyzed as a function of the degree of pre-annealing. The results show that the changes in the growth exponent, n, and the effective overall activation energy, Q, occurring as functions of the degree of transformation do not necessarily imply a change of nucleation and growth mechanisms, i.e., such changes can occur while the transformation is isokinetic.
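
    As background, the kinetics of such nucleation-and-growth transformations are often summarized by the JMAK (Avrami) law; a short sketch of extracting the growth exponent n and rate constant k from transformed-fraction data follows (the data here are invented, and this is the textbook baseline rather than the paper's analytical model).

        import numpy as np

        # JMAK/Avrami law X(t) = 1 - exp(-(k*t)**n); linearizing gives
        # ln(-ln(1 - X)) = n*ln(k) + n*ln(t), a straight line in ln(t).
        t = np.array([10.0, 20.0, 40.0, 80.0, 160.0])    # s, invented data
        X = np.array([0.05, 0.17, 0.48, 0.85, 0.995])    # transformed fraction

        y = np.log(-np.log(1.0 - X))
        n, intercept = np.polyfit(np.log(t), y, 1)   # slope is the growth exponent
        k = np.exp(intercept / n)                    # effective rate constant
        print(f"n = {n:.2f}, k = {k:.4f} 1/s")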

  4. Measurement-based load modeling: Theory and application

    Institute of Scientific and Technical Information of China (English)

    MA Jin; HAN Dong; HE RenMu

    2007-01-01

    Load model is one of the most important elements in power system operation and control. However, owing to its complexity, load modeling is still an open and very difficult problem. Summarizing our work on measurement-based load modeling in China for more than twenty years, this paper systematically introduces the mathematical theory and applications regarding the load modeling. The flow chart and algorithms for measurement-based load modeling are presented. A composite load model structure with 13 parameters is also proposed. Analysis results based on the trajectory sensitivity theory indicate the importance of the load model parameters for the identification. Case studies show the accuracy of the presented measurement-based load model. The load model thus built has been validated by field measurements all over China. Future working directions on measurement-based load modeling are also discussed in the paper.

  5. Development and application of earth system models.

    Science.gov (United States)

    Prinn, Ronald G

    2013-02-26

    The global environment is a complex and dynamic system. Earth system modeling is needed to help understand changes in interacting subsystems, elucidate the influence of human activities, and explore possible future changes. Integrated assessment of environment and human development is arguably the most difficult and most important "systems" problem faced. To illustrate this approach, we present results from the integrated global system model (IGSM), which consists of coupled submodels addressing economic development, atmospheric chemistry, climate dynamics, and ecosystem processes. An uncertainty analysis implies that without mitigation policies, the global average surface temperature may rise between 3.5 °C and 7.4 °C from 1981-2000 to 2091-2100 (90% confidence limits). Polar temperatures, absent policy, are projected to rise from about 6.4 °C to 14 °C (90% confidence limits). Similar analysis of four increasingly stringent climate mitigation policy cases involving stabilization of greenhouse gases at various levels indicates that the greatest effect of these policies is to lower the probability of extreme changes. The IGSM is also used to elucidate potential unintended environmental consequences of renewable energy at large scales. There are significant reasons for attention to climate adaptation in addition to climate mitigation that earth system models can help inform. These models can also be applied to evaluate whether "climate engineering" is a viable option or a dangerous diversion. We must prepare young people to address this issue: The problem of preserving a habitable planet will engage present and future generations. Scientists must improve communication if research is to inform the public and policy makers better.

  6. Voronoi cell patterns: Theoretical model and applications

    Science.gov (United States)

    González, Diego Luis; Einstein, T. L.

    2011-11-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
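
    A quick numerical sketch of the 1D case: for a homogeneous Poisson point set, each Voronoi cell is the average of two i.i.d. exponential gaps, so the normalized cell sizes have variance 1/2, which the simulation below reproduces. This is a consistency check on the setting, not the paper's fragmentation model.

        import numpy as np

        # 1D Voronoi cells of a Poisson point set: each cell runs between the
        # midpoints to its neighbors, i.e. the mean of two i.i.d. exponential gaps.
        rng = np.random.default_rng(0)
        points = np.sort(rng.uniform(0.0, 1000.0, 10000))

        midpoints = 0.5 * (points[1:] + points[:-1])
        cell_sizes = np.diff(midpoints)          # interior cell widths
        cell_sizes /= cell_sizes.mean()          # normalize to unit mean

        # Theory predicts variance 1/2 for the normalized 1D Poisson-Voronoi cell.
        print(f"mean = {cell_sizes.mean():.3f}, var = {cell_sizes.var():.3f}")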

  7. Nonlinear Inertia Classification Model and Application

    Directory of Open Access Journals (Sweden)

    Mei Wang

    2014-01-01

    Full Text Available The classification model of the support vector machine (SVM) overcomes the problem of a big number of samples, but the kernel parameter and the punishment factor have great influence on the quality of the SVM model. Particle swarm optimization (PSO) is an evolutionary search algorithm based on swarm intelligence, which is suitable for parameter optimization. Accordingly, a nonlinear inertia convergence classification model (NICCM) is proposed in this paper, after the nonlinear inertia convergence PSO (NICPSO) is developed. The velocity of NICPSO is first defined as the weighted velocity of the inertia PSO, and the inertia factor is selected to be a nonlinear function. NICPSO is used to optimize the kernel parameter and the punishment factor of the SVM. Then, the NICCM classifier is trained using the optimal punishment factor and the optimal kernel parameter that come from the optimal particle. Finally, NICCM is applied to the classification of the normal state and fault states of an online power cable. It is experimentally proved that the number of iterations for the proposed NICPSO to reach the optimal position decreases from 15 to 5 compared with PSO, the training duration is decreased by 0.0052 s, and the recognition precision is increased by 4.12% compared with SVM.
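
    A hedged sketch of the general idea: PSO with a nonlinearly decaying inertia weight tuning an SVM's punishment factor C and kernel parameter gamma, assuming scikit-learn is available. The decay law, dataset, and all settings are illustrative rather than the paper's exact NICPSO scheme.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=300, n_features=10, random_state=0)

        def fitness(p):
            # Cross-validated accuracy for position p = (log10 C, log10 gamma).
            C, gamma = 10.0 ** p
            return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

        n, iters = 12, 15
        pos = rng.uniform(-3, 3, (n, 2))
        vel = np.zeros((n, 2))
        pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_val.argmax()]

        for it in range(iters):
            w = 0.9 - 0.5 * (it / iters) ** 2    # nonlinear inertia decay (illustrative)
            r1, r2 = rng.random((2, n, 2))
            vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, -3, 3)
            vals = np.array([fitness(p) for p in pos])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmax()]

        print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_val.max())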

  8. Modelling for Bio-, Agro- and Pharma-Applications

    DEFF Research Database (Denmark)

    2011-01-01

    … approach for meso- and microscale partial models. The specific case study of codeine release is examined. As a bio-application, a batch fermentation process is modelled; this involves the generation of a precursor compound for insulin production. The plant involves a number of coupled unit operations…

  9. Applications of the Linear Logistic Test Model in Psychometric Research

    Science.gov (United States)

    Kubinger, Klaus D.

    2009-01-01

    The linear logistic test model (LLTM) breaks down the item parameter of the Rasch model as a linear combination of some hypothesized elementary parameters. Although the original purpose of applying the LLTM was primarily to generate test items with specified item difficulty, there are still many other potential applications, which may be of use…

  10. Applications of GARCH models to energy commodities

    Science.gov (United States)

    Humphreys, H. Brett

    This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil, and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results, the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long-term oil prices on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge, thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that, while the electric…
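
    For reference, the conditional-variance recursion at the heart of every GARCH(1,1) specification can be written in a few lines. The parameters here are illustrative (in practice they come from maximum likelihood), and a seasonal variant of the kind studied in the thesis would let them vary over the year.

        import numpy as np

        def garch11_variance(r, omega, alpha, beta):
            # Conditional variance recursion of a GARCH(1,1):
            # sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1].
            sigma2 = np.empty_like(r)
            sigma2[0] = r.var()              # initialize at the sample variance
            for t in range(1, len(r)):
                sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
            return sigma2

        rng = np.random.default_rng(0)
        r = 0.01 * rng.standard_normal(500)          # stand-in daily returns
        sig2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)
        one_step = np.sqrt(1e-6 + 0.08 * r[-1] ** 2 + 0.90 * sig2[-1])
        print(f"1-step-ahead volatility forecast: {one_step:.4f}")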

  11. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  12. Functional Modelling for Fault Diagnosis and its application for NPP

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined, and its relevance for coping with the complexity of diagnosis in large-scale systems like nuclear plants is explained. The diagnosis task is analyzed… The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFM Suite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about…

  13. Solutions manual to accompany finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    A solutions manual to accompany Finite Mathematics: Models and Applications. In order to emphasize the main concepts of each chapter, Finite Mathematics: Models and Applications features plentiful pedagogical elements throughout, such as special exercises, end notes, hints, select solutions, biographies of key mathematicians, boxed key principles, a glossary of important terms and topics, and an overview of the use of technology. The book encourages the modeling of linear programs and their solutions and uses common computer software programs such as LINDO. In addition to extensive chapters on pr…

  14. Management Model Applicable to Metallic Materials Industry

    Directory of Open Access Journals (Sweden)

    Adrian Ioana

    2013-02-01

    Full Text Available This paper presents an algorithmic analysis of the marketing mix in metallurgy. It also analyzes the main correlations and their optimizing possibilities through efficient management. Both the effect and the importance of the marketing mix components (the four "P's") are analyzed in the materials industry, as well as their correlations, with the goal of optimizing the specific management. The main correlations between the four marketing mix components for a product within the materials industry are briefly presented, including aspects regarding specific management. Keywords: Management Model, Materials Industry, Marketing Mix, Correlations.

  15. MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS

    Institute of Scientific and Technical Information of China (English)

    CHAHINE Georges L.; HSIAO Chao-Tsung

    2012-01-01

    Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary, and to avoid deleterious effects, requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to study the dynamics combining viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models.

  16. Polarimetric clutter modeling: Theory and application

    Science.gov (United States)

    Kong, J. A.; Lin, F. C.; Borgeaud, M.; Yueh, H. A.; Swartz, A. A.; Lim, H. H.; Shim, R. T.; Novak, L. M.

    1988-01-01

    The two-layer anisotropic random medium model is used to investigate fully polarimetric scattering properties of earth terrain media. The polarization covariance matrices for the untilted and tilted uniaxial random medium are evaluated using the strong fluctuation theory and distorted Born approximation. In order to account for the azimuthal randomness in the growth direction of leaves in tree and grass fields, an averaging scheme over the azimuthal direction is also applied. It is found that characteristics of terrain clutter can be identified through the analysis of each element of the covariance matrix. Theoretical results are illustrated by the comparison with experimental data provided by MIT Lincoln Laboratory for tree and grass fields.

  17. Optimal control application to an Ebola model

    Institute of Scientific and Technical Information of China (English)

    Ebenezer Bonyah; Kingsley Badu; Samuel Kwesi Asiedu-Addo

    2016-01-01

    Ebola virus causes a severe, frequently fatal illness, with a case fatality rate of up to 90%. The outbreak of the disease has been acknowledged by the World Health Organization as a Public Health Emergency of International Concern. The threat of Ebola in West Africa is still a major setback to socioeconomic development. Optimal control theory is applied to a system of ordinary differential equations modeling Ebola infection through three different routes, including contact between humans and a dead body. In an attempt to reduce infection in the susceptible population, a preventive control is put in the form of education and campaigns, and two treatment controls are applied to the infected and late-stage infected (super) human populations. Pontryagin's maximum principle is employed to characterize the optimal controls, which are then solved for numerically. It is observed that a time-optimal control exists in the model. The activation of each control showed a positive reduction of infection. Activating all the controls simultaneously reduced the effort required to curb the infection quickly. The obtained results present a good framework for planning and designing cost-effective intervention strategies for dealing with Ebola disease. It is established that in order to reduce the Ebola threat all three controls must be taken into consideration concurrently.
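
    As a toy illustration of how such controls enter a compartmental model, the following sketch simulates a simple SIR-type system in which an education control u1 scales down the transmission rate and a treatment control u2 speeds removal; the model structure, parameter values and constant controls are invented for illustration and are far simpler than the three-route model in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_controlled(t, y, beta, gamma, u1, u2):
    # u1: education/campaign control reducing effective contacts (0..1)
    # u2: treatment control adding to the removal rate
    S, I, R = y
    new_inf = (1 - u1) * beta * S * I
    return [-new_inf, new_inf - (gamma + u2) * I, (gamma + u2) * I]

y0 = [0.99, 0.01, 0.0]
for u1, u2 in [(0.0, 0.0), (0.5, 0.0), (0.5, 0.3)]:
    sol = solve_ivp(sir_controlled, (0, 160), y0, args=(0.4, 0.1, u1, u2))
    print(f"u1={u1}, u2={u2}: peak infected fraction = {sol.y[1].max():.3f}")
```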

  18. NUMERICAL MODEL APPLICATION IN ROWING SIMULATOR DESIGN

    Directory of Open Access Journals (Sweden)

    Petr Chmátal

    2016-04-01

    Full Text Available The aim of the research was to carry out a hydraulic design of a rowing/sculling and paddling simulator. Nowadays there are two main approaches in simulator design. The first one uses static water with no artificial movement and relies on specially cut oars to provide the same resistance in the water. The second approach, on the other hand, uses pumps or similar devices to force the water to circulate, but both designs share many problems. Such problems affect already-built facilities and can be summarized as an unrealistic feel, unwanted turbulent flow and a bad velocity profile. Therefore, the goal was to design a new rowing simulator that would provide nature-like conditions for the racers and an unmatched experience. In order to accomplish this challenge, it was decided to use in-depth numerical modeling to solve the hydraulic problems. The general measures for the design were taken in accordance with the space availability of the simulator's housing. The entire research was coordinated with other stages of the construction using BIM. The detailed geometry was designed using a numerical model in Ansys Fluent and parametric auto-optimization tools, which led to minimal negative hydraulic phenomena and decreased investment and operational costs due to the decreased hydraulic losses in the system.

  19. Simulation and Modeling Application in Agricultural Mechanization

    Directory of Open Access Journals (Sweden)

    R. M. Hudzari

    2012-01-01

    Full Text Available This experiment was conducted to determine the equations relating the Hue digital values of the oil palm fruit surface to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated using the highest-frequency value of the R, G, and B color components from histogram analysis software. A new procedure for monitoring the image pixel value of the oil palm fruit surface color during real-time growth to maturity was developed. The predicted harvesting day was calculated based on the developed model relating Hue values to mesocarp oil content. The simulation model is regressed and predicts the day of harvesting, or the number of days before harvest, of FFB. The results from the mesocarp oil content experiment can be used for real-time oil content determination with the MPOB color meter. The graph to determine the day of harvesting the FFB is presented in this research. The oil was found to start developing in the mesocarp at 65 days before the fruit reaches the ripe maturity stage of 75% oil to dry mesocarp.

  20. On Modeling CPU Utilization of MapReduce Applications

    CERN Document Server

    Rizvandi, Nikzad Babaii; Zomaya, Albert Y

    2012-01-01

    In this paper, we present an approach to predict the total CPU utilization, in terms of CPU clock ticks, of applications running on the MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile the total CPU clock ticks of the application on a given platform. In the modeling phase, multiple linear regression is used to map the sets of MapReduce configuration parameters (number of Mappers, number of Reducers, size of File System (HDFS) and the size of input file) to the total CPU clock ticks of the application. This derived model can be used for predicting the total CPU requirements of the same application when using the MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our modeling technique on pseu...
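
    A minimal sketch of the modeling phase as described, fitting profiled CPU clock ticks against the four configuration parameters with ordinary least squares; the profiling numbers below are invented placeholders, not measurements from the paper.

```python
import numpy as np

# Columns: mappers, reducers, HDFS block size (MB), input size (GB) -- hypothetical profiles.
X = np.array([
    [ 4, 2,  64, 1], [ 8, 2,  64, 2], [ 8, 4, 128, 2],
    [16, 4, 128, 4], [16, 8, 256, 4], [32, 8, 256, 8],
], dtype=float)
ticks = np.array([2.1e9, 3.8e9, 3.5e9, 6.9e9, 6.4e9, 12.2e9])  # profiled CPU clock ticks

# Multiple linear regression with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, ticks, rcond=None)

# Predict total CPU ticks for an unseen configuration on the same platform.
new = np.array([1, 24, 6, 128, 6], dtype=float)  # leading 1 is the intercept
print(f"predicted ticks: {new @ coef:.3e}")
```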

  1. Interconnected hydro-thermal systems - Models, methods, and applications

    DEFF Research Database (Denmark)

    Hindsberger, Magnus

    2003-01-01

    , it has been analysed how the Balmorel model can be used to create inputs related to transmissions and/or prices to a more detailed production scheduling model covering a subsystem of the one represented in the Balmorel model. As an example of application of the Balmorel model, the dissertation presents...... results of an environmental policy analysis concerning the possible reduction of CO2, the promotion of renewable energy, and the costs associated with these aspects. Another topic is stochastic programming. A multistage stochastic model has been formulated of the Nordic power system. This allows analyses...

  2. Solar radiation practical modeling for renewable energy applications

    CERN Document Server

    Myers, Daryl Ronald

    2013-01-01

    Written by a leading scientist with over 35 years of experience working at the National Renewable Energy Laboratory (NREL), Solar Radiation: Practical Modeling for Renewable Energy Applications brings together the most widely used, easily implemented concepts and models for estimating broadband and spectral solar radiation data. The author addresses various technical and practical questions about the accuracy of solar radiation measurements and modeling. While the focus is on engineering models and results, the book does review the fundamentals of solar radiation modeling and solar radiation m

  3. Flood risk assessment: concepts, modelling, applications

    Directory of Open Access Journals (Sweden)

    G. Tsakiris

    2014-01-01

    Full Text Available Natural hazards have caused severe consequences to natural, modified and human systems in the past. These consequences seem to increase with time due to both the higher intensity of the natural phenomena and the higher value of elements at risk. Among the water-related hazards, flood hazards have the most destructive impacts. The paper presents a new systemic paradigm for the assessment of flood hazard and flood risk in riverine flood-prone areas. Special emphasis is given to urban areas with mild terrain and complicated topography, in which 2-D fully dynamic flood modelling is proposed. Further, the EU Flood Directive is critically reviewed and examples of its implementation are presented. Some critical points in the Flood Directive's implementation are also highlighted.

  4. Application of Generic Disposal System Models

    Energy Technology Data Exchange (ETDEWEB)

    Mariner, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Glenn Edward [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sevougian, S. David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Emily [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    This report describes specific GDSA activities in fiscal year 2015 (FY2015) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code (Hammond et al., 2011) and the Dakota uncertainty sampling and propagation code (Adams et al., 2013). Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through the engineered barriers and natural geologic barriers to a well location in an overlying or underlying aquifer. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.

  5. An overview of the optimization modelling applications

    Science.gov (United States)

    Singh, Ajay

    2012-10-01

    The optimal use of available resources is of paramount importance against the backdrop of the increasing food, fiber, and other demands of the burgeoning global population and the shrinking resources. The optimal use of these resources can be determined by employing an optimization technique. Comprehensive reviews of the use of various programming techniques for the solution of different optimization problems are provided in this paper. The past reviews are grouped into nine sections based on the solutions of theme-based real-world problems. The sections include: use of optimization modelling for conjunctive use planning, groundwater management, seawater intrusion management, irrigation management, achieving optimal cropping patterns, management of reservoir systems operation, management of resources in arid and semi-arid regions, solid waste management, and miscellaneous uses comprising problems of hydropower generation and the sugar industry. Conclusions are drawn where gaps exist and more research needs to be focused.
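
    As a toy instance of the kind of optimization model surveyed here, the sketch below solves a hypothetical optimal-cropping-pattern linear program with scipy; the crops, returns and resource limits are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Choose areas (ha) of two crops to maximize net return subject to land and
# water limits; all figures are invented placeholders.
profit = np.array([450.0, 300.0])        # net return per hectare
water = np.array([6.0, 3.0])             # ML of irrigation water per hectare
res = linprog(c=-profit,                 # linprog minimizes, so negate returns
              A_ub=[[1.0, 1.0], water],
              b_ub=[100.0, 420.0],       # 100 ha of land, 420 ML of water
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                   # optimal areas and total return
```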

  6. Modeling the Evolution of Incised Streams: III. Model Application

    Science.gov (United States)

    Incision and ensuing widening of alluvial stream channels is widespread in the midsouth and midwestern United States and represents an important form of channel adjustment. Two accompanying papers have presented a robust computational model for simulating the long-term evolution of incised and resto...

  7. Model-checking mean-field models: algorithms & applications

    NARCIS (Netherlands)

    Kolesnichenko, Anna Victorovna

    2014-01-01

    Large systems of interacting objects are highly prevalent in today's world. In this thesis we primarily address such large systems in computer science. We model such large systems using mean-field approximation, which allows us to compute the limiting behaviour of an infinite population of identical o

  8. NDA SYSTEM RESPONSE MODELING AND ITS APPLICATION

    Energy Technology Data Exchange (ETDEWEB)

    Vinson, D.

    2010-03-01

    is of the form of uranyl fluoride that will become hydrated on exposure to moisture in air when the systems are no longer buffered. The deposit geometry and thickness is uncertain and variable. However, a reasonable assessment of the level of material holdup in this equipment is necessary to support decommissioning efforts. The assessment of nuclear material holdup in process equipment is a complex process that requires integration of process knowledge, nondestructive assay (NDA) measurements, and computer modeling to maximize capabilities and minimize uncertainty. The current report is focused on the use of computer modeling and simulation of NDA measurements.

  9. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used by finance practitioners for evaluating companies, through simple cross-section regression models which estimate the parameters associated with each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis with segregation of the sample companies by sector. Extrapolating the simple multiples evaluation standards of analysts at the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing error reduction. The results found, in spite of evidencing a certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.

  10. A complex autoregressive model and application to monthly temperature forecasts

    Directory of Open Access Journals (Sweden)

    X. Gu

    2005-11-01

    Full Text Available A complex autoregressive model was established based on the mathematical derivation of least squares in the complex number domain, referred to as complex least squares. The model differs from the conventional approach in which the real and imaginary parts are calculated separately. An application of this new model shows better forecasts than those from other conventional statistical models in predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
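
    To illustrate the idea of estimating an autoregression directly in the complex domain rather than splitting real and imaginary parts, here is a minimal sketch using numpy's least-squares solver, which handles complex data natively; the order, coefficients and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 2
true = np.array([0.5 + 0.2j, -0.3 + 0.1j])     # invented AR(2) coefficients

# Simulate a complex AR(2) series.
z = np.empty(n, dtype=complex)
z[:p] = rng.standard_normal(p) + 1j * rng.standard_normal(p)
for t in range(p, n):
    z[t] = true @ z[t - p:t][::-1] + 0.1 * (rng.standard_normal() + 1j * rng.standard_normal())

# Design matrix of lagged values; coefficients are estimated jointly in the
# complex domain, not by treating real and imaginary parts separately.
X = np.column_stack([z[p - k - 1:n - k - 1] for k in range(p)])
coef, *_ = np.linalg.lstsq(X, z[p:], rcond=None)
print(coef)  # should approximate `true`
```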

  11. DATA MODEL CUSTOMIZATION FOR YII BASED ERP APPLICATION

    Directory of Open Access Journals (Sweden)

    Andre Leander

    2014-01-01

    Full Text Available As UD. Logam Utama's business grows, the need arises for fast and accurate information in order to improve performance, efficiency, control and the company's value. The company needs a system that can integrate each functional area. An ERP system has a centralized database and can be configured according to the company's business processes. The first phase of application development is the analysis and design of the company's business processes. The design phase produces a number of models that are used to create the application. The final result of the development is an ERP application that can be configured to the company's business processes, consisting of a warehouse/production module, a purchasing module, a sales module, and an accounting module.

  12. Modelling of a cross flow evaporator for CSP application

    DEFF Research Database (Denmark)

    Sørensen, Kim; Franco, Alessandro; Pelagotti, Leonardo

    2016-01-01

    Heat exchangers consisting of bundles of horizontal plain tubes with boiling on the shell side are widely used in industrial and energy systems applications. A recent specific interest for the use of this special heat exchanger is in connection with Concentrated Solar Power (CSP......) applications. Heat transfer and pressure drop prediction methods are an important tool for the design and modelling of diabatic, two-phase, shell-side flow over a horizontal plain tube bundle for a vertical up-flow evaporator. With the objective of developing a model for a specific type of cross flow evaporator...... for a coil type steam generator specifically designed for solar applications, this paper analyzes the use of several heat transfer, void fraction and pressure drop correlations for modelling the operation of such a type of steam generator. The paper, after a brief review of the literature about...

  13. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with applications in several fields of engineering, such as automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, in a multidisciplinary approach. Authors from five countries and 16 different research centers contribute with their expertise in both the fundamentals and real-world applications, based upon their strong background in modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools such as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  14. Applications of Nonlinear Dynamics Model and Design of Complex Systems

    CERN Document Server

    In, Visarath; Palacios, Antonio

    2009-01-01

    This edited book is aimed at interdisciplinary, device-oriented applications of nonlinear science theory and methods in complex systems. In particular, it addresses applications directed to nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. "Applications of Nonlinear Dynamics: Model and Design of Complex Systems" brings together the work of scientists and engineers that are applying ideas and methods from nonlinear dynamics to the design and fabrication of complex systems.

  15. Neural network models: Insights and prescriptions from practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Samad, T. [Honeywell Technology Center, Minneapolis, MN (United States)

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper, which will include illustrations based on real examples, is under preparation. Neural networks are being applied in several different ways. Our focus here is on neural networks as a modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  16. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results show promise in code characterization and empirical/analytical modeling.

  17. A simple application of FIC to model selection

    CERN Document Server

    Wiggins, Paul A

    2015-01-01

    We have recently proposed a new information-based approach to model selection, the Frequentist Information Criterion (FIC), that reconciles information-based and frequentist inference. The purpose of this current paper is to provide a simple example of the application of this criterion and a demonstration of the natural emergence of model complexities with both AIC-like ($N^0$) and BIC-like ($\log N$) scaling with observation number $N$. The application developed is deliberately simplified to make the analysis analytically tractable.

  18. WWW Business Applications Based on the Cellular Model

    Institute of Scientific and Technical Information of China (English)

    Toshio Kodama; Tosiyasu L. Kunii; Yoichi Seki

    2008-01-01

    A cellular model based on the Incrementally Modular Abstraction Hierarchy (IMAH) is a novel model that can represent the architecture of and changes in cyberworlds, preserving invariants from a general level to a specific one. We have developed a data processing system called the Cellular Data System (CDS). In the development of business applications, CDS prevents combinatorial explosion in the process of business design and testing. In this paper, we first design and implement a wide-use algebra on the presentation level. Next, we develop and verify the effectiveness of two general business applications using CDS: 1) a customer information management system, and 2) an estimate system.

  19. Handbook of Real-World Applications in Modeling and Simulation

    CERN Document Server

    Sokolowski, John A

    2012-01-01

    This handbook provides a thorough explanation of modeling and simulation in the most useful, current, and predominant applied areas, such as transportation, homeland security, medicine, operational research, military science, and business modeling.  The authors offer a concise look at the key concepts and techniques of modeling and simulation and then discuss how and why the presented domains have become leading applications.  The book begins with an introduction of why modeling and simulation is a reliable analysis assessment tool for complex syste

  20. Model-Driven Engineering Support for Building C# Applications

    Science.gov (United States)

    Derezińska, Anna; Ołtarzewski, Przemysław

    Realization of the Model-Driven Engineering (MDE) vision of software development requires comprehensive and user-friendly tool support. This paper presents a UML-based approach for building trustful C# applications. UML models are refined using profiles that assign class model elements to C# concepts and to elements of the implementation project. Stereotyped elements are verified live and during model-to-code transformation in order to prevent the creation of incorrect code. The Transform OCL Fragments into C# system (T.O.F.I.C.) was created as a feature of the Eclipse environment. The system extends the IBM Rational Software Architect tool.

  1. Applicability of cooperative learning model in gastronomy education

    OpenAIRE

    SARIOĞLAN, Mehmet; CEVİZKAYA, Gülhan

    2015-01-01

    The purpose of the study is to reveal the applicability of the cooperative learning model, one of the vital models in gastronomy education. A learning model based on cooperativism is important for students' success in group work in gastronomy education. This study is divided into two parts: a "literature" part and a "model proposal" part. The "literature" review focuses on describing the cooperative learning model in gastronomy education. In the se...

  2. Applicability of cooperative learning model in gastronomy education

    OpenAIRE

    SARIOĞLAN, Mehmet; CEVİZKAYA, Gülhan

    2016-01-01

    The purpose of the study is to reveal the applicability of the cooperative learning model, one of the vital models in gastronomy education. A learning model based on cooperativism is important for students' success in group work in gastronomy education. This study is divided into two parts: a "literature" part and a "model proposal" part. The "literature" review focuses on describing the cooperative learning model in gastronomy education. In the se...

  3. Life cycle Prognostic Model Development and Initial Application Results

    Energy Technology Data Exchange (ETDEWEB)

    Jeffries, Brien; Hines, Wesley; Nam, Alan; Sharp, Michael; Upadhyaya, Belle [The University of Tennessee, Knoxville (United States)

    2014-08-15

    In order to obtain more accurate Remaining Useful Life (RUL) estimates based on empirical modeling, a Lifecycle Prognostics algorithm was developed that integrates various prognostic models. These models can be categorized into three types based on the type of data they process. The application of multiple models takes advantage of the most useful information available as the system or component operates through its lifecycle. The Lifecycle Prognostics approach is applied to an impeller test bed, and the initial results serve as a proof of concept.
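
    As a toy example of the kind of empirical RUL estimate such models produce, the sketch below fits a linear trend to a synthetic degradation signal and extrapolates to a failure threshold; the signal, threshold and units are invented, and this is not the report's multi-model algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(100.0)                              # operating hours observed so far
signal = 0.02 * t + rng.normal(0, 0.05, t.size)   # synthetic degradation indicator
threshold = 3.0                                   # invented failure threshold

slope, intercept = np.polyfit(t, signal, 1)       # linear degradation trend
t_fail = (threshold - intercept) / slope          # extrapolated failure time
print(f"estimated RUL: {t_fail - t[-1]:.1f} hours")
```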

  4. A PROPOSED HYBRID AGILE FRAMEWORK MODEL FOR MOBILE APPLICATIONS DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Ammar Khader Almasri

    2016-03-01

    Full Text Available The increase in mobile application systems and the high competition between companies have led to an increase in the number of mobile application projects. Mobile software development is a group of processes for creating software for mobile devices with limited resources such as small screens and low power. The development of mobile applications is challenging because of rapidly changing business requirements and the technical constraints of mobile systems. Developers therefore face the challenge of a dynamic environment and changing mobile application requirements. Mobile application teams should adopt software development methods that respond efficiently to these challenges. However, at the moment, there is limited knowledge about the suitability of different software practices for the development of mobile applications. According to many researchers, agile methodologies have been found to be the most suitable for mobile development projects, as such projects have short timescales, require flexibility, and aim to reduce waste and time to market. Finally, in this research we look for a suitable process model that conforms to the requirements of mobile applications: we investigate agile development methods to find a way of making the development of mobile applications easy and compatible with mobile device features.

  5. Objective Bayesian Comparison of Constrained Analysis of Variance Models.

    Science.gov (United States)

    Consonni, Guido; Paroli, Roberta

    2016-10-04

    In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means $\mu_1, \mu_2, \mu_3$ through an analysis of variance (ANOVA), a model may specify an equality constraint such as $\mu_1 = \mu_2 = \mu_3$, while another one may state an inequality constraint such as $\mu_1 \le \mu_2 \le \mu_3$, and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.

  6. CAD-model-based vision for space applications

    Science.gov (United States)

    Shapiro, Linda G.

    1988-01-01

    A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint from the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.

  7. Constitutive Modeling of Geomaterials Advances and New Applications

    CERN Document Server

    Zhang, Jian-Min; Zheng, Hong; Yao, Yangping

    2013-01-01

    The Second International Symposium on Constitutive Modeling of Geomaterials: Advances and New Applications (IS-Model 2012) is to be held in Beijing, China, during October 15-16, 2012. The symposium is organized by Tsinghua University, the International Association for Computer Methods and Advances in Geomechanics (IACMAG), the Committee of Numerical and Physical Modeling of Rock Mass, Chinese Society for Rock Mechanics and Engineering, and the Committee of Constitutive Relations and Strength Theory, China Institution of Soil Mechanics and Geotechnical Engineering, China Civil Engineering Society. This Symposium follows the first successful International Workshop on Constitutive Modeling held in Hong Kong, which was organized by Prof. JH Yin in 2007.   Constitutive modeling of geomaterials has been an active research area for a long period of time. Different approaches have been used in the development of various constitutive models. A number of models have been implemented in the numerical analyses of geote...

  8. Property Modelling for Applications in Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    , polymers, mixtures as well as separation processes. The presentation will highlight the framework (ICAS software) for property modeling, the property models and issues such as prediction accuracy, flexibility, maintenance and updating of the database. Also, application issues related to the use of property......, they are not always available. Also, it may be too expensive to measure them or it may take too much time. In these situations and when repetitive calculations are involved (as in process simulation), it is useful to have appropriate models to reliably predict the needed properties. A collection of methods tools...... such as database, property model library, model parameter regression, and, property-model based product-process design will be presented. The database contains pure component and mixture data for a wide range of organic chemicals. The property models are based on the combined group contribution and atom...

  9. Mathematical and numerical foundations of turbulence models and applications

    CERN Document Server

    Chacón Rebollo, Tomás

    2014-01-01

    With applications to climate, technology, and industry, the modeling and numerical simulation of turbulent flows are rich with history and modern relevance. The complexity of the problems that arise in the study of turbulence requires tools from various scientific disciplines, including mathematics, physics, engineering, and computer science. Authored by two experts in the area with a long history of collaboration, this monograph provides a current, detailed look at several turbulence models from both the theoretical and numerical perspectives. The k-epsilon, large-eddy simulation, and other models are rigorously derived and their performance is analyzed using benchmark simulations for real-world turbulent flows. Mathematical and Numerical Foundations of Turbulence Models and Applications is an ideal reference for students in applied mathematics and engineering, as well as researchers in mathematical and numerical fluid dynamics. It is also a valuable resource for advanced graduate students in fluid dynamics,...

  10. Linear Characteristic Graphical Models: Representation, Inference and Applications

    CERN Document Server

    Bickson, Danny

    2010-01-01

    Heavy-tailed distributions naturally occur in many real life problems. Unfortunately, it is typically not possible to compute inference in closed form in graphical models which involve such heavy-tailed distributions. In this work, we propose a novel simple linear graphical model for independent latent random variables, called the linear characteristic model (LCM), defined in the characteristic function domain. Using stable distributions, a heavy-tailed family of distributions which is a generalization of the Cauchy, Lévy and Gaussian distributions, we show, for the first time, how to compute both exact and approximate inference in such a linear multivariate graphical model. LCMs are not limited to stable distributions; in fact LCMs are always defined for any random variables (discrete, continuous or a mixture of both). We provide a realistic problem from the field of computer networks to demonstrate the applicability of our construction. Another potential application is iterative decoding of linear channels with non-...

  11. 12th Workshop on Stochastic Models, Statistics and Their Applications

    CERN Document Server

    Rafajłowicz, Ewaryst; Szajowski, Krzysztof

    2015-01-01

    This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.

  12. Distributionally Robust Return-Risk Optimization Models and Their Applications

    Directory of Open Access Journals (Sweden)

    Li Yang

    2014-01-01

    Full Text Available Based on the risk control of conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distributional form and the moments (mean and covariance matrix) of the random vector. It is difficult to solve these models directly. Using conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the investment strategy is safe.
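
    For context, the conditional value-at-risk control underlying these models can itself be written as a linear program (the Rockafellar-Uryasev formulation). The sketch below minimizes sample CVaR of a two-asset, long-only portfolio with scipy, using invented return scenarios; it does not attempt the paper's semidefinite robust reformulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
S, n, beta = 500, 2, 0.95
r = rng.normal([0.001, 0.0005], [0.02, 0.01], size=(S, n))  # hypothetical return scenarios

# Variables: [w_1..w_n, alpha, u_1..u_S]; minimize alpha + sum(u)/((1-beta)*S).
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - beta) * S))])
# Scenario constraints u_s >= -r_s.w - alpha, rewritten as -r_s.w - alpha - u_s <= 0.
A_ub = np.hstack([-r, -np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)])[None, :]  # fully invested
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w, alpha = res.x[:n], res.x[n]
print("weights:", w, "VaR estimate:", alpha)
```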

  13. Application of dimensional analysis in systems modeling and control design

    CERN Document Server

    Balaguer, Pedro

    2013-01-01

    Dimensional analysis is an engineering tool that is widely applied to numerous engineering problems, but has only recently been applied to control theory and problems such as identification and model reduction, robust control, adaptive control, and PID control. Application of Dimensional Analysis in Systems Modeling and Control Design provides an introduction to the fundamentals of dimensional analysis for control engineers, and shows how they can exploit the benefits of the technique to theoretical and practical control problems.

  14. TRANSLATOR OF FINITE STATE MACHINE MODEL PARAMETERS FROM MATLAB ENVIRONMENT INTO HUMAN-MACHINE INTERFACE APPLICATION

    OpenAIRE

    2012-01-01

    Technology and means for the automatic translation of FSM model parameters from a Matlab application to a human-machine interface application are proposed. An example of applying the technology to an electric apparatus model is described.

  15. Nonlinear Mathematical Modeling in Pneumatic Servo Position Applications

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Valdiero

    2011-01-01

    Full Text Available This paper addresses a new methodology for the mathematical modeling and selection of servo-pneumatic actuators based on the study of their dynamic behavior in engineering applications. The pneumatic actuator is very common in industrial applications because it has the following advantages: its maintenance is easy and simple, with relatively low cost, self-cooling properties, good power density (power/dimension ratio), fast acting with high accelerations, and installation flexibility. The proposed fifth-order nonlinear mathematical model represents the main characteristics of this nonlinear dynamic system, such as the servo-valve dead zone, the air flow-pressure relationship through the valve orifice, air compressibility, and friction effects between contact surfaces in the actuator seals. Simulation results show the dynamic performance for different pneumatic cylinders in order to see which features contribute to a better behavior of the system. Knowledge of this behavior allows an appropriate choice of pneumatic actuator, contributing to the success of its precise control in several applications.
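
    To give a flavor of such models, here is a deliberately simplified third-order sketch (piston position, velocity and differential pressure) with a valve dead-zone nonlinearity, integrated with scipy; the structure and every parameter value are invented for illustration, and this is not the paper's fifth-order model.

```python
from scipy.integrate import solve_ivp

def dead_zone(u, zl=-0.1, zr=0.1):
    # Servo-valve dead zone: no flow for commands inside [zl, zr].
    if u > zr:
        return u - zr
    if u < zl:
        return u - zl
    return 0.0

def actuator(t, s, u=0.5):
    # s = [x, v, dp]: piston position, velocity, differential pressure.
    x, v, dp = s
    m, A, c, beta = 1.0, 1e-3, 50.0, 1e5   # invented mass, area, damping, stiffness
    dx = v
    dv = (A * dp - c * v) / m              # force balance on the piston
    ddp = beta * (dead_zone(u) - A * v)    # valve flow in minus chamber volume change
    return [dx, dv, ddp]

sol = solve_ivp(actuator, (0, 1), [0.0, 0.0, 0.0], max_step=1e-3)
print(sol.y[0, -1])  # final piston position
```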

  16. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  17. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manip...

  18. On the applicability of models for outdoor sound (A)

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as parabolic equation methods, FFP methods, and boundary element methods all have advantages in certain situations. These approaches are accurate and predict...

  19. On the applicability of models for outdoor sound

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as Parabolic Equation methods, FFP methods and Boundary Element Methods all have advantages in certain situations. These approaches are accurate and predict ...

  20. Optimization of Process Parameters During Drilling of Glass-Fiber Polyester Reinforced Composites Using DOE and ANOVA

    Directory of Open Access Journals (Sweden)

    N.S. Mohan

    2010-09-01

    Full Text Available Polymer-based composite materials possess superior properties such as a high strength-to-weight ratio, a high stiffness-to-weight ratio and good corrosion resistance, and are therefore attractive for high-performance applications such as in the aerospace, defense and sporting goods industries. Drilling is one of the indispensable methods for building products with composite panels. Surface quality and dimensional accuracy play an important role in the performance of a machined component. In machining processes, however, the quality of the component is greatly influenced by the cutting conditions, tool geometry, tool material, machining process, chip formation, workpiece material, tool wear and vibration during cutting. Drilling tests were conducted on glass fiber reinforced plastic composite [GFRP] laminates using an instrumented CNC milling center. A series of experiments was conducted using a TRIAC VMC CNC machining center to correlate the cutting parameters and material parameters with the cutting thrust, torque and surface roughness. The measured results were collected and analyzed with the help of the commercial software packages MINITAB14 and Taly Profile. The surface roughness of the drilled holes was measured using a Rank Taylor Hobson Surtronic 3+ instrument. The method could be useful in predicting thrust, torque and surface roughness as a function of the process variables. The main objective is to optimize the process parameters to achieve low cutting thrust, low torque and good surface roughness. From the analysis it is evident that, among all the significant parameters, speed and drill size have a significant influence on cutting thrust, while drill size and specimen thickness influence torque and surface roughness. It was also found that feed rate does not have a significant influence on the characteristic outputs of the drilling process.
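
    Since this record turns on a designed experiment analyzed with ANOVA, here is a minimal sketch of a two-factor ANOVA on thrust in Python; the factor levels and thrust values are invented placeholders, and the statsmodels package is assumed to be available (the study itself used MINITAB14).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical two-factor drilling experiment: spindle speed (rpm) and drill
# diameter (mm) vs. measured thrust (N); all numbers invented for illustration.
df = pd.DataFrame({
    "speed":  [1000, 1000, 2000, 2000, 1000, 1000, 2000, 2000],
    "drill":  [4, 8, 4, 8, 4, 8, 4, 8],
    "thrust": [55.2, 80.1, 48.9, 70.4, 57.0, 82.3, 50.1, 72.8],
})

# Two-way ANOVA with interaction; factors treated as categorical.
model = ols("thrust ~ C(speed) * C(drill)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```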

  1. Analysis and Application for Integrity Model on Trusted Platform

    Institute of Scientific and Technical Information of China (English)

    TU Guo-qing; ZHANG Huan-guo; WANG Li-na; YU Dan-dan

    2005-01-01

    To build a trusted platform based on the Trusted Computing Platform Alliance (TCPA) recommendation, we analyze the integrity mechanism for such a PC platform in this paper. By combining an access control model with an information flow model, we put forward a combined process-based lattice model to enforce security. This model creates a trust chain by which we can manage a series of processes from a core root-of-trust module to other application modules. In the model, once the trust chain is created and managed correctly, the integrity of the computer's hardware and software is maintained, as are its confidentiality and authenticity. Moreover, a relevant implementation of the model is explained.

  2. Modelling and simulation of diffusive processes methods and applications

    CERN Document Server

    Basu, SK

    2014-01-01

    This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport

  3. Cascaded process model based control: packed absorption column application.

    Science.gov (United States)

    Govindarajan, Anand; Jayaraman, Suresh Kumar; Sethuraman, Vijayalakshmi; Raul, Pramod R; Rhinehart, R Russell

    2014-03-01

    Nonlinear, adaptive, process-model based control is demonstrated in a cascaded single-input-single-output mode for pressure drop control in a pilot-scale packed absorption column. The process is shown to be nonlinear. Control is demonstrated in both servo and regulatory modes, for no wind-up in a constrained situation, and for bumpless transfer. Model adaptation is demonstrated and shown to provide process insight. The application procedure is revealed as a design guide to aid others in implementing process-model based control.

  4. Applications of computer modeling at Wrightson, Johnson, Haddon & Williams, Inc

    Science.gov (United States)

    Johnson, James A.; Seep, Benjamin C.

    2002-05-01

    Computer modeling has become useful as an investigative tool and as a client communication and explanation tool in the field of acoustical consulting. A variety of in-house developed and commercially available applications is in constant use at the firm of Wrightson, Johnson, Haddon & Williams. Examples likely to be demonstrated (depending on time) include use of digital filtering for building exterior noise reduction comparisons, a shell isolation rating (SIR) model, simple sound barrier programs, an HVAC spreadsheet, a visual sightline modeling tool, specular sound reflections in a semicircular arc, and some uses of CATT-acoustic auralizations.

  5. The Application of the Jerome Model and the Horace Model in Translation Practice

    Institute of Scientific and Technical Information of China (English)

    WU Jiong

    2015-01-01

    The Jerome model and the Horace model have had a great influence on translation theory and practice since ancient times. This paper starts from a comparative study of the two models, mainly discussing their similarities, differences and weaknesses. Then, through a case study, it analyzes the application of the two models to English-Chinese translation. In the end, it draws the conclusion that a generally accepted translation criterion does not exist; different types of texts require different translation criteria.

  6. Genome Editing and Its Applications in Model Organisms.

    Science.gov (United States)

    Ma, Dongyuan; Liu, Feng

    2015-12-01

    Technological advances are important for innovative biological research. Development of molecular tools for DNA manipulation, such as zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly-interspaced short palindromic repeat (CRISPR)/CRISPR-associated (Cas), has revolutionized genome editing. These approaches can be used to develop potential therapeutic strategies to effectively treat heritable diseases. In the last few years, substantial progress has been made in CRISPR/Cas technology, including technical improvements and wide application in many model systems. This review describes recent advancements in genome editing with a particular focus on CRISPR/Cas, covering the underlying principles, technological optimization, and its application in zebrafish and other model organisms, disease modeling, and gene therapy used for personalized medicine.

  7. Genome Editing and Its Applications in Model Organisms

    Directory of Open Access Journals (Sweden)

    Dongyuan Ma

    2015-12-01

    Full Text Available Technological advances are important for innovative biological research. Development of molecular tools for DNA manipulation, such as zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly-interspaced short palindromic repeat (CRISPR)/CRISPR-associated (Cas) system, has revolutionized genome editing. These approaches can be used to develop potential therapeutic strategies to effectively treat heritable diseases. In the last few years, substantial progress has been made in CRISPR/Cas technology, including technical improvements and wide application in many model systems. This review describes recent advancements in genome editing with a particular focus on CRISPR/Cas, covering the underlying principles, technological optimization, and its application in zebrafish and other model organisms, disease modeling, and gene therapy used for personalized medicine.

  8. Systems and models with anticipation in physics and its applications

    Science.gov (United States)

    Makarenko, A.

    2012-11-01

    Investigations of recent physical processes and real applications of models require new and increasingly improved models which involve new properties. One such property is anticipation (that is, taking into account some advanced effects). A special kind of advanced system is considered, namely the strong anticipatory systems introduced by D. Dubois. Some definitions, examples and peculiarities of solutions are described. The main feature is the presumable multivaluedness of the solutions. Possible physical examples of such systems are proposed: self-organization problems; dynamical chaos; synchronization; advanced potentials; structures at the micro-, meso- and macro-levels; cellular automata; computing; neural network theory. Some applications for modeling social, economic, technical and natural systems are also described.

  9. Semantic Information Modeling for Emerging Applications in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qunzhi; Natarajan, Sreedhar; Simmhan, Yogesh; Prasanna, Viktor

    2012-04-16

    Smart Grid modernizes the power grid by integrating digital and information technologies. Millions of smart meters, intelligent appliances and communication infrastructures are under deployment, allowing advanced IT applications to be developed to secure and manage power grid operations. Demand response (DR) is one such emerging application to optimize electricity demand by curtailing/shifting power load when peak load occurs. Existing DR approaches are mostly based on static plans such as pricing policies and load shedding schedules. However, improvements to power management applications rely on data emanating from existing and new information sources with the growth of the Smart Grid information space. In particular, dynamic DR algorithms depend on information from smart meters that report interval-based power consumption measurements, HVAC systems that monitor building heat and humidity, and even weather forecast services. In order for emerging Smart Grid applications to take advantage of the diverse data influx, extensible information integration is required. In this paper, we develop an integrated Smart Grid information model using Semantic Web techniques and present case studies of using semantic information for dynamic DR. We show that the semantic model facilitates information integration and knowledge representation for developing the next generation of Smart Grid applications.
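
    As a minimal illustration of representing such information semantically, the sketch below records one smart-meter interval reading as RDF triples with the rdflib package; the namespace, class and property names are invented for illustration and are not the paper's actual information model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SG = Namespace("http://example.org/smartgrid#")  # hypothetical vocabulary
g = Graph()
g.bind("sg", SG)

meter, reading = SG["meter42"], SG["meter42-reading-001"]
g.add((meter, RDF.type, SG.SmartMeter))
g.add((reading, RDF.type, SG.IntervalReading))
g.add((reading, SG.meter, meter))
g.add((reading, SG.kWh, Literal(1.7, datatype=XSD.decimal)))
g.add((reading, SG.start, Literal("2012-04-16T10:15:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```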

  10. Model-Driven Development of Automation and Control Applications: Modeling and Simulation of Control Sequences

    Directory of Open Access Journals (Sweden)

    Timo Vepsäläinen

    2014-01-01

    Full Text Available The scope and responsibilities of control applications are increasing due to, for example, the emergence of the industrial internet. To meet the challenge, model-driven development techniques have been under active research in this application domain. Simulations, which have traditionally been used in the domain, however, have not yet been sufficiently integrated into model-driven control application development. In this paper, a model-driven development process that includes support for design-time simulations is complemented with support for simulating sequential control functions. The approach is implemented with open source tools and demonstrated by creating and simulating a control system model in closed loop with a large and complex model of a paper industry process.

  11. Application distribution model and related security attacks in VANET

    Science.gov (United States)

    Nikaein, Navid; Kanti Datta, Soumya; Marecar, Irshad; Bonnet, Christian

    2013-03-01

    In this paper, we present a model for application distribution and the related security attacks in dense vehicular ad hoc networks (VANET) and sparse VANET, which form a delay tolerant network (DTN). We study the vulnerabilities of VANET to evaluate the attack scenarios and introduce a new attacker's model as an extension to the work done in [6]. Then a VANET model is proposed that supports application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution are studied in detail. We have identified key attacks (e.g. malware, spamming and phishing, software attacks and threats to location privacy) for dense VANET and two attack scenarios for sparse VANET. It is shown that attacks can be launched by distributing malicious applications and injecting malicious code into the On Board Unit (OBU) by exploiting OBU software security holes. The consequences of such security attacks are described. Finally, countermeasures, including the concept of sandboxing, are also presented in depth.

  12. Monte Carlo modelling of positron transport in real world applications

    Science.gov (United States)

    Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj

    2014-05-01

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurement of cross-sections for positron interactions with atoms and molecules, and swarm calculations for positrons in gases, has led to the establishment of good cross-section sets for positron interactions with the gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results of modelling Surko-type positron buffer gas traps, the application of the rotating wall technique, and the simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.

  13. Structural Equation Modeling: Theory and Applications in Forest Management

    Directory of Open Access Journals (Sweden)

    Tzeng Yih Lam

    2012-01-01

    Full Text Available Forest ecosystem dynamics are driven by a complex array of simultaneous cause-and-effect relationships. Understanding this complex web requires specialized analytical techniques such as Structural Equation Modeling (SEM). The SEM framework and implementation steps are outlined in this study, and we then demonstrate the technique by application to overstory-understory relationships in mature Douglas-fir forests in the northwestern USA. A SEM model was formulated with (1) a path model representing the effects of successively higher layers of vegetation on late-seral herbs through processes such as light attenuation and (2) a measurement model accounting for measurement errors. The fitted SEM model suggested a direct negative effect of light attenuation on late-seral herb cover but a direct positive effect of northern aspect. Moreover, many processes have indirect effects mediated through midstory vegetation. SEM is recommended as a forest management tool for designing silvicultural treatments and systems for attaining complex arrays of management objectives.

  14. Recent improvements in atmospheric environment models for Space Station applications

    Science.gov (United States)

    Anderson, B. Jeffrey; Suggs, Ronnie J.; Smith, Robert E.; Hickey, Michael; Catlett, Karen

    1991-01-01

    The capability of empirical models of the earth's thermosphere must continually be updated if they are to keep pace with their many applications in the aerospace industry. This paper briefly summarizes the progress of several such efforts in support of the Space Station Program. The efforts consist of the development of data bases, analytical studies of the data, and the evaluation and intercomparison of thermosphere models. The geomagnetic storm model of Slowey does not compare as well to the MSIS-86 model as does the Marshall Engineering Thermosphere (MET). LDEF orbit decay data are used to evaluate the performance of the MET and MSIS-86 during a period of high solar activity, equal to or exceeding the highest levels present during the time of the original data sets upon which these models are based.

  15. Application of Metamodels to Identification of Metallic Materials Models

    Directory of Open Access Journals (Sweden)

    Maciej Pietrzyk

    2016-01-01

    Full Text Available Improvement of the efficiency of the inverse analysis (IA) for various material tests was the objective of the paper. Flow stress models and microstructure evolution models of varying complexity of mathematical formulation were considered. Different types of experiments were performed and the results were used for the identification of the models. Sensitivity analysis was performed for all the models and the importance of the parameters in these models was evaluated. Metamodels based on artificial neural networks were proposed to simulate experiments in the inverse solution. The analysis performed has shown that a significant decrease in computing time can be achieved when metamodels substitute for the finite element model in the inverse analysis, which is the case in the identification of flow stress models. Application of metamodels gave good results for flow stress models based on closed-form equations accounting for the influence of temperature, strain, and strain rate (4 coefficients), and additionally for softening due to recrystallization (5 coefficients) and for softening and saturation (7 coefficients). Good accuracy and high efficiency of the IA were confirmed. On the contrary, identification of microstructure evolution models, including phase transformation models, did not give a noticeable reduction of the computing time.

  16. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
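
    A minimal sketch of the underlying specification-check idea — compare a nonparametric estimate of the transition density with a fitted parametric one — is given below. It uses a simulated AR(1) series and off-the-shelf kernel density estimates; the process, conditioning point and bandwidths are illustrative assumptions, not the authors' simultaneous-envelope construction.

        import numpy as np
        from scipy.stats import gaussian_kde, norm

        rng = np.random.default_rng(0)

        # Simulate a simple observable Markov series: x_t = phi*x_{t-1} + eps_t
        phi, sigma, n = 0.6, 1.0, 5000
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + sigma * rng.standard_normal()

        # Nonparametric transition density p(x_t | x_{t-1}) = joint / marginal
        joint = gaussian_kde(np.vstack([x[:-1], x[1:]]))
        marginal = gaussian_kde(x[:-1])

        # Parametric fit of the same AR(1) transition density
        phi_hat = x[:-1] @ x[1:] / (x[:-1] @ x[:-1])
        sig_hat = (x[1:] - phi_hat * x[:-1]).std(ddof=1)

        # Compare both densities conditional on a representative previous state
        x_prev, grid = 0.5, np.linspace(-3, 3, 61)
        p_nonpar = joint(np.vstack([np.full_like(grid, x_prev), grid])) / marginal(x_prev)
        p_par = norm.pdf(grid, loc=phi_hat * x_prev, scale=sig_hat)
        print("max absolute deviation:", np.abs(p_nonpar - p_par).max())

    A real test would sweep the conditioning state over a grid and compare deviations against a simultaneous confidence envelope rather than inspecting a single slice.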

  17. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal Modulation Transfer Functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. The model is therefore used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional

  18. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement error, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
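
    The attenuation of correlations, for instance, is easy to reproduce by simulation. The sketch below (all values invented for illustration) adds increasing random measurement error to two correlated variables; the observed correlation shrinks in line with the classical attenuation formula r_obs = r_true * sqrt(rel_x * rel_y), where rel denotes reliability.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000

        # True scores with correlation 0.8 by construction
        true_x = rng.standard_normal(n)
        true_y = 0.8 * true_x + 0.6 * rng.standard_normal(n)

        for err_sd in (0.0, 0.5, 1.0):
            # Observed scores = true scores + random measurement error
            obs_x = true_x + err_sd * rng.standard_normal(n)
            obs_y = true_y + err_sd * rng.standard_normal(n)
            r = np.corrcoef(obs_x, obs_y)[0, 1]
            reliability = 1.0 / (1.0 + err_sd**2)  # var(true) / var(observed)
            print(f"error sd {err_sd:.1f}: observed r = {r:.3f} "
                  f"(attenuation predicts {0.8 * reliability:.3f})")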

  19. APPLICATION OF REGRESSION MODELLING TECHNIQUES IN DESALINATION OF SEA WATER BY MEMBRANE DISTILLATION

    Directory of Open Access Journals (Sweden)

    SELVI S. R

    2015-08-01

    Full Text Available The objective of this work is to gain an idea of the statistical significance of experimental parameters on the performance of membrane distillation. In this work, a raw sea water sample was collected from Puducherry without pretreatment and desalinated using the direct contact membrane distillation method. Experimental data analysis was carried out using statistical methods. The experimental data involve the effects of feed temperature, feed flow rate and feed concentration on the permeate flux. A regression model was developed to correlate the significance of the input parameters (feed temperature, feed concentration and feed flow rate) with the output parameter (permeate flux) in the membrane distillation process. Since the performance of membrane distillation in desalination is characterised by the permeate flux, regression modelling using the simple linear method was carried out. Goodness of model fit always has to be validated, so the regression model was validated using ANOVA. ANOVA estimates for the parameter study are given, the coefficients obtained by regression analysis are specified in the regression equation, and it is concluded that the input parameter with the highest coefficient is significant and highly influences the response. Feed flow rate and feed temperature have a higher influence on permeate flux than feed concentration. The coefficient of feed concentration was found to be negative, which indicates a less significant factor for permeate flux. The chemical composition of the sea water was given by water quality analysis. The TDS of the membrane-distilled water was found to be 18 ppm, compared with an initial feed TDS of 27,720 ppm. From the experimental work, salt rejection was found to be 99%, and the water analysis report confirms the quality of the distillate obtained by this desalination process as potable water.
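
    A hedged sketch of this analysis pipeline — fit a linear regression of permeate flux on the three inputs, then validate the fit with ANOVA — is shown below using synthetic stand-in data; the coefficients and noise level are invented, not the paper's measurements.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 30

        # Synthetic stand-ins for feed temperature (degC), flow rate and
        # concentration (ppm TDS); flux responds positively to temperature and
        # flow, negatively to concentration, mimicking the reported signs
        df = pd.DataFrame({
            "temp": rng.uniform(40, 70, n),
            "flow": rng.uniform(1, 5, n),
            "conc": rng.uniform(20_000, 35_000, n),
        })
        df["flux"] = (0.5 * df["temp"] + 2.0 * df["flow"]
                      - 1e-4 * df["conc"] + rng.normal(0, 2, n))

        model = smf.ols("flux ~ temp + flow + conc", data=df).fit()
        print(model.params)                      # signs and sizes of coefficients
        print(sm.stats.anova_lm(model, typ=2))   # ANOVA validation of the fit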

  20. High-fidelity geometric modeling for biomedical applications

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zeyun [Univ. of California, San Diego, CA (United States). Dept. of Mathematics; Holst, Michael J. [Univ. of California, San Diego, CA (United States). Dept. of Mathematics; Andrew McCammon, J. [Univ. of California, San Diego, CA (United States). Dept. of Chemistry and Biochemistry; Univ. of California, San Diego, CA (United States). Dept. of Pharmacology

    2008-05-19

    In this paper, we describe a combination of algorithms for high-fidelity geometric modeling and mesh generation. Although our methods and implementations are application-neutral, our primary target application is multiscale biomedical models that range in scale across the molecular, cellular, and organ levels. Our software toolchain implementing these algorithms is general in the sense that it can take as input a molecule in PDB/PQR form, a 3D scalar volume, or a user-defined triangular surface mesh that may have very low quality. The main goal of the work presented here is to generate high-quality and smooth surface triangulations from the aforementioned inputs, and to reduce mesh sizes by mesh coarsening. Tetrahedral meshes are also generated for finite element analysis in biomedical applications. Experiments on a number of bio-structures are demonstrated, showing that our approach possesses several desirable properties: feature preservation, local adaptivity, high quality, and smoothness (for surface meshes). Finally, the availability of this software toolchain will give researchers in computational biomedicine and other modeling areas access to higher-fidelity geometric models.

  1. Intelligent control based on intelligent characteristic model and its application

    Institute of Scientific and Technical Information of China (English)

    吴宏鑫; 王迎春; 邢琰

    2003-01-01

    This paper presents a new intelligent control method based on an intelligent characteristic model for a kind of complicated plant with nonlinearities and uncertainties, whose controlled output variables cannot be measured online continuously. The basic idea of this method is to utilize intelligent techniques to form the characteristic model of the controlled plant according to the principle of combining the characteristics of the plant with the control requirements, and then to present a new design method for an intelligent controller based on this characteristic model. First, the modeling principles and expression of the intelligent characteristic model are presented. Then, based on the description of the intelligent characteristic model, the design principles and methods of the intelligent controller, composed of several open-loop and closed-loop sub-controllers with qualitative and quantitative information, are given. Finally, the application of this method to alumina concentration control in a real aluminum electrolytic process is introduced. It has been proved in practice that the above methods not only are easy to implement in engineering design but also avoid the trial-and-error of general intelligent controllers. The method has taken good effect in application: achieving long-term stable control of low alumina concentration and greatly increasing the controlled ratio of anode effect from 60% to 80%.

  2. Language Model Applications to Spelling with Brain-Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Anderson Mora-Cortes

    2014-03-01

    Full Text Available Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies.

  3. A Multi-Model Approach for Uncertainty Propagation and Model Calibration in CFD Applications

    CERN Document Server

    Wang, Jian-xun; Xiao, Heng

    2015-01-01

    Proper quantification and propagation of uncertainties in computational simulations are of critical importance. This issue is especially challenging for CFD applications. A particular obstacle for uncertainty quantification in CFD problems is the large model discrepancy associated with the CFD models used for uncertainty propagation. Neglecting or improperly representing the model discrepancies leads to inaccurate and distorted uncertainty distributions for the Quantities of Interest. High-fidelity models, being accurate yet expensive, can accommodate only a small ensemble of simulations and thus lead to large interpolation and/or sampling errors; low-fidelity models can propagate a large ensemble, but can introduce large modeling errors. In this work, we propose a multi-model strategy to account for the influences of model discrepancies in uncertainty propagation and to reduce their impact on the predictions. Specifically, we take advantage of CFD models of multiple fidelities to estimate the model ...

  4. The DO ART Model: An Ethical Decision-Making Model Applicable to Art Therapy

    Science.gov (United States)

    Hauck, Jessica; Ling, Thomson

    2016-01-01

    Although art therapists have discussed the importance of taking a positive stance in terms of ethical decision making (Hinz, 2011), an ethical decision-making model applicable for the field of art therapy has yet to emerge. As the field of art therapy continues to grow, an accessible, theoretically grounded, and logical decision-making model is…

  5. MOGO: Model-Oriented Global Optimization of Petascale Applications

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D.; Shende, Sameer S.

    2012-09-14

    The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale Systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework where empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO was able to make a reasonable impact on existing DOE applications and systems. New tools and techniques were developed which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.

  6. Videogrammetric Model Deformation Measurement Technique for Wind Tunnel Applications

    Science.gov (United States)

    Barrows, Danny A.

    2006-01-01

    Videogrammetric measurement technique developments at NASA Langley were driven largely by the need to quantify model deformation at the National Transonic Facility (NTF). This paper summarizes recent wind tunnel applications and issues at the NTF and other NASA Langley facilities, including the Transonic Dynamics Tunnel, 31-Inch Mach 10 Tunnel, 8-Ft High Temperature Tunnel, and the 20-Ft Vertical Spin Tunnel. In addition, several adaptations of wind tunnel techniques to non-wind tunnel applications are summarized. These applications include wing deformation measurements on vehicles in flight, determining aerodynamic loads based on optical elastic deformation measurements, measurements on ultra-lightweight and inflatable space structures, and the use of an object-to-image plane scaling technique to support NASA's Space Exploration program.

  7. Mobile Cloud Computing: A Comparison of Application Models

    CERN Document Server

    Kovachev, Dejan; Klamma, Ralf

    2011-01-01

    Cloud computing is an emerging concept combining many fields of computing. The foundation of cloud computing is the delivery of services, software and processing capacity over the Internet, reducing cost, increasing storage, automating systems, decoupling service delivery from the underlying technology, and providing flexibility and mobility of information. However, the actual realization of these benefits is far from being achieved for mobile applications and opens many new research questions. In order to better understand how to facilitate the building of mobile cloud-based applications, we have surveyed existing work in mobile computing through the prism of cloud computing principles. We give a definition of mobile cloud computing and provide an overview of the results from this review, in particular, models of mobile cloud applications. We also highlight research challenges in the area of mobile cloud computing. We conclude with recommendations for how this better understanding of mobile cloud computing can ...

  8. On Helical Projection and Its Application in Screw Modeling

    Directory of Open Access Journals (Sweden)

    Riliang Liu

    2014-04-01

    Full Text Available As helical surfaces, in their many and varied forms, are finding more and more applications in engineering, new approaches to their efficient design and manufacture are desired. To that end, the helical projection method that uses curvilinear projection lines to map a space object to a plane is examined in this paper, focusing on its mathematical model and characteristics in terms of graphical representation of helical objects. A number of interesting projective properties are identified in regard to straight lines, curves, and planes, and then the method is further investigated with respect to screws. The result shows that the helical projection of a cylindrical screw turns out to be a Jordan curve, which is determined by the screw's axial profile and number of flights. Based on the projection theory, a practical approach to the modeling of screws and helical surfaces is proposed and illustrated with examples, and its possible application in screw manufacturing is discussed.

  9. Powder consolidation using cold spray process modeling and emerging applications

    CERN Document Server

    Moridi, Atieh

    2017-01-01

    This book first presents different approaches to modeling of the cold spray process with the aim of extending current understanding of its fundamental principles and then describes emerging applications of cold spray. In the coverage of modeling, careful attention is devoted to the assessment of critical and erosion velocities. In order to reveal the phenomenological characteristics of interface bonding, severe, localized plastic deformation and material jet formation are studied. Detailed consideration is also given to the effect of macroscopic defects such as interparticle boundaries and subsequent splat boundary cracking on the mechanical behavior of cold spray coatings. The discussion of applications focuses in particular on the repair of damaged parts and additive manufacturing in various disciplines from aerospace to biomedical engineering. Key aspects include a systematic study of defect shape and the ability of cold spray to fill the defect, examination of the fatigue behavior of coatings for structur...

  10. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strength, charge density and ionic potential. Based on the atomic electron configuration and various quantum-mechanically built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has much more versatile and exceptional applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with bond property data and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  11. Application of mesoscale modeling optimization to development of advanced materials

    Institute of Scientific and Technical Information of China (English)

    SONG Xiaoyan

    2004-01-01

    The rapid development of computer modeling in recent years offers opportunities for materials preparation in a more economical and efficient way. In the present paper, a practicable route for the research and development of advanced materials by applying visual and quantitative modeling techniques on the mesoscale is introduced. A 3D simulation model is developed to describe the microstructure evolution during the whole process of deformation, recrystallization and grain growth in a material containing particles. In the light of simulation optimization, long-term stabilized fine grain structures ideal for high-temperature applications are designed and produced. In addition, the feasibility, reliability and prospects of material development based on mesoscale modeling are discussed.

  12. Development and application of new quality model for software projects.

    Science.gov (United States)

    Karnavel, K; Dillibabu, R

    2014-01-01

    The IT industry employs a number of models to identify defects in the construction of software projects. In this paper, we present COQUALMO and its limitations, and aim to increase quality without increasing cost and time. The computation time, cost, and effort required to predict residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted. The application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows there is a significant improvement in the quality of the software projects.

  13. Radiation Belt Environment Model: Application to Space Weather and Beyond

    Science.gov (United States)

    Fok, Mei-Ching H.

    2011-01-01

    Understanding the dynamics and variability of the radiation belts is of great scientific and space weather significance. A physics-based Radiation Belt Environment (RBE) model has been developed to simulate and predict radiation particle intensities. The RBE model considers the influences of the solar wind, ring current and plasmasphere. It takes into account particle drift in realistic, time-varying magnetic and electric fields, and includes the diffusive effects of wave-particle interactions with various wave modes in the magnetosphere. The RBE model has been used to perform event studies and real-time prediction of energetic electron fluxes. In this talk, we will describe the RBE model equation, inputs and capabilities. Recent advancements in space weather applications and artificial radiation belt studies will be discussed as well.

  14. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  15. Modern methodology and applications in spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

    This book provides a modern introductory tutorial on specialized methodological and applied aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter deals with non-parametric Bayesian inference via a recently developed framework known as kernel mean embedding which has had a significant influence in machine learning disciplines. The second chapter takes up non-parametric statistical methods for spatial field reconstruction and exceedance probability estimation based on Gaussian process-based models in the context of wireless sensor network data. The third chapter presents signal-processing methods applied to acoustic mood analysis based on music signal analysis. The fourth chapter covers models that are applicable to time series modeling in the domain of speech and language processing. This includes aspects of factor analysis, independent component an...

  16. Sensors advancements in modeling, design issues, fabrication and practical applications

    CERN Document Server

    Mukhopadhyay, Subhash Chandra

    2008-01-01

    Sensors are the most important component in any system and engineers in any field need to understand the fundamentals of how these components work, how to select them properly and how to integrate them into an overall system. This book has outlined the fundamentals, analytical concepts, modelling and design issues, technical details and practical applications of different types of sensors, electromagnetic, capacitive, ultrasonic, vision, Terahertz, displacement, fibre-optic and so on. The book: addresses the identification, modeling, selection, operation and integration of a wide variety of se

  17. Models and applications of chaos theory in modern sciences

    CERN Document Server

    Zeraoulia, Elhadj

    2011-01-01

    This book presents a select group of papers that provide a comprehensive view of the models and applications of chaos theory in medicine, biology, ecology, economy, electronics, mechanical engineering, and the human sciences. Covering both the experimental and theoretical aspects of the subject, it examines a range of current topics of interest. It considers the problems arising in the study of discrete and continuous time chaotic dynamical systems modeling the several phenomena in nature and society, highlighting powerful techniques being developed to meet these challenges that stem from the area of nonli

  18. Fired Models of Air-gun Source and Its Application

    Institute of Scientific and Technical Information of China (English)

    Luo Guichun; Ge Hongkui; Wang Baoshan; Hu Ping; Mu Hongwang; Chen Yong

    2008-01-01

    Air-gun is an important active seismic source. With the development of air-gun array theory, the technique of air-gun array design has become mature and is widely used in petroleum exploration and geophysics. In order to adapt it to different research domains, different combinations and fired models are needed. At present, there are two fired models of the air-gun source, namely, reinforced initial pulse and reinforced first bubble pulse. The firing time, spacing between single guns, frequency and resolution of the two models are different. This comparison can supply the basis for their extensive application.

  19. Application of Bayesian Hierarchical Prior Modeling to Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Shutin, Dmitriy

    2012-01-01

    Existing methods for sparse channel estimation typically provide an estimate computed as the solution maximizing an objective function defined as the sum of the log-likelihood function and a penalization term proportional to the l1-norm of the parameter of interest. However, other penalization... The estimators result as an application of the variational message-passing algorithm on the factor graph representing the signal model extended with the hierarchical prior models. Numerical results demonstrate the superior performance of our channel estimators as compared to traditional and state-of-the-art sparse methods.

  20. Proposed Bilingual Model for Right to Left Language Applications

    Directory of Open Access Journals (Sweden)

    Farhan M Al Obisat

    2016-09-01

    Full Text Available Using right-to-left languages (RLL) in software programming requires switching the direction of many components in the interface. Preserving the original interface layout and only changing the language may result in different semantics or interpretations of the content. However, this aspect is often dismissed in the field. This research therefore proposes a Bilingual Model (BL) to check and correct the directions in social media applications. Moreover, test-driven development (TDD) for RLL, such as Arabic, is considered in the testing methodologies. Similarly, the bilingual analysis has to follow both the TDD and BL models.

  1. Modelling application for cognitive reliability and error analysis method

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2013-10-01

    Full Text Available The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the fully automatic factory model has led to partially automated configurations of production systems. In this scenario, therefore, the centrality and responsibility of the role entrusted to human operators are heightened, because it requires problem-solving and decision-making ability. Thus, the human operator is the core of a cognitive process that leads to decisions, influencing the safety of the whole system as a function of his or her reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  2. Ozone modeling within plasmas for ozone sensor applications

    OpenAIRE

    Arshak, Khalil; Forde, Edward; Guiney, Ivor

    2007-01-01

    Ozone (O3) is potentially hazardous to human health, and accurate prediction and measurement of this gas are essential in addressing its associated health risks. This paper presents theory to predict the levels of ozone concentration emitted from a dielectric barrier discharge (DBD) plasma for ozone sensing applications. This is done by postulating the kinetic model for ozone generation, with a DBD plasma at atmospheric pressure in air, in the form of a set of rate equations....

  3. Generalized Bogoliubov Polariton Model: An Application to Stock Exchange Market

    Science.gov (United States)

    Thuy Anh, Chu; Anh, Truong Thi Ngoc; Lan, Nguyen Tri; Viet, Nguyen Ai

    2016-06-01

    A generalized Bogoliubov method for investigating non-simple and complex systems was developed. We take a two-branch polariton Hamiltonian model in the second quantization representation and replace the energies of the quasi-particles by two distribution functions of the research objects. Application to the stock exchange market was taken as an example, where the change of the form of the return distribution functions from Boltzmann-like to Gaussian-like was studied.

  4. House Price Risk Models for Banking and Insurance Applications

    OpenAIRE

    Katja Hanewald; Michael Sherris

    2011-01-01

    The recent international credit crisis has highlighted the significant exposure that banks and insurers, especially mono-line credit insurers, have to residential house price risk. This paper provides an assessment of risk models for residential property for applications in banking and insurance including pricing, risk management, and portfolio management. Risk factors and heterogeneity of house price returns are assessed at a postcode-level for house prices in the major capital city of Sydne...

  5. Mathematical problem solving, modelling, applications, and links to other subjects

    OpenAIRE

    Blum, Werner; Niss, Mogens

    1989-01-01

    The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of developm...

  6. Generalisation of geographic information cartographic modelling and applications

    CERN Document Server

    Mackaness, William A; Sarjakoski, L Tiina

    2011-01-01

    Theoretical and Applied Solutions in Multi-Scale Mapping. Users have come to expect instant access to up-to-date geographical information, with global coverage, presented at widely varying levels of detail, as digital and paper products; customisable data that can readily be combined with other geographic information. These requirements present an immense challenge to those supporting the delivery of such services (National Mapping Agencies (NMAs), government departments, and private businesses). Generalisation of Geographic Information: Cartographic Modelling and Applications provides a detailed review

  7. Application of the Bayesian dynamic survival model in medicine.

    Science.gov (United States)

    He, Jianghua; McGee, Daniel L; Niu, Xufeng

    2010-02-10

    The Bayesian dynamic survival model (BDSM), a time-varying coefficient survival model from the Bayesian perspective, was proposed in the early 1990s but has not been widely used or discussed. In this paper, we describe the model structure of the BDSM and introduce two estimation approaches for BDSMs: the Markov Chain Monte Carlo (MCMC) approach and the linear Bayesian (LB) method. The MCMC approach estimates model parameters through sampling and is computationally intensive. With the newly developed geoadditive survival models and the software BayesX, the BDSM is available for general applications. The LB approach is easier in terms of computation, but it requires the prespecification of some unknown smoothing parameters. In a simulation study, we use the LB approach to show the effects of smoothing parameters on the performance of the BDSM and propose an ad hoc method for identifying appropriate values for those parameters. We also demonstrate the performance of the MCMC approach compared with the LB approach and a penalized partial likelihood method available in R packages. A gastric cancer trial is utilized to illustrate the application of the BDSM.

  8. Advance in Application of Regional Climate Models in China

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei; YAN Minhua; CHEN Panqin; XU Helan

    2008-01-01

    Regional climate models have become powerful tools for simulating regional climate and its change process, and have been widely used in China. Using regional climate models, research results have been obtained on the following aspects: 1) the numerical simulation of East Asian monsoon climate, including exceptional monsoon precipitation, summer precipitation distribution, East Asian circulation, multi-year climate average conditions, the summer rain belt and so on; 2) the simulation of the arid climate of western China, including the thermal effect of the Qinghai-Tibet Plateau, plateau precipitation in the Qilian Mountains, and the impacts of greenhouse effects (CO2 doubling) upon climate in western China; and 3) the simulation of the climate effect of underlying surface changes, including the effect of soil on climate formation, the influence of terrain on precipitation, the effect of regional soil degradation on regional climate, the effect of various underlying surfaces on regional climate, the effect of land-sea contrast on climate formation, the influence of snow cover over the plateau regions on the regional climate, the effect of vegetation changes on the regional climate, etc. In the process of applying regional climate models, the performance of the models is improved so that better simulation results are obtained. Finally, some suggestions are made about the application of regional climate models in regional climate research in the future.

  9. MULTI-WAVELENGTH MODELLING OF DUSTY GALAXIES. GRASIL AND APPLICATIONS

    Directory of Open Access Journals (Sweden)

    L. Silva

    2009-01-01

    Full Text Available The spectral energy distribution of galaxies contains convolved information on their stellar and gas content and on the star formation rate and history. It is therefore the most direct probe of galaxy properties. Each spectral range is mostly dominated by specific emission sources or radiative processes, so that only by modelling the whole spectral range is it possible to de-convolve and interpret the information contained in the SED in terms of SFR and galaxy evolution in general. The ingredients and kinds of computation considered in models for the SEDs of galaxies depend on their aims. Theoretical models have the advantage of a broader interpretative and predictive power with respect to observationally calibrated semi-empirical approaches, the major drawback being a longer computational time. I summarize the main features of GRASIL, a code to compute the UV to radio SED of galaxies, treating radiative transfer and dust emission with particular care. It has been widely applied to interpret observations and to make predictions for semi-analytical galaxy formation models. I present in particular the applications in the context of galaxy models, and the new method implemented in GRASIL, based on an artificial neural network algorithm, to cope with the computing time for cosmological applications.

  10. [Application of multilevel models in the evaluation of bioequivalence (II).].

    Science.gov (United States)

    Liu, Qiao-lan; Shen, Zhuo-zhi; Li, Xiao-song; Chen, Feng; Yang, Min

    2010-03-01

    The main purpose of this paper is to explore the applicability of multivariate multilevel models to bioequivalence evaluation. Using an example of a 4 x 4 cross-over test design for evaluating the bioequivalence of homemade and imported rosiglitazone maleate tablets, this paper illustrates the multivariate-model-based method for partitioning the total variances of ln(AUC) and ln(C(max)) in the framework of multilevel models. It examines the feasibility of multivariate multilevel models in directly evaluating average bioequivalence (ABE), population bioequivalence (PBE) and individual bioequivalence (IBE). Taking into account the correlation between ln(AUC) and ln(C(max)) of the rosiglitazone maleate tablets, the proposed models suggested no statistical difference between the two effect measures in their ABE bioequivalence via joint tests, whilst a contradictory conclusion was derived based on univariate multilevel models. Furthermore, the PBE and IBE for both ln(AUC) and ln(C(max)) of the two types of tablets were assessed with no statistical difference based on estimates of variance components from the proposed models. Multivariate multilevel models can be used to analyze the bioequivalence of multiple effect measures simultaneously, and they provide a new way of statistical analysis for evaluating bioequivalence.
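
    A univariate flavour of the variance-partitioning idea can be sketched with an ordinary linear mixed model, as below; the paper's joint multivariate treatment of ln(AUC) and ln(Cmax) is more involved. All names and values here are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)

        # Hypothetical crossover-style data: each subject receives both the
        # reference and the test formulation
        n_subj = 24
        subject = np.repeat(np.arange(n_subj), 2)
        form = np.tile(["ref", "test"], n_subj)
        between = rng.normal(0, 0.25, n_subj)[subject]   # between-subject effect
        log_auc = (4.0 + 0.05 * (form == "test")         # small formulation shift
                   + between + rng.normal(0, 0.15, 2 * n_subj))
        df = pd.DataFrame({"subject": subject, "form": form, "log_auc": log_auc})

        # A random subject intercept partitions between- vs within-subject variance
        fit = smf.mixedlm("log_auc ~ form", df, groups=df["subject"]).fit()
        print(fit.summary())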

  11. Receptor modeling application framework for particle source apportionment.

    Science.gov (United States)

    Watson, John G; Zhu, Tan; Chow, Judith C; Engelbrecht, Johann; Fujita, Eric M; Wilson, William E

    2002-12-01

    Receptor models infer contributions from particulate matter (PM) source types using multivariate measurements of particle chemical and physical properties. Receptor models complement source models that estimate concentrations from emissions inventories and transport meteorology. Enrichment factor, chemical mass balance, multiple linear regression, eigenvector, edge detection, neural network, aerosol evolution, and aerosol equilibrium models have all been used to solve particulate air quality problems, and more than 500 citations of their theory and application document these uses. While elements, ions, and carbons were often used to apportion TSP, PM10, and PM2.5 among many source types, many of these components have been reduced in source emissions such that more complex measurements of carbon fractions, specific organic compounds, single-particle characteristics, and isotopic abundances now need to be measured in source and receptor samples. Compliance monitoring networks are not usually designed to obtain data for the observables, locations, and time periods that allow receptor models to be applied. Measurements from existing networks can be used to form conceptual models that allow the needed monitoring network to be optimized. The framework for using receptor models to solve air quality problems consists of: (1) formulating a conceptual model; (2) identifying potential sources; (3) characterizing source emissions; (4) obtaining and analyzing ambient PM samples for major components and source markers; (5) confirming source types with multivariate receptor models; (6) quantifying source contributions with the chemical mass balance; (7) estimating profile changes and the limiting precursor gases for secondary aerosols; and (8) reconciling receptor modeling results with source models, emissions inventories, and receptor data analyses.
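
    Step (6), the chemical mass balance, amounts to expressing measured ambient concentrations as a non-negative combination of source profiles. A minimal unweighted sketch is below; the profiles and concentrations are hypothetical, and a real CMB solution additionally weights species by their measurement uncertainties (the effective-variance solution).

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical source profiles: rows = chemical species, columns =
        # source types (mass fraction of each species in each source's PM)
        profiles = np.array([
            [0.30, 0.02, 0.01],   # species enriched in source 1
            [0.05, 0.25, 0.02],   # species enriched in source 2
            [0.02, 0.03, 0.40],   # species enriched in source 3
            [0.10, 0.10, 0.05],   # species common to all sources
        ])
        ambient = np.array([1.8, 1.4, 2.1, 0.9])  # measured concentrations

        contributions, residual = nnls(profiles, ambient)
        print("source contributions:", contributions)
        print("residual norm:", residual)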

  12. A double continuum hydrological model for glacier applications

    Science.gov (United States)

    de Fleurian, B.; Gagliardini, O.; Zwinger, T.; Durand, G.; Le Meur, E.; Mair, D.; Råback, P.

    2014-01-01

    The flow of glaciers and ice streams is strongly influenced by the presence of water at the interface between ice and bed. In this paper, a hydrological model evaluating the subglacial water pressure is developed with the final aim of estimating the sliding velocities of glaciers. The global model fully couples the subglacial hydrology and the ice dynamics through a water-dependent friction law. The hydrological part of the model follows a double continuum approach which relies on the use of porous layers to compute water heads in inefficient and efficient drainage systems. This method has the advantage of a relatively low computational cost, which would allow its application to large ice bodies such as the Greenland or Antarctic ice streams. The hydrological model has been implemented in the finite element code Elmer/Ice, which simultaneously computes the ice flow. Herein, we present an application to the Haut Glacier d'Arolla, for which we have a large number of observations, making it well suited to the purpose of validating both the hydrology and ice flow model components. The selection of under-determined hydrological parameters from a wide range of values is guided by comparison of the model results with available glacier observations. Once this selection has been performed, the coupling between subglacial hydrology and ice dynamics is undertaken throughout a melt season. Results indicate that this new modelling approach for subglacial hydrology is able to reproduce the broad temporal and spatial patterns of the observed subglacial hydrological system. Furthermore, the coupling with the ice dynamics shows good agreement with the observed spring speed-up.

  13. Equivalent-Continuum Modeling With Application to Carbon Nanotubes

    Science.gov (United States)

    Odegard, Gregory M.; Gates, Thomas S.; Nicholson, Lee M.; Wise, Kristopher E.

    2002-01-01

    A method has been proposed for developing structure-property relationships of nano-structured materials. This method serves as a link between computational chemistry and solid mechanics by substituting discrete molecular structures with equivalent-continuum models. It has been shown that this substitution may be accomplished by equating the vibrational potential energy of a nano-structured material with the strain energy of representative truss and continuum models. As important examples with direct application to the development and characterization of single-walled carbon nanotubes and the design of nanotube-based devices, the modeling technique has been applied to determine the effective-continuum geometry and bending rigidity of a graphene sheet. A representative volume element of the chemical structure of graphene has been substituted with equivalent-truss and equivalent-continuum models. As a result, an effective thickness of the continuum model has been determined; this effective thickness has been shown to be significantly larger than the inter-planar spacing of graphite. The effective bending rigidity of the equivalent-continuum model of a graphene sheet was determined by equating the vibrational potential energy of the molecular model of a graphene sheet subjected to cylindrical bending with the strain energy of an equivalent continuum plate subjected to cylindrical bending.

  14. WRF Model Methodology for Offshore Wind Energy Applications

    Directory of Open Access Journals (Sweden)

    Evangelia-Maria Giannakopoulou

    2014-01-01

    Full Text Available Among the parameters that must be considered for offshore wind farm development, the stability conditions of the marine atmospheric boundary layer (MABL) are of significant importance. Atmospheric stability is a vital parameter in wind resource assessment (WRA) due to its direct relation to wind and turbulence profiles. A better understanding of the stability conditions occurring offshore and of the interaction between the MABL and wind turbines is needed. Accurate simulations of offshore wind and stability conditions using mesoscale modelling techniques can lead to a more precise WRA. However, the use of any mesoscale model for wind energy applications requires a proper validation process to understand the accuracy and limitations of the model. For this validation process, the Weather Research and Forecasting (WRF) model has been applied over the North Sea during March 2005. The sensitivity of the WRF model performance to the use of different horizontal resolutions, input datasets, PBL parameterisations, and nesting options was examined. Comparison of the model results with other modelling studies and with high-quality observations recorded at the offshore measurement platform FINO1 showed that the ERA-Interim reanalysis data in combination with the 2.5-level MYNN PBL scheme satisfactorily simulate the MABL over the North Sea.

  15. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

    Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computing is implemented and a data-driven smoothing parameter is nicely incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model to applications outside of yield forecasting.
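
    The flavour of the approach can be sketched cheaply by substituting PCA for the single-step EM estimation: extract a few factors from a panel of yields, fit AR(1) dynamics to each factor, and rebuild a one-step-ahead curve. Everything below (the Nelson-Siegel-style loadings used to simulate data, all parameter values) is an illustrative assumption, not the authors' estimator.

        import numpy as np

        rng = np.random.default_rng(3)
        T, M = 200, 10                          # time points, maturities
        tau = np.linspace(1, 120, M)            # maturities in months

        # Simulate yields from smooth loading curves driven by AR(1) factors
        lam = 0.05
        L = np.column_stack([
            np.ones(M),
            (1 - np.exp(-lam * tau)) / (lam * tau),
            (1 - np.exp(-lam * tau)) / (lam * tau) - np.exp(-lam * tau),
        ])
        F = np.zeros((T, 3))
        for t in range(1, T):
            F[t] = 0.95 * F[t - 1] + rng.normal(0, 0.1, 3)
        Y = 4.0 + F @ L.T + rng.normal(0, 0.05, (T, M))

        # PCA surrogate: factors and loadings from the centered yield panel
        Yc = Y - Y.mean(axis=0)
        U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
        k = 3
        factors, loadings = U[:, :k] * S[:k], Vt[:k].T

        # AR(1) fit per factor, then a one-step-ahead curve forecast
        phi = np.array([f[:-1] @ f[1:] / (f[:-1] @ f[:-1]) for f in factors.T])
        y_next = Y.mean(axis=0) + loadings @ (phi * factors[-1])
        print(np.round(y_next, 3))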

  16. Numerical modelling of channel migration with application to laboratory rivers

    Institute of Scientific and Technical Information of China (English)

    Jian SUN; Bin-liang LIN; Hong-wei KUANG

    2015-01-01

    The paper presents the development of a morphological model and its application to experimental model rivers. The model takes into account the key processes of channel migration, including bed deformation, bank failure, and wetting and drying. Secondary flows in bends play an important role in lateral sediment transport, which further affects channel migration. A new formula has been derived to predict the near-bed secondary flow speed, in which the magnitude of the speed is linked to the lateral water level gradient. Since only non-cohesive sediment is considered in the current study, bank failure is modelled based on the concept of the submerged angle of repose. The wetting and drying process is modelled using an existing method. Comparisons between the numerical model predictions and experimental observations for various discharges have been made. It is found that the model-predicted channel planform and cross-sectional shapes agree generally well with the laboratory observations. A scenario analysis is also carried out to investigate the impact of secondary flow on the channel migration process. It shows that if the effect of secondary flow is ignored, the channel size in the lateral direction will be seriously underestimated.

  17. 4Mx Soil-Plant Model: Applications, Opportunities and Challenges

    Directory of Open Access Journals (Sweden)

    Nándor Fodor

    2012-12-01

    Full Text Available Crop simulation models describe the main processes of the soil-plant system in a dynamic way, usually with a daily time-step. With the help of these models we may monitor the soil- and plant-related processes of the simulated system as they evolve according to the atmospheric and environmental conditions. Crop models can be successfully applied in the following areas: (1) Education: by promoting system-oriented thinking, a comprehensive overview of the interrelations of the soil-plant system, as well as of the environmental-protection-related aspects of human activities, can be presented. (2) Research: the results of observations as well as of experiments can be extrapolated in time and space; thus, for example, the possible effects of global climate change can be estimated. (3) Practice: model calculations can be used in intelligent irrigation control and decision support systems, as well as for providing scientific background for policy makers. The most spectacular feature of the 4Mx crop model is that its graphical user interface enables the user to alter not only the parameters of the model but also the function types of its governing equations. The applicability of the 4Mx model is presented via several case studies.

  18. A realistic dynamic blower energy consumption model for wastewater applications.

    Science.gov (United States)

    Amerlinck, Y; De Keyser, W; Urchegui, G; Nopens, I

    2016-10-01

    At wastewater treatment plants (WWTPs), aeration is the largest energy consumer. This high energy consumption requires an accurate assessment in view of plant optimization. Despite the ever-increasing detail in process models, models for energy production still lack the detail needed to enable a global optimization of WWTPs. A new dynamic model for a more accurate prediction of aeration energy costs in activated sludge systems, equipped with submerged air-distributing diffusers (producing coarse or fine bubbles) connected via piping to blowers, has been developed and demonstrated. This paper addresses the model structure, its calibration and its application to the WWTP of Mekolalde (Spain). The new model proved to give an accurate prediction of the real energy consumption by the blowers and captures the trends better than the constant average power consumption models currently in use. This enhanced prediction of energy peak demand, which dominates the price setting of energy, shows that the dynamic model is preferable for multi-criteria optimization exercises aimed at minimizing energy consumption.

  19. Reliability models applicable to space telescope solar array assembly system

    Science.gov (United States)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, in parallel, or in a combination of both. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPAs). The reliabilities of the SPAs are determined by the reliabilities of solar cell strings, interconnects, and diodes. Estimates of the reliability of the system for one to five years are calculated using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
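
    As a minimal sketch of one common formalization of the k-out-of-n idea described above (a subsystem that works when at least k of its n independent, identical components work, matching the stated reductions: k = 1 gives a parallel system, k = n a series system), the snippet below uses hypothetical parameter values, not those of the STSA study.

        from math import comb

        def k_out_of_n_reliability(r: float, n: int, k: int) -> float:
            """Reliability of a subsystem that works if at least k of its
            n i.i.d. components (each with reliability r) work.
            k = 1 reduces to a parallel system, k = n to a series system."""
            return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

        # Hypothetical component reliability, for illustration only.
        r = 0.95
        print(k_out_of_n_reliability(r, n=5, k=1))  # parallel: 1 - (1 - r)**5
        print(k_out_of_n_reliability(r, n=5, k=5))  # series: r**5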

  20. Current developments in soil organic matter modeling and the expansion of model applications: a review

    Science.gov (United States)

    Campbell, Eleanor E.; Paustian, Keith

    2015-12-01

    Soil organic matter (SOM) is an important natural resource. It is fundamental to soil and ecosystem functions across a wide range of scales, from site-specific soil fertility and water holding capacity to global biogeochemical cycling. It is also a highly complex material that is sensitive to direct and indirect human impacts. In SOM research, simulation models play an important role by providing a mathematical framework to integrate, examine, and test the understanding of SOM dynamics. Simulation models of SOM are also increasingly used in more ‘applied’ settings to evaluate human impacts on ecosystem function, and to manage SOM for greenhouse gas mitigation, improved soil health, and sustainable use as a natural resource. Within this context, there is a need to maintain a robust connection between scientific developments in SOM modeling approaches and SOM model applications. This need forms the basis of this review. In this review we first provide an overview of SOM modeling, focusing on SOM theory, data-model integration, and model development as evidenced by a quantitative review of SOM literature. Second, we present the landscape of SOM model applications, focusing on examples in climate change policy. We conclude by discussing five areas of recent developments in SOM modeling including: (1) microbial roles in SOM stabilization; (2) modeling SOM saturation kinetics; (3) temperature controls on decomposition; (4) SOM dynamics in deep soil layers; and (5) SOM representation in earth system models. Our aim is to comprehensively connect SOM model development to its applications, revealing knowledge gaps in need of focused interdisciplinary attention and exposing pitfalls that, if avoided, can lead to best use of SOM models to support policy initiatives and sustainable land management solutions.

  1. Towards Industrial Application of Damage Models for Sheet Metal Forming

    Science.gov (United States)

    Doig, M.; Roll, K.

    2011-05-01

    Due to global warming and the financial situation, the demand to reduce CO2 emissions and production costs leads to the continual development of new materials. In the automotive industry, occupant safety is an additional requirement. Bringing these arguments together, the preferred approach for the lightweight design of car components, especially for body-in-white, is the use of modern steels. Such steel grades, also called advanced high-strength steels (AHSS), exhibit high strength as well as high formability. Not only the material behavior but also the damage behavior of AHSS differs from that of standard steels. Conventional methods for damage prediction in industry, like the forming limit curve (FLC), are not reliable for AHSS. Physically based damage models are often used in crash and bulk forming simulations. The still-open question is the industrial application of these models to sheet metal forming. This paper evaluates the Gurson-Tvergaard-Needleman (GTN) model and the model of Lemaitre within commercial codes, with the goal of industrial application.

  2. Memcapacitor model and its application in a chaotic oscillator

    Science.gov (United States)

    Guang-Yi, Wang; Bo-Zhen, Cai; Pei-Pei, Jin; Ti-Ling, Hu

    2016-01-01

    A memcapacitor is a new type of memory capacitor. Before the advent of practical memcapacitors, prospective studies of their models and potential applications are important. For this purpose, we establish a mathematical memcapacitor model and a corresponding circuit model. As a potential application based on the model, a memcapacitor oscillator is designed, and its basic dynamic characteristics are analyzed theoretically and experimentally. Some circuit variables, such as charge, flux, and the integral of charge, which are difficult to measure, are observed and measured via simulations and experiments. Analysis results show that, besides the typical period-doubling bifurcations and period-3 windows, sustained chaos with constant Lyapunov exponents occurs. Moreover, this oscillator also exhibits abrupt chaos and some novel bifurcations. In addition, based on digital signal processing (DSP) technology, a scheme for digitally realizing this memcapacitor oscillator is provided. The statistical properties of the chaotic sequences generated by the oscillator are tested using the test suite of the National Institute of Standards and Technology (NIST). The tested randomness definitely reaches the NIST standards and is better than that of the well-known Lorenz system. Project supported by the National Natural Science Foundation of China (Grant Nos. 61271064, 61401134, and 60971046), the Natural Science Foundation of Zhejiang Province, China (Grant Nos. LZ12F01001 and LQ14F010008), and the Program for Zhejiang Leading Team of S&T Innovation, China (Grant No. 2010R50010).

  3. Application of model based control to robotic manipulators

    Science.gov (United States)

    Petrosky, Lyman J.; Oppenheim, Irving J.

    1988-01-01

    A robot that can duplicate human motion capabilities in such activities as balancing, reaching, lifting, and moving has been built and tested. These capabilities are achieved through the use of real-time Model-Based Control (MBC) techniques which have recently been demonstrated. MBC accounts for all manipulator inertial forces and provides stable manipulator motion control even at high speeds. To effectively demonstrate the unique capabilities of MBC, an experimental robotic manipulator was constructed, which stands upright, balancing on a two-wheel base. The mathematical modeling of dynamics inherent in MBC permits the control system to perform functions that are impossible with conventional non-model-based methods. These capabilities include: (1) stable control at all speeds of operation; (2) operations requiring dynamic stability, such as balancing; (3) detection and monitoring of applied forces without the use of load sensors; (4) manipulator safing via detection of abnormal loads. The full potential of MBC has yet to be realized. The experiments performed for this research are only an indication of the potential applications. MBC has no inherent stability limitations, and its range of applicability is limited only by the attainable sampling rate, modeling accuracy, and sensor resolution. Manipulators could be designed to operate at the highest speed mechanically attainable without being limited by control inadequacies. Manipulators capable of operating many times faster than current machines would certainly increase productivity for many tasks.
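
    The abstract does not give the control law itself; a common form of model-based (computed-torque) control, sketched below for a hypothetical one-link manipulator, uses the inverse dynamics model to cancel inertial and gravity terms so that the tracking error obeys stable linear dynamics. This is a sketch of the general technique under stated assumptions, not the controller used in the experiments.

        import numpy as np

        # Hypothetical 1-DOF link: inertia m*l**2, gravity torque m*g*l*sin(q).
        m, l, g = 1.0, 0.5, 9.81
        Kp, Kd = 100.0, 20.0          # PD gains on the tracking error

        def computed_torque(q, qd, q_des, qd_des, qdd_des):
            """Model-based law: tau = M(q)*(qdd_des + Kd*e_dot + Kp*e) + g(q)."""
            e, e_dot = q_des - q, qd_des - qd
            M = m * l**2                     # inertia (scalar for one link)
            grav = m * g * l * np.sin(q)     # gravity term
            return M * (qdd_des + Kd * e_dot + Kp * e) + grav

        # Track a sinusoidal reference with explicit Euler integration.
        dt, q, qd = 1e-3, 0.0, 0.0
        for step in range(5000):
            t = step * dt
            q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
            tau = computed_torque(q, qd, q_des, qd_des, qdd_des)
            qdd = (tau - m * g * l * np.sin(q)) / (m * l**2)  # plant dynamics
            qd += qdd * dt
            q += qd * dt
        print(f"approximate final tracking error: {abs(np.sin(5.0) - q):.2e}")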

  4. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success probably lie in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced flexible and powerful statistical software frameworks. Among them, one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models into a Markov model is shown. This feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
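
    As a hedged sketch of the integration described above (not the authors' WinBUGS code; all numbers hypothetical), the snippet draws a transition probability from a Beta posterior, itself updated from illustrative trial evidence, and propagates each draw through a simple two-state Markov cohort model so that parameter uncertainty flows directly into the economic output.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical evidence: 18 progressions observed among 100 patients.
        events, n = 18, 100
        alpha0, beta0 = 1.0, 1.0                 # uniform Beta prior

        n_sims, n_cycles = 5000, 20
        cost_healthy, cost_sick = 100.0, 1500.0  # illustrative cycle costs
        total_costs = np.empty(n_sims)

        for s in range(n_sims):
            # Bayesian step: posterior draw of the transition probability.
            p_prog = rng.beta(alpha0 + events, beta0 + n - events)
            state = np.array([1.0, 0.0])         # cohort starts healthy
            P = np.array([[1 - p_prog, p_prog],  # healthy -> {healthy, sick}
                          [0.0, 1.0]])           # sick is absorbing
            cost = 0.0
            for _ in range(n_cycles):
                state = state @ P                # advance the Markov cohort
                cost += state @ np.array([cost_healthy, cost_sick])
            total_costs[s] = cost

        print(f"mean cost {total_costs.mean():.0f}, "
              f"95% CrI ({np.percentile(total_costs, 2.5):.0f}, "
              f"{np.percentile(total_costs, 97.5):.0f})")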

  5. Minimizing Drilling Thrust Force for HFRP Composite by Optimizing Process Parameters using Combination of ANOVA Approach and S/N Ratios Analysis

    Directory of Open Access Journals (Sweden)

    Maoinser Mohd Azuwan

    2014-07-01

    Full Text Available Drilling hybrid fiber reinforced polymer (HFRP) composite is a novel approach in fiber reinforced polymer (FRP) composite machining studies, as this material combines two different fibers in a single matrix, resulting in considerable improvements in mechanical properties and cost savings compared to conventional fiber composite materials. This study develops an optimized way of drilling HFRP composite across the drilling parameters of drill point angle, feed rate, and cutting speed, using a full factorial design of experiments combined with analysis of variance (ANOVA) and signal-to-noise (S/N) ratio analysis. The results identify optimum drilling parameters for the HFRP composite: a small drill point angle at low feed rate and medium cutting speed results in lower thrust force.
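
    A hedged sketch of the analysis pipeline named above, using hypothetical thrust-force data rather than the study's measurements: the smaller-the-better S/N ratio is computed per parameter setting, and a factorial ANOVA (via the statsmodels library) ranks the significance of each factor.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(0)

        # Hypothetical full-factorial drilling data (two replicates).
        angles, feeds, speeds = [85, 118, 135], [0.05, 0.15], [30, 60, 90]
        rows = []
        for a in angles:
            for f in feeds:
                for v in speeds:
                    for _ in range(2):
                        thrust = 20 + 0.1 * a + 300 * f - 0.05 * v + rng.normal(0, 2)
                        rows.append((a, f, v, thrust))
        df = pd.DataFrame(rows, columns=["angle", "feed", "speed", "thrust"])

        # Smaller-the-better S/N ratio per factor combination.
        sn = (df.groupby(["angle", "feed", "speed"])["thrust"]
                .apply(lambda y: -10 * np.log10(np.mean(y**2))))
        print(sn.idxmax())  # setting with the best (largest) S/N ratio

        # Factorial ANOVA on the main effects.
        model = ols("thrust ~ C(angle) + C(feed) + C(speed)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))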

  6. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Second, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration, as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above-described precipitation fields. The simulated hydrographs obtained in the three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indices. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well on dense precipitation information, while a model calibrated on dense precipitation information fails on sparse precipitation information. Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
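
    For reference, the Nash-Sutcliffe coefficient used above to compare simulated hydrographs is a standard skill score; a minimal sketch with hypothetical observed and simulated series:

        import numpy as np

        def nash_sutcliffe(observed, simulated) -> float:
            """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
            1.0 is a perfect fit; values <= 0 mean the model is no better
            than predicting the observed mean."""
            obs, sim = np.asarray(observed), np.asarray(simulated)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # Hypothetical hydrographs, for illustration only.
        obs = np.array([5.0, 12.0, 30.0, 22.0, 14.0, 8.0])
        sim = np.array([6.0, 10.0, 27.0, 24.0, 13.0, 9.0])
        print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")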

  7. Modeling of shape memory alloys and application to porous materials

    Science.gov (United States)

    Panico, Michele

    In the last two decades the number of innovative applications for advanced materials has been rapidly increasing. Shape memory alloys (SMAs) are an exciting class of these materials which exhibit large reversible stresses and strains due to a thermoelastic phase transformation. SMAs have been employed in the biomedical field for producing cardiovascular stents, shape memory foams have been successfully tested as bone implant material, and SMAs are being used as deployable switches in aerospace applications. The behavior of shape memory alloys is intrinsically complex due to the coupling of phase transformation with thermomechanical loading, so it is critical for constitutive models to correctly simulate their response over a wide range of stress and temperature. In the first part of this dissertation, we propose a macroscopic phenomenological model for SMAs that is based on the classical framework of thermodynamics of irreversible processes and accounts for the effect of multiaxial stress states and non-proportional loading histories. The model is able to account for the evolution of both self-accommodated and oriented martensite. Moreover, reorientation of the product phase according to loading direction is specifically accounted for. Computational tests demonstrate the ability of the model to simulate the main aspects of the shape memory response in a one-dimensional setting and some of the features that have been experimentally found in the case of multi-axial non-proportional loading histories. In the second part of this dissertation, this constitutive model has been used to study the mesoscopic behavior of porous shape memory alloys with particular attention to the mechanical response under cyclic loading conditions. In order to perform numerical simulations, the model was implemented into the commercial finite element code ABAQUS. Due to stress concentrations in a porous microstructure, the constitutive law was enhanced to account for the development of

  8. Application of Z-Number Based Modeling in Psychological Research

    Directory of Open Access Journals (Sweden)

    Rafik Aliev

    2015-01-01

    Full Text Available Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measurement of psychological parameters is performed using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger's Anxiety Test completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from completed questionnaires. The basic steps of Z-number based modeling with numerical solutions are presented.

  9. Practical Application of Model Checking in Software Verification

    Science.gov (United States)

    Havelund, Klaus; Skakkebaek, Jens Ulrik

    1999-01-01

    This paper presents our experiences in applying JAVA PATHFINDER (JPF), a recently developed JAVA-to-SPIN translator, to the finding of synchronization bugs in a Chinese Chess game server application written in JAVA. We give an overview of JPF and the subset of JAVA that it supports, and describe the abstraction and verification of the game server. Finally, we analyze the results of the effort. We argue that abstraction by under-approximation is necessary for obtaining sufficiently small models for verification purposes; that user guidance is crucial for effective abstraction; and that current model checkers do not conveniently support the computational models of software in general and JAVA in particular.

  10. On the application of copula in modeling maintenance contract

    Science.gov (United States)

    Iskandar, B. P.; Husniah, H.

    2016-02-01

    This paper deals with the application of copulas in maintenance contracts for a repairable item. Failures of the item are modeled using a two-dimensional approach based on the age and usage of the item, which requires a bivariate distribution to model failures. When the item fails, corrective maintenance (CM) is carried out as minimal repair. CM can be outsourced to an external agent or done in house. The decision problem for the owner is to maximize total profit, whilst the agent's is to determine the optimal price of the contract. We obtain the mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation.
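
    The abstract does not specify which copula is used; as a hedged illustration of coupling age and usage, the sketch below samples correlated (age, usage) failure times from a Clayton copula with Weibull marginals, all parameter values being hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_clayton(n: int, theta: float) -> np.ndarray:
            """Sample n pairs (u, v) from a Clayton copula by conditional inversion."""
            u = rng.uniform(size=n)
            w = rng.uniform(size=n)
            v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
            return np.column_stack([u, v])

        def weibull_ppf(p, shape, scale):
            """Inverse CDF of a Weibull marginal."""
            return scale * (-np.log(1.0 - p)) ** (1.0 / shape)

        # Hypothetical marginals: age-to-failure (years), usage-to-failure (e.g. 1000 km).
        uv = sample_clayton(10_000, theta=2.0)  # theta > 0: positive dependence
        age = weibull_ppf(uv[:, 0], shape=2.0, scale=5.0)
        usage = weibull_ppf(uv[:, 1], shape=1.5, scale=300.0)
        print(f"correlation between age and usage: {np.corrcoef(age, usage)[0, 1]:.2f}")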

  11. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

    Temporal and spatiotemporal data form an inherent fabric of society, as we are faced with streams of data coming from numerous sensors, data feeds, and recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge to develop models that are both accurate and user-friendly (interpretable). The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies by bringing together ideas, algorithms, and numeric studies, which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  12. Clinical application of the five-factor model.

    Science.gov (United States)

    Widiger, Thomas A; Presnall, Jennifer Ruth

    2013-12-01

    The Five-Factor Model (FFM) has become the predominant dimensional model of general personality structure. The purpose of this paper is to suggest a clinical application. A substantial body of research indicates that the personality disorders included within the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) can be understood as extreme and/or maladaptive variants of the FFM (the acronym "DSM" refers to any particular edition of the APA DSM). In addition, the current proposal for the forthcoming fifth edition of the DSM (i.e., DSM-5) is shifting closely toward an FFM dimensional trait model of personality disorder. Advantages of this shifting conceptualization are discussed, including treatment planning.

  13. Explicit Nonlinear Model Predictive Control Theory and Applications

    CERN Document Server

    Grancharova, Alexandra

    2012-01-01

    Nonlinear Model Predictive Control (NMPC) has become the accepted methodology for solving complex control problems related to the process industries. The main motivation behind explicit NMPC is that an explicit state feedback law avoids the need to execute a numerical optimization algorithm in real time. The benefits of an explicit solution, in addition to the efficient on-line computations, include verifiability of the implementation and the possibility of designing embedded control systems with low software and hardware complexity. This book considers the multi-parametric Nonlinear Programming (mp-NLP) approaches to explicit approximate NMPC of constrained nonlinear systems, developed by the authors, as well as their applications to various NMPC problem formulations and several case studies. The following types of nonlinear systems are considered, resulting in different NMPC problem formulations: nonlinear systems described by first-principles models and nonlinear systems described by black-box models; ...

  14. Application of simplified model to sensitivity analysis of solidification process

    Directory of Open Access Journals (Sweden)

    R. Szopa

    2007-12-01

    Full Text Available The sensitivity models of thermal processes proceeding in the casting-mould-environment system give essential information concerning the influence of physical and technological parameters on the course of solidification. Knowledge of the time-dependent sensitivity field is also very useful in the numerical solution of inverse problems. The sensitivity models can be constructed using the direct approach, that is, by differentiation of the basic energy equations and boundary-initial conditions with respect to the parameter considered. Unfortunately, the analytical form of the equations and conditions obtained can be very complex from both the mathematical and numerical points of view. Then another approach, consisting in the application of a differential quotient, can be applied. In the paper the exact and approximate approaches to the modelling of sensitivity fields are discussed, and examples of computations are shown.
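
    A hedged sketch of the differential-quotient approach mentioned above, on a toy cooling model rather than the casting-mould system: the sensitivity of temperature to a parameter is approximated by a central difference of two forward solutions.

        import numpy as np

        def cooling_temperature(h: float, t: np.ndarray) -> np.ndarray:
            """Toy lumped model T(t) = T_env + (T0 - T_env)*exp(-h*t);
            a stand-in for a full solidification solver."""
            T0, T_env = 1500.0, 20.0  # hypothetical initial / ambient temperatures
            return T_env + (T0 - T_env) * np.exp(-h * t)

        def sensitivity_dT_dh(h, t, rel_step=1e-4):
            """Differential quotient: dT/dh ~ (T(h+dh) - T(h-dh)) / (2*dh)."""
            dh = rel_step * h
            return (cooling_temperature(h + dh, t) - cooling_temperature(h - dh, t)) / (2 * dh)

        t = np.linspace(0.0, 10.0, 5)
        h = 0.3
        approx = sensitivity_dT_dh(h, t)
        exact = -(1500.0 - 20.0) * t * np.exp(-h * t)  # analytical derivative of the toy model
        print(np.max(np.abs(approx - exact)))          # agreement check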

  15. Tableting process optimisation with the application of fuzzy models.

    Science.gov (United States)

    Belic, Ales; Skrjanc, Igor; Bozic, Damjana Zupancic; Vrecer, Franc

    2010-04-15

    A quality-by-design (QbD) principle, including process analytical technology, is becoming the principal idea in drug development and manufacturing. The implementation of QbD in product development and manufacturing requires greater resources, both human and financial; however, large-scale production can then be established in a more cost-effective manner and with improved product quality. The objective of the present work was to study the influence of the particle size distribution of the powder mixture for tableting, and of the settings of the compression parameters, on tablet quality as described by the capping coefficient and the standard deviations of mass and crushing strength of compressed tablets. Fuzzy models were used to model the effects of the particle size distribution and the tableting machine settings on tablet quality. The results showed that the application of mathematical models based on routinely measured quantities can significantly improve on trial-and-error procedures.

  16. Polycrystalline CVD diamond device level modeling for particle detection applications

    Science.gov (United States)

    Morozzi, A.; Passeri, D.; Kanxheri, K.; Servoli, L.; Lagomarsino, S.; Sciortino, S.

    2016-12-01

    Diamond is a promising material whose excellent physical properties foster its use in radiation detection applications, in particular in hostile operating environments where the behavior of silicon-based detectors is limited by the high radiation fluence. Within this framework, the application of Technology Computer Aided Design (TCAD) simulation tools is highly desirable for the study, optimization and predictive analysis of sensing devices. Because diamond is a newcomer in electronics, this material is not included in the libraries of commercial, state-of-the-art TCAD software tools. In this work, we propose the development, application and validation of numerical models to simulate the electrical behavior of polycrystalline (pc)CVD diamond conceived for particle-detection sensors. The model focuses on the characterization of a physically based pcCVD diamond bandgap, taking into account deep-level defects acting as recombination centers and/or trap states. While a definite picture of the polycrystalline diamond bandgap is still debated, the effect of the main parameters (e.g. trap densities, capture cross-sections, etc.) can be investigated in depth through the simulation approach. The charge collection efficiency under β-particle irradiation of diamond materials provided by different vendors and with different electrode configurations has been selected as the figure of merit for model validation. The good agreement between measurements and simulation findings, with the trap density as the only fitting parameter, establishes the suitability of the TCAD modeling approach as a predictive tool for the design and optimization of diamond-based radiation detectors.

  17. An application-semantics-based relaxed transaction model for internetware

    Institute of Scientific and Technical Information of China (English)

    HUANG Tao; DING Xiaoning; WEI Jun

    2006-01-01

    An internetware application is composed of existing individual services, and transaction processing is a key mechanism for making the composition reliable. Existing research on transactional composite services (TCS) depends on analysis of the composition structure and the exception handling mechanism in order to guarantee relaxed atomicity. However, this approach cannot handle some application-specific requirements and causes many unnecessary failure recoveries or even aborts. In this paper, we propose a relaxed transaction model, including a system model, a relaxed atomicity criterion, a static checking algorithm and a dynamic enforcement algorithm. Users are able to define different relaxed atomicity constraints for different TCS according to application-specific requirements, including acceptable configurations and their preference order. The checking algorithm determines whether the constraint can be guaranteed to be satisfied. The enforcement algorithm monitors the execution and performs transaction management work according to the constraint. Compared to existing work, our approach can handle complex application requirements, avoid unnecessary failure recoveries, and perform the transaction management work automatically.

  18. Graphical models and Bayesian domains in risk modelling: application in microbiological risk assessment.

    Science.gov (United States)

    Greiner, Matthias; Smid, Joost; Havelaar, Arie H; Müller-Graf, Christine

    2013-05-15

    Quantitative microbiological risk assessment (QMRA) models are used to reflect knowledge about complex real-world scenarios for the propagation of microbiological hazards along the feed and food chain. The aim is to provide insight into interdependencies among model parameters, typically with an interest in characterizing the effect of risk mitigation measures. A particular requirement is to achieve clarity about the reliability of conclusions from the model in the presence of uncertainty. To this end, Monte Carlo (MC) simulation modelling has become a standard in so-called probabilistic risk assessment. In this paper, we elaborate on the application of Bayesian computational statistics in the context of QMRA. It is useful to explore the analogy between MC modelling and Bayesian inference (BI). This pertains in particular to the procedures for deriving prior distributions for model parameters. We illustrate using a simple example that the inability to cope with feedback among model parameters is a major limitation of MC modelling. However, BI models can be easily integrated into MC modelling to overcome this limitation. We refer to a BI submodel integrated into an MC model as a "Bayes domain". We also demonstrate that an entire QMRA model can be formulated as a Bayesian graphical model (BGM) and discuss the advantages of this approach. Finally, we show example graphs of MC, BI and BGM models, highlighting the similarities among the three approaches.
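
    As a hedged, minimal illustration of a "Bayes domain" embedded in an MC model (hypothetical numbers and dose-response form, not taken from the paper): the prevalence parameter is drawn from a Beta posterior updated with survey data, and each draw feeds the downstream Monte Carlo exposure calculation.

        import numpy as np

        rng = np.random.default_rng(7)

        # Bayes domain: posterior for prevalence from hypothetical survey data
        # (12 positive samples out of 200, uniform Beta(1, 1) prior).
        pos, n = 12, 200
        n_iter = 20_000
        prevalence = rng.beta(1 + pos, 1 + n - pos, size=n_iter)

        # MC domain: propagate each posterior draw through the exposure model.
        servings = rng.poisson(lam=50, size=n_iter)             # servings per year
        dose = rng.lognormal(mean=2.0, sigma=0.8, size=n_iter)  # cfu per contaminated serving
        r = 1e-3                                                # exponential dose-response parameter
        p_ill_serving = prevalence * (1 - np.exp(-r * dose))
        annual_risk = 1 - (1 - p_ill_serving) ** servings

        print(f"median annual risk: {np.median(annual_risk):.2e}")
        print(f"95% interval: ({np.percentile(annual_risk, 2.5):.2e}, "
              f"{np.percentile(annual_risk, 97.5):.2e})")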

  19. Numerical algorithm of distributed TOPKAPI model and its application

    Institute of Scientific and Technical Information of China (English)

    Deng Peng; Li Zhijia; Liu Zhiyu

    2008-01-01

    The TOPKAPI (TOPographic Kinematic APproximation and Integration) model is a physically based rainfall-runoff model derived from the integration in space of the kinematic wave model. In the TOPKAPI model, rainfall-runoff and runoff routing processes are described by three nonlinear reservoir differential equations that are structurally similar and describe different hydrological and hydraulic processes. Equations are integrated over grid cells that describe the geometry of the catchment, leading to a cascade of nonlinear reservoir equations. For the sake of improving the model's computation precision, this paper provides the general form of these equations and describes the solution by means of a numerical algorithm, the variable-step fourth-order Runge-Kutta algorithm. For the purpose of assessing the quality of the comprehensive numerical algorithm, this paper presents a case study application to the Buliu River Basin, which has an area of 3 310 km2, using a DEM (digital elevation model) grid with a resolution of 1 km. The results show that the variable-step fourth-order Runge-Kutta algorithm for nonlinear reservoir equations is a good approximation of subsurface flow in the soil matrix, overland flow over the slopes, and surface flow in the channel network, allowing us to retain the physical properties of the original equations at scales ranging from a few meters to 1 km.
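
    A hedged sketch of the variable-step fourth-order Runge-Kutta idea applied to a single nonlinear reservoir equation of the generic form dV/dt = i(t) - c*V**b (the coefficients and exponent here are illustrative, not the TOPKAPI calibration): the step size is adapted by step-doubling until the local error estimate meets a tolerance.

        import numpy as np

        def f(t, V, inflow=2.0, c=0.5, b=1.67):
            """Generic nonlinear reservoir: dV/dt = i(t) - c * V**b."""
            return inflow - c * max(V, 0.0) ** b

        def rk4_step(t, V, h):
            k1 = f(t, V)
            k2 = f(t + h / 2, V + h * k1 / 2)
            k3 = f(t + h / 2, V + h * k2 / 2)
            k4 = f(t + h, V + h * k3)
            return V + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

        def adaptive_rk4(t, V, h, tol=1e-8):
            """Step-doubling: compare one step of size h with two of size h/2."""
            while True:
                big = rk4_step(t, V, h)
                small = rk4_step(t + h / 2, rk4_step(t, V, h / 2), h / 2)
                err = abs(big - small)
                if err <= tol:
                    return t + h, small, h * min(2.0, 0.9 * (tol / max(err, 1e-16)) ** 0.2)
                h *= 0.5

        t, V, h = 0.0, 0.1, 0.1
        while t < 10.0:
            t, V, h = adaptive_rk4(t, V, min(h, 10.0 - t))
        print(f"V(10) ~ {V:.4f}")  # approaches the equilibrium where i = c*V**b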

  20. Animal models of Parkinson's disease and their applications

    Directory of Open Access Journals (Sweden)

    Park HJ

    2016-07-01

    Full Text Available Hyun Jin Park, Ting Ting Zhao, Myung Koo Lee; Department of Pharmacy, Research Center for Bioresource and Health, College of Pharmacy, Chungbuk National University, Cheongju, Republic of Korea. Abstract: Parkinson's disease (PD) is a progressive neurodegenerative disorder that occurs mainly due to the degeneration of dopaminergic neuronal cells in the substantia nigra. L-3,4-Dihydroxyphenylalanine (L-DOPA) is the most effective known therapy for PD. However, chronic L-DOPA administration results in a loss of drug efficacy and irreversible adverse effects, including L-DOPA-induced dyskinesia, affective disorders, and cognitive function disorders. To study the motor and non-motor symptomatic dysfunctions in PD, neurotoxin and genetic animal models of PD have been widely applied. However, these animal models do not exhibit all of the pathophysiological symptoms of PD. Regardless, neurotoxin rat and mouse models of PD have been commonly used in the development of bioactive components from natural herbal medicines. Here, the main animal models of PD and their applications are introduced in order to aid the development of therapeutic and adjuvant agents. Keywords: Parkinson's disease, neurotoxin animal models, genetic animal models, adjuvant therapeutics

  1. Large eddy simulation modelling of combustion for propulsion applications.

    Science.gov (United States)

    Fureby, C

    2009-07-28

    Predictive modelling of turbulent combustion is important for the development of air-breathing engines, internal combustion engines, furnaces and power generation. Significant advances in modelling non-reactive turbulent flows are now possible with the development of large eddy simulation (LES), in which the large energetic scales of the flow are resolved on the grid while the effects of the small scales are modelled. Here, we discuss the use of combustion LES in predictive modelling of propulsion applications such as gas turbine, ramjet and scramjet engines. The LES models used are described in some detail and are validated against laboratory data, of which results from two cases are presented. These validated LES models are then applied to an annular multi-burner gas turbine combustor and a simplified scramjet combustor, for which some additional experimental data are available. For these cases, good agreement with the available reference data is obtained, and the LES predictions are used to elucidate the flow physics in such devices to further enhance our knowledge of these propulsion systems. Particular attention is focused on the influence of the combustion chemistry, turbulence-chemistry interaction, self-ignition, flame holding, burner-to-burner interactions and combustion oscillations.

  2. Prognostic models in obstetrics: available, but far from applicable.

    Science.gov (United States)

    Kleinrouweler, C Emily; Cheong-See, Fiona M; Collins, Gary S; Kwee, Anneke; Thangaratinam, Shakila; Khan, Khalid S; Mol, Ben Willem J; Pajkrt, Eva; Moons, Karel G M; Schuit, Ewoud

    2016-01-01

    Health care provision is increasingly focused on predicting patients' individual risk of developing a particular health outcome when planning further tests and treatments. There has been a steady increase in the development and publication of prognostic models for various maternal and fetal outcomes in obstetrics. We undertook a systematic review to give an overview of the current status of available prognostic models in obstetrics in the context of their potential advantages and of the process of developing and validating models. Important aspects to consider when assessing a prognostic model are discussed, and recommendations on how to proceed within the obstetric domain are given. We searched MEDLINE (up to July 2012) for articles developing prognostic models in obstetrics. We identified 177 papers that reported the development of 263 prognostic models for 40 different outcomes. The most frequently predicted outcomes were preeclampsia (n = 69), preterm delivery (n = 63), mode of delivery (n = 22), gestational hypertension (n = 11), and small-for-gestational-age infants (n = 10). The performance of newer models was generally no better than that of older models predicting the same outcome. The most important measures of predictive accuracy (ie, a model's discrimination and calibration) were often (82.9%, 218/263) not both assessed. Very few developed models were validated in data other than the development data (8.7%, 23/263). Only two-thirds of the papers (62.4%, 164/263) presented the model such that validation in other populations was possible, and clinical applicability was discussed in only 11.0% (29/263). The impact of developed models on clinical practice was unknown. We identified a large number of prognostic models in obstetrics, but there is relatively little evidence about their performance, impact, and usefulness in clinical practice, so that at this point, clinical implementation cannot be recommended. New efforts should be directed

  3. Addressing Confounding in Predictive Models with an Application to Neuroimaging.

    Science.gov (United States)

    Linn, Kristin A; Gaonkar, Bilwaj; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-01

    Understanding structural changes in the brain that are caused by a particular disease is a major goal of neuroimaging research. Multivariate pattern analysis (MVPA) comprises a collection of tools that can be used to understand complex disease effects across the brain. We discuss several important issues that must be considered when analyzing data from neuroimaging studies using MVPA. In particular, we focus on the consequences of confounding by non-imaging variables, such as age and sex, on the results of MVPA. After reviewing current practice to address confounding in neuroimaging studies, we propose an alternative approach based on inverse probability weighting. Although the proposed method is motivated by neuroimaging applications, it is broadly applicable to many problems in machine learning and predictive modeling. We demonstrate the advantages of our approach on simulated and real data examples.
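
    A hedged sketch of the inverse-probability-weighting idea on synthetic data (the scikit-learn calls are real; the data and setup are hypothetical, not the paper's pipeline): a propensity model predicts group membership from the confounder, and its inverse probabilities re-weight the samples used to train the predictive model.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n = 2000

        # Synthetic study: age confounds the group label and the imaging feature.
        age = rng.normal(60, 10, n)
        group = (rng.uniform(size=n) < 1 / (1 + np.exp(-(age - 60) / 5))).astype(int)
        feature = 0.5 * group + 0.05 * age + rng.normal(0, 1, n)
        X, confounders = feature.reshape(-1, 1), age.reshape(-1, 1)

        # Step 1: propensity of group membership given the confounder.
        propensity = LogisticRegression().fit(confounders, group).predict_proba(confounders)
        p_assigned = propensity[np.arange(n), group]

        # Step 2: train the predictive model with inverse-probability weights,
        # which balances the confounder distribution across the groups.
        clf = LogisticRegression().fit(X, group, sample_weight=1.0 / p_assigned)
        print(f"weighted-model accuracy: {clf.score(X, group):.3f}")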

  4. Application of product modelling - seen from a work preparation viewpoint

    DEFF Research Database (Denmark)

    Hvam, Lars

    Manufacturing companies spend an increasing share of their total work resources in the manufacturing planning system, on activities such as specifying products and methods, scheduling, and procurement. By this, the potential for obtaining increased productivity moves from the direct costs ... the specification work. The theoretical fundament of the project includes four elements. The first element (work preparation) considers methods for analysing and preparing the direct work in production, pointing to an analogy between analysing the direct work in production and the work in the planning systems ..., over building a model, and on to the final programming of an application. It has been stressed that all the phases in the outline of procedure be carried out in the empirical work, one of the reasons being to prove that it is possible, with a reasonable consumption of resources, to build an application...

  5. Twin support vector machines models, extensions and applications

    CERN Document Server

    Jayadeva; Chandra, Suresh

    2017-01-01

    This book provides a systematic and focused study of the various aspects of twin support vector machines (TWSVM) and related developments for classification and regression. In addition to presenting most of the basic models of TWSVM and twin support vector regression (TWSVR) available in the literature, it also discusses the important and challenging applications of this new machine learning methodology. A chapter on “Additional Topics” has been included to discuss kernel optimization and support tensor machine topics, which are comparatively new but have great potential in applications. It is primarily written for graduate students and researchers in the area of machine learning and related topics in computer science, mathematics, electrical engineering, management science and finance.

  6. Modeling Phosphorus Losses from Seasonal Manure Application Schemes

    Science.gov (United States)

    Menzies, E.; Walter, M. T.

    2015-12-01

    Excess nutrient loading, especially of nitrogen and phosphorus, to surface waters is a common and significant problem throughout the United States. While pollution remediation efforts are continuously improving, the most effective treatment remains limiting the source. Appropriate timing of fertilizer application to reduce nutrient losses is currently a hotly debated topic in the northeastern United States; winter spreading of manure is under special scrutiny. We plan to evaluate the loss of phosphorus to surface waters from agricultural systems under varying seasonal fertilization schemes in an effort to determine the impacts of fertilizers applied throughout the year. The Cayuga Lake basin, located in the Finger Lakes region of New York State, is a watershed dominated by agriculture where a wide array of land management strategies can be found. The evaluation will be conducted on the Fall Creek watershed, a large sub-basin of the Cayuga Lake watershed. The Fall Creek watershed covers approximately 33,000 ha in central New York State, with approximately 50% of this land used for agriculture. We plan to use the Soil and Water Assessment Tool (SWAT) to model a number of seasonal fertilization regimes, such as summer-only spreading and year-round spreading (including winter applications), among others. We will use the model to quantify the phosphorus load to surface waters from these different fertilization schemes and determine the impacts of manure applied at different times throughout the year. More detailed knowledge about how seasonal fertilization schemes impact phosphorus losses will provide more information to stakeholders concerning the impacts of agriculture on surface water quality. Our results will help farmers and extensionists make more informed decisions about appropriate timing of manure application for reduced phosphorus losses and surface water degradation, as well as aid lawmakers in improving policy surrounding manure application.

  7. Ecological niche modeling and its applications in biodiversity conservation

    Directory of Open Access Journals (Sweden)

    Gengping Zhu

    2013-01-01

    Full Text Available Based on the environmental variables associated with species’ occurrence records, ecological niche modeling (ENM) seeks to characterize environmental conditions suitable for a particular species and then identify where suitable environmental habitats are distributed in space. Recently, ENM has been used increasingly in research on biological invasion, conservation biology, biological responses to climate change, spatial disease transmission, and various aspects of ecology and evolutionary biology. However, the theoretical background of these applications is generally poorly understood, leading to artifactual conclusions in some studies (e.g. niche differentiation during species’ invasion). In this paper we discuss the relationship between niche and geographic distribution and introduce the theoretical basis of ENM, along with relationships between the niche and ENM. Abiotic/biotic, historical and dispersal factors are three key elements that determine species’ geographic distributions at different scales. By using environmental variables derived from distributional records, ENM is based on observations that already include the effects of biotic interactions; therefore ENM characterizes something between the realized niche and the potential niche, not the fundamental niche. Grinnellian and Eltonian niches are both manifested in ENM calibration, depending on the types of variables used to fit the model, the natural spatial scale at which they can be measured, and the dispersal of individuals throughout the environment. Applications of ENM to understanding the ecological requirements of species, discovery of new species or populations, nature reserve design, predicting potential invasion, modeling biological responses to climate change, niche conservatism, and species delimitation are discussed in this paper.

  8. Application of Interval Predictor Models to Space Radiation Shielding

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

    2016-01-01

    This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPMs) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation will fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which the data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
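
    A hedged, simplified sketch of the interval-predictor idea (a constant-width band rather than the paper's minimal-spread formulation; all data synthetic): radial-basis features are fit by linear programming so that the band f(x) ± w contains every observation while the half-width w is minimized.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(5)

        # Synthetic input-output data, for illustration only.
        x = rng.uniform(0, 1, 40)
        y = np.sin(2 * np.pi * x) + rng.uniform(-0.2, 0.2, 40)

        # Radial basis features phi_j(x) = exp(-(x - mu_j)^2 / (2 s^2)).
        centers, s = np.linspace(0, 1, 7), 0.15
        Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * s**2))

        # Variables: basis coefficients c (7 values) and half-width w (1 value).
        # Minimize w subject to |y_i - Phi_i @ c| <= w for every data point.
        m = len(centers)
        cost = np.zeros(m + 1)
        cost[-1] = 1.0
        A_ub = np.vstack([np.hstack([Phi, -np.ones((len(x), 1))]),
                          np.hstack([-Phi, -np.ones((len(x), 1))])])
        b_ub = np.concatenate([y, -y])
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * m + [(0, None)])
        c_fit, w = res.x[:m], res.x[-1]
        print(f"interval half-width w = {w:.3f}")  # band [Phi@c - w, Phi@c + w] covers all data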

  9. Efficient numerical modeling of the cornea, and applications

    Science.gov (United States)

    Gonzalez, L.; Navarro, Rafael M.; Hdez-Matamoros, J. L.

    2004-10-01

    Corneal topography has shown itself to be an essential tool in the ophthalmology clinic, both in diagnosis and in custom treatments (refractive surgery, keratoplasty), and it also has strong potential in optometry. The post-processing and analysis of corneal elevation, or local curvature data, is a necessary step to refine the data and also to extract relevant information for the clinician. In this context a parametric cornea model is proposed, consisting of a surface described mathematically by two terms: a general ellipsoid corresponding to a regular base surface, expressed by a general quadric term located at an arbitrary position and free orientation in 3D space, and a second term, described by a Zernike polynomial expansion, which accounts for irregularities and departures from the basic geometry. The model has been validated, obtaining better adjustment to experimental data than previous models. Among other potential applications, here we present the determination of the optical axis of the cornea by transforming the general quadric to its canonical form. This has permitted us to perform 3D registration of corneal topographical maps to improve the signal-to-noise ratio. Other basic and clinical applications are also explored.
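
    A hedged sketch of the second modelling term (synthetic data; a real pipeline would first remove the fitted quadric): a least-squares fit of a few low-order Zernike polynomials to residual corneal elevations sampled on the unit disk.

        import numpy as np

        rng = np.random.default_rng(9)

        # Sample points on the unit disk (normalized corneal coordinates).
        n = 500
        rho = np.sqrt(rng.uniform(0, 1, n))
        theta = rng.uniform(0, 2 * np.pi, n)

        def zernike_basis(rho, theta):
            """A few low-order Zernike polynomials."""
            return np.column_stack([
                np.ones_like(rho),           # Z(0,0)  piston
                rho * np.cos(theta),         # Z(1,1)  tilt x
                rho * np.sin(theta),         # Z(1,-1) tilt y
                2 * rho**2 - 1,              # Z(2,0)  defocus
                rho**2 * np.cos(2 * theta),  # Z(2,2)  astigmatism 0/90
                rho**2 * np.sin(2 * theta),  # Z(2,-2) astigmatism 45
            ])

        # Synthetic residual elevation: defocus + astigmatism + noise.
        z = 0.8 * (2 * rho**2 - 1) + 0.3 * rho**2 * np.cos(2 * theta) + rng.normal(0, 0.02, n)

        B = zernike_basis(rho, theta)
        coeffs, *_ = np.linalg.lstsq(B, z, rcond=None)
        print(np.round(coeffs, 3))  # should recover roughly [0, 0, 0, 0.8, 0.3, 0]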

  10. Supply chain management models, applications, and research directions

    CERN Document Server

    Pardalos, Panos; Romeijn, H

    2005-01-01

    This work brings together some of the most up-to-date research in the application of operations research and mathematical modeling techniques to problems arising in supply chain management and e-Commerce. While research in the broad area of supply chain management encompasses a wide range of topics and methodologies, we believe this book provides a good snapshot of current quantitative modeling approaches, issues, and trends within the field. Each chapter is a self-contained study of a timely and relevant research problem in supply chain management. The individual works place a heavy emphasis on the application of modeling techniques to real-world management problems. In many instances, the actual results from applying these techniques in practice are highlighted. In addition, each chapter provides important managerial insights that apply to general supply chain management practice. The book is divided into three parts. The first part contains chapters that address the new and rapidly growing role of the inte...

  11. Bilayer Graphene Application on NO2 Sensor Modelling

    Directory of Open Access Journals (Sweden)

    Elnaz Akbari

    2014-01-01

    Full Text Available Graphene is one of the carbon allotropes: a single-atom-thin layer of sp2-hybridized carbon with a two-dimensional (2D) honeycomb structure. As an outstanding material exhibiting unique mechanical, electrical, and chemical characteristics, including high strength, high conductivity, and high surface area, graphene has earned a remarkable position in today’s experimental and theoretical studies as well as in industrial applications. One such application incorporates the idea of using graphene to achieve higher accuracy and speed in detection devices utilized where gas sensing is required. Although there are plenty of experimental studies in this field, the lack of analytical models is felt deeply. To start with the modelling, a field-effect transistor (FET) based structure has been chosen to serve as the platform, and the effect of NO2 injection on the bilayer graphene density of states has been discussed. The chemical reaction between graphene and the gas creates new carriers in graphene, which causes density changes and eventually changes in the carrier velocity. In the presence of NO2 gas, electrons are donated to the FET channel, which is employed as the sensing mechanism. In order to evaluate the accuracy of the proposed models, the results obtained are compared with existing experimental data, and acceptable agreement is reported.

  12. The determination of the most applicable PWV model for Turkey

    Science.gov (United States)

    Deniz, Ilke; Gurbuz, Gokhan; Mekik, Cetin

    2016-07-01

    Water vapor is a key component in modelling the atmosphere and in climate studies. Moreover, long-term water vapor changes can be an independent source for detecting climate change. Since Global Navigation Satellite Systems (GNSS) use microwaves passing through the atmosphere, atmospheric effects can be modeled with high accuracy. Tropospheric effects on GNSS signals are estimated with the zenith total delay parameter (ZTD), which is the sum of the hydrostatic (ZHD) and wet (ZWD) zenith delays. The first component can be obtained from meteorological observations with high accuracy; the second component can be computed by subtracting ZHD from ZTD (ZWD = ZTD - ZHD). Afterwards, the weighted mean temperature (Tm) or the conversion factor (Q) is used for the conversion between precipitable water vapor (PWV) and ZWD. The parameters Tm and Q are derived from the analysis of radiosonde stations' profile observations. Numerous Q and Tm models have been developed for individual radiosonde stations, radiosonde station groups, countries and global fields, such as the Bevis Tm model and Emardson and Derks' Q models. Accordingly, PWV models (Tm and Q models) for Turkey have been developed using a year of radiosonde data (2011) from 8 radiosonde stations. In this study the models developed are tested by comparing PWV_GNSS, computed by applying the Tm and Q models to the ZTD estimates derived with the Bernese and GAMIT/GLOBK software at GNSS stations established in Istanbul and Ankara, with the values from the collocated radiosonde stations (PWV_RS), from October 2013 to December 2014, with data obtained from a project (no. 112Y350) supported by the Scientific and Technological Research Council of Turkey (TUBITAK). The comparison results show that PWV_GNSS and PWV_RS are highly correlated (86% for Ankara and 90% for Istanbul). Thus, the most applicable model for Turkey and the accuracy of GNSS meteorology are investigated. In addition, the Tm model was applied to the ZTD estimates of 20 TUSAGA-Active (CORS-TR) stations in
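
    A hedged sketch of the ZTD-to-PWV chain using the widely cited Saastamoinen ZHD model and the Bevis et al. Tm relation (constants taken from the literature; the station values below are hypothetical, not from the Turkish network):

        import numpy as np

        def zhd_saastamoinen(p_hpa, lat_rad, h_km):
            """Zenith hydrostatic delay in metres from surface pressure."""
            return 0.0022768 * p_hpa / (1 - 0.00266 * np.cos(2 * lat_rad) - 0.00028 * h_km)

        def pwv_from_ztd(ztd_m, p_hpa, ts_kelvin, lat_rad, h_km):
            """Convert ZTD to precipitable water vapor (mm)."""
            zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_rad, h_km)
            tm = 70.2 + 0.72 * ts_kelvin   # Bevis et al. mean temperature relation
            k2p, k3 = 22.1, 3.739e5        # refractivity constants (K/hPa, K^2/hPa)
            rho_w, rv = 1000.0, 461.5      # water density, vapor gas constant
            # Dimensionless conversion factor, ~0.15 (1e8 because k's are per hPa).
            pi_factor = 1e8 / (rho_w * rv * (k3 / tm + k2p))
            return pi_factor * zwd * 1000.0  # m -> mm

        # Hypothetical station values for illustration only.
        print(pwv_from_ztd(ztd_m=2.40, p_hpa=1005.0, ts_kelvin=293.0,
                           lat_rad=np.radians(40.0), h_km=0.9))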

  13. Building energy modeling for green architecture and intelligent dashboard applications

    Science.gov (United States)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied a passive technique, the roof solar chimney, for architecturally reducing the cooling load in homes. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it reliably for building simulation. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high- and low-efficiency constructions and three ventilation strategies. The results showed that in four US climates, the roof solar chimney yields significant cooling load energy savings of up to 90%. After developing this new method for the small-scale representation of a passive architecture technique in BEM, the study expanded its scope to address a fundamental issue in modeling: the treatment of uncertainty in, and the improvement of, occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy-efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  14. Parametric model-order reduction for viscoelastic finite element models: an application to material parameter identification

    OpenAIRE

    van de Walle, Axel; Rouleau, Lucie; Deckers, Elke; Desmet, Wim

    2015-01-01

    In many engineering applications, viscoelastic treatments are used to suppress vibrations of lightly damped structures. Computational methods provide powerful tools for the design and analysis of these structures. The most commonly used method to model the dynamics of complex structures is the finite element method. Its use, however, often results in very large and computationally demanding models, especially when viscoelastic material behaviour has to be taken into account. To alleviate this...

  15. Ensemble hidden Markov models with application to landmine detection

    Science.gov (United States)

    Hamdi, Anis; Frigui, Hichem

    2015-12-01

    We introduce an ensemble learning method for temporal data that uses a mixture of hidden Markov models (HMM). We hypothesize that the data are generated by K models, each of which reflects a particular trend in the data. The proposed approach, called ensemble HMM (eHMM), is based on clustering within the log-likelihood space and has two main steps. First, one HMM is fit to each of the N individual training sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an N-by-N log-likelihood distance matrix that will be partitioned into K groups using a relational clustering algorithm. In the second step, we learn the parameters of one HMM per cluster. We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum classification error (MCE), and the variational Bayesian (VB) training approaches. Finally, to test a new sequence, its likelihood is computed in all the models and a final confidence value is assigned by combining the models' outputs using an artificial neural network. We propose both discrete and continuous versions of the eHMM. Our approach was evaluated on a real-world application for landmine detection using ground-penetrating radar (GPR). Results show that both the continuous and discrete eHMM can identify meaningful and coherent HMM mixture components that describe different properties of the data. Each HMM mixture component models a group of data that share common attributes. These attributes are reflected in the mixture model's parameters. The results indicate that the proposed method outperforms the baseline HMM that uses one model for each class in the data.
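
    A hedged sketch of the first eHMM step on synthetic sequences, assuming the hmmlearn and scipy packages are available (data and parameter choices are illustrative): one Gaussian HMM is fit per sequence, pairwise log-likelihoods form a distance matrix, and hierarchical clustering partitions the sequences into K groups.

        import numpy as np
        from hmmlearn import hmm
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(11)

        # Synthetic 1-D sequences drawn from two regimes (trend vs. noise).
        seqs = [np.cumsum(rng.normal(0.5, 1, 60)).reshape(-1, 1) for _ in range(5)] + \
               [rng.normal(0, 1, (60, 1)) for _ in range(5)]

        # Step 1: fit one HMM per sequence, then score every sequence under it.
        models = []
        for s in seqs:
            m = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
            m.fit(s)
            models.append(m)
        loglik = np.array([[m.score(s) / len(s) for s in seqs] for m in models])

        # Step 2: symmetrize into a distance matrix and cluster into K = 2 groups.
        d = loglik.max() - (loglik + loglik.T) / 2.0
        np.fill_diagonal(d, 0.0)
        labels = fcluster(linkage(squareform(d, checks=False), method="average"),
                          t=2, criterion="maxclust")
        print(labels)  # the two regimes should fall into separate clusters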

  16. Modeling & imaging of bioelectrical activity principles and applications

    CERN Document Server

    He, Bin

    2010-01-01

    Over the past several decades, much progress has been made in understanding the mechanisms of electrical activity in biological tissues and systems, and for developing non-invasive functional imaging technologies to aid clinical diagnosis of dysfunction in the human body. The book will provide full basic coverage of the fundamentals of modeling of electrical activity in various human organs, such as heart and brain. It will include details of bioelectromagnetic measurements and source imaging technologies, as well as biomedical applications. The book will review the latest trends in

  17. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  18. Cognitive interference modeling with applications in power and admission control

    KAUST Repository

    Mahmood, Nurul Huda

    2012-10-01

    One of the key design challenges in a cognitive radio network is controlling the interference generated at coexisting primary receivers. In order to design efficient cognitive radio systems and to minimize their unwanted consequences, it is therefore necessary to effectively control the secondary interference at the primary receivers. In this paper, a generalized framework for the interference analysis of a cognitive radio network, where different secondary transmitters may transmit with different powers and transmission probabilities, is presented, and various applications of this interference model are demonstrated. The findings of the analytical performance analyses are confirmed through selected computer-based Monte-Carlo simulations. © 2012 IEEE.
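
    The analytical framework itself is only summarized above, so the following Python sketch merely illustrates the kind of Monte-Carlo check the paper describes: aggregating secondary interference at a primary receiver from transmitters with heterogeneous powers and activity probabilities. All numbers and the path-loss/fading model are illustrative assumptions.

        # Hypothetical Monte-Carlo estimate of aggregate secondary interference.
        import numpy as np

        rng = np.random.default_rng(0)
        n_tx, n_trials = 20, 100_000
        p_active = rng.uniform(0.1, 0.5, n_tx)   # per-transmitter activity probability
        tx_power = rng.uniform(0.1, 1.0, n_tx)   # heterogeneous transmit powers (W)
        dist = rng.uniform(50.0, 500.0, n_tx)    # distances to the primary receiver (m)
        alpha = 3.5                              # assumed path-loss exponent

        # Each trial: independent activity per transmitter, Rayleigh (exponential power) fading.
        active = rng.random((n_trials, n_tx)) < p_active
        fading = rng.exponential(1.0, (n_trials, n_tx))
        interference = (active * fading * tx_power * dist**(-alpha)).sum(axis=1)

        print(f"mean aggregate interference: {interference.mean():.3e} W")
        print(f"99th-percentile level:       {np.percentile(interference, 99):.3e} W")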

  19. Numerical algorithm of distributed TOPKAPI model and its application

    Directory of Open Access Journals (Sweden)

    Deng Peng

    2008-12-01

    Full Text Available The TOPKAPI (TOPographic Kinematic APproximation and Integration) model is a physically based rainfall-runoff model derived from the integration in space of the kinematic wave model. In the TOPKAPI model, rainfall-runoff and runoff routing processes are described by three nonlinear reservoir differential equations that are structurally similar and describe different hydrological and hydraulic processes. Equations are integrated over grid cells that describe the geometry of the catchment, leading to a cascade of nonlinear reservoir equations. To improve the model's computational precision, this paper provides the general form of these equations and describes their solution by means of a numerical algorithm, the variable-step fourth-order Runge-Kutta algorithm. To assess the quality of the comprehensive numerical algorithm, this paper presents a case study application to the Buliu River Basin, which has an area of 3 310 km2, using a DEM (digital elevation model) grid with a resolution of 1 km. The results show that the variable-step fourth-order Runge-Kutta algorithm for nonlinear reservoir equations is a good approximation of subsurface flow in the soil matrix, overland flow over the slopes, and surface flow in the channel network, allowing us to retain the physical properties of the original equations at scales ranging from a few meters to 1 km.
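
    As a hedged illustration of the numerical core described above, the sketch below integrates a single nonlinear reservoir equation dV/dt = i(t) - c*V^b with an adaptive Runge-Kutta scheme (SciPy's RK45 standing in for the paper's variable-step fourth-order Runge-Kutta); the coefficients and inflow are synthetic, not calibrated TOPKAPI values.

        # One TOPKAPI-style nonlinear reservoir, integrated adaptively.
        import numpy as np
        from scipy.integrate import solve_ivp

        c, b = 1e-4, 1.67                 # assumed storage-discharge coefficient/exponent

        def inflow(t):                    # synthetic rainfall-driven inflow (m^3/s)
            return 2.0 + 1.5 * np.sin(2.0 * np.pi * t / 86400.0)

        def reservoir(t, V):              # dV/dt = inflow - c * V**b
            return inflow(t) - c * np.maximum(V, 0.0) ** b

        sol = solve_ivp(reservoir, (0.0, 3 * 86400.0), y0=[1e3],
                        method='RK45', rtol=1e-6, atol=1e-3)
        print(f"storage after 3 days: {sol.y[0, -1]:.1f} m^3")

    In the full model, a cascade of such equations - one per process per grid cell - is integrated, with each cell's outflow feeding the next cell along the drainage path.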

  20. SARX Model Application for Industrial Power Demand Forecasting in Brazil

    Directory of Open Access Journals (Sweden)

    Alessandra de Ávila Montini

    2012-06-01

    Full Text Available The objective of this paper is to propose the application of the SARX model to arrive at industrial power consumption forecasts in Brazil, which are critical to support decision-making in the energy sector on technical, economic and environmentally sustainable grounds. The proposed model has a seasonal component, considers the influence of exogenous variables on the projection of the dependent variable, and utilizes an autoregressive process for residual modeling so as to improve its explanatory power. Five exogenous variables were included: industrial capacity utilization, industrial electricity tariff, industrial real revenues, exchange rate, and machinery and equipment inflation. In addition, the model assumed that the power forecast was dependent on its own time lags and also on a dummy variable reflecting the 2009 economic crisis. The study used 84 monthly observations, from January 2003 to December 2009. The backward method was used to select exogenous variables, at a 0.10 descriptive (significance) level. The results showed an adjusted coefficient of determination of 93.9%, and all the estimated coefficients were statistically significant at the 0.10 level. Forecasts were also made from January to May 2010, with 95% confidence intervals that contained the actual consumption values for this period. The SARX model has demonstrated excellent performance for industrial power consumption forecasting in Brazil.
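
    A SARX-style fit can be sketched with the SARIMAX class from statsmodels, which nests the paper's structure (exogenous regressors plus seasonal autoregressive residuals); the synthetic series below merely stand in for the paper's five exogenous variables and the consumption data.

        # Illustrative SARX-style fit on synthetic monthly data (assumes statsmodels, pandas).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        idx = pd.period_range('2003-01', periods=84, freq='M').to_timestamp()
        exog = pd.DataFrame({'capacity_util': rng.normal(80, 5, 84),
                             'tariff': rng.normal(100, 10, 84)}, index=idx)
        demand = (0.5 * exog['capacity_util'] - 0.2 * exog['tariff']
                  + rng.normal(0, 2, 84)).rename('demand')

        # AR(1) residuals with a 12-month seasonal AR term, as in a SARX setup.
        model = sm.tsa.SARIMAX(demand, exog=exog, order=(1, 0, 0),
                               seasonal_order=(1, 0, 0, 12), trend='c')
        res = model.fit(disp=False)

        # Out-of-sample forecasts require future values of the exogenous variables.
        future_exog = exog.iloc[-5:].to_numpy()    # placeholder future regressors
        print(res.forecast(steps=5, exog=future_exog))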

  1. Application of the ACASA model for urban development studies

    Science.gov (United States)

    Marras, S.; Pyles, R. D.; Falk, M.; Snyder, R. L.; Paw U, K. T.; Blecic, I.; Trunfio, G. A.; Cecchini, A.; Spano, D.

    2012-04-01

    Since the urban population is growing fast and urban areas are recognized as the major source of CO2 emissions, more attention has been dedicated to the topic of urban sustainability and its connection with the climate. Urban flows of energy, water and carbon have an important impact on climate change, and their quantification is pivotal in city design and management. Considerable effort has been devoted to quantitative estimates of the urban metabolism components, and several advanced models have been developed and used at different spatial and temporal scales for this purpose. However, it is necessary to develop suitable tools and indicators to effectively support urban planning and management with the goal of achieving a more sustainable metabolism in the urban environment. In this study, the multilayer model ACASA (Advanced Canopy-Atmosphere-Soil Algorithm) was chosen to simulate the exchanges of heat, water vapour and CO2 within and above the urban canopy. After several calibration and evaluation tests over natural and agricultural ecosystems, the model was recently modified for application in urban and peri-urban areas. New equations to account for the anthropogenic contribution to heat exchange and carbon production, as well as key parameterizations of leaf-facet scale interactions to separate biogenic and anthropogenic flux sources and sinks, were added to test changes in land use or urban planning strategies. The analysis was based on the evaluation of the ACASA model's performance in estimating urban metabolism components at the local scale. Simulated sensible heat, latent heat, and carbon fluxes were compared with in situ Eddy Covariance measurements collected in the city centre of Florence (Italy). Statistical analysis was performed to test the model's accuracy and reliability. A model sensitivity analysis with respect to soil types and increased population density was conducted to investigate the potential use of ACASA for evaluating the impact of alternative planning scenarios. In

  2. Economic model predictive control theory, formulations and chemical process applications

    CERN Document Server

    Ellis, Matthew; Christofides, Panagiotis D

    2017-01-01

    This book presents general methods for the design of economic model predictive control (EMPC) systems for broad classes of nonlinear systems that address key theoretical and practical considerations including recursive feasibility, closed-loop stability, closed-loop performance, and computational efficiency. Specifically, the book proposes: Lyapunov-based EMPC methods for nonlinear systems; two-tier EMPC architectures that are highly computationally efficient; and EMPC schemes that explicitly handle uncertainty, time-varying cost functions, time-delays and multiple-time-scale dynamics. The proposed methods employ a variety of tools ranging from nonlinear systems analysis, through Lyapunov-based control techniques, to nonlinear dynamic optimization. The applicability and performance of the proposed methods are demonstrated through a number of chemical process examples. The book presents state-of-the-art methods for the design of economic model predictive control systems for chemical processes. In addition to being...

  3. Solid modeling and applications rapid prototyping, CAD and CAE theory

    CERN Document Server

    Um, Dugan

    2016-01-01

    The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), the essentials of Rapid Prototyping, as well as practical skills needed to apply this understanding in real world design and manufacturing settings. The book includes three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concepts of geometric modeling, Hermite and Bezier spline curve theory, and 3-dimensional surface theories, as well as rendering theory. The CAE section explores mesh generation theory, matrix notation for FEM, the stiffness method, and truss equations. And in Rapid Prototyping, the author illustrates stereolithography theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...

  4. Soft Computing Models in Industrial and Environmental Applications

    CERN Document Server

    Abraham, Ajith; Corchado, Emilio; 7th International Conference, SOCO’12

    2013-01-01

    This volume of Advances in Intelligent and Soft Computing contains accepted papers presented at SOCO 2012, held in the beautiful and historic city of Ostrava (Czech Republic), in September 2012. Soft computing represents a collection or set of computational techniques in machine learning, computer science and some engineering disciplines, which investigate, simulate, and analyze very complex issues and phenomena. After a thorough peer-review process, the SOCO 2012 International Program Committee selected 75 papers, published in these conference proceedings, representing an acceptance rate of 38%. In this edition, special emphasis was placed on the organization of special sessions. Three special sessions were organized on relevant topics: Soft computing models for Control Theory & Applications in Electrical Engineering, Soft computing models for biomedical signals and data processing, and Advanced Soft Computing Methods in Computer Vision and Data Processing. The selecti...

  5. Joint Dynamics Modeling and Parameter Identification for Space Robot Applications

    Directory of Open Access Journals (Sweden)

    Adenilson R. da Silva

    2007-01-01

    Full Text Available Long-term mission identification and model validation for in-flight manipulator control systems in near-zero gravity and a hostile space environment are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed in which several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of the least-squares (LS) procedure to obtain initial estimates, and a general nonlinear optimization method (MCS - multilevel coordinate search - algorithm) to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ) experiment that was developed at DLR for a utilization opportunity on the International Space Station (ISS). The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.
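
    The two-stage strategy can be sketched as follows: a linear least-squares fit over the linearly entering parameters provides starting values, and a global optimizer (SciPy's differential_evolution standing in for the MCS algorithm used in the paper) then estimates all parameters, including the nonlinear friction term. The joint model and data below are synthetic assumptions.

        # Two-stage joint-parameter identification sketch (inertia, viscous and Coulomb friction).
        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(2)
        t = np.linspace(0, 10, 500)
        qdot, qddot = np.sin(t), np.cos(t)            # joint velocity, acceleration
        J_true, b_true, tc_true = 0.05, 0.8, 0.3      # "true" parameters for the demo
        tau = (J_true * qddot + b_true * qdot
               + tc_true * np.tanh(50 * qdot)         # smooth Coulomb friction model
               + rng.normal(0, 0.02, t.size))         # measurement noise

        # Stage 1: linear LS on the linearly entering terms (J, b) for initial values.
        J0, b0 = np.linalg.lstsq(np.column_stack([qddot, qdot]), tau, rcond=None)[0]

        # Stage 2: global nonlinear optimization over all parameters.
        def cost(p):
            J, b, tc = p
            pred = J * qddot + b * qdot + tc * np.tanh(50 * qdot)
            return np.mean((tau - pred) ** 2)

        bounds = [(0, 2 * abs(J0) + 1), (0, 2 * abs(b0) + 1), (0, 1)]
        res = differential_evolution(cost, bounds, seed=0, tol=1e-10)
        print("identified [J, b, tau_c]:", np.round(res.x, 3))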

  6. An Application of Finite Element Modelling to Pneumatic Artificial Muscle

    Directory of Open Access Journals (Sweden)

    R. Ramasamy

    2005-01-01

    Full Text Available The purpose of this article is to introduce and give an overview of Pneumatic Artificial Muscles (PAMs) as a whole and to discuss their numerical modelling using the Finite Element (FE) Method, thereby providing more insight into their behaviour in generating actuation force. PAMs mainly consist of flexible, inflatable membranes with orthotropic material behaviour. The main properties influencing PAMs are explained in terms of their load-carrying capacity and low weight in assembly. A discussion of their designs and capacity to function as locomotion devices in robotics applications is laid out, followed by FE modelling to represent the PAMs' overall structural behaviour under potential operational conditions.

  7. Application of Service Quality Model in Education Environment

    Directory of Open Access Journals (Sweden)

    Ting Ding Hooi

    2016-02-01

    Full Text Available Most of the ideas on service quality stem from the West, and the importance of the massive research developments there is undeniable. This leads to the generation and development of new ideas, which were subsequently channeled to developing countries, where they were formulated and used to obtain better approaches to delivering service quality. There is much to be learnt from the SERVQUAL service quality model, which has attained wide acceptance in the West. Service quality in the education system is important to guarantee the effectiveness and quality of education. Effective, quality education will produce quality graduates, who will contribute to the development of the nation. This paper discusses the application of the SERVQUAL model to the education environment.

  8. Research and application of hierarchical model for multiple fault diagnosis

    Institute of Scientific and Technical Information of China (English)

    An Ruoming; Jiang Xingwei; Song Zhengji

    2005-01-01

    The computational complexity of multiple-fault diagnosis in complex systems is a long-standing puzzle. Based on Mozetic's well-known approach, a novel hierarchical model-based diagnosis methodology is put forward to improve the efficiency of multi-fault recognition and localization. Structural abstraction and weighted fault propagation graphs are combined to build the diagnosis model. The graphs have weighted arcs with fault propagation probabilities and propagation strengths. For solving the problem of coupled faults, two diagnosis strategies are used: one is the Lagrangian relaxation and primal heuristic algorithms; the other is the method of propagation strength. Finally, an applied example shows the applicability of the approach, and experimental results are given to show the superiority of the presented technique.

  9. Chemical kinetic modeling of H2 applications

    Energy Technology Data Exchange (ETDEWEB)

    Marinov, N.M.; Westbrook, C.K.; Cloutman, L.D. [Lawrence Livermore National Lab., CA (United States)]; and others

    1995-09-01

    Work being carried out at LLNL has concentrated on studies of the role of chemical kinetics in a variety of problems related to hydrogen combustion in practical combustion systems, with an emphasis on vehicle propulsion. Use of hydrogen offers significant advantages over fossil fuels, and computer modeling provides advantages when used in concert with experimental studies. Many numerical "experiments" can be carried out quickly and efficiently, reducing the cost and time of system development, and many new and speculative concepts can be screened to identify those with sufficient promise to pursue experimentally. This project uses chemical kinetic and fluid dynamic computational modeling to examine the combustion characteristics of systems burning hydrogen, either as the only fuel or mixed with natural gas. Oxidation kinetics are combined with pollutant formation kinetics, including formation of oxides of nitrogen but also including air toxics in natural gas combustion. We have refined many of the elementary kinetic reaction steps in the detailed reaction mechanism for hydrogen oxidation. To extend the model to pressures characteristic of internal combustion engines, it was necessary to apply theoretical pressure falloff formalisms for several key steps in the reaction mechanism. We have continued development of simplified reaction mechanisms for hydrogen oxidation, we have implemented those mechanisms into multidimensional computational fluid dynamics models, and we have used models of chemistry and fluid dynamics to address selected application problems. At the present time, we are using computed high-pressure flame and auto-ignition data to further refine the simplified kinetics models that are then to be used in multidimensional fluid mechanics models. Detailed kinetics studies have investigated hydrogen flames and ignition of hydrogen behind shock waves, intended to refine the detailed reaction mechanisms.

  10. Land Surface Modeling Applications for Famine Early Warning

    Science.gov (United States)

    McNally, A.; Verdin, J. P.; Peters-Lidard, C. D.; Arsenault, K. R.; Wang, S.; Kumar, S.; Shukla, S.; Funk, C. C.; Pervez, M. S.; Fall, G. M.; Karsten, L. R.

    2015-12-01

    Famine early warning has traditionally required close monitoring of agro-climatological conditions, putting them in historical context, and projecting them forward to anticipate end-of-season outcomes. In recent years, it has become necessary to factor in the effects of a changing climate as well. There has also been a growing appreciation of the linkage between food security and water availability. In 2009, Famine Early Warning Systems Network (FEWS NET) science partners began developing land surface modeling (LSM) applications to address these needs. With support from the NASA Applied Sciences Program, an instance of the Land Information System (LIS) was developed to specifically support FEWS NET. A simple crop water balance model (GeoWRSI) traditionally used by FEWS NET took its place alongside the Noah land surface model and the latest version of the Variable Infiltration Capacity (VIC) model, and LIS data readers were developed for FEWS NET precipitation forcings (NOAA's RFE and USGS/UCSB's CHIRPS). The resulting system was successfully used to monitor and project soil moisture conditions in the Horn of Africa, foretelling poor crop outcomes in the OND 2013 and MAM 2014 seasons. In parallel, NOAA created another instance of LIS to monitor snow water resources in Afghanistan, which are an early indicator of water availability for irrigation and crop production. These successes have been followed by investment in LSM implementations to track and project water availability in Sub-Saharan Africa and Yemen, work that is now underway. Adoption of

  11. Application of Model Predictive Control to BESS for Microgrid Control

    Directory of Open Access Journals (Sweden)

    Thai-Thanh Nguyen

    2015-08-01

    Full Text Available Battery energy storage systems (BESSs) have been widely used for microgrid control. Generally, BESS control systems are based on proportional-integral (PI) control techniques, with the outer and inner control loops based on PI regulators. Recently, model predictive control (MPC) has attracted attention for application to future energy processing and control systems because it can easily deal with multivariable cases, system constraints, and nonlinearities. This study considers the application of MPC-based BESSs to microgrid control. Two types of MPC are presented: MPC based on predictive power control (PPC), and MPC based on PI control in the outer and predictive current control (PCC) in the inner control loops. In particular, the effective application of MPC to microgrids with multiple BESSs should be considered because of the differences in their control performance. In this study, microgrids with two BESSs based on the two MPC techniques are considered as an example. The control performance of the MPC used to control the microgrid is compared to that of PI control. The proposed control strategy is investigated through simulations using MATLAB/Simulink software. The simulation results show that the response time, power and voltage ripples, and frequency spectrum could be improved significantly by using MPC.
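
    The study itself works in MATLAB/Simulink; as a language-neutral illustration of the predictive-power-control idea, the sketch below solves one receding-horizon step for a single BESS as a small quadratic program in Python with cvxpy. The horizon, limits and reference profile are illustrative assumptions.

        # One MPC step for a BESS: track a power reference subject to SoC and power limits.
        import cvxpy as cp
        import numpy as np

        N, dt = 10, 1.0                          # horizon steps, step length (h)
        cap, soc0, p_max = 100.0, 0.5, 25.0      # capacity (kWh), initial SoC, power limit (kW)
        p_ref = 10.0 * np.sin(np.linspace(0, np.pi, N))   # reference power (kW)

        p = cp.Variable(N)                       # BESS power (+ = discharge)
        soc = cp.Variable(N + 1)                 # state-of-charge trajectory

        constraints = [soc[0] == soc0,
                       soc[1:] == soc[:-1] - p * dt / cap,   # simple SoC dynamics
                       cp.abs(p) <= p_max,
                       soc >= 0.1, soc <= 0.9]
        objective = cp.Minimize(cp.sum_squares(p - p_ref)
                                + 0.1 * cp.sum_squares(cp.diff(p)))  # penalize ramping
        cp.Problem(objective, constraints).solve()
        print("planned power over horizon:", np.round(p.value, 2))

    In closed loop, only the first move p[0] would be applied before re-solving with updated measurements.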

  12. Open Data in Mobile Applications, New Models for Service Information

    Directory of Open Access Journals (Sweden)

    Manuel GÉRTRUDIX BARRIO

    2016-06-01

    Full Text Available The combination of open data generated by government and the proliferation of mobile devices enables the creation of new information services and improved delivery of existing ones. Significantly, it allows citizens simple, quick and effective access to information. Free applications that use open data provide useful information in real time, tailored to the user experience and/or geographic location. This changes the concept of “service information”. Both the infomediary sector and citizens now have new models of production and dissemination of this type of information. From the theoretical contextualization of aspects such as the datification of reality, the mobile recording of everyday experience, and the reinterpretation of service information, we analyze the role of open data in the Spanish public sector and its concrete application in building apps based on these data sets. The findings indicate that this phenomenon will continue to grow, because these applications provide useful and efficient information for decision-making in everyday life.

  13. [Application of three compartment model and response surface model to clinical anesthesia using Microsoft Excel].

    Science.gov (United States)

    Abe, Eiji; Abe, Mari

    2011-08-01

    With the spread of total intravenous anesthesia, clinical pharmacology has become more important. We report a Microsoft Excel file applying the three-compartment model and the response surface model to clinical anesthesia. On the Microsoft Excel sheet, propofol, remifentanil and fentanyl effect-site concentrations are predicted (three-compartment model), and the probabilities of no response to prodding, shaking, surrogates of painful stimuli, and laryngoscopy are calculated using the predicted effect-site drug concentrations. Time-dependent changes in these calculated values are shown graphically. Recent developments in anesthetic drug interaction studies are remarkable, and this Excel file makes their application to clinical anesthesia simple and helpful.
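
    The same computations can be sketched outside Excel; the Python fragment below integrates a three-compartment model with an effect-site compartment. The rate constants and dosing are placeholders, not published propofol parameters, and the response-surface step (mapping effect-site concentrations to response probabilities) is indicated only in a comment.

        # Three-compartment PK model with an effect-site compartment (placeholder constants).
        import numpy as np
        from scipy.integrate import solve_ivp

        k10, k12, k21, k13, k31, ke0 = 0.12, 0.11, 0.055, 0.042, 0.003, 0.46  # 1/min

        def pk(t, y, infusion):
            c1, c2, c3, ce = y    # central, fast and slow peripheral, effect site
            dc1 = infusion(t) - (k10 + k12 + k13) * c1 + k21 * c2 + k31 * c3
            dc2 = k12 * c1 - k21 * c2
            dc3 = k13 * c1 - k31 * c3
            dce = ke0 * (c1 - ce)                 # effect site tracks the central level
            return [dc1, dc2, dc3, dce]

        bolus = lambda t: 50.0 if t < 1.0 else 0.0    # 1-minute infusion
        sol = solve_ivp(pk, (0, 60), [0, 0, 0, 0], args=(bolus,), max_step=0.1)
        ce_10min = sol.y[3, sol.t.searchsorted(10.0)]
        print(f"effect-site level at t = 10 min: {ce_10min:.2f}")
        # A response-surface step would then map the hypnotic and opioid effect-site
        # levels to probabilities of no response, e.g., via a Hill-type interaction model.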

  14. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2016-02-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, coupling procedures, model evaluation, and performance comparisons of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  15. Hamiltonian realization of power system dynamic models and its applications

    Institute of Scientific and Technical Information of China (English)

    MA Jin; MEI ShengWei

    2008-01-01

    A power system is a typical energy system. Because Hamiltonian approaches are closely related to the energy of the physical system, they have been widely researched in recent years. The realization of the Hamiltonian structure of a nonlinear dynamic system is the basis for the application of Hamiltonian methods. However, there have been no systematic investigations of the Hamiltonian realization for different power system dynamic models so far. This paper studies the Hamiltonian realization in power system dynamics. Starting from the widely used power system dynamic models, the paper reveals the intrinsic Hamiltonian structure of nonlinear power system dynamics and also proposes approaches to formulate the power system Hamiltonian structure. Furthermore, this paper shows the application of the Hamiltonian structure of power system dynamics to the design of non-smooth controllers that account for the nonlinear ceiling effects arising from real physical limits. The general procedure for designing controllers via the Hamiltonian structure is also summarized in the paper. Controller design based on the Hamiltonian structure is a completely nonlinear method, with no linearization during the controller design process. Thus, the nonlinear characteristics of the dynamic system are completely kept and fully utilized.

  17. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, techniques and methods, in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. The monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; and students at the advanced undergraduate or master's level in information systems, business administration, or the application of computer science.

  18. Applicative limitations of sediment transport on predictive modeling in geomorphology

    Institute of Scientific and Technical Information of China (English)

    WEI Xiang; LI Zhanbin

    2004-01-01

    Sources of uncertainty or error that arise in attempting to scale up the results of laboratory-scale sediment transport studies for predictive modeling of geomorphic systems include: (i) model imperfection, (ii) omission of important processes, (iii) lack of knowledge of initial conditions, (iv) sensitivity to initial conditions, (v) unresolved heterogeneity, (vi) occurrence of external forcing, and (vii) inapplicability of the factor of safety concept. Sources of uncertainty that are unimportant or that can be controlled at small scales and over short time scales become important in large-scale applications and over long time scales. Control and repeatability, hallmarks of laboratory-scale experiments, are usually lacking at the large scales characteristic of geomorphology. Heterogeneity is an important concomitant of size, and tends to make large systems unique. Uniqueness implies that prediction cannot be based upon first-principles quantitative modeling alone, but must be a function of system history as well. Periodic data collection, feedback, and model updating are essential where site-specific prediction is required.

  19. An application of artificial intelligence for rainfall–runoff modeling

    Indian Academy of Sciences (India)

    Ali Aytek; M Asce; Murat Alp

    2008-04-01

    This study proposes an application of two artificial intelligence (AI) techniques for rainfall-runoff modeling: artificial neural networks (ANN) and evolutionary computation (EC). Two different ANN techniques, the feed-forward back propagation (FFBP) and generalized regression neural network (GRNN) methods, are compared with one EC method, Gene Expression Programming (GEP), a new evolutionary algorithm that evolves computer programs. The daily hydrometeorological data of three rainfall stations and one streamflow station for the Juniata River Basin in the state of Pennsylvania, USA, are taken into consideration in the model development. Statistical parameters such as average, standard deviation, coefficient of variation, skewness, and minimum and maximum values, as well as criteria such as mean square error (MSE) and the coefficient of determination (R2), are used to measure the performance of the models. The results indicate that the proposed genetic programming (GP) formulation performs quite well compared to results obtained by ANNs and is quite practical for use. It is concluded from the results that GEP can be proposed as an alternative to ANN models.
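
    As a hedged, minimal counterpart to the FFBP model above, the sketch below trains a small feed-forward network on lagged rainfall to predict streamflow using scikit-learn; the synthetic data and the 0-3 day lag structure are assumptions standing in for the Juniata River records.

        # Feed-forward ANN rainfall-runoff sketch (assumes scikit-learn).
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error, r2_score

        rng = np.random.default_rng(3)
        rain = rng.gamma(2.0, 2.0, 1200)                        # daily rainfall (mm)
        flow = (np.convolve(rain, [0.3, 0.4, 0.2, 0.1], 'same')
                + rng.normal(0, 0.3, 1200))                     # synthetic streamflow

        # Inputs: rainfall at lags 0..3 days; target: same-day streamflow.
        X = np.column_stack([np.roll(rain, k) for k in range(4)])[4:]
        y = flow[4:]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.3)

        ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                           random_state=0).fit(X_tr, y_tr)
        pred = ann.predict(X_te)
        print(f"MSE = {mean_squared_error(y_te, pred):.3f}, R2 = {r2_score(y_te, pred):.3f}")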

  20. Global Modeling of CO2 Discharges with Aerospace Applications

    Directory of Open Access Journals (Sweden)

    Chloe Berenguer

    2014-01-01

    Full Text Available We developed a global model aiming to study discharges in CO2 under various conditions, covering a large spectrum of pressure, absorbed energy, and feeding values. Various physical conditions and form factors have been investigated. The model was applied to a case of radiofrequency discharge and to helicon-type devices functioning in low and high feed conditions. In general, the main charged species were found to be CO2+ for sufficiently low pressures and O− for higher pressures, followed by CO2+, CO+, and O2+ in the latter case. The dominant reaction is dissociation of CO2, resulting in CO production. Electronegativity, important for radiofrequency discharges, increases with pressure, reaching up to 3 at high flow rates for an absorbed power of 250 W, and diminishes with increasing absorbed power. Model results pertaining to radiofrequency-type plasma discharges are found to be in satisfactory agreement with those available from an existing experiment. Application to low and high flow rate feeding cases of a helicon thruster allowed for evaluation of thruster operating conditions at absorbed powers from 50 W to 1.8 kW. The model allows for a detailed evaluation of the potential of CO2 to be used as a propellant in electric propulsion devices.

  1. Improving automation standards via semantic modelling: Application to ISA88.

    Science.gov (United States)

    Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès

    2017-03-01

    Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software, and they rely on the efficient modelling of the addressed systems. The work presented here is part of the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and to support improvements in technical document consistency. The formalization of conceptual models and the subsequent writing of technical standards are analyzed simultaneously, and guidelines are proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects that are worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency checking method.

  2. Validation of elastic cross section models for space radiation applications

    Science.gov (United States)

    Werneth, C. M.; Xu, X.; Norman, R. B.; Ford, W. P.; Maung, K. M.

    2017-02-01

    The space radiation field is composed of energetic particles that pose both acute and long-term risks for astronauts in low earth orbit and beyond. In order to estimate radiation risk to crew members, the fluence of particles and biological response to the radiation must be known at tissue sites. Given that the spectral fluence at the boundary of the shielding material is characterized, radiation transport algorithms may be used to find the fluence of particles inside the shield and body, and the radio-biological response is estimated from experiments and models. The fidelity of the radiation spectrum inside the shield and body depends on radiation transport algorithms and the accuracy of the nuclear cross sections. In a recent study, self-consistent nuclear models based on multiple scattering theory that include the option to study relativistic kinematics were developed for the prediction of nuclear cross sections for space radiation applications. The aim of the current work is to use uncertainty quantification to ascertain the validity of the models as compared to a nuclear reaction database and to identify components of the models that can be improved in future efforts.

  3. Improving ANOVA estimation in mixed linear models

    Institute of Scientific and Technical Information of China (English)

    范永辉; 王松桂

    2007-01-01

    We discuss the improvement, in the mean squared error sense, of the analysis of variance (ANOVA) estimators of the variance components in a linear mixed model containing three variance components, extend this result to general linear mixed models, and obtain a simple method for improving ANOVA estimators.

  4. Development and application of coarse-grained models for lipids

    Science.gov (United States)

    Cui, Qiang

    2013-03-01

    I'll discuss a number of topics that represent our efforts in developing reliable molecular models for describing chemical and physical processes involving biomembranes. This is an exciting yet challenging research area because of the multiple length and time scales that are present in the relevant problems. Accordingly, we attempt to (1) understand the value and limitation of popular coarse-grained (CG) models for lipid membranes with either a particle or continuum representation; (2) develop new CG models that are appropriate for the particular problem of interest. As specific examples, I'll discuss (1) a comparison of atomistic, MARTINI (a particle based CG model) and continuum descriptions of a membrane fusion pore; (2) the development of a modified MARTINI model (BMW-MARTINI) that features a reliable description of membrane/water interfacial electrostatics and its application to cell-penetration peptides and membrane-bending proteins. Motivated specifically by the recent studies of Wong and co-workers, we compare the self-assembly behaviors of lipids with cationic peptides that include either Arg residues or a combination of Lys and hydrophobic residues; in particular, we attempt to reveal factors that stabilize the cubic ``double diamond'' Pn3m phase over the inverted hexagonal HII phase. For example, to explicitly test the importance of the bidentate hydrogen-bonding capability of Arg to the stabilization of negative Gaussian curvature, we also compare results using variants of the BMW-MARTINI model that treat the side chain of Arg with different levels of details. Collectively, the results suggest that both the bidentate feature of Arg and the overall electrostatic properties of cationic peptides are important to the self-assembly behavior of these peptides with lipids. The results are expected to have general implications to the mechanism of peptides and proteins that stimulate pore formation in biomembranes. Work in collaboration with Zhe Wu, Leili Zhang

  5. An oral multispecies biofilm model for high content screening applications

    Science.gov (United States)

    Kommerein, Nadine; Stumpp, Sascha N.; Müsken, Mathias; Ehlert, Nina; Winkel, Andreas; Häussler, Susanne; Behrens, Peter; Buettner, Falk F. R.; Stiesch, Meike

    2017-01-01

    Peri-implantitis caused by multispecies biofilms is a major complication in dental implant treatment. The bacterial infection surrounding dental implants can lead to bone loss and, in turn, to implant failure. A promising strategy to prevent these common complications is the development of implant surfaces that inhibit biofilm development. A reproducible and easy-to-use biofilm model as a test system for large scale screening of new implant surfaces with putative antibacterial potency is therefore of major importance. In the present study, we developed a highly reproducible in vitro four-species biofilm model consisting of the highly relevant oral bacterial species Streptococcus oralis, Actinomyces naeslundii, Veillonella dispar and Porphyromonas gingivalis. The application of live/dead staining, quantitative real time PCR (qRT-PCR), scanning electron microscopy (SEM) and urea-NaCl fluorescence in situ hybridization (urea-NaCl-FISH) revealed that the four-species biofilm community is robust in terms of biovolume, live/dead distribution and individual species distribution over time. The biofilm community is dominated by S. oralis, followed by V. dispar, A. naeslundii and P. gingivalis. The percentage distribution in this model closely reflects the situation in early native plaques and is therefore well suited as an in vitro model test system. Furthermore, despite its nearly native composition, the multispecies model does not depend on nutrient additives, such as native human saliva or serum, and is an inexpensive, easy to handle and highly reproducible alternative to the available model systems. The 96-well plate format enables high content screening for optimized implant surfaces impeding biofilm formation or the testing of multiple antimicrobial treatment strategies to fight multispecies biofilm infections, both exemplary proven in the manuscript. PMID:28296966

  6. Performance Comparison of Two Meta-Model for the Application to Finite Element Model Updating of Structures

    Institute of Scientific and Technical Information of China (English)

    Yang Liu; DeJun Wang; Jun Ma; Yang Li

    2014-01-01

    To investigate the application of meta-models to finite element (FE) model updating of structures, the performance of two popular meta-models, the Kriging model and the response surface model (RSM), was compared in detail. First, the two kinds of meta-model are introduced briefly. Second, some key issues in the application of meta-models to FE model updating of structures are proposed and discussed, and advice is presented on selecting a reasonable meta-model for the purpose of updating the FE model of a structure. Finally, the procedure of FE model updating based on a meta-model was implemented by updating the FE model of a truss bridge model with measured modal parameters. The results showed that the Kriging model is more suitable for FE model updating of complex structures.
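
    To make the comparison concrete, the sketch below fits both meta-models to a toy response surface in Python: Kriging via Gaussian-process regression and an RSM via quadratic polynomial regression. The test function is an assumption, not the truss bridge FE model used in the study.

        # Kriging (GP regression) vs. quadratic response surface on a toy response.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(4)
        f = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2   # toy "FE response"
        X_train = rng.uniform(-1, 1, (30, 2))                    # design-of-experiments sample
        X_test = rng.uniform(-1, 1, (200, 2))
        y_train, y_test = f(X_train), f(X_test)

        kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                           normalize_y=True).fit(X_train, y_train)
        rsm = make_pipeline(PolynomialFeatures(degree=2),
                            LinearRegression()).fit(X_train, y_train)

        print(f"Kriging R2: {r2_score(y_test, kriging.predict(X_test)):.3f}")
        print(f"RSM R2:     {r2_score(y_test, rsm.predict(X_test)):.3f}")

    On responses with strong local nonlinearity, the interpolating Kriging model typically scores higher, which mirrors the paper's conclusion for complex structures.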

  7. A priori discretization quality metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    modification. The metrics provide, for the first time, a quantification of the routing-relevant information loss due to discretization, based on the relationship between in-channel routing length and flow velocity. Moreover, they identify and count the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes on the input data and accumulating variable changes in an area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model whose grid scales are greater than the input data resolution. The discretization metrics and decision-making approach are applied to the Grand River watershed in southwestern Ontario, Canada, where discretization decisions are required for a semi-distributed modelling application. Results show that discretization-induced information loss increases monotonically as the discretization gets coarser. With regard to routing information loss in subbasin discretization, multiple points of interest, rather than just the watershed outlet, should be considered. Moreover, subbasin and HRU discretization decisions should not be made independently, since the subbasin input significantly influences the complexity of the HRU discretization result. Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity under the same model complexity (number of computational units).

  8. A review of modeling applications using ROMS model and COAWST system in the Adriatic sea region

    CERN Document Server

    Carniel, Sandro

    2013-01-01

    From its first implementation in a purely hydrodynamic configuration, to the latest configuration under the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) system, several specific modelling applications of the Regional Ocean Modelling System (ROMS, www.myroms.org) have been put forward within the Adriatic Sea (Italy) region. Covering a wide range of spatial and temporal scales, they have developed in a growing number of fields supporting Integrated Coastal Zone Management (ICZM) and Marine Spatial Planning (MSP) activities in this semi-enclosed sea of paramount importance, which includes the Gulf of Venice. Presently, one ROMS operational implementation provides daily 3-day forecasts of hydrodynamics and sea level, a second models the most relevant biogeochemical properties, and a third (two-way coupled with the Simulating Waves Nearshore (SWAN) model) deals with extreme wave forecasts. Such operational models provide support to civil and environmental protection activities (e.g., driving su...

  9. Using the object modeling system for hydrological model development and application

    Directory of Open Access Journals (Sweden)

    S. Kralisch

    2005-01-01

    Full Text Available State of the art challenges in sustainable management of water resources have created demand for integrated, flexible and easy to use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus can not be easily adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS, GPSRU (Fort Collins, CO, USA), USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows the implementation of single process components to be compiled and applied as custom tailored model assemblies. This paper describes basic principles of the OMS and its main components and explains in more detail how the problems during coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example the implementation of the hydrological model J2000 is discussed.

  10. Soil erosion by water - model concepts and application

    Science.gov (United States)

    Schmidt, Juergen

    2010-05-01

    approaches will be discussed taking account of the models WEPP, EUROSEM, IISEM and EROSION 3D. In order to provide a better representation of spatially heterogeneous catchments in terms of land use, soil, slope, and rainfall, most recently developed models operate on a grid-cell basis or other kinds of sub-units, each having uniform characteristics. These so-called "distributed models" accept inputs from raster-based geographic information systems (GIS). The cell-based structure of the models also allows drainage paths to be generated, by which water and sediment can be routed from the top to the bottom of the respective watershed. One of the open problems in soil erosion modelling refers to the spontaneous generation of erosion rills without the need for pre-existing morphological contours. A promising approach to handling this problem was first realized in the RILLGROW model, which uses a cellular automaton system to generate realistic rill patterns. With respect to the above-mentioned models, selected applications will be presented and discussed regarding their usability for soil and water conservation purposes.

  11. GSTARS computer models and their applications, part I: theoretical development

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3.

  12. How to apply the bivariate parametric tests Student's t and ANOVA in SPSS. A practical case

    Directory of Open Access Journals (Sweden)

    María-José Rubio-Hurtado

    2012-07-01

    Full Text Available Parametric tests are a type of statistical significance test that quantify the association or independence between a quantitative variable and a categorical one. Parametric tests impose certain prerequisites for their application: a Normal distribution of the quantitative variable in the groups being compared, homogeneity of variances in the populations from which the groups are drawn, and a sample size n of at least 30. When these requirements are not met, non-parametric statistical tests must be used instead. Parametric tests fall into two classes: the t test (for one sample, or for two related or independent samples) and ANOVA (for more than two independent samples).
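
    Both tests, together with the prerequisite checks mentioned above, can also be run outside SPSS; the following Python sketch uses SciPy on synthetic group scores (the data and group sizes are illustrative).

        # Assumption checks, two-sample t test, and one-way ANOVA with SciPy.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        g1, g2, g3 = (rng.normal(m, 8, 40) for m in (70, 74, 78))

        # Prerequisites: normality per group and homogeneity of variances.
        print("Shapiro p-values:", [round(stats.shapiro(g).pvalue, 3) for g in (g1, g2, g3)])
        print("Levene p-value:  ", round(stats.levene(g1, g2, g3).pvalue, 3))

        t, p = stats.ttest_ind(g1, g2)        # two independent samples
        print(f"t test: t = {t:.2f}, p = {p:.4f}")

        F, p = stats.f_oneway(g1, g2, g3)     # more than two independent samples
        print(f"ANOVA:  F = {F:.2f}, p = {p:.4f}")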

  13. Comparison between the ANOVA estimator and the SD estimator under balanced data

    Institute of Scientific and Technical Information of China (English)

    吴密霞; 孙兵

    2013-01-01

    In linear mixed effects models, the analysis of variance (ANOVA) estimator and the spectral decomposition (SD) estimator play very important roles in constructing exact tests and generalized P-value pivotal quantities. Although the two estimators are based on different methods, they share many similar merits, such as unbiasedness and closed-form expressions. Drawing on previously obtained spectral decompositions of the covariance matrix, this paper reveals the relationship between the ANOVA estimator and the SD estimator in general linear mixed effects models with balanced data, and gives necessary and sufficient conditions for the equivalence of the two estimators under two covariance structures: the nested structure and the multi-way classification random effects structure.
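
    For a concrete special case of the balanced setting discussed above, the sketch below computes the classical ANOVA estimators of the variance components in a balanced one-way random-effects model y_ij = mu + a_i + e_ij; the simulated group count, group size and true components are arbitrary choices.

        # ANOVA variance-component estimators in a balanced one-way random model.
        import numpy as np

        rng = np.random.default_rng(6)
        a, n = 8, 10                                  # groups, observations per group
        sigma_a2, sigma_e2 = 4.0, 1.0                 # true variance components
        y = (rng.normal(0, np.sqrt(sigma_a2), (a, 1))         # random effects a_i
             + rng.normal(0, np.sqrt(sigma_e2), (a, n)))      # errors e_ij

        group_means = y.mean(axis=1)
        msb = n * np.var(group_means, ddof=1)                 # between-group mean square
        msw = ((y - group_means[:, None]) ** 2).sum() / (a * (n - 1))

        # ANOVA estimators follow from E[MSW] = sigma_e^2, E[MSB] = sigma_e^2 + n*sigma_a^2.
        print(f"sigma_e^2 hat = {msw:.2f}, sigma_a^2 hat = {(msb - msw) / n:.2f}")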

  14. Three essays on multi-level optimization models and applications

    Science.gov (United States)

    Rahdar, Mohammad

    The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the values of the decision variables may also impact the objective functions of other levels. A two-level model is called a bilevel model and can be considered a Stackelberg game with a leader and a follower: the leader anticipates the response of the follower and optimizes its objective function, and the follower then reacts to the leader's action. The multi-level decision-making model has many real-world applications, such as government decisions, energy policies, market economics, and network design. However, there is a lack of capable algorithms for solving medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States, and other interactions between the two policies over the next twenty years, are investigated. This problem mainly has two levels of decision makers: the government/policy makers, and biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansion, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model based on the rolling-horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation

  15. Testing simulation and structural models with applications to energy demand

    Science.gov (United States)

    Wolff, Hendrik

    2007-12-01

    This dissertation deals with energy demand and consists of two parts. Part one proposes a unified econometric framework for modeling energy demand and examples illustrate the benefits of the technique by estimating the elasticity of substitution between energy and capital. Part two assesses the energy conservation policy of Daylight Saving Time and empirically tests the performance of electricity simulation. In particular, the chapter "Imposing Monotonicity and Curvature on Flexible Functional Forms" proposes an estimator for inference using structural models derived from economic theory. This is motivated by the fact that in many areas of economic analysis theory restricts the shape as well as other characteristics of functions used to represent economic constructs. Specific contributions are (a) to increase the computational speed and tractability of imposing regularity conditions, (b) to provide regularity preserving point estimates, (c) to avoid biases existent in previous applications, and (d) to illustrate the benefits of our approach via numerical simulation results. The chapter "Can We Close the Gap between the Empirical Model and Economic Theory" discusses the more fundamental question of whether the imposition of a particular theory to a dataset is justified. I propose a hypothesis test to examine whether the estimated empirical model is consistent with the assumed economic theory. Although the proposed methodology could be applied to a wide set of economic models, this is particularly relevant for estimating policy parameters that affect energy markets. This is demonstrated by estimating the Slutsky matrix and the elasticity of substitution between energy and capital, which are crucial parameters used in computable general equilibrium models analyzing energy demand and the impacts of environmental regulations. Using the Berndt and Wood dataset, I find that capital and energy are complements and that the data are significantly consistent with duality

  16. Simulation Modeling in Plant Breeding: Principles and Applications

    Institute of Scientific and Technical Information of China (English)

    WANG Jian-kang; Wolfgang H Pfeiffer

    2007-01-01

    Conventional plant breeding largely depends on phenotypic selection and the breeder's experience; therefore breeding efficiency is low and predictions are inaccurate. Along with the fast development of molecular biology and biotechnology, a large amount of biological data is available for genetic studies of important breeding traits in plants, which in turn allows genotypic selection to be conducted in the breeding process. However, gene information has not been effectively used in crop improvement because of the lack of appropriate tools. The simulation approach can utilize the vast and diverse genetic information, predict cross performance, and compare different selection methods, so that the best-performing crosses and effective breeding strategies can be identified. QuLine is a computer tool capable of defining genetic models ranging from simple to complex and of simulating breeding processes for developing final advanced lines. On the basis of the results from simulation experiments, breeders can optimize their breeding methodology and greatly improve breeding efficiency. In this article, the underlying principles of simulation modeling in crop improvement are first introduced, after which several applications of QuLine are summarized: comparing different selection strategies, precision parental selection using known gene information, and the design approach in breeding. Breeding simulation allows the definition of complicated genetic models consisting of multiple alleles, pleiotropy, epistasis, and gene-by-environment interaction, and provides a useful tool for breeders to efficiently use the wide spectrum of genetic data and information available.

  17. Application of Molecular Modeling to Urokinase Inhibitors Development

    Directory of Open Access Journals (Sweden)

    V. B. Sulimov

    2014-01-01

    Full Text Available Urokinase-type plasminogen activator (uPA) plays an important role in the regulation of diverse physiologic and pathologic processes. Experimental research has shown that elevated uPA expression is associated with cancer progression, metastasis, and shortened survival in patients, whereas suppression of the proteolytic activity of uPA leads to an evident decrease in metastasis. Therefore, uPA has been considered a promising molecular target for the development of anticancer drugs. The present study sets out to develop new selective uPA inhibitors using computer-aided structure-based drug design methods. The investigation involves the following stages: computer modeling of the protein active site; development and validation of computer molecular modeling methods, namely docking (SOL program), postprocessing (DISCORE program), direct generalized docking (FLM program), and the application of quantum chemical calculations (MOPAC package); a search for uPA inhibitors among molecules from databases of ready-made compounds; and the design of new chemical structures, their optimization, and experimental examination. On the basis of known uPA inhibitors and modeling results, 18 new compounds have been designed, calculated using the programs mentioned above, synthesized, and tested in vitro. Eight of them display inhibitory activity, and two of them display activity of about 10 μM.

  18. Parallel computer processing and modeling: applications for the ICU

    Science.gov (United States)

    Baxter, Grant; Pranger, L. Alex; Draghic, Nicole; Sims, Nathaniel M.; Wiesmann, William P.

    2003-07-01

    Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extemporaneous and not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high-order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software have been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near- and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.

  19. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  20. Can We Trust Computational Modeling for Medical Applications?

    Science.gov (United States)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Myers, Jerry

    2015-01-01

    Operations in extreme environments such as spaceflight pose human health risks that are currently not well understood and potentially unanticipated. In addition, there are limited clinical and research data to inform the development and implementation of therapeutics for these unique health risks. In this light, NASA's Human Research Program (HRP) is leveraging biomedical computational models and simulations (M&S) to help inform, predict, assess and mitigate spaceflight health and performance risks, and enhance countermeasure development. To ensure that these M&S can be applied with confidence to the space environment, it is imperative to incorporate a rigorous verification, validation and credibility assessment (VV&C) process to ensure that the computational tools are sufficiently reliable to answer questions within their intended use domain. In this presentation, we will discuss how NASA's Integrated Medical Model (IMM) and Digital Astronaut Project (DAP) have successfully adapted NASA's Standard for Models and Simulations, NASA-STD-7009 (7009), to achieve this goal. These VV&C methods are also being leveraged by organizations such as the Food and Drug Administration (FDA), the National Institutes of Health (NIH) and the American Society of Mechanical Engineers (ASME) to establish new M&S VV&C standards and guidelines for healthcare applications. Similarly, we hope to provide some insight to the greater aerospace medicine community on how to develop and implement M&S with sufficient confidence to augment medical research and operations.

  1. Computational multiscale modeling of fluids and solids: theory and applications

    CERN Document Server

    Steinhauser, Martin Oliver

    2017-01-01

    The idea of the book is to provide a comprehensive overview of computational physics methods and techniques that are used for materials modeling on different length and time scales. Each chapter first provides an overview of the basic physical principles which are the basis for the numerical and mathematical modeling on the respective length scale. The book includes the micro-scale, the meso-scale and the macro-scale, and the chapters follow this classification. The book explains in detail many tricks of the trade of some of the most important methods and techniques that are used to simulate materials on the respective levels of spatial and temporal resolution. Case studies are included to further illustrate some methods or theoretical considerations. Example applications for all techniques are provided, some of which are from the author’s own contributions to some of the research areas. The second edition has been expanded by new sections in computational models on meso/macroscopic scales for ocean and a...

  2. Application of WEAP Simulation Model to Hengshui City Water Planning

    Institute of Scientific and Technical Information of China (English)

    OJEKUNLE Z O; ZHAO Lin; LI Manzhou; YANG Zhen; TAN Xin

    2007-01-01

    Like many river basins in China, water resources in the Fudong Pai River are almost fully allocated. This paper seeks to assess and evaluate water resource problems using water evaluation and planning (WEAP) model via its application to Hengshui Basin of Fudong Pai River. This model allows the simulation and analysis of various water allocation scenarios and, above all, scenarios of users' behavior. Water demand management is one of the options discussed in detail. Simulations are proposed for diverse climatic situations from dry years to normal years and results are discussed. Within the limits of data availability, it appears that most water users are not able to meet all their requirements from the river, and that even the ecological reserve will not be fully met during certain years. But the adoption of water demand management procedures offers opportunities for remedying this situation during normal hydrological years. However, it appears that demand management alone will not suffice during dry years. Nevertheless, the ease of use of the model and its user-friendly interfaces make it particularly useful for discussions and dialogue on water resources management among stakeholders.

  3. HEISHI: A fuel performance model for space nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Young, M.F.

    1994-08-01

    HEISHI is a Fortran computer model designed to aid in analysis, prediction, and optimization of fuel characteristics for use in Space Nuclear Thermal Propulsion (SNTP). Calculational results include fission product release rate, fuel failure fraction, mode of fuel failure, stress-strain state, and fuel material morphology. HEISHI contains models for decay chain calculations of retained and released fission products, based on an input power history and release coefficients. Decay chain parameters such as direct fission yield, decay rates, and branching fractions are obtained from a database. HEISHI also contains models for stress-strain behavior of multilayered fuel particles with creep and differential thermal expansion effects, transient particle temperature profile, grain growth, and fuel particle failure fraction. Grain growth is treated as a function of temperature; the failure fraction depends on the coating tensile strength, which in turn is a function of grain size. The HEISHI code is intended for use in analysis of coated fuel particles for use in particle bed reactors; however, much of the code is geometry-independent and applicable to fuel geometries other than spherical.

  4. A reformer performance model for fuel cell applications

    Science.gov (United States)

    Sandhu, S. S.; Saif, Y. A.; Fellner, J. P.

    A performance model for a reformer, consisting of the catalytic partial oxidation (CPO), high- and low-temperature water-gas shift (HTWGS and LTWGS), and preferential oxidation (PROX) reactors, has been formulated. The model predicts the composition and temperature of the hydrogen-rich reformed fuel-gas mixture needed for the fuel cell applications. The mathematical model equations, based on the principles of classical thermodynamics and chemical kinetics, were implemented into a computer program. The resulting software was employed to calculate the chemical species molar flow rates and the gas mixture stream temperature for the steady-state operation of the reformer. Typical computed results, such as the gas mixture temperature at the CPO reactor exit and the profiles of the fractional conversion of carbon monoxide, temperature, and mole fractions of the chemical species as a function of the catalyst weight in the HTWGS, LTWGS, and PROX reactors, are here presented at the carbon-to-oxygen atom ratio (C/O) of 1 for the feed mixture of n-decane (fuel) and dry air (oxidant).

  5. Modeling Markov Switching ARMA-GARCH Neural Networks Models and an Application to Forecasting Stock Returns

    Directory of Open Access Journals (Sweden)

    Melike Bildirici

    2014-01-01

    Full Text Available The study has two aims. The first aim is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties to MS-GARCH processes. The second purpose of the study is to augment the MS-GARCH type models with artificial neural networks to benefit from their universal approximation properties to achieve improved forecasting accuracy. Therefore, the proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are further augmented with MLP, Recurrent NN, and Hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are utilized for modeling the daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray’s MS-GARCH model provided promising results, while the best results are obtained for their neural network based counterparts. Further, among the models analyzed, the models based on the Hybrid-MLP and Recurrent-NN, the MS-ARMA-FIAPGARCH-HybridMLP, and MS-ARMA-FIAPGARCH-RNN provided the best forecast performances over the baseline single-regime GARCH models and, further, over Gray’s MS-GARCH model. Therefore, the models are promising for various economic applications.
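
    The error criteria and the equal-accuracy test mentioned above are straightforward to compute. Below is a minimal sketch, assuming two competing forecast series for the same realized returns; the Diebold-Mariano statistic is the basic large-sample version under squared-error loss, without the HAC variance or small-sample corrections a careful comparison would add.

      import numpy as np
      from scipy import stats

      def forecast_metrics(y, f):
          """MAE, MSE, and RMSE of forecasts f against realized returns y."""
          e = np.asarray(y) - np.asarray(f)
          mae = np.mean(np.abs(e))
          mse = np.mean(e ** 2)
          return mae, mse, np.sqrt(mse)

      def diebold_mariano(y, f1, f2):
          """Naive DM test of equal forecast accuracy under squared-error loss."""
          d = (np.asarray(y) - np.asarray(f1)) ** 2 - (np.asarray(y) - np.asarray(f2)) ** 2
          dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
          return dm, 2.0 * (1.0 - stats.norm.cdf(abs(dm)))  # statistic, two-sided p-value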

  6. Low Dimensional Semiconductor Structures: Characterization, Modeling and Applications

    CERN Document Server

    Horing, Norman

    2013-01-01

    Starting with the first transistor in 1949, the world has experienced a technological revolution which has permeated most aspects of modern life, particularly over the last generation. Yet another such revolution looms up before us with the newly developed capability to control matter on the nanometer scale. A truly extraordinary research effort, by scientists, engineers, technologists of all disciplines, in nations large and small throughout the world, is directed and vigorously pressed to develop a full understanding of the properties of matter at the nanoscale and its possible applications, to bring to fruition the promise of nanostructures to introduce a new generation of electronic and optical devices. The physics of low dimensional semiconductor structures, including heterostructures, superlattices, quantum wells, wires and dots is reviewed and their modeling is discussed in detail. The truly exceptional material, Graphene, is reviewed; its functionalization and Van der Waals interactions are included h...

  7. Applications and limitations of in silico models in drug discovery.

    Science.gov (United States)

    Sacan, Ahmet; Ekins, Sean; Kortagere, Sandhya

    2012-01-01

    Drug discovery in the late twentieth and early twenty-first century has witnessed a myriad of changes that were adopted to predict whether a compound is likely to be successful, or conversely enable identification of molecules with liabilities as early as possible. These changes include integration of in silico strategies for lead design and optimization that perform complementary roles to that of the traditional in vitro and in vivo approaches. The in silico models are facilitated by the availability of large datasets associated with high-throughput screening, bioinformatics algorithms to mine and annotate the data from a target perspective, and chemoinformatics methods to integrate chemistry methods into lead design process. This chapter highlights the applications of some of these methods and their limitations. We hope this serves as an introduction to in silico drug discovery.

  8. A Model of Cloud Based Application Environment for Software Testing

    CERN Document Server

    Vengattaraman, T; Baskaran, R

    2010-01-01

    Cloud computing is an emerging platform of service computing designed for swift and dynamic delivery of assured computing resources. Cloud computing provides Service-Level Agreements (SLAs) for guaranteed uptime availability, enabling convenient and on-demand network access to distributed and shared computing resources. Though the cloud computing paradigm holds a potential status in the field of distributed computing, cloud platforms have not yet attracted the attention of the majority of researchers and practitioners. More specifically, the researcher and practitioner community still has fragmented and imperfect knowledge of cloud computing principles and techniques. In this context, one of the primary motivations of the work presented in this paper is to reveal the versatile merits of the cloud computing paradigm, and hence the objective of this work is to bring out the remarkable significance of the cloud computing paradigm through an application environment. In this work, a cloud computing model for sof...

  9. Modelling the application of integrated photonic spectrographs to astronomy

    CERN Document Server

    Harris, R J

    2012-01-01

    One of the well-known problems of producing instruments for Extremely Large Telescopes is that their size (and hence cost) scales rapidly with telescope aperture. To try to break this relation, alternative new technologies have been proposed, such as the use of the Integrated Photonic Spectrograph (IPS). Due to its diffraction-limited nature, the IPS is claimed to defeat the harsh scaling law that applies to conventional instruments. The problem with astronomical applications is that, unlike conventional photonics, they are not usually fed by diffraction-limited sources. This means that, in order to retain throughput and spatial information, the IPS will require multiple Arrayed Waveguide Gratings (AWGs) and a photonic lantern. We investigate the implications of these extra components on the size of the instrument. We also investigate the potential size advantage of using an IPS as opposed to conventional monolithic optics. To do this, we have constructed toy models of IPS and conventional image sliced spectrographs to c...

  10. Urban design and modeling: applications and perspectives on GIS

    Directory of Open Access Journals (Sweden)

    Roberto Mingucci

    2013-05-01

    Full Text Available In recent years, GIS systems have evolved because of technological advancements that make possible the simultaneous management of large amounts of information. Interesting aspects of their application concern site documentation at the territorial scale, taking advantage of CAD/BIM systems, which usually work at the building scale instead. In this sense, surveying with sophisticated equipment such as laser scanners or UAV drones quickly captures data that can then be accessed through new “mobile” technologies operating in the context of web-based information systems. This paper aims to investigate the use and perspectives of geographic information technologies and of analysis and design tools meant for modeling at different scales, referring to the results of research experiences conducted at the University of Bologna.

  11. High speed railway track dynamics: models, algorithms and applications

    CERN Document Server

    Lei, Xiaoyan

    2017-01-01

    This book systematically summarizes the latest research findings on high-speed railway track dynamics, made by the author and his research team over the past decade. It explores cutting-edge issues concerning the basic theory of high-speed railways, covering the dynamic theories, models, algorithms and engineering applications of the high-speed train and track coupling system. Presenting original concepts, systematic theories and advanced algorithms, the book places great emphasis on the precision and completeness of its content. The chapters are interrelated yet largely self-contained, allowing readers to either read through the book as a whole or focus on specific topics. It also combines theories with practice to effectively introduce readers to the latest research findings and developments in high-speed railway track dynamics. It offers a valuable resource for researchers, postgraduates and engineers in the fields of civil engineering, transportation, highway & railway engineering.

  12. Autonomic Model for Self-Configuring C#.NET Applications

    CERN Document Server

    Bassil, Youssef

    2012-01-01

    With the advances in computational technologies over the last decade, large organizations have been investing in Information Technology to automate their internal processes to cut costs and efficiently support their business projects. However, this comes at a price. Business requirements always change. Likewise, IT systems constantly evolve as developers release new versions of them, which requires endless manual administrative work to customize and configure them, especially if they are being used in different contexts, by different types of users, and for different requirements. Autonomic computing was conceived to provide an answer to these ever-changing requirements. Essentially, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate all complex IT processes without human intervention. This paper proposes an autonomic model based on Venn diagrams and set theory for self-configuring C#.NET applications, namely the self-customization of their GUI, ev...

  13. Dynamic behaviours of mix-game model and its application

    Institute of Scientific and Technical Information of China (English)

    Gou Cheng-Ling

    2006-01-01

    In this paper a minority game (MG) is modified by adding into it some agents who play a majority game. Such a game is referred to as a mix-game. The highlight of this model is that the two groups of agents in the mix-game have different bounded abilities to deal with historical information and to count their own performance. Through simulations, it is found that the local volatilities change a lot when some agents who play the majority game are added into the MG, and that the change of local volatilities greatly depends on the different combinations of historical memories of the two groups. Furthermore, the underlying mechanisms for this finding are analyzed. An example application of the mix-game model is also given.
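
    For readers unfamiliar with the underlying mechanics, the sketch below implements the standard minority game that the mix-game modifies; all sizes (agents, memory, rounds) are illustrative. In the mix-game, a subset of agents would instead be rewarded for joining the majority (the sign flipped in the scoring line) and could be given a different memory length.

      import numpy as np

      rng = np.random.default_rng(0)
      N, m, S, T = 201, 3, 2, 2000        # odd number of agents, memory bits, strategies per agent, rounds
      P = 2 ** m                          # number of distinct histories
      strategies = rng.choice([-1, 1], size=(N, S, P))  # each strategy maps a history to an action
      scores = np.zeros((N, S))
      history = int(rng.integers(P))      # last m outcomes encoded as an integer
      attendance = []

      for t in range(T):
          best = scores.argmax(axis=1)                      # each agent plays its best-scoring strategy
          actions = strategies[np.arange(N), best, history]
          A = actions.sum()                                 # attendance; the minority side wins
          attendance.append(A)
          winner = -np.sign(A)                              # action the minority took (N odd, so A != 0)
          scores += (strategies[:, :, history] == winner)   # reward strategies that predicted the minority
          history = ((history << 1) | (winner > 0)) % P     # append the newest outcome bit

      print("volatility sigma^2/N =", np.var(attendance) / N)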

  14. Nanostructured energetic composites: synthesis, ignition/combustion modeling, and applications.

    Science.gov (United States)

    Zhou, Xiang; Torabi, Mohsen; Lu, Jian; Shen, Ruiqi; Zhang, Kaili

    2014-03-12

    Nanotechnology has stimulated revolutionary advances in many scientific and industrial fields, particularly in energetic materials. Powder mixing is the simplest and most traditional method to prepare nanoenergetic composites, and preliminary findings have shown that these composites perform more effectively than their micro- or macro-sized counterparts in terms of energy release, ignition, and combustion. Powder mixing technology represents only the minimum capability of nanotechnology to boost the development of energetic material research, and it has intrinsic limitations, namely, random distribution of fuel and oxidizer particles, inevitable fuel pre-oxidation, and non-intimate contact between reactants. As an alternative, nanostructured energetic composites can be prepared through a delicately designed process. These composites outperform powder-mixed nanocomposites in numerous ways; therefore, we comprehensively discuss the preparation strategies adopted for nanostructured energetic composites and the research achievements thus far in this review. The latest ignition and reaction models are briefly introduced. Finally, the broad promising applications of nanostructured energetic composites are highlighted.

  15. Modifications and Applications of the HERMES model: June - October 2010

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, J E

    2010-11-16

    The HERMES (High Explosive Response to MEchanical Stimulus) model has been developed to describe the response of energetic materials to low-velocity mechanical stimulus, referred to as HEVR (High Explosive Violent Response) or BVR (Burn to Violent Reaction). For tests performed with an HMX-based UK explosive, at sample sizes less than 200 g, the response was sometimes an explosion, but was not observed to be a detonation. The distinction between explosion and detonation can be important in assessing the effects of the HE response on nearby structures. A detonation proceeds as a supersonic shock wave supported by the release of energy that accompanies the transition from solid to high-pressure gas. For military high explosives, the shock wave velocity generally exceeds 7 km/s, and the pressure behind the shock wave generally exceeds 30 GPa. A kilogram of explosive would be converted to gas in 10 to 15 microseconds. An HEVR explosion proceeds much more slowly. Much of the explosive remains unreacted after the event. Peak pressures have been measured and calculated at less than 1 GPa, and the time for the portion of the solid that does react to form gas is about a millisecond. The explosion will, however, launch the confinement to a velocity that depends on the confinement mass, the mass of explosive converted, and the time required to form gas products. In many tests, the air blast signal and confinement velocity are comparable to those measured when an amount of explosive equal to that which is converted in an HEVR is deliberately detonated in the comparable confinement. The number of confinement fragments from an HEVR is much less than from the comparable detonation. The HERMES model comprises several submodels including a constitutive model for strength, a model for damage that includes the creation of porosity and surface area through fragmentation, an ignition model, an ignition front propagation model, and a model for burning after ignition. We have used HERMES

  16. Parametric Model for Astrophysical Proton-Proton Interactions and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Niklas [KTH Royal Institute of Technology, Stockholm (Sweden)

    2007-01-01

    Observations of gamma-rays have been made from celestial sources such as active galaxies, gamma-ray bursts and supernova remnants as well as the Galactic ridge. The study of gamma rays can provide information about production mechanisms and cosmic-ray acceleration. In the high-energy regime, one of the dominant mechanisms for gamma-ray production is the decay of neutral pions produced in interactions of ultra-relativistic cosmic-ray nuclei and interstellar matter. Presented here is a parametric model for calculations of inclusive cross sections and transverse momentum distributions for secondary particles (gamma rays, $e^\pm$, $\nu_e$, $\bar{\nu}_e$, $\nu_\mu$ and $\bar{\nu}_\mu$) produced in proton-proton interactions. This parametric model is based on the proton-proton interaction model proposed by Kamae et al.; it includes the diffraction dissociation process, Feynman-scaling violation and the logarithmically rising inelastic proton-proton cross section. To improve fidelity to experimental data at lower energies, two baryon resonance excitation processes were added: one representing the Δ(1232) and the other multiple resonances with masses around 1600 MeV/c². The model predicts the power-law spectral index for all secondary particles to be about 0.05 lower in absolute value than that of the incident proton, and their inclusive cross sections to be larger than those predicted by previous models based on the Feynman-scaling hypothesis. The applications of the presented model in astrophysics are plentiful. It has been implemented into the Galprop code to calculate the contribution due to pion decays in the Galactic plane. The model has also been used to estimate the cosmic-ray flux in the Large Magellanic Cloud based on HI, CO and gamma-ray observations. The transverse momentum distributions enable calculations when the proton distribution is anisotropic. It is shown that the gamma-ray spectrum and flux due to a

  17. Investigating inhomogeneous Szekeres models and their applications to precision cosmology

    Science.gov (United States)

    Peel, Austin Chandler

    Exact solutions of Einstein's field equations that can describe the evolution of complex structures in the universe provide complementary frameworks to standard perturbation theory in which to analyze cosmological and astrophysical phenomena. The flexibility and generality of the inhomogeneous and anisotropic Szekeres metric make it the best known exact solution to explore nonlinearities in the universe. We study applications of Szekeres models to precision cosmology, focusing on the influence of inhomogeneities in two primary contexts: the growth rate of cosmic structures and biases in distance determinations to remote sources. We first define and derive evolution equations for a Szekeres density contrast, which quantifies exact deviations from a smooth background cosmology. Solving these equations and comparing to the usual perturbative approach, we find that for models with the same matter content, the Szekeres growth rate is larger through the matter-dominated cosmic era. Including a cosmological constant, we consider exact global perturbations, as well as the evolution of a single extended structure surrounded by an almost homogeneous background. For the former, we use growth data to obtain a best fit Szekeres model and find that it can fit the data as well as the standard Lambda-Cold Dark Matter (LCDM) cosmological model but with different cosmological parameters. Next, to study effects of inhomogeneities on distance measures, we build an exact relativistic Swiss-cheese model of the universe, where a large number of non-symmetric and randomly placed Szekeres structures are embedded within a LCDM background. Solving the full relativistic propagation equations, light beams are traced through the model, where they traverse the inhomogeneous structures in a way that mimics the paths of real light beams in the universe. For beams crossing a single structure, their magnification or demagnification reflects primarily the net density encountered along the path

  18. Management model application at nested spatial levels in Mediterranean Basins

    Science.gov (United States)

    Lo Porto, Antonio; De Girolamo, Anna Maria; Froebrich, Jochen

    2014-05-01

    In the EU Water Framework Directive (WFD) implementation process, hydrological and water quality models can be powerful tools that allow alternative management strategies to be designed and tested, as well as their general feasibility and acceptance to be judged. Although several models have been developed in recent decades, their use in Mediterranean basins, where rivers have a temporary character, is quite complex, and there is limited information in the literature to facilitate model applications and result evaluations in this region. The high spatial variability that characterizes rainfall events, soil hydrological properties and land uses in Mediterranean basins makes it more difficult to simulate hydrology and water quality in this region than elsewhere. This variability also has several implications for simulation results, especially when simulations at different spatial scales are needed for watershed management purposes. It is well known that environmental processes operating at different spatial scales determine diverse impacts on water quality status (hydrological, chemical, ecological). Hence, the development of management strategies has to include both large-scale (watershed) and local-scale approaches (e.g. stream reach). This paper presents the results of a study which analyzes how spatial scale affects the results of hydrologic and water quality model simulations in a Mediterranean watershed. Several aspects involved in modeling hydrological and water quality processes at different spatial scales for river basin management are investigated, including model data requirements, data availability, model results and uncertainty. A hydrologic and water quality model (SWAT) was used to simulate hydrologic processes and water quality at different spatial scales in the Candelaro river basin (Puglia, S-E Italy) and to design management strategies to reach WFD goals as far as possible. When studying a basin to assess its current status

  19. Potential biodefense model applications for portable chlorine dioxide gas production.

    Science.gov (United States)

    Stubblefield, Jeannie M; Newsome, Anthony L

    2015-01-01

    The development of decontamination methods and strategies to address potential infectious disease outbreaks and bioterrorism events is pertinent to this nation's biodefense strategies and general biosecurity. Chlorine dioxide (ClO2) gas has a history of use as a decontamination agent in response to an act of bioterrorism. However, the more widespread use of ClO2 gas to meet current and unforeseen decontamination needs has been hampered because the gas is too unstable for shipment and must be prepared at the application site. Newer technology allows for easy, onsite gas generation without the need for dedicated equipment, electricity, water, or personnel with advanced training. In a laboratory model system, 2 unique applications (personal protective equipment [PPE] and animal skin) were investigated in the context of potential development of decontamination protocols. Such protocols could serve to reduce human exposure to bacteria in a decontamination response effort. Chlorine dioxide gas was capable of reducing (by 2-7 logs) vegetative and spore-forming bacteria and, in some instances, eliminating culturable bacteria from difficult-to-clean areas on PPE facepieces. The gas was effective in eliminating naturally occurring bacteria on animal skin and also on skin inoculated with Bacillus spores. The culturable bacteria, including Bacillus spores, were eliminated in a time- and dose-dependent manner. The results of these studies suggest that portable, easily used ClO2 gas generation systems have excellent potential for protocol development to contribute to biodefense strategies and decontamination responses to infectious disease outbreaks or other biothreat events.
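
    The time- and dose-dependent inactivation reported above is commonly summarized with a Chick-Watson type model; the sketch below is a toy illustration with invented rate constants, not the authors' fitted values.

      import numpy as np

      def log10_survivors(n0, k, conc, minutes, n=1.0):
          """Chick-Watson disinfection kinetics: log10(N) = log10(N0) - k * C**n * t."""
          return np.log10(n0) - k * conc ** n * minutes

      # Hypothetical: 1e7 CFU exposed to 2 mg/L ClO2 for 7 min with k = 0.5 L/(mg*min)
      print(log10_survivors(1e7, k=0.5, conc=2.0, minutes=7.0))  # 0.0, i.e. a 7-log reduction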

  20. The modelling of a capacitive microsensor for biosensing applications

    Science.gov (United States)

    Bezuidenhout, P. H.; Schoeman, J.; Joubert, T. H.

    2014-06-01

    Microsensing is a leading field in technology due to its wide application potential, not only in bio-engineering but in other fields as well. Microsensors have potentially low-cost manufacturing processes, while a single device type can have various uses, and this consequently helps with the ever-growing need to provide better health conditions in rural parts of the world. Capacitive biosensors detect a change in the permittivity (or dielectric constant) of a biological material, usually within a parallel-plate capacitor structure, which is often implemented with integrated electrodes of an inert metal such as gold or platinum on a microfluidic substrate, typically one with a high dielectric constant. Parasitic capacitance components exist in these capacitive sensors and have a large influence on the capacitive measurement; therefore, they should be considered in the development of sensitive and accurate sensing devices. An analytical model of a capacitive sensor device is discussed, which accounts for these parasitic factors. The model is validated with a laboratory device of fixed geometry, consisting of two parallel gold electrodes on an alumina (Al2O3) substrate mounted on a glass microscope slide, and with a windowed cover layer of poly-dimethyl-siloxane (PDMS). The thickness of the gold layer is 1 μm and the electrode spacing is 300 μm. The alumina substrate has a thickness of 200 μm, and its high relative permittivity of 11.5 is expected to be a significant contributor to the total device capacitance. The 155 μm thick PDMS layer is also expected to contribute substantially to the total device capacitance, since the relative permittivity of PDMS is 2.7. The wideband impedance analyser evaluation of the laboratory device gives a measurement result of 2 pF, which coincides with the model results, while the handheld RLC meter readout of 4 pF at a frequency of 10 kHz is acceptable within the measurement accuracy of the instrument. This validated model will
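
    The lumped picture in the abstract (an analyte capacitance in parallel with substrate and cover-layer parasitics) can be illustrated with crude parallel-plate estimates. The sketch below is purely schematic: real coplanar electrodes need fringing-field (conformal-mapping) corrections, which is why this toy sum does not reproduce the measured 2 pF, and the geometry factors here are placeholders, not the authors' analytical model.

      EPS0 = 8.854e-12  # vacuum permittivity, F/m

      def parallel_plate(eps_r, area_m2, gap_m):
          """Crude parallel-plate estimate C = eps0 * eps_r * A / d."""
          return EPS0 * eps_r * area_m2 / gap_m

      area = 1e-6 * 5e-3   # 1 um electrode thickness x an assumed 5 mm electrode length
      gap = 300e-6         # 300 um electrode spacing
      c_analyte = parallel_plate(80.0, area, gap)  # aqueous sample between the electrodes
      c_alumina = parallel_plate(11.5, area, gap)  # parasitic path through the substrate (very rough)
      c_pdms = parallel_plate(2.7, area, gap)      # parasitic path through the PDMS cover
      print("total ~", (c_analyte + c_alumina + c_pdms) * 1e15, "fF; parasitics add in parallel")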

  1. Mesoscale meteorological modelling for Hong Kong-application of the MC2 model

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper describes the set-up and application of a non-hydrostatic Canadian meteorological numerical model (MC2) for mesoscale simulations of wind field and other meteorological parameters over the complex terrain of Hong Kong. Results of the simulations of one case are presented and compared with the results of radiosonde and aircraft measurements. The model proved capable of predicting high-resolution, three-dimensional fields of wind and other meteorological parameters within the Hong Kong territory, using reasonable computer time and memory resources.

  2. X-ray ablation measurements and modeling for ICF applications

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Andrew Thomas [Univ. of California, Berkeley, CA (United States)

    1996-09-01

    X-ray ablation of material from the first wall and other components of an ICF (Inertial Confinement Fusion) chamber is a major threat to the laser final optics. Material condensing on these optics after a shot may cause damage with subsequent laser shots. To ensure the successful operation of the ICF facility, removal rates must be predicted accurately. The goal for this dissertation is to develop an experimentally validated x-ray response model, with particular application to the National Ignition Facility (NIF). Accurate knowledge of the x-ray and debris emissions from ICF targets is a critical first step in the process of predicting the performance of the target chamber system. A number of 1-D numerical simulations of NIF targets have been run to characterize target output in terms of energy, angular distribution, spectrum, and pulse shape. Scaling of output characteristics with variations of both target yield and hohlraum wall thickness are also described. Experiments have been conducted at the Nova laser on the effects of relevant x-ray fluences on various materials. The response was diagnosed using post-shot examinations of the surfaces with scanning electron microscope and atomic force microscope instruments. Judgments were made about the dominant removal mechanisms for each material. Measurements of removal depths were made to provide data for the modeling. The finite difference ablation code developed here (ABLATOR) combines the thermomechanical response of materials to x-rays with models of various removal mechanisms. The former aspect refers to energy deposition in such small characteristic depths (~ micron) that thermal conduction and hydrodynamic motion are significant effects on the nanosecond time scale. The material removal models use the resulting time histories of temperature and pressure-profiles, along with ancillary local conditions, to predict rates of surface vaporization and the onset of conditions that would lead to spallation.

  3. Animal models of osteogenesis imperfecta: applications in clinical research

    Directory of Open Access Journals (Sweden)

    Enderli TA

    2016-09-01

    Full Text Available Tanya A Enderli, Stephanie R Burtch, Jara N Templet, Alessandra Carriero Department of Biomedical Engineering, Florida Institute of Technology, Melbourne, FL, USA Abstract: Osteogenesis imperfecta (OI), commonly known as brittle bone disease, is a genetic disease characterized by extreme bone fragility and consequent skeletal deformities. This connective tissue disorder is caused by mutations affecting the quality and quantity of collagen, which in turn affect the overall mechanical integrity of the bone, increasing its vulnerability to fracture. Animal models of the disease have played a critical role in the understanding of the pathology and causes of OI and in the investigation of a broad range of clinical therapies for the disease. Currently, at least 20 animal models have been officially recognized to represent the phenotype and biochemistry of the 17 different types of OI in humans. These include mice, dogs, and fish. Here, we describe each of the animal models and the type of OI they represent, and present their application in clinical research for treatments of OI, such as drug therapies (ie, bisphosphonates and sclerostin) and mechanical therapies (ie, vibrational loading). In the future, different dosages and lengths of treatment need to be further investigated on different animal models of OI using potentially promising treatments, such as cellular and chaperone therapies. A combination of therapies may also offer a viable treatment regime to improve bone quality and reduce fragility in animals before being introduced into clinical trials for OI patients. Keywords: OI, brittle bone, clinical research, mouse, dog, zebrafish

  5. Assembly interruptability robustness model with applications to Space Station Freedom

    Science.gov (United States)

    Wade, James William

    1991-02-01

    Interruptability robustness of a construction project together with its assembly sequence may be measured by calculating the probability of its survival and successful completion in the face of unplanned interruptions of the assembly process. Such an interruption may jeopardize the survival of the structure being assembled, the survival of the support equipment, and/or the safety of the members of the construction crew, depending upon the stage in the assembly sequence when the interruption occurs. The interruption may be due to a number of factors such as machinery breakdowns, environmental damage, or worker emergency illness or injury. Each source of interruption has a probability of occurring and adds an associated probability of loss, schedule delay, and cost to the project. Several options may exist for reducing the consequences of an interruption at a given point in the assembly sequence, including altering the assembly sequence, adding extra components or equipment as interruptability 'insurance', and increasing the capability of support facilities. Each option may provide a different overall performance of the project as it relates to success, assembly time, and project cost. The Interruptability Robustness Model, devised here, provides a method for quantifying the overall interruptability robustness of a project design and its assembly sequence. In addition, it identifies susceptibility to interruptions at all points within the assembly sequence. The model is applied to the present problem of quantifying and improving interruptability robustness during the construction of Space Station Freedom. This application was used as a touchstone for devising the Interruptability Robustness Model. However, the model may be utilized to assist in the analysis of interruptability robustness for other space-related construction projects such as the lunar base and orbital assembly of the manned Mars
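
    The bookkeeping behind such a measure can be illustrated with a small calculation: each stage contributes a probability of interruption and a conditional probability that the interruption is fatal to the project, and survival probabilities multiply across the sequence. The stage names and numbers below are invented for illustration and are not drawn from the dissertation.

      # Toy interruptability-robustness calculation over an assembly sequence.
      stages = [
          # (name, P(interruption during stage), P(project lost | interruption))
          ("truss segment",   0.02, 0.10),
          ("solar array",     0.03, 0.30),
          ("pressurized lab", 0.01, 0.50),
      ]

      p_complete = 1.0
      for name, p_int, p_loss in stages:
          p_survive = 1.0 - p_int * p_loss        # stage survives unless a fatal interruption occurs
          p_complete *= p_survive
          print(f"{name:15s} P(survive stage) = {p_survive:.4f}")

      print(f"overall P(successful completion) = {p_complete:.4f}")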

  6. CFD Modeling in Development of Renewable Energy Applications

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Full Text Available
    Chapter 1: A Multi-fluid Model to Simulate Heat and Mass Transfer in a PEM Fuel Cell. Torsten Berning, Madeleine Odgaard, Søren K. Kær
    Chapter 2: CFD Modeling of a Planar Solid Oxide Fuel Cell (SOFC) for Clean Power Generation. Meng Ni
    Chapter 3: Hydrodynamics and Hydropower in the New Paradigm for a Sustainable Engineering. Helena M. Ramos, Petra A. López-Jiménez
    Chapter 4: Opportunities for CFD in Ejector Solar Cooling. M. Dennis
    Chapter 5: Three Dimensional Modelling of Flow Field Around a Horizontal Axis Wind Turbine (HAWT). Chaouki Ghenai, Armen Sargsyan, Isam Janajreh
    Chapter 6: Scaling Rules for Hydrodynamics and Heat Transfer in Jetting Fluidized-Bed Biomass Gasifiers. K. Zhang, J. Chang, P. Pei, H. Chen, Y. Yang
    Chapter 7: Investigation of Low Reynolds Number Unsteady Flow around Airfoils in Pitching, Plunging and Flapping Motions. M.R. Amiralaei, H. Alighanbari, S.M. Hashemi
    Chapter 8: Justification of Computational Fluid Dynamics Simulation for Flat Plate Solar Energy Collector. Mohamed Selmi, Mohammed J. Al-Khawaja, Abdulhamid Marafia
    Chapter 9: Comparative Performance of a 3-Bladed Airfoil Chord H-Darrieus and a 3-Bladed Straight Chord H-Darrieus Turbines using CFD. R. Gupta, Agnimitra Biswas
    Chapter 10: Computational Fluid Dynamics for PEM Fuel Cell Modelling. A. Iranzo, F. Rosa
    Chapter 11: Analysis of the Performance of PEM Fuel Cells: Tutorial of Major Functional and Constructive Characteristics using CFD Analysis. P.J. Costa Branco, J.A. Dente
    Chapter 12: Application of Techniques of Computational Fluid Dynamics in the Design of Bipolar Plates for PEM Fuel Cells. A.P. Manso, F.F. Marzo, J. Barranco, M. Garmendia Mujika.

  7. FEM application for modelling of PVD coatings properties

    Directory of Open Access Journals (Sweden)

    A. Śliwa

    2010-07-01

    Full Text Available Purpose: The general topic of this paper is the problem of determining the internal stresses of composite tool materials with the use of the finite element method (FEM). The chemical composition of the investigated materials' core corresponds to M2 high-speed steel, reinforced with WC- and TiC-type hard carbide phases, with growing portions of these phases in the outward direction from the core to the surface. The composed material was sintered, heat treated, and deposited with (Ti,Al)N or Ti(C,N) coatings. Design/methodology/approach: Modelling of stresses was performed using the finite element method in the ANSYS environment, and the experimental values of stresses were determined from X-ray diffraction patterns. The computer simulation results were compared with the experimental results. Findings: Computer-aided numerical analysis makes it possible to select optimal parameters for coating deposition in the PVD process by determining the stresses in the coatings with the finite element method in the ANSYS software. Research limitations/implications: It was confirmed that using the finite element method to model the stresses occurring in advanced composite materials can reduce investigation costs. For this purpose, a simplified model of the composite materials was used, divided into zones with established physical and mechanical properties. Results obtained in this way are satisfactory and differ only slightly from results obtained experimentally. Originality/value: Nowadays computer simulation is very popular, and it is based on the finite element method, which makes it possible to better understand the interdependence between process parameters and to choose an optimal solution. The availability of ever-faster computers and of new software makes possible the creation of more precise models and more adequate ones to

  8. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification.

    Science.gov (United States)

    Sager, Jennifer E; Yu, Jingjing; Ragueneau-Majlessi, Isabelle; Isoherranen, Nina

    2015-11-01

    Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms "PBPK" and "physiologically based pharmacokinetic model" to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines.
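
    For orientation, the flavor of a PBPK model can be conveyed by a minimal perfusion-limited sketch: compartments are organs, flows are blood flows, and clearance is assigned to the eliminating organ. The parameter values below are generic illustrations, not any published model from the review.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Two-compartment perfusion-limited PBPK sketch: plasma + liver, hepatic clearance.
      Q_h, V_p, V_h = 90.0, 3.0, 1.8      # hepatic blood flow (L/h), plasma and liver volumes (L)
      Kp, CL_int = 2.0, 30.0              # liver:plasma partition coefficient, intrinsic clearance (L/h)

      def rhs(t, y):
          c_p, c_h = y
          dcp = Q_h * (c_h / Kp - c_p) / V_p                          # exchange with the liver
          dch = (Q_h * (c_p - c_h / Kp) - CL_int * c_h / Kp) / V_h    # inflow, outflow, metabolism
          return [dcp, dch]

      sol = solve_ivp(rhs, (0.0, 24.0), [100.0 / V_p, 0.0])  # 100 mg IV bolus into plasma
      print(f"plasma concentration at 24 h: {sol.y[0, -1]:.3f} mg/L")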

  9. Investigation of 1H NMR Profile of Vegetarian Human Urine Using ANOVA-based Multi-factor Analysis

    Institute of Scientific and Technical Information of China (English)

    董继扬; 邓伶莉; CHENG Kian-Kai; GRIFFIN Julian L.; 陈忠

    2011-01-01

    In this study, a technique combining analysis of variance (ANOVA) and partial least squares-discriminant analysis (PLS-DA) was used to compare the urine 1H NMR spectra of healthy people from vegetarian and omnivorous populations. In ANOVA/PLS-DA, the variation in the data is first decomposed into different variance components, each containing a single source of variation; after interfering factors are filtered out, each of the resulting variance components is then modeled using PLS-DA. The experimental results showed that ANOVA/PLS-DA is efficient in disentangling the effects of diet and gender on the metabolic profile, and that the method can be used to extract biologically relevant information for result interpretation.
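
    This decomposition (known in the metabolomics literature as ASCA or ANOVA-PLS) amounts to removing the grand mean, splitting the centered data into factor-level mean matrices, and running PLS-DA on the component of interest plus the residual. The sketch below uses randomly generated "spectra" and a balanced design purely for illustration; it is not the authors' implementation.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def level_means(Xc, labels):
          """Per-sample matrix of factor-level means of the centered data Xc."""
          out = np.zeros_like(Xc)
          for lev in np.unique(labels):
              mask = labels == lev
              out[mask] = Xc[mask].mean(axis=0)
          return out

      rng = np.random.default_rng(1)
      X = rng.normal(size=(40, 200))                # 40 urine spectra x 200 binned regions (synthetic)
      diet = np.repeat(["vegetarian", "omnivore"], 20)
      gender = np.tile(["f", "m"], 20)

      Xc = X - X.mean(axis=0)                       # remove the grand mean
      X_diet = level_means(Xc, diet)                # diet effect matrix
      X_gender = level_means(Xc - X_diet, gender)   # gender effect after removing diet
      X_resid = Xc - X_diet - X_gender              # individual variation

      y = (diet == "vegetarian").astype(float)
      pls = PLSRegression(n_components=2).fit(X_diet + X_resid, y)  # PLS-DA on the diet component
      print("R^2 of discriminant fit:", pls.score(X_diet + X_resid, y))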

  10. Modeling survival: application of the Andersen-Gill model to Yellowstone grizzly bears

    Science.gov (United States)

    Johnson, Christopher J.; Boyce, Mark S.; Schwartz, Charles C.; Haroldson, Mark A.

    2004-01-01

     Wildlife ecologists often use the Kaplan-Meier procedure or Cox proportional hazards model to estimate survival rates, distributions, and magnitude of risk factors. The Andersen-Gill formulation (A-G) of the Cox proportional hazards model has seen limited application to mark-resight data but has a number of advantages, including the ability to accommodate left-censored data, time-varying covariates, multiple events, and discontinuous intervals of risks. We introduce the A-G model including structure of data, interpretation of results, and assessment of assumptions. We then apply the model to 22 years of radiotelemetry data for grizzly bears (Ursus arctos) of the Greater Yellowstone Grizzly Bear Recovery Zone in Montana, Idaho, and Wyoming, USA. We used Akaike's Information Criterion (AICc) and multi-model inference to assess a number of potentially useful predictive models relative to explanatory covariates for demography, human disturbance, and habitat. Using the most parsimonious models, we generated risk ratios, hypothetical survival curves, and a map of the spatial distribution of high-risk areas across the recovery zone. Our results were in agreement with past studies of mortality factors for Yellowstone grizzly bears. Holding other covariates constant, mortality was highest for bears that were subjected to repeated management actions and inhabited areas with high road densities outside Yellowstone National Park. Hazard models developed with covariates descriptive of foraging habitats were not the most parsimonious, but they suggested that high-elevation areas offered lower risks of mortality when compared to agricultural areas.
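
    In counting-process form, each animal contributes one row per (start, stop] interval at risk, with covariates free to change between intervals and multiple intervals per animal. A hedged sketch using the lifelines package is shown below; the column names and values are invented, not the grizzly bear data.

      import pandas as pd
      from lifelines import CoxTimeVaryingFitter

      # One row per interval at risk; covariates may differ across an animal's intervals.
      df = pd.DataFrame({
          "bear_id":  [1,   1,   2,   2,   3],
          "start":    [0,   120, 0,   200, 0],
          "stop":     [120, 300, 200, 450, 90],
          "road_km":  [0.4, 0.7, 0.1, 0.1, 1.2],   # road density covariate (illustrative)
          "n_mgmt":   [0,   2,   0,   1,   3],     # cumulative management actions
          "event":    [0,   1,   0,   0,   1],     # 1 = mortality at the end of the interval
      })

      ctv = CoxTimeVaryingFitter()
      ctv.fit(df, id_col="bear_id", event_col="event", start_col="start", stop_col="stop")
      ctv.print_summary()   # exp(coef) gives the risk ratio for each covariate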

  11. A GUIDED SWAT MODEL APPLICATION ON SEDIMENT YIELD MODELING IN PANGANI RIVER BASIN: LESSONS LEARNT

    Directory of Open Access Journals (Sweden)

    Preksedis Marco Ndomba

    2008-12-01

    Full Text Available The overall objective of this paper is to report on the lessons learnt from applying the Soil and Water Assessment Tool (SWAT) in a well guided sediment yield modelling study. The study area is the upstream part of the Pangani River Basin (PRB), the Nyumba Ya Mungu (NYM) reservoir catchment, located in the north-eastern part of Tanzania. It should be noted that previous modeling exercises in the region applied SWAT with the preassumption that inter-rill or sheet erosion was the dominant erosion type. In contrast, in this study the SWAT model application was guided by results of analysis of high temporal resolution sediment flow data and hydro-meteorological data. The runoff component of the SWAT model was calibrated on six years (i.e., 1977–1982) of historical daily streamflow data. The sediment component of the model was calibrated using one year (1977–1978) of daily sediment loads estimated from a rating curve based on a one-hydrological-year sampling programme (between March and November 2005). Long-term simulation results of the SWAT model over 37 years (i.e., 1969–2005) were validated against downstream NYM reservoir sediment accumulation information. The SWAT model captured 56 percent of the variance (CE) and underestimated the observed daily sediment loads by 0.9 percent according to Total Mass Control (TMC) performance indices during a normal wet hydrological year, i.e., between November 1, 1977 and October 31, 1978, as the calibration period. The SWAT model predicted the long-term sediment catchment yield satisfactorily, with a relative error of 2.6 percent. Also, the model identified erosion sources spatially and replicated some erosion processes as determined in other studies and field observations in the PRB. This result suggests that for catchments where sheet erosion is dominant the SWAT model may substitute for the sediment rating curve. However, the SWAT model could not capture the dynamics of sediment load delivery in some seasons to the catchment outlet.
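
    The CE and TMC indices quoted above correspond to the Nash-Sutcliffe coefficient of efficiency and a total-mass relative error; a minimal sketch of both, with made-up daily sediment loads, is given below.

      import numpy as np

      def nash_sutcliffe(obs, sim):
          """Coefficient of efficiency: CE = 1 - SSE / sum of squared deviations from the observed mean."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def total_mass_error(obs, sim):
          """Relative error (percent) in the total simulated load, a TMC-style check."""
          return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

      obs = [120.0, 300.0, 80.0, 40.0]   # observed daily loads, t/day (invented)
      sim = [100.0, 310.0, 90.0, 45.0]   # simulated daily loads
      print(nash_sutcliffe(obs, sim), total_mass_error(obs, sim))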

  13. Optimization of friction welding by Taguchi and ANOVA methods on commercial aluminium tube to Al 2025 tube plate with backing block using an external tool

    Energy Technology Data Exchange (ETDEWEB)

    Kanna, S.; Kumaraswamidhs, L. A. [Indian Institute of Technology, Dhanbad (India); Kumaran, S. Senthil [RVS School of Engineering and Technology, Dindigul (India)

    2016-05-15

     The aim of the present work is to optimize friction welding of tube to tube plate using an external tool (FWTPET), with clearance fit, for a commercial aluminium tube and an Al 2025 tube plate. Conventional friction welding is suitable only for symmetrical joints, either tube to tube or rod to rod; in this research, with the help of an external tool, unsymmetrical tube-to-tube-plate joints were welded as well. The welding parameters, namely tool rotating speed (rpm), projection of the tube (mm) and depth of cut (mm), were arranged according to a Taguchi L9 orthogonal array. Two conditions were examined: condition 1, a flat plate with a plain tube without holes [WOH] on the circumference of its surface, and condition 2, a flat plate with a plain tube with holes [WH] on the circumference of its surface. The Taguchi L9 orthogonal array was used to find the most significant control factors for joint strength, and the most influential process parameter was determined using statistical analysis of variance (ANOVA). The results for the two conditions were then compared by means of percentage of contribution and regression analysis. A general regression equation was formulated, and the optimum strength it predicts was validated by a confirmation test. The optimal welded joint strengths for the tube without holes and the tube with holes were observed to be 319.485 MPa and 264.825 MPa, respectively.
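
     As a sketch of how an L9 design feeds the percent-of-contribution calculation, the fragment below builds a standard L9(3^3) array, converts joint strengths to larger-the-better S/N ratios, and apportions the sum of squares by factor; all response values are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# L9(3^3) orthogonal array: 9 trials, 3 factors at 3 levels (coded 0..2).
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Hypothetical joint strengths (MPa), one response per trial.
strength = np.array([251., 287., 305., 262., 298., 310., 270., 295., 319.])

# Larger-the-better S/N ratio; with one response per trial this is
# SN = -10*log10(1/y^2).
sn = -10.0 * np.log10(1.0 / strength ** 2)

grand = sn.mean()
ss_total = np.sum((sn - grand) ** 2)
for j, name in enumerate(["speed (rpm)", "projection (mm)", "depth (mm)"]):
    level_means = np.array([sn[L9[:, j] == lev].mean() for lev in range(3)])
    ss_factor = 3 * np.sum((level_means - grand) ** 2)  # 3 trials per level
    print(f"{name:16s} contribution: {100 * ss_factor / ss_total:5.1f} %")
```

     Because the three factors use 6 of the 8 available degrees of freedom, the contributions do not sum to 100 percent; the remainder is the error term.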

  14. 3-dimensional modeling of transcranial magnetic stimulation: Design and application

    Science.gov (United States)

    Salinas, Felipe Santiago

     Over the past three decades, transcranial magnetic stimulation (TMS) has emerged as an effective tool for many research, diagnostic and therapeutic applications in humans. TMS delivers highly localized brain stimulation via non-invasive, externally applied magnetic fields. This non-invasive, painless technique provides researchers and clinicians a unique tool capable of stimulating both the central and peripheral nervous systems. However, a complete analysis of the macroscopic electric fields produced by TMS has not yet been performed. In this dissertation, we present a thorough examination of the total electric field induced by TMS in air and in a realistic head model with clinically relevant coil poses. In the first chapter, a detailed account of TMS coil wiring geometry was shown to provide significant improvements in the accuracy of primary E-field calculations. Three-dimensional models which accounted for the TMS coil's wire width, height, shape and number of turns clearly improved the fit of calculated-to-measured E-fields near the coil body. Detailed primary E-field models were accurate up to the surface of the coil body (within 0.5% of measured values), whereas simple models were often inadequate (up to 32% different from measured values). In the second chapter, we addressed the importance of the secondary E-field created by surface charge accumulation during TMS using the boundary element method (BEM). 3-D models were developed using simple head geometries in order to test the model and compare it with measured values. The effects of tissue geometry, size and conductivity were also investigated. Finally, a realistic head model was used to assess the effect of multiple surfaces on the total E-field. We found that secondary E-fields have the greatest impact at areas in close proximity to each tissue layer. Throughout the head, the secondary E-field magnitudes were predominantly between 25% and 45% of the primary E-field's magnitude. The direction of the secondary E…
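
     The primary E-field discussed in the first chapter can be approximated by discretizing the coil windings into straight segments and evaluating E1 = -(mu0/4pi) * dI/dt * sum(dl/|r - r'|), i.e. E1 = -dA/dt for the magnetic vector potential of the winding. A minimal numpy sketch under that assumption follows; the single circular winding, field points and current slew rate are hypothetical, not the dissertation's coil models.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def primary_E(field_pts, coil_pts, dI_dt):
    """Primary E-field E1 = -(mu0/4pi) * dI/dt * sum(dl / |r - r'|),
    with the coil given as an ordered closed polyline of points."""
    seg_mid = 0.5 * (coil_pts[:-1] + coil_pts[1:])   # segment midpoints
    dl = coil_pts[1:] - coil_pts[:-1]                # segment vectors
    E = np.zeros_like(field_pts)
    for r, out in zip(field_pts, E):
        dist = np.linalg.norm(r - seg_mid, axis=1)[:, None]
        out += -(MU0 / (4 * np.pi)) * dI_dt * np.sum(dl / dist, axis=0)
    return E

# Hypothetical single circular winding: 35 mm radius, dI/dt = 100 A/us.
theta = np.linspace(0, 2 * np.pi, 201)
coil = np.c_[0.035 * np.cos(theta), 0.035 * np.sin(theta),
             np.zeros_like(theta)]
# On the coil axis the tangential contributions cancel, so E ~ 0 at the
# first point; the off-axis point sees a nonzero circulating field.
pts = np.array([[0.0, 0.0, 0.01], [0.02, 0.0, 0.01]])
print(primary_E(pts, coil, 1e8))  # V/m
```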

  15. Nonrelativistic anti-Snyder model and some applications

    Science.gov (United States)

    Ching, C. L.; Yeo, C. X.; Ng, W. K.

    2017-01-01

     In this paper, we examine the (2+1)-dimensional Dirac equation in a homogeneous magnetic field under the nonrelativistic anti-Snyder model, which is relevant to doubly/deformed special relativity (DSR) since it exhibits an intrinsic upper bound on the momentum of free particles. After setting up the formalism, exact eigensolutions are derived in the momentum space representation and expressed in terms of finite orthogonal Romanovski polynomials. There is a finite maximum number of allowable bound states n_max due to the orthogonality of the polynomials, and the maximum energy is truncated at n_max. As in the minimal length case, the degeneracy of the Dirac-Landau levels in the anti-Snyder model is modified, and there are states that do not exist in the ordinary quantum mechanics limit β → 0. By taking m → 0, we explore the motion of effectively massless charged fermions in graphene-like materials and obtain a maximum bound on the deformation parameter, β_max. Furthermore, we consider the modified energy dispersion relations and their application in describing the behavior of neutrino oscillations under modified commutation relations.
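
     For orientation, a commonly used one-dimensional form of the anti-Snyder-deformed commutator (our notation and sign conventions; the paper's may differ) makes the momentum bound mentioned in the abstract explicit:

```latex
% Schematic 1-D anti-Snyder commutator (assumed form; conventions vary):
\[
  [\hat{x}, \hat{p}] \;=\; i\hbar\left(1 - \beta\,\hat{p}^{2}\right),
  \qquad \beta > 0,
  \qquad\Longrightarrow\qquad
  |p| \;\le\; \beta^{-1/2}.
\]
% Positivity of (1 - beta p^2) enforces an intrinsic maximum momentum;
% ordinary quantum mechanics is recovered in the limit beta -> 0.
```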

  16. Application of Stochastic Partial Differential Equations to Reservoir Property Modelling

    KAUST Repository

    Potsepaev, R.

    2010-09-06

     Existing algorithms of geostatistics for stochastic modelling of reservoir parameters require a mapping (the 'uvt-transform') into the parametric space and reconstruction of a stratigraphic co-ordinate system. The parametric space can be considered to represent a pre-deformed and pre-faulted depositional environment. Existing approximations of this mapping in many cases cause significant distortions to the correlation distances. In this work we propose a coordinate-free approach for modelling stochastic textures through the application of stochastic partial differential equations. By avoiding the construction of a uvt-transform and stratigraphic coordinates, one can generate realizations directly in the physical space in the presence of deformations and faults. In particular, the solution of the modified Helmholtz equation driven by Gaussian white noise is a zero-mean Gaussian stationary random field with exponential correlation function (in 3-D). This equation can be used to generate realizations in the parametric space. In order to sample in the physical space, we introduce a stochastic elliptic PDE with tensor coefficients, where the tensor is related to the correlation anisotropy and its variation in the physical space.
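
     A minimal finite-difference sketch of the construction is given below: solving (kappa^2 - Laplacian) x = W with Gaussian white noise W yields a stationary Gaussian field whose correlation length scales like 1/kappa. The sketch is 2-D for brevity (in 3-D, as the abstract notes, the same SPDE gives an exponential correlation function; in 2-D the field is slightly smoother); grid size, kappa and the boundary handling are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import spsolve

def helmholtz_field(n, kappa, h=1.0, seed=0):
    """Sample (kappa^2 - Laplacian) x = W on an n x n grid (2-D sketch)."""
    # 1-D second-difference operator, Dirichlet-like boundaries.
    lap1d = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    I = identity(n)
    lap2d = kron(lap1d, I) + kron(I, lap1d)          # 2-D Laplacian
    A = (kappa**2) * identity(n * n) - lap2d         # modified Helmholtz
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n * n) / h               # discretized white noise
    return spsolve(A.tocsc(), w).reshape(n, n)

field = helmholtz_field(n=128, kappa=0.3)  # correlation length ~ 1/kappa
print(field.shape, field.std())
```

     Replacing the scalar kappa^2 and the isotropic Laplacian with tensor coefficients is what lets realizations be drawn directly in the deformed, faulted physical space, as the abstract describes.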

  17. Application of a theoretical model to evaluate COPD disease management

    Directory of Open Access Journals (Sweden)

    Asin Javier D

    2010-03-01

     Full Text Available Abstract Background Disease management programmes are heterogeneous in nature and often lack a theoretical basis. An evaluation model has been developed in which theoretically driven inquiries link disease management interventions to outcomes. The aim of this study is to methodically evaluate the impact of a disease management programme for patients with chronic obstructive pulmonary disease (COPD) on process, intermediate and final outcomes of care in a general practice setting. Methods A quasi-experimental study was performed with 12 months of follow-up of 189 COPD patients in primary care in the Netherlands. The programme included patient education, protocolised assessment and treatment of COPD, structured follow-up, and coordination by practice nurses at 3, 6 and 12 months. Data on intermediate outcomes (knowledge, psychosocial mediators, self-efficacy and behaviour) and final outcomes (dyspnoea; quality of life, measured by the CRQ and CCQ; and patient experiences) were obtained from questionnaires and electronic registries. Results Implementation of the programme was associated with significant improvements in dyspnoea (p …). Conclusions The application of a theory-driven model enhances the design and evaluation of disease management programmes aimed at improving health outcomes. This study supports the notion that a theoretical approach strengthens the evaluation designs of complex interventions. Moreover, it provides prudent evidence that the implementation of COPD disease management programmes can positively influence outcomes of care.

  18. Mathematical models for foam-diverted acidizing and their applications

    Institute of Scientific and Technical Information of China (English)

    Li Songyan; Li Zhaomin; Lin Riyi

    2008-01-01

     Foam diversion can effectively solve the problem of uneven distribution of acid across layers of different permeabilities during matrix acidizing. Based on gas trapping theory and the mass conservation equation, mathematical models were developed for foam-diverted acidizing, which can be achieved by a foam slug followed by acid injection or by continuous injection of foamed acid. A design method for foam-diverted acidizing is also given. The mathematical models were solved by a computer program. Computed results show that the total formation skin factor, wellhead pressure and bottomhole pressure increase with foam injection but decrease with acid injection. The volume flow rate in a high-permeability layer decreases, while that in a low-permeability layer increases, thus diverting acid to the low-permeability layer from the high-permeability layer. Under the same formation conditions, foamed acid treatment required a longer operation time and higher wellhead and bottomhole pressures. Field application shows that a foam slug can effectively block high-permeability layers and noticeably improve the intake profile.
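
     The diversion mechanism can be illustrated with steady radial Darcy inflow per layer, q_i proportional to k_i h_i / (ln(r_e/r_w) + s_i), treating foam blockage as an added skin in the high-permeability layer. This is only a back-of-the-envelope sketch with hypothetical layer properties, not the paper's coupled gas-trapping/mass-conservation model:

```python
import numpy as np

def layer_rates(k, h, skin, dp=5e6, mu=1e-3, re=200.0, rw=0.1):
    """Steady radial Darcy inflow per layer (SI units):
    q_i = 2*pi*k_i*h_i*dp / (mu * (ln(re/rw) + s_i))."""
    k, h, skin = map(np.asarray, (k, h, skin))
    return 2 * np.pi * k * h * dp / (mu * (np.log(re / rw) + skin))

k = np.array([500e-15, 50e-15])   # high-perm vs. low-perm layer (m^2)
h = np.array([10.0, 10.0])        # layer thickness (m)

for s_foam in (0.0, 20.0, 100.0):  # foam-induced skin, high-perm layer only
    q = layer_rates(k, h, skin=[s_foam, 0.0])
    print(f"foam skin {s_foam:5.1f}: high-perm share = {q[0] / q.sum():.2f}")
```

     As the foam-induced skin grows, the high-permeability layer's share of the total rate falls, which is exactly the intake-profile improvement the field application reports.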

  19. Risk management modeling and its application in maritime safety

    Institute of Scientific and Technical Information of China (English)

    QIN Ting-rong; CHEN Wei-jiong; ZENG Xiang-kun

    2008-01-01

     Quantified risk assessment (QRA) requires putting risk theory on a mathematical footing. However, attention has been paid almost exclusively to applications of assessment methods, which has led to neglect of research into fundamental theories, such as the relationships among risk, safety, danger, and so on. To address this problem, fundamental theoretical relationships concerning risk and risk management were first analyzed mathematically for this paper and then illustrated with charts. Second, man-machine-environment-management (MMEM) theory was introduced into risk theory to analyze some properties of risk. On this basis, a three-dimensional model of risk management was established, comprising a goal dimension, a management dimension and an operation dimension. This goal-management-operation (GMO) model is explained, with emphasis on the risk flowchart (the operation dimension), which lays the groundwork for further study of risk management and of qualitative and quantitative assessment. Next, the relationship between Formal Safety Assessment (FSA) and risk management was examined; this revealed that the FSA method, which the International Maritime Organization (IMO) is actively promoting, derives from risk management theory. Finally, conclusions were drawn about how to apply this risk management method to concrete fields efficiently and conveniently, as well as about areas where further research is required.

  20. Applications of products obtained from digital terrain models

    Energy Technology Data Exchange (ETDEWEB)

    Torres, M.A.; Castro, J.A.D. [Instituto Geografico Agustin Codazzi, Santafe de Bogota (Colombia)

    1996-11-01

     The representation of relief is a fundamental component of the cartographic process. A wide range of techniques for representing the topographic variations of the earth's surface on a two-dimensional surface have been developed; these vary both in their symbolic content and in their degree of realism. Although Digital Terrain Models have been in use since the late 1960s, their applications and uses in different technical and engineering areas have recently multiplied, especially in Geographic Information Systems, where they are needed for data georeferencing. With this work we aim to contribute, in a technical, efficient and economical manner, to the diverse processes that make use of the DTM, and especially to present the possibility of capturing data with a photogrammetric method as an alternative to the conventional method. The quality of the data source; the accuracy, the resolution and the ability to define small details; and the hardware, methods and procedures used to capture and process the data are the most important parameters to consider when generating Digital Terrain Models. 5 refs., 2 figs.
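
     As an example of a DTM-derived product, slope can be computed from an elevation grid with Horn's 3x3 finite-difference kernel, the method used by many GIS packages. A numpy sketch with a hypothetical 4x4 DEM (values and cell size are invented for illustration):

```python
import numpy as np

def slope_deg(dem, cell=30.0):
    """Slope in degrees from a DEM grid using Horn's (1981) 3x3 kernel."""
    p = np.pad(dem, 1, mode="edge")  # replicate edges for border cells
    # Weighted differences across the 3x3 neighbourhood in x and y.
    dz_dx = ((p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]) -
             (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2])) / (8.0 * cell)
    dz_dy = ((p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]) -
             (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:])) / (8.0 * cell)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Hypothetical 4x4 elevation grid (m) on 30 m cells.
dem = np.array([[100., 102., 105., 109.],
                [101., 103., 106., 110.],
                [102., 104., 107., 111.],
                [103., 105., 108., 112.]])
print(slope_deg(dem).round(1))  # slope in degrees per cell
```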