WorldWideScience

Sample records for anova models application

  1. On testing variance components in ANOVA models

    OpenAIRE

    Hartung, Joachim; Knapp, Guido

    2000-01-01

    In this paper we derive asymptotic χ²-tests for general linear hypotheses on variance components using repeated variance components models. In two examples, the two-way nested classification model and the two-way crossed classification model with interaction, we explicitly investigate the properties of the asymptotic tests for small sample sizes.

  2. An introduction to (smoothing spline) ANOVA models in RKHS with examples in geographical data, medicine, atmospheric science and machine learning

    OpenAIRE

    Wahba, Grace

    2004-01-01

    Smoothing Spline ANOVA (SS-ANOVA) models in reproducing kernel Hilbert spaces (RKHS) provide a very general framework for data analysis, modeling and learning in a variety of fields. Discrete, noisy scattered, direct and indirect observations can be accommodated with multiple inputs and multiple possibly correlated outputs and a variety of meaningful structures. The purpose of this paper is to give a brief overview of the approach and describe and contrast a series of applications, while noti...

  3. Predicting Reading Proficiency in Multilevel Models: An ANOVA-Like Approach of Interpreting Effects

    Science.gov (United States)

    Subedi, Bidya Raj

    2007-01-01

    This study used an analysis of variance (ANOVA)-like approach to predict reading proficiency with student, teacher, and school-level predictors based on a 3-level hierarchical generalized linear model (HGLM) analysis. National Assessment of Educational Progress (NAEP) 2000 reading data for 4th graders sampled from 46 states of the United States of…

  4. Biomarker Detection in Association Studies: Modeling SNPs Simultaneously via Logistic ANOVA

    KAUST Repository

    Jung, Yoonsuh

    2014-10-02

    In genome-wide association studies, the primary task is to detect biomarkers in the form of Single Nucleotide Polymorphisms (SNPs) that have nontrivial associations with a disease phenotype and some other important clinical/environmental factors. However, the extremely large number of SNPs compared with the sample size inhibits application of classical methods such as multiple logistic regression. Currently the most commonly used approach is still to analyze one SNP at a time. In this paper, we propose to consider the genotypes of the SNPs simultaneously via a logistic analysis of variance (ANOVA) model, which expresses the logit-transformed mean of SNP genotypes as the summation of the SNP effects, effects of the disease phenotype and/or other clinical variables, and the interaction effects. We use a reduced-rank representation of the interaction-effect matrix for dimensionality reduction, and employ the L1-penalty in a penalized likelihood framework to filter out the SNPs that have no associations. We develop a Majorization-Minimization algorithm for computational implementation. In addition, we propose a modified BIC criterion to select the penalty parameters and determine the rank number. The proposed method is applied to a Multiple Sclerosis data set and simulated data sets and shows promise in biomarker detection.
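
    As a rough illustration of the penalized-likelihood ingredient of this approach (not the authors' reduced-rank logistic-ANOVA model or their Majorization-Minimization algorithm), the sketch below fits an L1-penalized logistic regression by proximal gradient descent (ISTA) on synthetic data, showing how the L1-penalty pushes coefficients of uninformative predictors toward zero. All data and parameter values are hypothetical.

```python
import numpy as np

def l1_logistic_ista(X, y, lam, n_iter=500):
    """L1-penalised logistic regression via proximal gradient (ISTA).
    A minimal sketch of the penalised-likelihood idea, NOT the paper's
    reduced-rank logistic-ANOVA / MM algorithm."""
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / (4 * n)   # Lipschitz bound for the gradient
    step = 1.0 / L
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))  # predicted probabilities
        grad = X.T @ (mu - y) / n             # gradient of the mean log-loss
        b = beta - step * grad
        # soft-thresholding = proximal operator of the L1-penalty
        beta = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)
    return beta

rng = np.random.RandomState(0)
X = rng.randn(200, 10)
logits = 2.0 * X[:, 0] - 1.5 * X[:, 1]        # only features 0 and 1 matter
y = (rng.rand(200) < 1.0 / (1.0 + np.exp(-logits))).astype(float)
beta = l1_logistic_ista(X, y, lam=0.1)
```

    With this strong synthetic signal, the fitted coefficients recover the signs of the two active features while the remaining coefficients stay near zero, which is the filtering behaviour the abstract describes.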

  5. Visualizing Experimental Designs for Balanced ANOVA Models using Lisp-Stat

    Directory of Open Access Journals (Sweden)

    Philip W. Iversen

    2004-12-01

    Full Text Available The structure, or Hasse, diagram described by Taylor and Hilton (1981, American Statistician) provides a visual display of the relationships between factors for balanced complete experimental designs. Using the Hasse diagram, rules exist for determining the appropriate linear model, ANOVA table, expected mean squares, and F-tests in the case of balanced designs. This procedure has been implemented in Lisp-Stat using a software representation of the experimental design. The user can interact with the Hasse diagram to add, change, or delete factors and see the effect on the proposed analysis. The system has potential uses in teaching and consulting.

  6. Tests for ANOVA models with a combination of crossed and nested designs under heteroscedasticity

    Science.gov (United States)

    Xu, Liwen; Tian, Maozai

    2016-06-01

    In this article we consider unbalanced ANOVA models with a combination of crossed and nested designs under heteroscedasticity. For the problem of testing no nested interaction effects, we propose two tests based on a parametric bootstrap (PB) approach and a generalized p-value approach, respectively. The PB test does not depend on the chosen weights used to define the parameters uniquely. These two tests are compared through their simulated Type I error rates and powers. The simulations indicate that the PB test outperforms the generalized p-value test. The PB test performs very satisfactorily across a wide range of sample configurations, while the generalized p-value test has Type I error rates much less than the nominal level most of the time. Both tests exhibit similar power properties provided the Type I error rates are close to each other. In some cases, the generalized p-value (GF) test appears to be more powerful than the PB test because of its inflated Type I error rates.
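
    The parametric bootstrap idea can be sketched in a far simpler setting than the crossed/nested designs treated here. The toy function below (an illustration of the generic PB recipe, not the authors' tests) computes a PB p-value for equality of group means under heteroscedasticity: group means and variances are resimulated under the null from their estimated sampling distributions, and the observed weighted between-group statistic is compared against that null reference.

```python
import numpy as np

def pb_test_equal_means(groups, n_boot=2000, seed=1):
    """Parametric-bootstrap p-value for H0: equal group means, unequal
    variances. A generic sketch of the PB idea only."""
    rng = np.random.RandomState(seed)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])

    def stat(means, variances):
        w = n / variances                      # precision weights
        gm = np.sum(w * means) / np.sum(w)     # weighted grand mean
        return np.sum(w * (means - gm) ** 2)   # weighted between-group SS

    t_obs = stat(m, v)
    count = 0
    for _ in range(n_boot):
        # resimulate group means and variances under H0 (common mean)
        mb = rng.randn(len(groups)) * np.sqrt(v / n)
        vb = v * rng.chisquare(n - 1) / (n - 1)
        if stat(mb, vb) >= t_obs:
            count += 1
    return (count + 1) / (n_boot + 1)

# Hypothetical groups with clearly different means -> small p-value
p = pb_test_equal_means([np.array([1.2, 0.8, 1.1, 0.7, 1.0]),
                         np.array([9.8, 10.4, 10.1, 9.6]),
                         np.array([5.0, 5.3, 4.6, 5.1, 4.9, 5.2])])
```

    Because the null distribution is simulated with the group-specific variance estimates plugged in, no homogeneity-of-variance assumption is needed, which is the appeal of the PB approach in the abstract.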

  7. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    Science.gov (United States)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
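
    The stepwise-regression level of adaptivity can be illustrated on a toy problem. The sketch below greedily selects monomial terms by residual reduction to build a sparse polynomial surrogate of a function that is itself sparse in the candidate basis. This is an assumption-laden miniature (raw monomials, no orthonormal PDD basis, no random-input UQ), not the authors' method.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.RandomState(0)
n, d = 300, 3
X = rng.uniform(-1, 1, size=(n, d))
y = 2.0 * X[:, 0] + X[:, 1] ** 2              # sparse truth: two active terms

def monomials(X, max_deg=2):
    """Candidate basis: all monomials of total degree <= max_deg."""
    cols, names = [np.ones(len(X))], ["1"]
    for deg in range(1, max_deg + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), deg):
            cols.append(np.prod(X[:, idx], axis=1))
            names.append("x" + "*x".join(map(str, idx)))
    return np.column_stack(cols), names

Phi, names = monomials(X)

# Greedy forward selection: at each step add the candidate term whose
# inclusion most reduces the residual sum of squares
selected = []
for _ in range(4):
    scores = []
    for j in range(Phi.shape[1]):
        if j in selected:
            scores.append(-np.inf)
            continue
        cols = Phi[:, selected + [j]]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        scores.append(-np.sum((y - cols @ coef) ** 2))
    selected.append(int(np.argmax(scores)))

coef, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
r2 = 1 - np.sum((y - Phi[:, selected] @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
```

    Because the true function lies in the span of two candidate columns, the greedy procedure recovers an essentially exact sparse surrogate, mirroring how stepwise regression keeps the PDD representation small.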

  8. THE QUANTITATIVE AND QUALITATIVE ANALYSIS OF WOVEN FABRICS TYPE WOOL SURFACE CHARACTERISTIC USING ANOVA MODEL

    OpenAIRE

    Liliana Hristian; Demetra Lăcrămioara Bordeianu; Iuliana Gabriela Lupu

    2013-01-01

    Three different woven fabrics made from wool-type yarns were studied with regard to pilling resistance. The impact of the number of abrasion cycles and of the pressure force on the surface characteristic was studied. The experimental data were analyzed by multifactorial analysis of variance (ANOVA). Pilling is a typical manifestation on flat textiles, which consists in the formation, on the textile surface, of fiber agglomerations as a result of the action of friction forces. The experimental work was carried out in la...

  9. Application of Anova on Fly Ash Leaching Kinetics for Value Addition

    Science.gov (United States)

    Swain, Ranjita; Mohapatro, Rudra Narayana; Bhima Rao, Raghupatruni

    2016-04-01

    Fly ash is a major problem for power plant sectors as it is dumped at the plant site. Fly ash generation increases day by day due to the rapid growth of steel industries. Ceramic/refractory industries are growing rapidly because of the increasing number of steel industries. The natural resources of ceramic/refractory raw materials are depleting with time due to consumption. In view of this, fly ash from a thermal power plant has been identified for use in the ceramic/refractory industries after suitable beneficiation. In this paper, a sample was collected from the ash pond of Vedanta. The particle size (d80 passing size) of the sample is around 150 micron. The chemical analysis of the sample shows 3.9 % Fe2O3 and more than 10 % CaO. XRD patterns show that the fly ash samples consist predominantly of the crystalline phases of quartz, hematite and magnetite in a matrix of aluminosilicate glass. Leaching of iron oxide is 98.3 % at 3 M HCl concentration at 90 °C for 270 min of leaching time. A kinetic study of the leaching experiment was carried out. ANOVA software is utilized for curve fitting and the process is optimized using MATLAB 7.1. The detailed study of properties for the ceramic material is compared with standard ceramic materials. The product contains 0.3 % iron. The other properties of the product establish that it can serve as a raw material for ceramic industries.

  10. Detecting variable responses in time-series using repeated measures ANOVA: Application to physiologic challenges.

    Science.gov (United States)

    Macey, Paul M; Schluter, Philip J; Macey, Katherine E; Harper, Ronald M

    2016-01-01

    We present an approach to analyzing physiologic timetrends recorded during a stimulus by comparing means at each time point using repeated measures analysis of variance (RMANOVA). The approach allows temporal patterns to be examined without an a priori model of expected timing or pattern of response. The approach was originally applied to signals recorded from functional magnetic resonance imaging (fMRI) volumes-of-interest (VOI) during a physiologic challenge, but we have used the same technique to analyze continuous recordings of other physiological signals such as heart rate, breathing rate, and pulse oximetry. For fMRI, the method serves as a complement to whole-brain voxel-based analyses, and is useful for detecting complex responses within pre-determined brain regions, or as a post-hoc analysis of regions of interest identified by whole-brain assessments. We illustrate an implementation of the technique in the statistical software packages R and SAS. VOI timetrends are extracted from conventionally preprocessed fMRI images. A timetrend of average signal intensity across the VOI during the scanning period is calculated for each subject. The values are scaled relative to baseline periods, and time points are binned. In SAS, the procedure PROC MIXED implements the RMANOVA in a single step. In R, we present one option for implementing RMANOVA with the mixed model function "lme". Model diagnostics, and predicted means and differences are best performed with additional libraries and commands in R; we present one example. The ensuing results allow determination of significant overall effects, and time-point specific within- and between-group responses relative to baseline. We illustrate the technique using fMRI data from two groups of subjects who underwent a respiratory challenge. RMANOVA allows insight into the timing of responses and response differences between groups, and so is suited to physiologic testing paradigms eliciting complex response patterns.
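
    The core RMANOVA computation described here (one within-subject factor: subjects crossed with binned time points) can be written out directly. The sketch below uses plain NumPy/SciPy on a hypothetical subjects × time-bins matrix of baseline-scaled signal values; it is the classical univariate repeated-measures ANOVA, not the authors' R/SAS mixed-model code, and it omits sphericity corrections.

```python
import numpy as np
from scipy import stats

# Hypothetical baseline-scaled, binned VOI timetrends:
# rows = subjects, columns = time bins (NOT data from the article)
Y = np.array([[ 0.0, 0.8, 1.6, 1.1],
              [ 0.2, 1.0, 1.9, 1.3],
              [-0.1, 0.6, 1.4, 0.9],
              [ 0.1, 0.9, 1.7, 1.2]])
n, t = Y.shape
grand = Y.mean()

ss_time = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # effect of time bin
ss_subj = t * ((Y.mean(axis=1) - grand) ** 2).sum()   # between-subject offsets
ss_total = ((Y - grand) ** 2).sum()
ss_err = ss_total - ss_time - ss_subj                 # subject-by-time residual

F = (ss_time / (t - 1)) / (ss_err / ((n - 1) * (t - 1)))
p = stats.f.sf(F, t - 1, (n - 1) * (t - 1))
```

    A mixed-model fit (SAS PROC MIXED, or lme in R as the abstract describes) reproduces this analysis while additionally supplying model diagnostics and adjustable covariance structures.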

  11. ANOVA model for network meta-analysis of diagnostic test accuracy data

    OpenAIRE

    Nyaga, Victoria; Aerts, Marc; Arbyn, Marc

    2016-01-01

    Network meta-analysis (NMA) allows combining efficacy information from multiple comparisons of trials assessing different therapeutic interventions for a given disease and estimating unobserved comparisons from a network of observed comparisons. Applying NMA to diagnostic accuracy studies is a statistical challenge given the inherent correlation of sensitivity and specificity. A conceptually simple and novel hierarchical arm-based (AB) model which expresses the logit transformed sensitivity...

  12. Simultaneous Optimality of LSE and ANOVA Estimate in General Mixed Models

    Institute of Scientific and Technical Information of China (English)

    Mi Xia WU; Song Gui WANG; Kai Fun YU

    2008-01-01

    Problems of simultaneous optimal estimates and optimal tests in general mixed models are considered. A necessary and sufficient condition is presented for the least squares estimate of the fixed effects and the analysis of variance (Henderson III) estimate of the variance components to be uniformly minimum variance unbiased estimates simultaneously. This result can be applied to the problems of finding uniformly optimal unbiased tests and uniformly most accurate unbiased confidence intervals on parameters of interest, and to finding equivalences among several common estimates of variance components.
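
    For a concrete (and deliberately tiny) instance of the ANOVA-type estimates discussed here, the sketch below computes the classical method-of-moments (Henderson-type) variance component estimates for a balanced one-way random-effects layout. The numbers are hypothetical and hand-checkable; the paper's general mixed-model setting is far broader.

```python
import numpy as np

# Balanced one-way random-effects layout: k groups, n observations each
# (tiny hand-checkable data, not from the paper)
Y = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # rows = groups
k, n = Y.shape
grand = Y.mean()

# between-group and within-group mean squares
msa = n * ((Y.mean(axis=1) - grand) ** 2).sum() / (k - 1)
mse = ((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))

sigma2_e = mse                           # ANOVA estimate of error variance
sigma2_a = max((msa - mse) / n, 0.0)     # ANOVA estimate of group variance
```

    For these numbers MSA = 13.5 and MSE = 1, giving σ̂²_e = 1 and σ̂²_a = (13.5 − 1)/3 ≈ 4.17, which can be verified by hand.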

  13. Are drought occurrence and severity aggravating? A study on SPI drought class transitions using log-linear models and ANOVA-like inference

    Directory of Open Access Journals (Sweden)

    E. E. Moreira

    2012-08-01

    Full Text Available Long time series (95 to 135 yr) of the 12-month time scale Standardized Precipitation Index (SPI) relative to 10 locations across Portugal were studied with the aim of investigating whether drought frequency and severity are changing through time. Considering four drought severity classes, time series of drought class transitions were computed and later divided into several sub-periods according to the length of the SPI time series. Drought class transitions were calculated to form a 2-dimensional contingency table for each sub-period, which refers to the number of transitions among drought severity classes. Two-dimensional log-linear models were fitted to these contingency tables and an ANOVA-like inference was then performed in order to investigate differences relative to drought class transitions among those sub-periods, which were considered as treatments of only one factor. The application of ANOVA-like inference to these data allowed comparison of the sub-periods in terms of probabilities of transition between drought classes, which were used to detect a possible trend in drought frequency and severity. Results for a number of locations show some similarity between alternate sub-periods and differences between consecutive ones regarding the persistency of severe/extreme and sometimes moderate droughts. In global terms, results do not support the assumption of a trend towards progressive aggravation of drought occurrence during the last century, but rather suggest the existence of long-duration cycles.
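
    The log-linear machinery involved can be illustrated on a made-up transition contingency table. The sketch below fits the simplest log-linear model (row × column independence) via scipy.stats.chi2_contingency; the counts are hypothetical, and the authors' actual models for SPI transitions are richer than plain independence.

```python
import numpy as np
from scipy import stats

# Hypothetical counts of transitions between four SPI drought classes
# (rows = class at month t, columns = class at month t+1); NOT the paper's data
table = np.array([[30,  5,  3,  1],
                  [ 6, 20,  4,  2],
                  [ 2,  5, 15,  3],
                  [ 1,  2,  4, 10]])

# chi2_contingency fits the main-effects-only log-linear model
# (independence) and tests it against the saturated model
chi2, p, dof, expected = stats.chi2_contingency(table)
```

    The strong diagonal (persistence within a drought class) makes the independence model fit poorly, so p is small; the ANOVA-like inference in the paper compares such transition structures across sub-periods.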

  14. Introducing ANOVA and ANCOVA a GLM approach

    CERN Document Server

    Rutherford, Andrew

    2000-01-01

    Traditional approaches to ANOVA and ANCOVA are now being replaced by a General Linear Modeling (GLM) approach. This book begins with a brief history of the separate development of ANOVA and regression analyses and demonstrates how both analysis forms are subsumed by the General Linear Model. A simple single independent factor ANOVA is analysed first in conventional terms and then again in GLM terms to illustrate the two approaches. The text then goes on to cover the main designs, both independent and related ANOVA and ANCOVA, single and multi-factor designs. The conventional statistical assumptions underlying ANOVA and ANCOVA are detailed and given expression in GLM terms. Alternatives to traditional ANCOVA…

  15. Why we should use simpler models if the data allow this: relevance for ANOVA designs in experimental biology

    OpenAIRE

    Lazic Stanley E

    2008-01-01

    Abstract Background Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables such as dose, time, or age, can be treated as a continuous, rather than a categorical variable during analysis – even if subjects were randomly assigned to treatment groups. While this is not common, there are a number of advantages of such an approach, including greater statistical power due to increased precision, a simple...

  16. Are droughts occurrence and severity aggravating? A study on SPI drought class transitions using loglinear models and ANOVA-like inference

    Directory of Open Access Journals (Sweden)

    E. E. Moreira

    2011-12-01

    Full Text Available Long time series (95 to 135 yr) of the Standardized Precipitation Index (SPI) computed with the 12-month time scale relative to 10 locations across Portugal were studied with the aim of investigating whether drought frequency and severity are changing through time. Considering four drought severity classes, time series of drought class transitions were computed and later divided into 4 or 5 sub-periods according to the length of the time series. Drought class transitions were calculated to form a 2-dimensional contingency table for each period. Two-dimensional loglinear models were fitted to these contingency tables and an ANOVA-like inference was then performed in order to investigate differences relative to drought class transitions among those sub-periods, which were considered as treatments of only one factor. The application of ANOVA-like inference to these data allowed comparison of the four or five sub-periods in terms of probabilities of transition between drought classes, which were used to detect a possible trend in the time evolution of drought frequency and severity that could be related to climate change. Results for a number of locations show some similarity between the first, third and fifth sub-periods (or the second and the fourth if there were only 4 sub-periods) regarding the persistency of severe/extreme and sometimes moderate droughts. In global terms, results do not support the assumption of a trend towards progressive aggravation of drought occurrence during the last century, but rather suggest the existence of long-duration cycles.

  17. ANOVA and ANCOVA A GLM Approach

    CERN Document Server

    Rutherford, Andrew

    2012-01-01

    Provides an in-depth treatment of ANOVA and ANCOVA techniques from a linear model perspective ANOVA and ANCOVA: A GLM Approach provides a contemporary look at the general linear model (GLM) approach to the analysis of variance (ANOVA) of one- and two-factor psychological experiments. With its organized and comprehensive presentation, the book successfully guides readers through conventional statistical concepts and how to interpret them in GLM terms, treating the main single- and multi-factor designs as they relate to ANOVA and ANCOVA. The book begins with a brief history of the separate development…

  18. Why we should use simpler models if the data allow this: relevance for ANOVA designs in experimental biology

    Directory of Open Access Journals (Sweden)

    Lazic Stanley E

    2008-07-01

    Full Text Available Abstract Background Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables such as dose, time, or age, can be treated as a continuous, rather than a categorical variable during analysis – even if subjects were randomly assigned to treatment groups. While this is not common, there are a number of advantages of such an approach, including greater statistical power due to increased precision, a simpler and more informative interpretation of the results, greater parsimony, and transformation of the predictor variable is possible. Results An example is given from an experiment where rats were randomly assigned to receive either 0, 60, 180, or 240 mg/L of fluoxetine in their drinking water, with performance on the forced swim test as the outcome measure. Dose was treated as either a categorical or continuous variable during analysis, with the latter analysis leading to a more powerful test (p = 0.021 vs. p = 0.159). This will be true in general, and the reasons for this are discussed. Conclusion There are many advantages to treating variables as continuous numeric variables if the data allow this, and this should be employed more often in experimental biology. Failure to use the optimal analysis runs the risk of missing significant effects or relationships.
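
    The power gain from treating dose as continuous can be reproduced on made-up numbers (the fluoxetine data themselves are not available here). In the sketch below, the same dose-response data are analysed once with dose as a categorical factor (one-way ANOVA, 3 numerator df) and once as a continuous predictor (a 1-df slope test); with an approximately linear trend, the focused 1-df test yields the smaller p-value.

```python
import numpy as np
from scipy import stats

# Illustrative dose-response data (NOT the paper's rat data):
# four dose groups, five observations each, an underlying linear trend,
# plus a fixed "noise" pattern so the example is reproducible
doses = np.repeat([0.0, 60.0, 180.0, 240.0], 5)
noise = np.tile([0.3, -0.2, 0.1, -0.3, 0.2], 4)
y = 10.0 + 0.01 * doses + noise

# Dose as a categorical factor: one-way ANOVA spreads the signal over 3 df
groups = [y[doses == d] for d in np.unique(doses)]
F_cat, p_cat = stats.f_oneway(*groups)

# Dose as a continuous predictor: the slope t-test is a focused 1-df test
p_lin = stats.linregress(doses, y).pvalue
```

    Both analyses detect the effect, but the regression p-value is smaller because the between-group signal is concentrated in a single degree of freedom, which is the point the abstract makes.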

  19. Detecting variable responses in time-series using repeated measures ANOVA: Application to physiologic challenges [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Paul M. Macey

    2016-07-01

    Full Text Available We present an approach to analyzing physiologic timetrends recorded during a stimulus by comparing means at each time point using repeated measures analysis of variance (RMANOVA). The approach allows temporal patterns to be examined without an a priori model of expected timing or pattern of response. The approach was originally applied to signals recorded from functional magnetic resonance imaging (fMRI) volumes-of-interest (VOI) during a physiologic challenge, but we have used the same technique to analyze continuous recordings of other physiological signals such as heart rate, breathing rate, and pulse oximetry. For fMRI, the method serves as a complement to whole-brain voxel-based analyses, and is useful for detecting complex responses within pre-determined brain regions, or as a post-hoc analysis of regions of interest identified by whole-brain assessments. We illustrate an implementation of the technique in the statistical software packages R and SAS. VOI timetrends are extracted from conventionally preprocessed fMRI images. A timetrend of average signal intensity across the VOI during the scanning period is calculated for each subject. The values are scaled relative to baseline periods, and time points are binned. In SAS, the procedure PROC MIXED implements the RMANOVA in a single step. In R, we present one option for implementing RMANOVA with the mixed model function “lme”. Model diagnostics, and predicted means and differences are best performed with additional libraries and commands in R; we present one example. The ensuing results allow determination of significant overall effects, and time-point specific within- and between-group responses relative to baseline. We illustrate the technique using fMRI data from two groups of subjects who underwent a respiratory challenge. RMANOVA allows insight into the timing of responses and response differences between groups, and so is suited to physiologic testing paradigms eliciting complex response patterns.

  20. Teaching Principles of Inference with ANOVA

    Science.gov (United States)

    Tarlow, Kevin R.

    2016-01-01

    Analysis of variance (ANOVA) is a test of "mean" differences, but the reference to "variances" in the name is often overlooked. Classroom activities are presented to illustrate how ANOVA works with emphasis on how to think critically about inferential reasoning.
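
    The point that ANOVA is, at heart, a ratio of variances can be shown in a few lines of code. The sketch below (illustrative data, plain NumPy/SciPy) computes the between-group and within-group mean squares explicitly and checks the resulting F statistic against scipy.stats.f_oneway.

```python
import numpy as np
from scipy import stats

# Three hypothetical treatment groups (illustrative data only)
groups = [np.array([4.1, 5.0, 4.7, 5.3]),
          np.array([6.2, 5.8, 6.5, 6.1]),
          np.array([5.1, 4.9, 5.5, 5.2])]

k = len(groups)                        # number of groups
n = sum(len(g) for g in groups)        # total sample size
grand = np.concatenate(groups).mean()  # grand mean

# Between-group variance: spread of the group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-group variance: spread of observations around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n - k)

F = ms_between / ms_within             # ANOVA tests means via this variance ratio
p = stats.f.sf(F, k - 1, n - k)

F_ref, p_ref = stats.f_oneway(*groups)
assert np.isclose(F, F_ref) and np.isclose(p, p_ref)
```

    Seeing F built from the two mean squares makes the "variances in the name" explicit: group means differ only insofar as between-group variability exceeds within-group variability.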

  21. Foliar Sprays of Citric Acid and Malic Acid Modify Growth, Flowering, and Root to Shoot Ratio of Gazania (Gazania rigens L.): A Comparative Analysis by ANOVA and Structural Equations Modeling

    Directory of Open Access Journals (Sweden)

    Majid Talebi

    2014-01-01

    Full Text Available Foliar application of two levels of citric acid and malic acid (100 or 300 mg L−1) was investigated on flower stem height, plant height, flower performance and yield indices (fresh yield, dry yield and root to shoot ratio) of Gazania. Distilled water was applied as the control treatment. Multivariate analysis revealed that while the experimental treatments had no significant effect on fresh weight and the flower count, the plant dry weight was significantly increased by 300 mg L−1 malic acid. Citric acid at 100 and 300 mg L−1 and malic acid at 300 mg L−1 increased the root fresh weight significantly. Both the plant height and the peduncle length were significantly increased at all applied levels of citric acid and malic acid. The display time of flowers on the plant increased in all treatments compared to the control treatment. The root to shoot ratio was increased significantly at 300 mg L−1 citric acid compared to all other treatments. These findings confirm earlier reports that citric acid and malic acid, as environmentally sound chemicals, affect various aspects of growth and development of crops. Structural equation modeling is used in parallel with ANOVA to assess the factor effects and the possible paths of effects.

  22. ANOVA for the behavioral sciences researcher

    CERN Document Server

    Cardinal, Rudolf N

    2013-01-01

    This new book provides a theoretical and practical guide to analysis of variance (ANOVA) for those who have not had a formal course in this technique, but need to use this analysis as part of their research. From their experience in teaching this material and applying it to research problems, the authors have created a summary of the statistical theory underlying ANOVA, together with important issues, guidance, practical methods, references, and hints about using statistical software. These have been organized so that the student can learn the logic of the analytical techniques but also use the…

  23. Permutation Tests for Stochastic Ordering and ANOVA

    CERN Document Server

    Basso, Dario; Salmaso, Luigi; Solari, Aldo

    2009-01-01

    Permutation testing for multivariate stochastic ordering and ANOVA designs is a fundamental issue in many scientific fields such as medicine, biology, pharmaceutical studies, engineering, economics, psychology, and social sciences. This book presents advanced methods and related R codes to perform complex multivariate analyses

  24. ANOVA-like analysis of cancer death age

    Science.gov (United States)

    Areia, Aníbal; Mexia, João T.

    2016-06-01

    We use ANOVA to study the influence of year, sex, country and location on the average cancer death age. The data used was from the World Health Organization (WHO) files for 1999, 2003, 2007 and 2011. The locations considered were: kidney, leukaemia, melanoma of skin and oesophagus and the countries: Portugal, Norway, Greece and Romania.

  25. Smoothing spline ANOVA for super-large samples: Scalable computation via rounding parameters

    OpenAIRE

    Helwig, Nathaniel E.; Ma, Ping

    2016-01-01

    In the current era of big data, researchers routinely collect and analyze data of super-large sample sizes. Data-oriented statistical methods have been developed to extract information from super-large data. Smoothing spline ANOVA (SSANOVA) is a promising approach for extracting information from noisy data; however, the heavy computational cost of SSANOVA hinders its wide application. In this paper, we propose a new algorithm for fitting SSANOVA models to super-large sample data. In this algo...

  26. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Science.gov (United States)

    Liao, Qifeng; Lin, Guang

    2016-07-01

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  27. Tests of Linear Hypotheses in the ANOVA under Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    Jin-Ting Zhang

    2013-05-01

    Full Text Available It is often of interest to undertake a general linear hypothesis testing (GLHT) problem in the one-way ANOVA without assuming the equality of the group variances. When the equality of the group variances is valid, it is well known that the GLHT problem can be solved by the classical F-test. The classical F-test, however, may lead to misleading conclusions when the variance homogeneity assumption is seriously violated, since it does not take the group variance heteroscedasticity into account. To our knowledge, little work has been done for this heteroscedastic GLHT problem except for some special cases. In this paper, we propose a simple approximate Hotelling T2 (AHT) test. We show that the AHT test is invariant under affine transformations, different choices of the coefficient matrix used to define the same hypothesis, and different labeling schemes of the group means. Simulations and real data applications indicate that the AHT test is comparable with or outperforms some well-known approximate solutions proposed for the k-sample Behrens-Fisher problem, which is a special case of the heteroscedastic GLHT problem.
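
    The AHT test itself involves matrix computations beyond a short sketch, but one well-known approximate solution for the same heteroscedastic setting, Welch's one-way ANOVA, is compact. The function below implements the standard Welch statistic (an assumption of this illustration: Welch's test stands in for the approximate solutions the abstract compares against) and sanity-checks it against SciPy's Welch t-test in the two-sample Behrens-Fisher case, where the two must agree exactly.

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's heteroscedasticity-robust one-way ANOVA (not the AHT test)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                            # precision weights
    mw = np.sum(w * m) / np.sum(w)       # weighted grand mean
    A = np.sum(w * (m - mw) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    B = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    F = A / B
    df2 = (k ** 2 - 1) / (3 * tmp)       # Welch's approximate denominator df
    return F, stats.f.sf(F, k - 1, df2)

# Sanity check: for k = 2, Welch's ANOVA must match Welch's t-test
a = np.array([1.1, 2.3, 1.8, 2.9, 2.2])
b = np.array([3.9, 5.1, 6.2, 4.4, 5.8, 5.0])
_, p_anova = welch_anova(a, b)
p_t = stats.ttest_ind(a, b, equal_var=False).pvalue
assert np.isclose(p_anova, p_t)
```

    For two groups the Welch F statistic equals the squared Welch t statistic and the denominator df reduces to the Satterthwaite approximation, hence the agreement checked above.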

  28. Foliar Sprays of Citric Acid and Malic Acid Modify Growth, Flowering, and Root to Shoot Ratio of Gazania (Gazania rigens L.): A Comparative Analysis by ANOVA and Structural Equations Modeling

    OpenAIRE

    Majid Talebi; Ebrahim Hadavi; Nima Jaafari

    2014-01-01

    Foliar application of two levels of citric acid and malic acid (100 or 300 mg L−1) was investigated on flower stem height, plant height, flower performance and yield indices (fresh yield, dry yield and root to shoot ratio) of Gazania. Distilled water was applied as control treatment. Multivariate analysis revealed that while the experimental treatments had no significant effect on fresh weight and the flower count, the plant dry weight was significantly increased by 300 mg L−1 malic acid. Cit...

  29. Sensitivity Analysis of Composite Indicators through Mixed Model ANOVA

    OpenAIRE

    Cristina Davino, Rosaria Romano

    2011-01-01

    The paper proposes a new approach for analysing the stability of Composite Indicators. Starting from the consideration that different subjective choices occur in their construction, the paper emphasizes the importance of investigating the possible alternatives in order to have a clear and objective picture of the phenomenon under investigation. Methods dealing with Composite Indicator stability are known in the literature as Sensitivity Analysis. In such a framework, the paper presents a new appr...

  30. Non-parametric kernel estimation for the ANOVA decomposition and sensitivity analysis

    International Nuclear Information System (INIS)

    In this paper, we consider the non-parametric estimation of the analysis of variance (ANOVA) decomposition, which is useful for applications in sensitivity analysis (SA) and in the more general emulation framework. Pursuing the point of view of state-dependent parameter (SDP) estimation, non-parametric kernel estimators (including high-order kernel estimators) are built for those purposes. On the basis of the kernel technique, the asymptotic convergence rate is theoretically obtained for the estimator of the sensitivity indices. It is shown that kernel estimation can provide a faster convergence rate than SDP estimation for both the ANOVA decomposition and the sensitivity indices. This would help one to obtain a more accurate estimate at a smaller computational cost.
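
    A minimal version of kernel-based estimation of an ANOVA/sensitivity quantity: a Nadaraya-Watson smoother estimates the conditional mean E[Y|X1], and the ratio Var(E[Y|X1])/Var(Y) approximates the first-order sensitivity index S1. This sketch uses a simple additive test function with known S1 = 0.5, a Gaussian kernel, and a hand-picked bandwidth; it illustrates the idea only, not the paper's high-order kernel estimator or its convergence analysis.

```python
import numpy as np

def nw_smooth(x, y, h):
    """Nadaraya-Watson kernel regression of y on x at the sample points."""
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d ** 2)           # Gaussian kernel weights
    return (K @ y) / K.sum(axis=1)

rng = np.random.RandomState(0)
n = 2000
x1 = rng.rand(n)
x2 = rng.rand(n)
y = x1 + x2                             # additive model: true S1 = 0.5

m1 = nw_smooth(x1, y, h=0.05)           # kernel estimate of E[Y | X1]
S1 = m1.var() / y.var()                 # first-order sensitivity index
```

    For this additive function the estimate should land close to the true value 0.5; the bandwidth drives the bias-variance tradeoff that the paper's convergence-rate analysis makes precise.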

  31. One-way ANOVA based on interval information

    Science.gov (United States)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the assumption of homogeneity of interval variances. Moreover, the least significant difference (LSD) method for multiple comparison of interval means is developed for when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.
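For reference, the classical real-valued one-way ANOVA that the paper generalizes computes an F statistic from between-group and within-group sums of squares. A minimal sketch with made-up measurements (not data from the paper):

```python
def one_way_anova(groups):
    """Classical one-way ANOVA F statistic for k groups of real-valued data."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

groups = [[5.1, 4.9, 5.3], [5.0, 5.2, 5.1], [6.8, 7.0, 6.9]]
print(round(one_way_anova(groups), 2))  # → 162.0: the third group clearly differs
```

The interval extension replaces each real observation with a closed interval and compares interval-valued statistics via the index described in the abstract.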

  12. Fatigue of NiTi SMA–pulley system using Taguchi and ANOVA

    Science.gov (United States)

    Mohd Jani, Jaronie; Leary, Martin; Subic, Aleksandar

    2016-05-01

    Shape memory alloy (SMA) actuators can be integrated with a pulley system to provide mechanical advantage and to reduce packaging space; however, there appears to be no formal investigation of the effect of a pulley system on SMA structural or functional fatigue. In this work, cyclic testing was conducted on nickel–titanium (NiTi) SMA actuators on a pulley system and in a control experiment (without pulley). Both structural and functional fatigue were monitored until fracture or until a maximum of 1E5 cycles was reached for each experimental condition. The Taguchi method and analysis of variance (ANOVA) were used to optimise the SMA–pulley system configurations. In general, one-way ANOVA at the 95% confidence level showed no significant difference between the structural or functional fatigue of SMA–pulley actuators and SMA actuators without pulley. Within the sample of SMA–pulley actuators, the effect of activation duration had the greatest significance for both structural and functional fatigue, and the pulley configuration (angle of wrap and sheave diameter) had greater statistical significance than load magnitude for functional fatigue. This work identified that the structural and functional fatigue performance of SMA–pulley systems is optimised by maximising sheave diameter and using an intermediate wrap angle, with minimal load and activation duration. However, these parameters may not be compatible with commercial imperatives. A test was completed for a commercially optimal SMA–pulley configuration. This novel observation will be applicable to many areas of SMA–pulley system applications development.

  13. Prediction and Control of Cutting Tool Vibration in Cnc Lathe with Anova and Ann

    Directory of Open Access Journals (Sweden)

    S. S. Abuthakeer

    2011-06-01

    Full Text Available Machining is a complex process in which many variables can adversely affect the desired results. Among them, cutting tool vibration is the most critical phenomenon, influencing the dimensional precision of the machined components, the functional behavior of the machine tools and the life of the cutting tool. In a machining operation, cutting tool vibrations are mainly influenced by cutting parameters such as cutting speed, depth of cut and tool feed rate. In this work, the cutting tool vibrations are controlled using a damping pad made of neoprene. Experiments were conducted in a CNC lathe where the tool holder was supported with and without the damping pad. The cutting tool vibration signals were collected through a data acquisition system supported by LabVIEW software. To increase the reliability of the experiments, a full factorial experimental design was used. The experimental data collected were tested with analysis of variance (ANOVA) to understand the influences of the cutting parameters, and empirical models were developed using ANOVA. Experimental studies and data analysis were performed to validate the proposed damping system. A multilayer perceptron neural network model was constructed with the feed-forward back-propagation algorithm using the acquired data. On completion of the experimental tests, the ANN was used to validate the results obtained and to predict the behavior of the system under any cutting condition within the operating range. The on-site tests show that the proposed system reduces the vibration of the cutting tool to a great extent.
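A full factorial design such as the one used here simply crosses every level of every factor. A sketch with hypothetical cutting-parameter levels (the abstract does not state the actual levels or level counts):

```python
import itertools

# Hypothetical 3-factor, 3-level full factorial design (27 runs),
# mirroring the cutting parameters named in the abstract.
levels = {
    "cutting_speed_m_min": [100, 150, 200],
    "feed_mm_rev": [0.1, 0.2, 0.3],
    "depth_of_cut_mm": [0.5, 1.0, 1.5],
}
runs = [dict(zip(levels, combo)) for combo in itertools.product(*levels.values())]
print(len(runs))  # → 27: every combination of levels is tested once
```

Each run's measured vibration amplitude would then feed the ANOVA and, afterwards, the neural-network training set.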

  14. Modelling Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 8th European Conference on Modelling Foundations and Applications, held in Kgs. Lyngby, Denmark, in July 2012. The 20 revised full foundations track papers and 10 revised full applications track papers presented were carefully reviewed and sel...

  15. ANOVA parameters influence in LCF experimental data and simulation results

    Directory of Open Access Journals (Sweden)

    Vercelli A.

    2010-06-01

    Full Text Available The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2], as shown, for example, in Figure 1, where a good cyclic plasticity constitutive model simulates a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper the main topic of the research activity is to investigate whether the parameters which prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in capturing all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be easy to apply and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run on a commercial non-linear solver, ABAQUS® 6.8, and replicated the experimental tests. The stress, strain, thermal results from the thermo

  16. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul

    development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums, heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil......Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications for both steady state and dynamic applications are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and the...

  17. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  18. HEMOGLOBIN STATUS OBSERVED IN WOMEN OF AMRAVATI (MS) INDIA BY USING ANOVA TEST

    OpenAIRE

    Tantarpale V T,; Raksheskar A. Gracy

    2012-01-01

    The present study examined the hemoglobin status of women in the Amravati region and analysed it statistically using the ANOVA test. A total of 298 women were tested for hemoglobin status. One-way ANOVA was used to compare population groups and to analyse hemoglobin percentages. Normal hemoglobin values were not observed in any age group in the total survey of women of the Amravati region.

  19. Application of nuclear models

    International Nuclear Information System (INIS)

    The development of an extensive experimental nuclear data base over the past three decades has been accompanied by parallel advancement of nuclear theory and models used to describe and interpret the measurements. This theoretical capability is important because of many nuclear data requirements that are still difficult, impractical, or even impossible to meet with present experimental techniques. Examples of such data needs are neutron cross sections for unstable fission products, which are required for neutron absorption corrections in reactor calculations; cross sections for transactinide nuclei that control production of long-lived nuclear wastes; and the extensive dosimetry, activation, and neutronic data requirements to 40 MeV that must accompany development of the Fusion Materials Irradiation Test (FMIT) facility. In recent years systematic improvements have been made in the nuclear models and codes used in data evaluation and, most importantly, in the methods used to derive physically based parameters for model calculations. The newly issued ENDF/B-V evaluated data library relies in many cases on nuclear reaction theory based on compound-nucleus Hauser-Feshbach, preequilibrium and direct reaction mechanisms as well as spherical and deformed optical-model theories. The development and applications of nuclear models for data evaluation are discussed with emphasis on the 1 to 40 MeV neutron energy range

  20. Concrete fracture models and applications

    CERN Document Server

    Kumar, Shailendra

    2011-01-01

    Concrete-Fracture Models and Applications provides a basic introduction to nonlinear concrete fracture models. Readers will find a state-of-the-art review on various aspects of the material behavior and development of different concrete fracture models.

  1. Mathematical modeling with multidisciplinary applications

    CERN Document Server

    Yang, Xin-She

    2013-01-01

    Features mathematical modeling techniques and real-world processes with applications in diverse fields Mathematical Modeling with Multidisciplinary Applications details the interdisciplinary nature of mathematical modeling and numerical algorithms. The book combines a variety of applications from diverse fields to illustrate how the methods can be used to model physical processes, design new products, find solutions to challenging problems, and increase competitiveness in international markets. Written by leading scholars and international experts in the field, the

  2. Finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    Features step-by-step examples based on actual data and connects fundamental mathematical modeling skills and decision making concepts to everyday applicability Featuring key linear programming, matrix, and probability concepts, Finite Mathematics: Models and Applications emphasizes cross-disciplinary applications that relate mathematics to everyday life. The book provides a unique combination of practical mathematical applications to illustrate the wide use of mathematics in fields ranging from business, economics, finance, management, operations research, and the life and social sciences.

  3. Modelling Foundations and Applications

    DEFF Research Database (Denmark)

    selected from 81 submissions. Papers on all aspects of MDE were received, including topics such as architectural modelling and product lines, code generation, domain-specic modeling, metamodeling, model analysis and verication, model management, model transformation and simulation. The breadth of topics...

  4. Parametric study of the biopotential equation for breast tumour identification using ANOVA and Taguchi method.

    Science.gov (United States)

    Ng, Eddie Y K; Ng, W Kee

    2006-03-01

    Extensive literature has shown a significant trend of progressive electrical changes according to the proliferative characteristics of breast epithelial cells. Physiologists have further postulated that malignant transformation results from sustained depolarization and a failure of the cell to repolarize after cell division, making the area where cancer develops relatively depolarized compared to its non-dividing or resting counterparts. In this paper, we present a new approach, the Biofield Diagnostic System (BDS), which might have the potential to augment the process of diagnosing breast cancer. This technique is based on the efficacy of analysing skin surface electrical potentials for the differential diagnosis of breast abnormalities. We developed a female breast model, close to the actual anatomy, by considering the breast as a hemisphere in the supine condition with various layers of unequal thickness. Isotropic homogeneous conductivity was assigned to each of these compartments, and the volume conductor problem was solved using the finite element method to determine the potential distribution developed due to a dipole source. Furthermore, four important parameters were identified and analysis of variance (ANOVA, Yates' method) was performed using a 2^n factorial design (n = number of parameters = 4). The effect and importance of these parameters were analysed. The Taguchi method was further used to optimise the parameters in order to ensure that the signal from the tumour is maximal compared to the noise from other factors. The Taguchi method proved that the probe's source strength, tumour size and tumour location have a great effect on the surface potential field. For best results on the breast surface, while having the biggest possible tumour size, low amplitudes of current should be applied nearest to the breast surface. PMID:16929931
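Yates' method tabulates the effects of a 2^n factorial design by repeated pairwise sums and differences of the responses taken in standard order. A sketch, demonstrated on a 2^2 example for brevity (the study above uses 2^4; the response values here are hypothetical):

```python
def yates(responses):
    """Yates' algorithm for a 2^n factorial design.
    `responses` must be in standard (Yates) order: (1), a, b, ab, c, ac, ...
    Returns the column of contrasts; effect_i = contrast_i / 2**(n-1)."""
    n = len(responses).bit_length() - 1
    assert len(responses) == 2 ** n, "number of responses must be a power of two"
    col = list(responses)
    for _ in range(n):
        sums = [col[i] + col[i + 1] for i in range(0, len(col), 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, len(col), 2)]
        col = sums + diffs
    return col

# Hypothetical 2^2 check: responses (1)=10, a=14, b=12, ab=16
contrasts = yates([10, 14, 12, 16])
# contrasts[0] is the grand total; main effect of A = contrasts[1] / 2
print(contrasts)  # → [52, 8, 4, 0]
```

For the 2^4 case of the paper, the same function is run on 16 responses and yields 15 factorial contrasts plus the grand total.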

  5. A Robust Design Applicability Model

    DEFF Research Database (Denmark)

    Ebro, Martin; Lars, Krogstie; Howard, Thomas J.

    2015-01-01

    This paper introduces a model for assessing the applicability of Robust Design (RD) in a project or organisation. The intention of the Robust Design Applicability Model (RDAM) is to provide support for decisions by engineering management considering the relevant level of RD activities. The...

  6. Multilevel Models Applications Using SAS

    CERN Document Server

    Wang, Jichuan; Fisher, James

    2011-01-01

    This book covers a broad range of topics about multilevel modeling. The goal is to help readers to understand the basic concepts, theoretical frameworks, and application methods of multilevel modeling. It is at a level also accessible to non-mathematicians, focusing on the methods and applications of various multilevel models and using the widely used statistical software SAS®. Examples are drawn from analysis of real-world research data.

  7. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums, heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil...... the steady state, distributed behaviour of a short-path evaporator....

  8. Models for Dynamic Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Morales Rodriguez, Ricardo; Heitzig, Martina;

    2011-01-01

    This chapter covers aspects of the dynamic modelling and simulation of several complex operations that include a controlled blending tank, a direct methanol fuel cell that incorporates a multiscale model, a fluidised bed reactor, a standard chemical reactor and finally a polymerisation reactor. T...

  9. Engine Modelling for Control Applications

    DEFF Research Database (Denmark)

    Hendricks, Elbert

    1997-01-01

    In earlier work published by the author and co-authors, a dynamic engine model called a Mean Value Engine Model (MVEM) was developed. This model is physically based and is intended mainly for control applications. In its newer form, it is easy to fit to many different engines and requires little ...

  10. Modelling Gesture Based Ubiquitous Applications

    CERN Document Server

    Zacharia, Kurien; Varghese, Surekha Mariam

    2011-01-01

    A cost-effective, gesture-based modelling technique called Virtual Interactive Prototyping (VIP) is described in this paper. Prototyping is implemented by projecting a virtual model of the equipment to be prototyped. Users can interact with the virtual model like the original working equipment. For capturing and tracking the user interactions with the model, image and sound processing techniques are used. VIP is a flexible and interactive prototyping method that has many applications in ubiquitous computing environments. Different commercial as well as socio-economic applications, and the extension of VIP to interactive advertising, are also discussed.

  11. Toughness Scaling Model Applications

    Czech Academy of Sciences Publication Activity Database

    Dlouhý, Ivo; Kozák, Vladislav; Holzmann, Miloslav

    78. Dordrecht : Kluwer Academic Publishers, 2002 - (Dlouhý, I.), s. 195-212 - (NATO Science Series. Mathematics, Physics and Chemistry. 2) R&D Projects: GA AV ČR IAA2041003; GA MŠk ME 303 Institutional research plan: CEZ:AV0Z2041904 Keywords : Fracture toughness transferability * pre cracked Charpyspecimen * toughness scaling models Subject RIV: JL - Materials Fatigue, Friction Mechanics

  12. Actuarial applications of financial models.

    OpenAIRE

    Goovaerts, Marc; Dhaene, Jan

    1996-01-01

    In the present contribution we indicate the type of situations, seen from an insurance point of view, in which financial models serve as a basis for providing solutions to practical problems. In addition, some of the essential differences in the basic assumptions underlying financial models and actuarial applications are given.

  13. Geophysical Applications of Vegetation Modeling

    OpenAIRE

    J. O. Kaplan

    2001-01-01

    This thesis describes the development and selected applications of a global vegetation model, BIOME4. The model is applied to problems in high-latitude vegetation distribution and climate, trace gas production, and isotope biogeochemistry. It demonstrates how a modeling approach, based on principles of plant physiology and ecology, can be applied to interdisciplinary problems that cannot be adequately addressed by direct observations or experiments. The work is relevant to understanding the p...

  14. Sznajd Model and its Applications

    International Nuclear Information System (INIS)

    In 2000 we proposed a sociophysics model of opinion formation, which was based on the trade union maxim ''United we Stand, Divided we Fall'' (USDF) and later, due to Dietrich Stauffer, became known as the Sznajd model (SM). The main difference between the SM and voter or Ising-type models is that information flows outward. In this paper we review the modifications and applications of the SM that have been proposed in the literature. (author)

  15. Sznajd model and its applications

    CERN Document Server

    Sznajd-Weron, K

    2005-01-01

    In 2000 we proposed a sociophysics model of opinion formation, which was based on the trade union maxim "United we Stand, Divided we Fall" (USDF) and later, due to Dietrich Stauffer, became known as the Sznajd model (SM). The main difference between the SM and voter or Ising-type models is that information flows outward. In this paper we review the modifications and applications of the SM that have been proposed in the literature.

  16. Survival analysis models and applications

    CERN Document Server

    Liu, Xian

    2012-01-01

    Survival analysis concerns sequential occurrences of events governed by probabilistic laws. Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading on to some of the most advanced topics in survival analysis. Assumes only a minimal knowledge of SAS whilst enablin

  17. Behavior Modeling -- Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes revised selected papers from the six International Workshops on Behavior Modelling - Foundations and Applications, BM-FA, which took place annually between 2009 and 2014. The 9 papers presented in this volume were carefully reviewed and selected from a total of 58 papers...

  18. Applications of Continuum Shell Model

    OpenAIRE

    Volya, Alexander

    2006-01-01

    The nuclear many-body problem at the limits of stability is considered in the framework of the Continuum Shell Model that allows a unified description of intrinsic structure and reactions. Technical details behind the method are highlighted and practical applications combining the reaction and structure pictures are presented.

  19. Quaternion applications for robot modeling

    Czech Academy of Sciences Publication Activity Database

    Ehrenberger, Zdeněk; Březina, Tomáš; Houška, P.

    Brno : VUT, 2002 - (Houfek, L.; Hlavoň, P.; Krejčí, P.), s. 1-8 ISBN 80-214-2109-6. [National conference with international participation Engineering Mechanics 2002. Svratka (CZ), 13.05.2002-16.05.2002] Institutional research plan: CEZ:AV0Z2076919 Keywords : robot * quaternion * modelling Subject RIV: JD - Computer Applications, Robotics

  20. Linear Regression and Anova Modelling Tool When Turning of EN 24 / EN 31 Alloy Steel

    OpenAIRE

    Deepak P; B.R. Narendra Babu

    2014-01-01

    In any machining process, apart from obtaining accurate dimensions, achieving a good surface quality and maximized metal removal are also of utmost importance. A machining process involves many process parameters which directly or indirectly influence the surface roughness and metal removal rate of the product. Surface roughness and metal removal in the turning process vary with parameters such as feed, speed and depth of cut, which are important ones. Extensiv...

  1. Environmental Applications of Geochemical Modeling

    Science.gov (United States)

    Zhu, Chen; Anderson, Greg

    2002-05-01

    This book discusses the application of geochemical models to environmental practice and studies, through the use of numerous case studies of real-world environmental problems, such as acid mine drainage, pit lake chemistry, nuclear waste disposal, and landfill leachates. In each example the authors clearly define the environmental threat in question; explain how geochemical modeling may help solve the problem posed; and advise the reader how to prepare input files for geochemical modeling codes and interpret the results in terms of meeting regulatory requirements.

  2. Analysis of Aluminium Nano Composites using Anova in CNC Machining Process

    Directory of Open Access Journals (Sweden)

    Maria Joe Christopher Poonthota Irudaya Raj

    2013-08-01

    Full Text Available The objective of this work is to reinforce aluminium alloy with CNT by the stir casting method. Different weight percentages of CNT were added to aluminium separately to make composites, and their physical and thermal properties were investigated using tests such as tensile, hardness, microstructure and XRD. The improvement of the mechanical, physical and thermal properties in both cases has been compared with pure aluminium. The Taguchi orthogonal array experimental technique is used to optimize the machining parameters. The predicted surface roughness was estimated using the S/N ratio and compared with actual values. ANOVA analysis is used to find the significant factors affecting the machining process in order to improve the surface characteristics of the Al material.
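In a Taguchi analysis like the one described, surface roughness is a smaller-the-better characteristic, so the signal-to-noise ratio for each orthogonal-array run is −10·log10 of the mean squared response. A sketch with hypothetical roughness replicates (not values from the paper):

```python
import math

def sn_smaller_the_better(ys):
    """Taguchi smaller-the-better S/N ratio, as used for surface roughness:
    S/N = -10 * log10(mean(y_i^2)). Larger S/N means lower, better roughness."""
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

# Hypothetical roughness replicates (micrometres) for one orthogonal-array run
print(round(sn_smaller_the_better([1.2, 1.4, 1.3]), 3))
```

The factor levels that maximise the mean S/N ratio across runs are then taken as the optimal machining parameters.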

  3. Geophysical Model Applications for Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Pasyanos, M; Walter, W; Tkalcic, H; Franz, G; Gok, R; Rodgers, A

    2005-07-11

    Geophysical models constitute an important component of calibration for nuclear explosion monitoring. We will focus on four major topics and their applications: (1) surface wave models, (2) receiver function profiles, (3) regional tomography models, and (4) stochastic geophysical models. First, we continue to improve upon our surface wave model by adding more paths. This has allowed us to expand the region to all of Eurasia and into Africa, increase the resolution of our model, and extend results to even shorter periods (7 sec). High-resolution models exist for the Middle East and the YSKP region. The surface wave results can be inverted either alone, or in conjunction with other data, to derive models of the crust and upper mantle structure. One application of the group velocities is to construct phase-matched filters in combination with regional surface-wave magnitude formulas to improve the mb:Ms discriminant and extend it to smaller magnitude events. Next, we are using receiver functions, in joint inversions with the surface waves, to produce profiles directly under seismic stations throughout the region. In the past year, we have been focusing on deployments throughout the Middle East, including the Arabian Peninsula and Turkey. By assembling the results from many stations, we can see how regional seismic phases are affected by complicated upper mantle structure, including lithospheric thickness and anisotropy. The next geophysical model item, regional tomography models, can be used to predict regional travel times such as Pn and Sn. The times derived by the models can be used as a background model for empirical measurements or, where these don't exist, simply used as is. Finally, we have been exploring methodologies such as Markov Chain Monte Carlo (MCMC) to generate data-driven stochastic models. We have applied this technique to the YSKP region using surface wave dispersion data, body wave travel time data, receiver functions, and gravity data. The

  4. Geophysical data integration, stochastic simulation and significance analysis of groundwater responses using ANOVA in the Chicot Aquifer system, Louisiana, USA

    Science.gov (United States)

    Rahman, A.; Tsai, F.T.-C.; White, C.D.; Carlson, D.A.; Willson, C.S.

    2008-01-01

    Data integration is challenging where there are different levels of support between primary and secondary data that need to be correlated in various ways. A geostatistical method is described, which integrates the hydraulic conductivity (K) measurements and electrical resistivity data to better estimate the K distribution in the Upper Chicot Aquifer of southwestern Louisiana, USA. The K measurements were obtained from pumping tests and represent the primary (hard) data. Borehole electrical resistivity data from electrical logs were regarded as the secondary (soft) data, and were used to infer K values through Archie's law and the Kozeny-Carman equation. A pseudo cross-semivariogram was developed to cope with the resistivity data non-collocation. Uncertainties in the auto-semivariograms and pseudo cross-semivariogram were quantified. The groundwater flow model responses by the regionalized and coregionalized models of K were compared using analysis of variance (ANOVA). The results indicate that non-collocated secondary data may improve estimates of K and affect groundwater flow responses of practical interest, including specific capacity and drawdown. © Springer-Verlag 2007.

  5. Value added analysis and its distribution: a study on BOVESPA-listed banks using ANOVA

    Directory of Open Access Journals (Sweden)

    Leonardo José Seixas Pinto

    2013-05-01

    Full Text Available The value added generated by the financial institutions listed on BOVESPA and its distribution in the years 2007 to 2011 are the subject of this research, which shows how banks divided their wealth among employees, government, third parties and shareholders. Using the ANOVA test of means for the companies that took part in this research, it is concluded that: (a) the average value added of foreign banks differs from that of national banks; (b) the equity remuneration policy of foreign banks differs from that of national banks; (c) the policy of distribution of value added to employees of the foreign banks Santander and HSBC differs from that of the other banks; (d) taxes paid to the government have equal means, with the exception of Santander; (e) although curious, Banco Itau and Banco do Brazil are equal in all analyses of the distribution of value added, even though one is private and the other public. This reveals an unequal wealth-distribution policy between foreign banks and the national public and private banks.

  6. Application of regression model on stream water quality parameters

    International Nuclear Information System (INIS)

    Statistical analysis was conducted to evaluate the effect of solid waste leachate from the open solid waste dumping site of Salhad on the stream water quality. Five sites were selected along the stream: two sites prior to the mixing of leachate with the surface water, one of leachate itself, and two sites affected by leachate. Samples were analyzed for pH, water temperature, electrical conductivity (EC), total dissolved solids (TDS), biological oxygen demand (BOD), chemical oxygen demand (COD), dissolved oxygen (DO) and total bacterial load (TBL). In this study, correlation coefficients r among the different water quality parameters of the various sites were calculated using the Pearson model, and the average of each correlation between two parameters was also calculated. This shows that TDS and EC, and pH and BOD, have significantly increasing r values, while temperature and TDS, temperature and EC, DO and TBL, and DO and COD have decreasing r values. Single-factor ANOVA at the 5% level of significance was used, which shows that EC, TDS, TBL and COD differ significantly among the various sites. By the application of these two statistical approaches, TDS and EC show a strongly positive correlation, because the ions from the dissolved solids in water influence the ability of that water to conduct an electrical current. These two parameters vary significantly among the five sites, which is further confirmed using linear regression. (author)
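The Pearson correlation step can be sketched directly; the readings below are hypothetical, chosen only to mimic the strong TDS–EC relationship the study reports:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical TDS (mg/L) and EC (uS/cm) readings at five sampling sites
tds = [210, 250, 300, 410, 520]
ec = [335, 380, 480, 600, 820]
print(round(pearson_r(tds, ec), 3))  # close to +1: dissolved ions drive conductivity
```

An r near +1, as here, is exactly the TDS–EC behaviour the abstract attributes to dissolved ions conducting current.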

  7. Conceptual Model of User Adaptive Enterprise Application

    Directory of Open Access Journals (Sweden)

    Inese Šūpulniece

    2015-07-01

    Full Text Available A user adaptive enterprise application is a software system that adapts its behavior to an individual user on the basis of nontrivial inferences from information about the user. The objective of this paper is to elaborate a conceptual model of user adaptive enterprise applications. In order to conceptualize such applications, their main characteristics are analyzed, a meta-model defining the key concepts relevant to these applications is developed, and the user adaptive enterprise application and its components are defined in terms of the meta-model. Modeling of the user adaptive enterprise application incorporates aspects of enterprise modeling, application modeling, and design of the adaptive characteristics of the application. The end-user and her expectations are identified as two concepts of major importance not sufficiently explored in the existing research. Understanding these roles improves the adaptation result in user adaptive applications.

  8. Investigation of flood pattern using ANOVA statistic and remote sensing in Malaysia

    International Nuclear Information System (INIS)

    A flood is an overflow or inundation from a river or other body of water that causes or threatens damage. In Malaysia there is no formal categorization of floods, but they are broadly classed as monsoonal, flash or tidal floods. This project focuses on floods caused by the monsoon. Over the last few years several extreme floods have occurred, bringing great economic impact, with extreme weather patterns as the main contributing factor. In 2010, several districts in Kedah and neighbouring states were hit by floods caused by extreme weather. During this tragedy the rainfall volume was not uniform across regions, and flooding occurred when the amount of water increased rapidly and began to overflow. This motivated the present project, in which data from August to October 2010 were analyzed. The investigation sought possible correlations among parameters related to the flood. ANOVA statistics were used to quantify the contribution of each parameter, regression and correlation analysis measured the strength of the relationships among the parameters, and a remote sensing image was used to validate the calculations. According to the results, the prediction is successful: the correlation coefficient for the flood event is 0.912, confirmed by a TerraSAR image of 4 November 2010. The rate of change in the weather pattern drives the impact of the flood.

  9. Investigation of flood pattern using ANOVA statistic and remote sensing in Malaysia

    Science.gov (United States)

    Ya'acob, Norsuzila; Syazwani Ismail, Nor; Mustafa, Norfazira; Laily Yusof, Azita

    2014-06-01

    A flood is an overflow or inundation from a river or other body of water that causes or threatens damage. In Malaysia there is no formal categorization of floods, but they are broadly classed as monsoonal, flash or tidal floods. This project focuses on floods caused by the monsoon. Over the last few years several extreme floods have occurred, bringing great economic impact, with extreme weather patterns as the main contributing factor. In 2010, several districts in Kedah and neighbouring states were hit by floods caused by extreme weather. During this tragedy the rainfall volume was not uniform across regions, and flooding occurred when the amount of water increased rapidly and began to overflow. This motivated the present project, in which data from August to October 2010 were analyzed. The investigation sought possible correlations among parameters related to the flood. ANOVA statistics were used to quantify the contribution of each parameter, regression and correlation analysis measured the strength of the relationships among the parameters, and a remote sensing image was used to validate the calculations. According to the results, the prediction is successful: the correlation coefficient for the flood event is 0.912, confirmed by a TerraSAR image of 4 November 2010. The rate of change in the weather pattern drives the impact of the flood.
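The regression-and-correlation step described in this record amounts to an ordinary least-squares line fit with its correlation coefficient. A minimal sketch follows; the rainfall and river-level numbers are invented for illustration, not the study's measurements:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ~ a*x + b, returning (a, b, r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx                       # slope
    b = my - a * mx                     # intercept
    r = sxy / (sxx * syy) ** 0.5        # Pearson correlation coefficient
    return a, b, r

# Hypothetical daily rainfall (mm) vs. river level rise (cm):
rain = [5, 20, 45, 80, 120, 160]
level = [2, 9, 21, 37, 55, 74]
slope, intercept, r = linear_fit(rain, level)
```

An r near 1, such as the 0.912 reported in the record, indicates a strong positive linear relationship between the chosen flood parameters.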

  10. System identification application using Hammerstein model

    Indian Academy of Sciences (India)

    SABAN OZER; HASAN ZORLU; SELCUK METE

    2016-06-01

    In the literature, Hammerstein models usually combine a memoryless polynomial nonlinear model for the nonlinear part with a finite impulse response (FIR) or infinite impulse response (IIR) model for the linear part. In this paper, system identification applications of a Hammerstein model that cascades a nonlinear second-order Volterra model with a linear FIR model are studied. A recursive least squares algorithm is used to identify the parameters of the proposed Hammerstein model. Furthermore, the results are compared with those of different model types to assess the success of the proposed Hammerstein model.
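The key trick in Hammerstein identification is that a polynomial nonlinearity followed by an FIR filter is linear in its expanded parameters, so recursive least squares (RLS) applies directly. Below is a minimal sketch, assuming a second-order polynomial nonlinearity and a two-tap FIR filter (a simpler stand-in for the paper's second-order Volterra block); the system coefficients are invented for the demonstration:

```python
import math

def rls_identify(phis, ys, dim, p0=1e6):
    """Recursive least squares over regressor/output pairs; returns theta."""
    theta = [0.0] * dim
    P = [[p0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for phi, y in zip(phis, ys):
        Pphi = [sum(P[i][j] * phi[j] for j in range(dim)) for i in range(dim)]
        denom = 1.0 + sum(phi[i] * Pphi[i] for i in range(dim))
        k = [v / denom for v in Pphi]                 # gain vector
        err = y - sum(phi[i] * theta[i] for i in range(dim))
        theta = [theta[i] + k[i] * err for i in range(dim)]
        w = [sum(phi[i] * P[i][j] for i in range(dim)) for j in range(dim)]
        P = [[P[i][j] - k[i] * w[j] for j in range(dim)] for i in range(dim)]
    return theta

# Simulated Hammerstein system: static nonlinearity v = u + 0.5*u**2
# followed by a two-tap FIR filter y_t = 1.0*v_t + 0.3*v_{t-1}.
# Expanding makes y_t linear in [u_t, u_t**2, u_{t-1}, u_{t-1}**2]:
true = [1.0, 0.5, 0.3, 0.15]
u = [math.sin(0.7 * t) + 0.5 * math.sin(2.3 * t) for t in range(300)]
phis, ys = [], []
for t in range(1, 300):
    phi = [u[t], u[t] ** 2, u[t - 1], u[t - 1] ** 2]
    phis.append(phi)
    ys.append(sum(c * x for c, x in zip(true, phi)))
theta = rls_identify(phis, ys, 4)   # theta converges toward `true`
```

With noise-free data and a persistently exciting input, the estimates converge to the true expanded parameters; with measurement noise they converge to the least-squares solution.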

  11. A Unified Architecture Model of Web Applications

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    With the increasing popularity, scale and complexity of web applications, their design and development are becoming more and more difficult. However, the current state of their design and development is characterized by anarchy and ad hoc methodologies. One cause of this chaotic situation is that different researchers and designers understand web applications differently. In this paper, based on an explicit understanding of web applications, we present a unified architecture model of web applications, the four-view model, which addresses the analysis and design issues of web applications from four perspectives, namely the logical view, data view, navigation view and presentation view, each addressing a specific set of concerns. The purpose of the model is to provide a clear picture of web applications in order to alleviate the chaotic situation and to facilitate their analysis, design and implementation.

  12. Application of numerical models and codes

    OpenAIRE

    Vyzikas, Thomas

    2014-01-01

    This report indicates the importance of numerical modelling in the modelling process, gradually builds the essential background theory in the fields of fluid mechanics, wave mechanics and numerical modelling, discusses a list of commonly used software and finally recommends which models are more suitable for different engineering applications in a marine renewable energy project.

  13. Homogeneity tests for variances and mean test under heterogeneity conditions in a single way ANOVA method

    International Nuclear Information System (INIS)

    Considering the maximum permissible levels established for oysters, it is prohibited to collect oysters at the four stations of the El Chijol Channel (Veracruz, Mexico), as well as along the channel itself, because the concentrations of the metals studied exceed these limits. In this case the application of Welch tests was not necessary. For the water hyacinth, the treatment means were unequal in Fe, Cu, Ni and Zn. This case is more illustrative, since the conclusion was reached through the application of Welch tests to treatments with heterogeneous variances. (Author)
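When group variances are heterogeneous, the classical pooled-variance comparison is replaced by Welch's statistic, which uses per-group variances and the Welch-Satterthwaite degrees of freedom. A minimal two-sample sketch (the concentration values are invented, not the study's data):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2a, se2b = va / na, vb / nb
    t = (ma - mb) / math.sqrt(se2a + se2b)
    # Welch-Satterthwaite approximation for the degrees of freedom:
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (na - 1) + se2b ** 2 / (nb - 1))
    return t, df

# Hypothetical metal concentrations (mg/kg) in two treatments
# with clearly unequal variances:
t, df = welch_t([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The statistic is then compared against the t distribution with the (generally non-integer) df; Welch's one-way ANOVA generalizes the same idea to more than two treatments.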

  14. Bubble models, data acquisition and model applicability

    Czech Academy of Sciences Publication Activity Database

    Jebavá, Marcela; Kloužek, Jaroslav; Němec, Lubomír

    Vsetín: Glass Service, Inc., 2005, s. 182-191. ISBN 80-239-4687-0. [International Seminar on Mathematical Modeling and Advanced Numerical Methods in Furnace Design and Operation /8./. Velké Karlovice (CZ), 19.05.2005-20.05.2005] Institutional research plan: CEZ:AV0Z40320502 Keywords: bubble models Subject RIV: CA - Inorganic Chemistry

  15. Assessment of the dye removal capability of submersed aquatic plants in a laboratory-scale wetland system using anova

    Directory of Open Access Journals (Sweden)

    O. Keskinkan

    2007-06-01

    The textile dye Basic Blue 41 (BB41) removal capability of a laboratory-scale wetland system is presented in this study. Twenty glass aquaria were used to establish the wetland. Myriophyllum spicatum and Ceratophyllum demersum were planted in the aquaria and acclimated. After flow conditions were established, the aquaria were fed with synthetic wastewater containing BB41 at a concentration of 11.0 mg/L. Hydraulic retention times (HRTs) ranged between 3 and 18 days, with effective HRTs of 9 and 18 days. The highest dye removal rates were 94.8% and 94.1% for the M. spicatum and C. demersum aquaria, respectively. ANOVA was used to assess the dye removal capability of the wetland system. In all cases ANOVA revealed that the plants in the wetland system and the HRT were important factors, and that the wetland system was able to remove the dye from the influent wastewater.

  16. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  17. Business model concept and application

    OpenAIRE

    Ogonowska, Kinga

    2010-01-01

    In this thesis I would like to clarify the major approaches to business models, define business model innovation, identify the types of business models and innovations that are applied in the companies under research, indicate the strengths and weaknesses of the business models studied and determine their innovative value. The sources of data include secondary data from a literature review, reports and corporate web pages, and primary data from interviews with employees of the Polish companies under ...

  18. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA)

    OpenAIRE

    Rakesh,; Satish Kumar

    2015-01-01

    The aim of the present study is to determine the most significant input parameters, such as welding current, arc voltage and root gap, during Metal Inert Gas (MIG) welding of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of the weld specimens are investigated in this study. The three selected input parameters were varied at three levels. Accordingly, nine experiments were performed based on the L9 orthogonal array of Taguchi’s methodology, which c...

  19. Association models for petroleum applications

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios

    2013-01-01

    thermodynamic models like cubic equations of state have been the dominating tools in the petroleum industry, the focus of this review is on the association models. Association models are defined as the models of SAFT/CPA family (and others) which incorporate hydrogen bonding and other complex interactions. Such...... association models have been, especially over the last 20 years, proved to be very successful in predicting many thermodynamic properties in the oil & gas industry. They have not so far replaced cubic equations of state, but the results obtained by using these models are very impressive in many cases, e.......g., for gas hydrate related systems, CO2/H2S mixtures, water/hydrocarbons and others. This review highlights both the major advantages of these association models and some of their limitations, which we believe should be discussed in the future....

  20. PEM Fuel Cells - Fundamentals, Modeling and Applications

    OpenAIRE

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Part I: Fundamentals Chapter 1: Introduction. Chapter 2: PEM fuel cell thermodynamics, electrochemistry, and performance. Chapter 3: PEM fuel cell components. Chapter 4: PEM fuel cell failure modes. Part II: Modeling and Simulation Chapter 5: PEM fuel cell models based on semi-empirical simulation. Chapter 6: PEM fuel cell models based on computational fluid dynamics. Part III: Applications Chapter 7: PEM fuel cell system design and applications.

  1. PEM Fuel Cells - Fundamentals, Modeling and Applications

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Part I: Fundamentals Chapter 1: Introduction. Chapter 2: PEM fuel cell thermodynamics, electrochemistry, and performance. Chapter 3: PEM fuel cell components. Chapter 4: PEM fuel cell failure modes. Part II: Modeling and Simulation Chapter 5: PEM fuel cell models based on semi-empirical simulation. Chapter 6: PEM fuel cell models based on computational fluid dynamics. Part III: Applications Chapter 7: PEM fuel cell system design and applications.

  2. Photocell modelling for thermophotovoltaic applications

    Energy Technology Data Exchange (ETDEWEB)

    Mayor, J.-C.; Durisch, W.; Grob, B.; Panitz, J.-C. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    The goal of the modelling described here is the extrapolation of the performance characteristics of solar photocells to TPV working conditions. The model accounts for the higher flux of radiation and for the higher temperatures reached in TPV converters. (author) 4 figs., 1 tab., 2 refs.

  3. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  4. Dual Security Testing Model for Web Applications

    Directory of Open Access Journals (Sweden)

    Singh Garima

    2016-02-01

    In recent years, web applications have evolved from small websites into large multi-tiered applications. The quality of a web application depends on the richness of its contents, well-structured navigation and, most importantly, its security. Web application testing is a new field of research that aims to ensure the consistency and quality of web applications. Over the last ten years different approaches and models have been developed for testing web applications, but only a few focused on content testing, a few on navigation testing and very few on security testing. There is a need to test the content, navigation and security of an application in one go. The objective of this paper is to propose a Dual Security Testing Model to test the security of web applications using a UML modeling technique that includes a web socket interface. In this research paper we describe how our security testing model is implemented using an activity diagram and an activity graph, and how test cases are generated from them.

  5. A model for assessment of telemedicine applications

    DEFF Research Database (Denmark)

    Kidholm, Kristian; Ekeland, Anne Granstrøm; Jensen, Lise Kvistgaard;

    2012-01-01

    Telemedicine applications could potentially solve many of the challenges faced by the healthcare sectors in Europe. However, a framework for assessment of these technologies is needed by decision makers to assist them in choosing the most efficient and cost-effective technologies. Therefore in 2009...... the European Commission initiated the development of a framework for assessing telemedicine applications, based on the users' need for information for decision making. This article presents the Model for ASsessment of Telemedicine applications (MAST) developed in this study....

  6. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  7. Registry of EPA Applications, Models, and Databases

    Data.gov (United States)

    U.S. Environmental Protection Agency — READ is EPA's authoritative source for information about Agency information resources, including applications/systems, datasets and models. READ is one component of...

  8. Model Driven Development of Distributed Business Applications

    OpenAIRE

    Goerigk, Wolfgang

    2011-01-01

    The present paper presents a model driven generative approach to the design and implementation of distributed business applications, which consequently and systematically applies many years of MDSD experience to the software engineering of large application development projects in an industrial context.

  9. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics and describes the technologies with an emphasis on how they work and their key benefits.

  10. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

    Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, "Measurement Error: Models, Methods, and Applications" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres

  11. An Application on Multinomial Logistic Regression Model

    OpenAIRE

    Abdalla M El-Habil

    2012-01-01

    This study aims to present an application of the multinomial logistic regression model, one of the important methods for categorical data analysis. This model deals with one nominal or ordinal response variable that has more than two categories. It has been applied to data analysis in many areas, for example health, social, behavioral and educational research. To illustrate the model in a practical way, we used real data on physical violence against children...
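Multinomial logistic regression models the class probabilities with a softmax over linear scores, one score per category. A minimal gradient-descent sketch on an invented three-category toy dataset (not the study's violence data) follows:

```python
import math

def softmax(z):
    """Numerically stable softmax of a score vector."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_softmax(X, y, n_classes, lr=0.5, iters=1000):
    """Batch gradient-descent fit of multinomial logistic regression."""
    dim = len(X[0]) + 1                       # +1 for the bias term
    W = [[0.0] * dim for _ in range(n_classes)]
    Xb = [x + [1.0] for x in X]
    for _ in range(iters):
        grad = [[0.0] * dim for _ in range(n_classes)]
        for x, label in zip(Xb, y):
            p = softmax([sum(w * v for w, v in zip(W[c], x))
                         for c in range(n_classes)])
            for c in range(n_classes):
                err = p[c] - (1.0 if c == label else 0.0)
                for j in range(dim):
                    grad[c][j] += err * x[j] / len(Xb)
        for c in range(n_classes):
            for j in range(dim):
                W[c][j] -= lr * grad[c][j]
    return W

def predict(W, x):
    """Predicted category = argmax of the linear scores."""
    scores = [sum(w * v for w, v in zip(row, x + [1.0])) for row in W]
    return scores.index(max(scores))

# Toy three-category data with two predictors, purely illustrative:
X = [[0.0, 0.0], [0.2, 0.1], [3.0, 0.1], [3.2, 0.0], [0.1, 3.0], [0.0, 3.1]]
y = [0, 0, 1, 1, 2, 2]
W = train_softmax(X, y, 3)
```

With K categories this fits K score vectors at once; the binary logistic model is the special case K = 2.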

  12. Distance Education Instructional Model Applications.

    Science.gov (United States)

    Jackman, Diane H.; Swan, Michael K.

    1995-01-01

    A survey of graduate students involved in distance education on North Dakota State University's Interactive Video Network included 80 on campus and 13 off. The instructional models rated most effective were role playing, simulation, jurisprudential (Socratic method), memorization, synectics, and inquiry. Direct instruction was rated least…

  13. THE IDENTIFICATION OF THE CONNECTION BETWEEN NET INVESTMENTS IN HOTELS AND RESTAURANTS AND TOURIST ACCOMMODATION CAPACITY BY USING THE ANOVA METHOD

    Directory of Open Access Journals (Sweden)

    Elena STAN

    2009-12-01

    In order to answer customers’ exacting demands, the development of Romanian tourism has to take into account especially the “accommodation” component. The size of the technical and material base of accommodation can be expressed through the number of units, the number of rooms and the number of places; the most used is the “places number” indicator. Nowadays there are special concerns regarding Romanian tourism investments, driven by particular determinations. The aim of the study is to identify whether a connection exists between net investments in hotels and restaurants and tourism accommodation capacity, registered over the 2002-2007 period in Romania, by using the ANOVA method of dispersion analysis.

  14. Monotonicity Properties of Dirichlet Integrals with Applications to the Multinomial Distribution and the Anova Test; A Draft.

    Science.gov (United States)

    Olkin, Ingram

    Bounds for the tails of Dirichlet integrals are established by showing that each integral as a function of the limits is a Schur function. In particular, it is shown how these bounds apply to the simultaneous analysis of variance test and to the multinomial distribution. (Author)

  15. Formal models, languages and applications

    CERN Document Server

    Rangarajan, K; Mukund, M

    2006-01-01

    A collection of articles by leading experts in theoretical computer science, this volume commemorates the 75th birthday of Professor Rani Siromoney, one of the pioneers in the field in India. The articles span the vast range of areas that Professor Siromoney has worked in or influenced, including grammar systems, picture languages and new models of computation. Sample Chapter(s). Chapter 1: Finite Array Automata and Regular Array Grammars (150 KB). Contents: Finite Array Automata and Regular Array Grammars (A Atanasiu et al.); Hexagonal Contextual Array P Systems (K S Dersanambika et al.); Con

  16. Thermoviscoplastic model with application to copper

    Science.gov (United States)

    Freed, Alan D.

    1988-01-01

    A viscoplastic model is developed which is applicable to anisothermal, cyclic, and multiaxial loading conditions. Three internal state variables are used in the model; one to account for kinematic effects, and the other two to account for isotropic effects. One of the isotropic variables is a measure of yield strength, while the other is a measure of limit strength. Each internal state variable evolves through a process of competition between strain hardening and recovery. There is no explicit coupling between dynamic and thermal recovery in any evolutionary equation, which is a useful simplification in the development of the model. The thermodynamic condition of intrinsic dissipation constrains the thermal recovery function of the model. Application of the model is made to copper, and cyclic experiments under isothermal, thermomechanical, and nonproportional loading conditions are considered. Correlations and predictions of the model are representative of observed material behavior.

  17. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of drug development in the pharmaceutical industry, and continued research in statistical methodology within...... or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug form a broader area, since drugs may...... affect the individual in almost any thinkable way. This project focuses on measuring the effects on sleep in both humans and animals. The sleep process is usually analyzed by categorizing small time segments into a number of sleep states, and this can be modelled using a Markov process. For this purpose new

  18. Vacation queueing models theory and applications

    CERN Document Server

    Tian, Naishuo

    2006-01-01

    A classical queueing model consists of three parts - arrival process, service process, and queue discipline. However, a vacation queueing model has an additional part - the vacation process which is governed by a vacation policy - that can be characterized by three aspects: 1) vacation start-up rule; 2) vacation termination rule, and 3) vacation duration distribution. Hence, vacation queueing models are an extension of classical queueing theory. Vacation Queueing Models: Theory and Applications discusses systematically and in detail the many variations of vacation policy. By allowing servers to take vacations makes the queueing models more realistic and flexible in studying real-world waiting line systems. Integrated in the book's discussion are a variety of typical vacation model applications that include call centers with multi-task employees, customized manufacturing, telecommunication networks, maintenance activities, etc. Finally, contents are presented in a "theorem and proof" format and it is invaluabl...

  19. A Novel Feature Selection Based on One-Way ANOVA F-Test for E-Mail Spam Classification

    Directory of Open Access Journals (Sweden)

    Nadir Omer Fadl Elssied

    2014-01-01

    Spam is commonly defined as unwanted e-mail, and it has become a global threat to e-mail users. Although the Support Vector Machine (SVM) has been widely used for e-mail spam classification, the problem of the high dimensionality of the feature space, due to the massive number of e-mails and features in a dataset, still exists. To address this limitation of SVM by reducing the computational complexity (efficiency) and enhancing the classification accuracy (effectiveness), in this study a feature selection scheme based on the one-way ANOVA F-test statistic was applied to determine the features that contribute most to e-mail spam classification. This feature selection is used to reduce the dimensionality of the feature space before the classification process. The experiment with the proposed scheme was carried out on the well-known Spambase benchmark dataset to evaluate the feasibility of the proposed method, and the comparison covers different datasets, categorization algorithms and success measures. Experimental results on the Spambase English dataset showed that the enhanced SVM (FSSVM) significantly outperforms SVM and many other recent spam classification methods in terms of computational complexity and dimension reduction.
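ANOVA F-test feature selection scores each feature by how strongly its values separate the class groups (spam vs. ham), then keeps only the highest-scoring features before training the classifier. A minimal sketch on invented toy data:

```python
def f_score(values, labels):
    """One-way ANOVA F statistic of a single feature split by class label."""
    groups = {}
    for v, c in zip(values, labels):
        groups.setdefault(c, []).append(v)
    gs = list(groups.values())
    allv = [v for g in gs for v in g]
    grand = sum(allv) / len(allv)
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in gs)
    ssw = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in gs)
    dfb, dfw = len(gs) - 1, len(allv) - len(gs)
    return (ssb / dfb) / (ssw / dfw) if ssw > 0 else float("inf")

def select_top_k(X, y, k):
    """Rank features by F score and return the indices of the top k."""
    n_feat = len(X[0])
    scores = [f_score([row[j] for row in X], y) for j in range(n_feat)]
    return sorted(range(n_feat), key=lambda j: scores[j], reverse=True)[:k]

# Toy data: feature 0 separates ham (0) from spam (1); feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.0, 3.0], [5.1, 4.0], [5.3, 2.0], [4.9, 0.5]]
y = [0, 0, 0, 1, 1, 1]
kept = select_top_k(X, y, 1)    # → [0]
```

In a real spam pipeline the retained columns would then be fed to the SVM, which is where the efficiency gain of the record's FSSVM scheme comes from.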

  20. Degenerate RFID Channel Modeling for Positioning Applications

    Directory of Open Access Journals (Sweden)

    A. Povalac

    2012-12-01

    This paper introduces the theory of channel modeling for positioning applications in UHF RFID. It explains the basic parameters for channel characterization from both the narrowband and the wideband point of view, with more detail on ranging and direction finding. Finally, several positioning scenarios are analyzed with the developed channel models. All the described models use a degenerate channel, i.e. combined signal propagation from the transmitter to the tag and from the tag to the receiver.
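The defining feature of the degenerate channel is that the signal traverses the reader-tag path twice, so under a free-space assumption the path-loss exponent doubles from d² to d⁴. A minimal monostatic link sketch (frequency and distances are illustrative assumptions, not values from the paper):

```python
import math

def backscatter_loss_db(d, freq_hz):
    """Two-way free-space path loss (dB) of a monostatic backscatter link.

    In the degenerate channel the signal propagates reader -> tag -> reader,
    so the one-way free-space loss is incurred twice: loss ~ d**4.
    """
    lam = 3e8 / freq_hz                           # wavelength in metres
    one_way = 20 * math.log10(4 * math.pi * d / lam)
    return 2 * one_way

# UHF RFID at 866 MHz (European band), hypothetical distances:
l1 = backscatter_loss_db(1.0, 866e6)
l2 = backscatter_loss_db(2.0, 866e6)   # doubling d adds ~12 dB
```

The 12 dB-per-distance-doubling slope (rather than 6 dB for a one-way link) is one reason positioning from backscatter power is so range-limited.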

  1. Application of Substitutional Model in Oxide Systems

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The application of the substitutional model in oxide systems is discussed in comparison with that of the sublattice model. The results show that, in the case of crystalline phases and liquid phases without molecular-like associates or a shortage of an element in a sublattice, the two models are consistent in the description of the formalism of the Gibbs free energies of the phases and give the same phase diagram calculation result when the valences of the cations remain the same.

  2. Financial Applications of Copula-Models

    OpenAIRE

    Penikas, H.

    2010-01-01

    The paper aims at introducing the concepts of copula models and their application to solving such financial problems as risk measurement, risk hedging, portfolio optimization, derivatives pricing and the evaluation of duration models. For this purpose the copula definition is first introduced. Then different copula families, model estimation and inference techniques are discussed. A detailed review of the relevant literature is provided. Finally, the unresolved issues are presented that might well become the s...

  3. Application of the Pareto Principle in Rapid Application Development Model

    Directory of Open Access Journals (Sweden)

    Vishal Pandey

    2013-06-01

    The Pareto principle, most popularly termed the 80/20 rule, is one of the well-known theories in the field of economics. This rule of thumb was named after the great economist Vilfredo Pareto, and its use as a management principle was popularized by the renowned management consultant Joseph M. Juran. The rule states that 80% of the required work can be completed in 20% of the time allotted. The idea is to apply this rule of thumb in the Rapid Application Development (RAD) process model of software engineering. The RAD model integrates the end-user in development through iterative prototyping, emphasizing the delivery of a series of fully functional prototypes to designated user experts. When applying the Pareto principle, related concepts such as the Pareto indifference curve and Pareto efficiency also come into the picture. This enables the development team to invest the major share of its time in the major functionalities of the project, following the customer's requirement prioritization. The paper includes an extensive study of different unsatisfactory projects in terms of time and financial resources, analyzing the reasons for their failures. Based on the possible reasons for failure, a customized RAD model is proposed that integrates the 80/20 rule and advanced software development strategies to develop and deploy an excellent-quality software product in a minimum time duration. The proposed methodology is such that its application will directly affect the quality of the end product for the better.
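The requirement-prioritization step this record describes can be sketched as a simple Pareto analysis: rank features by estimated value and keep the smallest prefix covering 80% of the total. The feature names and value estimates below are invented for illustration:

```python
def pareto_core(items, threshold=0.8):
    """Smallest set of items, by descending value, covering `threshold`
    of the total value (the 'vital few' of the 80/20 rule)."""
    total = sum(items.values())
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    core, cum = [], 0.0
    for name, value in ranked:
        core.append(name)
        cum += value
        if cum >= threshold * total:
            break
    return core

# Hypothetical value estimates for features of an RAD prototype:
features = {"login": 50, "search": 30, "reports": 10, "export": 5,
            "themes": 3, "tooltips": 2}
core = pareto_core(features)   # → ["login", "search"]
```

In an RAD iteration the `core` features would be built into the first fully functional prototype, with the remaining "trivial many" deferred to later iterations.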

  4. FGDPRISM, EPRI's FGD process model - recent applications

    International Nuclear Information System (INIS)

    Version 1.0 of EPRI's FGD computer simulation model, FGDPRISM (Flue Gas Desulfurization PRocess Integration and Simulation Model), was released in April 1991, and an update, Version 1.1, was released in October 1991. This paper briefly describes the FGDPRISM computer model and its current and potential uses. The emphasis of the paper, however, is on two recent applications. The first is the calibration of the model using test data from LG&E's Mill Creek Unit 3 FGD system, and the subsequent use for redesign of their Unit 4 FGD system absorber. The second application is an analysis of laboratory- and pilot-scale data to examine the model's accuracy in predicting the effects of chlorides on SO2 removal. Finally, the future direction of the FGDPRISM development effort is discussed.

  5. Review of models applicable to accident aerosols

    International Nuclear Information System (INIS)

    Estimations of potential airborne-particle releases are essential in safety assessments of nuclear-fuel facilities. This report is a review of aerosol behavior models that have potential applications for predicting aerosol characteristics in compartments containing accident-generated aerosol sources. Such characterization of the accident-generated aerosols is a necessary step toward estimating their eventual release in any accident scenario. Existing aerosol models can predict the size distribution, concentration, and composition of aerosols as they are acted on by ventilation, diffusion, gravity, coagulation, and other phenomena. Models developed in the fields of fluid mechanics, indoor air pollution, and nuclear-reactor accidents are reviewed with this nuclear fuel facility application in mind. The various capabilities of modeling aerosol behavior are tabulated and discussed, and recommendations are made for applying the models to problems of differing complexity

  6. Parallel Computing Applications and Financial Modelling

    OpenAIRE

    Liddell, Heather M.; Parkinson, D.; Hodgson, G S; Dzwig, P.

    2004-01-01

    At Queen Mary, University of London, we have over twenty years of experience in Parallel Computing Applications, mostly on "massively parallel systems", such as the Distributed Array Processors (DAPs). The applications in which we were involved included design of numerical subroutine libraries, Finite Element software, graphics tools, the physics of organic materials, medical imaging, computer vision and more recently, Financial modelling. Two of the projects related to the latter are describ...

  7. Benchmark of tyre models for mechatronic application

    OpenAIRE

    Carulla Castellví, Marina

    2010-01-01

    In this paper a comparison matrix is developed in order to examine three tyre models through nine criteria. These criteria are obtained after the requirements study of the main vehicle-dynamics mechatronic applications, such as ABS, ESP, TCS and EPAS. The present study proposes a weight for each criterion related to its importance to the mentioned applications. These weights are obtained by taking into account both practical and theoretical judgement. The former was collected through experts'...

  8. Models of organometallic complexes for optoelectronic applications

    CERN Document Server

    Jacko, A C; Powell, B J

    2010-01-01

    Organometallic complexes have potential applications as the optically active components of organic light emitting diodes (OLEDs) and organic photovoltaics (OPV). Development of more effective complexes may be aided by understanding their excited state properties. Here we discuss two key theoretical approaches to investigate these complexes: first principles atomistic models and effective Hamiltonian models. We review applications of these methods, such as determining the nature of the emitting state, predicting the fraction of injected charges that form triplet excitations, and explaining the sensitivity of device performance to small changes in the molecular structure of the organometallic complexes.

  9. Application Note: Power Grid Modeling With Xyce.

    Energy Technology Data Exchange (ETDEWEB)

    Sholander, Peter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce™ Parallel Electronic Simulator developed at Sandia National Labs. This application note provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.

  10. Multimodal nuclear fission model and its application

    International Nuclear Information System (INIS)

    The following nuclear fission models are explained: the random-neck rupture model; nuclear fission channel theory; the breakpoint model, especially that of Wilkins et al.; and the multimodal random-neck rupture model. In addition, the prompt-neutron spectrum analysis of the multimodal model and its application to the energy-dependent analysis of delayed neutron yields are described. In the random-neck rupture model proposed by S. L. Whetstone, a nucleus takes a form like an 'elongated gourd' just before rupture, and the mass distribution is determined by the part of the neck where cleavage occurs. According to nuclear fission channel theory, the division of mass and charge in nuclear fission is determined by the transition state through which the saddle point of the fission barrier is passed. In contrast, the model in which the deformation of the nucleus proceeds further and the division is determined at the breakpoint just before separation into two fragments is called the breakpoint model. The multimodal nuclear fission model considers that a nucleus has several deformation channels, each leading to a different rupture state. The model that combines the random-neck rupture model with the multimodal fission model is the multimodal random-neck rupture model. (J.P.N.)

  11. Online Scene Modeling for Interactive AR Applications

    OpenAIRE

    Yoo, Jaesang; Cho, Kyusung; Jung, Jinki; Yang, Hyun S.

    2010-01-01

    Augmented reality applications require a 3D model of the environment to provide an even more realistic experience. Unfortunately, however, most research on 3D modeling has been restricted to an offline process up to now, which conflicts with characteristics of AR such as real-time and online experience. In addition, it is hardly possible either to generate a 3D model of the whole world in advance or to transfer the burden of 3D model generation to the user, which limits the usability of AR. Thus, it ...

  12. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA)

    Directory of Open Access Journals (Sweden)

    Rakesh

    2015-11-01

    The aim of the present study is to determine the most significant input parameters, namely welding current, arc voltage and root gap, during Metal Inert Gas (MIG) welding of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of the weld specimens are investigated in this study. The three selected input parameters were varied at three levels. Accordingly, nine experiments were performed based on the L9 orthogonal array of Taguchi's methodology for the three input parameters. Root gap has the greatest effect on tensile strength, followed by welding current and arc voltage. Arc voltage has the greatest effect on hardness, followed by root gap and welding current. The weld metal consists of fine grains of ferrite and pearlite.
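
    The ANOVA ranking reported above can be sketched numerically. The following is an illustrative calculation on a hypothetical L9 data set (the response values are invented, not the study's measurements): each factor's between-level sum of squares is computed from its level means, and the largest contribution identifies the most significant parameter.

```python
# Illustrative ANOVA on a Taguchi L9 design. The response values below are
# hypothetical, chosen so that "root_gap" dominates, mirroring the ranking
# reported in the study; they are not the paper's measurements.

L9 = [  # first three columns of the standard L9(3^4) orthogonal array
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]
# Hypothetical tensile-strength responses for the nine runs (MPa).
y = [428, 448, 468, 450, 470, 430, 472, 432, 452]

grand = sum(y) / len(y)

def sum_of_squares(factor_col):
    """Between-level sum of squares for one factor of the L9 array."""
    ss = 0.0
    for level in range(3):
        vals = [yi for row, yi in zip(L9, y) if row[factor_col] == level]
        ss += len(vals) * (sum(vals) / len(vals) - grand) ** 2
    return ss

ss = {name: sum_of_squares(i)
      for i, name in enumerate(["current", "voltage", "root_gap"])}
total = sum(ss.values())
for name, s in sorted(ss.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} SS={s:8.2f}  contribution={100 * s / total:5.1f}%")
```

    With these made-up responses, root gap accounts for nearly all of the variation, which is how a percent-contribution table singles out the significant parameter.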

  13. Nonnegative Matrix Factorization: Model, Algorithms and Applications

    OpenAIRE

    Zhang, Xiang-Sun; Zhang, Zhong-Yuan

    2013-01-01

    Nonnegative Matrix Factorization (NMF) has recently become one of the most popular models in the data mining community. NMF can automatically extract hidden patterns from a series of high-dimensional vectors, and has been applied successfully to dimensionality reduction, unsupervised learning (image processing, clustering and co-clustering, etc.) and prediction. This paper surveys NMF in terms of its research history, model formulation, algorithms and applications. In summary, NMF has good interpret...
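
    One widely used NMF algorithm among those such surveys cover is the Lee-Seung multiplicative update rule. A minimal NumPy sketch, assuming the Frobenius-norm objective:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W @ H||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1   # positive initialization
    H = rng.random((rank, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)   # update keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + 1e-10)   # update keeps W >= 0
    return W, H

# A matrix of exact rank 2 should be reconstructed almost perfectly.
rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 5))
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {rel_err:.2e}")
```

    Because the factors are initialized positive and each update multiplies by a nonnegative ratio, nonnegativity of W and H is preserved automatically.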

  14. Deformation Models Tracking, Animation and Applications

    CERN Document Server

    Torres, Arnau; Gómez, Javier

    2013-01-01

    The computational modelling of deformations has been actively studied for the last thirty years, mainly due to its large range of applications, which include computer animation, medical imaging, shape estimation, deformation of the face and other parts of the human body, and object tracking. These advances have been supported by the evolution of computer processing capabilities, enabling realism in more sophisticated ways. This book encompasses relevant work by expert researchers in the field of deformation models and their applications. The book is divided into two main parts. The first part presents recent object deformation techniques from the point of view of computer graphics and computer animation. The second part presents six works that study deformations from a computer vision point of view, with a common characteristic: the deformations are applied in real-world applications. The primary audience for this work are researchers from different multidisciplinary fields, s...

  15. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  16. Stability and adaptability analysis of rice cultivars using environment-centered yield in two-way ANOVA model

    Directory of Open Access Journals (Sweden)

    D. Sumith De. Z. Abeysiriwardena

    2011-12-01

    Identification of rice varieties with wide adaptability and stability is an important aspect of varietal recommendation for achieving better economic benefits for farmers. Multi-locational trials are conducted in different locations and seasons to identify varieties that perform consistently across wide environments as well as location-specific high performers. The interaction of varieties with the environment is complex and highly variable across locations, which makes identifying varieties for recommendation difficult. Several methods with complex computational requirements have been proposed in the recent past, but statistical software now eases this complexity to a large extent. In this study we employed an established technique, variance component analysis (VCA), to make varietal recommendations both for wide adaptability across varying environments and for specific locations. In this method the variety × environment interaction is partitioned into components for individual varieties using a yield-deviation approach. The average effect of a variety (the environment-centered yield deviation, Dk) and the stability measure of each variety (the variety interaction variance, Sk2) are used to make the recommendations. Yield data for rice cultivars of three-month maturity duration, grown across diverse environments during the 2002/03 wet season in Sri Lanka, were analyzed. Based on the results, the variety At581, which gave the highest Dk value with wide adaptability, was selected for general recommendation. Varieties Bg305 and At303 also had relatively high Dk values, and thus these two can also be selected for general cultivation.
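
    The quantities described above can be illustrated on a toy variety × environment table. The yields below are invented for illustration; following the abstract, Dk is taken here as the mean of a variety's environment-centered yields and Sk2 as the variance of those centered yields.

```python
# Toy variety x environment yield table (hypothetical values, t/ha).
yields = {
    "V1": [4.1, 5.0, 3.2, 4.4],
    "V2": [3.9, 4.6, 3.5, 4.0],
    "V3": [2.8, 5.4, 2.5, 4.9],
}
n_env = 4
# Center each environment on its mean over all varieties.
env_means = [sum(v[j] for v in yields.values()) / len(yields)
             for j in range(n_env)]

stats = {}
for k, row in yields.items():
    centered = [y - m for y, m in zip(row, env_means)]
    Dk = sum(centered) / n_env                                 # average effect
    Sk2 = sum((c - Dk) ** 2 for c in centered) / (n_env - 1)   # stability
    stats[k] = (Dk, Sk2)

# A widely adaptable variety combines a high Dk with a small Sk2.
best = max(stats, key=lambda k: stats[k][0])
print(best, stats[best])
```

    By construction the Dk values sum to zero across varieties, so a positive Dk marks an above-average performer over the tested environments.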

  17. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  18. Mixed models theory and applications with R

    CERN Document Server

    Demidenko, Eugene

    2013-01-01

    Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g

  19. Application of RBAC Model in System Kernel

    Directory of Open Access Journals (Sweden)

    Guan Keqing

    2012-11-01

    With the development of technologies for ubiquitous computing, the application of embedded intelligent devices is booming. Meanwhile, information security faces more serious threats than before. To improve the security of an information terminal's operating system, this paper analyzes threats to the system's information security that come from abnormal operations by processes, and applies the RBAC model to the safety management mechanism of the operating system's kernel. We built an access control model for the system's processes and proposed an implementation framework, and methods for implementing the model in operating systems are illustrated.
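
    The access-control decision at the heart of any RBAC scheme can be sketched generically. This is an illustrative user-space model, not the authors' kernel implementation; the process, role and permission names are made up.

```python
# Minimal RBAC sketch: subjects obtain permissions only through roles.
# Hypothetical roles/permissions for illustration, not the paper's framework.
roles = {
    "daemon":  {"read:/var/log", "write:/var/log"},
    "updater": {"read:/etc", "write:/etc"},
}
assignments = {"syslogd": {"daemon"}, "pkgd": {"updater"}}

def check_access(process, permission):
    """A process may perform an operation iff one of its roles grants it."""
    return any(permission in roles[r] for r in assignments.get(process, ()))

print(check_access("syslogd", "write:/var/log"))  # expected True
print(check_access("syslogd", "write:/etc"))      # expected False
```

    The indirection through roles is what makes the policy manageable: revoking a role from a process removes all of that role's permissions at once.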

  20. Solar-terrestrial models and application software

    Science.gov (United States)

    Bilitza, D.

    1992-01-01

    The empirical models related to the solar-terrestrial sciences that are available in the form of computer programs are listed and described, together with programs that use one or more of these models for application-specific purposes. The entries are grouped according to the region of the solar-terrestrial environment to which they belong and the parameter they describe. Regions considered include the ionosphere, atmosphere, magnetosphere, planets, interplanetary space, and heliosphere. Information is also provided on the accessibility of the magnetic and solar activity indices these models require as input.

  1. GSTARS computer models and their applications, Part II: Applications

    Science.gov (United States)

    Simoes, F.J.M.; Yang, C.T.

    2008-01-01

    In part 1 of this two-paper series, a brief summary of the basic concepts and theories used in developing the Generalized Stream Tube model for Alluvial River Simulation (GSTARS) computer models was presented. Part 2 provides examples that illustrate some of the capabilities of the GSTARS models and how they can be applied to solve a wide range of river and reservoir sedimentation problems. Laboratory and field case studies are used and the examples show representative applications of the earlier and of the more recent versions of GSTARS. Some of the more recent capabilities implemented in GSTARS3, one of the latest versions of the series, are also discussed here with more detail. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  2. Numerical models: realization and applications. Circulatory system models

    OpenAIRE

    Ferrari, Gianfranco; Lazzari, Claudio,; Zielinski, Krzysztof; Fresiello, Libera; Palko, Krzysztof Jakub

    2010-01-01

    This chapter illustrates the basic structure, the organization and some examples of digital computer circulatory model applications. Special attention is given to the realization of graphical user interfaces and to the choice of software platforms. Mechanical circulatory assistance is treated through two examples in which it is represented with two different approaches: modeling the physical device or its functional aspects. Parallel LVAD assistance is simulated modeling the pneumatic vent...

  3. Parallel Computing Applications and Financial Modelling

    Directory of Open Access Journals (Sweden)

    Heather M. Liddell

    2004-01-01

    At Queen Mary, University of London, we have over twenty years of experience in Parallel Computing Applications, mostly on "massively parallel systems", such as the Distributed Array Processors (DAPs). The applications in which we were involved included design of numerical subroutine libraries, Finite Element software, graphics tools, the physics of organic materials, medical imaging, computer vision and more recently, Financial modelling. Two of the projects related to the latter are described in this paper, namely Portfolio Optimisation and Financial Risk Assessment.

  4. Link mining models, algorithms, and applications

    CERN Document Server

    Yu, Philip S; Faloutsos, Christos

    2010-01-01

    This book presents in-depth surveys and systematic discussions on models, algorithms and applications for link mining. Link mining is an important field of data mining. Traditional data mining focuses on 'flat' data in which each data object is represented as a fixed-length attribute vector. However, many real-world data sets are much richer in structure, involving objects of multiple types that are related to each other. Hence, recently link mining has become an emerging field of data mining, which has a high impact in various important applications such as text mining, social network analysi

  5. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  6. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  7. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  8. Systems Evaluation Methods, Models, and Applications

    CERN Document Server

    Liu, Siefeng; Xie, Naiming; Yuan, Chaoqing

    2011-01-01

    A book in the Systems Evaluation, Prediction, and Decision-Making Series, Systems Evaluation: Methods, Models, and Applications covers the evolutionary course of systems evaluation methods, clearly and concisely. Outlining a wide range of methods and models, it begins by examining the method of qualitative assessment. Next, it describes the process and methods for building an index system of evaluation and considers the compared evaluation and the logical framework approach, analytic hierarchy process (AHP), and the data envelopment analysis (DEA) relative efficiency evaluation method. Unique

  9. Application software development via model based design

    OpenAIRE

    Haapala, O. (Olli)

    2015-01-01

    This thesis set out to study the utilization of MathWorks' Simulink® in model-based application software development and its compatibility with the Vacon 100 inverter. The target was to identify the problems related to everyday usage of this method and to create a white paper on how to execute a model-based design to create Vacon 100 compatible system software. Before this thesis was started, there was very little knowledge of the compatibility of this method. However durin...

  10. Applications of model theory to functional analysis

    CERN Document Server

    Iovino, Jose

    2014-01-01

    During the last two decades, methods that originated within mathematical logic, particularly set theory and model theory, have exhibited powerful applications to Banach space theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the

  11. Managing Event Information Modeling, Retrieval, and Applications

    CERN Document Server

    Gupta, Amarnath

    2011-01-01

    With the proliferation of citizen reporting, smart mobile devices, and social media, an increasing number of people are beginning to generate information about events they observe and participate in. A significant fraction of this information contains multimedia data to share the experience with their audience. A systematic information modeling and management framework is necessary to capture this widely heterogeneous, schemaless, potentially humongous information produced by many different people. This book is an attempt to examine the modeling, storage, querying, and applications of such an

  12. Review of Geomechanical Application in Reservoir Modeling

    OpenAIRE

    Mahmood Bataee; Sonny Irawan

    2014-01-01

    This study reviews geomechanical considerations and applications in reservoir modeling. Geomechanical studies are applied in the reservoir to establish features such as field subsidence/inflation and stability. The reservoir stress state alters with changes in pressure and temperature, whether from production or from EOR injection/thermal methods. Field subsidence/inflation can damage surface facilities. The change in the field's new stress state could lead ...

  13. Application of RBAC Model in System Kernel

    OpenAIRE

    Guan Keqing; Li Hongxin; Kong Xianli

    2012-01-01

    With the development of technologies for ubiquitous computing, the application of embedded intelligent devices is booming. Meanwhile, information security faces more serious threats than before. To improve the security of an information terminal's operating system, this paper analyzes threats to the system's information security that come from abnormal operations by processes, and applies the RBAC model to the safety management mechanism of the operating system's kernel. We bu...

  14. Chapter 5: Summary of model application

    International Nuclear Information System (INIS)

    This chapter provides a brief summary of the model applications described in Volume III of the Final Report. It deals with the selected water management regimes; ground water flow regimes; agriculture; ground water quality; hydrodynamics, sediment transport and water quality in the Danube; hydrodynamics, sediment transport and water quality in the river branch system; hydrodynamics, sediment transport and water quality in the Hrusov reservoir; and with the ecology of this Danube area

  15. Optimization of tensile strength of friction welded AISI 1040 and AISI 304L steels according to statistics analysis (ANOVA)

    Energy Technology Data Exchange (ETDEWEB)

    Kirik, Ihsan [Batman Univ. (Turkey); Ozdemir, Niyazi; Firat, Emrah Hanifi; Caligulu, Ugur [Firat Univ., Elazig (Turkey)

    2013-06-01

    Materials that are difficult to weld by fusion welding processes can be successfully joined by friction welding. The strength of friction welded joints is strongly affected by the process parameters (rotation speed, friction time, friction pressure, forging time, and forging pressure). In this study, the effects of rotation speed, friction time, and friction pressure on the tensile strength of friction welded AISI 1040 and AISI 304L alloys were investigated statistically. The tensile test results were analyzed by analysis of variance (ANOVA) at a confidence level of 95 % to find out whether statistically significant differences occur. As a result, a maximum tensile strength very close to that of the AISI 1040 parent metal (637 MPa) could be obtained for joints fabricated under the welding conditions of a rotation speed of 1700 rpm, friction pressure of 50 MPa, forging pressure of 100 MPa, friction time of 4 s, and forging time of 2 s. Rotation speed, friction time, and friction pressure were statistically significant with respect to the tensile strength values in the friction welding of AISI 1040 and AISI 304L alloys. (orig.)

  16. Process Parameter Optimization of WEDM for AISI M2 & AISI H13 by Anova & Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Rajkamal Singh Banga

    2014-10-01

    WEDM is a widely recognized unconventional material cutting process used to manufacture components with complex shapes and profiles from hard materials. In this thermal erosion process there is no physical contact between the wire tool and the work material. AISI M2 and AISI H13 materials are taken for study with a molybdenum wire electrode (0.18 mm diameter); the experiment is conducted according to Taguchi's L16 orthogonal array, with peak current, pulse on time and pulse off time as input parameters, and their effects on MRR, surface roughness, kerf width and spark gap are analysed for significance using ANOVA. Process parameter optimization is done by the Analytic Hierarchy Process with the criteria of maximum MRR and minimum kerf width and surface roughness. It is observed that for material AISI M2, at a low value of peak current (1 A), pulse off (20 µs) and pulse on (30 µs), surface roughness (3.30 µm) and kerf width (0.195 mm) are minimized and MRR (0.022 g/min) is maximized from the selected levels, whereas for material AISI H13, with peak current (1 A), pulse on (40 µs) and high pulse off (30 µs), we get better surface roughness (3.71 µm), kerf width (0.196 mm) and maximum MRR (0.020 g/min) from the selected levels.
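
    The Analytic Hierarchy Process weighting step used in such multi-criteria optimization can be sketched as follows. The pairwise-comparison matrix below is hypothetical, not the paper's judgements, and the row geometric-mean method shown is a common approximation to the principal-eigenvector weights.

```python
# AHP priority weights for three criteria from a hypothetical
# pairwise-comparison matrix, via the row geometric-mean approximation.
import math

criteria = ["MRR", "roughness", "kerf"]
# A[i][j] = how strongly criterion i is preferred over j (Saaty's 1-9 scale).
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
gm = [math.prod(row) ** (1 / len(row)) for row in A]   # row geometric means
weights = [g / sum(gm) for g in gm]                    # normalize to sum 1
for c, w in zip(criteria, weights):
    print(f"{c:10s} {w:.3f}")
```

    Alternatives are then scored per criterion and ranked by their weight-averaged scores; a consistency ratio is usually checked before trusting the weights.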

  17. The natural emissions model (NEMO): Description, application and model evaluation

    Science.gov (United States)

    Liora, Natalia; Markakis, Konstantinos; Poupkou, Anastasia; Giannaros, Theodore M.; Melas, Dimitrios

    2015-12-01

    The aim of this study is the application and evaluation of a new computer model for quantifying emissions from natural sources. The Natural Emissions Model (NEMO) is driven by meteorological data from the mesoscale Weather Research and Forecasting (WRF) model and estimates particulate matter (PM) emissions from windblown dust, sea salt aerosols (SSA) and primary biological aerosol particles (PBAPs). It also includes emissions of biogenic volatile organic compounds (BVOCs) from vegetation; however, this study focuses only on particle emissions. An application and evaluation of NEMO at the European scale are presented. NEMO and a modelling system consisting of the WRF model and the Comprehensive Air Quality Model with extensions (CAMx) were applied on a 30 km European domain for the year 2009. The computed domain-wide annual PM10 emissions from windblown dust, sea salt and PBAPs were 0.57 Tg, 20 Tg and 0.12 Tg, respectively. PM2.5 represented 6% and 33% of the emitted windblown dust and sea salt, respectively. Natural emissions are characterized by strong geographical and seasonal variations: windblown dust emissions were highest during summer in southern Europe, and SSA production was highest in the Atlantic Ocean during the cold season, while in the Mediterranean Sea the highest SSA emissions were found over the Aegean Sea during summer. Modelled concentrations were compared with surface station measurements and showed that the model captures fairly well the contribution of natural sources to PM levels over Europe. Dust concentrations correlated better when dust transport events from the Sahara desert were absent, while the simulation of sea salt episodes improved model performance during the cold season.

  18. Intelligent Model for Traffic Safety Applications

    Directory of Open Access Journals (Sweden)

    C. Chellappan

    2012-01-01

    Problem statement: This study presents an analysis of a road traffic system, focused on the use of communications to detect dangerous vehicles on roads and highways and on how this could be used to enhance driver safety. Approach: The intelligent traffic safety application model is based on the traffic flow theories developed in recent years, leading to reliable representations of road traffic, which is of major importance in mitigating traffic problems. The model also includes the driver's decision-making process in accelerating, decelerating and changing lanes. Results: The individuality of each of these processes arises from model parameters that are randomly generated from statistical distributions supplied as input parameters. Conclusion: This allows the integration of the individuality of the population's elements, yielding knowledge of various driving modes in a wide variety of situations.

  19. Model-based vision for space applications

    Science.gov (United States)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.
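A simplified sketch of the predict-then-correlate loop described above; sum-of-squared-differences (SSD) template matching stands in for the paper's correlation process, and the search-window size is an assumption:

```python
import numpy as np

def track_feature(prev_pos, velocity, image, template):
    """Predict the feature's 2-D position from its velocity, then refine
    by minimising SSD between the template and patches in a small
    search window around the prediction."""
    pred = prev_pos + velocity                    # predicted position
    h, w = template.shape
    best, best_ssd = pred, np.inf
    for dy in range(-2, 3):                       # assumed +/-2 px window
        for dx in range(-2, 3):
            y, x = int(pred[0]) + dy, int(pred[1]) + dx
            patch = image[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue                          # window fell off the image
            ssd = np.sum((patch - template) ** 2)
            if ssd < best_ssd:
                best, best_ssd = np.array([y, x]), ssd
    return best
```

The small window is what makes the high temporal sampling rate matter: the feature must move only a few pixels per frame for the correlation search to stay cheap and unambiguous.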

  20. Determining Application Runtimes Using Queueing Network Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, M

    2007-03-15

    Determination of application times-to-solution for large-scale clustered computers continues to be a difficult problem in high-end computing, which will only become more challenging as multi-core consumer machines become more prevalent in the market. Both researchers and consumers of these multi-core systems desire reasonable estimates of how long their programs will take to run (time-to-solution, or TTS), and how many resources will be consumed in the execution. Currently there are few methods of determining these values, and those that do exist are either overly simplistic in their assumptions or require great amounts of effort to parameterize and understand. One previously untried method is queuing network modeling (QNM), which is easy to parameterize and solve, and produces results that typically fall within 10 to 30% of the actual TTS for our test cases. Using characteristics of the computer network (bandwidth, latency) and communication patterns (number of messages, message length, time spent in communication), the QNM model of the NAS-PB CG application was applied to MCR and ALC, supercomputers at LLNL, and the Keck Cluster at USF, with average errors of 2.41%, 3.61%, and -10.73%, respectively, compared to the actual TTS observed. While additional work is necessary to improve the predictive capabilities of QNM, current results show that QNM has a great deal of promise for determining application TTS for multi-processor computer systems.
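A deliberately simplified sketch of how the listed network characteristics (bandwidth, latency, message counts) enter such an estimate; this is a toy contention formula, not the paper's QNM:

```python
def network_time(n_msgs, msg_len, bandwidth, latency, utilization):
    """Toy queueing-style estimate: per-message transfer time inflated
    by a contention factor 1/(1 - utilization), as in simple open
    queueing networks (all inputs are illustrative)."""
    assert 0.0 <= utilization < 1.0
    per_msg = latency + msg_len / bandwidth      # seconds per message
    return n_msgs * per_msg / (1.0 - utilization)

def time_to_solution(compute_time, comm_time):
    """TTS as compute plus communication (no overlap assumed)."""
    return compute_time + comm_time
```

For example, 1000 messages of 8 KiB over a 1 GB/s link with 5 microsecond latency at 50% utilization cost roughly 26 ms of communication time.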

  1. Alternatives to F-Test in One Way ANOVA in case of heterogeneity of variances (a simulation study)

    Directory of Open Access Journals (Sweden)

    Karl Moder

    2010-12-01

    Full Text Available Several articles deal with the effects of inhomogeneous variances in one-way analysis of variance (ANOVA). A very early investigation of this topic was done by Box (1954). He suggested that in balanced designs with moderate heterogeneity of variances, deviations of the empirical type I error rate (the realized α based on experiments) from the nominal one (the predefined α) under H0 are small. Similar conclusions are drawn by Wellek (2003). For less moderate heterogeneity (e.g. σ1:σ2:... = 3:1:...), Moder (2007) showed that the empirical type I error rate lies far above the nominal one, even in balanced designs. In unbalanced designs the difficulties increase. Several attempts have been made to overcome this problem. One proposal is to use a more stringent α level (e.g. 2.5% instead of 5%) (Keppel & Wickens, 2004). Another recommended remedy is to transform the original scores by square root, log, or other variance-reducing functions (Keppel & Wickens, 2004; Heiberger & Holland, 2004). Some authors suggest rank-based alternatives to the F-test in analysis of variance (Vargha & Delaney, 1998). Only a few articles deal with two- or multi-factorial designs. There is some evidence that in such designs the type I error rate is approximately met if the number of levels of one factor tends to infinity while the number of levels of the other factors is fixed (Akritas & S., 2000; Bathke, 2004). The goal of this article is to find an appropriate location test for a one-way ANOVA situation with inhomogeneous variances, for balanced and unbalanced designs, based on a simulation study.
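A minimal version of such a simulation study, assuming normal data and the classical F-test; the group sizes and variance ratios below are illustrative choices in the spirit of the article, not its exact design:

```python
import numpy as np
from scipy.stats import f_oneway

def empirical_alpha(ns, sigmas, reps=4000, alpha=0.05, seed=0):
    """Estimate the empirical type I error rate of the one-way F-test:
    all group means are equal (H0 is true), so the rejection rate
    should match the nominal alpha if the test holds its level."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        groups = [rng.normal(0.0, s, n) for n, s in zip(ns, sigmas)]
        rejections += f_oneway(*groups).pvalue < alpha
    return rejections / reps

# Balanced design, homogeneous variances: the level is approximately met.
a_hom = empirical_alpha([15, 15, 15], [1, 1, 1])
# Unbalanced design, sigma ratio 3:1:1 with the large variance in the
# smallest group: the F-test becomes markedly liberal.
a_het = empirical_alpha([5, 15, 25], [3, 1, 1])
```

The second call reproduces the qualitative effect discussed above: pairing the largest variance with the smallest group inflates the realized α well beyond the nominal 5%.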

  2. Modelling and application of stochastic processes

    CERN Document Server

    1986-01-01

    The subject of modelling and application of stochastic processes is too vast to be exhausted in a single volume. In this book, attention is focused on a small subset of this vast subject. The primary emphasis is on realization and approximation of stochastic systems. Recently there has been considerable interest in the stochastic realization problem, and hence, an attempt has been made here to collect in one place some of the more recent approaches and algorithms for solving the stochastic realization problem. Various different approaches for realizing linear minimum-phase systems, linear nonminimum-phase systems, and bilinear systems are presented. These approaches range from time-domain methods to spectral-domain methods. An overview of the chapter contents briefly describes these approaches. Also, in most of these chapters special attention is given to the problem of developing numerically efficient algorithms for obtaining reduced-order (approximate) stochastic realizations. On the application side,...

  3. Application of Chaos Theory to Psychological Models

    Science.gov (United States)

    Blackerby, Rae Fortunato

    This dissertation shows that an alternative theoretical approach from physics--chaos theory--offers a viable basis for improved understanding of human beings and their behavior. Chaos theory provides achievable frameworks for potential identification, assessment, and adjustment of human behavior patterns. Most current psychological models fail to address the metaphysical conditions inherent in the human system, thus bringing deep errors to psychological practice and empirical research. Freudian, Jungian and behavioristic perspectives are inadequate psychological models because they assume, either implicitly or explicitly, that the human psychological system is a closed, linear system. On the other hand, Adlerian models that require open systems are likely to be empirically tenable. Logically, models will hold only if the model's assumptions hold. The innovative application of chaotic dynamics to psychological behavior is a promising theoretical development because the application asserts that human systems are open, nonlinear and self-organizing. Chaotic dynamics use nonlinear mathematical relationships among factors that influence human systems. This dissertation explores these mathematical relationships in the context of a sample model of moral behavior using simulated data. Mathematical equations with nonlinear feedback loops describe chaotic systems. Feedback loops govern the equations' value in subsequent calculation iterations. For example, changes in moral behavior are affected by an individual's own self-centeredness, family and community influences, and previous moral behavior choices that feed back to influence future choices. When applying these factors to the chaos equations, the model behaves like other chaotic systems. For example, changes in moral behavior fluctuate in regular patterns, as determined by the values of the individual, family and community factors. 
In some cases, these fluctuations converge to one value; in other cases, they diverge in
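The nonlinear feedback loops described above can be illustrated with the logistic map, a standard minimal chaotic system; this is a generic illustration, not the dissertation's moral-behavior model:

```python
def logistic_map(x0, r, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k): each step's output feeds
    back as the next step's input, the defining feature of the
    feedback loops discussed above."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

stable = logistic_map(0.2, 2.8, 200)    # converges to a fixed point
chaotic = logistic_map(0.2, 3.9, 200)   # keeps fluctuating, never settles
```

Changing only the single parameter r switches the same equation between convergence and chaotic fluctuation, mirroring the abstract's point that parameter values determine whether trajectories converge or diverge.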

  4. An Application on Multinomial Logistic Regression Model

    Directory of Open Access Journals (Sweden)

    Abdalla M El-Habil

    2012-03-01

    Full Text Available This study aims to present an application of the multinomial logistic regression model, one of the important methods for categorical data analysis. The model handles a single nominal or ordinal response variable with more than two categories. It has been applied to data analysis in many areas, for example health, social, behavioral, and educational research. To illustrate the model in a practical way, we used real data on physical violence against children from the Survey of Youth 2003 conducted by the Palestinian Central Bureau of Statistics (PCBS). A segment of the population of children in the age group 10-14 years residing in the Gaza governorate, of size 66,935, was selected, and the response variable consisted of four categories. Eighteen explanatory variables were used to build the primary multinomial logistic regression model. The model was tested with a set of statistical tests to ensure its appropriateness for the data. It was also tested by randomly selecting two observations from the data and predicting, from the values of the explanatory variables, the group into which each observation would be classified. We conclude that with the multinomial logistic regression model we can accurately define the relationship between the group of explanatory variables and the response variable, identify the effect of each variable, and predict the classification of any individual case.
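Since the survey data themselves are not reproduced here, the workflow can be sketched on synthetic stand-in data with the same shape (four response categories, eighteen predictors); all data below are simulated:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: 4 response categories, 18 explanatory variables.
X, y = make_classification(n_samples=1000, n_features=18, n_informative=8,
                           n_classes=4, random_state=0)

# Multinomial logistic regression fitted on a training split.
model = LogisticRegression(max_iter=2000).fit(X[:800], y[:800])

pred = model.predict(X[800:])         # predicted category per case
proba = model.predict_proba(X[800:])  # per-category probabilities
```

As in the study, a fitted model can then classify any held-out case from its explanatory variables, with `predict_proba` giving the probability of each of the four categories.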

  5. Hidden Markov models applications in computer vision

    CERN Document Server

    Bunke, H

    2001-01-01

    Hidden Markov models (HMMs) originally emerged in the domain of speech recognition. In recent years, they have attracted growing interest in the area of computer vision as well. This book is a collection of articles on new developments in the theory of HMMs and their application in computer vision. It addresses topics such as handwriting recognition, shape recognition, face and gesture recognition, tracking, and image database retrieval.This book is also published as a special issue of the International Journal of Pattern Recognition and Artificial Intelligence (February 2001).
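A minimal numeric sketch of the standard HMM machinery the book builds on, the forward algorithm for the likelihood of an observation sequence; the toy probabilities are invented, not taken from any application in the book:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: P(observation sequence) for an HMM with
    initial distribution pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]              # initialise with first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate states, re-weight
    return alpha.sum()

# Toy 2-state model with 2 output symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])     # state transitions
B = np.array([[0.9, 0.1], [0.2, 0.8]])     # emission probabilities
p = forward(pi, A, B, [0, 1, 0])
```

In vision applications the observation symbols are typically quantised image features (e.g. shape or gesture descriptors) rather than speech frames, but the recursion is identical.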

  6. Web Application for Modeling Global Antineutrinos

    CERN Document Server

    Barna, Andrew

    2015-01-01

    Electron antineutrinos stream freely from rapidly decaying fission products within nuclear reactors and from long-lived radioactivity within Earth. Those with energy greater than 1.8 MeV are regularly observed by several kiloton-scale underground detectors. These observations estimate the amount of terrestrial radiogenic heating, monitor the operation of nuclear reactors, and measure the fundamental properties of neutrinos. The analysis of antineutrino observations at operating detectors or the planning of projects with new detectors requires information on the expected signal and background rates. We present a web application for modeling global antineutrino energy spectra and detection rates for any surface location. Antineutrino sources include all registered nuclear reactors as well as the crust and mantle of Earth. Visitors to the website may model the location and power of a hypothetical nuclear reactor, copy energy spectra, and analyze the significance of a selected signal relative to background.

  7. Applications of species distribution modeling to paleobiology

    DEFF Research Database (Denmark)

    Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine Ann;

    2011-01-01

    Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i...... to paleobiology include predictor variables (types and properties; special emphasis is given to paleoclimate), model validation (particularly important given the emphasis on cross-temporal predictions in paleobiological applications), and the integration of SDM and genetics approaches. Over the last few years...... – the equilibrium postulate, niche stability, changing atmospheric CO2 concentrations – as well as ways to address these (ensemble, functional SDM, and non-SDM ecoinformatics approaches). We conclude that the SDM approach offers important opportunities for advances in paleobiology by providing a quantitative...

  8. A conceptual holding model for veterinary applications

    Directory of Open Access Journals (Sweden)

    Nicola Ferrè

    2014-05-01

    Full Text Available Spatial references are required when geographical information systems (GIS) are used for the collection, storage and management of data. In the veterinary domain, the spatial component of a holding (of animals) is usually defined by coordinates, and no other relevant information needs to be interpreted or used for manipulation of the data in the GIS environment provided. Users trying to integrate or reuse spatial data organised in such a way frequently face the problem of data incompatibility and inconsistency. The root of the problem lies in differences with respect to syntax as well as variations in the semantic, spatial and temporal representations of the geographic features. To overcome these problems and to facilitate the inter-operability of different GIS, spatial data must be defined according to a “schema” that includes the definition, acquisition, analysis, access, presentation and transfer of such data between different users and systems. We propose an application “schema” of holdings for GIS applications in the veterinary domain according to the European directive framework (directive 2007/2/EC - INSPIRE). The conceptual model put forward has been developed at two specific levels to produce the essential and the abstract model, respectively. The former establishes the conceptual linkage of the system design to the real world, while the latter describes how the system or software works. The result is an application “schema” that formalises and unifies the information-theoretic foundations of how to spatially represent a holding in order to ensure straightforward information-sharing within the veterinary community.

  9. Validation and application of the SCALP model

    Science.gov (United States)

    Smith, D. A. J.; Martin, C. E.; Saunders, C. J.; Smith, D. A.; Stokes, P. H.

    The Satellite Collision Assessment for the UK Licensing Process (SCALP) model was first introduced in a paper presented at IAC 2003. As a follow-on, this paper details the steps taken to validate the model and describes some of its applications. SCALP was developed for the British National Space Centre (BNSC) to support liability assessments as part of the UK's satellite license application process. Specifically, the model determines the collision risk that a satellite will pose to other orbiting objects during both its operational and post-mission phases. To date SCALP has been used to assess several LEO and GEO satellites for BNSC, and subsequently to provide the necessary technical basis for licenses to be issued. SCALP utilises the current population of operational satellites residing in LEO and GEO (extracted from ESA's DISCOS database) as a starting point. Realistic orbital dynamics, including the approximate simulation of generic GEO station-keeping strategies are used to propagate the objects over time. The method takes into account all of the appropriate orbit perturbations for LEO and GEO altitudes and allows rapid run times for multiple objects over time periods of many years. The orbit of a target satellite is also propagated in a similar fashion. During these orbital evolutions, a collision prediction and close approach algorithm assesses the collision risk posed to the satellite population. To validate SCALP, specific cases were set up to enable the comparison of collision risk results with other established models, such as the ESA MASTER model. Additionally, the propagation of operational GEO satellites within SCALP was compared with the expected behaviour of controlled GEO objects. The sensitivity of the model to changing the initial conditions of the target satellite such as semi-major axis and inclination has also been demonstrated. 
A further study shows the effect of including extra objects from the GTO population (which can pass through the LEO

  10. Spectral and chromatographic fingerprinting with analysis of variance-principal component analysis (ANOVA-PCA): a useful tool for differentiating botanicals and characterizing sources of variance

    Science.gov (United States)

    Objectives: Spectral fingerprints, acquired by direct injection (no separation) mass spectrometry (DI-MS) or liquid chromatography with UV detection (HPLC), in combination with ANOVA-PCA, were used to differentiate 15 powders of botanical materials. Materials and Methods: Powders of 15 botanical mat...
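A generic numeric sketch of the ANOVA-PCA idea: partition the centred data matrix into factor-level mean matrices plus residuals, then run PCA on each factor's mean matrix with the residuals added back. The factor labels and data below are hypothetical, not the study's botanical dataset:

```python
import numpy as np

def anova_pca(X, f1, f2, k=2):
    """Two-factor ANOVA-PCA sketch. f1 and f2 are integer factor labels
    (e.g. botanical material and instrument/batch); returns the first k
    PC scores for each factor's mean-plus-residual matrix."""
    Xc = X - X.mean(axis=0)                        # remove grand mean
    M1 = np.zeros_like(Xc)
    for g in np.unique(f1):
        M1[f1 == g] = Xc[f1 == g].mean(axis=0)     # factor-1 level means
    R1 = Xc - M1
    M2 = np.zeros_like(Xc)
    for g in np.unique(f2):
        M2[f2 == g] = R1[f2 == g].mean(axis=0)     # factor-2 level means
    R = R1 - M2                                    # residual matrix
    scores = {}
    for name, M in (("factor1", M1), ("factor2", M2)):
        _, _, Vt = np.linalg.svd(M + R, full_matrices=False)
        scores[name] = (M + R) @ Vt[:k].T          # first k PC scores
    return scores
```

Plotting each factor's scores shows whether that factor's separation stands out against the residual variance, which is how ANOVA-PCA characterizes sources of variance in spectral fingerprints.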

  11. Application of model systems in nanobiotechnology safety

    International Nuclear Information System (INIS)

    Full text: Over the last 10-15 years, as a result of the rapid development of biotechnology, the appearance of new and known illnesses, and the increased danger of bioterrorism, human civilization has been confronted with new biological hazards, and the necessity of biological measures for preventing possible dangers is now widely acknowledged. Nanobiotechnological research, and proposals for applying the scientific results achieved in this area, prevail over all others; yet in many cases the possible harmful effects of using nanoparticles in practice are either left without attention altogether or are studied only at the subcellular level. Adequate results can be obtained only if such studies are carried out at the organism level. Model systems consisting of cultures of unicellular green algae, on which we have been studying the effects of ionizing radiation, hold particular promise here, because such cultures simultaneously represent the cellular, organism, and population levels of structural organization. Optimal laboratory methods for maintaining and propagating these unicellular green algae have already been developed. The proposed approach is to study, at the cellular-organism level of structural organization, the effects on vital systems of nanoparticles (especially those proposed for application in pharmaceutics) using cultures of the unicellular green alga Chlamydomonas reinhardtii. The genes of many enzymes of this eukaryotic alga have been identified, and its prospective value in the biological synthesis of hydrogen has been demonstrated. Studying the negative effects of nanoparticles on an object whose molecular features have been investigated in such depth will make it possible to establish safety limits for all biosystems.

  12. Genetic model compensation: Theory and applications

    Science.gov (United States)

    Cruickshank, David Raymond

    1998-12-01

    The adaptive filtering algorithm known as Genetic Model Compensation (GMC) was originally presented in the author's Master's Thesis. The current work extends this earlier work. GMC uses a genetic algorithm to optimize filter process noise parameters in parallel with the estimation of the state and based only on the observational information available to the filter. The original stochastic state model underlying GMC was inherited from the antecedent, non-adaptive Dynamic Model Compensation (DMC) algorithm. The current work develops the stochastic state model from a linear system viewpoint, avoiding the simplifications and approximations of the earlier development, and establishes Riemann sums as unbiased estimators of the stochastic integrals which describe the evolution of the random state components. These are significant developments which provide GMC with a solid theoretical foundation. Orbit determination is the area of application in this work, and two types of problems are studied: real-time autonomous filtering using absolute GPS measurements and precise post-processed filtering using differential GPS measurements. The first type is studied in a satellite navigation simulation in which pseudorange and pseudorange rate measurements are processed by an Extended Kalman Filter which incorporates both DMC and GMC. Both estimators are initialized by a geometric point solution algorithm. Using measurements corrupted by simulated Selective Availability errors, GMC reduces mean RSS position error by 6.4 percent, reduces mean clock bias error by 46 percent, and displays a marked improvement in covariance consistency relative to DMC. To study the second type of problem, GMC is integrated with NASA Jet Propulsion Laboratory's Gipsy/Oasis-II (GOA-II) precision orbit determination program creating an adaptive version of GOA-II's Reduced Dynamic Tracking (RDT) process noise formulation. 
When run as a sequential estimator with GPS measurements from the TOPEX satellite and

  13. Python-Based Applications for Hydrogeological Modeling

    Science.gov (United States)

    Khambhammettu, P.

    2013-12-01

    Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules.
The
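The convolution step described above is compact enough to sketch directly; the URF values below are made-up numbers for illustration, not output from ATRANS:

```python
import numpy as np

def receptor_concentration(release_history, urf):
    """Convolve a source release history with a unit response function
    (URF) to estimate the concentration history at a receptor,
    truncated to the length of the release record."""
    full = np.convolve(release_history, urf)
    return full[:len(release_history)]

# A unit-impulse release reproduces the URF itself at the receptor.
impulse = receptor_concentration([1.0, 0.0, 0.0, 0.0], [0.5, 0.3, 0.1])
```

Because convolution is linear, the response to any release history is just the superposition of shifted, scaled copies of the URF, which is what makes testing hypothetical source histories cheap.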

  14. Seismic Physical Modeling Technology and Its Applications

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper introduces the seismic physical modeling technology in the CNPC Key Lab of Geophysical Exploration. It includes the seismic physical model positioning system, the data acquisition system, sources, transducers, model materials, model building techniques, precision measurements of model geometry, the basic principles of the seismic physical modeling and experimental methods, and two physical model examples.

  15. Chemistry Teachers' Knowledge and Application of Models

    Science.gov (United States)

    Wang, Zuhao; Chi, Shaohui; Hu, Kaiyan; Chen, Wenting

    2014-01-01

    Teachers' knowledge and application of models play an important role in students' development of modeling ability and scientific literacy. In this study, we investigated Chinese chemistry teachers' knowledge and application of models. Data were collected through a test questionnaire and analyzed quantitatively and qualitatively. The…

  16. Some applications of neural networks in microwave modeling

    Directory of Open Access Journals (Sweden)

    Milovanović Bratislav D.

    2003-01-01

    Full Text Available This paper presents some applications of neural networks in microwave modeling. The applications are related to modeling of either passive or active structures and devices. Modeling is performed using not only the simple multilayer perceptron (MLP) network but also advanced knowledge-based neural network (KBNN) structures.
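As a generic illustration of MLP-based device modeling (not the authors' models), the sketch below fits an MLP surrogate to a made-up stand-in for a microwave response; the input ranges and the target function are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training set: two inputs (a geometry parameter in mm and
# frequency in GHz) mapped to a smooth stand-in "response" function.
rng = np.random.default_rng(1)
X = rng.uniform([1.0, 8.0], [5.0, 12.0], size=(200, 2))
y = np.sin(X[:, 0]) * np.exp(-0.1 * X[:, 1])   # stand-in device response

# MLP surrogate: once trained, it evaluates far faster than a full
# electromagnetic simulation of the structure.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                     random_state=0)
model.fit(X[:150], y[:150])
score = model.score(X[150:], y[150:])          # R^2 on held-out points
```

KBNN structures extend this idea by embedding known empirical formulas into the network, so the MLP only has to learn the deviation from prior knowledge.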

  17. Unsteady aerodynamics modeling for flight dynamics application

    Science.gov (United States)

    Wang, Qing; He, Kai-Feng; Qian, Wei-Qi; Zhang, Tian-Jiao; Cheng, Yan-Qing; Wu, Kai-Yuan

    2012-02-01

    In view of engineering application, it is practicable to decompose the aerodynamics into three components: the static aerodynamics, the aerodynamic increment due to steady rotations, and the aerodynamic increment due to unsteady separated and vortical flow. The first and the second components can be presented in conventional forms, while the third is described using a first-order differential equation and a radial-basis-function (RBF) network. For an aircraft configuration, the mathematical models of 6-component aerodynamic coefficients are set up from the wind tunnel test data of pitch, yaw, roll, and coupled yaw-roll large-amplitude oscillations. The flight dynamics of an aircraft is studied by the bifurcation analysis technique in the case of quasi-steady aerodynamics and unsteady aerodynamics, respectively. The results show that: (1) unsteady aerodynamics has no effect upon the existence of trim points, but affects their stability; (2) unsteady aerodynamics has great effects upon the existence, stability, and amplitudes of periodic solutions; and (3) unsteady aerodynamics changes the stable regions of trim points obviously. Furthermore, the dynamic responses of the aircraft to elevator deflections are inspected. It is shown that the unsteady aerodynamics is beneficial to dynamic stability for the present aircraft. Finally, the effects of unsteady aerodynamics on the post-stall maneuverability are analyzed by numerical simulation.
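The structure of the third component, a first-order lag driven by an RBF network of angle of attack, can be sketched as follows; the time constant, RBF grid, and weights are illustrative assumptions, not identified wind-tunnel values:

```python
import numpy as np

def rbf(x, centers, width):
    """Gaussian radial basis functions evaluated at a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def unsteady_increment(alpha_hist, dt=0.01, tau=0.2,
                       centers=None, weights=None):
    """Integrate tau*dC/dt + C = RBF(alpha) with explicit Euler: the RBF
    network maps angle of attack to a quasi-steady target and the
    first-order equation supplies the unsteady lag."""
    if centers is None:
        centers = np.linspace(-10.0, 30.0, 9)   # deg, illustrative grid
    if weights is None:
        weights = np.full(len(centers), 0.01)   # illustrative weights
    C = np.zeros(len(alpha_hist))
    for k in range(1, len(alpha_hist)):
        target = rbf(alpha_hist[k], centers, 5.0) @ weights
        C[k] = C[k - 1] + dt / tau * (target - C[k - 1])
    return C
```

Holding the angle of attack constant makes the increment relax toward the RBF output with time constant tau, which is the lag that distinguishes this component from the quasi-steady terms.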

  18. Unsteady aerodynamics modeling for flight dynamics application

    Institute of Scientific and Technical Information of China (English)

    Qing Wang; Kai-Feng He; Wei-Qi Qian; Tian-Jiao Zhang; Yan-Qing Cheng; Kai-Yuan Wu

    2012-01-01

    In view of engineering application, it is practicable to decompose the aerodynamics into three components: the static aerodynamics, the aerodynamic increment due to steady rotations, and the aerodynamic increment due to unsteady separated and vortical flow. The first and the second components can be presented in conventional forms, while the third is described using a first-order differential equation and a radial-basis-function (RBF) network. For an aircraft configuration, the mathematical models of 6-component aerodynamic coefficients are set up from the wind tunnel test data of pitch, yaw, roll, and coupled yaw-roll large-amplitude oscillations. The flight dynamics of an aircraft is studied by the bifurcation analysis technique in the case of quasi-steady aerodynamics and unsteady aerodynamics, respectively. The results show that: (1) unsteady aerodynamics has no effect upon the existence of trim points, but affects their stability; (2) unsteady aerodynamics has great effects upon the existence, stability, and amplitudes of periodic solutions; and (3) unsteady aerodynamics changes the stable regions of trim points obviously. Furthermore, the dynamic responses of the aircraft to elevator deflections are inspected. It is shown that the unsteady aerodynamics is beneficial to dynamic stability for the present aircraft. Finally, the effects of unsteady aerodynamics on the post-stall maneuverability are analyzed by numerical simulation.

  19. Novel applications of the dispersive optical model

    CERN Document Server

    Dickhoff, W H; Mahzoon, M H

    2016-01-01

    A review of recent developments of the dispersive optical model (DOM) is presented. Starting from the original work of Mahaux and Sartor, several necessary steps are developed and illustrated which increase the scope of the DOM allowing its interpretation as generating an experimentally constrained functional form of the nucleon self-energy. The method could therefore be renamed as the dispersive self-energy method. The aforementioned steps include the introduction of simultaneous fits of data for chains of isotopes or isotones allowing a data-driven extrapolation for the prediction of scattering cross sections and level properties in the direction of the respective drip lines. In addition, the energy domain for data was enlarged to include results up to 200 MeV where available. An important application of this work was implemented by employing these DOM potentials to the analysis of the (d,p) transfer reaction using the adiabatic distorted wave approximation (ADWA). We review the fully non-local DOM...

  20. Modelling of Electrokinetic Processes in Civil and Environmental Engineering Applications

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.;

    2011-01-01

    A mathematical model for the electrokinetic phenomena is described. Numerical simulations of different applications of electrokinetic techniques to the fields of civil and environmental engineering are included, showing the versatility and consistency of the model. The electrokinetics phenomena c...

  1. Plant growth and architectural modelling and its applications

    OpenAIRE

    Guo, Yan; Fourcaud, Thierry; Jaeger, Marc; Zhang, Xiaopeng; Li, Baoguo

    2011-01-01

    Over the last decade, a growing number of scientists around the world have invested in research on plant growth and architectural modelling and applications (often abbreviated to plant modelling and applications, PMA). By combining physical and biological processes, spatially explicit models have shown their ability to help in understanding plant–environment interactions. This Special Issue on plant growth modelling presents new information on this topic, which is summarized in this pref...

  2. Application of simulation models for the optimization of business processes

    Science.gov (United States)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with applications of modeling and simulation tools to the optimization of business processes, especially to optimizing signal flow in a security company. Simul8 was selected as the modeling tool; it supports process modeling based on discrete event simulation and enables the creation of a visual model of production and distribution processes.

  3. Using Model Driven Engineering technologies for building authoring applications

    OpenAIRE

    Beaudoux, Olivier; Blouin, Arnaud; Jézéquel, Jean-Marc

    2010-01-01

    Building authoring applications is a tedious and complex task that requires a high programming effort. Document technologies, especially XML based ones, can help in reducing such an effort by providing common bases for manipulating documents. Still, the overall task consists mainly of writing the application's source code. Model Driven Engineering (MDE) focuses on generating the source code from an exhaustive model of the application. In this paper, we illustrate that MDE technologies can be ...

  4. Photonic crystal fiber modelling and applications

    DEFF Research Database (Denmark)

    Bjarklev, Anders Overgaard; Broeng, Jes; Libori, Stig E. Barkou;

    2001-01-01

    Photonic crystal fibers having a microstructured air-silica cross section offer new optical properties compared to conventional fibers for telecommunication, sensor, and other applications. Recent advances within research and development of these fibers are presented.

  5. Surface Flux Modeling for Air Quality Applications

    Directory of Open Access Journals (Sweden)

    Limei Ran

    2011-08-01

    Full Text Available For many gases and aerosols, dry deposition is an important sink of atmospheric mass. Dry deposition fluxes are also important sources of pollutants to terrestrial and aquatic ecosystems. The surface fluxes of some gases, such as ammonia, mercury, and certain volatile organic compounds, can be upward into the air as well as downward to the surface and therefore should be modeled as bi-directional fluxes. Model parameterizations of dry deposition in air quality models have been represented by simple electrical resistance analogs for almost 30 years. Uncertainties in surface flux modeling in global to mesoscale models are being slowly reduced as more field measurements provide constraints on parameterizations. However, at the same time, more chemical species are being added to surface flux models as air quality models are expanded to include more complex chemistry and are being applied to a wider array of environmental issues. Since surface flux measurements of many of these chemicals are still lacking, resistances are usually parameterized using simple scaling by water or lipid solubility and reactivity. Advances in recent years have included bi-directional flux algorithms that require a shift from pre-computation of deposition velocities to fully integrated surface flux calculations within air quality models. Improved modeling of the stomatal component of chemical surface fluxes has resulted from improved evapotranspiration modeling in land surface models and closer integration between meteorology and air quality models. Satellite-derived land use characterization and vegetation products and indices are improving model representation of spatial and temporal variations in surface flux processes. This review describes the current state of chemical dry deposition modeling, recent progress in bi-directional flux modeling, synergistic model development research with field measurements, and coupling with meteorological land surface models.
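    The electrical resistance analog mentioned above can be sketched in a few lines: the deposition velocity is the reciprocal of the aerodynamic, quasi-laminar boundary layer, and surface resistances in series. The resistance values below are illustrative order-of-magnitude assumptions, not measurements.

```python
def deposition_velocity(r_a, r_b, r_c):
    """Dry deposition velocity (m/s) from aerodynamic (r_a), quasi-laminar
    boundary layer (r_b) and surface (r_c) resistances, all in s/m."""
    return 1.0 / (r_a + r_b + r_c)

# Hypothetical daytime magnitudes over vegetation (invented for illustration).
v_d = deposition_velocity(r_a=30.0, r_b=20.0, r_c=100.0)
print(f"v_d = {v_d * 100:.2f} cm/s")
```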

  6. Application of Simple CFD Models in Smoke Ventilation Design

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter Vilhelm; la Cour-Harbo, Hans;

    2004-01-01

    The paper examines the possibilities of using simple CFD models in practical smoke ventilation design. The aim is to assess if it is possible with a reasonable accuracy to predict the behaviour of smoke transport in case of a fire. A CFD code mainly applicable for “ordinary” ventilation design is...... used for the examination. The CFD model is compared with benchmark tests and results from a special application fire simulation CFD code. Apart from benchmark tests two practical applications are examined in shape of modelling a fire in a theatre and a double façade, respectively. The simple CFD model...

  7. Pinna Model for Hearing Instrument Applications

    DEFF Research Database (Denmark)

    Kammersgaard, Nikolaj Peter Iversen; Kvist, Søren Helstrup; Thaysen, Jesper; Jakobsen, Kaj Bjarne

    A novel model of the pinna (outer ear) is presented. This is to increase the understanding of the effect of the pinna on the on-body radiation pattern of an antenna placed inside the ear. Simulations of the model and of a realistically shaped ear are compared to validate the model. The radiation...

  8. Model Checking-Based Testing of Web Applications

    Institute of Scientific and Technical Information of China (English)

    ZENG Hongwei; MIAO Huaikou

    2007-01-01

    A formal model representing the navigation behavior of a Web application as a Kripke structure is proposed, and an approach that applies model checking to test case generation is presented. The Object Relation Diagram, as the object model, is employed to describe the object structure of a Web application design and can be translated into the behavior model. A key problem of model checking-based test generation for a Web application is how to construct a set of trap properties that are intended to cause violations of model checking against the behavior model and thereby output counterexamples used to construct the test sequences. We give an algorithm that derives trap properties from the object model with respect to node and edge coverage criteria.
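    The edge-coverage idea can be illustrated without a full model checker: for each edge of the navigation model, the trap property claims the edge is never traversed, and a counterexample (approximated here by a shortest path that reaches and takes the edge) becomes a test sequence. The page graph and names below are hypothetical, not taken from the paper.

```python
from collections import deque

# Hypothetical navigation model: page -> list of reachable pages.
nav = {"home": ["login", "search"], "login": ["account"],
       "search": ["results"], "results": ["home"], "account": []}

def path_to(src, start="home"):
    """Shortest click path from start to src (BFS over the nav graph)."""
    parent, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == src:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in nav[node]:
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None

# One test sequence per edge: reach the edge's source page, then take the edge
# (the counterexample to "edge (a, b) is never traversed").
tests = [path_to(a) + [b] for a in nav for b in nav[a]]
for t in tests:
    print(" -> ".join(t))
```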

  9. Optimization of Process Parameters During Drilling of Glass-Fiber Polyester Reinforced Composites Using DOE and ANOVA

    Directory of Open Access Journals (Sweden)

    N.S. Mohan

    2010-09-01

    Full Text Available Polymer-based composite materials possess superior properties such as a high strength-to-weight ratio, a high stiffness-to-weight ratio and good corrosion resistance, and are therefore attractive for high-performance applications in the aerospace, defense and sporting goods industries. Drilling is one of the indispensable methods for building products with composite panels. Surface quality and dimensional accuracy play an important role in the performance of a machined component. In machining processes, however, the quality of the component is greatly influenced by the cutting conditions, tool geometry, tool material, machining process, chip formation, workpiece material, tool wear and vibration during cutting. Drilling tests were conducted on glass-fiber-reinforced plastic [GFRP] composite laminates using an instrumented CNC milling center. A series of experiments was conducted on a TRIAC VMC CNC machining center to correlate the cutting parameters and material parameters with the cutting thrust, torque and surface roughness. The measured results were collected and analyzed with the help of the commercial software packages MINITAB14 and Taly Profile. The surface roughness of the drilled holes was measured using a Rank Taylor Hobson Surtronic 3+ instrument. The method could be useful in predicting thrust, torque and surface roughness as functions of the process variables. The main objective is to optimize the process parameters to achieve low cutting thrust, low torque and good surface roughness. From the analysis it is evident that, among all the significant parameters, speed and drill size have a significant influence on cutting thrust, while drill size and specimen thickness have a significant influence on torque and surface roughness. It was also found that feed rate does not have a significant influence on the characteristic outputs of the drilling process.
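    The significance tests behind such a DOE/ANOVA analysis reduce to comparing between-group and within-group variation. A minimal sketch of the one-way ANOVA F statistic, with synthetic thrust values grouped by drill size (all numbers invented, not the paper's measurements):

```python
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over several groups."""
    all_obs = [x for g in groups for x in g]
    n, k = len(all_obs), len(groups)
    grand_mean = sum(all_obs) / n
    # Between-group sum of squares (variation explained by the factor).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (experimental error).
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Thrust (N) at three drill sizes: synthetic values for illustration only.
thrust = [[55.2, 57.1, 54.8], [78.3, 80.1, 79.5], [101.4, 99.8, 102.2]]
print(f"F = {one_way_anova_f(thrust):.1f}")  # a large F suggests drill size matters
```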

  10. Application of Actuarial Modelling in Insurance Industry

    OpenAIRE

    Burcã Ana-Maria; Bãtrînca Ghiorghe

    2011-01-01

    In insurance industry, the financial stability of insurance companies represents an issue of vital importance. In order to maintain the financial stability and meet minimum regulatory requirements, actuaries apply actuarial modeling. Modeling has been at the center of actuarial science and of all the sciences from the beginning of their journey. In insurance industry, actuarial modeling creates a framework that allows actuaries to identify, understand, quantify and manage a wide range of risk...

  11. Surface Flux Modeling for Air Quality Applications

    OpenAIRE

    Limei Ran; Jonathan Pleim

    2011-01-01

    For many gases and aerosols, dry deposition is an important sink of atmospheric mass. Dry deposition fluxes are also important sources of pollutants to terrestrial and aquatic ecosystems. The surface fluxes of some gases, such as ammonia, mercury, and certain volatile organic compounds, can be upward into the air as well as downward to the surface and therefore should be modeled as bi-directional fluxes. Model parameterizations of dry deposition in air quality models have been represented by...

  12. Non-linear models: applications in economics

    OpenAIRE

    Albu, Lucian-Liviu

    2006-01-01

    The study concentrates on demonstrating how non-linear modelling can be useful for investigating the behaviour of dynamic economic systems. Using adequate non-linear models could be a good way to find more refined solutions to currently unsolved problems or ambiguities in economics. Beginning with a short presentation of the simplest non-linear models, we then demonstrate how the dynamics of complex systems, such as the economic system, could be explained on the basis of some more advan...

  13. Pinna Model for Hearing Instrument Applications

    OpenAIRE

    Kammersgaard, Nikolaj Peter Iversen; Kvist, Søren Helstrup; Thaysen, Jesper; Jakobsen, Kaj Bjarne

    2014-01-01

    A novel model of the pinna (outer ear) is presented. This is to increase the understanding of the effect of the pinna on the on-body radiation pattern of an antenna placed inside the ear. Simulations of the model and of a realistically shaped ear are compared to validate the model. The radiation patterns, including the phase and gain, and the radiation efficiency are compared.

  14. ANOVA: Centro de apoio e intervenção na crise para crianças vítimas de maus tratos

    OpenAIRE

    Coimbra, Alexandra; Faria, Ana; Montano, Teresa

    1990-01-01

    In this article, the operating guidelines of ANOVA - Centro de Apoio e Intervenção na Crise para Crianças Vítimas de Maus Tratos (a support and crisis-intervention centre for child victims of abuse) are presented in summary form. The project that gave rise to the Centre was formulated on the basis of theoretical concepts from Community Psychology (McGee, 1974; Rappaport, 1977; Caplan, 1980; Gottlieb, 1981; Repucci, 1987; and others). Of note is the importance of the simultaneous development of measures of evaluation, support and ...

  15. Improved grey derivative of grey Verhulst model and its application

    OpenAIRE

    Yi-Zhang

    2012-01-01

    Based on the principle and characteristics of the grey Verhulst model, the cause of the grey Verhulst model's inaccuracy is analyzed, a new formula for the grey derivative is constructed, and the unbiased grey Verhulst model is given in this paper. The new modeling method improves the simulation precision and extends the application scope of the grey Verhulst model. Examples are also given to show that the precision of the new model is very high.
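    For readers unfamiliar with the baseline, here is a minimal sketch of the classical grey Verhulst model (not the paper's improved grey derivative): estimate a and b in the mean-value form x0(k) + a·z(k) = b·z(k)² by least squares, then predict with the time-response function. The series below is an invented S-shaped sequence.

```python
import math

def grey_verhulst_fit(x1):
    """Least-squares estimates of a, b in x0(k) + a*z(k) = b*z(k)**2."""
    x0 = [x1[k] - x1[k - 1] for k in range(1, len(x1))]          # differenced series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]   # background values
    s2, s3, s4 = (sum(v ** p for v in z) for p in (2, 3, 4))
    ty = sum(-v * y for v, y in zip(z, x0))
    t2 = sum(v * v * y for v, y in zip(z, x0))
    det = s2 * s4 - s3 * s3                      # 2x2 normal equations
    return (s4 * ty + s3 * t2) / det, (s3 * ty + s2 * t2) / det

def grey_verhulst_predict(x1_0, a, b, k):
    """Time-response function of the grey Verhulst model at step k."""
    return a * x1_0 / (b * x1_0 + (a - b * x1_0) * math.exp(a * k))

x1 = [2.0, 4.1, 7.4, 11.5, 14.7, 16.4]   # synthetic S-shaped series
a, b = grey_verhulst_fit(x1)
print([round(grey_verhulst_predict(x1[0], a, b, k), 2) for k in range(6)])
```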

  16. Nuclear structure models: Applications and development

    International Nuclear Information System (INIS)

    This report discusses the following topics: Studies of superdeformed States; Signature Inversion in Odd-Odd Nuclei: A fingerprint of Triaxiality; Signature Inversion in 120Cs - Evidence for a Residual p-n Interaction; Signatures of γ Deformation in Nuclei and an Application to 125Xe; Nuclear Spins and Moments: Fundamental Structural Information; and Electromagnetic Properties of 181Ir: Evidence of β Stretching

  17. Nuclear reaction modeling, verification experiments, and applications

    Energy Technology Data Exchange (ETDEWEB)

    Dietrich, F.S.

    1995-10-01

    This presentation summarizes the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scattering Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of these spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

  18. New advances in statistical modeling and applications

    CERN Document Server

    Santos, Rui; Oliveira, Maria; Paulino, Carlos

    2014-01-01

    This volume presents selected papers from the XIXth Congress of the Portuguese Statistical Society, held in the town of Nazaré, Portugal, from September 28 to October 1, 2011. All contributions were selected after a thorough peer-review process. It covers a broad range of papers in the areas of statistical science, probability and stochastic processes, extremes and statistical applications.

  19. Optical Coherence Tomography: Modeling and Applications

    DEFF Research Database (Denmark)

    Thrane, Lars

    An analytical model is presented that is able to describe the performance of OCT systems in both the single and multiple scattering regimes simultaneously. This model inherently includes the shower curtain effect, well-known for light propagation through the atmosphere. This effect has been omitt...

  20. SAI (SYSTEMS APPLICATIONS, INCORPORATED) URBAN AIRSHED MODEL

    Science.gov (United States)

    The magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air quality simulation model that is well suited for predicting the spatial and temporal distribution of photoch...

  1. Human hand modelling: kinematics, dynamics, applications

    NARCIS (Netherlands)

    Gustus, A.; Stillfried, G.; Visser, J.; Jörntell, H.; Van der Smagt, P.

    2012-01-01

    An overview of mathematical modelling of the human hand is given. We consider hand models from a specific background: rather than studying hands for surgical or similar goals, we target at providing a set of tools with which human grasping and manipulation capabilities can be studied, and hand funct

  2. Modeling of Nuclear Electric Propulsion System for Naval Application

    Energy Technology Data Exchange (ETDEWEB)

    Halimi, B.; Suh, K. Y. [Seoul National University, Seoul (Korea, Republic of)

    2009-10-15

    In a number of applications it is required to operate for long periods of time on the ocean, where the supply of fuel is complicated and sometimes impossible. Moreover, high efficiency and compactness are other important requirements in naval applications. Therefore, an integrated nuclear electric propulsion system is the best choice to meet all of these requirements. In this paper, a model of nuclear electric propulsion for naval application is presented. The model adopts a long-term power-system dynamics model to represent the dynamics of the nuclear power part.

  3. Modeling Students' Memory for Application in Adaptive Educational Systems

    Science.gov (United States)

    Pelánek, Radek

    2015-01-01

    Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory setting under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…

  4. Air quality modeling for emergency response applications

    International Nuclear Information System (INIS)

    The three-dimensional diagnostic wind field model (MATHEW) and the particle-in-cell transport and diffusion model (ADPIC) are used by the Atmospheric Release Advisory Capability (ARAC) for real-time assessments of the consequences from accidental releases of radioactivity into the atmosphere. For the dispersion of hazardous heavier-than-air gases, a time-dependent, three-dimensional finite element model (FEM3) is used. These models have been evaluated extensively against a wide spectrum of field experiments involving the release of chemically inert tracers or heavier-than-air gases. The results reveal that the MATHEW/ADPIC models are capable of simulating the spatial and temporal distributions of tracer concentration to within a factor of 2 for 50% of the measured tracer concentrations for near surface releases in relatively flat terrain and within a factor of 2 for 20% of the comparisons for elevated releases in complex terrain. The FEM3 model produces quite satisfactory simulations of the spatial and temporal distributions of heavier-than-air gases, typically within a kilometer of the release point. The ARAC consists of a centralized computerized emergency response system that is capable of supporting up to 100 sites and providing real-time predictions of the consequence of transportation accidents that may occur anywhere. It utilizes pertinent accident information, local and regional meteorology, and terrain as input to the MATHEW/ADPIC models for the consequence analysis. It has responded to over 150 incidents and exercises over the past decade

  5. Development of ECP models for BWR applications

    International Nuclear Information System (INIS)

    The electrochemical corrosion potential (ECP) of stainless steel has been measured under simulated Boiling Water Reactor (BWR) coolant circuit conditions using a rotating cylinder electrode. Based on the results of the measurements, an empirical model has been developed to predict the ECP of structural materials in a BWR primary circuit as a function of the H2, O2, and H2O2 concentrations in the reactor coolant and the water flow velocity. The ECP modeling results using the H2, O2, and H2O2 concentrations calculated by the radiolysis model are compared with the available reactor internal ECP data obtained in an operating reactor.

  6. Computational modeling of nanomaterials for biomedical applications

    OpenAIRE

    Verkhovtsev, Alexey

    2016-01-01

    Nanomaterials, i.e., materials that are manufactured at a very small spatial scale, can possess unique physical and chemical properties and exhibit novel characteristics as compared to the same material without nanoscale features. The reduction of size down to the nanometer scale leads to the abundance of potential applications in different fields of technology. For instance, tailoring the physicochemical properties of nanomaterials for modification of their interaction with a biological envi...

  7. The DES-model and its applications

    International Nuclear Information System (INIS)

    This report describes the use of the Danish Energy System (DES) Model, which has been used for several years as the most comprehensive model for the energy planning. The structure of the Danish energy system is described, and a number of energy system parameters are explained, in particular the efficiencies and marginal costs of combined heat and power (CHP). Some associated models are briefly outlined, and the use of the model is described by examples concerning scenarios for the primary energy requirements and energy system costs up to the year 2000, planned development of the power and heating systems, assessment of nuclear power, and effects of changes in the energy supply system on the emissions of SO2 and NOsub(x). (author)

  8. Mathematical modeling and applications in nonlinear dynamics

    CERN Document Server

    Merdan, Hüseyin

    2016-01-01

    The book covers nonlinear physical problems and mathematical modeling, including molecular biology, genetics, neurosciences, and artificial intelligence, together with classical problems in mechanics, astronomy and physics. The chapters present nonlinear mathematical modeling in life science and physics through nonlinear differential equations, nonlinear discrete equations and hybrid equations. Such modeling can be effectively applied to a wide spectrum of nonlinear physical problems, including KAM (Kolmogorov-Arnold-Moser) theory, singular differential equations, impulsive dichotomous linear systems, analytical bifurcation trees of periodic motions, and almost or pseudo-almost periodic solutions in nonlinear dynamical systems. The book provides methods for mathematical models with switching, thresholds, and impulses, each of particular importance for discontinuous processes; includes qualitative analysis of behaviors of tumor-immune systems and methods of analysis for DNA, neural networks and epidemiology; and introduces...

  9. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  11. Modeling Substance Abuse for Applications in Proteomics

    OpenAIRE

    Hemby, Scott Edwards; Tannu, Nilesh

    2009-01-01

    The ability to model aspects of human addictive behaviors in laboratory animals provides an important avenue for gaining insight into the biochemical alterations associated with drug intake and the identification of targets for medication development to treat addictive disorders. The intravenous self-administration procedure provides the means to model the reinforcing effects of abused drugs and to correlate biochemical alterations with drug reinforcement. In this chapter, we provide a detail...

  12. Identification of regression models - application in traffic

    Czech Academy of Sciences Publication Activity Database

    Dohnal, Pavel

    Ljubljana : Jozef Stefan Institute, 2005, s. 1-5. [International PhD Workshop on Systems and Control a Young Generation Viewpoint /6./. Izola (SI), 04.10.2005-08.10.2005] R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : regression model * model order * intensity of traffic flow * prediction Subject RIV: BC - Control Systems Theory

  13. Application of Chebyshev Polynomial to simulated modeling

    Institute of Scientific and Technical Information of China (English)

    CHI Hai-hong; LI Dian-pu

    2006-01-01

    Chebyshev polynomials are widely used in many fields, usually as function approximations in numerical calculation. In this paper, the Chebyshev polynomial expression of the propeller properties across four quadrants is given first; then the Chebyshev polynomial expression is transformed into an ordinary polynomial as needed for the simulation of propeller dynamics. On this basis, the dynamical models of the propeller across four quadrants are given. The simulation results show the efficiency of the mathematical model.
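    The transformation the abstract describes, from a Chebyshev series to an ordinary (power-basis) polynomial, follows from the recurrence T_{k+1}(x) = 2x·T_k(x) - T_{k-1}(x). A minimal sketch, independent of the propeller data:

```python
def cheb_to_poly(c):
    """Power-basis coefficients (low to high) of sum_k c[k] * T_k(x)."""
    t_prev, t_cur = [1.0], [0.0, 1.0]       # T_0 and T_1 in the power basis
    out = [0.0] * len(c)
    for k, ck in enumerate(c):
        t = t_prev if k == 0 else t_cur
        for i, coef in enumerate(t):
            out[i] += ck * coef             # accumulate c[k] * T_k
        if k >= 1:                          # advance: T_{k+1} = 2x*T_k - T_{k-1}
            t_next = [0.0] + [2.0 * v for v in t_cur]
            for i, v in enumerate(t_prev):
                t_next[i] -= v
            t_prev, t_cur = t_cur, t_next
    return out

# T_3(x) = 4x^3 - 3x, so a pure-T_3 series maps to [0, -3, 0, 4].
print(cheb_to_poly([0.0, 0.0, 0.0, 1.0]))
```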

  14. A Component-based Programming Model for Composite, Distributed Applications

    Science.gov (United States)

    Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.

  15. HTGR Application Economic Model Users' Manual

    Energy Technology Data Exchange (ETDEWEB)

    A.M. Gandrik

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR) or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the HTGR Application Economic Model. The model was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
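    The model's central quantity, an IRR for a stream of cash flows, can be sketched as a bisection on the sign of the net present value. The cash flows below are invented round numbers, not HTGR figures.

```python
def npv(rate, cash_flows):
    """Net present value of year-indexed cash flows at a discount rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """Internal rate of return: the rate at which NPV crosses zero."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cash_flows) > 0.0:
            lo = mid          # still profitable: the rate can go higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical plant: one capital outlay followed by level annual revenues.
flows = [-1000.0] + [180.0] * 10
print(f"IRR = {irr(flows):.2%}")
```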

  16. The Channel Network model and field applications

    International Nuclear Information System (INIS)

    The Channel Network model describes the fluid flow and solute transport in fractured media. The model is based on field observations, which indicate that flow and transport take place in a three-dimensional network of connected channels. The channels are generated in the model from observed stochastic distributions and solute transport is modeled taking into account advection and rock interactions, such as matrix diffusion and sorption within the rock. The most important site-specific data for the Channel Network model are the conductance distribution of the channels and the flow-wetted surface. The latter is the surface area of the rock in contact with the flowing water. These parameters may be estimated from hydraulic measurements. For the Aespoe site, several borehole data sets are available, where a packer distance of 3 meters was used. Numerical experiments were performed in order to study the uncertainties in the determination of the flow-wetted surface and conductance distribution. Synthetic data were generated along a borehole and hydraulic tests with different packer distances were simulated. The model has previously been used to study the Long-term Pumping and Tracer Test (LPT2) carried out in the Aespoe Hard Rock Laboratory (HRL) in Sweden, where the distance travelled by the tracers was of the order hundreds of meters. Recently, the model has been used to simulate the tracer tests performed in the TRUE experiment at HRL, with travel distance of the order of tens of meters. Several tracer tests with non-sorbing and sorbing species have been performed

  17. Model evaluation methodology applicable to environmental assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
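    Latin hypercube sampling, offered above as a candidate for sensitivity analyses, can be sketched as stratified sampling per parameter with independently shuffled columns: each axis is split into N equiprobable strata, each stratum is sampled exactly once, and the columns are permuted to decouple the axes. The seed and dimensions below are arbitrary choices.

```python
import random

def latin_hypercube(n_samples, n_params, rng=random.Random(42)):
    """n_samples points in [0,1)^n_params, one sample per stratum per axis."""
    columns = []
    for _ in range(n_params):
        # One uniform draw from each of n_samples equiprobable strata...
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)                    # ...then decouple the axes
        columns.append(col)
    return list(zip(*columns))

for point in latin_hypercube(5, 2):
    print(point)
```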

  18. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes

  19. Deriving multiple interfaces from task models of nomadic applications

    OpenAIRE

    Patern?, Fabio

    2001-01-01

    The wide availability of many types of devices has become a fundamental challenge for designers of interactive software systems. Here we discuss a model-based method for the design of nomadic applications and the types of transformations that it requires to support the design of such applications. The aim is to enable each interaction device to support appropriate tasks that users expect to perform and designers to develop the various device specific application modules in a consistent manner.

  20. Expansion of the USDA ARS Aerial Application spray atomization models

    Science.gov (United States)

    An effort is underway to update the USDA ARS aerial spray nozzle models using new droplet sizing instrumentation and measurement techniques. As part of this effort, the applicable maximum airspeed is being increased from 72 to 80 m/s to provide guidance to applicators when using new high speed air...

  1. Using models to determine irrigation applications for water management

    Science.gov (United States)

    Simple models are used by field researchers and production agriculture to estimate crop water use for the purpose of scheduling irrigation applications. These are generally based on a simple volume-balance approach using estimates of soil water-holding capacity, irrigation application amounts, pr...
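    The volume-balance approach can be sketched as a running soil-water ledger: deplete by daily crop water use, refill by rain, and trigger an irrigation when depletion reaches a management-allowed fraction of the water-holding capacity. The capacity, depletion fraction, and daily values below are illustrative assumptions.

```python
def schedule_irrigation(et_daily, rain_daily, taw_mm=60.0, mad=0.5):
    """Return the day indices on which to irrigate (refill to capacity)."""
    depletion, events = 0.0, []
    for day, (et, rain) in enumerate(zip(et_daily, rain_daily)):
        depletion = max(0.0, depletion + et - rain)
        if depletion >= mad * taw_mm:     # management-allowed depletion reached
            events.append(day)
            depletion = 0.0               # irrigation refills the profile
    return events

et = [6.0] * 14     # assumed crop water use, mm/day
rain = [0.0] * 14   # assumed dry spell
print(schedule_irrigation(et, rain))
```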

  2. Fuzzy modeling and control theory and applications

    CERN Document Server

    Matía, Fernando; Jiménez, Emilio

    2014-01-01

    Much work on fuzzy control, covering research, development and applications, has been developed in Europe since the 1990s. Nevertheless, the existing books in the field are compilations of articles without interconnection or logical structure, or they express the personal point of view of the author. This book compiles the developments of researchers with demonstrated experience in the field of fuzzy control, following a logical structure and a unified style. The first chapters of the book are dedicated to the introduction of the main fuzzy logic techniques, where the following chapters focus o

  3. Recognizing textual entailment models and applications

    CERN Document Server

    Dagan, Ido; Sammons, Mark

    2013-01-01

    In the last few years, a number of NLP researchers have developed and participated in the task of Recognizing Textual Entailment (RTE). This task encapsulates Natural Language Understanding capabilities within a very simple interface: recognizing when the meaning of a text snippet is contained in the meaning of a second piece of text. This simple abstraction of an exceedingly complex problem has broad appeal partly because it can be conceived also as a component in other NLP applications, from Machine Translation to Semantic Search to Information Extraction. It also avoids commitment to any sp

  4. Handbook of mixed membership models and their applications

    CERN Document Server

    Airoldi, Edoardo M; Erosheva, Elena A; Fienberg, Stephen E

    2014-01-01

    In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology.Through examples using real data sets, yo

  5. Graphite oxidation modeling for application in MELCOR.

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, Fred

    2009-01-01

    The Arrhenius parameters for graphite oxidation in air are reviewed and compared. One-dimensional models of graphite oxidation coupled with mass transfer of oxidant are presented in dimensionless form for rectangular and spherical geometries. A single dimensionless group is shown to encapsulate the coupled phenomena, and is used to determine the effective reaction rate when mass transfer can impede the oxidation process. For integer reaction order kinetics, analytical expressions are presented for the effective reaction rate. For noninteger reaction orders, a numerical solution is developed and compared to data for oxidation of a graphite sphere in air. Very good agreement is obtained with the data without any adjustable parameters. An analytical model for surface burn-off is also presented, and results from the model are within an order of magnitude of the measurements of burn-off in air and in steam.
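For the integer-order case the abstract mentions, the textbook closed form for a first-order reaction in a sphere illustrates how a single dimensionless group (the Thiele modulus) sets the effective reaction rate when mass transfer impedes oxidation. This is the standard result, not necessarily the paper's exact formulation, and the Arrhenius parameters below are hypothetical:

```python
import numpy as np

def arrhenius(A, Ea, T, R=8.314):
    """Intrinsic rate constant k = A * exp(-Ea / (R*T)); A, Ea illustrative."""
    return A * np.exp(-Ea / (R * T))

def effectiveness_sphere(phi):
    """Effectiveness factor for a first-order reaction in a sphere:
    eta = (3 / phi**2) * (phi / tanh(phi) - 1), with phi the Thiele modulus."""
    return (3.0 / phi**2) * (phi / np.tanh(phi) - 1.0)

# limiting behaviour: eta -> 1 as phi -> 0 (kinetic control),
# eta ~ 3/phi as phi grows (oxidant mass transfer limits the oxidation)
print(effectiveness_sphere(0.1), effectiveness_sphere(30.0))

# effective rate = eta * intrinsic Arrhenius rate (all parameters invented)
k_int = arrhenius(A=1.0e7, Ea=1.8e5, T=1100.0)
print(effectiveness_sphere(5.0) * k_int)
```

For noninteger reaction orders no such closed form exists, which is why the paper resorts to a numerical solution.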

  6. The Application Model of Moving Objects in Cargo Delivery System

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng-li; ZHOU Ming-tian; XU Bo

    2004-01-01

    The development of spatio-temporal database systems is primarily motivated by applications that track and present mobile objects. In this paper, solutions for establishing a moving-object database based on a GPS/GIS environment are presented, a data model for moving objects is given that uses temporal logic to extend the query language, and finally the application model in a cargo delivery system is shown.

  7. Network models in optimization and their applications in practice

    CERN Document Server

    Glover, Fred; Phillips, Nancy V

    2011-01-01

    Unique in that it focuses on formulation and case studies rather than solution procedures, covering applications for pure, generalized and integer networks, equivalent formulations, plus successful techniques of network models. Every chapter contains a simple model which is expanded to handle more complicated developments, a synopsis of existing applications, one or more case studies, at least 20 exercises, and invaluable references. An Instructor's Manual presenting detailed solutions to all the problems in the book is available upon request from the Wiley editorial department.

  8. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster and co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state-of-the-art overview of already well-established as well as more recent methods of co-clustering. The authors mainly deal with two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture

  9. Sparse modeling theory, algorithms, and applications

    CERN Document Server

    Rish, Irina

    2014-01-01

    ""A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come.""-Ricardo Vilalta, Department of Computer Science, University of Houston""This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course.""-Francis Bach, INRIA - École Normale Supérieure, Paris

  10. Model-based clustering using copulas with applications

    OpenAIRE

    Kosmidis, Ioannis; Karlis, Dimitris

    2014-01-01

    The majority of model-based clustering techniques are based on multivariate normal models and their variants. In this paper copulas are used for the construction of flexible families of models for clustering applications. The use of copulas in model-based clustering offers two direct advantages over current methods: i) the appropriate choice of copulas provides the ability to obtain a range of exotic shapes for the clusters, and ii) the explicit choice of marginal distributions for the cluster...

  11. Advances in Application of Models in Soil Quality Evaluation

    OpenAIRE

    Si, Zhi-guo; Wang, Ji-jie; Yu, Yuan-chun; Liang, Guan-feng; Chen, Chang-ren; Shu, Hong-lan

    2012-01-01

    Soil quality is a comprehensive reflection of soil properties. Since the soil quality concept was put forward in the 1970s, the quality of different soil types in different regions has been evaluated through a variety of evaluation methods, but universal soil quality evaluation models and methods are still lacking. In this paper, the applications and prospects of the grey relevancy comprehensive evaluation model, attribute hierarchical model, fuzzy comprehensive evaluation model, matter-element mo...

  12. Atmospheric dispersion models for application in relation to radionuclide releases

    International Nuclear Information System (INIS)

    In this document, a state-of-art review of dispersion models relevant to local, regional and global scales and applicable to radionuclide discharges of a continuous and discontinuous nature is presented. The theoretical basis of the models is described in chapter 2, while the uncertainty inherent in model predictions is considered in chapter 6. Chapters 3 to 5 of this report describe a number of models for calculating atmospheric dispersion on local, regional and global scales respectively

  13. Application of Prognostic Mesoscale Modeling in the Southeast United States

    International Nuclear Information System (INIS)

    A prognostic model is being used to provide regional forecasts for a variety of applications at the Savannah River Site (SRS). Emergency response dispersion models available at SRS use the space and time-dependent meteorological data provided by this model to supplement local and regional observations. Output from the model is also used locally to aid in forecasting at SRS, and regionally in providing forecasts of the potential time and location of hurricane landfall within the southeast United States

  14. Business model driven service architecture design for enterprise application integration

    OpenAIRE

    Gacitua-Decar, Veronica; Pahl, Claus

    2008-01-01

    Increasingly, organisations are using a Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI), which is required for the automation of business processes. This paper presents an architecture development process which guides the transition from business models to a service-based software architecture. The process is supported by business reference models and patterns. Firstly, the business process models are enhanced with domain model elements, applicat...

  15. Modelling and Generating Ajax Applications: A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction betw

  16. (spdf) interacting boson model and its application

    International Nuclear Information System (INIS)

    The group structure and the general form of the Hamiltonian of the (spdf) interacting boson model are discussed. The energy spectra and the E1, E2 and E3 transition rates of 144Ba and 152Sm are calculated. The results agree with the experimental data quite well.

  17. Adaptable Multivariate Calibration Models for Spectral Applications

    Energy Technology Data Exchange (ETDEWEB)

    THOMAS,EDWARD V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.
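A minimal sketch of the generic intra-object idea under assumed synthetic data (object-specific baselines plus a common analyte-driven spectral direction; all names and numbers are hypothetical, not from the report): centering each object's spectra and analyte values at their own means removes the hard-to-model inter-object variation before fitting one pooled model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 objects, each measured 30 times at 5 "wavelengths".
# Inter-object baseline offsets are large and complex; intra-object variation
# is a common spectral direction scaled by the analyte level.
n_obj, n_rep, n_chan = 4, 30, 5
direction = np.array([0.5, 1.0, 0.2, -0.3, 0.1])      # intra-object signal
baselines = rng.normal(0, 5, (n_obj, n_chan))         # inter-object clutter
analyte = rng.uniform(80, 120, (n_obj, n_rep))        # e.g. glucose level
spectra = (baselines[:, None, :] + analyte[..., None] * direction
           + rng.normal(0, 0.05, (n_obj, n_rep, n_chan)))

# Generic intra-object model: center each object's spectra and analyte at
# their own means, then fit a single pooled least-squares regression.
# The object-specific baseline drops out of the centered data.
Xc = (spectra - spectra.mean(axis=1, keepdims=True)).reshape(-1, n_chan)
yc = (analyte - analyte.mean(axis=1, keepdims=True)).ravel()
b, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
print(np.round(b, 3))
```

Adapting the model to a new object then requires only estimating that object's own mean (an offset), not repeating the full calibration.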

  18. A marketing model: applications for dietetic professionals.

    Science.gov (United States)

    Parks, S C; Moody, D L

    1986-01-01

    Traditionally, dietitians have communicated the availability of their services to the "public at large." The expectation was that the public would respond favorably to nutrition programs simply because there was a consumer need for them. Recently, however, both societal and consumer needs have changed dramatically, making old communication strategies ineffective and obsolete. The marketing discipline has provided a new model and new decision-making tools for many health professionals to use to more effectively make their services known to multiple consumer groups. This article provides one such model as applied to the dietetic profession. The model explores a definition of the business of dietetics, how to conduct an analysis of the environment, and, finally, the use of both in the choice of new target markets. Further, the model discusses the major components of developing a marketing strategy that will help the practitioner to be competitive in the marketplace. Presented are strategies for defining and re-evaluating the mission of the profession, for using future trends to identify new markets and roles for the profession, and for developing services that make the profession more competitive by better meeting the needs of the consumer. PMID:3079782

  19. Integrated Safety Culture Model and Application

    Institute of Scientific and Technical Information of China (English)

    汪磊; 孙瑞山; 刘汉辉

    2009-01-01

    A new safety culture model is constructed and applied to analyze the correlations between safety culture and safety management systems (SMS). On the basis of previous typical definitions, models and theories of safety culture, an in-depth analysis of safety culture's structure, composing elements and their correlations was conducted. A new definition of safety culture was proposed from the perspective of sub-culture. Seven types of safety sub-culture were then defined: safety priority culture, standardizing culture, flexible culture, learning culture, teamwork culture, reporting culture and justice culture. An integrated safety culture model (ISCM) was then put forward based on the definition. The model divides safety culture into an intrinsic latency level and an extrinsic indication level and explains the potential relationship between the safety sub-cultures and all safety culture dimensions. Finally, in analyzing safety culture and SMS, it is concluded that a positive safety culture is the basis of implementing SMS effectively and that an advanced SMS will improve safety culture all around.

  20. A universal throw model and its applications

    NARCIS (Netherlands)

    Voort, M.M. van der; Doormaal, J.C.A.M. van; Verolme, E.K.; Weerheijm, J.

    2008-01-01

    A deterministic model has been developed that describes the throw of debris or fragments from a source with an arbitrary geometry and for arbitrary initial conditions. The initial conditions are defined by the distributions of mass, launch velocity and launch direction. The item density in an expose

  1. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

    Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous time optimization models, much more. Problems and references at chapter ends. ""Excellent introduction."" - Journal of the American Statistical Association. Bibliography. 1970 edition.

  2. Resultados - Anova Factorial Entre Sujetos

    OpenAIRE

    Serra Añó, Pilar; Ponce Darós, María José; López Bueno, Laura; González Moreno, Luis Millán; García Massó, Xavier

    2014-01-01

    Fifth video in a series of six, showing how to perform an analysis of variance with the SPSS statistical package to test for differences in means among k groups, in the field of the health sciences.
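The video demonstrates the analysis in SPSS; an equivalent one-way between-subjects ANOVA can be sketched in Python with hypothetical data (the group names and numbers are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical recovery scores for k = 3 rehabilitation groups, n = 25 each
control = rng.normal(50, 8, 25)
treat_a = rng.normal(55, 8, 25)
treat_b = rng.normal(62, 8, 25)

# One-way between-subjects ANOVA: does at least one group mean differ?
f_stat, p_value = stats.f_oneway(control, treat_a, treat_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Equivalent by hand: F = MS_between / MS_within
groups = [control, treat_a, treat_b]
grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
f_manual = (ss_between / 2) / (ss_within / 72)   # df = k-1 = 2 and N-k = 72
assert abs(f_manual - f_stat) < 1e-6
```

A significant F only says that some mean differs; identifying which pairs differ takes a post-hoc test (e.g. Tukey's HSD), as the video series presumably covers.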

  3. An introduction to queueing theory modeling and analysis in applications

    CERN Document Server

    Bhat, U Narayan

    2015-01-01

    This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a wide interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features: • An introductory chapter including a historical account of the growth of queueing theory in more than 100 years. • A modeling-based approach with emphasis on identification of models. • Rigorous treatment of the foundations of basic models commonly used in applications with appropriate references for advanced topics. • Applications in manufacturing and in computer and communication systems. • A chapter on ...

  4. Initiating Events Modeling for On-Line Risk Monitoring Application

    International Nuclear Information System (INIS)

    In order to make the on-line risk monitoring application of Probabilistic Risk Assessment more complete and realistic, special attention needs to be dedicated to initiating events modeling. Two different issues are of special importance: one is how to model initiating event frequencies according to the current plant configuration (equipment alignment and out-of-service status) and operating conditions (weather and various activities), and the second is how to preserve dependencies between the initiating events model and the rest of the PRA model. First, the paper discusses how initiating events can be treated in an on-line risk monitoring application. Second, a practical example of initiating events modeling in EPRI's Equipment Out of Service on-line monitoring tool is presented. Gains from the application and possible improvements are discussed in the conclusion. (author)

  5. Adaptive Networks Theory, Models and Applications

    CERN Document Server

    Gross, Thilo

    2009-01-01

    With adaptive, complex networks, the evolution of the network topology and the dynamical processes on the network are equally important and often fundamentally entangled. Recent research has shown that such networks can exhibit a plethora of new phenomena which are ultimately required to describe many real-world networks. Some of those phenomena include robust self-organization towards dynamical criticality, formation of complex global topologies based on simple, local rules, and the spontaneous division of "labor" in which an initially homogenous population of network nodes self-organizes into functionally distinct classes. These are just a few. This book is a state-of-the-art survey of those unique networks. In it, leading researchers set out to define the future scope and direction of some of the most advanced developments in the vast field of complex network science and its applications.

  6. Potential model application and planning issues

    Directory of Open Access Journals (Sweden)

    Christiane Weber

    2000-03-01

    Full Text Available The potential model has been, and remains, a spatial interaction model used for various problems in the human sciences. However, the way Donnay (1997, 1995, 1994) and Binard (1995) used it, introducing image-processing results as the application support, opened the way to innovative applications, for example for determining the urban boundary or local hinterlands. The possible articulations between applying the potential model to imagery and using Geographic Information System layers have allowed a temporal evaluation of urban development trends (Weber, 1998). Taking up this idea, the proposed study attempts to identify the forms of urban development of the Urban Community of Strasbourg (CUS), taking into account land use, the characteristics of the communication networks, urban regulations and the environmental constraints affecting the study area. The initial land-use state, obtained by statistical processing, is used as input to the potential model in order to obtain potential surfaces associated with specific spatial characteristics: the extension of the urban form, the preservation of natural or agricultural areas, or the regulations. The results are then combined and classified. This application was carried out to confront the method with the actual development of the CUS as determined by a diachronic study comparing satellite images (SPOT 1986 - SPOT 1998). To verify the interest and accuracy of the method, the satellite results were compared with those derived from the classification of the potential surfaces. The development zones identified with the potential model were confirmed by the results of the temporal analysis performed on the images. A differentiation of zones into...

  7. Model-Driven Approach for Body Area Network Application Development

    Directory of Open Access Journals (Sweden)

    Algimantas Venčkauskas

    2016-05-01

    Full Text Available This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables obtaining an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.

  8. Model-Driven Approach for Body Area Network Application Development.

    Science.gov (United States)

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-01-01

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables obtaining an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application. PMID:27187394

  10. Fuzzy Stochastic Optimization Theory, Models and Applications

    CERN Document Server

    Wang, Shuming

    2012-01-01

    Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies.   The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...

  11. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  12. Automatic Queuing Model for Banking Applications

    Directory of Open Access Journals (Sweden)

    Dr. Ahmed S. A. AL-Jumaily

    2011-08-01

    Full Text Available Queuing is the process of moving customers in a specific sequence to a specific service according to the customer's need. The term scheduling stands for the process of computing a schedule; this may be done by a queuing-based scheduler. This paper focuses on bank line systems, the different queuing algorithms that are used in banks to serve customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing a bank's queues that can analyze the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the testing results and the average waiting time. The main innovation of this work is that the average waiting time is modeled and taken into account in processing, together with the process of switching to the scheduling algorithm that gives the best average waiting time.
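The switching idea rests on comparing average waiting time across scheduling algorithms. A toy comparison under a simplifying assumption (all customers are present when service opens; service times are hypothetical) shows why shortest-job-first lowers the mean wait relative to first-come, first-served:

```python
import numpy as np

rng = np.random.default_rng(7)

def avg_wait(service_times, order):
    """Average waiting time when all customers are present at opening
    (a simplifying assumption) and are served in the given order."""
    s = service_times[order]
    starts = np.concatenate([[0.0], np.cumsum(s[:-1])])  # each wait = sum of
    return starts.mean()                                 # preceding services

service = rng.exponential(5.0, 100)      # hypothetical teller times, minutes
fcfs = np.arange(len(service))           # first-come, first-served order
sjf = np.argsort(service)                # shortest-job-first order

w_fcfs = avg_wait(service, fcfs)
w_sjf = avg_wait(service, sjf)
print(f"FCFS: {w_fcfs:.1f} min, SJF: {w_sjf:.1f} min")
assert w_sjf <= w_fcfs  # SJF provably minimizes mean waiting time here
```

A real bank queue has arrivals over time and fairness constraints, which is why an adaptive scheduler that switches algorithms based on the measured average wait, as the paper proposes, is more practical than committing to one policy.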

  13. Application of an analytical phase transformation model

    Institute of Scientific and Technical Information of China (English)

    LIU Feng; WANG Hai-feng; YANG Chang-lin; CHEN Zheng; YANG Wei; YANG Gen-cang

    2006-01-01

    Employing isothermal and isochronal differential scanning calorimetry, an analytical phase transformation model was used to study the kinetics of crystallization of amorphous Mg82.3Cu17.7 and Pd40Cu30P20Ni10 alloys. The analytical model comprises different combinations of various nucleation and growth mechanisms for a single transformation. Applying different combinations of nucleation and growth mechanisms, the nucleation and growth modes and the corresponding kinetic and thermodynamic parameters were determined. The influence of isothermal pre-annealing on the subsequent isochronal crystallization kinetics can be analyzed as pre-annealing increases. The results show that changes of the growth exponent, n, and the effective overall activation energy, Q, occurring as a function of the degree of transformation do not necessarily imply a change of nucleation and growth mechanisms; such changes can occur while the transformation is isokinetic.
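A common framework for such crystallization kinetics (not necessarily the authors' exact analytical model) is the JMAK equation, in which the growth exponent n appears as the local slope of the Avrami plot; the rate constant and exponent below are illustrative:

```python
import numpy as np

def jmak_fraction(t, k, n):
    """JMAK (Johnson-Mehl-Avrami-Kolmogorov) transformed fraction:
    x(t) = 1 - exp(-(k*t)**n)."""
    return 1.0 - np.exp(-(k * t) ** n)

def local_avrami_exponent(t, x):
    """Local growth exponent: slope of ln(-ln(1 - x)) versus ln(t)."""
    return np.gradient(np.log(-np.log(1.0 - x)), np.log(t))

t = np.linspace(1.0, 150.0, 400)         # time, arbitrary units
x = jmak_fraction(t, k=0.02, n=2.5)      # illustrative k and n
n_est = local_avrami_exponent(t, x)
# for pure JMAK kinetics the local exponent is constant and equals n,
# so a drifting slope in experimental data is what invites interpretation
print(n_est.min(), n_est.max())
```

The paper's point can be read against this plot: an apparent drift in the locally estimated n (or Q) over the course of a transformation need not signal a mechanism change.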

  14. Molecular modeling and multiscaling issues for electronic material applications

    CERN Document Server

    Iwamoto, Nancy; Yuen, Matthew; Fan, Haibo

    Volume 1: Molecular Modeling and Multiscaling Issues for Electronic Material Applications provides a snapshot of the progression of molecular modeling in the electronics industry and how molecular modeling is currently being used to understand material performance and solve relevant issues in this field. This book is intended to introduce the reader to the evolving role of molecular modeling, especially as seen through the eyes of the IEEE community involved in material modeling for electronic applications. Part I presents the role that quantum mechanics can play in performance prediction, such as properties dependent upon electronic structure, but also shows examples of how molecular models may be used in performance diagnostics, especially when chemistry is part of the performance issue. Part II gives examples of large-scale atomistic methods in material failure and shows several examples of transitioning between grain-boundary simulations (on the atomistic level) and large-scale models including an example ...

  15. Application of pyrolysis models in COCOSYS

    International Nuclear Information System (INIS)

    For the assessment of the efficiency of severe accident management measures, the simulation of severe accident development, progression and potential consequences in containments of nuclear power plants is required under conditions as realistic as possible. Therefore, the containment code system COCOSYS has been developed by GRS. The main objective is to provide a code system on the basis of mechanistic models for the comprehensive simulation of all relevant processes and plant states during severe accidents in the containment of light water reactors, also covering design basis accidents. In this context the simulation of oil and cable fires is of high priority. These processes strongly depend on the thermal hydraulic boundary conditions. An input definition of the pyrolysis rate by the user is not consistent with the philosophy of COCOSYS. Therefore, a first attempt has been made at code-internal simulation of the pyrolysis rate and the subsequent combustion process for oil and cable fires. The oil fire model used has been tested against the HDR E41.7 experiment. Because the cable fire model is still under development, a so-called 'simplified cable burning' model has been implemented in COCOSYS and tested against the HDR E42 cable fire experiments. Furthermore, in the frame of the bilateral (between the German and Ukrainian governments) project INT9131 in the field of fire safety at nuclear power plants (NPP), an exemplary fire hazard analysis (FHA) has been carried out for the cable spreading rooms below the unit control room of a VVER-1000/W-320 type reference plant. (authors)

  16. Wavelet Applications to Heterogeneous Agents Model

    Czech Academy of Sciences Publication Activity Database

    Vošvrda, Miloslav; Vácha, Lukáš

    Plzeň : University of West Bohemia in Pilsen, 2006 - (Lukáš, L.), s. 497-502 ISBN 978-80-7043-480-2. [Mathematical Methods in Economics 2006. Plzeň (CZ), 13.09.2006-15.09. 2006] R&D Projects: GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : agent's trading strategies * heterogeneous agent model with stochastic memory * Worst out Algorithm * wavelets Subject RIV: AH - Economics

  17. The Parton Model and its Applications

    CERN Document Server

    Yan, Tung-Mow

    2014-01-01

    This is a review of the program we started in 1968 to understand and generalize Bjorken scaling and Feynman's parton model in a canonical quantum field theory. It is shown that the parton model proposed for deep inelastic electron scatterings can be derived if a transverse momentum cutoff is imposed on all particles in the theory so that the impulse approximation holds. The deep inelastic electron-positron annihilation into a nucleon plus anything else is related by the crossing symmetry of quantum field theory to the deep inelastic electron-nucleon scattering. We have investigated the implication of crossing symmetry and found that the structure functions satisfy a scaling behavior analogous to the Bjorken limit for deep inelastic electron scattering. We then find that massive lepton pair production in collisions of two high energy hadrons can be treated by the parton model with an interesting scaling behavior for the differential cross sections. This turns out to be the first example of a class of hard proc...

  18. Nonlinear Inertia Classification Model and Application

    Directory of Open Access Journals (Sweden)

    Mei Wang

    2014-01-01

    Full Text Available The classification model of the support vector machine (SVM) overcomes the problem of a large number of samples. But the kernel parameter and the punishment factor have great influence on the quality of the SVM model. Particle swarm optimization (PSO) is an evolutionary search algorithm based on swarm intelligence, which is suitable for parameter optimization. Accordingly, a nonlinear inertia convergence classification model (NICCM) is proposed after the nonlinear inertia convergence PSO (NICPSO) is developed in this paper. The velocity of NICPSO is first defined as the weighted velocity of the inertia PSO, and the inertia factor is selected to be a nonlinear function. NICPSO is used to optimize the kernel parameter and the punishment factor of SVM. Then, the NICCM classifier is trained using the optimal punishment factor and the optimal kernel parameter obtained from the optimal particle. Finally, NICCM is applied to the classification of the normal state and fault states of online power cable. It is experimentally proved that the iteration number for the proposed NICPSO to reach the optimal position decreases from 15 to 5 compared with PSO; the training duration is decreased by 0.0052 s and the recognition precision is increased by 4.12% compared with SVM.
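
    The NICPSO algorithm itself is not reproduced in the abstract above. As a hedged sketch, the pure-Python code below implements standard inertia-weighted PSO with a nonlinear (here quadratically decaying) inertia factor, minimizing a toy quadratic that stands in for the SVM cross-validation error over the two hyperparameters; the function names, bounds, and decay schedule are illustrative assumptions, not the paper's actual formulation.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=50, c1=1.5, c2=1.5, seed=0):
    """Inertia-weighted PSO with a nonlinear inertia decay (assumed schedule)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        # nonlinear inertia: decays quadratically from 0.9 to 0.4
        w = 0.4 + 0.5 * (1 - t / iters) ** 2
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the SVM cross-validation error over two hyperparameters:
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, val = pso_minimize(err, bounds=[(-5, 5), (-5, 5)])
```

    In the paper's setting, `err` would instead evaluate a cross-validated SVM with the candidate kernel parameter and punishment factor.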

  19. Hydromechanical modelling with application in sealing for underground waste deposition

    International Nuclear Information System (INIS)

    Hydro-mechanical models appear in the simulation of many environmental problems related to the construction of engineered barriers against contaminant spreading. The presented work aims at modelling bentonite-sand barriers, which can be used for nuclear waste isolation and similar problems. In particular, we use a hydro-mechanical model coupling unsaturated flow and (nonlinear) elasticity, implement this model in the COMSOL software, and show its application in the simulation of an infiltration test (2D axisymmetric model) and the SEALEX Water test WT1 experiment (3D model). Finally, we discuss the needs and possibilities of parallel high performance computing.

  20. Application of product modelling - seen from a work preparation viewpoint

    DEFF Research Database (Denmark)

    Hvam, Lars

    work in the planning systems. The other element covers general techniques for analysing and modeling knowledge and information, with special focus on object oriented modeling. The third element covers four different examples of product models. The product models are viewed as reference models for...... procedure from analysing the task of the system, over building a model, and to the final programming of an application. It has been stressed out to carry out all the phases in the outline of procedure in the empirical work, one of the reasons being to prove that it is possible, with a reasonable consumption...

  1. Mobile Cloud Application Models Facilitated by the CPA†

    Directory of Open Access Journals (Sweden)

    Michael J. O’Sullivan

    2015-02-01

    Full Text Available This paper describes implementations of three mobile cloud applications, file synchronisation, intensive data processing, and group-based collaboration, using the Context Aware Mobile Cloud Services middleware and the Cloud Personal Assistant. Both are part of the same mobile cloud project, actively developed and currently in its second version. We describe recent changes to the middleware, along with our experimental results for the three application models. We discuss challenges faced during the development of the middleware and their implications. The paper includes a performance analysis of the CPA support for applications with respect to existing solutions where appropriate, and highlights the advantages of these applications with use-cases.

  2. Applications of GARCH models to energy commodities

    Science.gov (United States)

    Humphreys, H. Brett

    This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that while the electric
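
    The thesis's seasonal GARCH specifications are not given in the abstract. The sketch below implements only the standard GARCH(1,1) conditional-variance recursion that such models extend (a seasonal variant would let the constant or the persistence parameters vary by calendar period); the parameter values are illustrative.

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    assert alpha + beta < 1, "stationarity condition"
    # start the recursion at the unconditional variance omega / (1 - alpha - beta)
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# With omega=0.1, alpha=0.1, beta=0.8 the unconditional variance is 1.0; a unit
# shock keeps next-period variance at 0.1 + 0.1*1 + 0.8*1 = 1.0, after which
# quiet periods let it decay geometrically toward omega / (1 - beta).
path = garch11_variance([1.0, 0.0, 0.0], omega=0.1, alpha=0.1, beta=0.8)
```
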

  3. Ocean modelling aspects for drift applications

    Science.gov (United States)

    Stephane, L.; Pierre, D.

    2010-12-01

    Nowadays, many authorities in charge of rescue-at-sea operations rely on operational oceanography products to outline search perimeters. Moreover, current fields estimated with sophisticated ocean forecasting systems can be used as input data for oil spill/adrift-object fate models. This emphasises the necessity of an accurate sea state forecast with a mastered level of reliability. This work focuses on several problems inherent to drift modeling, dealing first with the efficiency of the oceanic current field representation. As we want to discriminate the relevance of a particular physical process or modeling option, the idea is to generate series of current fields with different characteristics and then qualify them in terms of drift prediction efficiency. Benchmark drift scenarios were set up from real surface drifter data collected in the Mediterranean Sea and off the coasts of Angola. The time and space scales that we are interested in are about 72 h forecasts (the typical timescale communicated in case of crisis), with distance errors that we hope to keep within a few dozen kilometres of the forecast (acceptable for reconnaissance by aircraft). For the ocean prediction, we used regional oceanic configurations based on the NEMO 2.3 code, nested in the Mercator 1/12° operational system. Drift forecasts were computed offline with Mothy (Météo France's oil spill modeling system) and Ariane (B. Blanke, 1997), a Lagrangian diagnostic tool. We were particularly interested in the importance of the horizontal resolution, the vertical mixing schemes, and any processes that may impact the surface layer. The aim of the study is ultimately to point to the most suitable set of parameters for drift forecast use in operational oceanic systems. We are also interested in assessing the relevance of ensemble forecasts compared with deterministic predictions. Several tests showed that poorly described observed trajectories can finally be modelled statistically by using uncertainties

  4. Measurement-based load modeling: Theory and application

    Institute of Scientific and Technical Information of China (English)

    MA; Jin; HAN; Dong; HE; RenMu

    2007-01-01

    Load model is one of the most important elements in power system operation and control. However, owing to its complexity, load modeling is still an open and very difficult problem. Summarizing our work on measurement-based load modeling in China over more than twenty years, this paper systematically introduces the mathematical theory and applications of load modeling. The flow chart and algorithms for measurement-based load modeling are presented. A composite load model structure with 13 parameters is also proposed. Analysis results based on trajectory sensitivity theory indicate the importance of the load model parameters for the identification. Case studies show the accuracy of the presented measurement-based load model. The load model thus built has been validated by field measurements all over China. Future research directions on measurement-based load modeling are also discussed in the paper.

  5. MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS

    Institute of Scientific and Technical Information of China (English)

    CHAHINE Georges L.; HSIAO Chao-Tsung

    2012-01-01

    Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary, and to avoid deleterious effects, requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to study the dynamics, combining viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models.

  6. Cloud-enabled Web Applications for Environmental Modelling

    Science.gov (United States)

    Vitolo, C.; Buytaert, W.; El-khatib, Y.; Gemmell, A. L.; Reaney, S. M.; Beven, K.

    2012-12-01

    In order to integrate natural and social science, especially in the light of current environmental legislation, efficient management and decision making requires environmental modelling to be easily accessible, portable and flexible. Deploying models as web applications is a feasible solution to some of the above issues. However migrating desktop-based modelling platforms to web based applications is not trivial. The framework in which the models are deployed should comply with worldwide accepted web standards to allow interoperability and ease exchange of information with external sources. Also the chosen models should guarantee a certain degree of flexibility to adapt the modelling exercise to different purposes. In this study we propose an innovative approach to web-modelling, developed as part of the NERC's Environmental Virtual Observatory pilot (EVOp) project for the UK. The proposed approach combines the use of Google Maps APIs to explore available data and the PyWPS implementation of the Open Geospatial Consortium Web Processing Service standard (OGC-WPS) to deploy models implemented in programming languages such as R and Python. As proof-of-concept, a web application was implemented, on the EVOp portal, to assist local communities with local flooding in the Eden catchment in Cumbria (UK). The application simulates the impact of land-use scenarios using the hydrological model Topmodel (Beven and Kirkby, 1979) implemented as a web service using the aforementioned approach. Current developments include the implementation of web applications for diffuse pollution, which adopts the Export Coefficient Model (Jones, 1996), and national flooding which utilises the hydrological model ensemble FUSE (Clark et al., 2008). Topmodel and FUSE are already exposed as stateless OGC-compliant web services. In the future we also aim to produce tools to help manage drought impacts and ecosystem services. 

  7. A review of thermoelectric cooling: Materials, modeling and applications

    International Nuclear Information System (INIS)

    This study reviews the recent advances of thermoelectric materials, modeling approaches, and applications. Thermoelectric cooling systems have advantages over conventional cooling devices, including compact in size, light in weight, high reliability, no mechanical moving parts, no working fluid, being powered by direct current, and easily switching between cooling and heating modes. In this study, historical development of thermoelectric cooling has been briefly introduced first. Next, the development of thermoelectric materials has been given and the achievements in past decade have been summarized. To improve thermoelectric cooling system's performance, the modeling techniques have been described for both the thermoelement modeling and thermoelectric cooler (TEC) modeling including standard simplified energy equilibrium model, one-dimensional and three-dimensional models, and numerical compact model. Finally, the thermoelectric cooling applications have been reviewed in aspects of domestic refrigeration, electronic cooling, scientific application, and automobile air conditioning and seat temperature control, with summaries for the commercially available thermoelectric modules and thermoelectric refrigerators. It is expected that this study will be beneficial to thermoelectric cooling system design, simulation, and analysis. - Highlights: •Thermoelectric cooling has great prospects with thermoelectric material's advances. •Modeling techniques for both thermoelement and TEC have been reviewed. •Principle thermoelectric cooling applications have been reviewed and summarized

  8. Modelling for Bio-,Agro- and Pharma-Applications

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Singh, Ravendra; Cameron, Ian;

    2011-01-01

    approach for meso and microscale partial models. The specific case study of codeine release is examined. As a bio- application, a batch fermentation process is modelled. This involves the generation of a pre-cursor compound for insulin production.The plant involves a number of coupled unit operations such...

  9. Models for Decision Making: From Applications to Mathematics... and Back

    OpenAIRE

    Crama, Yves

    2010-01-01

    In this inaugural lecture, I describe some facets of the interplay between mathematics and management science, economics, or engineering, as they come together in operations research models. I intend to illustrate, in particular, the complex and fruitful process through which fundamental combinatorial models find applications in management science, which in turn foster the development of new and challenging mathematical questions.

  10. Application of the RADTRAN 5 stop model

    International Nuclear Information System (INIS)

    A number of environmental impact analyses with the RADTRAN computer code have shown that dose to persons at stops is one of the largest components of incident-free dose during overland carriage of spent fuel and other radioactive materials (e.g., USDOE, 1994). The input data used in these analyses were taken from a 1983 study that reports actual observations of spent fuel shipments by truck. Early RADTRAN stop models, however, were insufficiently flexible to take advantage of the detailed information in the study. A more recent study of gasoline service stations that specialize in servicing large trucks, which are the most likely stop locations for shipments of Type B packages in the United States, has provided additional, detailed data on refueling/meal stops. The RADTRAN 5 computer code for transportation risk analysis allows exposures at stops to be more fully modeled than have previous releases of the code and is able to take advantage of detailed data. It is the intent of this paper first to compare results from RADTRAN and RADTRAN 5 for the old, low-resolution form of input data, and then to demonstrate what effect the new data and input format have on stop-dose estimates for an individual stop and for a hypothetical shipment route. Finally, these estimated public doses will be contrasted with doses calculated for a special population group -- inspectors

  11. NUMERICAL MODEL APPLICATION IN ROWING SIMULATOR DESIGN

    Directory of Open Access Journals (Sweden)

    Petr Chmátal

    2016-04-01

    Full Text Available The aim of the research was to carry out a hydraulic design of a rowing/sculling and paddling simulator. Nowadays there are two main approaches in simulator design. The first one uses static water with no artificial movement and counts on specially cut oars to provide the same resistance in the water. The second approach, on the other hand, uses pumps or similar devices to force the water to circulate, but both designs share many problems. Such problems affect already built facilities and can be summarized as an unrealistic feeling, unwanted turbulent flow, and a bad velocity profile. Therefore, the goal was to design a new rowing simulator that would provide nature-like conditions for the racers and an unmatched experience. In order to accomplish this challenge, it was decided to use in-depth numerical modeling to solve the hydraulic problems. The general measures for the design were taken in accordance with the space availability of the simulator's housing. The entire research was coordinated with other stages of the construction using BIM. The detailed geometry was designed using a numerical model in Ansys Fluent and parametric auto-optimization tools, which led to minimal negative hydraulic phenomena and decreased investment and operational costs due to the decreased hydraulic losses in the system.

  12. Simulation and Modeling Application in Agricultural Mechanization

    Directory of Open Access Journals (Sweden)

    R. M. Hudzari

    2012-01-01

    Full Text Available This experiment was conducted to determine the equations relating the Hue digital values of the fruit surface of the oil palm to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated from the highest-frequency values of the R, G, and B color components obtained from histogram analysis software. A new procedure for monitoring the image pixel values of the oil palm fruit surface color during real-time growth to maturity was developed. The prediction of harvest day was calculated based on the developed model relating Hue values to mesocarp oil content. The simulation model is regressed and predicts the day of harvesting, or the number of days before harvest, of FFB. The results on mesocarp oil content can be used for real-time oil content determination with the MPOB color meter. A graph for determining the day of harvesting the FFB is presented in this research. The oil was found to start developing in the mesocarp at 65 days before the fruit reaches the ripe maturity stage of 75% oil to dry mesocarp.
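
    The abstract's exact histogram procedure is not reproduced here. As a sketch, the standard RGB-to-HSV hue conversion below can be applied to the modal R, G, and B values taken from the image histogram; the function name and the 0-255 input range are assumptions.

```python
def rgb_to_hue(r, g, b):
    """Standard RGB -> HSV hue, in degrees in [0, 360)."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)
    delta = mx - mn
    if delta == 0:
        return 0.0  # achromatic pixel: hue is undefined, report 0
    if mx == r:
        h = ((g - b) / delta) % 6
    elif mx == g:
        h = (b - r) / delta + 2
    else:
        h = (r - g) / delta + 4
    return 60.0 * h

# Ripening oil palm fruit shifts from greenish toward red/orange surface color,
# i.e. the dominant hue decreases toward 0 degrees.
hue = rgb_to_hue(200, 80, 30)
```
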

  13. Solutions manual to accompany finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    A solutions manual to accompany Finite Mathematics: Models and Applications In order to emphasize the main concepts of each chapter, Finite Mathematics: Models and Applications features plentiful pedagogical elements throughout such as special exercises, end notes, hints, select solutions, biographies of key mathematicians, boxed key principles, a glossary of important terms and topics, and an overview of use of technology. The book encourages the modeling of linear programs and their solutions and uses common computer software programs such as LINDO. In addition to extensive chapters on pr

  14. Functional Modelling for Fault Diagnosis and its application for NPP

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined, and its relevance for coping with the complexity of diagnosis in large-scale systems like nuclear plants is explained. The diagnosis task is analyzed....... The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFM Suite. MFM applications in nuclear power systems are described by two examples, a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about...

  15. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  16. Application of computers for obtaining numerical solutions to compartmental models

    International Nuclear Information System (INIS)

    The application of compartmental analysis to the interpretation of metabolic studies has contributed to the advanced state of fundamental research. However, mathematical models usually require a large number of calculations to be performed. Fortunately, machines have become available to assist with the calculations, record keeping, and drawing graphs illustrating results. This paper describes some of the applications of computer hardware and software to the analysis of compartmental kinetics and the associated problems which the user may encounter. Computers speed up the fitting of models to data, trying new models, and comparing the results of competing models. They make such work cheaper, less tedious, and much more convenient

  17. Minimizing Drilling Thrust Force for HFRP Composite by Optimizing Process Parameters using Combination of ANOVA Approach and S/N Ratios Analysis

    Directory of Open Access Journals (Sweden)

    Maoinser Mohd Azuwan

    2014-07-01

    Full Text Available Drilling hybrid fiber reinforced polymer (HFRP) composites is a novel approach in fiber reinforced polymer (FRP) composite machining studies, as this material combines two different fibers in a single matrix, resulting in considerable improvement in mechanical properties and cost savings compared to conventional fiber composite materials. This study presents the development and optimization of drilling HFRP composites at various drilling parameters such as drill point angle, feed rate, and cutting speed, using a full factorial design of experiments with a combination of analysis of variance (ANOVA) and signal-to-noise (S/N) ratio analysis. The results identified optimum drilling parameters for the HFRP composite: a small drill point angle at a low feed rate and medium cutting speed, which resulted in a lower thrust force.
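
    The paper's factor levels and measured thrust forces are not given in the abstract. The sketch below shows the Taguchi smaller-the-better signal-to-noise ratio that such an S/N analysis maximizes (a larger S/N corresponds to a lower, more consistent thrust force); the replicate values are hypothetical.

```python
import math

def sn_smaller_better(values):
    """Taguchi smaller-the-better S/N ratio: -10 * log10(mean(y_i^2))."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical thrust-force replicates (N) at two feed-rate levels; the level
# with the higher S/N ratio is the better (lower-thrust) setting.
low_feed = [38.0, 40.0, 39.0]
high_feed = [55.0, 58.0, 57.0]
```

    In a full factorial study, the S/N ratio is averaged per level of each factor, and ANOVA then apportions the variance among the factors.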

  18. Applications of covariance structure modeling in psychology: cause for concern?

    Science.gov (United States)

    Breckler, S J

    1990-03-01

    Methods of covariance structure modeling are frequently applied in psychological research. These methods merge the logic of confirmatory factor analysis, multiple regression, and path analysis within a single data analytic framework. Among the many applications are estimation of disattenuated correlation and regression coefficients, evaluation of multitrait-multimethod matrices, and assessment of hypothesized causal structures. Shortcomings of these methods are commonly acknowledged in the mathematical literature and in textbooks. Nevertheless, serious flaws remain in many published applications. For example, it is rarely noted that the fit of a favored model is identical for a potentially large number of equivalent models. A review of the personality and social psychology literature illustrates the nature of this and other problems in reported applications of covariance structure models. PMID:2320704

  19. Application of Generic Disposal System Models

    Energy Technology Data Exchange (ETDEWEB)

    Mariner, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Glenn Edward [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sevougian, S. David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Emily [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    This report describes specific GDSA activities in fiscal year 2015 (FY2015) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code (Hammond et al., 2011) and the Dakota uncertainty sampling and propagation code (Adams et al., 2013). Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through the engineered barriers and natural geologic barriers to a well location in an overlying or underlying aquifer. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.

  20. Numerical modeling of complex heat transfer phenomena in cooling applications

    OpenAIRE

    Hou, Xiaofei

    2015-01-01

    Multiphase and multicomponent flows are frequently encountered in the cooling applications due to combined heat transfer and phase change phenomena. Two-fluid and homogeneous mixture models are chosen to numerically study these flows in the cooling phenomena. Therefore this work is divided in two main parts. In the first part, a two-fluid model algorithm for free surface flows is presented. The two fluid model is usually used as a tool to simulate dispersed flow. With its extension, it may al...

  1. Dependent Risk Modelling and Ruin Probability: Numerical Computation and Applications

    OpenAIRE

    Zhao, Shouqi

    2014-01-01

    In this thesis, we are concerned with the finite-time ruin probabilities in two alternative dependent risk models, the insurance risk model and the dual risk model, including the numerical evaluation of the explicit expressions for these quantities and the application of the probabilistic results obtained. We first investigate the numerical properties of the formulas for the finite-time ruin probability derived by Ignatov and Kaishev (2000, 2004) and Ignatov et al. (2001) for a generalized in...

  2. Copula bivariate probit models: with an application to medical expenditures

    OpenAIRE

    Winkelmann, Rainer

    2011-01-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the "treatment") on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank ...
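
    The paper's full copula bivariate probit likelihood is not reproduced here. As a sketch, the Frank copula CDF that such a model builds on is shown below for theta != 0; the parameter values in the usage line are illustrative.

```python
import math

def frank_copula(u, v, theta):
    """Frank copula CDF C_theta(u, v), defined for theta != 0; as theta -> 0
    it approaches the independence copula C(u, v) = u * v."""
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    den = math.exp(-theta) - 1.0
    return -math.log(1.0 + num / den) / theta

# Uniform margins are preserved: C(u, 1) = u for any theta.
c = frank_copula(0.3, 1.0, theta=2.0)
```

    In the bivariate probit setting, `u` and `v` would be the probit marginal probabilities of the treatment and outcome equations, with theta governing their dependence.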

  3. Computable Equilibrium Modelling and Application to Economies in Transition

    OpenAIRE

    Erno Zalai

    1998-01-01

    This paper examines the development and implementation of computable general equilibrium (CGE) models and examines their application to economies undergoing transition. The generalised development of a CGE model is presented in terms of the series of 'building blocks' which comprise a typical CGE system, whilst the flexibility of the CGE approach is illustrated by comparison of two specific CGE models: the GEM-E3 framework, developed by a team of researchers, led by Professor Pantelis Capros,...

  4. Real Estate Rental Payments: Application of Stock-Inventory Modeling

    OpenAIRE

    Philip McCann; Charles Ward

    2004-01-01

    This paper analyzes the rental term structure taking into account the opportunity costs faced by the tenant for varying lease lengths. The analysis involves the application of a multi-period stock inventory model. The implication of the model is that the term structure of rents is determined by a clientele effect that can bias the occupancy value derived from using rational-expectations in the term structure relationship. The model does, however, reveal the characteristic stock-inventory U-sh...

  5. Structural Equation Modeling: Theory and Applications in Forest Management

    OpenAIRE

    Tzeng Yih Lam; Douglas A. Maguire

    2012-01-01

    Forest ecosystem dynamics are driven by a complex array of simultaneous cause-and-effect relationships. Understanding this complex web requires specialized analytical techniques such as Structural Equation Modeling (SEM). The SEM framework and implementation steps are outlined in this study, and we then demonstrate the technique by application to overstory-understory relationships in mature Douglas-fir forests in the northwestern USA. A SEM model was formulated with (1) a path model repres...

  6. Top-down enterprise application integration with reference models

    OpenAIRE

    Willem-Jan van den Heuvel; Wilhelm Hasselbring; Mike Papazoglou

    2000-01-01

    For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference ...

  7. Dynamic reactor modeling with applications to SPR and ZEDNA.

    Energy Technology Data Exchange (ETDEWEB)

    Suo-Anttila, Ahti Jorma

    2011-12-01

    A dynamic reactor model has been developed for pulse-type reactor applications. The model predicts reactor power, axial and radial fuel expansion, prompt and delayed neutron population, and prompt and delayed gamma population. All model predictions are made as a function of time. The model includes the reactivity effect of fuel expansion on a dynamic timescale as a feedback mechanism for reactor power. All inputs to the model are calculated from first principles, either directly by solving systems of equations, or indirectly from Monte Carlo N-Particle Transport Code (MCNP) derived results. The model does not include any empirical parameters that can be adjusted to match experimental data. Comparisons of model predictions to actual Sandia Pulse Reactor SPR-III pulses show very good agreement for a full range of pulse magnitudes. The model is also applied to Z-pinch externally driven neutron assembly (ZEDNA) type reactor designs to model both normal and off-normal ZEDNA operations.

  8. On Modeling CPU Utilization of MapReduce Applications

    CERN Document Server

    Rizvandi, Nikzad Babaii; Zomaya, Albert Y

    2012-01-01

    In this paper, we present an approach to predict the total CPU utilization in terms of CPU clock tick of applications when running on MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile total CPU clock tick of the application on a given platform. In the modeling phase, multiple linear regression is used to map the sets of MapReduce configuration parameters (number of Mappers, number of Reducers, size of File System (HDFS) and the size of input file) to total CPU clock ticks of the application. This derived model can be used for predicting total CPU requirements of the same application when using MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our modeling technique on pseu...
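The two-phase approach in this abstract (profile several runs, then fit a multiple linear regression from the four configuration parameters to total CPU clock ticks) can be sketched as follows; the profiling numbers and the `predict_ticks` helper are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical profiling data: each row is one run of the application with a
# different MapReduce configuration (number of mappers, number of reducers,
# HDFS block size in MB, input file size in MB); y is total CPU clock ticks.
X = np.array([
    [4,  2,  64,  512],
    [8,  2,  64,  512],
    [8,  4, 128,  512],
    [16, 4, 128, 1024],
    [16, 8, 256, 1024],
    [32, 8, 256, 2048],
], dtype=float)
y = np.array([3.1e9, 2.4e9, 2.2e9, 4.0e9, 3.8e9, 7.5e9])

# Fit y ~ b0 + b1*mappers + b2*reducers + b3*block + b4*input by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_ticks(mappers, reducers, block_mb, input_mb):
    """Predict total CPU clock ticks for a new configuration on the same platform."""
    return coef @ np.array([1.0, mappers, reducers, block_mb, input_mb])

print(predict_ticks(16, 4, 128, 1024))
```

In practice far more profiled runs than parameters would be used; six runs are shown only to keep the sketch short.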

  9. Solar radiation practical modeling for renewable energy applications

    CERN Document Server

    Myers, Daryl Ronald

    2013-01-01

    Written by a leading scientist with over 35 years of experience working at the National Renewable Energy Laboratory (NREL), Solar Radiation: Practical Modeling for Renewable Energy Applications brings together the most widely used, easily implemented concepts and models for estimating broadband and spectral solar radiation data. The author addresses various technical and practical questions about the accuracy of solar radiation measurements and modeling. While the focus is on engineering models and results, the book does review the fundamentals of solar radiation modeling and solar radiation m

  10. Top-Down Enterprise Application Integration with Reference Models

    Directory of Open Access Journals (Sweden)

    Willem-Jan van den Heuvel

    2000-11-01

    Full Text Available For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference model's specification. The actual linking of reference and legacy models is done with a methodology for connecting (new) business objects with (old) legacy systems.

  11. NDA SYSTEM RESPONSE MODELING AND ITS APPLICATION

    Energy Technology Data Exchange (ETDEWEB)

    Vinson, D.

    2010-03-01

    is of the form of uranyl fluoride that will become hydrated on exposure to moisture in air when the systems are no longer buffered. The deposit geometry and thickness are uncertain and variable. However, a reasonable assessment of the level of material holdup in this equipment is necessary to support decommissioning efforts. The assessment of nuclear material holdup in process equipment is a complex process that requires integration of process knowledge, nondestructive assay (NDA) measurements, and computer modeling to maximize capabilities and minimize uncertainty. The current report is focused on the use of computer modeling and simulation of NDA measurements.

  12. Web Applications Security : A security model for client-side web applications

    OpenAIRE

    Prabhakara, Deepak

    2009-01-01

    The Web has evolved to support sophisticated web applications. These web applications are exposed to a number of attacks and vulnerabilities. The existing security model is unable to cope with these increasing attacks and there is a need for a new security model that not only provides the required security but also supports recent advances like AJAX and mashups. The attacks on client-side Web Applications can be attributed to four main reasons – 1) lack of a security context for Web Browsers...

  13. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used for evaluating companies by finance practitioners, through simple regression models of cross-section type which estimate the parameters associated to each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on the performance), through a subsequent analysis with segregation of companies in the sample by sectors. Extrapolating simple multiples evaluation standards from analysts of the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing error reduction. The results found, although evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.
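The specification question raised here (a pure ratio multiple versus a cross-section regression that allows an intercept) can be illustrated on synthetic data; the sample, the multiple, and the error metric below are all invented and do not reproduce the article's results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cross-section: earnings per share and observed prices for a
# sample of companies (purely illustrative numbers).
earnings = rng.uniform(1.0, 10.0, size=40)
prices = 12.0 * earnings * rng.lognormal(sigma=0.15, size=40)

# Standard multiple: a single ratio m = median(P/E), so price_hat = m * E.
m = np.median(prices / earnings)
err_ratio = np.abs(prices - m * earnings) / prices

# Alternative formulation with an intercept: price_hat = a + b * E, fit by OLS.
A = np.column_stack([np.ones_like(earnings), earnings])
(a, b), *_ = np.linalg.lstsq(A, prices, rcond=None)
err_ols = np.abs(prices - (a + b * earnings)) / prices

# Compare mean absolute pricing errors of the two specifications.
print(err_ratio.mean(), err_ols.mean())
```

Which specification wins on a given draw depends on the data; the article's finding is that the intercept did not reduce pricing errors satisfactorily in its samples.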

  14. Generalized Additive Modelling of Mixed Distribution Markov Models with Application to Melbourne's Rainfall.

    OpenAIRE

    Hyndman, R. J.; Grunwald, G. K.

    1999-01-01

    We consider modelling time series using a generalized additive model with first-order Markov structure and mixed transition density having a discrete component at zero and a continuous component with positive sample space. Such models have application, for example, in modelling daily occurrence and intensity of rainfall, and in modelling the number and size of insurance claims. We show how these methods extend the usual sinusoidal seasonal assumption in standard chain-dependent models by as...
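A minimal sketch of such a mixed-distribution first-order Markov chain for daily rainfall, assuming an invented logistic occurrence model and gamma intensities (the parameters and the seasonal form are illustrative only, not the fitted Melbourne model):

```python
import numpy as np

rng = np.random.default_rng(3)

# A first-order chain with a mixed transition density: a point mass at zero
# (dry day) plus a continuous positive component (rainfall amount).
def simulate_rainfall(n_days=365):
    amounts = np.zeros(n_days)
    for t in range(1, n_days):
        season = np.sin(2 * np.pi * t / 365.25)
        wet_yesterday = amounts[t - 1] > 0
        # Occurrence: logistic in a seasonal term and yesterday's state.
        logit = -1.0 + 0.8 * season + 1.2 * wet_yesterday
        p_wet = 1.0 / (1.0 + np.exp(-logit))
        if rng.random() < p_wet:
            # Intensity: gamma-distributed, with a seasonally varying scale.
            amounts[t] = rng.gamma(shape=2.0, scale=2.0 + season)
    return amounts

x = simulate_rainfall()
print((x == 0).mean(), x[x > 0].mean())
```

The generalized additive formulation in the paper replaces the fixed sinusoid above with smooth functions estimated from data.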

  15. Property Modelling for Applications in Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical-chemical properties of pure chemicals and their mixtures play an important role in the design of chemicals based products and the processes that manufacture them. Although the use of experimental data in design and analysis of chemicals based products and their processes is desirable... If a group parameter is missing, the atom connectivity based model is employed to predict the missing group interaction. In this way, a wide application range of the property modeling tool is ensured. Based on the property models, targeted computer-aided techniques have been developed for design and analysis of organic chemicals, polymers, mixtures as well as separation processes. The presentation will highlight the framework (ICAS software) for property modeling, the property models and issues such as prediction accuracy, flexibility, maintenance and updating of the database. Also, application issues...

  16. Interconnected hydro-thermal systems - Models, methods, and applications

    DEFF Research Database (Denmark)

    Hindsberger, Magnus

    2003-01-01

    Baltic Sea Region. They are characterised by having a mix of hydroelectric and thermal based production units, where the latter type includes the combined heat and power (CHP) plants that are widely used in e.g. Denmark and Finland. Focus is on the medium- to long-term perspective, i.e. within a time horizon of about 1 to 30 years. A main topic in the dissertation is the Balmorel model. Apart from the actual model, analyses of how to represent different elements appropriately in the model are presented. Most emphasis is on the representation of time and the modelling of various production units. Also, it has been analysed how the Balmorel model can be used to create inputs related to transmissions and/or prices to a more detailed production scheduling model covering a subsystem of the one represented in the Balmorel model. As an example of application of the Balmorel model, the dissertation...

  17. Theory and application of experimental model analysis in earthquake engineering

    Science.gov (United States)

    Moncarz, P. D.

    The feasibility and limitations of small-scale model studies in earthquake engineering research and practice is considered with emphasis on dynamic modeling theory, a study of the mechanical properties of model materials, the development of suitable model construction techniques and an evaluation of the accuracy of prototype response prediction through model case studies on components and simple steel and reinforced concrete structures. It is demonstrated that model analysis can be used in many cases to obtain quantitative information on the seismic behavior of complex structures which cannot be analyzed confidently by conventional techniques. Methodologies for model testing and response evaluation are developed in the project and applications of model analysis in seismic response studies on various types of civil engineering structures (buildings, bridges, dams, etc.) are evaluated.

  18. A complex autoregressive model and application to monthly temperature forecasts

    Directory of Open Access Journals (Sweden)

    X. Gu

    2005-11-01

    Full Text Available A complex autoregressive model was established based on the mathematical derivation of least squares for the complex number domain, referred to as complex least squares. The model differs from the conventional approach in which the real and imaginary parts are calculated separately. An application of this new model shows better forecasts than those from other conventional statistical models in predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
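The central idea, solving the least squares problem directly in the complex domain instead of fitting real and imaginary parts separately, can be illustrated with a one-coefficient complex AR(1) fit; the series and coefficient below are simulated, not the station data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex-valued anomaly series generated from a known AR(1)
# coefficient (all values here are invented for illustration).
a_true = 0.7 + 0.2j
z = np.zeros(500, dtype=complex)
for t in range(1, len(z)):
    noise = rng.normal(scale=0.1) + 1j * rng.normal(scale=0.1)
    z[t] = a_true * z[t - 1] + noise

# Complex least squares: minimise sum |z_t - a*z_{t-1}|^2 over one complex
# coefficient a. The closed form uses the conjugate inner product, so the
# real and imaginary parts are fitted jointly, not separately.
a_hat = np.vdot(z[:-1], z[1:]) / np.vdot(z[:-1], z[:-1])

one_step_forecast = a_hat * z[-1]
print(a_hat)
```

Note that `np.vdot` conjugates its first argument, which is exactly the inner product the complex normal equations require.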

  19. Multimedia Teleservices Modelled with the OSI Application Layer Structure

    OpenAIRE

    Rijssen, van, H.J.; Widya, Ing; Michiels, Eddy

    1995-01-01

    This paper looks into the communications capabilities that are required by distributed multimedia applications to achieve relation preserving information exchange. These capabilities are derived by analyzing the notion of information exchange and are embodied in communications functionalities. To emphasize the importance of the users' view, a top-down approach is applied. The (revised) OSI Application Layer Structure (OSI-ALS) is used to model the communications functionalities and to develop...

  20. Studying and modelling variable density turbulent flows for industrial applications

    International Nuclear Information System (INIS)

    Industrial applications are presented in the various fields of interest for EDF. A first example deals with transferred electric arcs, coupling flow and thermal transfer in the arc and in the bath of metal, and relates to applications of electricity. The second is combustion modelling in burners of fossil power plants. The last one comes from nuclear power plants and concerns stratified flows in a nuclear reactor building. (K.A.)

  1. Serving Embedded Content via Web Applications: Model, Design and Experimentation

    OpenAIRE

    Duquennoy, Simon; Grimaud, Gilles; Vandewalle, Jean-Jacques

    2009-01-01

    Embedded systems such as smart cards or sensors are now widespread, but are often closed systems, only accessed via dedicated terminals. A new trend consists in embedding Web servers in small devices, making both access and application development easier. In this paper, we propose a TCP performance model in the context of embedded Web servers, and we introduce a taxonomy of the contents possibly served by Web applications. The main idea of this paper is to adapt the communication stack behavi...

  2. Model-driven design of context-aware applications

    OpenAIRE

    Shishkov, B.B.; Sinderen, van, Marten; Cardoso, J.; Cordeiro, J.; Filipe, J.

    2007-01-01

    In many cases, in order to be effective, software applications need to allow sensitivity to context changes. This implies however additional complexity associated with the need for applications’ adaptability (being capable of capturing context, interpreting it and reacting on it). Hence, we envision 3 ‘musts’ that, in combination, are especially relevant to the design of context-aware applications. Firstly, at the business modeling level, it is considered crucial that the different possible c...

  3. TASS Model Application for Testing the TDWAP Model

    Science.gov (United States)

    Switzer, George F.

    2009-01-01

    One of the operational modes of the Terminal Area Simulation System (TASS) model simulates the three-dimensional interaction of wake vortices within turbulent domains in the presence of thermal stratification. The model allows the investigation of turbulence and stratification on vortex transport and decay. The model simulations for this work all assumed fully-periodic boundary conditions to remove the effects from any surface interaction. During the Base Period of this contract, NWRA completed generation of these datasets but only presented analysis for the neutral stratification runs of that set (Task 3.4.1). Phase 1 work began with the analysis of the remaining stratification datasets, and in the analysis we discovered discrepancies with the vortex time to link predictions. This finding necessitated investigating the source of the anomaly, and we found a problem with the background turbulence. Using the most up to date version TASS with some important defect fixes, we regenerated a larger turbulence domain, and verified the vortex time to link with a few cases before proceeding to regenerate the entire 25 case set (Task 3.4.2). The effort of Phase 2 (Task 3.4.3) concentrated on analysis of several scenarios investigating the effects of closely spaced aircraft. The objective was to quantify the minimum aircraft separations necessary to avoid vortex interactions between neighboring aircraft. The results consist of spreadsheets of wake data and presentation figures prepared for NASA technical exchanges. For these formation cases, NASA carried out the actual TASS simulations and NWRA performed the analysis of the results by making animations, line plots, and other presentation figures. This report contains the description of the work performed during this final phase of the contract, the analysis procedures adopted, and sample plots of the results from the analysis performed.

  4. Database application for changing data models in environmental engineering

    Energy Technology Data Exchange (ETDEWEB)

    Hussels, Ulrich; Camarinopoulos, Stephanos; Luedtke, Torsten; Pampoukis, Georgios [RISA Sicherheitsanalysen GmbH, Berlin-Charlottenburg (Germany)

    2013-07-01

    Whenever a technical task is to be solved with the help of a database application and uncertainties regarding the structure, scope or level of detail of the data model exist (either currently or in the future) the use of a generic database application can reduce considerably the cost of implementation and maintenance. Simultaneously the approach described in this contribution permits the operation with different views on the data and even finding and defining new views which had not been considered before. The prerequisite for this is that the preliminary information (structure as well as data) stored into the generic application matches the intended use. In this case, parts of the generic model developed with the generic approach can be reused and according efforts for a major rebuild can be saved. This significantly reduces the development time. At the same time flexibility is achieved concerning the environmental data model, which is not given in the context of conventional developments. (orig.)

  5. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with application in several fields of engineering, like automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, on a multidisciplinary approach. Authors from five countries and 16 different research centers contribute with their expertise in both the fundamentals and real problems applications based upon their strong background on modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  6. Neural network models: Insights and prescriptions from practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Samad, T. [Honeywell Technology Center, Minneapolis, MN (United States)

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  7. Applications of Nonlinear Dynamics Model and Design of Complex Systems

    CERN Document Server

    In, Visarath; Palacios, Antonio

    2009-01-01

    This edited book is aimed at interdisciplinary, device-oriented applications of nonlinear science theory and methods in complex systems, in particular applications directed to nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. Applications of Nonlinear Dynamics: Model and Design of Complex Systems brings together the work of scientists and engineers that are applying ideas and methods from nonlinear dynamics to design and fabricate complex systems.

  8. STES applications model (SAM) user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Timmer, A.M.

    1979-01-04

    This document is a user's guide for the STES Applications Model (SAM) which can be used to identify industrial applications which are good candidates for Solar Total Energy Systems (STES). SAM computes and ranks equivalent cost ratio and calculates fuel displacement potential by geographic location (50 states) and by industrial application (140 three digit SIC categories) for seven time periods (from 1985 to 2015 in five year increments). SAM is written in FORTRAN for the FTN compiler on the CDC 7600 computer.

  9. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction-level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and architectural bottleneck for each application. This technique also provides predictive insight of future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effect on processor utilization without memory influence. They derive formulas for calculating CPI_0 (CPI without memory effect), and they quantify utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization, and empirical/analytical modeling.
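The notion of a CPI without memory effect derived from instruction-level counter data can be sketched as a weighted sum over instruction classes; the class names, counts, and per-class ideal CPI values below are assumptions for illustration, not the paper's formulas:

```python
# Hypothetical instruction counts collected from hardware performance
# counters for one application run (names and values are illustrative).
counts = {
    "integer": 6.0e9,
    "float":   1.5e9,
    "load":    2.5e9,
    "store":   1.0e9,
    "branch":  1.5e9,
}

# Assumed ideal cycles-per-instruction for each class on the target core
# (no memory stalls); e.g. two integer units give 0.5 cycles per integer op.
ideal_cpi = {
    "integer": 0.5,
    "float":   1.0,
    "load":    1.0,
    "store":   1.0,
    "branch":  1.5,   # includes an average misprediction penalty
}

total = sum(counts.values())
fractions = {k: v / total for k, v in counts.items()}

# CPI without memory effect: a weighted sum over instruction classes.
cpi0 = sum(fractions[k] * ideal_cpi[k] for k in counts)
print(round(cpi0, 3))  # → 0.82
```

Comparing this architectural lower bound against the measured CPI isolates how much of the observed slowdown is attributable to the memory hierarchy.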

  10. Animal models of enterovirus 71 infection: applications and limitations.

    Science.gov (United States)

    Wang, Ya-Fang; Yu, Chun-Keung

    2014-01-01

    Human enterovirus 71 (EV71) has emerged as a neuroinvasive virus that is responsible for several outbreaks in the Asia-Pacific region over the past 15 years. Appropriate animal models are needed to understand EV71 neuropathogenesis better and to facilitate the development of effective vaccines and drugs. Non-human primate models have been used to characterize and evaluate the neurovirulence of EV71 after the early outbreaks in late 1990s. However, these models were not suitable for assessing the neurovirulence level of the virus and were associated with ethical and economic difficulties in terms of broad application. Several strategies have been applied to develop mouse models of EV71 infection, including strategies that employ virus adaption and immunodeficient hosts. Although these mouse models do not closely mimic human disease, they have been applied to determine the pathogenesis of and treatment and prevention of the disease. EV71 receptor-transgenic mouse models have recently been developed and have significantly advanced our understanding of the biological features of the virus and the host-parasite interactions. Overall, each of these models has advantages and disadvantages, and these models are differentially suited for studies of EV71 pathogenesis and/or the pre-clinical testing of antiviral drugs and vaccines. In this paper, we review the characteristics, applications and limitation of these EV71 animal models, including non-human primate and mouse models. PMID:24742252

  11. Mathematical applications and modelling yearbook 2010, Association of Mathematics Educators

    CERN Document Server

    Scientific, World

    2010-01-01

    Mathematical Applications and Modelling is the second in the series of the yearbooks of the Association of Mathematics Educators in Singapore. The book is unique as it addresses a focused theme on mathematics education. The objective is to illustrate the diversity within the theme and present research that translates into classroom pedagogies.The book, comprising of 17 chapters, illuminates how application and modelling tasks may help develop the capacity of students to use mathematics in their present and future lives. Several renowned international researchers in the field of mathematical mo

  12. WWW Business Applications Based on the Cellular Model

    Institute of Scientific and Technical Information of China (English)

    Toshio Kodama; Tosiyasu L. Kunii; Yoichi Seki

    2008-01-01

    A cellular model based on the Incrementally Modular Abstraction Hierarchy (IMAH) is a novel model that can represent the architecture of and changes in cyberworlds, preserving invariants from a general level to a specific one. We have developed a data processing system called the Cellular Data System (CDS). In the development of business applications, you can prevent combinatorial explosion in the process of business design and testing by using CDS. In this paper, we have first designed and implemented wide-use algebra on the presentation level. Next, we have developed and verified the effectiveness of two general business applications using CDS: 1) a customer information management system, and 2) an estimate system.

  13. Recent Applications of Mesoscale Modeling to Nanotechnology and Drug Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Maiti, A; Wescott, J; Kung, P; Goldbeck-Wood, G

    2005-02-11

    Mesoscale simulations have traditionally been used to investigate structural morphology of polymer in solution, melts and blends. Recently we have been pushing such modeling methods to important areas of Nanotechnology and Drug delivery that are well out of reach of classical molecular dynamics. This paper summarizes our efforts in three important emerging areas: (1) polymer-nanotube composites; (2) drug diffusivity through cell membranes; and (3) solvent exchange in nanoporous membranes. The first two applications are based on a bead-spring-based approach as encoded in the Dissipative Particle Dynamics (DPD) module. The last application used density-based Mesoscale modeling as implemented in the Mesodyn module.
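As a flavor of the bead-spring DPD approach mentioned here, the standard Groot-Warren pairwise forces (conservative, dissipative, random) can be sketched for a single bead pair; all parameter values are illustrative and unrelated to the cited applications:

```python
import numpy as np

rng = np.random.default_rng(2)

# Standard DPD parameters (illustrative): repulsion a, friction gamma,
# cutoff rc, thermal energy kT, and timestep dt. The noise amplitude sigma
# follows from the fluctuation-dissipation relation sigma^2 = 2*gamma*kT.
a, gamma, rc = 25.0, 4.5, 1.0
kT, dt = 1.0, 0.04
sigma = np.sqrt(2.0 * gamma * kT)

def dpd_pair_force(r_i, r_j, v_i, v_j):
    """Total DPD force on bead i from bead j (zero beyond the cutoff rc)."""
    r_ij = r_i - r_j
    r = np.linalg.norm(r_ij)
    if r >= rc:
        return np.zeros(3)
    e = r_ij / r                        # unit vector from j to i
    w = 1.0 - r / rc                    # linear weight function
    f_c = a * w * e                     # conservative: soft repulsion
    f_d = -gamma * w**2 * np.dot(e, v_i - v_j) * e   # dissipative: friction
    f_r = sigma * w * rng.normal() / np.sqrt(dt) * e  # random: thermostat
    return f_c + f_d + f_r

f = dpd_pair_force(np.zeros(3), np.array([0.5, 0.0, 0.0]),
                   np.zeros(3), np.zeros(3))
```

The soft, bounded conservative force is what lets DPD take timesteps far beyond classical molecular dynamics, which is why it reaches the morphology scales discussed above.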

  14. Copula bivariate probit models: with an application to medical expenditures.

    Science.gov (United States)

    Winkelmann, Rainer

    2012-12-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the 'treatment') on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank copula outperforms the standard bivariate probit model. PMID:22025413

  15. Applicability of cooperative learning model in gastronomy education

    OpenAIRE

    SARIOĞLAN, Mehmet; CEVİZKAYA, Gülhan

    2016-01-01

    The purpose of the study is to reveal the applicability of the cooperative learning model, one of the vital models in gastronomy education. A learning model based on cooperativism is important for students' success in group work in gastronomy education. The study is divided into two parts, a "literature" part and a "model proposal" part. The literature review focuses on describing the cooperative learning model in gastronomy education. In the se...

  16. Life cycle Prognostic Model Development and Initial Application Results

    Energy Technology Data Exchange (ETDEWEB)

    Jeffries, Brien; Hines, Wesley; Nam, Alan; Sharp, Michael; Upadhyaya, Belle [The University of Tennessee, Knoxville (United States)

    2014-08-15

    In order to obtain more accurate Remaining Useful Life (RUL) estimates based on empirical modeling, a Lifecycle Prognostics algorithm was developed that integrates various prognostic models. These models can be categorized into three types based on the type of data they process. The application of multiple models takes advantage of the most useful information available as the system or component operates through its lifecycle. The Lifecycle Prognostics is applied to an impeller test bed, and the initial results serve as a proof of concept.

  17. Metabolic modeling of clostridia: current developments and applications.

    Science.gov (United States)

    Dash, Satyakam; Ng, Chiam Yu; Maranas, Costas D

    2016-02-01

    Anaerobic Clostridium spp. is an important bioproduction microbial genus that can produce solvents and utilize a broad spectrum of substrates including cellulose and syngas. Genome-scale metabolic (GSM) models are increasingly being put forth for various clostridial strains to explore their respective metabolic capabilities and suitability for various bioconversions. In this study, we have selected representative GSM models for six different clostridia (Clostridium acetobutylicum, C. beijerinckii, C. butyricum, C. cellulolyticum, C. ljungdahlii and C. thermocellum) and performed a detailed model comparison contrasting their metabolic repertoire. We also discuss various applications of these GSM models to guide metabolic engineering interventions as well as assessing cellular physiology. PMID:26755502

  18. Handbook of Real-World Applications in Modeling and Simulation

    CERN Document Server

    Sokolowski, John A

    2012-01-01

    This handbook provides a thorough explanation of modeling and simulation in the most useful, current, and predominant applied areas, such as transportation, homeland security, medicine, operational research, military science, and business modeling. The authors offer a concise look at the key concepts and techniques of modeling and simulation and then discuss how and why the presented domains have become leading applications. The book begins with an introduction of why modeling and simulation is a reliable analysis assessment tool for complex systems...

  19. CAD-model-based vision for space applications

    Science.gov (United States)

    Shapiro, Linda G.

    1988-01-01

    A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint using the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.

  20. Constitutive Modeling of Geomaterials Advances and New Applications

    CERN Document Server

    Zhang, Jian-Min; Zheng, Hong; Yao, Yangping

    2013-01-01

    The Second International Symposium on Constitutive Modeling of Geomaterials: Advances and New Applications (IS-Model 2012), is to be held in Beijing, China, during October 15-16, 2012. The symposium is organized by Tsinghua University, the International Association for Computer Methods and Advances in Geomechanics (IACMAG), the Committee of Numerical and Physical Modeling of Rock Mass, Chinese Society for Rock Mechanics and Engineering, and the Committee of Constitutive Relations and Strength Theory, China Institution of Soil Mechanics and Geotechnical Engineering, China Civil Engineering Society. This Symposium follows the first successful International Workshop on Constitutive Modeling held in Hong Kong, which was organized by Prof. JH Yin in 2007.   Constitutive modeling of geomaterials has been an active research area for a long period of time. Different approaches have been used in the development of various constitutive models. A number of models have been implemented in the numerical analyses of geote...

  1. CFD code fluent turbulence models application. Ansaldo's prototype modeling

    International Nuclear Information System (INIS)

    One of the main activities in the Nuclear Engineering and Fluid Mechanics Department of the Engineering School in Bilbao is the study of liquid metal behavior, for which the CFD code FLUENT is being used. Currently, the code is being applied to the use of lead-bismuth eutectic (LBE) as the coolant of an accelerator driven system (ADS) and also as the target for a neutron source. In this paper, ANSALDO's Energy Amplifier Demonstration Facility is simulated, paying attention only to the coolant. As will be explained later, natural convection is a very important issue, because the philosophy for safety systems in nuclear devices tends to favor passive technologies. The purpose is to avoid electrical machines such as pumps, so that the core remains coolable even if there is a blackout. To achieve this natural circulation, heat transfer plays a main role, and since turbulence enhances heat transfer, it is important to choose a good turbulence model to correctly simulate the coolant system of this ADS. (author)

  2. Mathematical and numerical foundations of turbulence models and applications

    CERN Document Server

    Chacón Rebollo, Tomás

    2014-01-01

    With applications to climate, technology, and industry, the modeling and numerical simulation of turbulent flows are rich with history and modern relevance. The complexity of the problems that arise in the study of turbulence requires tools from various scientific disciplines, including mathematics, physics, engineering, and computer science. Authored by two experts in the area with a long history of collaboration, this monograph provides a current, detailed look at several turbulence models from both the theoretical and numerical perspectives. The k-epsilon, large-eddy simulation, and other models are rigorously derived and their performance is analyzed using benchmark simulations for real-world turbulent flows. Mathematical and Numerical Foundations of Turbulence Models and Applications is an ideal reference for students in applied mathematics and engineering, as well as researchers in mathematical and numerical fluid dynamics. It is also a valuable resource for advanced graduate students in fluid dynamics,...

  3. A Comparison of Three Programming Models for Adaptive Applications

    Science.gov (United States)

    Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswa, Rupak; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for the different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the use of explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to performance gains. However, it may also suffer from the poor spatial locality of physically distributed shared data on large numbers of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates more balanced results for our application.

  4. Monte Carlo methods and applications for the nuclear shell model

    OpenAIRE

    Dean, D. J.; White, J A

    1998-01-01

    The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.

  5. Monte Carlo Methods and Applications for the Nuclear Shell Model

    International Nuclear Information System (INIS)

    The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems

  6. Application of dimensional analysis in systems modeling and control design

    CERN Document Server

    Balaguer, Pedro

    2013-01-01

    Dimensional analysis is an engineering tool that is widely applied to numerous engineering problems, but has only recently been applied to control theory and problems such as identification and model reduction, robust control, adaptive control, and PID control. Application of Dimensional Analysis in Systems Modeling and Control Design provides an introduction to the fundamentals of dimensional analysis for control engineers, and shows how they can exploit the benefits of the technique to theoretical and practical control problems.

  7. Improved dual sided doped memristor: modelling and applications

    OpenAIRE

    Anup Shrivastava; Muhammad Khalid; Komal Singh; Jawar Singh

    2014-01-01

    The memristor, a novel and emerging electronic device with a vast range of applications, suffers from poor frequency response and limited saturation length. In this paper, the authors present a novel and innovative device structure for the memristor with two active layers, together with its non-linear ionic drift model, for an improved frequency response and saturation length. The authors investigated and compared the I–V characteristics of the proposed model with those of conventional memristors and found better res...

  8. APPLICATION OF REGRESSION MODELLING TECHNIQUES IN DESALINATION OF SEA WATER BY MEMBRANE DISTILLATION

    Directory of Open Access Journals (Sweden)

    SELVI S. R

    2015-08-01

    Full Text Available The objective of this work is to assess the statistical significance of the experimental parameters on the performance of membrane distillation. A raw sea water sample, collected from Puducherry without pretreatment, was desalinated using the direct contact membrane distillation method. The experimental data, covering the effects of feed temperature, feed flow rate and feed concentration on the permeate flux, were analysed using statistical methods: a regression model was developed to correlate the input parameters (feed temperature, feed concentration and feed flow rate) with the output parameter (permeate flux). Since the performance of membrane distillation in desalination is characterised by the permeate flux, simple linear regression was used. The goodness of a model fit must always be validated, and here the regression model was validated using ANOVA. ANOVA estimates for the parameter study are reported; the coefficients obtained by regression analysis appear in the regression equation, and the input parameter with the highest coefficient is the most significant, influencing the response most strongly. Feed flow rate and feed temperature have a higher influence on the permeate flux than feed concentration, whose coefficient was found to be negative, indicating a less significant factor. The chemical composition of the sea water was determined by water quality analysis. The TDS of the membrane-distilled water was 18 ppm, against an initial feed TDS of 27,720 ppm. The experiments gave a salt rejection of 99%, and the water analysis report confirms that the distillate obtained by this desalination process is potable.
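    The workflow described above — fit a linear regression of permeate flux on the feed variables, then validate the fit with an ANOVA-style decomposition — can be sketched on synthetic data. All numbers below are illustrative, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictors: feed temperature, flow rate, feed concentration
n = 40
temp = rng.uniform(50, 80, n)          # deg C
flow = rng.uniform(1, 5, n)            # L/min
conc = rng.uniform(10000, 30000, n)    # ppm

# Synthetic permeate flux: positive temperature/flow effects,
# small negative concentration effect, plus noise
flux = 0.5 * temp + 2.0 * flow - 1e-4 * conc + rng.normal(0, 1, n)

# Fit the linear regression flux ~ temp + flow + conc by least squares
X = np.column_stack([np.ones(n), temp, flow, conc])
beta, *_ = np.linalg.lstsq(X, flux, rcond=None)

# ANOVA-style decomposition: total SS = regression SS + residual SS
fitted = X @ beta
ss_res = np.sum((flux - fitted) ** 2)
ss_tot = np.sum((flux - flux.mean()) ** 2)
ss_reg = ss_tot - ss_res

# Overall F statistic (p predictors, n - p - 1 residual df) and R^2
p = 3
F = (ss_reg / p) / (ss_res / (n - p - 1))
r2 = ss_reg / ss_tot
print(f"coefficients: {beta.round(4)}")
print(f"R^2 = {r2:.3f}, F = {F:.1f}")
```

A large F relative to the F-distribution critical value indicates the regression is significant overall; the sign of the concentration coefficient mirrors the negative effect reported in the abstract.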

  9. A spatial haplotype copying model with applications to genotype imputation.

    Science.gov (United States)

    Yang, Wen-Yun; Hormozdiari, Farhad; Eskin, Eleazar; Pasaniuc, Bogdan

    2015-05-01

    Ever since its introduction, the haplotype copy model has proven to be one of the most successful approaches for modeling genetic variation in human populations, with applications ranging from ancestry inference to genotype phasing and imputation. Motivated by coalescent theory, this approach assumes that any chromosome (haplotype) can be modeled as a mosaic of segments copied from a set of chromosomes sampled from the same population. At the core of the model is the assumption that any chromosome from the sample is equally likely to contribute a priori to the copying process. Motivated by recent works that model genetic variation in a geographic continuum, we propose a new spatial-aware haplotype copy model that jointly models geography and the haplotype copying process. We extend hidden Markov models of haplotype diversity such that at any given location, haplotypes that are closest in the genetic-geographic continuum map are a priori more likely to contribute to the copying process than distant ones. Through simulations starting from the 1000 Genomes data, we show that our model achieves superior accuracy in genotype imputation over the standard spatial-unaware haplotype copy model. In addition, we show the utility of our model in selecting a small personalized reference panel for imputation that leads to both improved accuracy as well as to a lower computational runtime than the standard approach. Finally, we show our proposed model can be used to localize individuals on the genetic-geographical map on the basis of their genotype data. PMID:25526526
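    The copying process described above is the classic Li-Stephens-style hidden Markov model: the hidden state is which reference haplotype is being copied, transitions model recombination, and emissions model copying errors. A minimal forward-algorithm sketch on a toy panel (haplotypes, alleles and parameter values all hypothetical) might look like:

```python
import numpy as np

# Toy reference panel: K haplotypes over M biallelic sites (0/1 alleles)
ref = np.array([
    [0, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
])
target = np.array([0, 1, 1, 1, 0])
K, M = ref.shape

theta = 0.05   # copying-error (mutation) probability
rho = 0.1      # per-site switch (recombination) probability

# Emission: probability of the observed allele given the copied haplotype
def emit(site):
    return np.where(ref[:, site] == target[site], 1 - theta, theta)

# Forward algorithm: alpha[k] = P(observations so far, copying haplotype k)
alpha = emit(0) / K   # uniform prior over which haplotype is copied
for m in range(1, M):
    stay = (1 - rho) * alpha              # keep copying the same haplotype
    switch = rho * alpha.sum() / K        # or switch uniformly to any one
    alpha = (stay + switch) * emit(m)

likelihood = alpha.sum()
print(f"copying-model likelihood: {likelihood:.6g}")
```

The spatial-aware extension proposed in the paper would, in effect, replace the uniform `switch` prior with weights that favor haplotypes close in the genetic-geographic continuum; the uniform version here is the standard baseline model.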

  10. Nonlinear Mathematical Modeling in Pneumatic Servo Position Applications

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Valdiero

    2011-01-01

    Full Text Available This paper addresses a new methodology for the mathematical modeling and selection of servo-pneumatic actuators, based on a study of their dynamic behavior in engineering applications. The pneumatic actuator is very common in industrial applications because it has the following advantages: easy and simple maintenance, relatively low cost, self-cooling properties, good power density (power/dimension ratio), fast action with high accelerations, and installation flexibility. The proposed fifth-order nonlinear mathematical model represents the main characteristics of this nonlinear dynamic system: the servo valve dead zone, the air flow-pressure relationship through the valve orifice, air compressibility, and friction effects between contact surfaces in the actuator seals. Simulation results show the dynamic performance of different pneumatic cylinders, in order to identify which features contribute to better system behavior. Knowledge of this behavior allows an appropriate choice of pneumatic actuator, contributing to the success of precise control in several applications.
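    One of the nonlinearities listed above, the servo valve dead zone, is commonly handled in control practice by inverse compensation: pre-distorting the command so that the valve's dead zone is cancelled. A minimal sketch (thresholds and gain are illustrative values, not the paper's identified parameters):

```python
import numpy as np

# Idealized valve dead-zone: no flow for inputs inside [-delta_l, delta_r],
# linear gain outside — a common element of servo-pneumatic models
def dead_zone(u, delta_l=0.4, delta_r=0.5, gain=1.0):
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    out[u > delta_r] = gain * (u[u > delta_r] - delta_r)
    out[u < -delta_l] = gain * (u[u < -delta_l] + delta_l)
    return out

# Inverse compensation: pre-distort the command so the cascade is ~identity
def dead_zone_inverse(v, delta_l=0.4, delta_r=0.5, gain=1.0):
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    out[v > 0] = v[v > 0] / gain + delta_r
    out[v < 0] = v[v < 0] / gain - delta_l
    return out

u = np.linspace(-2, 2, 9)
compensated = dead_zone(dead_zone_inverse(u))
print(np.allclose(compensated, u))  # → True
```

In a full model such as the fifth-order one described, this static nonlinearity would sit in series with the valve flow-pressure map and the cylinder pressure dynamics.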

  11. Application of Active Contour Model in Tracking Sequential Nearshore Waves

    Institute of Scientific and Technical Information of China (English)

    Yu-Hung HSIAO; Min-Chih HUANG

    2009-01-01

    In the present study, a generalized active contour model of gradient vector flow is combined with the video techniques of the Argus system to delineate and track sequential nearshore wave crest profiles in the shoaling process, up to their breaking on the shoreline. Previous applications of active contour models to water wave problems have been limited to controlled wave tank experiments. By contrast, our application in this study is in a nearshore field environment, where oblique images obtained under natural and varying ambient light conditions are employed. Existing Argus techniques produce plane image data or time series data from a selected small subset of discrete pixels. By contrast, the active contour model produces line image data along continuous visible curves such as wave crest profiles. The combination of these two existing techniques, the active contour model and the Argus methodologies, facilitates estimation of the directional wave field and phase speeds within the whole area covered by the camera. These estimates are useful for the inverse calculation of the water depth. Applications of the present techniques to Hsi-tzu Bay, where a beach restoration program is currently being undertaken, are illustrated. This extension of the Argus video techniques provides a new application of optical remote sensing to study the hydrodynamics and morphology of a nearshore environment.

  12. Microwave applicator for hyperthermia treatment on in vivo melanoma model

    Czech Academy of Sciences Publication Activity Database

    Togni, P.; Vrba, J.; Vannucci, Luca

    2010-01-01

    Vol. 48, No. 3 (2010), pp. 285-292. ISSN 0140-0118 R&D Projects: GA AV ČR(CZ) IAA500200510 Institutional research plan: CEZ:AV0Z50200510 Keywords: Melanoma in vivo model * Superficial hyperthermia * Microwave applicator Subject RIV: EC - Immunology Impact factor: 1.791, year: 2010

  13. Theoretical outdoor noise propagation models: Application to practical predictions

    Science.gov (United States)

    Tuominen, H. T.; Lahti, T.

    1982-02-01

    The theoretical calculation approaches for outdoor noise propagation are reviewed. Possibilities for their application to practical engineering calculations are outlined. A calculation procedure, which is a combination and extension of several theoretical models, is described. Calculation examples are compared with the results of some propagation studies.

  14. Risk Measurement and Risk Modelling using Applications of Vine Copulas

    NARCIS (Netherlands)

    D.E. Allen (David); M.J. McAleer (Michael); A.K. Singh (Abhay)

    2014-01-01

    This paper features an application of Regular Vine copulas, a novel and recently developed statistical and mathematical tool which can be applied in the assessment of composite financial risk. Copula-based dependence modelling is a popular tool in financial applica...

  15. Degradation modeling with application to aging and maintenance effectiveness evaluations

    International Nuclear Information System (INIS)

    This paper describes a modeling approach to analyze component degradation and failure data to understand the aging process of components. As used here, degradation modeling is the analysis of information on component degradation in order to develop models of the process and its implications. This particular modeling focuses on the analysis of the times of component degradations, to model how the rate of degradation changes with the age of the component. The methodology presented also discusses the effectiveness of maintenance as applicable to aging evaluations. The specific applications which are performed show quantitative models of component degradation rates and component failure rates from plant-specific data. The statistical techniques which are developed and applied allow aging trends to be effectively identified in the degradation data, and in the failure data. Initial estimates of the effectiveness of maintenance in limiting degradations from becoming failures also are developed. These results are important first steps in degradation modeling, and show that degradation can be modeled to identify aging trends. 2 refs., 8 figs

  16. Application of nonlinear tyre models to analyse shimmy

    Science.gov (United States)

    Ran, Shenhai; Besselink, I. J. M.; Nijmeijer, H.

    2014-05-01

    This paper focuses on the application of different tyre models to analyse the shimmy phenomenon. Tyre models with the Magic Formula and a non-constant relaxation length are introduced. The energy flow method is applied to compare these tyre models. A trailing wheel suspension is used to analyse shimmy stability and to evaluate the differences between tyre models. Linearisation and nonlinear techniques, including bifurcation analysis, are applied to analyse this system. Extending the suspension model with lateral flexibility and structural damping reveals more information on shimmy stability. Although the nonlinear tyre models do not change the stability of equilibria, they determine the magnitude of the oscillation. It is concluded that the non-constant relaxation length should be included in the shimmy analysis for more accurate results at large amplitude.
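    The Magic Formula mentioned above is commonly written, for lateral force versus slip angle, as F = D·sin(C·arctan(Bα − E(Bα − arctan(Bα)))). A sketch with illustrative coefficient values (not those used in the paper):

```python
import numpy as np

# Pacejka "Magic Formula" for lateral tyre force as a function of slip angle.
# B: stiffness, C: shape, D: peak, E: curvature factor (values illustrative)
def magic_formula(alpha, B=10.0, C=1.9, D=1.0, E=0.97):
    Ba = B * alpha
    return D * np.sin(C * np.arctan(Ba - E * (Ba - np.arctan(Ba))))

# Evaluate over a range of slip angles (radians)
alpha = np.linspace(-0.3, 0.3, 7)
print(magic_formula(alpha).round(3))
```

The formula is odd in the slip angle and saturates at large angles, which is why replacing a linear tyre characteristic with it bounds the amplitude of shimmy oscillations without changing the stability of the equilibria.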

  17. Extending the Interaction Flow Modeling Language (IFML) for Model Driven Development of Mobile Applications Front End

    OpenAIRE

    Brambilla, Marco; Mauri, Andrea; Umuhoza, Eric

    2014-01-01

    Front-end design of mobile applications is a complex and multidisciplinary task, where many perspectives intersect and the user experience must be perfectly tailored to the application objectives. However, development of mobile user interactions is still largely a manual task, which entails high risks of errors, inconsistencies and inefficiencies. In this paper we propose a model-driven approach to mobile application development based on the IFML standard. We propose an extension of the Inter...

  18. Application of existing design software to problems in neuronal modeling.

    Science.gov (United States)

    Vranić-Sowers, S; Fleshman, J W

    1994-03-01

    In this communication, we describe the application of the Valid/Analog Design Tools circuit simulation package, PC Workbench, to the problem of modeling the electrical behavior of neural tissue. A nerve cell is represented as an equivalent electrical circuit using compartmental models. Several types of nonexcitable and excitable membranes are designed, and simulation results for different types of electrical stimuli are compared to the corresponding analytical data. It is shown that the hardware/software platform and the models developed constitute an accurate, flexible, and powerful way to study neural tissue. PMID:8045583
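    In the compartmental approach, each patch of passive membrane is an RC circuit obeying C·dV/dt = −(V − E_rest)/R + I_inj, which is what a circuit simulator integrates. A single-compartment forward-Euler sketch with illustrative parameter values (not those from the paper):

```python
# Single passive compartment: C dV/dt = -(V - E_rest)/R + I_inj
C = 1e-9         # membrane capacitance, farads (illustrative)
R = 1e8          # membrane resistance, ohms
E_rest = -0.065  # resting potential, volts
I_inj = 1e-10    # injected current step, amps

dt = 1e-4        # time step, seconds
steps = 5000     # 0.5 s, i.e. five membrane time constants (tau = R*C = 0.1 s)
V = E_rest
for _ in range(steps):
    dVdt = (-(V - E_rest) / R + I_inj) / C
    V += dt * dVdt

# Steady state approaches E_rest + I*R = -65 mV + 10 mV = -55 mV
print(f"V after {steps * dt:.2f} s: {V * 1e3:.2f} mV")
```

Multi-compartment models couple many such circuits through axial resistances; the analytical charging curve V(t) = E_rest + I·R·(1 − e^(−t/RC)) is the comparison target the abstract alludes to.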

  19. Modelling and simulation of diffusive processes methods and applications

    CERN Document Server

    Basu, SK

    2014-01-01

    This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport

  20. Application of thermospheric general circulation models for space weather operations

    Science.gov (United States)

    Fuller-Rowell, T.; Minter, C.; Codrescu, M.

    Solar irradiance is the dominant source of heat, ionization, and dissociation in the thermosphere; to a large extent it drives the global dynamics and controls the neutral composition and density structure. Neutral composition is important for space weather applications because of its impact on ionospheric loss rates, and neutral density is critical for satellite drag prediction. The future of thermospheric general circulation models in space weather operations lies in their use as state propagators in data assimilation techniques. The physical models can match empirical models in accuracy provided accurate drivers are available, but their true value comes when they are combined with data in an optimal way. Two such applications have recently been developed. The first uses a Kalman filter to combine space-based observations of airglow with physical model predictions to produce global maps of neutral composition. The output of the filter will be used within the GAIM (Global Assimilation of Ionospheric Measurement) model developed under a parallel effort. The second filter uses satellite tracking and remote sensing data for the specification of neutral density. Both applications rely on accurate estimates of the solar EUV and magnetospheric drivers.
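    The state-propagator-plus-Kalman-filter idea can be illustrated in one dimension, with identity dynamics standing in for the physical model's propagation step; all noise levels and numbers are hypothetical, far simpler than a thermospheric GCM:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D assimilation: a "model" propagates the state with process noise;
# noisy "observations" correct it at each step (all values hypothetical)
n_steps = 200
true_state = np.cumsum(rng.normal(0, 0.1, n_steps)) + 10.0
obs = true_state + rng.normal(0, 1.0, n_steps)

Q, R = 0.1 ** 2, 1.0 ** 2   # process and observation noise variances
x, P = obs[0], 1.0          # initial estimate and its variance

estimates = []
for z in obs:
    # Predict: model propagation (identity dynamics), variance grows by Q
    P = P + Q
    # Update: blend prediction and observation using the Kalman gain
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    estimates.append(x)

estimates = np.array(estimates)
rmse_filter = np.sqrt(np.mean((estimates - true_state) ** 2))
rmse_obs = np.sqrt(np.mean((obs - true_state) ** 2))
print(f"raw-observation RMSE: {rmse_obs:.3f}, filtered RMSE: {rmse_filter:.3f}")
```

The filtered estimate tracks the state more closely than the raw observations, which is the payoff of combining a physical propagator with data "in an optimal way"; the operational systems replace the identity dynamics with a full general circulation model.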

  1. Model Oriented Application Generation for Industrial Control Systems

    CERN Document Server

    Copy, B; Blanco Vinuela, E; Fernandez Adiego, B; Nogueira Ferandes, R; Prieto Barreiro, I

    2011-01-01

    The CERN Unified Industrial Control Systems framework (UNICOS) is a software generation methodology and a collection of development tools that standardizes the design of industrial control applications [1]. A Software Factory, named the UNICOS Application Builder (UAB) [2], was introduced to ease extensibility and maintenance of the framework, introducing a stable metamodel, a set of platform-independent models and platform-specific configurations against which code-generation plugins and configuration-generation plugins can be written. Such plugins currently target PLC programming environments (Schneider and SIEMENS PLCs) as well as SIEMENS WinCC Open Architecture SCADA (previously known as ETM PVSS), but are being expanded to cover more and more aspects of process control systems. We present what constitutes the UNICOS metamodel and the models in use, how these models can be used to capture knowledge about industrial control systems, and how this knowledge can be leveraged to generate both code and configuratio...

  2. The Application of the Jerome Model and the Horace Model in Translation Practice

    Institute of Scientific and Technical Information of China (English)

    WU Jiong

    2015-01-01

    The Jerome model and the Horace model have had a great influence on translation theory and practice since ancient times. This paper starts from a comparative study of the two models, discussing their similarities, differences and weaknesses. Then, through a case study, it analyzes the application of the two models to English-Chinese translation. In the end, it draws the conclusion that no generally accepted translation criterion exists; different types of texts require different translation criteria.

  3. Application distribution model and related security attacks in VANET

    Science.gov (United States)

    Nikaein, Navid; Kanti Datta, Soumya; Marecar, Irshad; Bonnet, Christian

    2013-03-01

    In this paper, we present a model for application distribution and related security attacks in dense vehicular ad hoc networks (VANETs) and sparse VANETs, which form a delay tolerant network (DTN). We study the vulnerabilities of VANETs to evaluate the attack scenarios and introduce a new attacker model as an extension to the work done in [6]. A VANET model is then proposed that supports application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution are studied in detail. We have identified key attacks (e.g. malware, spamming and phishing, software attacks and threats to location privacy) for dense VANETs and two attack scenarios for sparse VANETs. It is shown that attacks can be launched by distributing malicious applications and injecting malicious code into the On Board Unit (OBU) by exploiting OBU software security holes. The consequences of such security attacks are described. Finally, countermeasures, including the concept of a sandbox, are also presented in depth.

  4. Monte Carlo modelling of positron transport in real world applications

    International Nuclear Information System (INIS)

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most current positron applications involving gaseous and liquid media. Ongoing work on the measurement of cross-sections for positron interactions with atoms and molecules, together with swarm calculations for positrons in gases, has led to the establishment of good cross-section sets for positron interactions with gases commonly used in real-world applications. Using the standard Monte Carlo technique, and codes that can follow both low-energy (down to thermal) and high-energy (up to keV) particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews results on modelling Surko-type positron buffer gas traps, the application of the rotating wall technique, and the simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
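    The swarm-style Monte Carlo approach — sampling exponential free flights and collision outcomes particle by particle — can be sketched as follows. Geometry, scattering law and probabilities are toy values, not a real positron cross-section set:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal swarm-style sketch: each particle takes exponentially distributed
# free flights and scatters isotropically until a collision "annihilates" it
n_particles = 20000
mean_free_path = 1.0   # arbitrary length units
p_annihilate = 0.1     # probability that a collision terminates the particle

depths = np.zeros(n_particles)
for i in range(n_particles):
    z, mu = 0.0, 1.0   # depth and direction cosine; launched straight forward
    while True:
        z += mu * rng.exponential(mean_free_path)
        if rng.random() < p_annihilate:
            break
        mu = rng.uniform(-1, 1)   # isotropic scattering in the z-cosine
    depths[i] = z

print(f"mean terminal depth: {depths.mean():.3f}")
```

A production code replaces the constant mean free path and termination probability with energy-dependent cross-sections for each process (elastic scattering, excitation, ionization, positronium formation, annihilation) and tracks the particle energy between collisions.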

  5. Medical applications of model-based dynamic thermography

    Science.gov (United States)

    Nowakowski, Antoni; Kaczmarek, Mariusz; Ruminski, Jacek; Hryciuk, Marcin; Renkielska, Alicja; Grudzinski, Jacek; Siebert, Janusz; Jagielak, Dariusz; Rogowski, Jan; Roszak, Krzysztof; Stojek, Wojciech

    2001-03-01

    The proposal to use active thermography in medical diagnostics is promising in some applications concerning investigation of directly accessible parts of the human body. The combination of dynamic thermograms with thermal models of the investigated structures offers the attractive possibility of internal structure reconstruction based on the different thermal properties of biological tissues. Measurements of temperature distribution synchronized with external light excitation allow registration of dynamic changes in local temperature, which depend on heat-exchange conditions. Preliminary results of active thermography applications in medicine are discussed. For skin and under-skin tissues, an equivalent thermal model may be determined. For the assumed model, its effective parameters may be reconstructed from the results of transient thermal processes. For known thermal diffusivity and conductivity of specific tissues, the local thickness of a two- or three-layer structure may be calculated. Results for some medical cases, as well as reference data from in vivo studies on animals, are presented. The method was also applied to evaluate the state of the human heart during open-chest cardio-surgical interventions. Reference studies of evoked heart infarct in pigs are also reported. We see this technique, new in medical applications, as a promising diagnostic tool. It is a fully non-invasive, clean, handy, fast and affordable method, giving not only a qualitative view of the investigated surfaces but also objective quantitative measurement results, accurate enough for many applications including fast screening of affected tissues.

  6. Formal model based methodology for developing software for nuclear applications

    International Nuclear Information System (INIS)

    The approach used in model-based design is to build a model of the system in a graphical/textual language. In the older model-based design approach, the correctness of the model is usually established by simulation. Simulation, which is analogous to testing, cannot guarantee that the design meets the system requirements under all possible scenarios. This is, however, possible if the modeling language is based on formal semantics, so that the developed model can be subjected to formal verification of properties based on the specification. The verified model can then be translated into an implementation through a reliable/verified code generator, thereby reducing the need for low-level testing. Such a methodology is admissible per the guidelines of the IEC 60880 standard applicable to software used in computer-based systems performing category A functions in nuclear power plants, and would also be acceptable for category B functions. In this article, we describe our experience in the implementation and formal verification of important controllers used in the process control system of a nuclear reactor. We used SCADE (Safety Critical System Analysis and Design Environment) to model the controllers. The modeling language used in SCADE is based on the synchronous dataflow model of computation. A set of safety properties has been verified using formal verification techniques.

  7. Recent improvements in atmospheric environment models for Space Station applications

    Science.gov (United States)

    Anderson, B. Jeffrey; Suggs, Ronnie J.; Smith, Robert E.; Hickey, Michael; Catlett, Karen

    1991-01-01

    The capability of empirical models of the earth's thermosphere must be continually updated if they are to keep pace with their many applications in the aerospace industry. This paper briefly summarizes the progress of several such efforts in support of the Space Station Program. The efforts consist of the development of data bases, analytical studies of the data, and evaluation and intercomparison of thermosphere models. The geomagnetic storm model of Slowey does not compare as well to the MSIS-86 model as does the Marshall Engineering Thermosphere (MET). LDEF orbit decay data are used to evaluate the performance of the MET and MSIS-86 during a period of high solar activity, equal to or exceeding the highest levels that existed during the time of the original data sets upon which these models are based.

  8. Structural Equation Modeling: Theory and Applications in Forest Management

    Directory of Open Access Journals (Sweden)

    Tzeng Yih Lam

    2012-01-01

    Full Text Available Forest ecosystem dynamics are driven by a complex array of simultaneous cause-and-effect relationships. Understanding this complex web requires specialized analytical techniques such as Structural Equation Modeling (SEM). The SEM framework and implementation steps are outlined in this study, and we then demonstrate the technique by application to overstory-understory relationships in mature Douglas-fir forests in the northwestern USA. An SEM model was formulated with (1) a path model representing the effects of successively higher layers of vegetation on late-seral herbs through processes such as light attenuation and (2) a measurement model accounting for measurement errors. The fitted SEM model suggested a direct negative effect of light attenuation on late-seral herb cover but a direct positive effect of northern aspect. Moreover, many processes have indirect effects mediated through midstory vegetation. SEM is recommended as a forest management tool for designing silvicultural treatments and systems for attaining complex arrays of management objectives.
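    The path-model half of such an SEM can be illustrated with a toy mediation sketch: the indirect effect of an upper vegetation layer on herb cover, mediated through the midstory, is the product of the two path coefficients. This is a hypothetical numpy-only sketch of an observed-variable path model with simulated data; a full SEM with latent variables and a measurement model would need a dedicated package.

```python
import numpy as np

# Hypothetical data loosely mirroring the overstory -> midstory -> herbs
# path; all coefficients and sample sizes are invented for illustration.
rng = np.random.default_rng(0)
n = 500
overstory = rng.normal(size=n)
midstory = 0.6 * overstory + rng.normal(scale=0.5, size=n)
herbs = -0.8 * midstory + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """Slope from an intercept+slope least-squares fit."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = ols_slope(overstory, midstory)   # path: overstory -> midstory
b = ols_slope(midstory, herbs)       # path: midstory -> herbs
indirect = a * b                     # mediated effect of overstory on herbs
print(a, b, indirect)
```

With the simulated coefficients above, the recovered paths should sit near 0.6 and -0.8, giving an indirect effect near -0.48.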

  9. Development of aerosol models for NPP applications (AMY). Aerosol model development for nuclear applications

    International Nuclear Information System (INIS)

    The AMY project concentrates on understanding and modelling of deposition-resuspension phenomena of aerosols in pipe flow. The aim is to develop a calculation model that could resolve the current deficiencies in aerosol deposition modelling in turbulent flows, and to implement the models into the tools that are used for calculating fission product behaviour and release in severe reactor accidents. These tools are APROS SA, which is used for simulating severe accident phenomena and the progression of the accident, and SaTu (a support system for radiation experts), which was originally designed to estimate radiation levels and radioactive releases during an accident situation. In addition to the deposition-resuspension model, other important models are to be implemented in the tools mentioned above. Revaporisation of deposited fission products from primary circuit surfaces may increase the releases into the reactor containment and further into the environment, and thus the phenomenon should be taken into account. Models for estimating the environmental consequences will be implemented in the SaTu system as well, and the system will be modified to be able to describe nuclear power plants other than the Loviisa plant. Another important feature for source term calculations in PSA level 2 analyses is the implementation of the uncertainty calculation environment in SaTu. (orig.)

  10. Model-Driven Development of Automation and Control Applications: Modeling and Simulation of Control Sequences

    Directory of Open Access Journals (Sweden)

    Timo Vepsäläinen

    2014-01-01

    Full Text Available The scope and responsibilities of control applications are increasing due to, for example, the emergence of the industrial internet. To meet the challenge, model-driven development techniques have been under active research in the application domain. Simulations that have traditionally been used in the domain, however, have not yet been sufficiently integrated into model-driven control application development. In this paper, a model-driven development process that includes support for design-time simulations is complemented with support for simulating sequential control functions. The approach is implemented with open source tools and demonstrated by creating and simulating a control system model in closed loop with a large and complex model of a paper industry process.

  11. Human eye modelling for ophthalmic simulators project for clinic applications

    International Nuclear Information System (INIS)

    Most eye tumors are treated by surgical means, which involves the enucleation of the affected eye. In terms of treatment and control of disease, there is brachytherapy, which often utilizes small applicators of Co-60, I-125, Ru-106, Ir-192, etc. These methods have been shown to be very efficient but highly costly. The objective of this work is to propose detailed simulator modelling for eye characterization. Additionally, this study can contribute to the design and construction of a new applicator in order to reduce the cost and allow more patients to be treated.

  12. Overview on available animal models for application in leukemia research

    International Nuclear Information System (INIS)

    The term "leukemia" encompasses a group of diseases with a variable clinical and pathological presentation. Its cellular origin, its biology and the underlying molecular genetic alterations determine the very variable and individual disease phenotype. The focus of this review is to discuss the most important guidelines to be taken into account when we aim at developing an "ideal" animal model to study leukemia. The animal model should mimic all the clinical, histological and molecular genetic characteristics of the human phenotype and should be applicable as a clinically predictive model. It should meet all the requirements to be used as a standardized model adaptable to basic research as well as to pharmaceutical practice. Furthermore, it should fulfill all the criteria to investigate environmental risk factors and the role of genomic mutations, and be applicable for therapeutic testing. These constraints limit the usefulness of some existing animal models, which are however very valuable for basic research. Hence, in this review we primarily focus on genetically engineered mouse models (GEMMs) to study the most frequent types of childhood leukemia. GEMMs are robust models with relatively low site-specific variability which can, with the help of the latest gene-modulating tools, be adapted to individual clinical and research questions. Moreover, they offer the possibility to restrict oncogene expression to a defined target population and to regulate its expression level as well as the timing of its activity. Until recently it was only possible in individual cases to develop a murine model which fulfills the above-mentioned requirements. Hence the development of new regulatory elements to control targeted oncogene expression should be a priority. Tightly controlled and cell-specific oncogene expression can then be combined with a knock-in approach and will yield a robust murine model, which enables almost physiologic oncogene

  13. Modelling of transport processes in porous media for energy applications

    Energy Technology Data Exchange (ETDEWEB)

    Kangas, M.

    1996-12-31

    Flows in porous media are encountered in many branches of technology. In these phenomena, a fluid of some sort flows through the porous matrix of a solid medium. Examples of the fluid are water, air, gas and oil. The solid matrix can be soil, fissured rock, ceramics, filter paper, etc. The flow is in many cases accompanied by transfer of heat or solute within the fluid or between the fluid and the surrounding solid matrix. Chemical reactions or microbiological processes may also be taking place in the system. In this thesis, a 3-dimensional computer simulation model, THETA, for the coupled transport of fluid, heat, and solute in porous media has been developed and applied to various problems in the field of energy research. Although also applicable to porous medium applications in general, the version of the model described and used in this work is intended for studying transport processes in aquifers, which are geological formations containing groundwater. Highlights of the model include versatile input and output routines, as well as modularity, which, for example, enables an easy adaptation of the model for use as a subroutine in large energy system simulations. Special attention in the model development has been paid to high-flow conditions, which may be present in Nordic esker aquifers located close to the ground surface. The simulation model has been written in the FORTRAN 77 programming language, enabling seamless operation in both PC and mainframe environments. For PC simulation, a special graphical user interface has been developed. The model has been used with success in a wide variety of applications, ranging from basic thermal analyses to thermal energy storage system evaluations and nuclear waste disposal simulations. The studies have shown that thermal energy storage is feasible also in Nordic high-flow aquifers, although at the cost of a lower recovery temperature level, usually necessitating the use of heat pumps. In the nuclear waste studies, it
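    The kind of coupled transport equation such aquifer models solve can be illustrated in one dimension: an explicit finite-difference scheme for advection-diffusion of a solute, c_t + v c_x = D c_xx. This is a minimal numpy sketch, not the THETA code; the grid, velocity and dispersion values are invented, and a 3D model would add heat coupling and variable properties.

```python
import numpy as np

# Minimal sketch (not THETA): explicit finite differences for 1D
# advection-diffusion of a solute through a porous column.
nx, dx, dt = 100, 1.0, 0.1
v, D = 1.0, 0.5                      # pore velocity, dispersion coefficient
c = np.zeros(nx)
c[0] = 1.0                           # fixed-concentration inlet boundary

# Stability needs v*dt/dx <= 1 (advection) and D*dt/dx**2 <= 0.5 (diffusion);
# here the Courant number is 0.1 and the diffusion number is 0.05.
for _ in range(200):
    cn = c.copy()
    # upwind advection + central diffusion on interior nodes
    c[1:-1] = (cn[1:-1]
               - v * dt / dx * (cn[1:-1] - cn[:-2])
               + D * dt / dx**2 * (cn[2:] - 2 * cn[1:-1] + cn[:-2]))
    c[0] = 1.0                       # re-impose inlet
    c[-1] = c[-2]                    # zero-gradient outlet

# After t = 20, the concentration front has advected roughly v*t = 20 nodes.
print(c[5], c[50])
```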

  14. High-Fidelity Geometric Modelling for Biomedical Applications

    Energy Technology Data Exchange (ETDEWEB)

    Zeyun Yu, Michael Holst, and J.A. McCammon

    2008-04-01

    We describe a combination of algorithms for high-fidelity geometric modeling and mesh generation. Although our methods and implementations are application-neutral, our primary target application is multiscale biomedical models that range in scale across the molecular, cellular, and organ levels. Our software toolchain implementing these algorithms is general in the sense that it can take as input a molecule in PDB/PQR form, a 3D scalar volume, or a user-defined triangular surface mesh that may have very low quality. The main goal of the work presented here is to generate high-quality, smooth surface triangulations from the aforementioned inputs, and to reduce mesh sizes by mesh coarsening. Tetrahedral meshes are also generated for finite element analysis in biomedical applications. Experiments on a number of bio-structures are demonstrated, showing that our approach possesses several desirable properties: feature preservation, local adaptivity, high quality, and smoothness (for surface meshes). The availability of this software toolchain will give researchers in computational biomedicine and other modeling areas access to higher-fidelity geometric models.

  15. Language Model Applications to Spelling with Brain-Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Anderson Mora-Cortes

    2014-03-01

    Full Text Available Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems, particularly in conjunction with other AAL technologies.
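    The word-prediction component such spellers rely on can be sketched in a few lines: rank completions of a typed prefix by corpus frequency, so the speller can offer likely words before they are fully spelled. A deployed BCI speller would use a large corpus and an n-gram or neural model; the tiny frequency table here is purely illustrative.

```python
from collections import Counter

# Toy "corpus" standing in for a real language-model training set.
corpus = ("the quick brown fox the quiet queen "
          "the question was quite quick").split()
freq = Counter(corpus)

def predict(prefix, k=3):
    """Return up to k most frequent words starting with prefix,
    breaking frequency ties alphabetically."""
    matches = [w for w in freq if w.startswith(prefix)]
    return sorted(matches, key=lambda w: (-freq[w], w))[:k]

print(predict("qu"))   # → ['quick', 'queen', 'question']
```

A speller would present these candidates as selectable targets, reducing the number of selections needed per word.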

  16. Intelligent control based on intelligent characteristic model and its application

    Institute of Scientific and Technical Information of China (English)

    吴宏鑫; 王迎春; 邢琰

    2003-01-01

    This paper presents a new intelligent control method based on an intelligent characteristic model for a kind of complicated plant with nonlinearities and uncertainties, whose controlled output variables cannot be measured online continuously. The basic idea of this method is to utilize intelligent techniques to form the characteristic model of the controlled plant according to the principle of combining the characteristics of the plant with the control requirements, and then to present a new design method for an intelligent controller based on this characteristic model. First, the modeling principles and expression of the intelligent characteristic model are presented. Then, based on the description of the intelligent characteristic model, the design principles and methods of the intelligent controller, composed of several open-loop and closed-loop sub-controllers with qualitative and quantitative information, are given. Finally, the application of this method to alumina concentration control in a real aluminum electrolytic process is introduced. It has been proved in practice that the above methods not only are easy to implement in engineering design but also avoid the trial-and-error of general intelligent controllers. The method has performed well in the following application: achieving long-term stable control of low alumina concentration and greatly increasing the controlled ratio of anode effect from 60% to 80%.

  17. Investigation of 1H NMR Profile of Vegetarian Human Urine Using ANOVA-based Multi-factor Analysis

    Institute of Scientific and Technical Information of China (English)

    董继扬; 邓伶莉; CHENG Kian-Kai; GRIFFIN Julian L.; 陈忠

    2011-01-01

    In this study, a technique combining analysis of variance (ANOVA) and partial least squares-discriminant analysis (PLS-DA) was used to compare the urine 1H NMR spectra of healthy people from vegetarian and omnivorous populations. In ANOVA/PLS-DA, the variation in the data is first decomposed into variance components that each contain a single source of variation. Each of the resulting variance components is then analyzed using PLS-DA. The experimental results showed that ANOVA/PLS-DA is efficient in disentangling the effects of diet and gender on the metabolic profile, and that the method can be used to extract biologically relevant information for result interpretation.

  18. A Multi-Model Approach for Uncertainty Propagation and Model Calibration in CFD Applications

    CERN Document Server

    Wang, Jian-xun; Xiao, Heng

    2015-01-01

    Proper quantification and propagation of uncertainties in computational simulations are of critical importance. This issue is especially challenging for CFD applications. A particular obstacle for uncertainty quantification in CFD problems is the large model discrepancy associated with the CFD models used for uncertainty propagation. Neglecting or improperly representing the model discrepancies leads to inaccurate and distorted uncertainty distributions for the Quantities of Interest. High-fidelity models, being accurate yet expensive, can accommodate only a small ensemble of simulations and thus lead to large interpolation errors and/or sampling errors; low-fidelity models can propagate a large ensemble, but can introduce large modeling errors. In this work, we propose a multi-model strategy to account for the influences of model discrepancies in uncertainty propagation and to reduce their impact on the predictions. Specifically, we take advantage of CFD models of multiple fidelities to estimate the model ...

  19. Modelling and application of the inactivation of microorganism

    International Nuclear Information System (INIS)

    Preventing the consumption of food contaminated with toxic, infection-causing microorganisms, and giving due consideration to food protection and new microbial inactivation methods, are essential. Food microbiology is mainly concerned with unwanted microorganisms that spoil foods during processing and transport and cause disease. Detection of pathogenic microorganisms is important for human health, in order to define and prevent hazards and to extend shelf life. Inactivation of pathogenic microorganisms can provide food security and reduce nutrient losses. Microbial inactivation underlies food protection methods aimed at safety and freshness. To this end, various methods are used, such as classical thermal processes (pasteurisation, sterilisation), pulsed electric fields (PEF), ionising radiation, high pressure, ultrasonic waves and plasma sterilisation. Microbial inactivation modelling is a reliable and effective tool in food production. A new microbiological application can give useful results for risk assessment in food, inactivation of microorganisms and improvement of shelf life. Application and control methods should be developed and supported by scientific research and industrial applications.
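    The simplest inactivation model used in such work is the classical log-linear (first-order) relation log10(N/N0) = -t/D, where D is the decimal reduction time at the process temperature. A short sketch; the D-value below is illustrative, not organism-specific data.

```python
import math

# Classical log-linear thermal inactivation: log10(N/N0) = -t/D.
# D (decimal reduction time) here is an illustrative value only.

def survivors(n0, t, d_value):
    """Surviving population after holding time t (same units as d_value)."""
    return n0 * 10 ** (-t / d_value)

n0, d = 1e6, 0.5           # initial count (CFU/mL), D-value in minutes
t = 5 * d                  # a 5-log reduction needs t = 5*D by construction
print(survivors(n0, t, d)) # ~10 CFU/mL remaining
```

More elaborate models (Weibull, biphasic) replace the straight survival line with curvature but follow the same pattern of predicting survivors from holding time.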

  20. MOGO: Model-Oriented Global Optimization of Petascale Applications

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D.; Shende, Sameer S.

    2012-09-14

    The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale Systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework in which empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO was able to make a reasonable impact on existing DOE applications and systems. New tools and techniques were developed which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.

  1. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images via close-range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic and manual methods), and another based on data input techniques (photogrammetry and laser techniques). Finally, we present the conclusions of this study, together with a short justification and analysis and the present trend in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques, and of the applications of virtual 3D city models. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has advantages and drawbacks. The point cloud model is a modern trend for virtual 3D city models. 
Photo-realistic, Scalable, Geo-referenced virtual 3

  2. Integrated climate modelling at the Kiel Institute for World Economics: The DART Model and its applications.

    OpenAIRE

    Deke, Oliver; Peterson, Sonja

    2003-01-01

    The aim of this paper is to give an overview over the DART model and its applications. The main focus is on the implementation of climate impacts into DART in the course of coupling DART to the ocean-atmosphere model and on the associated empirical problems. The basic DART model and some applications are presented in the next section. Section 3 describes in detail how the economic impacts of climate change on the agricultural sector and the impact of sea level rise are implemented in DART. Se...

  3. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    Science.gov (United States)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full scale. However, in some applications we seek enhanced performance at the low end of the range, and therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in all types of transducer applications where low-range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
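    The percent-of-reading idea can be sketched with weighted least squares: weighting each calibration point by 1/y² minimizes *relative* rather than absolute residuals, favoring the low end of the range. This is a generic illustration with a simulated transducer response, not the paper's methodology or MEADS data.

```python
import numpy as np

# Simulated linear transducer: output x responds to applied load y_true.
rng = np.random.default_rng(2)
y_true = np.linspace(1.0, 100.0, 40)                  # applied load
x = 0.02 * y_true + rng.normal(scale=0.01, size=40)   # output + noise

X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares: minimizes absolute residuals, i.e. uncertainty
# behaves like a percent of full scale.
beta_ols = np.linalg.lstsq(X, y_true, rcond=None)[0]

# Weighted least squares with w = 1/y^2: minimizes relative residuals,
# i.e. uncertainty behaves like a percent of reading.
w = 1.0 / y_true**2
Xw = X * np.sqrt(w)[:, None]
yw = y_true * np.sqrt(w)
beta_wls = np.linalg.lstsq(Xw, yw, rcond=None)[0]

def max_pct_of_reading_err(beta):
    pred = X @ beta
    return np.abs((pred - y_true) / y_true).max() * 100

print(max_pct_of_reading_err(beta_ols), max_pct_of_reading_err(beta_wls))
```

By construction, the weighted fit cannot do worse than the ordinary fit on the sum of squared relative residuals, which is exactly the percent-of-reading criterion.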

  4. Data Warehouse Model For Mobile-Based Applications

    Directory of Open Access Journals (Sweden)

    Muhammad Shahbani Abu Bakar

    2016-06-01

    Full Text Available Analysis and design play very important roles in Data Warehouse (DW) system development and form the backbone of any success or failure of a DW project. The emerging trend of analytics-based applications requires the DW system to be implemented in the mobile environment. However, current analysis and design approaches are based on existing DW environments that focus on the deployment of the DW system in traditional web-based applications. This creates limitations on user access and on the use of analytical information by decision makers. Consequently, it delays the adoption of analytics-based applications by users and organizations. This research aims to suggest an approach for modeling the DW and designing the DW system for mobile environments. A variant-dimension modeling technique was used to enhance the DW schemas in order to accommodate the requirements of mobile characteristics in the DW design. The proposed mobile DW system was evaluated by expert review and supports the successful implementation of mobile DW-based applications.

  5. Biomass and bioenergy applications of the POLYSYS modeling framework

    International Nuclear Information System (INIS)

    The Policy Analysis System (POLYSYS) is a national simulation model of the US agriculture sector which can incorporate agricultural supply and demand and related modules to estimate agricultural production response, resource use, price, income, and environmental impacts of projected changes from an agricultural baseline. The framework recursively incorporates linear programming, econometric, and process models to estimate an impact path resulting from changes imposed on a baseline scenario and its underlying assumptions. POLYSYS estimates crop production and supply at a disaggregated regional level, whereby the 48 contiguous states are subdivided into 305 geographic regions with relatively homogeneous production characteristics. POLYSYS is capable of estimating a wide range of policy alternatives and economic and environmental conditions and simulations may be tailored to a variety of specific analytical needs. This paper presents a broad overview of the structure and approach of the POLYSYS model with emphasis on biomass and bioenergy related applications of the model. (author)

  6. Modern methodology and applications in spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

    This book provides a modern introductory tutorial on specialized methodological and applied aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter deals with non-parametric Bayesian inference via a recently developed framework known as kernel mean embedding which has had a significant influence in machine learning disciplines. The second chapter takes up non-parametric statistical methods for spatial field reconstruction and exceedance probability estimation based on Gaussian process-based models in the context of wireless sensor network data. The third chapter presents signal-processing methods applied to acoustic mood analysis based on music signal analysis. The fourth chapter covers models that are applicable to time series modeling in the domain of speech and language processing. This includes aspects of factor analysis, independent component an...

  7. Mathematical modelling of anaerobic digestion processes: applications and future needs

    DEFF Research Database (Denmark)

    Batstone, Damien J.; Puyol, Daniel; Flores Alsina, Xavier;

    2015-01-01

    Anaerobic process modelling is a mature and well-established field, largely guided by a mechanistic model structure that is defined by our understanding of underlying processes. This led to publication of the IWA ADM1, and strong supporting, analytical, and extension research in the 15 years since … of the role of the central carbon catabolic metabolism in anaerobic digestion, with an increased importance of phosphorus, sulfur, and metals as electron source and sink, and consideration of hydrogen and methane as potential electron sources. The paradigm of anaerobic digestion is challenged by anoxygenic phototrophism, where energy is relatively cheap, but electron transfer is expensive. These new processes are commonly not compatible with the existing structure of anaerobic digestion models. These core issues extend to application of anaerobic digestion in domestic plant-wide modelling, with the need …

  8. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  9. APPLICATION OF VARIABLE-FIDELITY MODELS TO AERODYNAMIC OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    XIA Lu; GAO Zheng-hong

    2006-01-01

    For aerodynamic shape optimization, the approximation management framework (AMF) method is used to organize and manage variable-fidelity models. The method takes full advantage of the cheaper low-fidelity models by concentrating the main workload on them in the iterative optimization procedure, while using the more expensive high-fidelity models to monitor the procedure and make the method globally convergent to a solution of the high-fidelity problem. Finally, a zero-order variable-fidelity aerodynamic optimization management framework and search algorithm are demonstrated on an airfoil optimization of a flying-wing UAV. Compared to the original shape, the aerodynamic performance of the optimized shape is improved. The results show that the method has good feasibility and applicability.
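    The corrected-surrogate idea behind such frameworks can be sketched in one dimension: shift and tilt the cheap model so it matches the expensive model's value and slope at the current iterate, then minimize the corrected model on a local interval. Note this sketch uses a *first-order* additive correction (the paper demonstrates a zero-order variant), and both "models" are made-up analytic functions, not aerodynamic codes.

```python
import numpy as np

# Toy variable-fidelity optimization with a first-order additive correction.
hi = lambda x: (x - 2.0) ** 2          # "expensive", accurate model
lo = lambda x: 1.5 * (x - 1.2) ** 2    # "cheap", biased model

def dfdx(f, x, h=1e-5):
    """Central-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

xk, radius = 0.0, 0.5                  # start point, local step bound
for _ in range(30):
    a = hi(xk) - lo(xk)                # match value at xk
    b = dfdx(hi, xk) - dfdx(lo, xk)    # match slope at xk
    grid = np.linspace(xk - radius, xk + radius, 401)
    corrected = lo(grid) + a + b * (grid - xk)
    xk = float(grid[np.argmin(corrected)])   # step to surrogate minimizer

print(xk)   # settles near the high-fidelity optimum x = 2
```

Because the corrected surrogate shares the expensive model's slope at each iterate, the iterates are driven to the high-fidelity optimum even though only the cheap model is ever minimized; a real AMF adds trust-region logic to adapt the step bound.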

  10. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strength, charge density and ionic potential. Based on the atomic electron configuration and various quantum-mechanically built-up dual parameters, the model forms a dual method of multiple-functional prediction, which has far more versatile and exceptional applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  11. Voxel-based model and its application in advanced manufacturing

    Science.gov (United States)

    Wu, Xiaojun; Liu, Weijun; Wang, Tianran

    2004-03-01

    Traditionally, 3D models, even so-called solid ones, can only represent an object's surface information, and the interior is regarded as homogeneous. In many applications, it is necessary to represent the interior structures and attributes of an object, such as materials, density and color. A surface model is incapable of bearing this task; in this case, a voxel model is a good choice. Voxelization is the process of converting a geometrically represented 3D object into a three-dimensional volume dataset. In this paper, an algorithm is proposed to voxelize polygonal meshes ported from current CAD modeling packages into volume datasets, based on the easy indexing property of the octree structure. The minimal distance to the feature voxel (or voxels) is taken as the criterion to distribute different material compositions, yielding a new kind of material called FGM (functionally graded material), which is suitable for the interface of RPM (rapid prototyping manufacturing).
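    The distance-based grading criterion can be sketched on a voxel grid: each solid voxel's material fraction is set by its distance to a set of feature voxels, producing a graded (FGM-style) blend. The sphere geometry, brute-force distance computation, and linear grading rule below are all illustrative; a production voxelizer would rasterize a polygonal mesh into an octree instead.

```python
import numpy as np

# Voxelize a sphere on an n^3 grid and grade material by distance to a
# feature voxel. Geometry and grading rule are invented for illustration.
n = 32
idx = np.indices((n, n, n)).reshape(3, -1).T           # all voxel coordinates
center = np.array([n / 2, n / 2, n / 2])
solid = np.linalg.norm(idx - center, axis=1) <= n / 3  # voxelized sphere

feature = np.array([[n // 2, n // 2, n // 2]])         # feature voxel set
# Brute-force minimal distance from every voxel to the nearest feature voxel.
d = np.linalg.norm(idx[:, None, :] - feature[None, :, :], axis=2).min(axis=1)

# Linear grade: material-A fraction 1.0 at the feature, 0.0 at the farthest
# solid voxel; empty voxels get 0.
frac = np.where(solid, 1.0 - d / d[solid].max(), 0.0)

grid = frac.reshape(n, n, n)
print(grid[n // 2, n // 2, n // 2], int(solid.sum()))
```

In practice the distance field would be computed with a distance transform rather than brute force, and the grading function chosen to meet the desired material gradient at the part interface.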

  12. Spatial extended hazard model with application to prostate cancer survival.

    Science.gov (United States)

    Li, Li; Hanson, Timothy; Zhang, Jiajia

    2015-06-01

    This article develops a Bayesian semiparametric approach to the extended hazard model, with generalization to high-dimensional spatially grouped data. County-level spatial correlation is accommodated marginally through the normal transformation model of Li and Lin (2006, Journal of the American Statistical Association 101, 591-603), using a correlation structure implied by an intrinsic conditionally autoregressive prior. Efficient Markov chain Monte Carlo algorithms are developed, especially applicable to fitting very large, highly censored areal survival data sets. Per-variable tests for proportional hazards, accelerated failure time, and accelerated hazards are efficiently carried out with and without spatial correlation through Bayes factors. The resulting reduced, interpretable spatial models can fit significantly better than a standard additive Cox model with spatial frailties. PMID:25521422

  13. Sensors advancements in modeling, design issues, fabrication and practical applications

    CERN Document Server

    Mukhopadhyay, Subhash Chandra

    2008-01-01

    Sensors are the most important components in any system, and engineers in any field need to understand the fundamentals of how these components work, how to select them properly and how to integrate them into an overall system. This book outlines the fundamentals, analytical concepts, modelling and design issues, technical details and practical applications of different types of sensors: electromagnetic, capacitive, ultrasonic, vision, Terahertz, displacement, fibre-optic and so on. The book: addresses the identification, modeling, selection, operation and integration of a wide variety of se

  14. Modeling lifetime of high power IGBTs in wind power applications

    DEFF Research Database (Denmark)

    Busca, Cristian

    2011-01-01

    The wind power industry is continuously developing bringing to the market larger and larger wind turbines. Nowadays reliability is more of a concern than in the past especially for the offshore wind turbines since the access to offshore wind turbines in case of failures is both costly and difficult...... an overview of the different aspects of lifetime modeling of high power IGBTs in wind power applications. In the beginning, wind turbine reliability survey results are briefly reviewed in order to gain an insight into wind turbine subassembly failure rates and associated downtimes. After that the...... most common high power IGBT failure mechanisms and lifetime prediction models are reviewed in more detail....

  15. Analytical model for a fast-response calorimeter: with applications

    International Nuclear Information System (INIS)

    This paper describes the development of an electrical analogue thermal-control model for the ANL-type fast-response calorimeter and its application to a new small sample, analytical-type fast-response calorimeter. This was done to obtain a better understanding of the sources of variations in experimentally measured sample power. Thermal quantities of temperature, heat flow and heat storage were reduced to electrical analogues so that the whole calorimeter could be modeled and analyzed as an electrical circuit with the thermal parts of the calorimeter treated as a series of lumped-circuit constants. Latest results of this work are discussed

  16. Application of Kalman Filter on modelling interest rates.

    OpenAIRE

    Long H. Vo

    2014-01-01

    This study aims to test the feasibility of using a data set of 90-day bank bill forward rates from the Australian market to predict spot interest rates. To achieve this goal I utilized the Kalman Filter in a state space model with a time-varying state variable. It is documented that in the case of short-term interest rates, the state space model yields robust predictive power. In addition, this predictive power of the implied forward rate is heavily impacted by the existence of a time...

  17. Application of Kalman Filter on modelling interest rates

    Directory of Open Access Journals (Sweden)

    Long H. Vo

    2014-03-01

    Full Text Available This study aims to test the feasibility of using a data set of 90-day bank bill forward rates from the Australian market to predict spot interest rates. To achieve this goal I utilized the Kalman Filter in a state space model with a time-varying state variable. It is documented that in the case of short-term interest rates, the state space model yields robust predictive power. In addition, this predictive power of the implied forward rate is heavily impacted by the existence of a time-varying risk premium in the term structure.
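The filtering step behind such a state space model can be sketched with a minimal one-dimensional local-level Kalman filter; the noise variances and the data below are illustrative assumptions, not the Australian forward-rate series or the specification used in the study:

```python
def kalman_filter(observations, q=1e-3, r=1e-2, x0=0.0, p0=1.0):
    """One-dimensional local-level Kalman filter.

    Hypothetical state-space form (not the paper's):
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   # state transition
        y_t = x_t + v_t,      v_t ~ N(0, r)   # observation
    Returns the filtered state estimates.
    """
    x, p = x0, p0
    estimates = []
    for y in observations:
        p = p + q                  # predict: variance grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (y - x)        # update with the innovation
        p = (1.0 - k) * p          # posterior variance
        estimates.append(x)
    return estimates

rates = [0.052, 0.050, 0.051, 0.053, 0.052]   # illustrative spot rates
print(kalman_filter(rates)[-1])
```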

  18. Impact of Two Realistic Mobility Models for Vehicular Safety Applications

    OpenAIRE

    RAHMAN, Md. Habibur; Nasiruddin, Mohammad

    2014-01-01

    Vehicular safety applications are intended for VANETs and rely on inter-vehicle communication: a vehicle travelling safely at high velocity must communicate quickly and dependably. In this work, we examined the impact of the IDM-IM and IDM-LC mobility models on the AODV, AOMDV, DSDV and OLSR routing protocols, using the Nakagami propagation model and the IEEE 802.11p MAC protocol in a particular urban scenario of Dhaka city. The periodic broadcast (PBC) agent is employed to transmit...

  19. Models and applications of chaos theory in modern sciences

    CERN Document Server

    Zeraoulia, Elhadj

    2011-01-01

    This book presents a select group of papers that provide a comprehensive view of the models and applications of chaos theory in medicine, biology, ecology, economy, electronics, mechanical, and the human sciences. Covering both the experimental and theoretical aspects of the subject, it examines a range of current topics of interest. It considers the problems arising in the study of discrete and continuous time chaotic dynamical systems modeling the several phenomena in nature and society-highlighting powerful techniques being developed to meet these challenges that stem from the area of nonli

  20. Fired Models of Air-gun Source and Its Application

    Institute of Scientific and Technical Information of China (English)

    Luo Guichun; Ge Hongkui; Wang Baoshan; Hu Ping; Mu Hongwang; Chen Yong

    2008-01-01

    Air-gun is an important active seismic source. With the development of the theory of air-gun arrays, the technique for air-gun array design has matured and is widely used in petroleum exploration and geophysics. In order to adapt it to different research domains, different combinations and fired models are needed. At present, there are two fired models of the air-gun source, namely, reinforced initial pulse and reinforced first bubble pulse. The fired time, spacing between single guns, frequency and resolution of the two models are different. This comparison can supply the basis for its extensive application.

  1. Modelling application for cognitive reliability and error analysis method

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2013-10-01

    Full Text Available The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the fully automatic factory model has led to partially automated configurations of production systems. In this scenario, the centrality and responsibility of the role entrusted to human operators are heightened, because it requires problem-solving and decision-making ability. The human operator is thus the core of a cognitive process that leads to decisions, influencing the safety of the whole system as a function of his or her reliability. The aim of this paper is to propose a modelling application for the cognitive reliability and error analysis method.

  2. Modelling of a cross flow evaporator for CSP application

    DEFF Research Database (Denmark)

    Sørensen, Kim; Franco, Alessandro; Pelagotti, Leonardo;

    2016-01-01

    ) applications. Heat transfer and pressure drop prediction methods are an important tool for design and modelling of diabatic, two-phase, shell-side flow over a horizontal plain tubes bundle for a vertical up-flow evaporator. With the objective of developing a model for a specific type of cross flow evaporator...... influence on the analysis of the performance of the evaporator, their impact on significant design variables and the effective lifetime of critical components in different operating conditions, simulating the daily start-up procedures of the steam generator is evaluated. The importance of a good calibration...

  3. Delta-sigma modulators modeling, design and applications

    CERN Document Server

    Bourdopoulos, George I; Anastassopoulos, Vassilis; Deliyannis, Theodore L

    2003-01-01

    This important book deals with the modeling and design of higher-order single-stage delta-sigma modulators. It provides an overview of the architectures, the quantizer models, the design techniques and the implementation issues encountered in the study of the delta-sigma modulators. A number of applications are discussed, with emphasis on use in the design of analog-to-digital converters and in frequency synthesis. The book is education- rather than research-oriented, containing numerical examples and unsolved problems. It is aimed at introducing the final-year undergraduate, the graduate stud

  4. Models of Hydrogel Swelling with Applications to Hydration Sensing

    Directory of Open Access Journals (Sweden)

    Kathryn Morton

    2007-09-01

    Full Text Available Hydrogels, polymers and various other composite materials may be used in sensing applications in which the swelling or de-swelling of the material in response to some analyte is converted via a transducer to a measurable signal. In this paper, we analyze models used to predict the swelling behavior of hydrogels that may be used in applications related to hydration monitoring in humans. Preliminary experimental data related to osmolality changes in fluids is presented to compare to the theoretical models. Overall, good experimental agreement with the models is achieved.

  5. Generalized Bogoliubov Polariton Model: An Application to Stock Exchange Market

    Science.gov (United States)

    Thuy Anh, Chu; Anh, Truong Thi Ngoc; Lan, Nguyen Tri; Viet, Nguyen Ai

    2016-06-01

    A generalized Bogoliubov method for investigating non-simple and complex systems was developed. We take a two-branch polariton Hamiltonian model in the second quantization representation and replace the energies of quasi-particles by two distribution functions of the research objects. An application to the stock exchange market was taken as an example, where the change of the return distribution function from Boltzmann-like to Gaussian-like was studied.

  6. Ozone modeling within plasmas for ozone sensor applications

    OpenAIRE

    Arshak, Khalil; Forde, Edward; Guiney, Ivor

    2007-01-01

    Ozone (O3) is potentially hazardous to human health, and accurate prediction and measurement of this gas are essential in addressing its associated health risks. This paper presents theory to predict the levels of ozone concentration emitted from a dielectric barrier discharge (DBD) plasma for ozone sensing applications. This is done by postulating the kinetic model for ozone generation, with a DBD plasma at atmospheric pressure in air, in the form of a set of rate equations....

  7. Redesigning an Infection Control Application to Support an Enterprise Model

    OpenAIRE

    Doherty, Joshua A.; Huang, Christine; Mayfield, Jennie; Dunagan, Wm Claiborne; Bailey, Thomas C.

    2005-01-01

    As the demands on hospital infection control teams increase, it becomes less efficient for them to use paper-based surveillance methods. The existing electronic infection control surveillance system at our largest facility was not designed to support a multi-hospital model. Our goal was to redesign the application using generic, open source technologies, and make it flexible enough to support the infection control surveillance needs of the entire enterprise.

  8. House Price Risk Models for Banking and Insurance Applications

    OpenAIRE

    Katja Hanewald; Michael Sherris

    2011-01-01

    The recent international credit crisis has highlighted the significant exposure that banks and insurers, especially mono-line credit insurers, have to residential house price risk. This paper provides an assessment of risk models for residential property for applications in banking and insurance including pricing, risk management, and portfolio management. Risk factors and heterogeneity of house price returns are assessed at a postcode-level for house prices in the major capital city of Sydne...

  9. Defined Contribution Model: Definition, Theory and an Application for Turkey

    OpenAIRE

    Metin Ercen; Deniz Gokce

    1998-01-01

    Based on a numerical application that employs social and economic parameters of the Turkish economy, this study attempts to demonstrate that the current collapse in the Turkish social security system is not unavoidable. The present social security system in Turkey is based on the defined benefit model of pension provision. On the other hand, recent proposals for reform in the social security system are based on a multipillar system, where one of the alternatives is a defined contribution pens...

  10. Generalisation of geographic information cartographic modelling and applications

    CERN Document Server

    Mackaness, William A; Sarjakoski, L Tiina

    2011-01-01

    Theoretical and Applied Solutions in Multi-Scale Mapping. Users have come to expect instant access to up-to-date geographical information with global coverage, presented at widely varying levels of detail as digital and paper products, and customisable data that can readily be combined with other geographic information. These requirements present an immense challenge to those supporting the delivery of such services (National Mapping Agencies (NMAs), Government Departments, and private businesses). Generalisation of Geographic Information: Cartographic Modelling and Applications provides a detailed review

  11. Hidden Markov Models and their Applications in Biological Sequence Analysis

    OpenAIRE

    Yoon, Byung-Jun

    2009-01-01

    Hidden Markov models (HMMs) have been extensively used in biological sequence analysis. In this paper, we give a tutorial review of HMMs and their applications in a variety of problems in molecular biology. We especially focus on three types of HMMs: the profile-HMMs, pair-HMMs, and context-sensitive HMMs. We show how these HMMs can be used to solve various sequence analysis problems, such as pairwise and multiple sequence alignments, gene annotation, classification, similarity search, and ma...
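As a flavour of how an HMM infers hidden structure from a sequence, here is a minimal Viterbi decoder for a toy two-state gene-structure model; the states, transition and emission probabilities are illustrative assumptions, not taken from any profile-HMM or pair-HMM in the review:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence."""
    # V[t][s]: probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace back from the best final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy two-state model: GC-rich "exon" vs AT-rich "intron"
states = ("exon", "intron")
start = {"exon": 0.6, "intron": 0.4}
trans = {"exon": {"exon": 0.8, "intron": 0.2},
         "intron": {"exon": 0.3, "intron": 0.7}}
emit = {"exon": {"A": 0.4, "C": 0.1, "G": 0.4, "T": 0.1},
        "intron": {"A": 0.1, "C": 0.4, "G": 0.1, "T": 0.4}}
print(viterbi("GGTT", states, start, trans, emit))
```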

  12. APPLICATION OF LANDUSE CHANGE MODELING FOR PROTECTED AREA MONITORING

    OpenAIRE

    Jaafari, Shirkou; Shabani, Afshin Alizadeh; Danehkar, Afshin; Nazarisamani, Aliakbar

    2014-01-01

    Globally, land use change impacts biodiversity, water and radiation budgets, emission of greenhouse gases, carbon cycling, and livelihoods. The study of LUCC and its dynamics is crucial for environmental management, especially with regard to sustainable agriculture and forestry. Different models, in terms of structure and application, have been used to understand LUCC dynamics. The present study aims to simulate the spatial pattern of land use change in the Varjin protected area, Iran. Land cove...

  13. A Basic Business Model for Commercial Application of Identification Tools

    OpenAIRE

    Kittl, Christian; Schalk, Peter; Dorigo Salamon, Nicola; Martellos, Stefano

    2010-01-01

    Within the three-year EU project KeyToNature various identification tools and applications in formal education for teaching biodiversity have been researched and developed. Building on the competencies of the involved partner organisations and the expertise gained in this domain, the paper outlines a business model which aims at commercially exploiting the project results on a broader scale by describing the value proposition, products & services, value architecture, revenue...

  14. Applications of aerosol model in the reactor containment

    OpenAIRE

    Mossad Slama; Mohammad Omar Shaker; Ragaa Aly; Magdy Sirwah

    2014-01-01

    The study simulates aerosol dynamics including coagulation, deposition and source reinforcement. Typical applications are for nuclear reactor aerosols, aerosol reaction chambers and the production of purified materials. The model determines the aerosol number and volume distributions for an arbitrary number of particle-size classes, called sections. The user specifies the initial aerosol size distribution and the source generation rate of each component in each section. For spatially ho...
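The coagulation part of such a sectional model can be illustrated with a single forward-Euler step of the discrete Smoluchowski equation; a constant kernel and four size classes are illustrative assumptions, and deposition and sources, which the model also treats, are omitted:

```python
def coagulation_step(n, kernel, dt):
    """One forward-Euler step of the discrete Smoluchowski equation.

    n[i] is the number concentration of particles of size class i
    (made of i+1 monomers); `kernel` is a constant coagulation
    coefficient. Illustrative sketch only.
    """
    m = len(n)
    dn = [0.0] * m
    for i in range(m):
        for j in range(m):
            rate = kernel * n[i] * n[j]
            dn[i] -= rate                      # class i loses a particle
            if i + j + 1 < m:                  # merged size: (i+1)+(j+1) monomers
                dn[i + j + 1] += 0.5 * rate    # 1/2 corrects double counting
    return [n[i] + dt * dn[i] for i in range(m)]

# Start with monomers only; after one step some dimers have formed
n0 = [1.0, 0.0, 0.0, 0.0]
n1 = coagulation_step(n0, kernel=1.0, dt=0.1)
print(n1)
```

Note that total monomer mass is conserved (until particles grow past the largest tracked class), which is a quick sanity check on any sectional scheme.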

  15. Modeling of bubble dynamics in relation to medical applications

    International Nuclear Information System (INIS)

    In various pulsed-laser medical applications, strong stress transients can be generated in advance of vapor bubble formation. To better understand the evolution of stress transients and subsequent formation of vapor bubbles, two-dimensional simulations are presented in channel or cylindrical geometry with the LATIS (LAser TISsue) computer code. Differences with one-dimensional modeling are explored, and simulated experimental conditions for vapor bubble generation are presented and compared with data. 22 refs., 8 figs

  16. Predictive Modeling of Addiction Lapses in a Mobile Health Application

    OpenAIRE

    Chih, Ming-Yuan; Patton, Timothy; McTavish, Fiona M.; Isham, Andrew; Judkins-Fisher, Chris L.; Atwood, Amy K.; Gustafson, David H.

    2013-01-01

    The chronically relapsing nature of alcoholism leads to substantial personal, family, and societal costs. Addiction-Comprehensive Health Enhancement Support System (A-CHESS) is a smartphone application that aims to reduce relapse. To offer targeted support to patients who are at risk of lapses within the coming week, a Bayesian network model to predict such events was constructed using responses on 2,934 weekly surveys (called the Weekly Check-in) from 152 alcohol-dependent individuals who re...

  17. Numerical modeling in electroporation-based biomedical applications

    OpenAIRE

    Pavšelj, Nataša; Miklavčič, Damijan

    2015-01-01

    Background. Numerous experiments have to be performed before a biomedical application is put to practical use in a clinical environment. As complementary work to in vitro, in vivo and medical experiments, we can use analytical and numerical models to represent, as realistically as possible, real biological phenomena of, in our case, electroporation. In this way we can evaluate different electrical parameters in advance, such as pulse amplitude, duration, number of pulses, or different electrod...

  18. Numerical modeling in electroporation-based biomedical applications:

    OpenAIRE

    Miklavčič, Damijan; Pavšelj, Nataša

    2008-01-01

    Background. Numerous experiments have to be performed before a biomedical application is put to practical use in a clinical environment. As complementary work to in vitro, in vivo and medical experiments, we can use analytical and numerical models to represent, as realistically as possible, real biological phenomena of, in our case, electroporation. In this way we can evaluate different electrical parameters in advance, such as pulse amplitude, duration, number of pulses, or different electrod...

  19. Advance in Application of Regional Climate Models in China

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei; YAN Minhua; CHEN Panqin; XU Helan

    2008-01-01

    Regional climate models have become powerful tools for simulating regional climate and its change process and have been widely used in China. Using regional climate models, research results have been obtained on the following aspects: 1) the numerical simulation of the East Asian monsoon climate, including exceptional monsoon precipitation, summer precipitation distribution, East Asian circulation, multi-year climate average conditions, the summer rain belt and so on; 2) the simulation of the arid climate of western China, including the thermal effect of the Qinghai-Tibet Plateau, plateau precipitation in the Qilian Mountains, and the impacts of greenhouse effects (CO2 doubling) upon climate in western China; and 3) the simulation of the climate effect of underlying surface changes, including the effect of soil on climate formation, the influence of terrain on precipitation, the effect of regional soil degradation on regional climate, the effect of various underlying surfaces on regional climate, the effect of land-sea contrast on climate formation, the influence of snow cover over the plateau regions on the regional climate, the effect of vegetation changes on the regional climate, etc. In the process of applying regional climate models, the preferences of the models are improved so that better simulation results are obtained. Finally, some suggestions are made about the application of regional climate models in regional climate research in the future.

  20. MULTI-WAVELENGTH MODELLING OF DUSTY GALAXIES. GRASIL AND APPLICATIONS

    Directory of Open Access Journals (Sweden)

    L. Silva

    2009-01-01

    Full Text Available The spectral energy distribution of galaxies contains a convolved information on their stellar and gas content, on the star formation rate and history. It is therefore the most direct probe of galaxy properties. Each spectral range is mostly dominated by some specific emission sources or radiative processes so that only by modelling the whole spectral range it is possible to de-convolve and interpret the information contained in the SED in terms of SFR and galaxy evolution in general. The ingredients and kind of computations considered in models for the SEDs of galaxies depend on their aims. Theoretical models have the advantage of a broader interpretative and predictive power with respect to observationally calibrated semi-empirical approaches, the major drawback being a longer computational time. I summarize the main features of GRASIL, a code to compute the UV to radio SED of galaxies treating the radiative transfer and dust emission with particular care. It has been widely applied to interpret observations and to make predictions for semi-analytical galaxy formation models. I present in particular the applications in the context of galaxy models, and the new method implemented in GRASIL based on the artificial neural network algorithm to cope with the computing time for cosmological applications.

  1. An analytic performance model of disk arrays and its application

    Science.gov (United States)

    Lee, Edward K.; Katz, Randy H.

    1991-01-01

    As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
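A minimal sketch of how such closed-form approximations are evaluated, assuming each disk behaves as an M/M/1 queue and approximating the fork-join completion time with the harmonic number of the stripe width; these are textbook formulas, not the equations derived in the paper:

```python
def disk_array_response_time(arrival_rate, service_time, n_disks, stripe_width):
    """Rough analytic estimate of disk-array utilization and response time.

    A request forks into `stripe_width` disk accesses spread evenly over
    `n_disks` disks, each modelled as an M/M/1 queue; the fork-join
    completion time is approximated by scaling the per-disk response
    time with the harmonic number H_k (exact for the maximum of k
    i.i.d. exponentials, an approximation here).
    """
    per_disk_rate = arrival_rate * stripe_width / n_disks
    utilization = per_disk_rate * service_time
    if utilization >= 1.0:
        raise ValueError("unstable: per-disk utilization >= 1")
    per_disk_response = service_time / (1.0 - utilization)   # M/M/1 result
    harmonic = sum(1.0 / i for i in range(1, stripe_width + 1))
    return utilization, per_disk_response * harmonic

util, resp = disk_array_response_time(
    arrival_rate=50.0,     # requests/s to the whole array
    service_time=0.01,     # 10 ms per disk access
    n_disks=8, stripe_width=4)
print(util, resp)
```

Such a model can be swept over stripe widths in microseconds, which is why analytic forms are preferred over simulation for configuration studies.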

  2. Dosimetric applications of the new ICRP lung model

    International Nuclear Information System (INIS)

    The International Commission on Radiological Protection (ICRP) has adopted a new dosimetric model of the human respiratory tract, to be issued as ICRP Publication 66. This chapter presents a summary of the main features of the new model. The model is a general update of that in Publication 30, but is significantly broader in scope. It applies explicitly to workers and all members of the public: for inhalation of particles, gases and vapors; evaluation of dose per unit intake or exposure; and interpretation of bioassay data. The approach is fundamentally different from the Publication 30 model, which calculates only the average dose to the lungs. The new model takes account of differences in the radiosensitivity of respiratory tract tissues, and the wide range of doses they may receive, and calculates specific tissue doses. The model readily incorporates specific information related to the subject (age, physical activity, smoking or health status) or the exposure (aerosol size and chemical form). The application of the new model to calculate equivalent lung dose and effective dose per unit intake is illustrated for several α- and β-emitting radionuclides, and the new values obtained are compared with those given by the ICRP Publication 30 lung model

  3. Optimizing a gap conductance model applicable to VVER-1000 thermal–hydraulic model

    International Nuclear Information System (INIS)

    Highlights: ► Two known conductance models for application in a VVER-1000 thermal–hydraulic code are examined. ► An optimized gap conductance model is developed which predicts the gap conductance in good agreement with FSAR data. ► The licensed thermal–hydraulic code is coupled externally with the gap conductance predictor. -- Abstract: The modeling of gap conductance for application in VVER-1000 thermal–hydraulic codes is addressed. Two known models, namely the CALZA-BINI and RELAP5 gap conductance models, are examined. By externally linking the gap conductance models with the COBRA-EN thermal-hydraulic code, the acceptable range of each model is specified. The result of each gap conductance model versus linear heat rate has been compared with FSAR data. A linear heat rate of about 9 kW/m is the boundary for the optimization process. Since each gap conductance model has its advantages and limitations, the optimized gap conductance model can predict the gap conductance better than either of the two other models individually.

  4. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2006-12-01

    Full Text Available The objective in this study is to investigate the influence of the spatial resolution of the rainfall input on the model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. The semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. Aggregated Nash-Sutcliffe coefficients at different temporal scales are adopted as objective function to estimate the model parameters. The performance of the hydrological model is analyzed as a function of the raingauge density. The calibrated model is validated using the same precipitation used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. The effect of missing rainfall data is investigated by using a multiple linear regression approach for filling the missing values. The model, calibrated with the complete set of observed data, is then run in the validation period using the above described precipitation field. The simulated hydrographs obtained in the three sets of experiments are analyzed through the comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that the model using different raingauge networks might need recalibration of the model parameters: model calibrated on sparse information might perform well on dense information while model calibrated on dense information fails on sparse information. Also, the model calibrated with complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as missing measurements, performs well. A meso-scale catchment located in the south-west of Germany has been selected for
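The Nash-Sutcliffe coefficient used as the objective function above can be computed directly; the observed and simulated series below are illustrative, not the catchment data from the study:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    NSE = 1 means a perfect fit; NSE <= 0 means the model is no better
    than always predicting the observed mean.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

obs = [1.0, 2.0, 3.0, 4.0, 5.0]   # illustrative observed discharges
sim = [1.1, 1.9, 3.2, 3.8, 5.1]   # illustrative simulated discharges
print(nash_sutcliffe(obs, sim))
```

In calibration, an optimizer such as simulated annealing perturbs the model parameters and keeps changes that raise this score, possibly aggregated over several temporal scales as the study describes.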

  5. Application of data fusion modeling (DFM) to site characterization

    International Nuclear Information System (INIS)

    Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss Newton method where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points

  6. Numerical modelling of channel migration with application to laboratory rivers

    Institute of Scientific and Technical Information of China (English)

    Jian SUN; Bin-liang LIN; Hong-wei KUANG

    2015-01-01

    The paper presents the development of a morphological model and its application to experimental model rivers. The model takes into account the key processes of channel migration, including bed deformation, bank failure and wetting and drying. Secondary flows in bends play an important role in lateral sediment transport, which further affects channel migration. A new formula has been derived to predict the near-bed secondary flow speed, in which the magnitude of the speed is linked to the lateral water level gradient. Since only non-cohesive sediment is considered in the current study, bank failure is modelled based on the concept of the submerged angle of repose. The wetting and drying process is modelled using an existing method. Comparisons between the numerical model predictions and experimental observations for various discharges have been made. It is found that the model-predicted channel planform and cross-sectional shapes agree generally well with the laboratory observations. A scenario analysis is also carried out to investigate the impact of secondary flow on the channel migration process. It shows that if the effect of secondary flow is ignored, the channel size in the lateral direction will be seriously underestimated.

  7. 4Mx Soil-Plant Model: Applications, Opportunities and Challenges

    Directory of Open Access Journals (Sweden)

    Nándor Fodor

    2012-12-01

    Full Text Available Crop simulation models describe the main processes of the soil-plant system in a dynamic way, usually in a daily time-step. With the help of these models we may monitor the soil- and plant-related processes of the simulated system as they evolve according to the atmospheric and environmental conditions. Crop models can be successfully applied in the following areas: (1) Education: by promoting system-oriented thinking, a comprehensive overview of the interrelations of the soil-plant system, as well as of the environmental-protection-related aspects of human activities, can be presented. (2) Research: the results of observations as well as of experiments can be extrapolated in time and space; thus, for example, the possible effects of global climate change can be estimated. (3) Practice: model calculations can be used in intelligent irrigation control and decision support systems, as well as for providing scientific background for policy makers. The most spectacular feature of the 4Mx crop model is that its graphical user interface enables the user to alter not only the parameters of the model but also the function types of its governing equations. The applicability of the 4Mx model is presented via several case-studies.

  8. WRF Model Methodology for Offshore Wind Energy Applications

    Directory of Open Access Journals (Sweden)

    Evangelia-Maria Giannakopoulou

    2014-01-01

    Full Text Available Among the parameters that must be considered for an offshore wind farm development, the stability conditions of the marine atmospheric boundary layer (MABL are of significant importance. Atmospheric stability is a vital parameter in wind resource assessment (WRA due to its direct relation to wind and turbulence profiles. A better understanding of the stability conditions occurring offshore and of the interaction between MABL and wind turbines is needed. Accurate simulations of the offshore wind and stability conditions using mesoscale modelling techniques can lead to a more precise WRA. However, the use of any mesoscale model for wind energy applications requires a proper validation process to understand the accuracy and limitations of the model. For this validation process, the weather research and forecasting (WRF model has been applied over the North Sea during March 2005. The sensitivity of the WRF model performance to the use of different horizontal resolutions, input datasets, PBL parameterisations, and nesting options was examined. Comparison of the model results with other modelling studies and with high quality observations recorded at the offshore measurement platform FINO1 showed that the ERA-Interim reanalysis data in combination with the 2.5-level MYNN PBL scheme satisfactorily simulate the MABL over the North Sea.

  9. New Trends in Model Coupling Theory, Numerics and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Coquel, F. [CMAP Ecole Polytech, CNRS, UMR 7641, F-91128 Palaiseau (France); Godlewski, E. [UPMC Univ Paris 6, UMR 7598, Lab Jacques Louis Lions, F-75005 Paris (France); Herard, J. M. [EDF RD, F-78400 Chatou (France); Segre, J. [CEA Saclay, DEN, DM2S, F-91191 Gif Sur Yvette (France)

    2010-07-01

    This special issue comprises selected papers from the workshop New Trends in Model Coupling, Theory, Numerics and Applications (NTMC'09), which took place in Paris, September 2 - 4, 2009. The search for optimal technological solutions in a large number of industrial systems requires numerical simulations of complex phenomena which are often characterized by the coupling of models related to various space and/or time scales. Thus, so-called multi-scale modelling has been a thriving scientific activity which connects applied mathematics and other disciplines such as physics, chemistry, biology or even the social sciences. To illustrate the variety of fields concerned by the natural occurrence of model coupling we may cite: meteorology, where it is required to take into account several turbulence scales or the interaction between oceans and atmosphere, but also regional models in a global description; solid mechanics, where a thorough understanding of complex phenomena such as the propagation of cracks needs to couple various models from the atomistic level to the macroscopic level; plasma physics for fusion energy, for instance, where dense plasmas and collisionless plasmas coexist; multiphase fluid dynamics, when several types of flow corresponding to several types of models are present simultaneously in complex circuits; and social behaviour analysis, with interaction between individual actions and collective behaviour. (authors)

  10. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

    Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computing is implemented and a data-driven smoothing parameter is nicely incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model to applications outside of yield forecasting.
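
    The factor-model idea (extract loadings, then forecast factor dynamics) can be sketched on synthetic data. This uses principal components plus per-factor AR(1) fits as a simplified two-step stand-in for the paper's joint EM estimation with functional loading curves; all data and parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "yield curve" panel: T days x m maturities driven by 2 latent factors
T, m = 200, 10
maturities = np.linspace(0.25, 10, m)
load = np.column_stack([np.ones(m), np.exp(-maturities)])   # level- and slope-like loadings
f = np.zeros((T, 2))
for t in range(1, T):                                       # AR(1) factor dynamics
    f[t] = 0.9 * f[t - 1] + rng.normal(0, 0.1, 2)
y = f @ load.T + rng.normal(0, 0.02, (T, m))

# Step 1: extract factors by principal components (in place of joint EM estimation)
yc = y - y.mean(0)
_, _, vt = np.linalg.svd(yc, full_matrices=False)
load_hat = vt[:2].T                                         # estimated loading "curves"
f_hat = yc @ load_hat

# Step 2: fit AR(1) per factor and forecast the curve one step ahead
phi = [(f_hat[1:, k] @ f_hat[:-1, k]) / (f_hat[:-1, k] @ f_hat[:-1, k]) for k in range(2)]
f_next = np.array([phi[k] * f_hat[-1, k] for k in range(2)])
y_next = y.mean(0) + load_hat @ f_next                      # forecast at all maturities
```

    The paper's contribution is to treat the loadings as smooth functions of maturity and estimate everything jointly; the sketch only conveys the forecasting mechanics.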

  11. Memcapacitor model and its application in a chaotic oscillator

    Science.gov (United States)

    Guang-Yi, Wang; Bo-Zhen, Cai; Pei-Pei, Jin; Ti-Ling, Hu

    2016-01-01

    A memcapacitor is a new type of memory capacitor. Before the advent of practical memcapacitors, prospective studies of their models and potential applications are of importance. For this purpose, we establish a mathematical memcapacitor model and a corresponding circuit model. As a potential application based on the model, a memcapacitor oscillator is designed, with its basic dynamic characteristics analyzed theoretically and experimentally. Some circuit variables, such as charge, flux, and the integral of charge, which are difficult to measure, are observed and measured via simulations and experiments. Analysis results show that, besides the typical period-doubling bifurcations and period-3 windows, sustained chaos with constant Lyapunov exponents occurs. Moreover, this oscillator also exhibits abrupt chaos and some novel bifurcations. In addition, based on digital signal processing (DSP) technology, a scheme for digitally realizing this memcapacitor oscillator is provided. The statistical properties of the chaotic sequences generated by the oscillator are then tested using the test suite of the National Institute of Standards and Technology (NIST). The tested randomness definitely reaches the NIST standards, and is better than that of the well-known Lorenz system. Project supported by the National Natural Science Foundation of China (Grant Nos. 61271064, 61401134, and 60971046), the Natural Science Foundation of Zhejiang Province, China (Grant Nos. LZ12F01001 and LQ14F010008), and the Program for Zhejiang Leading Team of S&T Innovation, China (Grant No. 2010R50010).

  12. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are attracting a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques particularly appealing for decision analysis. Added to this is modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
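
    The kind of probabilistic Markov simulation described here can be sketched without WinBUGS. The model below is a hypothetical three-state cohort model whose uncertain transition probabilities are drawn from Beta distributions, standing in for the posterior distributions a Bayesian analysis would supply; all states and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-state cohort model: Well -> Sick -> Dead (absorbing).
def run_model(p_ws, p_sd, cycles=20, cohort=1000):
    state = np.array([cohort, 0.0, 0.0])        # [well, sick, dead]
    for _ in range(cycles):
        well, sick, dead = state
        state = np.array([well * (1 - p_ws),
                          well * p_ws + sick * (1 - p_sd),
                          dead + sick * p_sd])
    return state

# Probabilistic simulation: each run draws transition probabilities from
# Beta distributions (a stand-in for Bayesian posteriors).
n_sims = 500
dead_counts = []
for _ in range(n_sims):
    p_ws = rng.beta(20, 80)    # around 0.2, with uncertainty
    p_sd = rng.beta(10, 90)    # around 0.1, with uncertainty
    dead_counts.append(run_model(p_ws, p_sd)[2])
dead_counts = np.array(dead_counts)
```

    The spread of `dead_counts` across runs is what propagating parameter uncertainty through the Markov model buys: an output distribution rather than a point estimate.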

  13. Towards Industrial Application of Damage Models for Sheet Metal Forming

    Science.gov (United States)

    Doig, M.; Roll, K.

    2011-05-01

    Due to global warming and the financial situation, the demand to reduce CO2 emissions and production costs drives the continual development of new materials. In the automotive industry, occupant safety is an additional condition. Bringing these arguments together, the preferred approach for lightweight design of car components, especially for body-in-white, is the use of modern steels. Such steel grades, also called advanced high strength steels (AHSS), exhibit high strength as well as high formability. Not only their material behavior but also the damage behavior of AHSS differs from that of standard steels. Conventional methods for damage prediction in industry, like the forming limit curve (FLC), are not reliable for AHSS. Physically based damage models are often used in crash and bulk forming simulations. The still open question is the industrial application of these models to sheet metal forming. This paper evaluates the Gurson-Tvergaard-Needleman (GTN) model and the model of Lemaitre within commercial codes, with the goal of industrial application.

  14. Model-Driven Visual Testing and Debugging of WSN Applications

    Directory of Open Access Journals (Sweden)

    Mohammad Al Saad

    2009-09-01

    Full Text Available We introduce our tool-suite that facilitates automated testing of applications for wireless sensor networks (WSNs. WSNs are highly distributed systems and therefore require a testing infrastructure. We present a general-purpose testing framework which enables component, integration, and system testing. When using our testing framework, test cases have to be implemented by several modules. To coordinate the execution of these modules, synchronization code must be written, which is a complex and time-consuming task. To bypass this task and thus make the test case implementation process more efficient, we introduce a model-driven approach that delegates this task to the code generator. To this end, the test scenario is represented in a model and the code of the modules is generated. This model also eases the task of isolating faults in the code of the application being tested, because the model can be refined to gain insights into the application’s behavior and backtrack the cause of the failure reproduced by the test case.

  15. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration, as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above described precipitation fields. The simulated hydrographs obtained in the above described three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model driven by different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well with dense precipitation information, while a model calibrated on dense precipitation information fails with sparse precipitation information.
Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
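
    The Nash-Sutcliffe coefficient used in these comparisons has a simple closed form; a minimal implementation (the example observation series is invented):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the simulation
    is no better than the mean of the observations, negative is worse."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [12.0, 18.0, 30.0, 22.0, 14.0]   # hypothetical observed discharges
assert nash_sutcliffe(obs, obs) == 1.0                         # perfect simulation
assert abs(nash_sutcliffe(obs, [np.mean(obs)] * 5)) < 1e-12    # mean benchmark
```
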

  16. Photographic-based target models for LADAR applications

    Science.gov (United States)

    Jack, James T.; Delashmit, Walter H.

    2009-05-01

    A long-standing need for the application of laser radar (LADAR) to a wider range of targets is a technique for creating a "target model" from target photographs. This is feasible since LADAR images are 3D and photographs at selected azimuth/elevation angles allow the required models to be created. Preferred photographic images of a wide range of selected targets were specified and collected. These photographs were processed using code developed in house and some commercial software packages. These "models" were used in model-based automatic target recognition (ATR) algorithms. The ATR performance was excellent. This technique differs significantly from other techniques for creating target models, which require CAD models that are much harder to manipulate and contain extraneous detail. The technique in this paper develops the photographic-based target models in component form so that any component (e.g., the turret of a tank) can be independently manipulated, such as rotating the turret. This new technique also allows models to be generated for targets for which no actual LADAR data has ever been collected. A summary of the steps used in the modeling process is as follows: start with a set of input photographs; calibrate the imagery into a 3D world space to generate points corresponding to target features; mark all co-located points in each image view and verify the alignment of points; place the points in a 3D space; create the target geometry by connecting points with surfaces (i.e., planar curves); and scale the target into real-world coordinates.

  17. Current developments in soil organic matter modeling and the expansion of model applications: a review

    International Nuclear Information System (INIS)

    Soil organic matter (SOM) is an important natural resource. It is fundamental to soil and ecosystem functions across a wide range of scales, from site-specific soil fertility and water holding capacity to global biogeochemical cycling. It is also a highly complex material that is sensitive to direct and indirect human impacts. In SOM research, simulation models play an important role by providing a mathematical framework to integrate, examine, and test the understanding of SOM dynamics. Simulation models of SOM are also increasingly used in more ‘applied’ settings to evaluate human impacts on ecosystem function, and to manage SOM for greenhouse gas mitigation, improved soil health, and sustainable use as a natural resource. Within this context, there is a need to maintain a robust connection between scientific developments in SOM modeling approaches and SOM model applications. This need forms the basis of this review. In this review we first provide an overview of SOM modeling, focusing on SOM theory, data-model integration, and model development as evidenced by a quantitative review of SOM literature. Second, we present the landscape of SOM model applications, focusing on examples in climate change policy. We conclude by discussing five areas of recent developments in SOM modeling including: (1) microbial roles in SOM stabilization; (2) modeling SOM saturation kinetics; (3) temperature controls on decomposition; (4) SOM dynamics in deep soil layers; and (5) SOM representation in earth system models. Our aim is to comprehensively connect SOM model development to its applications, revealing knowledge gaps in need of focused interdisciplinary attention and exposing pitfalls that, if avoided, can lead to best use of SOM models to support policy initiatives and sustainable land management solutions. (topical review)

  18. Current developments in soil organic matter modeling and the expansion of model applications: a review

    Science.gov (United States)

    Campbell, Eleanor E.; Paustian, Keith

    2015-12-01

    Soil organic matter (SOM) is an important natural resource. It is fundamental to soil and ecosystem functions across a wide range of scales, from site-specific soil fertility and water holding capacity to global biogeochemical cycling. It is also a highly complex material that is sensitive to direct and indirect human impacts. In SOM research, simulation models play an important role by providing a mathematical framework to integrate, examine, and test the understanding of SOM dynamics. Simulation models of SOM are also increasingly used in more ‘applied’ settings to evaluate human impacts on ecosystem function, and to manage SOM for greenhouse gas mitigation, improved soil health, and sustainable use as a natural resource. Within this context, there is a need to maintain a robust connection between scientific developments in SOM modeling approaches and SOM model applications. This need forms the basis of this review. In this review we first provide an overview of SOM modeling, focusing on SOM theory, data-model integration, and model development as evidenced by a quantitative review of SOM literature. Second, we present the landscape of SOM model applications, focusing on examples in climate change policy. We conclude by discussing five areas of recent developments in SOM modeling including: (1) microbial roles in SOM stabilization; (2) modeling SOM saturation kinetics; (3) temperature controls on decomposition; (4) SOM dynamics in deep soil layers; and (5) SOM representation in earth system models. Our aim is to comprehensively connect SOM model development to its applications, revealing knowledge gaps in need of focused interdisciplinary attention and exposing pitfalls that, if avoided, can lead to best use of SOM models to support policy initiatives and sustainable land management solutions.

  19. Combat System Modeling:Modeling Large-Scale Software and Hardware Application Using UML

    OpenAIRE

    AL-Aqrabawi, Mohammad Saleh

    2001-01-01

    Maintaining large-scale legacy applications has been a major challenge for software producers. As an application evolves and gets more complicated, it becomes harder to understand, debug, or modify the code. Moreover, as new members join the development team and others leave, the need for well-documented code arises. Good documentation necessitates the visualization of the code in an easy-to-understand manner. The Unified Modeling Language (UML), an Object Management Group...

  20. Application of Z-Number Based Modeling in Psychological Research

    Directory of Open Access Journals (Sweden)

    Rafik Aliev

    2015-01-01

    Full Text Available Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measurement of psychological parameters is performed using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger’s Anxiety Test completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from completed questionnaires. The basic steps of Z-number based modeling with numerical solutions are presented.

  1. Modeling Real-Time Applications with Reusable Design Patterns

    Science.gov (United States)

    Rekhis, Saoussen; Bouassida, Nadia; Bouaziz, Rafik

    Real-Time (RT) applications, which manipulate important volumes of data, need to be managed with RT databases that deal with time-constrained data and time-constrained transactions. In spite of their numerous advantages, RT database development remains a complex task, since developers must study many design issues related to the RT domain. In this paper, we tackle this problem by proposing RT design patterns that allow the modeling of structural and behavioral aspects of RT databases. We show how RT design patterns can provide design assistance through architecture reuse for recurring design problems. In addition, we present a UML profile that represents the patterns and further facilitates their reuse. This profile proposes, on the one hand, UML extensions allowing the modeling of pattern variability in the RT context and, on the other hand, extensions inspired by the MARTE (Modeling and Analysis of Real-Time Embedded systems) profile.

  2. Application of Z-Number Based Modeling in Psychological Research.

    Science.gov (United States)

    Aliev, Rafik; Memmedova, Konul

    2015-01-01

    Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measurement of psychological parameters is performed using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger's Anxiety Test completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from completed questionnaires. The basic steps of Z-number based modeling with numerical solutions are presented. PMID:26339231

  3. Clinical application of the five-factor model.

    Science.gov (United States)

    Widiger, Thomas A; Presnall, Jennifer Ruth

    2013-12-01

    The Five-Factor Model (FFM) has become the predominant dimensional model of general personality structure. The purpose of this paper is to suggest a clinical application. A substantial body of research indicates that the personality disorders included within the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) can be understood as extreme and/or maladaptive variants of the FFM (the acronym "DSM" refers to any particular edition of the APA DSM). In addition, the current proposal for the forthcoming fifth edition of the DSM (i.e., DSM-5) is shifting closely toward an FFM dimensional trait model of personality disorder. Advantages of this shifting conceptualization are discussed, including treatment planning. PMID:22924994

  4. The Logistic Maturity Model: Application to a Fashion Company

    Directory of Open Access Journals (Sweden)

    Claudia Battista

    2013-08-01

    Full Text Available This paper describes the structure of the logistic maturity model (LMM) in detail and shows the improvements that can be achieved by using the model to identify the most appropriate actions for increasing the performance of logistics processes in industrial companies. The paper also gives an example of the LMM’s application to a famous Italian female fashion firm, which decided to use the model as a guideline for the optimization of its supply chain. Relying on a 5-level maturity staircase, specific achievement indicators as well as key performance indicators and best practices are defined and related to each logistics area/process/sub-process, allowing any user to easily and rapidly understand the most critical logistical issues in terms of process immaturity.

  5. On the application of copula in modeling maintenance contract

    Science.gov (United States)

    Iskandar, B. P.; Husniah, H.

    2016-02-01

    This paper deals with the application of copula in maintenance contracts for a nonrepayable item. Failures of the item are modeled using a two-dimensional approach involving the age and usage of the item, which requires a bivariate distribution to model failures. When the item fails, corrective maintenance (CM) is carried out as minimal repair. CM can be outsourced to an external agent or done in house. The decision problem for the owner is to find the maximum total profit, whilst for the agent it is to determine the optimal price of the contract. We obtain the mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation.
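
    The bivariate dependence between age and usage at failure can be sketched with a copula. The Gaussian copula, exponential marginals, and all parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(7)

# Sample correlated (age, usage) failure pairs via a Gaussian copula:
# draw correlated normals, map to uniforms, then apply marginal inverse CDFs.
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
u = norm.cdf(z)                           # uniform marginals, Gaussian dependence
age = expon(scale=5.0).ppf(u[:, 0])       # hypothetical mean age-to-failure: 5 years
usage = expon(scale=50.0).ppf(u[:, 1])    # hypothetical mean usage-to-failure: 50 (x1000 km)

r = np.corrcoef(age, usage)[0, 1]         # induced correlation between the marginals
```

    The copula separates the dependence structure from the marginal failure distributions, which is what makes it convenient for two-dimensional warranty and maintenance models.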

  6. Improved dual sided doped memristor: modelling and applications

    Directory of Open Access Journals (Sweden)

    Anup Shrivastava

    2014-05-01

    Full Text Available The memristor, a novel and emerging electronic device with a vast range of applications, suffers from poor frequency response and limited saturation length. In this paper, the authors present a novel and innovative device structure for the memristor with two active layers, and its non-linear ionic drift model, for improved frequency response and saturation length. The authors investigated and compared the I–V characteristics of the proposed model with those of conventional memristors and found better results in each case (for different window functions) for the proposed dual-sided doped memristor. For circuit-level simulation, they developed a SPICE model of the proposed memristor and designed some logic gates based on hybrid complementary metal oxide semiconductor memristive logic (memristor ratioed logic). The proposed memristor yields improved results in terms of noise margin, delay time and dynamic hazards compared with conventional (single active layer) memristors.
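
    A non-linear ionic drift model with a window function can be sketched as follows. This is an HP-style single-layer model with the Joglekar window f(x) = 1 - (2x - 1)^(2p), not the authors' dual-layer structure, and every parameter value is illustrative:

```python
import numpy as np

def simulate(p=2, Ron=100.0, Roff=16e3, D=10e-9, mu=1e-14, x0=0.5,
             amp=1.0, freq=1.0, dt=1e-4, cycles=2):
    """Euler integration of dx/dt = mu*Ron/D^2 * i(t) * f(x) under a
    sinusoidal drive, with R(x) = Ron*x + Roff*(1-x)."""
    n = int(cycles / (freq * dt))
    t = np.arange(n) * dt
    v = amp * np.sin(2 * np.pi * freq * t)
    x = x0
    i = np.empty(n)
    for k in range(n):
        R = Ron * x + Roff * (1.0 - x)
        i[k] = v[k] / R
        window = 1.0 - (2.0 * x - 1.0) ** (2 * p)   # Joglekar window
        x += dt * mu * Ron / D**2 * i[k] * window
        x = min(max(x, 0.0), 1.0)                   # keep state in [0, 1]
    return v, i, x

v, i, x = simulate()
```

    Plotting `i` against `v` produces the pinched hysteresis loop characteristic of memristive devices; the window function slows the state drift near the boundaries, which is what the choice of window function controls.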

  7. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

    Temporal and spatiotemporal data form an inherent fabric of society, as we are faced with streams of data coming from numerous sensors, data feeds, and recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). The quest for thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge of developing models that are both accurate and user-friendly (interpretable). The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of CI technologies, bringing together ideas, algorithms, and numeric studies which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  8. Practical Application of Model Checking in Software Verification

    Science.gov (United States)

    Havelund, Klaus; Skakkebaek, Jens Ulrik

    1999-01-01

    This paper presents our experiences in applying JAVA PATHFINDER (JPF), a recently developed JAVA-to-SPIN translator, to finding synchronization bugs in a Chinese Chess game server application written in JAVA. We give an overview of JPF and the subset of JAVA that it supports, and describe the abstraction and verification of the game server. Finally, we analyze the results of the effort. We argue that abstraction by under-approximation is necessary for obtaining sufficiently small models for verification purposes; that user guidance is crucial for effective abstraction; and that current model checkers do not conveniently support the computational models of software in general and JAVA in particular.

  9. Application of declarative modeling approaches for external events

    International Nuclear Information System (INIS)

    Probabilistic Safety Assessments (PSAs) are increasingly being used as a tool for supporting the acceptability of design, procurement, construction, operation, and maintenance activities at Nuclear Power Plants. Since the issuance of Generic Letter 88-20 and subsequent IPE/IPEEE assessments, the NRC has issued several Regulatory Guides, such as RG 1.174, to describe the use of PSA in risk-informed regulation activities. Most PSAs have the capability to address internal events, including internal floods. As more demands are placed on the PSA to support risk-informed applications, there has been a growing need to integrate other external events (seismic, fire, etc.) into the logic models. Most external events involve spatial dependencies and usually impact the logic models at the component level. Therefore, manual insertion of external event impacts into a complex integrated fault tree model may be too cumbersome for routine uses of the PSA. Within the past year, a declarative modeling approach has been developed to automate the injection of external events into the PSA. The intent of this paper is to introduce the concept of declarative modeling in the context of external event applications. A declarative modeling approach involves the definition of rules for the injection of external event impacts into the fault tree logic. A software tool such as EPRI's XInit program can be used to interpret the pre-defined rules and automatically inject external event elements into the PSA. The injection process can easily be repeated, as required, to address plant changes, sensitivity issues, changes in boundary conditions, etc. External event elements may include fire initiating events, seismic initiating events, seismic fragilities, fire-induced hot short events, special human failure events, etc. This approach has been applied at a number of US nuclear power plants, including a nuclear power plant in Romania. (authors)
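
    The rule-based injection idea can be sketched with a toy fault-tree representation. The data structures, gate names, and rule syntax below are invented for illustration and do not reflect the XInit program's actual format:

```python
# Declarative injection sketch: each rule maps a spatial zone to an
# external-event basic event that is OR-ed into every affected gate.
fault_tree = {
    "PUMP_A_FAILS": {"type": "OR", "inputs": ["PUMP_A_MECH"]},
    "VALVE_B_FAILS": {"type": "OR", "inputs": ["VALVE_B_MECH"]},
}
component_zone = {"PUMP_A_FAILS": "FIRE_ZONE_1", "VALVE_B_FAILS": "FIRE_ZONE_2"}

rules = [
    {"zone": "FIRE_ZONE_1", "event": "FIRE_Z1"},      # fire fails zone-1 components
    {"zone": "FIRE_ZONE_1", "event": "SEIS_FRAG_Z1"}, # seismic fragility, same zone
]

def inject(tree, zones, rules):
    """Return a new tree with external-event basic events injected per rule;
    the original tree is left untouched so injection can be re-run at will."""
    out = {g: {"type": d["type"], "inputs": list(d["inputs"])} for g, d in tree.items()}
    for rule in rules:
        for gate, zone in zones.items():
            if zone == rule["zone"] and rule["event"] not in out[gate]["inputs"]:
                out[gate]["inputs"].append(rule["event"])
    return out

injected = inject(fault_tree, component_zone, rules)
```

    Because the rules, not the analyst, decide where events land, the injection can be repeated after plant changes or boundary-condition changes without manually editing the fault tree.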

  10. Probabilistic modeling of scene dynamics for applications in visual surveillance.

    Science.gov (United States)

    Saleemi, Imran; Shafique, Khurram; Shah, Mubarak

    2009-08-01

    We propose a novel method to model and learn the scene activity, observed by a static camera. The proposed model is very general and can be applied for solution of a variety of problems. The motion patterns of objects in the scene are modeled in the form of a multivariate nonparametric probability density function of spatiotemporal variables (object locations and transition times between them). Kernel Density Estimation is used to learn this model in a completely unsupervised fashion. Learning is accomplished by observing the trajectories of objects by a static camera over extended periods of time. It encodes the probabilistic nature of the behavior of moving objects in the scene and is useful for activity analysis applications, such as persistent tracking and anomalous motion detection. In addition, the model also captures salient scene features, such as the areas of occlusion and most likely paths. Once the model is learned, we use a unified Markov Chain Monte Carlo (MCMC)-based framework for generating the most likely paths in the scene, improving foreground detection, persistent labeling of objects during tracking, and deciding whether a given trajectory represents an anomaly to the observed motion patterns. Experiments with real-world videos are reported which validate the proposed approach. PMID:19542580

  11. Stochastic daily solar irradiance for biological modeling applications

    International Nuclear Information System (INIS)

    Stochastic daily weather generators commonly used for biological modeling applications do not adequately reproduce empirical distributions of global solar irradiance. The daily clearness index, the ratio of daily global-to-extraterrestrial irradiance, captures the stochastic component of solar irradiance due to atmospheric conditions. Three alternative models of daily solar irradiance (truncated Gaussian distributions, a proposed modification based on logit-transformed relative clearness, and a family of empirically derived distributions) conditioned on the occurrence of rain are described and evaluated using data from 10 U.S. locations. These models are presented in terms of monthly cumulative distributions and density functions of clearness. Strong non-normality of distributions of clearness, and improved fits obtained with a logit transformation, are demonstrated. The proposed model, based on a logit transformation of relative clearness, was superior to the other two in terms of Akaike's information criterion, and generally superior to the standard model based on truncated Gaussian distributions in terms of goodness of fit between observed and generated irradiances. Based on this evidence, the proposed model is recommended for stochastic generation of daily irradiance conditioned on daily rainfall occurrence and temperature extremes. (author)
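A minimal sketch of the proposed logit-transformed relative-clearness idea. The clearness bounds and the wet/dry logit parameters are placeholders; in practice they would be fitted monthly per station:

```python
import numpy as np

def sample_clearness(n, rain, rng):
    """Draw n daily clearness indices from a logit-normal model of
    relative clearness, conditioned on rain occurrence. The clearness
    bounds and the wet/dry logit parameters below are placeholders;
    in practice they are fitted monthly from station records."""
    k_min, k_max = 0.05, 0.80                        # assumed clearness bounds
    mu, sigma = (-0.8, 1.0) if rain else (0.9, 0.8)  # wet vs dry logit params
    z = rng.normal(mu, sigma, n)                     # logit of relative clearness
    r = 1.0 / (1.0 + np.exp(-z))                     # inverse logit -> (0, 1)
    return k_min + (k_max - k_min) * r               # back to clearness index

rng = np.random.default_rng(42)
dry = sample_clearness(10_000, rain=False, rng=rng)
wet = sample_clearness(10_000, rain=True, rng=rng)
print(dry.mean() > wet.mean())   # True: dry days are clearer on average
```

The logit transform guarantees generated clearness stays inside the physical bounds while allowing strongly skewed (non-Gaussian) distributions.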

  12. Predicting aquifer response time for application in catchment modeling.

    Science.gov (United States)

    Walker, Glen R; Gilfedder, Mat; Dawes, Warrick R; Rassam, David W

    2015-01-01

    It is well established that changes in catchment land use can lead to significant impacts on water resources. Where land-use changes increase evapotranspiration there is a resultant decrease in groundwater recharge, which in turn decreases groundwater discharge to streams. The response time of changes in groundwater discharge to a change in recharge is a key aspect of predicting impacts of land-use change on catchment water yield. Predicting these impacts across the large catchments relevant to water resource planning can require the estimation of groundwater response times from hundreds of aquifers. At this scale, detailed site-specific measured data are often absent, and available spatial data are limited. While numerical models can be applied, there is little advantage if there are no detailed data to parameterize them. Simple analytical methods are useful in this situation, as they allow the variability in groundwater response to be incorporated into catchment hydrological models, with minimal modeling overhead. This paper describes an analytical model which has been developed to capture some of the features of real, sloping aquifer systems. The derived groundwater response timescale can be used to parameterize a groundwater discharge function, allowing groundwater response to be predicted in relation to different broad catchment characteristics at a level of complexity which matches the available data. The results from the analytical model are compared to published field data and numerical model results, and provide an approach with broad application to inform water resource planning in other large, data-scarce catchments. PMID:24842053
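For intuition, the classic linearized Boussinesq analysis of a flat, homogeneous aquifer gives one such analytical response timescale; the paper's model adds slope effects, so the sketch below should be read only as the flat-aquifer limiting case:

```python
import math

def response_time(L, S, T):
    """First-mode response time of a linearized, flat, homogeneous
    Boussinesq aquifer of length L [m] with storativity S [-] and
    transmissivity T [m^2/day]: tau = 4 L^2 S / (pi^2 T). This is the
    classic flat-aquifer result, not the paper's sloping-aquifer formula."""
    return 4.0 * L**2 * S / (math.pi**2 * T)

def discharge_response(t, dR, tau):
    """Fraction of a step change in recharge dR that has appeared as a
    change in groundwater discharge to the stream after time t."""
    return dR * (1.0 - math.exp(-t / tau))

tau = response_time(L=1000.0, S=0.05, T=50.0)
print(round(tau))                                    # ~405 days
print(discharge_response(3 * tau, dR=1.0, tau=tau))  # ~0.95 after 3 tau
```

A timescale of this form is exactly what a lumped groundwater discharge function in a catchment model needs as its single parameter.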

  13. Distributed-Channel Bipolar Device: Experimentation, Analytical Modeling and Applications.

    Science.gov (United States)

    Jiang, Fenglai

    Experimental results and theoretical modeling for four terminal distributed channel bipolar devices (DCBD) are presented. The DCBD device is comprised of an interwoven BJT and MOSFET. The device may be characterized as a MOSFET with a bipolar transistor source distributed under the MOSFET channel. Alternatively, the device may be represented as a BJT where a MOSFET channel provides the current collection function. The physical layout of the device is that of an n-channel MOSFET placed above a p-Si epitaxial base region which was grown on an n^+-Si substrate emitter. Distributed electronic behavior exhibits itself through self-biasing influences of the channel-collected current on the channel-base junction bias. For appropriate biasing, the MOSFET channel divides itself into two regions exhibiting forward active and saturation BJT behavior. Both experimental results and theoretical modeling are provided. Experimental results for "large area" rectangular gate, circular gate and trapezoidal gate DCBD are reported. The experimental results exhibit the transconductance threshold voltage, beta fall-off and transconductance fall-off features reported previously by others. A "large area" trapezoidal gate structure is incorporated to illustrate the gate area influences on the electrical characteristics and to provide a model sensitive structure for evaluating the validity of the theory developed in the dissertation. An analytical model based on conventional MOSFET and bipolar theories is developed. The analytical model is applied to the large gate area devices (example: 0.127 mm rectangular gate length) and smaller dimensional gate devices down to 0.9 micron rectangular gate length. The theoretical results show good agreement with the large gate area experimental results. Application examples are provided. The use of the base current invariant transconductance threshold voltage as a reference voltage is discussed. Comparison of the transconductance threshold voltage

  14. Rectangular amplitudes, conformal blocks, and applications to loop models

    Energy Technology Data Exchange (ETDEWEB)

    Bondesan, Roberto, E-mail: roberto.bondesan@cea.fr [LPTENS, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Institute de Physique Theorique, CEA Saclay, F-91191 Gif-sur-Yvette (France); Jacobsen, Jesper L. [LPTENS, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Universite Pierre et Marie Curie, 4 place Jussieu, 75252 Paris (France); Saleur, Hubert [Institute de Physique Theorique, CEA Saclay, F-91191 Gif-sur-Yvette (France); Physics Department, USC, Los Angeles, CA 90089-0484 (United States)

    2013-02-21

    In this paper we continue the investigation of partition functions of critical systems on a rectangle initiated in [R. Bondesan, et al., Nucl. Phys. B 862 (2012) 553-575]. Here we develop a general formalism of rectangle boundary states using conformal field theory, adapted to describe geometries supporting different boundary conditions. We discuss the computation of rectangular amplitudes and their modular properties, presenting explicit results for the case of free theories. In a second part of the paper we focus on applications to loop models, discussing lattice discretizations in detail using both numerical and analytical calculations. These results allow a geometric interpretation of conformal blocks, and as an application we derive new probability formulas for self-avoiding walks.

  15. Stochastic Model Predictive Control with Applications in Smart Energy Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Edlund, Kristian; Mølbak, Tommy;

    2012-01-01

    ...cover more than 50% of the total consumption by 2050. Energy systems based on significant amounts of renewable energy sources are subject to uncertainties. To accommodate the need for model predictive control (MPC) of such systems, the effect of the stochastic effects on the constraints must be ... function). This is convenient for energy systems, since some constraints are very important to satisfy with a high probability, whereas violation of others is less prone to have a large economic penalty. In MPC applications the control action is obtained by solving an optimization problem at each sampling instant. To make the controller applicable in real time, efficient and reliable algorithms are required. If the uncertainty is assumed to be Gaussian, the optimization problems associated with chance-constrained (linear) MPC can be expressed as second-order cone programming (SOCP) problems. In this paper...

  16. The determination of the most applicable PWV model for Turkey

    Science.gov (United States)

    Deniz, Ilke; Gurbuz, Gokhan; Mekik, Cetin

    2016-07-01

    Water vapor is a key component in modelling the atmosphere and in climate studies. Moreover, long-term water vapor changes can be an independent source for detecting climate change. Since Global Navigation Satellite Systems (GNSS) use microwaves passing through the atmosphere, atmospheric effects on the signals can be modeled with high accuracy. Tropospheric effects on GNSS signals are estimated with the total zenith delay parameter (ZTD), which is the sum of the hydrostatic (ZHD) and wet (ZWD) zenith delays. The first component can be obtained from meteorological observations with high accuracy; the second component can be computed by subtracting ZHD from ZTD (ZWD=ZTD-ZHD). Afterwards, the weighted mean temperature (Tm) or the conversion factor (Q) is used for the conversion between the precipitable water vapor (PWV) and ZWD. The parameters Tm and Q are derived from the analysis of radiosonde stations' profile observations. Numerous Q and Tm models have been developed for individual radiosonde stations, radiosonde station groups, countries and global fields, such as the Bevis Tm model and Emardson and Derks' Q models. Thus, PWV models (Tm and Q models) for Turkey have been developed using a year of radiosonde data (2011) from 8 radiosonde stations. In this study, the models developed are tested by comparing PWVGNSS, computed by applying the Tm and Q models to the ZTD estimates derived with the Bernese and GAMIT/GLOBK software at GNSS stations established in Istanbul and Ankara, against PWVRS from the collocated radiosonde stations, from October 2013 to December 2014, using data obtained from a project (no. 112Y350) supported by the Scientific and Technological Research Council of Turkey (TUBITAK). The comparison results show that PWVGNSS and PWVRS are highly correlated (86% for Ankara and 90% for Istanbul). Thus, the most applicable model for Turkey and the accuracy of GNSS meteorology are investigated. In addition, the Tm model was applied to the ZTD estimates of 20 TUSAGA-Active (CORS-TR) stations in
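The ZWD-to-PWV conversion chain described above can be sketched with the widely cited Bevis-style Tm model and refractivity constants; the study derives its own Tm and Q models for Turkey, so the numbers here are generic illustrations:

```python
def pwv_from_ztd(ztd_m, zhd_m, Ts_kelvin):
    """Convert a GNSS zenith total delay to precipitable water vapor.

    Uses the widely cited Bevis et al. weighted-mean-temperature model
    Tm = 70.2 + 0.72*Ts and standard refractivity constants; these are
    generic values, not the Turkey-specific models fitted in the study.
    """
    rho_w = 1000.0   # density of liquid water, kg/m^3
    Rv = 461.5       # specific gas constant of water vapor, J/(kg K)
    k2p = 22.1       # refractivity constant, K/hPa
    k3 = 3.739e5     # refractivity constant, K^2/hPa

    zwd = ztd_m - zhd_m                       # wet delay, m (ZWD = ZTD - ZHD)
    Tm = 70.2 + 0.72 * Ts_kelvin              # weighted mean temperature, K
    Q = 1e8 / (rho_w * Rv * (k2p + k3 / Tm))  # dimensionless factor, ~0.15
    return Q * zwd                            # PWV, same length unit as ZWD

pwv = pwv_from_ztd(ztd_m=2.40, zhd_m=2.28, Ts_kelvin=288.15)
print(round(pwv * 1000, 1))   # 19.0 mm: roughly 0.16 of the 120 mm wet delay
```

The conversion factor Q varies with Tm, which is why station- or region-specific Tm models (the subject of the study) matter for accuracy.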

  17. Building energy modeling for green architecture and intelligent dashboard applications

    Science.gov (United States)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied the passive technique called the roof solar chimney for reducing the cooling load in homes architecturally. Three models of the chimney were created: a zonal building energy model, computational fluid dynamics model, and numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it for building simulation reliably. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high and low efficiency constructions, and three ventilation strategies. The results showed that in four US climates, the roof solar chimney results in significant cooling load energy savings of up to 90%. After developing this new method for the small scale representation of a passive architecture technique in BEM, the study expanded the scope to address a fundamental issue in modeling - the implementation of the uncertainty from and improvement of occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration to the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  18. Statistical mechanical modeling: Computer simulations, analysis and applications

    Science.gov (United States)

    Subramanian, Balakrishna

    This thesis describes the applications of statistical mechanical models and tools, especially computational techniques, to the study of several problems in science. We study in chapter 2, various properties of a non-equilibrium cellular automaton model, the Toom model. We obtain numerically the exponents describing the fluctuations of the interface between the two stable phases of the model. In chapter 3, we introduce a binary alloy model with three-body potentials. Unlike the usual Ising-type models with two-body interactions, this model is not symmetric in its components. We calculate the exact low temperature phase diagram using Pirogov-Sinai theory and also find the mean-field equilibrium properties of this model. We then study the kinetics of phase segregation following a quenching in this model. We find that the results are very similar to those obtained for Ising-type models with pair interactions, indicating universality. In chapter 4, we discuss the statistical properties of "Contact Maps". These maps are used to represent three-dimensional structures of proteins in modeling problems. We find that this representation space has particular properties that make it a convenient choice. The maps representing native folds of proteins correspond to compact structures which in turn correspond to maps with low degeneracy, making it easier to translate the map into the detailed 3-dimensional structure. The early stage of formation of a river network is described in chapter 5 using quasi-random spanning trees on a square lattice. We observe that the statistical properties generated by these models are quite similar (better than some of the earlier models) to the empirical laws and results presented by geologists for real river networks. Finally, in chapter 6 we present a brief note on our study of the problem of progression of heterogeneous breast tumors. We investigate some of the possible pathways of progression based on the traditional notions of DCIS (Ductal

  19. Supply chain management models, applications, and research directions

    CERN Document Server

    Pardalos, Panos; Romeijn, H

    2005-01-01

    This work brings together some of the most up-to-date research in the application of operations research and mathematical modeling techniques to problems arising in supply chain management and e-Commerce. While research in the broad area of supply chain management encompasses a wide range of topics and methodologies, we believe this book provides a good snapshot of current quantitative modeling approaches, issues, and trends within the field. Each chapter is a self-contained study of a timely and relevant research problem in supply chain management. The individual works place a heavy emphasis on the application of modeling techniques to real world management problems. In many instances, the actual results from applying these techniques in practice are highlighted. In addition, each chapter provides important managerial insights that apply to general supply chain management practice. The book is divided into three parts. The first part contains chapters that address the new and rapidly growing role of the inte...

  20. Applicability of dual-route reading models to Spanish.

    Science.gov (United States)

    Ardila, Alfredo; Cuetos, Fernando

    2016-02-01

    Two opposing points of view have been presented with regard to the applicability of dual-route reading models to Spanish. Some authors maintain that, given the transparency of the reading system, non-lexical reading is the strategy followed predominantly by Spanish readers, and for that reason these models are not appropriate to explain alexias (acquired dyslexias) in Spanish. Other authors consider that, since several cases of phonological, surface and deep alexia have been reported, dual-route reading models are applicable to Spanish in the same way as to irregular writing systems. In order to contrast these two points of view, an analysis is made of the two main factors that influence reading: the characteristics of Spanish orthography and the characteristics of Spanish readers. It is concluded that: (1) due to its transparency, non-lexical reading represents, as in other transparent orthographies, the initial reading strategy in Spanish; (2) the “reading threshold” (i.e., the time required to become literate) is lower in Spanish because there are no irregular words to learn; (3) as reading experience increases, speed increases and lexical reading becomes used more; (4) given the characteristics of the Spanish reading system, it is understandable that the frequency of deep dyslexia is so low. PMID:26820427

  1. Bilayer Graphene Application on NO2 Sensor Modelling

    Directory of Open Access Journals (Sweden)

    Elnaz Akbari

    2014-01-01

    Full Text Available Graphene is one of the carbon allotropes: a single-atom-thin layer with sp2 hybridization and a two-dimensional (2D) honeycomb structure of carbon. As an outstanding material exhibiting unique mechanical, electrical, and chemical characteristics including high strength, high conductivity, and high surface area, graphene has earned a remarkable position in today’s experimental and theoretical studies as well as industrial applications. One such application incorporates the idea of using graphene to achieve accuracy and higher speed in detection devices utilized in cases where gas sensing is required. Although there are plenty of experimental studies in this field, the lack of analytical models is felt deeply. To begin the modelling, a field-effect transistor (FET)-based structure has been chosen as the platform, and the effect of NO2 injection on the bilayer graphene density of states is discussed. The chemical reaction between graphene and the gas creates new carriers in graphene, which change the carrier density and eventually the carrier velocity. In the presence of NO2 gas, electrons are donated to the FET channel, which is employed as the sensing mechanism. In order to evaluate the accuracy of the proposed models, the results obtained are compared with the existing experimental data and acceptable agreement is reported.

  2. Development and application of the integrated SWAT MODFLOW model

    Science.gov (United States)

    Kim, Nam Won; Chung, Il Moon; Won, Yoo Seung; Arnold, Jeffrey G.

    2008-07-01

    This paper suggests a new approach for integrating the quasi-distributed watershed model, SWAT, with the fully-distributed ground-water model, MODFLOW. Since the SWAT model has semi-distributed features, its groundwater component does not consider distributed parameters such as hydraulic conductivity and storage coefficient. In generating a detailed representation of groundwater recharge, it is equally difficult to calculate the head distribution and the distributed pumping rate. In order to solve this problem a method is proposed whereby the characteristics of the hydrologic response units (HRUs) in the SWAT model are exchanged with cells in the MODFLOW model. By using this HRU-cell conversion interface, the distributed groundwater recharge rate and the groundwater evapotranspiration can be effectively simulated. By considering the interaction between the stream network and the aquifer to reflect boundary flow, the linkage is completed. For this purpose, the RIVER package in the MODFLOW model is used for river-aquifer interaction. This combined modeling is applied to the Musimcheon Basin in Korea. The application demonstrates that an integrated SWAT-MODFLOW is capable of simulating a spatio-temporal distribution of groundwater recharge rates, aquifer evapotranspiration and groundwater levels. It also enables an interaction between the saturated aquifer and channel reaches. This interaction played an important role in the generation of groundwater discharge in the basin, especially during the low flow period. The advanced water transfer method in SWAT-MODFLOW was successfully tested, and reproduced the distributed drawdown and reduced stream flow by pumping with multiple wells. Therefore, when considering discharge to streams, springs or marshes, the use of this model would be beneficial in planning for the sustainable development of groundwater.
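At its core, an HRU-cell conversion interface is an area-weighted mapping between SWAT's response units and MODFLOW's grid cells. A toy sketch with a hypothetical 4-cell, 3-HRU overlap matrix (real weights come from intersecting the HRU map with the grid):

```python
import numpy as np

# Hypothetical overlap weights: fraction of each MODFLOW cell's area covered
# by each SWAT HRU (rows: 4 cells, columns: 3 HRUs). In practice these come
# from intersecting the HRU map with the groundwater grid.
W = np.array([[1.0, 0.0, 0.0],
              [0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])

recharge_per_hru = np.array([2.0, 1.0, 0.5])   # mm/day simulated by SWAT
recharge_per_cell = W @ recharge_per_hru       # mm/day handed to MODFLOW
print(recharge_per_cell)   # cell 2 blends HRUs 1 and 2: 0.6*2 + 0.4*1 = 1.6

# The reverse mapping aggregates cell results back to HRUs, e.g. an
# area-weighted mean groundwater head per HRU for SWAT's next time step:
heads = np.array([10.0, 9.5, 9.0, 8.0])        # heads computed by MODFLOW
hru_heads = (W.T @ heads) / W.sum(axis=0)
```

Exchanging fields through such weight matrices each time step is what lets the semi-distributed SWAT recharge drive a fully distributed aquifer solution.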

  3. A review of surrogate models and their application to groundwater modeling

    Science.gov (United States)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
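A data-driven surrogate in its simplest form: here a cubic polynomial stands in for the kriging, radial-basis-function, or neural-network emulators common in this literature, and the "expensive" model is a toy function:

```python
import numpy as np

def expensive_model(k):
    """Stand-in for a slow groundwater model: some output (say, drawdown)
    as a nonlinear function of a hydraulic-conductivity parameter k."""
    return np.log(k) / (1.0 + k)

# 1. Run the complex model at a few training points (the costly step).
k_train = np.linspace(0.5, 5.0, 8)
y_train = expensive_model(k_train)

# 2. Fit a cheap data-driven surrogate to the input-output mapping. A cubic
#    polynomial stands in for the kriging / RBF / neural-net emulators
#    used in practice.
surrogate = np.poly1d(np.polyfit(k_train, y_train, deg=3))

# 3. The surrogate replaces the expensive model in the many evaluations
#    that calibration, sensitivity and uncertainty analysis require.
k_test = np.linspace(0.6, 4.9, 100)
err = np.max(np.abs(surrogate(k_test) - expensive_model(k_test)))
```

The trade-off the review discusses is visible even here: the surrogate is only trustworthy inside the sampled parameter range, and its accuracy depends entirely on where the training runs were placed.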

  4. Modular coupling of transport and chemistry: theory and model applications

    International Nuclear Information System (INIS)

    For the description of complex processes in the near-field of a radioactive waste repository, the coupling of transport and chemistry is necessary. A reason for the relatively minor use of coupled codes in this area is the high amount of computer time and storage capacity necessary for calculations by conventional codes, and lack of available data. The simple application of the sequentially coupled code MCOTAC, which couples one-dimensional advective, dispersive and diffusive transport with chemical equilibrium complexation and precipitation/dissolution reactions in a porous medium, shows some promising features with respect to applicability to relevant problems. Transport, described by a random walk of multi-species particles, and chemical equilibrium calculations are solved separately, coupled only by an exchange term to ensure mass conservation. The modular-structured code was applied to three problems: a) incongruent dissolution of hydrated silicate gels, b) dissolution of portlandite and c) calcite dissolution and hypothetical dolomite precipitation. This allows for a comparison with other codes and their applications. The incongruent dissolution of cement phases, important for degradation of cementitious materials in a repository, can be included in the model without the problems which occur with a directly coupled code. The handling of a sharp multi-mineral front system showed a much faster calculation time compared to a directly coupled code application. Altogether, the results are in good agreement with other code calculations. Hence, the chosen modular concept of MCOTAC is more open to an easy extension of the code to include additional processes like sorption, kinetically controlled processes, transport in two or three spatial dimensions, and adaptation to new developments in computing (hardware and software), an important factor for applicability. (author) figs., tabs., refs
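The sequential coupling MCOTAC uses, a transport step followed by an equilibrium chemistry step tied together by mass conservation, can be caricatured with random-walk particles and a simple solubility cap. All numbers are illustrative, not MCOTAC's:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D column split into cells; each particle carries a unit mass of one
# dissolved species.
n_cells, dx, dt = 50, 1.0, 0.1
v, D = 1.0, 0.5            # pore velocity and dispersion coefficient
c_sat = 5                  # equilibrium solubility, particles per cell

x = np.zeros(300)          # all particles start at the inlet
solid = np.zeros(n_cells)  # precipitated mass per cell

for _ in range(100):
    # Transport step: advective drift plus a dispersive random walk.
    x += v * dt + rng.normal(0.0, np.sqrt(2 * D * dt), x.size)
    x = np.clip(x, 0.0, n_cells * dx - 1e-9)

    # Chemistry step: anything above solubility precipitates in place.
    cells = (x / dx).astype(int)
    conc = np.bincount(cells, minlength=n_cells)
    excess = np.maximum(conc - c_sat, 0)
    solid += excess
    keep = np.ones(x.size, dtype=bool)
    for c in np.nonzero(excess)[0]:
        keep[np.nonzero(cells == c)[0][:excess[c]]] = False
    x = x[keep]

# Mass conservation is the exchange term coupling the two steps:
print(x.size + int(solid.sum()))   # 300: mobile + solid = initial particles
```

Because the two solvers only exchange mass between steps, each can be replaced or extended (e.g. with sorption or kinetics) without rewriting the other, which is the modularity the abstract emphasizes.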

  5. Birth-death branching models. Application to African elephant populations.

    Science.gov (United States)

    Corbacho, Casimiro; Molina, Manuel; Mota, Manuel; Ramos, Alfonso

    2013-09-01

    Branching models have a long history of biological applications, particularly in population dynamics. In this work, our interest is the development of mathematical models to describe the demographic dynamics of socially structured animal populations, focusing our attention on lineages, usually matrilines, as the basic structure in the population. Significant efforts have been made to develop models based on the assumption that all individuals behave identically with respect to reproduction. However, the reproduction phase has a large random component that involves not only demographic but also environmental factors that change across range distribution of species. In the present work, we introduce new classes of birth-death branching models which take such factors into account. We assume that both, the offspring probability distribution and the death probabilities may be different in each generation, changing either predictably or unpredictably in relation to habitat features. We consider the genealogical tree generated by observation of the process until a pre-set generation. We determine the probability distributions of the random variables representing the number of dead or living individuals having at least one ancestor alive, living individuals whose ancestors are all dead, and dead individuals whose ancestors are all dead, explicitly obtaining their principal moments. Also, we derive the probability distributions corresponding to the partial and total numbers of such biological variables, obtaining in particular the distribution of the total number of matriarchs in the genealogical tree. We apply the proposed models to describe the demographic dynamics of African elephant populations living in different habitats. PMID:23648183
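A minimal simulation of a branching process with generation-dependent offspring and survival parameters; Poisson offspring is a simplifying assumption made here, whereas the paper's models allow arbitrary offspring distributions:

```python
import numpy as np

def simulate_lineage(n_generations, offspring_mean, survival_prob, rng):
    """One matriline under a birth-death branching model whose offspring
    law and survival probability change from generation to generation
    (e.g. tracking habitat quality). Poisson offspring is an assumption
    made here for simplicity."""
    alive = 1          # the founding matriarch
    total_born = 1
    for g in range(n_generations):
        births = rng.poisson(offspring_mean[g], size=alive).sum()
        total_born += births
        # each newborn independently survives this generation's hazards
        alive = rng.binomial(births, survival_prob[g])
    return alive, total_born

rng = np.random.default_rng(7)
# Mean growth per generation is offspring * survival: 1.6 vs 0.8, i.e. a
# supercritical (favorable habitat) vs subcritical (harsh habitat) lineage.
good = simulate_lineage(10, [2.0] * 10, [0.8] * 10, rng)
harsh = simulate_lineage(10, [2.0] * 10, [0.4] * 10, rng)
```

Repeating such runs over many lineages gives Monte Carlo estimates of the quantities the paper derives analytically, such as the distribution of living descendants or of the total number of matriarchs in the genealogical tree.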

  6. Numerical algorithm of distributed TOPKAPI model and its application

    Directory of Open Access Journals (Sweden)

    Deng Peng

    2008-12-01

    Full Text Available The TOPKAPI (TOPographic Kinematic APproximation and Integration) model is a physically based rainfall-runoff model derived from the integration in space of the kinematic wave model. In the TOPKAPI model, rainfall-runoff and runoff routing processes are described by three nonlinear reservoir differential equations that are structurally similar and describe different hydrological and hydraulic processes. Equations are integrated over grid cells that describe the geometry of the catchment, leading to a cascade of nonlinear reservoir equations. To improve the model’s computational precision, this paper provides the general form of these equations and describes the solution by means of a numerical algorithm, the variable-step fourth-order Runge-Kutta algorithm. For the purpose of assessing the quality of the comprehensive numerical algorithm, this paper presents a case study application to the Buliu River Basin, which has an area of 3 310 km2, using a DEM (digital elevation model grid with a resolution of 1 km. The results show that the variable-step fourth-order Runge-Kutta algorithm for nonlinear reservoir equations is a good approximation of subsurface flow in the soil matrix, overland flow over the slopes, and surface flow in the channel network, allowing us to retain the physical properties of the original equations at scales ranging from a few meters to 1 km.
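The variable-step fourth-order Runge-Kutta idea, applied to a single nonlinear reservoir equation dV/dt = r - c*V^alpha of the kind that TOPKAPI cascades, can be sketched with step doubling; the published TOPKAPI scheme may differ in its error control, and the parameters below are illustrative:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve_reservoir(f, y0, t_end, h0=1.0, tol=1e-6):
    """Variable-step RK4 via step doubling: compare one full step against
    two half steps and shrink/grow h to keep the difference below tol."""
    t, y, h = 0.0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_full)
        if err > tol:
            h *= 0.5          # reject the step and retry with half the size
            continue
        t, y = t + h, y_half  # accept the more accurate two-half-step value
        if err < tol / 32:
            h *= 2.0          # error is tiny, so grow the step
    return y

# Nonlinear reservoir: storage gains rainfall r and drains as c * V**alpha,
# the structure shared by TOPKAPI's soil, overland and channel components.
r, c, alpha = 2.0, 0.5, 5.0 / 3.0        # illustrative parameters
V = solve_reservoir(lambda t, V: r - c * V**alpha, y0=0.01, t_end=50.0)
V_eq = (r / c) ** (1 / alpha)            # steady state: inflow = outflow
print(abs(V - V_eq) < 1e-3)              # True: storage relaxes to equilibrium
```

Step doubling is the simplest adaptive strategy; it spends small steps only where the nonlinearity demands them, which is what makes integrating thousands of grid-cell reservoirs affordable.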

  7. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  8. Cognitive interference modeling with applications in power and admission control

    KAUST Repository

    Mahmood, Nurul Huda

    2012-10-01

    One of the key design challenges in a cognitive radio network is controlling the interference generated at coexisting primary receivers. In order to design efficient cognitive radio systems and to minimize their unwanted consequences, it is therefore necessary to effectively control the secondary interference at the primary receivers. In this paper, a generalized framework for the interference analysis of a cognitive radio network where the different secondary transmitters may transmit with different powers and transmission probabilities, is presented and various applications of this interference model are demonstrated. The findings of the analytical performance analyses are confirmed through selected computer-based Monte-Carlo simulations. © 2012 IEEE.
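One way to read the framework above, secondary transmitters with individual powers and transmission probabilities, is as a Monte Carlo aggregate-interference model. The sketch below assumes Rayleigh (exponential-power) fading and made-up network parameters, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Heterogeneous secondary network: per-transmitter powers, transmission
# probabilities and path gains to the primary receiver (made-up values).
powers = np.array([1.0, 0.5, 2.0, 0.25])
p_tx = np.array([0.9, 0.5, 0.2, 0.7])
gains = np.array([0.05, 0.08, 0.02, 0.1])

n_trials = 200_000
active = rng.random((n_trials, 4)) < p_tx       # who transmits in each slot
fading = rng.exponential(1.0, (n_trials, 4))    # Rayleigh power fading, E[h]=1
interference = (active * fading * powers * gains).sum(axis=1)

# Sanity check against the closed-form mean, E[I] = sum_i p_i * P_i * g_i:
mean_analytic = np.sum(p_tx * powers * gains)
print(abs(interference.mean() - mean_analytic) < 0.005)   # True

# Quantities like this feed power/admission control, e.g. the probability
# that aggregate interference exceeds the primary receiver's tolerance:
outage = (interference > 0.2).mean()
```

Admission control then becomes a search over which transmitters (and at what powers) keep this exceedance probability below the primary's limit.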

  9. APPLICABILITY OF SIMILARITY CONDITIONS TO ANALOGUE MODELLING OF TECTONIC STRUCTURES

    OpenAIRE

    Mikhail A. Goncharov

    2015-01-01

    The publication is aimed at comparing concepts of V.V. Belousov and M.V. Gzovsky, outstanding researchers who established fundamentals of tectonophysics in Russia, specifically similarity conditions in application to tectonophysical modeling. Quotations from their publications illustrate differences in their views. In this respect, we can reckon V.V. Belousov as a «realist» as he supported «the liberal point of view» [Methods of modelling…, 1988, p. 21–22], whereas M.V. Gzovsky can be regarde...

  10. Modeling & imaging of bioelectrical activity principles and applications

    CERN Document Server

    He, Bin

    2010-01-01

    Over the past several decades, much progress has been made in understanding the mechanisms of electrical activity in biological tissues and systems, and for developing non-invasive functional imaging technologies to aid clinical diagnosis of dysfunction in the human body. The book will provide full basic coverage of the fundamentals of modeling of electrical activity in various human organs, such as heart and brain. It will include details of bioelectromagnetic measurements and source imaging technologies, as well as biomedical applications. The book will review the latest trends in

  11. On the applicability of models for outdoor sound

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as Parabolic Equation methods, FFP methods and Boundary Element Methods all have advantages in certain situations. These approaches are accurate and predict not only sound pressure levels but also phase information. Such methods are, however, not always able to predict the sound field for more complicated scenarios involving terrain features, atmospheric wind and temperature gradients and turbulence. Another class of methods is based upon approximate theory...

  12. On the applicability of models for outdoor sound (A)

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as parabolic equation methods, FFP methods, and boundary element methods all have advantages in certain situations. These approaches are accurate and predict not only sound pressure levels but also phase information. Such methods are, however, not always able to predict the sound field for more complicated scenarios involving terrain features, atmospheric wind and temperature gradients, and turbulence. Another class of methods is based upon approximate theory…

  13. Structure model of energy efficiency indicators and applications

    International Nuclear Information System (INIS)

    For the purposes of energy conservation and environmental protection, the government of Taiwan has instigated long-term policies to continuously encourage and assist industry in improving the efficiency of energy utilization. While multiple actions have led to practical energy saving to a limited extent, no strong evidence of improvement in energy efficiency was observed from the energy efficiency indicators (EEI) system, according to the annual national energy statistics and survey. A structural analysis of EEI is needed in order to understand the role that energy efficiency plays in the EEI system. This work uses the Taylor series expansion to develop a structure model for EEI at the level of the process sector of industry. The model is developed on the premise that the design parameters of the process are used in comparison with the operational parameters for energy differences. The utilization index of production capability and the variation index of energy utilization are formulated in the model to describe the differences between EEIs. Both qualitative and quantitative methods for the analysis of energy efficiency and energy savings are derived from the model. Through structural analysis, the model showed that, while the performance of EEI is proportional to the process utilization index of production capability, it is possible that energy may develop in a direction opposite to that of EEI. This helps to explain, at least in part, the inconsistency between EEI and energy savings. An energy-intensive steel plant in Taiwan was selected to show the application of the model. The energy utilization efficiency of the plant was evaluated and the amount of energy that had been saved or over-used in the production process was estimated. Some insights gained from the model outcomes are helpful to further enhance energy efficiency in the plant

  14. Chemical kinetic modeling of H2 applications

    Energy Technology Data Exchange (ETDEWEB)

    Marinov, N.M.; Westbrook, C.K.; Cloutman, L.D. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1995-09-01

    Work being carried out at LLNL has concentrated on studies of the role of chemical kinetics in a variety of problems related to hydrogen combustion in practical combustion systems, with an emphasis on vehicle propulsion. Use of hydrogen offers significant advantages over fossil fuels, and computer modeling provides advantages when used in concert with experimental studies. Many numerical "experiments" can be carried out quickly and efficiently, reducing the cost and time of system development, and many new and speculative concepts can be screened to identify those with sufficient promise to pursue experimentally. This project uses chemical kinetic and fluid dynamic computational modeling to examine the combustion characteristics of systems burning hydrogen, either as the only fuel or mixed with natural gas. Oxidation kinetics are combined with pollutant formation kinetics, including the formation of oxides of nitrogen and also air toxics in natural gas combustion. We have refined many of the elementary kinetic reaction steps in the detailed reaction mechanism for hydrogen oxidation. To extend the model to pressures characteristic of internal combustion engines, it was necessary to apply theoretical pressure falloff formalisms for several key steps in the reaction mechanism. We have continued development of simplified reaction mechanisms for hydrogen oxidation, we have implemented those mechanisms into multidimensional computational fluid dynamics models, and we have used models of chemistry and fluid dynamics to address selected application problems. At the present time, we are using computed high-pressure flame and auto-ignition data to further refine the simplified kinetics models, which are then used in multidimensional fluid mechanics models. Detailed kinetics studies have investigated hydrogen flames and the ignition of hydrogen behind shock waves, intended to refine the detailed reaction mechanisms.
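As a small illustration of the "pressure falloff formalisms" mentioned above, the classical Lindemann blending of low- and high-pressure-limit rate constants can be sketched as follows; the rate parameters here are invented placeholders, not LLNL mechanism values:

```python
import math

def arrhenius(A, n, Ea, T, R=8.314):
    """Modified Arrhenius rate constant k(T) = A * T**n * exp(-Ea/(R*T))."""
    return A * T**n * math.exp(-Ea / (R * T))

def lindemann_falloff(k0, kinf, M):
    """Blend the low-pressure (k0) and high-pressure (kinf) limits of a
    pressure-dependent reaction; M is the third-body concentration."""
    Pr = k0 * M / kinf                  # reduced pressure
    return kinf * Pr / (1.0 + Pr)

# Invented placeholder parameters for a recombination-type step at 1000 K
T = 1000.0
kinf = arrhenius(4.0e12, 0.4, 0.0, T)   # high-pressure limit
k0 = arrhenius(5.0e18, -1.2, 0.0, T)    # low-pressure limit
for M in (1e-9, 1e-6, 1e-3):            # third-body concentration sweep
    print(f"M={M:.0e}  k={lindemann_falloff(k0, kinf, M):.3e}")
```

At low M the effective rate tends to k0*M (third-body limited); at high M it saturates at kinf, which is the behavior the falloff treatment captures.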

  15. Application of Service Quality Model in Education Environment

    Directory of Open Access Journals (Sweden)

    Ting Ding Hooi

    2016-02-01

    Most of the ideas on service quality stem from the West. The massive developments in research in the West are of undeniable importance, leading to the generation and development of new ideas. These ideas were subsequently channeled to developing countries, where they were formulated and used to obtain better approaches to delivering service quality. There is much to be learnt from the service quality model SERVQUAL, which has attained wide acceptance in the West. Service quality in the education system is important to guarantee the effectiveness and quality of education. Effective and quality education will be able to produce quality graduates, who will contribute to the development of the nation. This paper discusses the application of the SERVQUAL model to the education environment.

  16. Application of the GRC Stirling Convertor System Dynamic Model

    Science.gov (United States)

    Regan, Timothy F.; Lewandowski, Edward J.; Schreiber, Jeffrey G. (Technical Monitor)

    2004-01-01

    The GRC Stirling Convertor System Dynamic Model (SDM) has been developed to simulate dynamic performance of power systems incorporating free-piston Stirling convertors. This paper discusses its use in evaluating system dynamics and other systems concerns. Detailed examples are provided showing the use of the model in evaluation of off-nominal operating conditions. The many degrees of freedom in both the mechanical and electrical domains inherent in the Stirling convertor and the nonlinear dynamics make simulation an attractive analysis tool in conjunction with classical analysis. Application of SDM in studying the relationship of the size of the resonant circuit quality factor (commonly referred to as Q) in the various resonant mechanical and electrical sub-systems is discussed.

  17. Solid modeling and applications rapid prototyping, CAD and CAE theory

    CERN Document Server

    Um, Dugan

    2016-01-01

    The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), the essentials of Rapid Prototyping, as well as practical skills needed to apply this understanding in real world design and manufacturing settings. The book includes three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concept of geometric modeling, Hermite and Bezier spline curve theory, and 3-dimensional surface theories as well as rendering theory. The CAE section explores mesh generation theory, matrix notation for FEM, the stiffness method, and truss equations. And in Rapid Prototyping, the author illustrates stereolithographic theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...

  18. Soft Computing Models in Industrial and Environmental Applications

    CERN Document Server

    Abraham, Ajith; Corchado, Emilio; 7th International Conference, SOCO’12

    2013-01-01

    This volume of Advances in Intelligent and Soft Computing contains accepted papers presented at SOCO 2012, held in the beautiful and historic city of Ostrava (Czech Republic), in September 2012. Soft computing represents a collection or set of computational techniques in machine learning, computer science and some engineering disciplines, which investigate, simulate, and analyze very complex issues and phenomena. After a thorough peer-review process, the SOCO 2012 International Program Committee selected 75 papers for publication in these conference proceedings, representing an acceptance rate of 38%. In this edition a special emphasis was put on the organization of special sessions. Three special sessions were organized on relevant topics: Soft Computing Models for Control Theory & Applications in Electrical Engineering, Soft Computing Models for Biomedical Signals and Data Processing, and Advanced Soft Computing Methods in Computer Vision and Data Processing. The selecti...

  19. Land Surface Modeling Applications for Famine Early Warning

    Science.gov (United States)

    McNally, A.; Verdin, J. P.; Peters-Lidard, C. D.; Arsenault, K. R.; Wang, S.; Kumar, S.; Shukla, S.; Funk, C. C.; Pervez, M. S.; Fall, G. M.; Karsten, L. R.

    2015-12-01

    AGU 2015 Fall Meeting, Session: Remote Sensing Applications for Water Resources Management (ID #7598). Famine early warning has traditionally required close monitoring of agro-climatological conditions, putting them in historical context, and projecting them forward to anticipate end-of-season outcomes. In recent years, it has become necessary to factor in the effects of a changing climate as well. There has also been a growing appreciation of the linkage between food security and water availability. In 2009, Famine Early Warning Systems Network (FEWS NET) science partners began developing land surface modeling (LSM) applications to address these needs. With support from the NASA Applied Sciences Program, an instance of the Land Information System (LIS) was developed to specifically support FEWS NET. A simple crop water balance model (GeoWRSI) traditionally used by FEWS NET took its place alongside the Noah land surface model and the latest version of the Variable Infiltration Capacity (VIC) model, and LIS data readers were developed for FEWS NET precipitation forcings (NOAA's RFE and USGS/UCSB's CHIRPS). The resulting system was successfully used to monitor and project soil moisture conditions in the Horn of Africa, foretelling poor crop outcomes in the OND 2013 and MAM 2014 seasons. In parallel, NOAA created another instance of LIS to monitor snow water resources in Afghanistan, which are an early indicator of water availability for irrigation and crop production. These successes have been followed by investment in LSM implementations to track and project water availability in Sub-Saharan Africa and Yemen, work that is now underway.
Adoption of
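As a loose illustration of the "simple crop water balance model" idea mentioned above (GeoWRSI itself is more elaborate), a minimal single-bucket soil-water accounting step might look like this; all parameters are hypothetical:

```python
def bucket_step(S, P, PET, Smax):
    """One daily step of a single-bucket soil-water balance.
    S: soil moisture (mm), P: precipitation (mm), PET: potential
    evapotranspiration (mm), Smax: water-holding capacity (mm)."""
    aet = PET * min(1.0, S / Smax)      # moisture-limited actual ET
    S = S + P - aet
    excess = max(0.0, S - Smax)         # saturation excess becomes runoff
    S -= excess
    return max(0.0, S), aet, excess

S = 50.0                                # initial storage (hypothetical)
for P, PET in [(20.0, 5.0), (0.0, 6.0), (80.0, 4.0)]:
    S, aet, runoff = bucket_step(S, P, PET, Smax=100.0)
    print(f"S={S:6.1f} mm  AET={aet:4.1f} mm  runoff={runoff:5.1f} mm")
```

Tracking the ratio of accumulated actual to potential ET over a season is the kind of signal a crop water satisfaction index builds on.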

  20. Surrogate Model for Recirculation Phase LBLOCA and DET Application

    International Nuclear Information System (INIS)

    In the nuclear safety field, response surfaces were used in the first demonstration of the code scaling, applicability, and uncertainty (CSAU) methodology to quantify the uncertainty of the peak clad temperature (PCT) during a large-break loss-of-coolant accident (LBLOCA). Surrogates could have applications in other nuclear safety areas such as dynamic probabilistic safety assessment (PSA). Dynamic PSA attempts to couple the probabilistic nature of failure events, component transitions, and human reliability to deterministic calculations of time-dependent nuclear power plant (NPP) responses, usually through the use of thermal-hydraulic (TH) system codes. The overall mathematical complexity of dynamic PSA architectures, with many embedded computationally expensive TH code calculations and large input/output data streams, has limited realistic studies of NPPs. This paper presents a time-dependent surrogate model for the recirculation phase of a hot leg LBLOCA in the OPR-1000. The surrogate model is developed through the ACE algorithm, a powerful nonparametric regression technique, trained on RELAP5 simulations of the LBLOCA. Benchmarking of the surrogate is presented, along with an application to a simplified dynamic event tree (DET). A time-dependent surrogate model to predict core subcooling during the recirculation phase of a hot leg LBLOCA in the OPR-1000 has been developed. The surrogate assumed the structure of a general discrete-time dynamic model and learned the nonlinear functional form by performing nonparametric regression on RELAP5 simulations with the ACE algorithm. The surrogate model input parameters represent mass and energy flux terms to the RCS that appeared as user-supplied or code-calculated boundary conditions in the RELAP5 model. The surrogate accurately predicted the TH behavior of the core for a variety of HPSI system performance and containment conditions when compared with RELAP5 simulations. The surrogate was applied in a DET application replacing

  1. Surrogate Model for Recirculation Phase LBLOCA and DET Application

    Energy Technology Data Exchange (ETDEWEB)

    Fynan, Douglas A; Ahn, Kwang-Il [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, John C. [Univ. of Michigan, Michigan (United States)

    2014-10-15

    In the nuclear safety field, response surfaces were used in the first demonstration of the code scaling, applicability, and uncertainty (CSAU) methodology to quantify the uncertainty of the peak clad temperature (PCT) during a large-break loss-of-coolant accident (LBLOCA). Surrogates could have applications in other nuclear safety areas such as dynamic probabilistic safety assessment (PSA). Dynamic PSA attempts to couple the probabilistic nature of failure events, component transitions, and human reliability to deterministic calculations of time-dependent nuclear power plant (NPP) responses, usually through the use of thermal-hydraulic (TH) system codes. The overall mathematical complexity of dynamic PSA architectures, with many embedded computationally expensive TH code calculations and large input/output data streams, has limited realistic studies of NPPs. This paper presents a time-dependent surrogate model for the recirculation phase of a hot leg LBLOCA in the OPR-1000. The surrogate model is developed through the ACE algorithm, a powerful nonparametric regression technique, trained on RELAP5 simulations of the LBLOCA. Benchmarking of the surrogate is presented, along with an application to a simplified dynamic event tree (DET). A time-dependent surrogate model to predict core subcooling during the recirculation phase of a hot leg LBLOCA in the OPR-1000 has been developed. The surrogate assumed the structure of a general discrete-time dynamic model and learned the nonlinear functional form by performing nonparametric regression on RELAP5 simulations with the ACE algorithm. The surrogate model input parameters represent mass and energy flux terms to the RCS that appeared as user-supplied or code-calculated boundary conditions in the RELAP5 model. The surrogate accurately predicted the TH behavior of the core for a variety of HPSI system performance and containment conditions when compared with RELAP5 simulations. The surrogate was applied in a DET application replacing
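The "general discrete-time dynamic model" structure described above, x_{t+1} = g(x_t, u_t), can be sketched with a stand-in regression; here ordinary polynomial least squares replaces the ACE algorithm, and a toy one-variable recursion replaces the RELAP5 training data (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulator" standing in for RELAP5: the next core temperature (K)
# depends nonlinearly on the current temperature and an injection flow.
def true_step(T, m_dot):
    return 0.95 * T + 40.0 - 25.0 * np.tanh(m_dot)

# Training data: random state/boundary-condition pairs and their outcomes.
T = rng.uniform(300.0, 600.0, 500)
m = rng.uniform(0.0, 2.0, 500)
y = true_step(T, m)

# Discrete-time surrogate x_{t+1} = g(x_t, u_t): g is a quadratic
# polynomial fitted by least squares (a stand-in for ACE regression).
X = np.column_stack([np.ones_like(T), T, m, T * m, T**2, m**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def surrogate_step(T, m_dot):
    x = np.array([1.0, T, m_dot, T * m_dot, T**2, m_dot**2])
    return float(x @ coef)

# Roll both models forward from the same state and compare.
Tt, Ts = 450.0, 450.0
for _ in range(10):
    Tt, Ts = true_step(Tt, 0.5), surrogate_step(Ts, 0.5)
print(abs(Tt - Ts))
```

Once fitted, the surrogate can be stepped through thousands of dynamic event tree branches at negligible cost compared with the full system code.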

  2. Acoustic Propagation Modeling for Marine Hydro-Kinetic Applications

    Science.gov (United States)

    Johnson, C. N.; Johnson, E.

    2014-12-01

    The combination of riverine, tidal, and wave energy has the potential to supply over one third of the United States' annual electricity demand. However, in order to deploy and test prototypes and commercial installations, marine hydrokinetic (MHK) devices must meet strict regulatory guidelines that determine the maximum amount of noise that can be generated and set particular thresholds for determining disturbance and injury caused by noise. An accurate model for predicting the propagation of sound from an MHK source in a real-life hydro-acoustic environment has been established. This model will help promote the growth and viability of marine, water, and hydrokinetic energy by confidently assuring that federal regulations are met and that harmful impacts to marine fish and wildlife are minimal. Paracousti, a finite-difference solution to the acoustic equations, was originally developed for sound propagation in atmospheric environments and has been successfully validated for a number of different geophysical activities. The three-dimensional numerical implementation is advantageous over other acoustic propagation techniques for MHK applications, where the domains of interest have complex 3D interactions from the seabed, banks, and other shallow-water effects. A number of different cases for hydro-acoustic environments have been validated against both analytical and numerical results from canonical and benchmark problems. This includes a variety of hydrodynamic and physical environments that may be present in a potential MHK application, including shallow and deep water, sloping, and canyon-type bottoms, with varying sound speed and density profiles. With the model successfully validated for hydro-acoustic environments, more complex and realistic MHK sources from turbines and/or arrays can be modeled.
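To illustrate the kind of finite-difference acoustic solver described above (Paracousti itself is three-dimensional and far more capable), a minimal 1-D staggered-grid pressure-velocity scheme might look like this; grid, medium, and source parameters are arbitrary:

```python
import numpy as np

# 1-D staggered-grid finite-difference scheme for the linear acoustic
# equations:  dp/dt = -rho*c^2 * dv/dx,   dv/dt = -(1/rho) * dp/dx
nx, nt = 400, 300
c, rho = 1500.0, 1000.0            # water-like sound speed and density
dx = 1.0
dt = 0.5 * dx / c                  # CFL number 0.5: stable time step

p = np.zeros(nx)                   # pressure at integer grid points
v = np.zeros(nx - 1)               # particle velocity at half points

for n in range(nt):
    v -= dt / (rho * dx) * (p[1:] - p[:-1])
    p[1:-1] -= dt * rho * c**2 / dx * (v[1:] - v[:-1])
    p[nx // 2] += np.exp(-((n - 60) / 15.0) ** 2)   # Gaussian source

# After (nt - 60) steps at CFL 0.5 the pulse center has moved about
# (nt - 60)/2 = 120 cells away from the source.
print(int(np.argmax(np.abs(p[nx // 2:]))))
```

Replacing the scalar c and rho with spatially varying arrays is what lets such schemes handle the sloping bottoms and layered sound-speed profiles mentioned in the abstract.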

  3. Quantitative Decomposition of Dynamics of Mathematical Cell Models: Method and Application to Ventricular Myocyte Models.

    Directory of Open Access Journals (Sweden)

    Takao Shimayoshi

    Mathematical cell models are effective tools to understand cellular physiological functions precisely. For detailed analysis of model dynamics in order to investigate how much each component affects cellular behaviour, mathematical approaches are essential. This article presents a numerical analysis technique, applicable to any complicated cell model formulated as a system of ordinary differential equations, to quantitatively evaluate the contributions of respective model components to the model dynamics in the intact situation. The technique employs a novel mathematical index for decomposed dynamics with respect to each differential variable, along with a concept named the instantaneous equilibrium point, which represents the trend of a model variable at some instant. This article also illustrates applications of the method to comprehensive myocardial cell models to gain insight into the mechanisms of action potential generation and the calcium transient. The analysis results exhibit the quantitative contributions of individual channel gating mechanisms and ion exchanger activities to membrane repolarization, and of calcium fluxes and buffers to the rise and decay of the cytosolic calcium level. These analyses quantitatively explicate the principles of the model, which leads to a better understanding of cellular dynamics.
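The decomposition idea, evaluating each component's contribution to a differential variable's rate of change separately, can be sketched on a toy two-current membrane equation; the currents and parameters below are invented for illustration and are not from any published myocyte model:

```python
# Toy membrane equation: dV/dt = -(1/C_m) * sum(I_x).
# Currents and parameters are invented for illustration only.
C_m = 1.0                           # membrane capacitance (uF/cm^2)

def currents(V):
    """Each ionic current (uA/cm^2), keyed by component name."""
    return {
        "I_Na":   0.5 * (V - 50.0),     # crude sodium-like current
        "I_K":    0.8 * (V + 85.0),     # crude potassium-like current
        "I_leak": 0.1 * (V + 60.0),
    }

def decomposed_dVdt(V):
    """Per-component contributions to dV/dt, plus their total."""
    comps = {name: -I / C_m for name, I in currents(V).items()}
    comps["total"] = sum(comps.values())
    return comps

for name, val in decomposed_dVdt(-40.0).items():
    print(f"{name:7s} {val:8.2f} mV/ms")
```

In a full myocyte model each contribution would additionally depend on gating variables, but the bookkeeping, one signed term per component summing to the total derivative, is the same.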

  4. Brookhaven Regional Energy Facility Siting Model (REFS): model development and application

    Energy Technology Data Exchange (ETDEWEB)

    Meier, P.; Hobbs, B.; Ketcham, G.; McCoy, M.; Stern, R.

    1979-06-01

    A siting methodology developed specifically to bridge the gap between regional-energy-system scenarios and environmental transport models is documented. Development of the model is described in Chapter 1. Chapter 2 describes the basic structure of such a model. Additional chapters on model development cover: generation, transmission, demand disaggregation, the interface to other models, computational aspects, the coal sector, water resource considerations, and air quality considerations. These subjects comprise Part I. Part II, Model Applications, covers: analysis of water resource constraints, water resource issues in the New York Power Pool, water resource issues in the New England Power Pool, water resource issues in the Pennsylvania-Jersey-Maryland Power Pool, and a summary of water resource constraint analysis. (MCW)

  5. Open Data in Mobile Applications, New Models for Service Information

    Directory of Open Access Journals (Sweden)

    Manuel GÉRTRUDIX BARRIO

    2016-06-01

    The combination of open data generated by government and the proliferation of mobile devices enables the creation of new information services and improved delivery of existing ones. Significantly, it allows citizens simple, quick, and effective access to information. Free applications that use open data provide useful information in real time, tailored to the user experience and/or geographic location. This changes the concept of "service information". Both the infomediary sector and citizens now have new models for the production and dissemination of this type of information. From the theoretical contextualization of aspects such as the datification of reality, the mobile recording of everyday experience, and the reinterpretation of service information, we analyze the role of open data in the Spanish public sector and its concrete application in building apps based on these data sets. The findings indicate that this is a phenomenon that will continue to grow, because these applications provide useful and efficient information for decision-making in everyday life.

  6. Hamiltonian realization of power system dynamic models and its applications

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Power system is a typical energy system. Because Hamiltonian approaches are closely related to the energy of the physical system, they have been widely researched in recent years. The realization of the Hamiltonian structure of a nonlinear dynamic system is the basis for the application of Hamiltonian methods. However, there have been no systematic investigations of the Hamiltonian realization of different power system dynamic models so far. This paper studies Hamiltonian realization in power system dynamics. Starting from the widely used power system dynamic models, the paper reveals the intrinsic Hamiltonian structure of nonlinear power system dynamics and also proposes approaches to formulate the power system Hamiltonian structure. Furthermore, this paper shows the application of the Hamiltonian structure of power system dynamics to the design of non-smooth controllers considering the nonlinear ceiling effects arising from real physical limits. The general procedure for designing controllers via the Hamiltonian structure is also summarized in the paper. Controller design based on the Hamiltonian structure is a completely nonlinear method, with no linearization during the controller design process. Thus, the nonlinear characteristics of the dynamic system are completely kept and fully utilized.
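A textbook example of the kind of Hamiltonian realization discussed above is the single-machine-infinite-bus swing equation written as a dissipative Hamiltonian system; this is a standard illustration, not the specific structure derived in the paper, and the parameters are arbitrary:

```python
import math

# Single-machine-infinite-bus swing equation as a dissipative Hamiltonian
# system: H(delta, p) = p^2/(2M) - Pm*delta - Pmax*cos(delta), with
# flow  d(delta)/dt = dH/dp,  dp/dt = -dH/d(delta) - (D/M)*p.
M, D, Pm, Pmax = 10.0, 1.0, 0.8, 1.2     # illustrative parameters

def H(delta, p):
    return p**2 / (2.0 * M) - Pm * delta - Pmax * math.cos(delta)

def step(delta, p, dt=1e-3):
    """Semi-implicit (symplectic) Euler step with damping."""
    p += dt * (Pm - Pmax * math.sin(delta) - (D / M) * p)
    delta += dt * p / M
    return delta, p

delta, p = 0.0, 0.0
for _ in range(200000):                  # 200 s of simulated time
    delta, p = step(delta, p)

# The damped flow settles at the stable equilibrium sin(delta*) = Pm/Pmax
print(delta, math.asin(Pm / Pmax))
```

The energy function H is exactly the kind of structure that Hamiltonian (energy-based) controller design exploits: injecting damping or reshaping H moves the closed-loop equilibrium without any linearization.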

  7. The GCE SYGMA (Stellar Yields for Galaxy Modeling Applications) module

    International Nuclear Information System (INIS)

    The Stellar Yields for Galactic Modelling Applications (SYGMA) module combines the NuGrid yields and other stellar feedback in a single Python, Fortran, or web-accessible framework. The module provides the time evolution of the abundances of all the chemical elements of 'star particles' that represent single stellar populations (SSPs). It delivers the AGB, SN Ia, and massive star contributions of material returned by the SSP after a star-formation burst. Various (including user-supplied) options for standard parameters of chemical evolution, such as the IMF, the SN Ia delay-time distribution, and the SFR, are available. An example application of the module would be to model the baryonic feedback of cosmological structure formation simulations. The module can also be used to describe galactic chemical evolution in the simple single-box approximation. Furthermore, we offer a light version of SYGMA as a vehicle to explore the large NuGrid datasets with an online interface. This allows the community to visualize and perform calculations with sets of data in an interactive Python environment. (author)

  8. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; students at an advanced undergraduate, master’s level in information systems, business administration, or the application of computer science.  

  9. Real-time application of the drag based model

    Science.gov (United States)

    Žic, Tomislav; Temmer, Manuela; Vršnak, Bojan

    2016-04-01

    The drag-based model (DBM) is an analytical model usually used for calculating the kinematics of coronal mass ejections (CMEs) in interplanetary space, and for predicting CME arrival times and impact speeds at arbitrary targets in the heliosphere. The main assumption of the model is that beyond a distance of about 20 solar radii from the Sun, drag is the dominant force in interplanetary space. The previous version of the DBM relied on the rough assumption of averaged, unperturbed, and constant environmental conditions, as well as constant CME properties, throughout the entire interplanetary CME propagation. Our continuing work enhances the model into a form which uses a time-dependent and perturbed environment, without constraints on CME properties or forecasting distance. The extension provides the possibility of application in various scenarios, such as automatic least-squares fitting on initial CME kinematic data suitable for real-time forecasting of CME kinematics, or embedding the DBM into pre-calculated interplanetary ambient conditions provided by advanced numerical simulations (for example, the ENLIL and EUHFORIA codes). A demonstration of the enhanced DBM is available on the web-site: http://www.geof.unizg.hr/~tzic/dbm.html. We acknowledge the support of the European Social Fund under the "PoKRet" project.
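The basic (constant-environment) DBM described above has a well-known closed-form solution for the drag equation a = -gamma*(v - w)*|v - w|; a sketch of that solution with illustrative input values:

```python
import math

# Closed-form DBM kinematics for a = -gamma*(v - w)*|v - w|:
#   v(t) = w + (v0 - w)/(1 + u),  r(t) = r0 + w*t + s*ln(1 + u)/gamma,
#   where u = s*gamma*(v0 - w)*t and s = sign(v0 - w).
def dbm(t, r0, v0, gamma, w):
    s = 1.0 if v0 >= w else -1.0
    u = s * gamma * (v0 - w) * t
    v = w + (v0 - w) / (1.0 + u)
    r = r0 + w * t + s * math.log(1.0 + u) / gamma
    return r, v

AU = 1.496e8                      # km
r0 = 20.0 * 6.957e5               # start at 20 solar radii (km)
gamma = 0.2e-7                    # drag parameter (1/km), typical magnitude
v0, w = 1000.0, 400.0             # example CME and solar-wind speeds (km/s)

t = 0.0
while dbm(t, r0, v0, gamma, w)[0] < AU:   # march until 1 AU is reached
    t += 3600.0
r, v = dbm(t, r0, v0, gamma, w)
print(f"arrival after ~{t / 3600:.0f} h at ~{v:.0f} km/s")
```

The time-dependent extension described in the abstract effectively re-evaluates gamma and w along the trajectory instead of holding them fixed as this sketch does.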

  10. Global Modeling of CO2 Discharges with Aerospace Applications

    Directory of Open Access Journals (Sweden)

    Chloe Berenguer

    2014-01-01

    We developed a global model aiming to study discharges in CO2 under various conditions, pertaining to a large spectrum of pressure, absorbed energy, and feeding values. Various physical conditions and form factors have been investigated. The model was applied to a case of radiofrequency discharge and to helicon-type devices functioning in low and high feed conditions. In general, the main charged species were found to be CO2+ for sufficiently low pressure cases and O− for higher pressure ones, followed by CO2+, CO+, and O2+ in the latter case. The dominant reaction is dissociation of CO2 resulting in CO production. Electronegativity, important for radiofrequency discharges, increases with pressure, reaching up to 3 at high flow rates for an absorbed power of 250 W, and diminishes with increasing absorbed power. Model results pertaining to radiofrequency-type plasma discharges are found in satisfactory agreement with those available from an existing experiment. Application to low and high flow rate feeding cases of a helicon thruster allowed for evaluation of thruster functioning conditions pertaining to absorbed powers from 50 W to 1.8 kW. The model allows for a detailed evaluation of the potential of CO2 to be used as a propellant in electric propulsion devices.

  11. An application of artificial intelligence for rainfall–runoff modeling

    Indian Academy of Sciences (India)

    Ali Aytek; M Asce; Murat Alp

    2008-04-01

    This study proposes an application of two artificial intelligence (AI) techniques for rainfall–runoff modeling: artificial neural networks (ANN) and evolutionary computation (EC). Two different ANN techniques, the feed forward back propagation (FFBP) and generalized regression neural network (GRNN) methods, are compared with one EC method, Gene Expression Programming (GEP), a new evolutionary algorithm that evolves computer programs. The daily hydrometeorological data of three rainfall stations and one streamflow station for the Juniata River Basin in the state of Pennsylvania, USA are taken into consideration in the model development. Statistical parameters such as average, standard deviation, coefficient of variation, skewness, minimum and maximum values, as well as criteria such as mean square error (MSE) and the coefficient of determination (R²), are used to measure the performance of the models. The results indicate that the proposed genetic programming (GP) formulation performs quite well compared to results obtained by ANNs and is quite practical for use. It is concluded from the results that GEP can be proposed as an alternative to ANN models.
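The two performance criteria named in the abstract, MSE and the coefficient of determination R², are simple to compute; a sketch on invented observed/simulated runoff values:

```python
def mse(obs, sim):
    """Mean square error between observed and simulated series."""
    return sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)

def r_squared(obs, sim):
    """Coefficient of determination R^2 = 1 - SS_res/SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [12.0, 30.0, 25.0, 8.0, 41.0]   # invented daily streamflow (m^3/s)
sim = [10.5, 28.0, 27.5, 9.0, 38.5]   # invented model output
print(mse(obs, sim), r_squared(obs, sim))
```

R² near 1 means the model explains most of the observed variance; comparing competing models (FFBP, GRNN, GEP) on identical held-out data with both criteria is the evaluation pattern the study follows.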

  12. Modeling multibody systems with uncertainties. Part II: Numerical applications

    International Nuclear Information System (INIS)

    This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects'. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties.
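The non-intrusive (collocation) flavor of polynomial chaos mentioned above can be sketched in one dimension: expand a toy response y = f(xi), xi ~ N(0,1), in probabilists' Hermite polynomials fitted by least squares, then read the mean and variance off the coefficients (the response function is invented for illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def hermite_basis(x, degree=4):
    """Columns He_0..He_degree of the probabilists' Hermite polynomials."""
    He = [np.ones_like(x), x]
    for n in range(1, degree):
        He.append(x * He[n] - n * He[n - 1])   # He_{n+1} = x*He_n - n*He_{n-1}
    return np.column_stack(He)

f = lambda xi: np.exp(0.3 * xi)                # toy stochastic response

xi = rng.standard_normal(4000)                 # collocation samples
c, *_ = np.linalg.lstsq(hermite_basis(xi), f(xi), rcond=None)

# Orthogonality E[He_m * He_n] = n! * delta_mn gives statistics directly:
pce_mean = c[0]
pce_var = sum(c[n] ** 2 * math.factorial(n) for n in range(1, len(c)))

# Analytic values for this f are mean = exp(0.045) ~ 1.046 and
# variance = exp(0.09)*(exp(0.09) - 1) ~ 0.103.
print(pce_mean, pce_var)
```

The efficiency claim in the abstract comes from this structure: a handful of coefficients replaces thousands of Monte Carlo runs once the expansion converges.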

  13. Applicative limitations of sediment transport on predictive modeling in geomorphology

    Institute of Scientific and Technical Information of China (English)

    WEI Xiang; LI Zhanbin

    2004-01-01

    Sources of uncertainty or error that arise in attempting to scale up the results of laboratory-scale sediment transport studies for predictive modeling of geomorphic systems include: (i) model imperfection, (ii) omission of important processes, (iii) lack of knowledge of initial conditions, (iv) sensitivity to initial conditions, (v) unresolved heterogeneity, (vi) occurrence of external forcing, and (vii) inapplicability of the factor of safety concept. Sources of uncertainty that are unimportant or that can be controlled at small scales and over short time scales become important in large-scale applications and over long time scales. Control and repeatability, hallmarks of laboratory-scale experiments, are usually lacking at the large scales characteristic of geomorphology. Heterogeneity is an important concomitant of size, and tends to make large systems unique. Uniqueness implies that prediction cannot be based upon first-principles quantitative modeling alone, but must be a function of system history as well. Periodic data collection, feedback, and model updating are essential where site-specific prediction is required.

  14. Conceptual study of the application software manager using the Xlet model in the nuclear fields

    International Nuclear Information System (INIS)

    In order to reduce the cost of software maintenance, including software modification, we propose an object-oriented program, written in the Java language, that checks the version of the application program, together with a technique for executing a downloaded application program over the network using an application manager. In order to replace the traditional scheduler with the application manager, we have adopted the Xlet concept in the nuclear field using the network. Usually, an Xlet is a Java application that runs on a digital television receiver. The Java TV Application Programming Interface (API) defines an application model called the Xlet application lifecycle; Java applications that use this lifecycle model are called Xlets. The Xlet application lifecycle is compatible with the existing application environment and virtual machine technology. The Xlet application lifecycle model defines the dialog (protocol) between an Xlet and its environment.
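The lifecycle dialog can be sketched as a small state machine (shown in Python purely for illustration; the actual Java TV interface is javax.tv.xlet.Xlet with initXlet/startXlet/pauseXlet/destroyXlet): the application manager drives the Xlet through the Loaded, Paused, Active and Destroyed states.

```python
# Illustrative sketch, not the Java TV API itself: a minimal application
# manager driving the Xlet lifecycle Loaded -> Paused -> Active, with
# Destroyed reachable from any state.
class SketchXlet:
    def __init__(self):
        self.state = "loaded"

    def init_xlet(self):     # manager calls once after download
        assert self.state == "loaded"
        self.state = "paused"

    def start_xlet(self):    # paused -> active
        assert self.state == "paused"
        self.state = "active"

    def pause_xlet(self):    # active -> paused (resources released)
        assert self.state == "active"
        self.state = "paused"

    def destroy_xlet(self):  # any state -> destroyed
        self.state = "destroyed"

app = SketchXlet()
for step in (app.init_xlet, app.start_xlet, app.pause_xlet,
             app.start_xlet, app.destroy_xlet):
    step()
print(app.state)  # -> destroyed
```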

  15. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2016-02-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem by means of viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, coupling procedures, model evaluation, and the performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  16. A priori discretization quality metrics for distributed hydrologic modeling applications

    Science.gov (United States)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    modification. The metrics provide, for the first time, a quantification of the routing-relevant information loss due to discretization, based on the relationship between in-channel routing length and flow velocity. Moreover, they identify and count the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes upon the input data and accumulating variable changes in an area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model whose grid scales are greater than the input data resolution. The discretization metrics and decision-making approach are applied to the Grand River watershed located in southwestern Ontario, Canada, where discretization decisions are required for a semi-distributed modelling application. Results show that discretization-induced information loss monotonically increases as the discretization gets coarser. With regard to routing information loss in subbasin discretization, multiple points of interest rather than just the watershed outlet should be considered. Moreover, subbasin and HRU discretization decisions should not be considered independently, since the subbasin input significantly influences the complexity of the HRU discretization result. Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity under the same model complexity (number of computational units).

  17. Soil erosion by water - model concepts and application

    Science.gov (United States)

    Schmidt, Juergen

    2010-05-01

    approaches will be discussed taking account of the models WEPP, EUROSEM, LISEM and EROSION 3D. In order to provide a better representation of spatially heterogeneous catchments in terms of land use, soil, slope, and rainfall, most recently developed models operate on a grid-cell basis or other kinds of sub-units, each having uniform characteristics. These so-called "distributed models" accept inputs from raster-based geographic information systems (GIS). The cell-based structure of the models also allows drainage paths to be generated, by which water and sediment can be routed from the top to the bottom of the respective watershed. One of the open problems in soil erosion modelling concerns the spontaneous generation of erosion rills without the need for pre-existing morphological contours. A promising approach to handling this problem was first realized in the RILLGROW model, which uses a cellular automaton system to generate realistic rill patterns. With respect to the above-mentioned models, selected applications will be presented and discussed regarding their usability for soil and water conservation purposes.

  18. Modelling a New Product Model on the Basis of an Existing STEP Application Protocol

    Directory of Open Access Journals (Sweden)

    B.-R. Hoehn

    2005-01-01

    Full Text Available During the last years a great range of computer-aided tools has been generated to support the development process of various products. The goal of a continuous data flow, needed for high efficiency, requires powerful standards for data exchange. At the FZG (Gear Research Centre of the Technical University of Munich) there was a need for a common gear data format for data exchange between gear calculation programs. The STEP standard ISO 10303 was developed for this type of purpose, but a suitable definition of gear data was still missing, even in the Application Protocol AP 214, developed for the design process in the automotive industry. The creation of a new STEP Application Protocol or the extension of an existing protocol would be a very time-consuming normative process. So a new method was introduced by FZG. Some very general definitions of an Application Protocol (here AP 214) were used to determine rules for an exact specification of the required kind of data. In this case a product model for gear units was defined based on elements of the AP 214. Therefore no change of the Application Protocol is necessary. Meanwhile the product model for gear units has been published as a VDMA paper and successfully introduced for data exchange within the German gear industry associated with the FVA (German Research Organisation for Gears and Transmissions). This method can also be adopted for other applications not yet sufficiently defined by STEP.

  19. GSTARS computer models and their applications, part I: theoretical development

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3.

  20. GSTARS computer models and their applications, part I: theoretical development

    Science.gov (United States)

    Yang, C.T.; Simoes, F.J.M.

    2008-01-01

    GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  1. Using the object modeling system for hydrological model development and application

    Directory of Open Access Journals (Sweden)

    S. Kralisch

    2005-01-01

    Full Text Available State-of-the-art challenges in the sustainable management of water resources have created demand for integrated, flexible and easy-to-use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus cannot be easily adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS GPSRU (Fort Collins, CO, USA), the USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows the implementation of single process components to be compiled and applied as custom-tailored model assemblies. This paper describes the basic principles of the OMS and its main components and explains in more detail how problems during the coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example, the implementation of the hydrological model J2000 is discussed.

  2. A review of modeling applications using ROMS model and COAWST system in the Adriatic sea region

    CERN Document Server

    Carniel, Sandro

    2013-01-01

    From its first implementation in a purely hydrodynamic configuration, to the latest configuration under the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) system, several specific modelling applications of the Regional Ocean Modelling System (ROMS, www.myroms.org) have been put forward within the Adriatic Sea (Italy) region. Now covering a wide range of spatial and temporal scales, they have developed in a growing number of fields supporting Integrated Coastal Zone Management (ICZM) and Marine Spatial Planning (MSP) activities in this semi-enclosed sea of paramount importance, which includes the Gulf of Venice. Presently, one ROMS operational implementation provides daily 3-day forecasts of hydrodynamics and sea level, while a second models the most relevant biogeochemical properties, and a third (two-way coupled with the Simulating Waves Nearshore (SWAN) model) deals with extreme wave forecasts. Such operational models provide support to civil and environmental protection activities (e.g., driving su...

  3. Modeling of wildlife-associated zoonoses: applications and caveats.

    Science.gov (United States)

    Alexander, Kathleen A; Lewis, Bryan L; Marathe, Madhav; Eubank, Stephen; Blackburn, Jason K

    2012-12-01

    Wildlife species are identified as an important source of emerging zoonotic disease. Accordingly, public health programs have attempted to expand in scope to include a greater focus on wildlife and its role in zoonotic disease outbreaks. Zoonotic disease transmission dynamics involving wildlife are complex and nonlinear, presenting a number of challenges. First, empirical characterizations of wildlife host species and pathogen systems are often lacking, and insight into one system may have little application to another involving the same host species and pathogen. Pathogen transmission characterization is difficult due to the changing nature of population size and density associated with wildlife hosts. Infectious disease itself may influence wildlife population demographics through compensatory responses that may evolve, such as decreased age to reproduction. Furthermore, wildlife reservoir dynamics can be complex, involving various host species and populations that may vary in their contribution to pathogen transmission and persistence over space and time. Mathematical models can provide an important tool to engage these complex systems, and there is an urgent need for increased computational focus on the coupled dynamics that underlie pathogen spillover at the human-wildlife interface. Often, however, scientists conducting empirical studies on emerging zoonotic disease do not have the necessary skill base to choose, develop, and apply models to evaluate these complex systems. How do modeling frameworks differ and what considerations are important when applying modeling tools to the study of zoonotic disease? Using zoonotic disease examples, we provide an overview of several common approaches and general considerations important in the modeling of wildlife-associated zoonoses. PMID:23199265
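A minimal compartmental sketch of the kind of framework such a review surveys (not the authors' own model; all rates below are invented): SIR dynamics in a wildlife reservoir, with a human spillover hazard proportional to wildlife prevalence.

```python
# Illustrative SIR sketch of wildlife reservoir dynamics with spillover.
# beta: transmission rate, gamma: recovery rate (R0 = beta/gamma = 3),
# spill: assumed per-day spillover hazard per unit wildlife prevalence.
def simulate(beta=0.3, gamma=0.1, spill=0.005, days=365, dt=1.0):
    S, I, R = 0.99, 0.01, 0.0          # wildlife population fractions
    human_cases = 0.0                  # cumulative spillover index
    for _ in range(int(days / dt)):    # forward-Euler integration
        new_inf = beta * S * I * dt
        rec = gamma * I * dt
        human_cases += spill * I * dt  # spillover risk tracks prevalence
        S, I, R = S - new_inf, I + new_inf - rec, R + rec
    return I, R, human_cases

I, R, spillover = simulate()
print(f"final wildlife prevalence={I:.3f}, "
      f"cumulative spillover index={spillover:.4f}")
```

With R0 = 3 the wildlife epidemic burns through most of the reservoir within the year, and the accumulated spillover index is dominated by the epidemic peak, illustrating why host population dynamics matter for human risk.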

  4. Testing simulation and structural models with applications to energy demand

    Science.gov (United States)

    Wolff, Hendrik

    2007-12-01

    This dissertation deals with energy demand and consists of two parts. Part one proposes a unified econometric framework for modeling energy demand, and examples illustrate the benefits of the technique by estimating the elasticity of substitution between energy and capital. Part two assesses the energy conservation policy of Daylight Saving Time and empirically tests the performance of electricity simulation. In particular, the chapter "Imposing Monotonicity and Curvature on Flexible Functional Forms" proposes an estimator for inference using structural models derived from economic theory. This is motivated by the fact that in many areas of economic analysis theory restricts the shape as well as other characteristics of functions used to represent economic constructs. Specific contributions are (a) to increase the computational speed and tractability of imposing regularity conditions, (b) to provide regularity-preserving point estimates, (c) to avoid biases present in previous applications, and (d) to illustrate the benefits of our approach via numerical simulation results. The chapter "Can We Close the Gap between the Empirical Model and Economic Theory" discusses the more fundamental question of whether the imposition of a particular theory on a dataset is justified. I propose a hypothesis test to examine whether the estimated empirical model is consistent with the assumed economic theory. Although the proposed methodology could be applied to a wide set of economic models, this is particularly relevant for estimating policy parameters that affect energy markets. This is demonstrated by estimating the Slutsky matrix and the elasticity of substitution between energy and capital, which are crucial parameters used in computable general equilibrium models analyzing energy demand and the impacts of environmental regulations. Using the Berndt and Wood dataset, I find that capital and energy are complements and that the data are significantly consistent with duality.

  5. Inverse Problems in Complex Models and Applications to Earth Sciences

    Science.gov (United States)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). Probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized, assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At the local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At the regional scale, joint inversion of gravity and magnetic data is applied.

  6. Numerical modeling of magnetic moments for UXO applications

    Science.gov (United States)

    Sanchez, V.; Li, Y.; Nabighian, M.; Wright, D.

    2006-01-01

    The surface magnetic anomaly observed in UXO clearance is mainly dipolar and, consequently, the dipole is the only magnetic moment regularly recovered in UXO applications. The dipole moment contains information about intensity of magnetization but lacks information about shape. In contrast, higher-order moments, such as quadrupole and octupole, encode asymmetry properties of the magnetization distribution within the buried targets. In order to improve our understanding of magnetization distribution within UXO and non-UXO objects and its potential utility in UXO clearance, we present a 3D numerical modeling study for highly susceptible metallic objects. The basis for the modeling is the solution of a nonlinear integral equation describing magnetization within isolated objects. A solution for magnetization distribution then allows us to compute magnetic moments of the object, analyze their relationships, and provide a depiction of the surface anomaly produced by different moments within the object. Our modeling results show significant high-order moments for more asymmetric objects situated at depths typical of UXO burial, and suggest that the increased relative contribution to magnetic gradient data from these higher-order moments may provide a practical tool for improved UXO discrimination.
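The dipolar character of the surface anomaly can be sketched directly from the point-dipole field formula, B = (μ0/4π)(3(m·r̂)r̂ − m)/r³; the moment value and burial geometry below are hypothetical, chosen purely for illustration.

```python
import math

MU0_4PI = 1e-7  # mu0 / (4*pi), in T·m/A

def dipole_field(m, r):
    """Magnetic field (tesla) at displacement r from a point dipole m (A·m^2)."""
    rx, ry, rz = r
    rr = math.sqrt(rx * rx + ry * ry + rz * rz)
    rhat = (rx / rr, ry / rr, rz / rr)
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    return tuple(MU0_4PI * (3 * mdotr * ri - mi) / rr**3
                 for mi, ri in zip(m, rhat))

# Vertically magnetized target (assumed moment) buried 1 m below a
# surface profile; sample the vertical anomaly component along x.
m = (0.0, 0.0, 0.05)  # A·m^2, hypothetical dipole moment
for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    bz = dipole_field(m, (x, 0.0, 1.0))[2]
    print(f"x={x:+.1f} m  Bz={bz * 1e9:+.2f} nT")
```

The profile peaks directly above the target and changes sign off to the sides, the characteristic dipolar signature; quadrupole and octupole terms would add asymmetric corrections that decay faster with distance.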

  7. Application of modeling to local chemistry in PWR steam generators

    International Nuclear Information System (INIS)

    Localized corrosion of the SG tubes and other components is due to the presence of an aggressive environment in local crevices and occluded regions. In crevices and on vertical and horizontal tube surfaces, corrosion products and particulate matter can accumulate in the form of porous deposits. The SG water contains impurities at extremely low levels (ppb). Low levels of non-volatile impurities, however, can be efficiently concentrated in crevices and sludge piles by a thermal hydraulic mechanism. The temperature gradient across the SG tube, coupled with local flow starvation, produces local boiling in the sludge and crevices. Since mass transfer processes are inhibited in these geometries, the residual liquid becomes enriched in many of the species present in the SG water. The resulting concentrated solutions have been shown to be aggressive and can corrode the SG materials. This corrosion may occur under various conditions which result in different types of attack such as pitting, stress corrosion cracking, wastage and denting. A major goal of EPRI's research program has been the development of models of the concentration process and the resulting chemistry. An improved understanding should eventually allow utilities to reduce or eliminate the corrosion by the appropriate manipulation of the steam generator water chemistry and/or crevice conditions. The application of these models to experimental data obtained for prototypical SG tube support crevices is described in this paper. The models adequately describe the key features of the experimental data, allowing extrapolations to be made to plant conditions. (author)

  8. Modelling of dielectric polymers for energy scavenging applications

    International Nuclear Information System (INIS)

    An increasing number of scavenging applications use dielectric polymers: for instance, on the heel of a shoe, behind the knee, on a navy buoy, etc. This emerging technology has the potential to be an alternative to traditional, well-known solutions using piezoelectricity or electromagnetism. Indeed, dielectric polymers are suitable for creating flexible and innovative structures working in a quasi-static range. Nevertheless, current analytical models of dielectric polymers in generator mode are too simple and not sufficiently predictive. This paper reports a more reliable method for modelling dielectric generators. This method is a tool for designing any plane structure. It can be used to calculate performance or to optimize a given structure. Moreover, it is modular and can be adapted to any kind of dielectric material and any plane structure. The method is illustrated on a biaxial plane generator comprising 3M's VHB 4910 polymer and conductive silver grease electrodes. Experimental data are provided to validate the analytical model and thus the whole method.

  9. Application of Molecular Modeling to Urokinase Inhibitors Development

    Directory of Open Access Journals (Sweden)

    V. B. Sulimov

    2014-01-01

    Full Text Available Urokinase-type plasminogen activator (uPA) plays an important role in the regulation of diverse physiologic and pathologic processes. Experimental research has shown that elevated uPA expression is associated with cancer progression, metastasis, and shortened survival in patients, whereas suppression of the proteolytic activity of uPA leads to an evident decrease in metastasis. Therefore, uPA has been considered a promising molecular target for the development of anticancer drugs. The present study sets out to develop new selective uPA inhibitors using computer-aided structure-based drug design methods. The investigation involves the following stages: computer modeling of the protein active site; development and validation of computer molecular modeling methods: docking (SOL program), postprocessing (DISCORE program), direct generalized docking (FLM program), and the application of quantum chemical calculations (MOPAC package); search for new uPA inhibitors among molecules from databases of ready-made compounds; and design of new chemical structures, their optimization, and experimental examination. On the basis of known uPA inhibitors and modeling results, 18 new compounds have been designed, calculated using the programs mentioned above, synthesized, and tested in vitro. Eight of them display inhibitory activity and two of them display activity of about 10 μM.

  10. Application of WEAP Simulation Model to Hengshui City Water Planning

    Institute of Scientific and Technical Information of China (English)

    OJEKUNLE Z O; ZHAO Lin; LI Manzhou; YANG Zhen; TAN Xin

    2007-01-01

    Like many river basins in China, water resources in the Fudong Pai River are almost fully allocated. This paper seeks to assess and evaluate water resource problems using water evaluation and planning (WEAP) model via its application to Hengshui Basin of Fudong Pai River. This model allows the simulation and analysis of various water allocation scenarios and, above all, scenarios of users' behavior. Water demand management is one of the options discussed in detail. Simulations are proposed for diverse climatic situations from dry years to normal years and results are discussed. Within the limits of data availability, it appears that most water users are not able to meet all their requirements from the river, and that even the ecological reserve will not be fully met during certain years. But the adoption of water demand management procedures offers opportunities for remedying this situation during normal hydrological years. However, it appears that demand management alone will not suffice during dry years. Nevertheless, the ease of use of the model and its user-friendly interfaces make it particularly useful for discussions and dialogue on water resources management among stakeholders.

  11. Simulation Modeling in Plant Breeding: Principles and Applications

    Institute of Scientific and Technical Information of China (English)

    WANG Jian-kang; Wolfgang H Pfeiffer

    2007-01-01

    Conventional plant breeding largely depends on phenotypic selection and the breeder's experience; therefore the breeding efficiency is low and the predictions are inaccurate. Along with the fast development in molecular biology and biotechnology, a large amount of biological data is available for genetic studies of important breeding traits in plants, which in turn allows genotypic selection to be conducted in the breeding process. However, gene information has not been effectively used in crop improvement because of the lack of appropriate tools. The simulation approach can utilize the vast and diverse genetic information, predict cross performance, and compare different selection methods. Thus, the best-performing crosses and effective breeding strategies can be identified. QuLine is a computer tool capable of defining a range of genetic models, from simple to complex, and simulating breeding processes for developing final advanced lines. On the basis of the results from simulation experiments, breeders can optimize their breeding methodology and greatly improve breeding efficiency. In this article, the underlying principles of simulation modeling in crop enhancement are first introduced, following which several applications of QuLine are summarized: comparing different selection strategies, precision parental selection using known gene information, and the design approach in breeding. Breeding simulation allows the definition of complicated genetic models consisting of multiple alleles, pleiotropy, epistasis, and gene-by-environment interaction, and provides a useful tool for breeders to efficiently use the wide spectrum of genetic data and information available.
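A toy simulation in the spirit described above (a deliberately simplified additive model, not QuLine's actual engine; all numbers are invented) shows how simulation can compare selection strategies: with noisy phenotypes, selecting on known genotypic values yields a larger genetic gain than phenotypic selection.

```python
import random

# Additive model: 20 loci, genotypic value = sum of allele dosages,
# phenotype = genotypic value + environmental noise (heritability < 1).
random.seed(1)
N_LOCI, POP, KEEP = 20, 500, 50

def individual():
    return [random.randint(0, 2) for _ in range(N_LOCI)]  # allele dosages

def gvalue(ind):                 # true genotypic value
    return sum(ind)

def pheno(ind):                  # observed phenotype with noise
    return gvalue(ind) + random.gauss(0.0, 6.0)

pop = [individual() for _ in range(POP)]
base_mean = sum(gvalue(ind) for ind in pop) / POP

# Select the top 10% by phenotype vs by (marker-informed) genotype.
by_pheno = sorted(pop, key=pheno, reverse=True)[:KEEP]
by_geno = sorted(pop, key=gvalue, reverse=True)[:KEEP]

gain_p = sum(gvalue(ind) for ind in by_pheno) / KEEP - base_mean
gain_g = sum(gvalue(ind) for ind in by_geno) / KEEP - base_mean
print(f"genetic gain: phenotypic={gain_p:.2f}, genotypic={gain_g:.2f}")
```

Because the phenotype is a noisy proxy for breeding value, the genotypic-selection gain exceeds the phenotypic-selection gain, the kind of comparison a breeding simulator lets one quantify before committing field resources.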

  12. Predictive modeling of addiction lapses in a mobile health application.

    Science.gov (United States)

    Chih, Ming-Yuan; Patton, Timothy; McTavish, Fiona M; Isham, Andrew J; Judkins-Fisher, Chris L; Atwood, Amy K; Gustafson, David H

    2014-01-01

    The chronically relapsing nature of alcoholism leads to substantial personal, family, and societal costs. The Addiction-Comprehensive Health Enhancement Support System (A-CHESS) is a smartphone application that aims to reduce relapse. To offer targeted support to patients who are at risk of lapses within the coming week, a Bayesian network model to predict such events was constructed using responses on 2,934 weekly surveys (called the Weekly Check-in) from 152 alcohol-dependent individuals who recently completed residential treatment. The Weekly Check-in is a self-monitoring service, provided in A-CHESS, to track patients' recovery progress. The model showed good predictability, with an area under the receiver operating characteristic curve of 0.829 in the 10-fold cross-validation and 0.912 in the external validation. The sensitivity/specificity table assists the tradeoff decisions necessary to apply the model in practice. This study moves us closer to the goal of providing lapse prediction so that patients might receive more targeted and timely support. PMID:24035143
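The reported evaluation quantities can be illustrated with a small sketch (the scores and labels below are invented, not A-CHESS data): AUC computed via the pairwise rank identity, plus the sensitivity and specificity obtained at one decision threshold.

```python
# AUC equals the probability that a randomly chosen positive case is
# scored above a randomly chosen negative case (ties count half).
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted lapse risks and observed outcomes (1 = lapse).
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,    1,   0,    0,   0,   0]

# Sensitivity/specificity at one operating point on the ROC curve.
threshold = 0.5
pred = [int(s >= threshold) for s in scores]
tp = sum(p and y for p, y in zip(pred, labels))
tn = sum((not p) and (not y) for p, y in zip(pred, labels))
sens = tp / sum(labels)
spec = tn / (len(labels) - sum(labels))
print(f"AUC={auc(scores, labels):.3f} "
      f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

Lowering the threshold raises sensitivity (fewer missed lapses) at the cost of specificity (more false alarms), which is exactly the tradeoff the paper's sensitivity/specificity table supports.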

  13. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  14. Selected developments and applications of Leontief models in industrial ecology

    International Nuclear Information System (INIS)

    extended for this study through the application of multi-objective optimization techniques and is used to explore efficient trade-offs between reducing CO2 emissions and increasing global factor costs. Concluding Remarks: It has been the scope of this work to contribute to mapping the interdisciplinary landscape between input-output analysis and industrial ecology. The first three papers enter this landscape from the industrial ecology side, more specifically from the Life Cycle Assessment platform, and the two latter from the input-output paradigm. The fundamental lesson obtained is that the linear section of this landscape is described by Leontief models. Life Cycle Assessment, Mass Flow Analysis, Substance Flow Analysis and related methods can all be represented in the mathematical form proposed by Leontief. The input-output framework offers a well-developed set of methodologies that can bridge the various sub-fields of industrial ecology addressing questions related to inter-process flows. It seems that an acknowledgement of Leontief models as the base framework for the family of linear models in industrial ecology would be beneficial. Following the acknowledgement of Leontief's work comes that of Dantzig and the development of linear programming. In investigating alternative arrangements of production and combinations of technologies to produce a given good, the common practice in LCA has been total enumeration of all scenarios. This may be feasible, and for that matter desirable, for a limited number of combinations. However, as the complexity and number of alternatives increase, this will not be feasible. Dantzig invented linear programming to address exactly this type of problem. The scientific foundation provided by Leontief and Dantzig has been crucial to the work in this thesis. It is my belief that the impact of their legacy on industrial ecology will increase further in the years to come. (Author)
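
The Leontief quantity model referred to above can be sketched in a few lines: with technical-coefficient matrix A and final demand d, total output satisfies x = Ax + d, i.e. x = (I - A)^(-1) d. The two-sector numbers below are illustrative only, not taken from the thesis.

```python
def leontief_output(A, d):
    """Solve (I - A) x = d for a 2x2 coefficient matrix by direct inversion."""
    a, b = 1.0 - A[0][0], -A[0][1]
    c, e = -A[1][0], 1.0 - A[1][1]
    det = a * e - b * c
    return [(e * d[0] - b * d[1]) / det,
            (a * d[1] - c * d[0]) / det]

A = [[0.2, 0.3],   # inter-industry requirements per unit of output (toy numbers)
     [0.4, 0.1]]
d = [10.0, 5.0]    # final demand
x = leontief_output(A, d)

# The balance x = Ax + d must hold for the computed total output.
assert all(abs(x[i] - (sum(A[i][j] * x[j] for j in range(2)) + d[i])) < 1e-9
           for i in range(2))
```

The same linear structure underlies LCA and material-flow accounting once process requirements are written as the coefficient matrix A.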

  15. Using random forest to model the domain applicability of another random forest model.

    Science.gov (United States)

    Sheridan, Robert P

    2013-11-25

    In QSAR, a statistical model is generated from a training set of molecules (represented by chemical descriptors) and their biological activities. We will call this traditional type of QSAR model an "activity model". The activity model can be used to predict the activities of molecules not in the training set. A relatively new subfield for QSAR is domain applicability. The aim is to estimate the reliability of prediction of a specific molecule on a specific activity model. A number of different metrics have been proposed in the literature for this purpose. It is desirable to build a quantitative model of reliability against one or more of these metrics. We can call this an "error model". A previous publication from our laboratory (Sheridan J. Chem. Inf. Model., 2012, 52, 814-823.) suggested the simultaneous use of three metrics would be more discriminating than any one metric. An error model could be built in the form of a three-dimensional set of bins. When the number of metrics exceeds three, however, the bin paradigm is not practical. An obvious solution for constructing an error model using multiple metrics is to use a QSAR method, in our case random forest. In this paper we demonstrate the usefulness of this paradigm, specifically for determining whether a useful error model can be built and which metrics are most useful for a given problem. For the ten data sets and for the seven metrics we examine here, it appears that it is possible to construct a useful error model using only two metrics (TREE_SD and PREDICTED). These do not require calculating similarities/distances between the molecules being predicted and the molecules used to build the activity model, which can be rate-limiting. PMID:24152204
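
As a sketch of the error-model idea (regressing prediction reliability on metrics such as TREE_SD and PREDICTED), the code below generates synthetic ensemble predictions and fits a plain least-squares model of absolute error on the two metrics. The paper itself fits a random forest for this step; the simple linear fit and all data here are stand-ins to keep the illustration self-contained.

```python
import random

random.seed(0)

# Synthetic "ensemble": 25 tree predictions per molecule; the absolute error
# is generated to grow with the tree spread, so TREE_SD should be informative.
rows = []
for _ in range(200):
    spread = random.uniform(0.1, 1.0)
    trees = [random.gauss(0.0, spread) for _ in range(25)]
    predicted = sum(trees) / len(trees)            # the PREDICTED metric
    tree_sd = (sum((t - predicted) ** 2 for t in trees) / len(trees)) ** 0.5
    abs_err = abs(random.gauss(0.0, tree_sd))
    rows.append((tree_sd, predicted, abs_err))

def lstsq(X, y):
    """Solve the normal equations X^T X w = X^T y by Gaussian elimination."""
    n = len(X[0])
    M = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * t for r, t in zip(X, y)) for i in range(n)]
    for i in range(n):                  # forward elimination
        for j in range(i + 1, n):
            f = M[j][i] / M[i][i]
            M[j] = [mj - f * mi for mj, mi in zip(M[j], M[i])]
            v[j] -= f * v[i]
    w = [0.0] * n
    for i in reversed(range(n)):        # back substitution
        w[i] = (v[i] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return w

X = [[1.0, sd, pred] for sd, pred, _ in rows]      # intercept, TREE_SD, PREDICTED
y = [err for _, _, err in rows]
w = lstsq(X, y)    # w[1], the TREE_SD weight, should dominate by construction
```

In practice the fitted error model is then queried at prediction time: large TREE_SD flags molecules whose activity prediction is likely unreliable.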

  16. Parametric Model for Astrophysical Proton-Proton Interactions and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Niklas; /Royal Inst. Tech., Stockholm

    2008-01-29

    Observations of gamma-rays have been made from celestial sources such as active galaxies, gamma-ray bursts and supernova remnants as well as the Galactic ridge. The study of gamma rays can provide information about production mechanisms and cosmic-ray acceleration. In the high-energy regime, one of the dominant mechanisms for gamma-ray production is the decay of neutral pions produced in interactions of ultra-relativistic cosmic-ray nuclei and interstellar matter. Presented here is a parametric model for calculations of inclusive cross sections and transverse momentum distributions for secondary particles (gamma rays, e±, νe, ν̄e, νμ and ν̄μ) produced in proton-proton interactions. This parametric model is derived from the proton-proton interaction model proposed by Kamae et al.; it includes the diffraction dissociation process, Feynman-scaling violation and the logarithmically rising inelastic proton-proton cross section. To improve fidelity to experimental data at lower energies, two baryon resonance excitation processes were added: one representing the Δ(1232) and the other multiple resonances with masses around 1600 MeV/c². The model predicts the power-law spectral index for all secondary particles to be about 0.05 lower in absolute value than that of the incident proton, and their inclusive cross sections to be larger than those predicted by previous models based on the Feynman-scaling hypothesis. The applications of the presented model in astrophysics are plentiful. It has been implemented into the Galprop code to calculate the contribution due to pion decays in the Galactic plane. The model has also been used to estimate the cosmic-ray flux in the Large Magellanic Cloud based on HI, CO and gamma-ray observations. The transverse momentum distributions enable calculations when the proton distribution is anisotropic. It is shown that the gamma-ray spectrum and flux due to a pencil beam of

  17. Modifications and Applications of the HERMES model: June - October 2010

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, J E

    2010-11-16

    The HERMES (High Explosive Response to MEchanical Stimulus) model has been developed to describe the response of energetic materials to low-velocity mechanical stimulus, referred to as HEVR (High Explosive Violent Response) or BVR (Burn to Violent Reaction). For tests performed with an HMX-based UK explosive, at sample sizes less than 200 g, the response was sometimes an explosion, but was not observed to be a detonation. The distinction between explosion and detonation can be important in assessing the effects of the HE response on nearby structures. A detonation proceeds as a supersonic shock wave supported by the release of energy that accompanies the transition from solid to high-pressure gas. For military high explosives, the shock wave velocity generally exceeds 7 km/s, and the pressure behind the shock wave generally exceeds 30 GPa. A kilogram of explosive would be converted to gas in 10 to 15 microseconds. An HEVR explosion proceeds much more slowly. Much of the explosive remains unreacted after the event. Peak pressures have been measured and calculated at less than 1 GPa, and the time for the portion of the solid that does react to form gas is about a millisecond. The explosion will, however, launch the confinement to a velocity that depends on the confinement mass, the mass of explosive converted, and the time required to form gas products. In many tests, the air blast signal and confinement velocity are comparable to those measured when an amount of explosive equal to that which is converted in an HEVR is deliberately detonated in the comparable confinement. The number of confinement fragments from an HEVR is much less than from the comparable detonation. The HERMES model comprises several submodels including a constitutive model for strength, a model for damage that includes the creation of porosity and surface area through fragmentation, an ignition model, an ignition front propagation model, and a model for burning after ignition. We have used HERMES

  18. The GRD Model for Silicate Melt Viscosity: Volcanological Applications

    Science.gov (United States)

    Russell, K.; Giordano, D.; Dingwell, D. B.

    2008-12-01

    We recently published a model for predicting the non-Arrhenian Newtonian viscosity of silicate melts as a function of temperature (T) and melt composition (X), including the volatile constituents H2O and F (Giordano et al. 2008). The non-Arrhenian T-dependence is accounted for by the VFT equation [log η = A + B/(T(K) - C)] and the model is calibrated on > 1750 measurements of melt viscosity. All compositional dependence is accommodated by 17 model coefficients embedded in the parameters B and C. The optimization assumes a common, high-T limit (A) for silicate melt viscosity and returns a value for this limit of -4.55 (± 0.2) (i.e., η ~ 10^-4.6 Pa s), making for a total of 18 model coefficients. The effects of pressure on silicate melt viscosity are not accounted for in this model; however, the model has the following attributes: a) it covers over fifteen log units of viscosity [10^-1 to 10^14 Pa s], b) it spans most of the compositional range found in naturally-occurring volcanic rocks, c) it is computationally continuous across the entire compositional and temperature spectrum of the database, and d) it is capable of accommodating both strong (near-Arrhenian T-dependence) and fragile (non-Arrhenian T-dependence) behaviour of silicate melts. Lastly, the model for melt viscosity can be used to predict other transport properties including glass transition temperatures (Tg) and melt fragility (m). Volcanic regimes feature constantly changing T-X melt conditions and, in many instances, these small changes generate strong non-linear variations in melt viscosity. The GRD model allows for accurate, continuous prediction of melt properties as a function of temperature and melt composition and, thus, is ideal for modelling transport properties in dynamic natural systems. Below we demonstrate the utility of this model with three volcanological applications: (A) We track variations in viscosity along liquid lines of descent predicted by MELTS (Ghiorso et al. 1995) and
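
The VFT form above is straightforward to evaluate. The sketch below fixes A at the reported high-T limit of -4.55 and uses placeholder B and C values; in the real GRD model, B and C are built from the 17 fitted compositional coefficients.

```python
A_VFT = -4.55      # common high-temperature limit, log10(Pa s), from the model

def log10_viscosity(T_kelvin, B=5000.0, C=500.0):
    """VFT equation: log10(eta / Pa s) = A + B/(T - C); only valid for T > C.

    B and C here are illustrative placeholders, not fitted GRD coefficients."""
    return A_VFT + B / (T_kelvin - C)

# Viscosity falls smoothly with temperature and tends to 10^A as T grows,
# reproducing the non-Arrhenian curvature the VFT form is chosen for.
log_etas = [log10_viscosity(T) for T in (900.0, 1100.0, 1300.0, 1500.0)]
```

A strong (near-Arrhenian) melt corresponds to C far below the temperatures of interest; a fragile melt has C close to them, which steepens the curve near the glass transition.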

  19. A Model of Cloud Based Application Environment for Software Testing

    CERN Document Server

    Vengattaraman, T; Baskaran, R

    2010-01-01

    Cloud computing is an emerging platform of service computing designed for swift and dynamic delivery of assured computing resources. Cloud computing provides Service-Level Agreements (SLAs) for guaranteed uptime availability, enabling convenient and on-demand network access to distributed and shared computing resources. Though the cloud computing paradigm holds its potential status in the field of distributed computing, cloud platforms have not yet drawn the attention of the majority of researchers and practitioners. More specifically, the researcher and practitioner community still has fragmented and imperfect knowledge of cloud computing principles and techniques. In this context, one of the primary motivations of the work presented in this paper is to reveal the versatile merits of the cloud computing paradigm, and hence the objective of this work is to bring out the remarkable significance of the cloud computing paradigm through an application environment. In this work, a cloud computing model for sof...

  20. Autonomic Model for Self-Configuring C#.NET Applications

    CERN Document Server

    Bassil, Youssef

    2012-01-01

    With the advances in computational technologies over the last decade, large organizations have been investing in Information Technology to automate their internal processes to cut costs and efficiently support their business projects. However, this comes at a price. Business requirements always change. Likewise, IT systems constantly evolve as developers make new versions of them, which requires endless administrative manual work to customize and configure them, especially if they are being used in different contexts, by different types of users, and for different requirements. Autonomic computing was conceived to provide an answer to these ever-changing requirements. Essentially, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate all complex IT processes without human intervention. This paper proposes an autonomic model based on Venn diagrams and set theory for self-configuring C#.NET applications, namely the self-customization of their GUI, ev...

  1. Dynamic behaviours of mix-game model and its application

    Institute of Scientific and Technical Information of China (English)

    Gou Cheng-Ling

    2006-01-01

    In this paper a minority game (MG) is modified by adding into it some agents who play a majority game. Such a game is referred to as a mix-game. The highlight of this model is that the two groups of agents in the mix-game have different bounded abilities to deal with historical information and to count their own performance. Through simulations, it is found that the local volatilities change a lot when some agents who play the majority game are added into the MG, and that the change of local volatilities greatly depends on the different combinations of historical memories of the two groups. Furthermore, the underlying mechanisms for this finding are analysed. Applications of the mix-game model are also given as an example.
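
As an illustration of the kind of simulation described (with parameters chosen here for brevity, not taken from the paper), the sketch below runs a toy mix-game: one group plays the minority rule and the other the majority rule on a shared binary history, the two groups are given different memory lengths, and a local volatility of the attendance is computed over a sliding window.

```python
import random

random.seed(1)

def play(N1=25, N2=10, m1=3, m2=2, T=500, S=2):
    """T rounds of a mix-game: N1 minority-game and N2 majority-game agents."""
    mmax = max(m1, m2)
    hist = random.getrandbits(mmax)          # recent outcomes, bit-encoded
    def new_agent(m):
        # S random strategies: lookup tables from the 2^m histories to +1/-1
        return {"m": m,
                "strats": [[random.choice((1, -1)) for _ in range(2 ** m)]
                           for _ in range(S)],
                "scores": [0] * S}
    agents = ([(new_agent(m1), "min") for _ in range(N1)] +
              [(new_agent(m2), "maj") for _ in range(N2)])
    attendance = []
    for _ in range(T):
        acts = []
        for ag, _kind in agents:
            key = hist % (2 ** ag["m"])      # each group sees its own m low bits
            best = max(range(S), key=lambda s: ag["scores"][s])
            acts.append(ag["strats"][best][key])
        A = sum(acts)                        # attendance this round
        minority = -1 if A > 0 else 1
        for (ag, kind), _a in zip(agents, acts):
            key = hist % (2 ** ag["m"])
            for s in range(S):               # reward strategies by group's rule
                pred = ag["strats"][s][key]
                ok = (pred == minority) if kind == "min" else (pred != minority)
                ag["scores"][s] += 1 if ok else -1
        hist = ((hist << 1) | (1 if minority == 1 else 0)) % (2 ** mmax)
        attendance.append(A)
    return attendance

att = play()
window = att[-100:]                          # local volatility over a window
mean = sum(window) / len(window)
local_vol = sum((a - mean) ** 2 for a in window) / len(window)
```

Varying m1 and m2 independently reproduces the kind of memory-combination dependence of local volatility that the paper studies.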

  2. Urban design and modeling: applications and perspectives on GIS

    Directory of Open Access Journals (Sweden)

    Roberto Mingucci

    2013-05-01

    Full Text Available In recent years, GIS systems have evolved because of technological advancements that make possible the simultaneous management of large amounts of information. Interesting aspects of their application concern site documentation at the territorial scale, taking advantage of CAD/BIM systems that usually work at the building scale instead. In this sense, surveys using sophisticated equipment such as laser scanners or UAV drones quickly capture data that can be accessed through new “mobile” technologies operating in the web-based information systems context. This paper aims to investigate the use and perspectives of geographic information technologies, analysis and design tools meant for modeling at different scales, referring to the results of research experiences conducted at the University of Bologna.

  3. Low Dimensional Semiconductor Structures Characterization, Modeling and Applications

    CERN Document Server

    Horing, Norman

    2013-01-01

    Starting with the first transistor in 1949, the world has experienced a technological revolution which has permeated most aspects of modern life, particularly over the last generation. Yet another such revolution looms up before us with the newly developed capability to control matter on the nanometer scale. A truly extraordinary research effort, by scientists, engineers, technologists of all disciplines, in nations large and small throughout the world, is directed and vigorously pressed to develop a full understanding of the properties of matter at the nanoscale and its possible applications, to bring to fruition the promise of nanostructures to introduce a new generation of electronic and optical devices. The physics of low dimensional semiconductor structures, including heterostructures, superlattices, quantum wells, wires and dots is reviewed and their modeling is discussed in detail. The truly exceptional material, Graphene, is reviewed; its functionalization and Van der Waals interactions are included h...

  4. Multimaterial polyacrylamide: fabrication with electrohydrodynamic jet printing, applications, and modeling

    International Nuclear Information System (INIS)

    Micropatterned, multimaterial hydrogels have a wide range of applications, including the study of microenvironmental factors on cell behavior, and complex materials that rapidly change shape in response to fluid composition. This paper presents a method to fabricate microscale polyacrylamide features embedded in a second hydrogel of a different composition. An electrohydrodynamic jet (e-jet) printer was used to pattern hemispherical droplets of polyacrylamide prepolymer on a passive substrate. After photopolymerization, the droplets were backfilled with a second polyacrylamide mixture, the second mixture was polymerized and the sample was peeled off the substrate. Fluorescent and confocal microscopy confirmed multimaterial patterning, while scanning probe microscopy revealed a patterned topography with printed spots forming shallow wells. Finite element modeling was used to understand the mechanics of the formation of the topographical features during backfill and subsequent polymerization. Finally, polyacrylamide containing acrylic acid was used to demonstrate two applications of the micropatterned hydrogels: stimuli-responsive materials and patterned substrates for cell culture. The e-jet fabrication technique described here is a highly flexible, high resolution method for creating multimaterial hydrogels. (paper)

  5. CFD Modeling in Development of Renewable Energy Applications

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Full Text Available Contents:
    Chapter 1: A Multi-fluid Model to Simulate Heat and Mass Transfer in a PEM Fuel Cell. Torsten Berning, Madeleine Odgaard, Søren K. Kær
    Chapter 2: CFD Modeling of a Planar Solid Oxide Fuel Cell (SOFC) for Clean Power Generation. Meng Ni
    Chapter 3: Hydrodynamics and Hydropower in the New Paradigm for a Sustainable Engineering. Helena M. Ramos, Petra A. López-Jiménez
    Chapter 4: Opportunities for CFD in Ejector Solar Cooling. M. Dennis
    Chapter 5: Three Dimensional Modelling of Flow Field Around a Horizontal Axis Wind Turbine (HAWT). Chaouki Ghenai, Armen Sargsyan, Isam Janajreh
    Chapter 6: Scaling Rules for Hydrodynamics and Heat Transfer in Jetting Fluidized-Bed Biomass Gasifiers. K. Zhang, J. Chang, P. Pei, H. Chen, Y. Yang
    Chapter 7: Investigation of Low Reynolds Number Unsteady Flow around Airfoils in Pitching, Plunging and Flapping Motions. M.R. Amiralaei, H. Alighanbari, S.M. Hashemi
    Chapter 8: Justification of Computational Fluid Dynamics Simulation for Flat Plate Solar Energy Collector. Mohamed Selmi, Mohammed J. Al-Khawaja, Abdulhamid Marafia
    Chapter 9: Comparative Performance of a 3-Bladed Airfoil Chord H-Darrieus and a 3-Bladed Straight Chord H-Darrieus Turbines using CFD. R. Gupta, Agnimitra Biswas
    Chapter 10: Computational Fluid Dynamics for PEM Fuel Cell Modelling. A. Iranzo, F. Rosa
    Chapter 11: Analysis of the Performance of PEM Fuel Cells: Tutorial of Major Functional and Constructive Characteristics using CFD Analysis. P.J. Costa Branco, J.A. Dente
    Chapter 12: Application of Techniques of Computational Fluid Dynamics in the Design of Bipolar Plates for PEM Fuel Cells. A.P. Manso, F.F. Marzo, J. Barranco, M. Garmendia Mujika.

  6. X-ray ablation measurements and modeling for ICF applications

    International Nuclear Information System (INIS)

    X-ray ablation of material from the first wall and other components of an ICF (Inertial Confinement Fusion) chamber is a major threat to the laser final optics. Material condensing on these optics after a shot may cause damage with subsequent laser shots. To ensure the successful operation of the ICF facility, removal rates must be predicted accurately. The goal of this dissertation is to develop an experimentally validated x-ray response model, with particular application to the National Ignition Facility (NIF). Accurate knowledge of the x-ray and debris emissions from ICF targets is a critical first step in the process of predicting the performance of the target chamber system. A number of 1-D numerical simulations of NIF targets have been run to characterize target output in terms of energy, angular distribution, spectrum, and pulse shape. Scaling of output characteristics with variations of both target yield and hohlraum wall thickness is also described. Experiments have been conducted at the Nova laser on the effects of relevant x-ray fluences on various materials. The response was diagnosed using post-shot examinations of the surfaces with scanning electron microscope and atomic force microscope instruments. Judgments were made about the dominant removal mechanisms for each material. Measurements of removal depths were made to provide data for the modeling. The finite difference ablation code developed here (ABLATOR) combines the thermomechanical response of materials to x-rays with models of various removal mechanisms. The former aspect refers to energy deposition in such small characteristic depths (∼ micron) that thermal conduction and hydrodynamic motion are significant effects on the nanosecond time scale. The material removal models use the resulting time histories of temperature and pressure profiles, along with ancillary local conditions, to predict rates of surface vaporization and the onset of conditions that would lead to spallation

  7. Application of isotope tracers in continental scale hydrological modeling

    International Nuclear Information System (INIS)

    Full text: Tracing isotopes in hydrological systems is becoming an important tool for hydrologists studying hydrological processes. Stable isotopes such as 2H and 18O are particularly useful since these elements are building blocks of the water molecule and behave slightly differently in phase changes and diffusion than regular water molecules. Hydrologists working on small and regional scales have demonstrated the value of stable isotope tracers in various applications such as distinguishing the source of surface water (old water from the groundwater pool and new water from surface runoff), differentiating evaporation (from open water) from transpiration (from plants), snowmelt and glacier mixing, etc. Application of isotope tracers at large scale lags far behind regional application, mostly due to the lack of isotopic data for large regions. The International Atomic Energy Agency has started a major effort, inviting experts and institutions from all over the world, to change this situation and promote the collection and distribution of isotopic data on various components of the hydrological cycle. IAEA and WMO (World Meteorological Organization) already established a Global Network for Isotopes in Precipitation (GNIP), and IAEA recently initiated a new effort, the Global Network for Isotopes in Rivers (GNIR). The present paper attempts to utilize these emerging isotopic datasets by incorporating isotope tracing in large scale hydrological simulation. The available precipitation and river isotopic composition data are analysed in a simple GIS context to demonstrate the consistency of the isotopic data with other Earth system data such as various climate forcings (air temperature, precipitation, vapor pressure, etc.), land characterisation data (land-use, soil types, river networks, etc.) and river discharge data. After the initial GIS-based analysis, the isotopic data are tested in a modified version of a well established large scale water balance/water transport

  8. DAVE: A plug and play model for distributed multimedia application development

    Energy Technology Data Exchange (ETDEWEB)

    Mines, R.F.; Friesen, J.A.; Yang, C.L.

    1994-07-01

    This paper presents a model being used for the development of distributed multimedia applications. The Distributed Audio Video Environment (DAVE) was designed to support the development of a wide range of distributed applications. The implementation of this model is described. DAVE is unique in that it combines a simple "plug and play" programming interface, supports both centralized and fully distributed applications, provides device and media extensibility, promotes object reusability, and supports interoperability and network independence. This model enables application developers to easily develop distributed multimedia applications and create reusable multimedia toolkits. DAVE was designed for developing applications such as video conferencing, media archival, remote process control, and distance learning.

  9. Political economy models and agricultural policy formation: empirical applicability and relevance for the CAP.

    OpenAIRE

    Zee, van der, J.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy formation in industrialised market economies. Part II (chapters 8-11) focuses on the empirical applicability of political economy models to agricultural policy formation and agricultural policy developments in the...

  10. Focuss algorithm application in kinetic compartment modeling for PET tracer

    International Nuclear Information System (INIS)

    Molecular imaging is a field in the process of becoming established. Its application largely depends on the molecular discovery process of imaging probes and drugs, from the mouse to the patient, from research to clinical practice. Positron emission tomography (PET) can non-invasively monitor pharmacokinetic and functional processes of drugs in intact organisms at tracer concentrations by kinetic modeling. It has been known that for all biological systems, linear or nonlinear, if the system is injected with a tracer in a steady state, the distribution of the tracer follows the kinetics of a linear compartmental system, whose solutions are sums of exponentials. Based on the general compartmental description of the tracer's fate in vivo, we present a novel kinetic modeling approach for the quantification of in vivo tracer studies with dynamic positron emission tomography (PET), which can determine a parsimonious model consistent with the measured data. This kinetic modeling technique allows for estimation of parametric images from a voxel based analysis and requires no a priori decision about the tracer's fate in vivo, instead determining the most appropriate model from the information contained within the kinetic data. Choosing a set of exponential functions, convolved with the plasma input function, as basis functions, the time activity curve of a region or a pixel can be written as a linear combination of the basis functions with corresponding coefficients. The number of non-zero coefficients returned corresponds to the model order, which is related to the number of tissue compartments. The system macro parameters are simply determined using the focal underdetermined system solver (FOCUSS) algorithm. The FOCUSS algorithm is a nonparametric algorithm for finding localized energy solutions from limited data and is a recursive linear estimation procedure. The FOCUSS algorithm usually converges very fast and so demands only a few iterations. The effectiveness is verified by simulation and clinical

  11. Application of a theoretical model to evaluate COPD disease management

    Directory of Open Access Journals (Sweden)

    Asin Javier D

    2010-03-01

    Full Text Available Abstract. Background: Disease management programmes are heterogeneous in nature and often lack a theoretical basis. An evaluation model has been developed in which theoretically driven inquiries link disease management interventions to outcomes. The aim of this study is to methodically evaluate the impact of a disease management programme for patients with chronic obstructive pulmonary disease (COPD) on process, intermediate and final outcomes of care in a general practice setting. Methods: A quasi-experimental study was performed with 12-month follow-up of 189 COPD patients in primary care in the Netherlands. The programme included patient education, protocolised assessment and treatment of COPD, structural follow-up and coordination by practice nurses at 3, 6 and 12 months. Data on intermediate outcomes (knowledge, psychosocial mediators, self-efficacy and behaviour) and final outcomes (dyspnoea, quality of life, measured by the CRQ and CCQ, and patient experiences) were obtained from questionnaires and electronic registries. Results: Implementation of the programme was associated with significant improvements in dyspnoea (p Conclusions: The application of a theory-driven model enhances the design and evaluation of disease management programmes aimed at improving health outcomes. This study supports the notion that a theoretical approach strengthens the evaluation designs of complex interventions. Moreover, it provides prudent evidence that the implementation of COPD disease management programmes can positively influence outcomes of care.

  12. Application of Perceptual Filtering Models to Noisy Speech Signals Enhancement

    Directory of Open Access Journals (Sweden)

    Novlene Zoghlami

    2012-01-01

    Full Text Available This paper describes a new speech enhancement approach using perceptually based noise reduction. The proposed approach is based on the application of two perceptual filtering models to noisy speech signals: the gammatone and the gammachirp filter banks, with nonlinear resolution according to the equivalent rectangular bandwidth (ERB) scale. The perceptual filtering gives a number of subbands that are individually spectrally weighted and modified according to two different noise suppression rules. The importance of an accurate noise estimate is related to the reduction of the musical noise artifacts in the processed speech that appear after the classic subtractive process. In this context, we use continuous noise estimation algorithms. The performance of the proposed approach is evaluated on speech signals corrupted by real-world noises. Using objective tests based on the perceptual quality PESQ score and the quality ratings of signal distortion (SIG), noise distortion (BAK) and overall quality (OVRL), and a subjective test based on the quality rating of automatic speech recognition (ASR), we demonstrate that our speech enhancement approach using filter banks modeling the human auditory system outperforms the conventional spectral modification algorithms in improving quality and intelligibility of the enhanced speech signal.
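
The nonlinear ERB-scale resolution behind such filter banks can be sketched directly. The Glasberg-Moore approximation for auditory filter bandwidth and the 24-band range below are standard textbook choices, not values taken from the paper.

```python
import math

def erb(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter at f,
    using the common Glasberg-Moore approximation."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_rate(f_hz):
    """ERB-number (position on the ERB-rate scale) for frequency f in Hz."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_space(f_low, f_high, n):
    """n centre frequencies uniformly spaced on the ERB-rate scale."""
    lo, hi = erb_rate(f_low), erb_rate(f_high)
    return [(10 ** ((lo + (hi - lo) * i / (n - 1)) / 21.4) - 1.0) * 1000.0 / 4.37
            for i in range(n)]

centres = erb_space(100.0, 8000.0, 24)   # a 24-band bank over a typical range
```

Bands are narrow at low frequencies and broad at high frequencies, which is what lets the subband weighting suppress noise with less audible musical-noise artifact than uniform-resolution spectral subtraction.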

  13. Correctness of Sensor Network Applications by Software Bounded Model Checking

    Science.gov (United States)

    Werner, Frank; Faragó, David

    We investigate the application of the software bounded model checking tool CBMC to the domain of wireless sensor networks (WSNs). We automatically generate a software behavior model from a network protocol (ESAWN) implementation in a WSN development and deployment platform (TinyOS), which is used to rigorously verify the protocol. Our work is a proof of concept that automatic verification of programs of practical size (≈ 21 000 LoC) and complexity is possible with CBMC and can be integrated into TinyOS. The developer can automatically check for pointer dereference and array index out of bound errors. She can also check additional, e.g., functional, properties that she provides by assume- and assert-statements. This experience paper shows that our approach is in general feasible, since we managed to verify about half of the properties. We made the verification process scalable in the size of the code by abstraction (e.g., from hardware) and by simplification heuristics. The latter also achieved scalability in data type complexity for the properties that were verifiable. The others require technical advancements for complex data types within CBMC's core.

  14. Risk management modeling and its application in maritime safety

    Science.gov (United States)

    Qin, Ting-Rong; Chen, Wei-Jiong; Zeng, Xiang-Kun

    2008-12-01

    Quantified risk assessment (QRA) needs mathematicization of risk theory. However, attention has been paid almost exclusively to applications of assessment methods, which has led to neglect of research into fundamental theories, such as the relationships among risk, safety, danger, and so on. To address this problem, fundamental theoretical relationships between risk and risk management were first analyzed mathematically and then illustrated with charts. Second, man-machine-environment-management (MMEM) theory was introduced into risk theory to analyze some properties of risk. On this basis, a three-dimensional model of risk management was established that includes a goal dimension, a management dimension, and an operation dimension. This goal-management-operation (GMO) model was explained, with emphasis on the risk flowchart (the operation dimension), which lays the groundwork for further study of risk management and of qualitative and quantitative assessment. Next, the relationship between Formal Safety Assessment (FSA) and risk management was investigated, revealing that the FSA method, which the International Maritime Organization (IMO) is actively promoting, derives from risk management theory. Finally, conclusions were drawn about how to apply this risk management method to concrete fields efficiently and conveniently, and about areas where further research is required.

  15. Mathematical models for foam-diverted acidizing and their applications

    Institute of Scientific and Technical Information of China (English)

    Li Songyan; Li Zhaomin; Lin Riyi

    2008-01-01

    Foam diversion can effectively solve the problem of uneven distribution of acid among layers of different permeabilities during matrix acidizing. Based on gas trapping theory and the mass conservation equation, mathematical models were developed for foam-diverted acidizing, which can be achieved by a foam slug followed by acid injection or by continuous injection of foamed acid. A design method for foam-diverted acidizing is also given. The mathematical models were solved by a computer program. Computed results show that the total formation skin factor, wellhead pressure, and bottomhole pressure increase with foam injection but decrease with acid injection. The volume flow rate in a high-permeability layer decreases, while that in a low-permeability layer increases, thus diverting acid to the low-permeability layer from the high-permeability layer. Under the same formation conditions, foamed acid treatment requires a longer operation and produces higher wellhead and bottomhole pressures. Field application shows that a foam slug can effectively block high-permeability layers and improve the intake profile noticeably.
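
The diversion mechanism described in this abstract can be illustrated with a minimal two-layer radial Darcy inflow split, where foam trapping is represented as added skin in the high-permeability layer. This is a sketch under simplifying assumptions (steady radial flow, equal fluid mobility), not the paper's model; all values and symbols are hypothetical.

```python
import math

def layer_rates(k, h, skin, re_rw=1000.0, total_q=1.0):
    """Split a total injection rate between layers by radial Darcy injectivity.

    k:    permeability of each layer (same units for all layers)
    h:    thickness of each layer
    skin: skin factor of each layer (foam trapping raises it)
    Injectivity of layer i is proportional to k_i*h_i / (ln(re/rw) + s_i).
    """
    inj = [ki * hi / (math.log(re_rw) + si) for ki, hi, si in zip(k, h, skin)]
    total = sum(inj)
    return [total_q * x / total for x in inj]

# Before foam: acid follows the permeability contrast (100 vs 10 md).
before = layer_rates(k=[100.0, 10.0], h=[10.0, 10.0], skin=[0.0, 0.0])

# After a foam slug: gas trapping adds skin mainly in the high-perm layer,
# diverting a larger share of the acid to the low-permeability layer.
after = layer_rates(k=[100.0, 10.0], h=[10.0, 10.0], skin=[50.0, 2.0])
```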

  16. Application of Stochastic Partial Differential Equations to Reservoir Property Modelling

    KAUST Repository

    Potsepaev, R.

    2010-09-06

    Existing algorithms of geostatistics for stochastic modelling of reservoir parameters require a mapping (the 'uvt-transform') into the parametric space and reconstruction of a stratigraphic co-ordinate system. The parametric space can be considered to represent a pre-deformed and pre-faulted depositional environment. Existing approximations of this mapping in many cases cause significant distortions to the correlation distances. In this work we propose a coordinate-free approach for modelling stochastic textures through the application of stochastic partial differential equations. By avoiding the construction of a uvt-transform and stratigraphic coordinates, one can generate realizations directly in the physical space in the presence of deformations and faults. In particular, the solution of the modified Helmholtz equation driven by Gaussian white noise is a zero-mean Gaussian stationary random field with exponential correlation function (in 3-D). This equation can be used to generate realizations in parametric space. In order to sample in physical space we introduce a stochastic elliptic PDE with tensor coefficients, where the tensor is related to correlation anisotropy and its variation in physical space.
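
The core idea, that solving the modified Helmholtz equation with white-noise forcing yields a stationary Gaussian random field, can be sketched on a periodic 2-D grid, where the solve is diagonal in Fourier space. This is a minimal illustration, not the paper's method: the grid size, the value of kappa, and the 2-D periodic setting are assumptions (the exponential-correlation statement in the abstract refers to 3-D).

```python
import numpy as np

def sample_spde_field(n=128, kappa=0.1, seed=0):
    """Draw one realization of u solving (kappa^2 - Laplacian) u = W
    on an n-by-n periodic grid, where W is Gaussian white noise.
    In Fourier space the operator is diagonal:
        u_hat(k) = W_hat(k) / (kappa^2 + |k|^2).
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n, n))            # white-noise forcing
    kx = 2.0 * np.pi * np.fft.fftfreq(n)
    k2 = kx[:, None] ** 2 + kx[None, :] ** 2   # |k|^2 on the grid
    u_hat = np.fft.fft2(w) / (kappa ** 2 + k2)
    return np.real(np.fft.ifft2(u_hat))

field = sample_spde_field()
```

Smaller kappa gives longer correlation lengths; spatially varying tensor coefficients (as in the abstract's physical-space sampling) would require a sparse linear solve instead of the FFT shortcut.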

  17. Applications of the International Space Station Probabilistic Risk Assessment Model

    Science.gov (United States)

    Grant, W.; Lutomski, M.

    2012-01-01

    The International Space Station (ISS) program is continuing to expand the use of Probabilistic Risk Assessments (PRAs). The use of PRAs in the ISS decision-making process has proven very successful over the past 8 years. PRAs are used in the decision-making process to address significant operational and design issues as well as to identify, communicate, and mitigate risks. Future PRAs are expected to have major impacts on not only the ISS, but also future NASA programs and projects. Many of these PRAs will have their foundation in the current ISS PRA model and in PRA trade studies that are being developed for the ISS Program. ISS PRAs have supported: development of reliability requirements for future NASA and commercial spacecraft; determination of inherent risk for visiting vehicles; evaluation of potential crew rescue scenarios; operational requirements and alternatives; planning of extravehicular activities (EVAs); and evaluation of robotics operations. This paper will describe some applications of the ISS PRA model and how they impacted the final decisions that were made.
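
The abstract does not disclose the ISS model itself, but the kind of quantification a PRA performs can be illustrated with a minimal fault-tree evaluation. All events, probabilities, and the tree structure below are hypothetical, chosen only to show how AND/OR gates combine independent basic-event probabilities.

```python
def and_gate(probs):
    """All independent basic events must occur: product of probabilities."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(probs):
    """Top event occurs if any independent basic event occurs."""
    p_none = 1.0
    for x in probs:
        p_none *= 1.0 - x
    return 1.0 - p_none

# Hypothetical mini fault tree: loss of a function if both redundant
# strings fail, OR a single common-cause event occurs.
p_string = 1e-2   # assumed per-mission failure probability of one string
p_common = 1e-4   # assumed common-cause failure probability
p_top = or_gate([and_gate([p_string, p_string]), p_common])
```

Real PRA models chain thousands of such gates (plus event trees and uncertainty distributions), but the arithmetic at each gate is this simple.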

  18. Risk management modeling and its application in maritime safety

    Institute of Scientific and Technical Information of China (English)

    QIN Ting-rong; CHEN Wei-jiong; ZENG Xiang-kun

    2008-01-01

    Quantified risk assessment (QRA) needs mathematicization of risk theory. However, attention has been paid almost exclusively to applications of assessment methods, which has led to neglect of research into fundamental theories, such as the relationships among risk, safety, danger, and so on. To address this problem, fundamental theoretical relationships between risk and risk management were first analyzed mathematically and then illustrated with charts. Second, man-machine-environment-management (MMEM) theory was introduced into risk theory to analyze some properties of risk. On this basis, a three-dimensional model of risk management was established that includes a goal dimension, a management dimension, and an operation dimension. This goal-management-operation (GMO) model was explained, with emphasis on the risk flowchart (the operation dimension), which lays the groundwork for further study of risk management and of qualitative and quantitative assessment. Next, the relationship between Formal Safety Assessment (FSA) and risk management was investigated, revealing that the FSA method, which the International Maritime Organization (IMO) is actively promoting, derives from risk management theory. Finally, conclusions were drawn about how to apply this risk management method to concrete fields efficiently and conveniently, and about areas where further research is required.

  19. Permeability of Two Parachute Fabrics - Measurements, Modeling, and Application

    Science.gov (United States)

    Cruz, Juan R.; O'Farrell, Clara; Hennings, Elsa; Runnells, Paul

    2016-01-01

    Two parachute fabrics, described by Parachute Industry Specifications PIA-C-7020D Type I and PIA-C-44378D Type I, were tested to obtain their permeabilities in air (i.e., flow-through volume of air per area per time) over the range of differential pressures from 0.146 psf (7 Pa) to 25 psf (1197 Pa). Both fabrics met their specification permeabilities at the standard differential pressure of 0.5 inch of water (2.60 psf, 124 Pa). The permeability results were transformed into an effective porosity for use in calculations related to parachutes. Models were created that related the effective porosity to the unit Reynolds number for each of the fabrics. As an application example, these models were used to calculate the total porosities for two geometrically-equivalent subscale Disk-Gap-Band (DGB) parachutes fabricated from each of the two fabrics, and tested at the same operating conditions in a wind tunnel. Using the calculated total porosities and the results of the wind tunnel tests, the drag coefficient of a geometrically-equivalent full-scale DGB operating on Mars was estimated.
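
The transformation from a measured permeability to an effective porosity can be sketched with one common definition used in the parachute literature: the ratio of the flow-through velocity to the ideal velocity for the given pressure differential, c = v / sqrt(2*dp/rho). Whether this is the exact definition used by the paper is an assumption, and the numerical values below are illustrative, not the paper's data.

```python
import math

RHO_AIR = 1.225  # kg/m^3, sea-level air density (assumed operating condition)

def effective_porosity(flow_velocity, delta_p, rho=RHO_AIR):
    """Effective porosity c = v / sqrt(2*dp/rho).

    flow_velocity: measured flow-through velocity v (m/s), i.e. the
        volume flow of air per unit fabric area per unit time
    delta_p: differential pressure across the fabric (Pa)
    rho:     density of air (kg/m^3)
    """
    return flow_velocity / math.sqrt(2.0 * delta_p / rho)

# Illustrative numbers: 1.5 m/s flow-through at the standard 124 Pa.
c = effective_porosity(1.5, 124.0)
```

Repeating this over the tested pressure range gives the porosity-versus-unit-Reynolds-number data to which the fabric models were fitted.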

  20. Finite Element Method application for modeling of PVD coatings properties

    Directory of Open Access Journals (Sweden)

    W. Sitek

    2008-04-01

    Full Text Available Purpose: The main subject of this paper is computer simulation, using the finite element method, for determining the internal stresses in Ti+TiN coatings obtained in the magnetron PVD process on ASP 30 sintered high-speed steel at temperatures of 460, 500 and 540 °C.Design/methodology/approach: Computer simulation of stresses was carried out with the finite element method in the ANSYS environment, and the experimental values of stresses were determined based on X-ray diffraction patterns.Findings: The presented model meets the initial criteria, which supports its use for determining the stresses in coatings with the finite element method using the ANSYS program. The computer simulation results correlate with the experimental results.Research limitations/implications: To evaluate in more detail the possibility of applying these coatings to tools, further computer simulation should concentrate on determining other properties of the coatings, for example microhardness.Originality/value: Computer simulation based on the finite element method is currently very popular; it allows a better understanding of the interdependence between process parameters and helps in choosing an optimal solution. The availability of ever-faster computers and a growing range of software makes it possible to create more precise models that correspond more closely to reality.
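
One contributor to the internal stresses simulated above is the thermal-expansion mismatch between coating and substrate on cooling from the deposition temperature. A minimal closed-form estimate (thin coating on a thick substrate, biaxial stress state) is sketched below; this is not the paper's FEM model, and all material properties are assumed round numbers for TiN on steel, not values from the paper.

```python
def thermal_residual_stress(e_coat, nu_coat, alpha_sub, alpha_coat,
                            t_deposition, t_room=20.0):
    """Biaxial thermal-mismatch stress in a thin coating on a thick substrate:

        sigma = E_c / (1 - nu_c) * (alpha_s - alpha_c) * (T_dep - T_room)

    e_coat:    Young's modulus of the coating (Pa)
    nu_coat:   Poisson's ratio of the coating
    alpha_sub, alpha_coat: thermal expansion coefficients (1/K)
    Positive result = tensile stress in the coating after cooling.
    """
    return (e_coat / (1.0 - nu_coat)
            * (alpha_sub - alpha_coat)
            * (t_deposition - t_room))

# Illustrative TiN-on-steel values (assumed, not from the paper):
sigma = thermal_residual_stress(
    e_coat=450e9, nu_coat=0.25,
    alpha_sub=12e-6, alpha_coat=9.4e-6,
    t_deposition=500.0)
```

An FEM model like the one in the paper resolves, in addition, intrinsic growth stresses and the stress distribution through the layered geometry, which this one-line estimate cannot.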