WorldWideScience

Sample records for ANOVA models application

  1. Application of one-way ANOVA in completely randomized experiments

    Science.gov (United States)

    Wahid, Zaharah; Izwan Latiff, Ahmad; Ahmad, Kartini

    2017-12-01

    This paper describes an application of one-way ANOVA, a statistical technique for completely randomized experiments, with three replicates. The technique was applied to a single factor with four levels and multiple observations at each level. The aim of this study is to investigate the relationship between the chemical oxygen demand index and the on-site location. Two different approaches are employed for the analyses: the critical-value approach and the p-value approach. The paper also presents the key assumptions that the data must satisfy for the technique to yield valid results. Pairwise comparisons by the Tukey method are also considered and discussed to determine where the significant differences among the means lie after the ANOVA has been performed. The results revealed that a statistically significant relationship exists between the chemical oxygen demand index and the on-site location.
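The arithmetic of such a one-way ANOVA can be sketched in a few lines of pure Python. The COD readings below are invented for illustration; they are not the data from the paper.

```python
# Pure-Python sketch of a one-way ANOVA for a completely randomized
# design: four levels (sites), three replicates each. The readings
# below are hypothetical, not the values from the study.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (treatment effect)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares (error)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, df_b, df_w

sites = [[12.1, 13.0, 12.6],   # site A
         [14.2, 14.8, 15.1],   # site B
         [11.5, 11.9, 12.2],   # site C
         [16.0, 15.4, 16.3]]   # site D
F, df_b, df_w = one_way_anova(sites)
print(F, df_b, df_w)
```

Under the critical-value approach, the computed F is compared with the tabulated F quantile for (3, 8) degrees of freedom; under the p-value approach, the upper-tail probability of that same statistic is compared with the chosen significance level.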

  2. Effect of fasting ramadan in diabetes control status - application of extensive diabetes education, serum creatinine with HbA1c statistical ANOVA and regression models to prevent hypoglycemia.

    Science.gov (United States)

    Aziz, Kamran M A

    2013-09-01

    Ramadan fasting is an obligatory duty for Muslims. Unique physiologic and metabolic changes occur during fasting, which require adjustments of diabetes medications. Although challenging, successful fasting can be accomplished if extensive pre-Ramadan education is provided to the patients. The current research was conducted to study effective Ramadan fasting with different OHAs/insulins, without significant risk of hypoglycemia, in terms of HbA1c reductions after Ramadan. An ANOVA model was used to assess HbA1c levels across different education statuses. Serum creatinine was used to measure renal function. Pre-Ramadan diabetes education with alteration of therapy and dosage adjustments for OHAs/insulin was carried out. Regression models for HbA1c before Ramadan against FBS before sunset were also synthesized as a tool to prevent hypoglycemia and support successful Ramadan fasting in the future. Out of 1046 patients, 998 fasted successfully without any episodes of hypoglycemia; 48 patients (4.58%) experienced hypoglycemia. The χ² test for CRD/CKD with hypoglycemia was also significant (p-value Ramadan diabetes management. Some relevant patents are also outlined in this paper.

  3. Biomarker Detection in Association Studies: Modeling SNPs Simultaneously via Logistic ANOVA

    KAUST Repository

    Jung, Yoonsuh; Huang, Jianhua Z.; Hu, Jianhua

    2014-01-01

    In genome-wide association studies, the primary task is to detect biomarkers in the form of Single Nucleotide Polymorphisms (SNPs) that have nontrivial associations with a disease phenotype and other important clinical/environmental factors. However, the extremely large number of SNPs compared to the sample size inhibits application of classical methods such as multiple logistic regression. Currently the most commonly used approach is still to analyze one SNP at a time. In this paper, we propose to consider the genotypes of the SNPs simultaneously via a logistic analysis of variance (ANOVA) model, which expresses the logit-transformed mean of the SNP genotypes as the summation of the SNP effects, the effects of the disease phenotype and/or other clinical variables, and the interaction effects. We use a reduced-rank representation of the interaction-effect matrix for dimensionality reduction, and employ the L1-penalty in a penalized likelihood framework to filter out the SNPs that have no associations. We develop a Majorization-Minimization algorithm for computational implementation. In addition, we propose a modified BIC criterion to select the penalty parameters and determine the rank number. The proposed method is applied to a Multiple Sclerosis data set and simulated data sets and shows promise in biomarker detection.
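As a rough illustration of the L1-penalised likelihood idea (not the authors' logistic ANOVA model or their Majorization-Minimization algorithm), the sketch below fits an L1-penalised logistic regression by proximal gradient descent (ISTA) on a tiny invented data set; the second predictor is pure noise and its coefficient is shrunk to zero.

```python
import math

def soft_threshold(z, t):
    # proximal operator of the L1 penalty
    return math.copysign(max(abs(z) - t, 0.0), z)

def l1_logistic(X, y, lam=0.05, lr=0.5, iters=3000):
    """Proximal-gradient (ISTA) fit of an L1-penalised logistic regression."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        # gradient of the (average) negative log-likelihood
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            eta = sum(b * v for b, v in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-eta))
            for j in range(p):
                grad[j] += (mu - yi) * xi[j] / n
        # gradient step, then the soft-thresholding proximal step
        beta = [soft_threshold(b - lr * g, lr * lam) for b, g in zip(beta, grad)]
    return beta

# column 0 tracks the outcome; column 1 is pure noise
X = [[ 2.0,  0.3], [ 1.5, -0.4], [ 1.0,  0.1], [ 0.5, -0.2],
     [-0.5,  0.2], [-1.0, -0.1], [-1.5,  0.4], [-2.0, -0.3]]
y = [1, 1, 1, 1, 0, 0, 0, 0]
beta = l1_logistic(X, y)
print(beta)  # the noise coefficient is driven to zero by the penalty
```

The penalty keeps the informative coefficient finite (the toy data are separable) and filters the uninformative predictor out entirely, which is the filtering role the L1 penalty plays in the paper.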

  5. Backfitting in Smoothing Spline Anova, with Application to Historical Global Temperature Data

    Science.gov (United States)

    Luo, Zhen

    In attempting to estimate the temperature history of the earth from surface observations, various biases can arise. An important source of bias is the incompleteness of sampling over both time and space. A few methods have been proposed to deal with this problem; although they can correct some biases resulting from incomplete sampling, they ignore other significant ones. In this dissertation, a smoothing spline ANOVA approach, a multivariate function estimation method, is proposed to deal simultaneously with various biases resulting from incomplete sampling. A further advantage of this method is that various components of the estimated temperature history can be obtained with a limited amount of stored information. The method can also be used to detect erroneous observations in the database. The method is illustrated through an example of modeling winter surface air temperature as a function of year and location. Extensions to more complicated models are discussed. The linear system associated with the smoothing spline ANOVA estimates is too large to be solved by full matrix decomposition methods. A computational procedure combining the backfitting (Gauss-Seidel) algorithm and the iterative imputation algorithm is proposed. This procedure takes advantage of the tensor product structure in the data to make the computation feasible in an environment of limited memory. Various related issues are discussed, e.g., the computation of confidence intervals and techniques to speed up the convergence of the backfitting algorithm, such as collapsing and successive over-relaxation.

  6. INFLUENCE OF TECHNOLOGICAL PARAMETERS ON AGROTEXTILES WATER ABSORBENCY USING ANOVA MODEL

    Directory of Open Access Journals (Sweden)

    LUPU Iuliana G.

    2016-05-01

    Agrotextiles are nowadays extensively used in horticulture, farming and other agricultural activities. Agriculture and textiles are among the largest industries in the world, providing basic needs such as food and clothing. Agrotextiles play a significant role in controlling the environment for crop protection, mitigating variations in climate and weather, and generating optimum conditions for plant growth. Water absorptive capacity is a very important property of needle-punched nonwovens used as irrigation substrates in horticulture. Nonwovens used as watering substrates distribute water uniformly and act as a slight water buffer owing to their absorbent capacity. The paper analyzes the influence of needling process parameters on the water absorptive capacity of needle-punched nonwovens using an ANOVA model. The model allows the identification of optimal process parameters in a shorter time and with lower material expenses than experimental research. The needle board frequency and needle penetration depth were used as independent variables, and the water absorptive capacity as the dependent variable, in the ANOVA regression model. Based on the employed ANOVA model, we established that the needling parameters significantly influence water absorbent capacity. The greater the needle penetration depth and needle board frequency, the higher the compactness of the fabric; a less porous structure has a lower water absorptive capacity.
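The sums-of-squares arithmetic behind such a two-factor analysis can be sketched as follows. The factor levels and absorbency values are invented for illustration (the paper's measurements are not reproduced here): factor A is needle board frequency, factor B is needle penetration depth, with two replicates per cell of a balanced 2x2 design.

```python
# Two-way ANOVA for a balanced factorial design, pure Python.
data = {  # (frequency_level, depth_level) -> replicate absorbencies (made up)
    ("low",  "shallow"): [8.1, 8.4],
    ("low",  "deep"):    [7.2, 7.0],
    ("high", "shallow"): [6.9, 7.1],
    ("high", "deep"):    [5.8, 6.0],
}

a_levels = sorted({k[0] for k in data})
b_levels = sorted({k[1] for k in data})
r = len(next(iter(data.values())))          # replicates per cell
a, b = len(a_levels), len(b_levels)
n = a * b * r

grand = sum(sum(v) for v in data.values()) / n
mean_a = {ai: sum(sum(data[(ai, bj)]) for bj in b_levels) / (b * r) for ai in a_levels}
mean_b = {bj: sum(sum(data[(ai, bj)]) for ai in a_levels) / (a * r) for bj in b_levels}
cell = {k: sum(v) / r for k, v in data.items()}

# partition the total sum of squares into A, B, interaction, and error
ss_a  = r * b * sum((m - grand) ** 2 for m in mean_a.values())
ss_b  = r * a * sum((m - grand) ** 2 for m in mean_b.values())
ss_ab = r * sum((cell[(ai, bj)] - mean_a[ai] - mean_b[bj] + grand) ** 2
                for ai in a_levels for bj in b_levels)
ss_e  = sum((x - cell[k]) ** 2 for k, v in data.items() for x in v)

f_a = (ss_a / (a - 1)) / (ss_e / (a * b * (r - 1)))
f_b = (ss_b / (b - 1)) / (ss_e / (a * b * (r - 1)))
print(f_a, f_b)  # compare against F critical values with (1, 4) df
```

The defining property being exploited is that for a balanced design the four sums of squares partition the total variation exactly.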

  7. Visualizing Experimental Designs for Balanced ANOVA Models using Lisp-Stat

    Directory of Open Access Journals (Sweden)

    Philip W. Iversen

    2004-12-01

    The structure, or Hasse, diagram described by Taylor and Hilton (1981, American Statistician) provides a visual display of the relationships between factors for balanced complete experimental designs. Using the Hasse diagram, rules exist for determining the appropriate linear model, ANOVA table, expected mean squares, and F-tests in the case of balanced designs. This procedure has been implemented in Lisp-Stat using a software representation of the experimental design. The user can interact with the Hasse diagram to add, change, or delete factors and see the effect on the proposed analysis. The system has potential uses in teaching and consulting.

  8. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by a polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF to be a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged across Kalman filter loops, which can limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is better suited to solving inverse problems. The new algorithm was tested with different examples and demonstrated
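The core idea of a functional ANOVA decomposition — approximating a function of many variables by a constant plus low-dimensional component functions — can be illustrated on a grid with a deliberately additive toy function. This sketches the decomposition only, not the adaptive PCKF algorithm itself.

```python
# First-order functional ANOVA decomposition of a two-variable function.
# For an additive function the first-order terms are exact, which is why
# truncating at low order can be so effective.

xs = [i / 10 for i in range(11)]
ys = [j / 10 for j in range(11)]

def f(x, y):                       # toy model, additive by construction
    return 3 * x + (y - 0.5) ** 2

# f0: overall mean; f1, f2: zero-mean main-effect component functions
f0 = sum(f(x, y) for x in xs for y in ys) / (len(xs) * len(ys))
f1 = {x: sum(f(x, y) for y in ys) / len(ys) - f0 for x in xs}
f2 = {y: sum(f(x, y) for x in xs) / len(xs) - f0 for y in ys}

# residual after keeping only the low-dimensional terms
resid = max(abs(f(x, y) - (f0 + f1[x] + f2[y])) for x in xs for y in ys)
print(resid)  # ~0: the sum of low-dimensional functions recovers f
```

For a non-additive f, the leftover residual is exactly the interaction component, and adaptive schemes like the one above decide which such components are worth expanding further.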

  9. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    International Nuclear Information System (INIS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-01-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Owing to the intimate connection between the PDD and Analysis of Variance (ANOVA) approaches, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation retains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  11. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    Wetzels, R.; Grasman, R.P.P.P.; Wagenmakers, E.J.

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA

  12. Why we should use simpler models if the data allow this: relevance for ANOVA designs in experimental biology

    Directory of Open Access Journals (Sweden)

    Lazic Stanley E

    2008-07-01

    Background: Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables, such as dose, time, or age, can be treated as a continuous rather than a categorical variable during analysis, even if subjects were randomly assigned to treatment groups. While this is not common practice, such an approach has a number of advantages: greater statistical power due to increased precision, a simpler and more informative interpretation of the results, greater parsimony, and the possibility of transforming the predictor variable. Results: An example is given from an experiment in which rats were randomly assigned to receive 0, 60, 180, or 240 mg/L of fluoxetine in their drinking water, with performance on the forced swim test as the outcome measure. Dose was treated as either a categorical or a continuous variable during analysis, with the latter analysis leading to a more powerful test (p = 0.021 vs. p = 0.159). This will be true in general, and the reasons for this are discussed. Conclusion: There are many advantages to treating variables as continuous numeric variables if the data allow it, and this should be done more often in experimental biology. Failure to use the optimal analysis runs the risk of missing significant effects or relationships.
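A minimal sketch of the contrast described above, with made-up outcome scores rather than the forced-swim data: treating dose as continuous spends one model degree of freedom on a slope, while the categorical analysis spends three on separate group means.

```python
# Categorical vs. continuous treatment of a dose variable (toy data).
doses = [0, 0, 60, 60, 180, 180, 240, 240]
y =     [10.0, 11.0, 9.0, 9.5, 7.5, 8.0, 6.0, 6.5]

n = len(doses)
ybar = sum(y) / n
ss_total = sum((v - ybar) ** 2 for v in y)

# Continuous: simple linear regression on dose (1 model df)
xbar = sum(doses) / n
sxx = sum((d - xbar) ** 2 for d in doses)
sxy = sum((d - xbar) * (v - ybar) for d, v in zip(doses, y))
slope = sxy / sxx
ss_reg = slope * sxy                      # regression sum of squares
f_cont = (ss_reg / 1) / ((ss_total - ss_reg) / (n - 2))

# Categorical: one mean per dose group (3 model df)
levels = sorted(set(doses))
ss_between = 0.0
for lv in levels:
    grp = [v for d, v in zip(doses, y) if d == lv]
    ss_between += len(grp) * (sum(grp) / len(grp) - ybar) ** 2
f_cat = (ss_between / (len(levels) - 1)) / ((ss_total - ss_between) / (n - len(levels)))

print(f_cont, f_cat)  # the continuous fit spends fewer df on the model
```

Because the linear fit is nested in the group-means model, it can never explain more variation, but when the trend really is close to linear its single-df test concentrates that variation and gains power, which is the point the paper makes.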

  13. ANOVA and ANCOVA A GLM Approach

    CERN Document Server

    Rutherford, Andrew

    2012-01-01

    Provides an in-depth treatment of ANOVA and ANCOVA techniques from a linear model perspective ANOVA and ANCOVA: A GLM Approach provides a contemporary look at the general linear model (GLM) approach to the analysis of variance (ANOVA) of one- and two-factor psychological experiments. With its organized and comprehensive presentation, the book successfully guides readers through conventional statistical concepts and how to interpret them in GLM terms, treating the main single- and multi-factor designs as they relate to ANOVA and ANCOVA. The book begins with a brief history of the separate dev

  14. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    Science.gov (United States)

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

    We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered, and the effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with no violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions; a second combines moderately correlated and highly correlated measurement occasions. From these four populations, without any between-group or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates were computed for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction). To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as m = 3, 6, and 9 measurement occasions. With respect to rANOVA, the results argue for the use of rANOVA with Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The
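For reference, the Greenhouse-Geisser correction mentioned above rescales the rANOVA degrees of freedom by an epsilon estimated from the covariance matrix of the measurement occasions. A minimal sketch of the standard epsilon estimate (this is textbook machinery, not the authors' simulation code):

```python
def gg_epsilon(S):
    """Greenhouse-Geisser epsilon from a k x k occasion covariance matrix."""
    k = len(S)
    row = [sum(r) / k for r in S]
    grand = sum(row) / k
    # doubly centre the covariance matrix
    C = [[S[i][j] - row[i] - row[j] + grand for j in range(k)] for i in range(k)]
    tr = sum(C[i][i] for i in range(k))
    ss = sum(C[i][j] ** 2 for i in range(k) for j in range(k))
    return tr ** 2 / ((k - 1) * ss)

# compound symmetry (sphericity holds) -> epsilon = 1
S_cs = [[2.0, 1.0, 1.0, 1.0],
        [1.0, 2.0, 1.0, 1.0],
        [1.0, 1.0, 2.0, 1.0],
        [1.0, 1.0, 1.0, 2.0]]
print(gg_epsilon(S_cs))  # 1.0; violations of sphericity give epsilon < 1
```

Both the numerator and denominator degrees of freedom of the rANOVA F-test are multiplied by this epsilon, so stronger sphericity violations (smaller epsilon) yield a more conservative test; the Huynh-Feldt correction is a less conservative variant of the same idea.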

  15. Foliar Sprays of Citric Acid and Malic Acid Modify Growth, Flowering, and Root to Shoot Ratio of Gazania (Gazania rigens L.): A Comparative Analysis by ANOVA and Structural Equations Modeling

    Directory of Open Access Journals (Sweden)

    Majid Talebi

    2014-01-01

    Foliar application of citric acid and malic acid at two levels (100 or 300 mg L−1) was investigated for effects on flower stem height, plant height, flower performance, and yield indices (fresh yield, dry yield, and root to shoot ratio) of Gazania. Distilled water was applied as the control treatment. Multivariate analysis revealed that while the experimental treatments had no significant effect on fresh weight and flower count, plant dry weight was significantly increased by 300 mg L−1 malic acid. Citric acid at 100 and 300 mg L−1 and malic acid at 300 mg L−1 significantly increased root fresh weight. Both plant height and peduncle length were significantly increased at all applied levels of citric acid and malic acid. The display time of flowers on the plant increased in all treatments compared to the control. The root to shoot ratio was increased significantly by 300 mg L−1 citric acid compared to all other treatments. These findings confirm earlier reports that citric acid and malic acid, as environmentally sound chemicals, affect various aspects of crop growth and development. Structural equations modeling was used in parallel with ANOVA to assess the factor effects and the possible paths of effects.

  16. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on an experimental design to generate training points and on regression/interpolation to build the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been applied to predicting the probability of failure in three structural mechanics problems, and it is observed to yield accurate and computationally efficient estimates of the failure probability.

  17. ANOVA for the behavioral sciences researcher

    CERN Document Server

    Cardinal, Rudolf N

    2013-01-01

    This new book provides a theoretical and practical guide to analysis of variance (ANOVA) for those who have not had a formal course in this technique but need to use this analysis as part of their research. From their experience in teaching this material and applying it to research problems, the authors have created a summary of the statistical theory underlying ANOVA, together with important issues, guidance, practical methods, references, and hints about using statistical software. These have been organized so that the student can learn the logic of the analytical techniques but also use the

  18. Permutation Tests for Stochastic Ordering and ANOVA

    CERN Document Server

    Basso, Dario; Salmaso, Luigi; Solari, Aldo

    2009-01-01

    Permutation testing for multivariate stochastic ordering and ANOVA designs is a fundamental issue in many scientific fields such as medicine, biology, pharmaceutical studies, engineering, economics, psychology, and social sciences. This book presents advanced methods and related R codes to perform complex multivariate analyses

  19. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.

  20. Using the multiple regression analysis with respect to ANOVA and 3D mapping to model the actual performance of PEM (proton exchange membrane) fuel cell at various operating conditions

    International Nuclear Information System (INIS)

    Al-Hadeethi, Farqad; Al-Nimr, Moh'd; Al-Safadi, Mohammad

    2015-01-01

    The performance of a PEM (proton exchange membrane) fuel cell was experimentally investigated at three temperatures (30, 50 and 70 °C), four flow rates (5, 10, 15 and 20 ml/min) and two flow patterns (co-current and counter-current) in order to generate two correlations using multiple regression analysis with respect to ANOVA. Results revealed that increasing the temperature for the co-current and counter-current flow patterns increases both hydrogen and oxygen diffusivities, improves water management and raises membrane conductivity. The derived mathematical correlations and three-dimensional mapping (i.e. surface response) for the co-current and counter-current flow patterns showed that there is a clear interaction among the variables (temperature and flow rate). - Highlights: • Mathematical correlations were generated using multiple regression analysis with respect to ANOVA for the performance of the PEM fuel cell. • 3D mapping was used to diagnose the optimum performance of the PEM fuel cell at the given operating conditions. • Results revealed that increasing the flow rate had a direct influence on the consumption of oxygen. • Results confirmed that increasing the temperature in co-current and counter-current flow patterns increases the performance of the PEM fuel cell.
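A hedged sketch of fitting a response-surface correlation of the kind described above: least squares for y = b0 + b1·T + b2·Q + b3·T·Q (T temperature, Q flow rate, with an interaction term), solved via the normal equations. The data are synthetic, generated from coefficients chosen for the demo, not the paper's measurements.

```python
# Multiple regression with an interaction term over the stated factor grid.

def solve(A, rhs):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

true_beta = [0.2, 0.01, 0.05, 0.0004]       # b0, b1, b2, b3 (chosen for the demo)
temps = [30, 50, 70]                        # deg C, as in the experiment
flows = [5, 10, 15, 20]                     # ml/min
rows, y = [], []
for T in temps:
    for Q in flows:
        rows.append([1.0, T, Q, T * Q])     # design row with interaction term
        y.append(sum(b * x for b, x in zip(true_beta, rows[-1])))

p = 4
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
beta_hat = solve(XtX, Xty)
print(beta_hat)  # recovers true_beta, since no noise was added
```

In the paper's setting the coefficients would instead be estimated from measured cell performance and the significance of each term, including the T·Q interaction, judged with ANOVA.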

  1. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  2. An ANOVA approach for statistical comparisons of brain networks.

    Science.gov (United States)

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

    The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify differing subnetworks. As an example, we show the application of this tool to resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep in the days before the scan is a relevant variable that must be controlled for. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks or social networks.

  3. ANOVA-HDMR structure of the higher order nodal diffusion solution

    International Nuclear Information System (INIS)

    Bokov, P. M.; Prinsloo, R. H.; Tomasevic, D. I.

    2013-01-01

    Nodal diffusion methods still represent a standard in global reactor calculations, but employ some ad-hoc approximations (such as the quadratic leakage approximation) which limit their accuracy in cases where reference quality solutions are sought. In this work we solve the nodal diffusion equations utilizing the so-called higher-order nodal methods to generate reference quality solutions and to decompose the obtained solutions via a technique known as High Dimensional Model Representation (HDMR). This representation and associated decomposition of the solution provides a new formulation of the transverse leakage term. The HDMR structure is investigated via the technique of Analysis of Variance (ANOVA), which indicates why the existing class of transversely-integrated nodal methods prove to be so successful. Furthermore, the analysis leads to a potential solution method for generating reference quality solutions at a much reduced calculational cost, by applying the ANOVA technique to the full higher order solution. (authors)

  4. Modelling Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 8th European Conference on Modelling Foundations and Applications, held in Kgs. Lyngby, Denmark, in July 2012. The 20 revised full foundations track papers and 10 revised full applications track papers presented were carefully reviewed...

  5. ANOVA parameters influence in LCF experimental data and simulation results

    Directory of Open Access Journals (Sweden)

    Vercelli A.

    2010-06-01

    The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in several phases. The numerical finite element method provides a useful instrument which becomes increasingly effective as the geometrical and numerical modelling becomes more accurate. The definition of the constitutive model plays an important role in the effectiveness of the numerical simulation [1, 2], as shown, for example, in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. Component life estimation is the subsequent phase and requires complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper the main topic of the research activity is to investigate whether the parameters that prove influential in the experimental activity also influence the numerical simulations, thus establishing whether the models effectively capture all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be simple and to allow calibration of both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity comprised three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests and low cycle fatigue (LCF) tests. The numerical structural FEM simulations were run on a commercial nonlinear solver, ABAQUS® 6.8, and replicated the experimental tests.
The stress, strain, thermal results from the thermo

  6. Global testing under sparse alternatives: ANOVA, multiple comparisons and the higher criticism

    OpenAIRE

    Arias-Castro, Ery; Candès, Emmanuel J.; Plan, Yaniv

    2011-01-01

    Testing for the significance of a subset of regression coefficients in a linear model, a staple of statistical analysis, goes back at least to the work of Fisher who introduced the analysis of variance (ANOVA). We study this problem under the assumption that the coefficient vector is sparse, a common situation in modern high-dimensional settings. Suppose we have $p$ covariates and that under the alternative, the response only depends upon the order of $p^{1-\\alpha}$ of those, $0\\le\\alpha\\le1$...

  7. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, known as an order-constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and hence a smaller sample size is needed. This article discusses this reduction in sample size as an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain the sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).

  8. Prediction and Control of Cutting Tool Vibration in Cnc Lathe with Anova and Ann

    Directory of Open Access Journals (Sweden)

    S. S. Abuthakeer

    2011-06-01

    Machining is a complex process in which many variables can compromise the desired results. Among them, cutting tool vibration is the most critical phenomenon influencing the dimensional precision of the machined components, the functional behavior of the machine tools and the life of the cutting tool. In a machining operation, cutting tool vibrations are mainly influenced by cutting parameters such as cutting speed, depth of cut and tool feed rate. In this work, the cutting tool vibrations are controlled using a damping pad made of Neoprene. Experiments were conducted in a CNC lathe in which the tool holder was supported with and without the damping pad. The cutting tool vibration signals were collected through a data acquisition system supported by LabVIEW software. To increase the robustness and reliability of the experiments, a full factorial experimental design was used. The experimental data collected were tested with analysis of variance (ANOVA) to understand the influence of the cutting parameters, and empirical models were developed from the ANOVA results. Experimental studies and data analysis were performed to validate the proposed damping system. A multilayer perceptron neural network model was constructed with a feed-forward back-propagation algorithm using the acquired data. On completion of the experimental tests, the artificial neural network (ANN) was used to validate the results obtained and to predict the behavior of the system under any cutting condition within the operating range. The onsite tests show that the proposed system reduces the vibration of the cutting tool to a great extent.

  9. Estimating linear effects in ANOVA designs: the easy way.

    Science.gov (United States)

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
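The approach described, estimating a linear effect as a per-participant least-squares slope and then treating the slopes as the dependent measure within a repeated measures framework, can be sketched as follows (data and variable names are illustrative; `xs` holds the ordered condition values, e.g. numerical distances):

```python
from statistics import mean

def linear_effect(data, xs):
    """Per-participant least-squares slopes of condition means against xs.

    data: one row per participant, one column per ordered condition.
    Returns the individual slopes and their group mean (the linear effect).
    """
    xbar = mean(xs)
    sxx = sum((x - xbar) ** 2 for x in xs)
    slopes = [sum((x - xbar) * y for x, y in zip(xs, row)) / sxx
              for row in data]
    return slopes, mean(slopes)
```

The mean slope is the effect size in slope units; a one-sample test of the slopes against zero (or entering them into a repeated measures ANOVA) then assesses the significance of the linear trend.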

  10. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
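A randomization-based version of the ANOVA F-test, of the kind such classroom activities build on, can be sketched in a few lines: compute the observed F statistic, then repeatedly shuffle the group labels and count how often a permuted F is at least as large (all data here are illustrative):

```python
import random
from statistics import mean

def f_statistic(groups):
    """One-way ANOVA F: between-group over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def permutation_f_test(groups, n_perm=2000, seed=0):
    """Approximate the F-test p-value by shuffling group labels."""
    rng = random.Random(seed)
    obs = f_statistic(groups)
    pooled = [x for g in groups for x in g]
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        it = iter(pooled)
        perm = [[next(it) for _ in range(s)] for s in sizes]
        if f_statistic(perm) >= obs:
            count += 1
    return obs, (count + 1) / (n_perm + 1)   # permutation p-value
```

The histogram of permuted F values is exactly the sampling distribution the activity asks students to build by randomization, which is why the approach reinforces both ideas at once.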

  11. Mathematical modeling with multidisciplinary applications

    CERN Document Server

    Yang, Xin-She

    2013-01-01

    Features mathematical modeling techniques and real-world processes with applications in diverse fields Mathematical Modeling with Multidisciplinary Applications details the interdisciplinary nature of mathematical modeling and numerical algorithms. The book combines a variety of applications from diverse fields to illustrate how the methods can be used to model physical processes, design new products, find solutions to challenging problems, and increase competitiveness in international markets. Written by leading scholars and international experts in the field, the

  12. Finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    Features step-by-step examples based on actual data and connects fundamental mathematical modeling skills and decision making concepts to everyday applicability Featuring key linear programming, matrix, and probability concepts, Finite Mathematics: Models and Applications emphasizes cross-disciplinary applications that relate mathematics to everyday life. The book provides a unique combination of practical mathematical applications to illustrate the wide use of mathematics in fields ranging from business, economics, finance, management, operations research, and the life and social sciences.

  13. Models for Dynamic Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Morales Rodriguez, Ricardo; Heitzig, Martina

    2011-01-01

    This chapter covers aspects of the dynamic modelling and simulation of several complex operations that include a controlled blending tank, a direct methanol fuel cell that incorporates a multiscale model, a fluidised bed reactor, a standard chemical reactor and finally a polymerisation reactor...... be applied to formulate, analyse and solve these dynamic problems and how in the case of the fuel cell problem the model consists of coupledmeso and micro scale models. It is shown how data flows are handled between the models and how the solution is obtained within the modelling environment....

  14. Modeling Philosophies and Applications

    Science.gov (United States)

    All models begin with a framework and a set of assumptions and limitations that go along with that framework. In terms of fracing and RA, there are several places where models and parameters must be chosen to complete hazard identification.

  15. Engine Modelling for Control Applications

    DEFF Research Database (Denmark)

    Hendricks, Elbert

    1997-01-01

    In earlier work published by the author and co-authors, a dynamic engine model called a Mean Value Engine Model (MVEM) was developed. This model is physically based and is intended mainly for control applications. In its newer form, it is easy to fit to many different engines and requires little...... engine data for this purpose. It is especially well suited to embedded model applications in engine controllers, such as nonlinear observer based air/fuel ratio and advanced idle speed control. After a brief review of this model, it will be compared with other similar models which can be found...

  16. Multilevel models applications using SAS

    CERN Document Server

    Wang, Jichuan; Fisher, James F

    2011-01-01

    This book covers a broad range of topics about multilevel modeling. The goal is to help readers to understand the basic concepts, theoretical frameworks, and application methods of multilevel modeling. It is at a level also accessible to non-mathematicians, focusing on the methods and applications of various multilevel models and using the widely used statistical software SAS®. Examples are drawn from analysis of real-world research data.

  17. MARKETING MODELS APPLICATION EXPERIENCE

    Directory of Open Access Journals (Sweden)

    A. Yu. Rymanov

    2011-01-01

    Marketing models are used for the assessment of such marketing elements as sales volume, market share, market attractiveness, advertising costs, product pushing and selling, profit and profitability. A classification of models of buying-process decision making is presented. SWOT- and GAP-based models are best for selling assessments. Lately, there is a tendency to shift from assessment on the basis of financial indices to assessment on the basis of non-financial ones. From the marketing viewpoint, the most important are models of long-term company activities and consumer drawing, as well as operative models of market attractiveness.

  18. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    and the development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums, heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil...... the steady state, distributed behaviour of a short-path evaporator....

  19. Optimization of Parameters for Manufacture Nanopowder Bioceramics at Machine Pulverisette 6 by Taguchi and ANOVA Method

    Science.gov (United States)

    Van Hoten, Hendri; Gunawarman; Mulyadi, Ismet Hari; Kurniawan Mainil, Afdhal; Putra, Bismantoloa dan

    2018-02-01

    This research concerns the manufacture of bioceramic nanopowders from local materials by ball milling for biomedical applications. Source materials for the manufacture of medicines are plants, animal tissues, microbial structures and engineered biomaterials, and the raw material takes powder form before mixing. In the case of medicines, the research goal is to find sources of biomedical materials that, as nanoscale powders, can be used as raw material for medicine. One such biomedical material of the bioceramic type is chicken eggshell. This research develops methods for manufacturing nanopowder material from chicken eggshells by ball milling, using the Taguchi method and ANOVA. Eggshells were milled at rates of 150, 200 and 250 rpm, for times of 1, 2 and 3 hours, and at grinding-ball-to-eggshell-powder weight ratios (BPR) of 1:6, 1:8 and 1:10. Before milling, the eggshells were crushed and calcined at a temperature of 900°C. After milling, the fine eggshell powder was characterized by SEM to determine its particle size. The Taguchi design analysis gave optimum parameters of 250 rpm milling rate, 3 hours milling time and a BPR of 1:6, with an average eggshell powder size of 1.305 μm. Milling speed, milling time and ball-to-powder weight ratio contribute 60.82%, 30.76% and 6.64%, respectively, with an error of 1.78%.
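The percentage contributions reported above (60.82%, 30.76% and 6.64%, with 1.78% error) follow from the standard ANOVA decomposition: each factor's sum of squares is divided by the total sum of squares. A minimal sketch of that computation, using hypothetical sums of squares rather than the study's data:

```python
def contribution_percent(ss_factors, ss_error):
    """Percent contribution of each factor in a Taguchi/ANOVA table.

    ss_factors: sum of squares for each factor; ss_error: residual SS.
    """
    total = sum(ss_factors) + ss_error
    return [round(100 * ss / total, 2) for ss in ss_factors]
```

With hypothetical sums of squares of 6, 3 and an error SS of 1, the factors contribute 60% and 30%, mirroring the structure of the reported figures.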

  20. Insertion Modeling and Its Applications

    OpenAIRE

    Alexander Letichevsky; Oleksandr Letychevskyi; Vladimir Peschanenko

    2016-01-01

    The paper relates to the theoretical and practical aspects of insertion modeling. Insertion modeling is a theory of agents and environments interaction where an environment is considered as agent with a special insertion function. The main notions of insertion modeling are presented. Insertion Modeling System is described as a tool for development of different kinds of insertion machines. The research and industrial applications of Insertion Modeling System are presented.

  1. A Robust Design Applicability Model

    DEFF Research Database (Denmark)

    Ebro, Martin; Lars, Krogstie; Howard, Thomas J.

    2015-01-01

    This paper introduces a model for assessing the applicability of Robust Design (RD) in a project or organisation. The intention of the Robust Design Applicability Model (RDAM) is to provide support for decisions by engineering management considering the relevant level of RD activities...... to be applicable in organisations assigning a high importance to one or more factors that are known to be impacted by RD, while also experiencing a high level of occurrence of this factor. The RDAM supplements existing maturity models and metrics to provide a comprehensive set of data to support management......

  2. Cautionary Note on Reporting Eta-Squared Values from Multifactor ANOVA Designs

    Science.gov (United States)

    Pierce, Charles A.; Block, Richard A.; Aguinis, Herman

    2004-01-01

    The authors provide a cautionary note on reporting accurate eta-squared values from multifactor analysis of variance (ANOVA) designs. They reinforce the distinction between classical and partial eta-squared as measures of strength of association. They provide examples from articles published in premier psychology journals in which the authors…

  3. Use of "t"-Test and ANOVA in Career-Technical Education Research

    Science.gov (United States)

    Rojewski, Jay W.; Lee, In Heok; Gemici, Sinan

    2012-01-01

    Use of t-tests and analysis of variance (ANOVA) procedures in published research from three scholarly journals in career and technical education (CTE) during a recent 5-year period was examined. Information on post hoc analyses, reporting of effect size, alpha adjustments to account for multiple tests, power, and examination of assumptions…

  4. Group-wise ANOVA simultaneous component analysis for designed omics experiments

    NARCIS (Netherlands)

    Saccenti, Edoardo; Smilde, Age K.; Camacho, José

    2018-01-01

    Introduction: Modern omics experiments pertain not only to the measurement of many variables but also follow complex experimental designs where many factors are manipulated at the same time. This data can be conveniently analyzed using multivariate tools like ANOVA-simultaneous component analysis

  5. Human mobility: Models and applications

    Science.gov (United States)

    Barbosa, Hugo; Barthelemy, Marc; Ghoshal, Gourab; James, Charlotte R.; Lenormand, Maxime; Louail, Thomas; Menezes, Ronaldo; Ramasco, José J.; Simini, Filippo; Tomasini, Marcello

    2018-03-01

    Recent years have witnessed an explosion of extensive geolocated datasets related to human movement, enabling scientists to quantitatively study individual and collective mobility patterns, and to generate models that can capture and reproduce the spatiotemporal structures and regularities in human trajectories. The study of human mobility is especially important for applications such as estimating migratory flows, traffic forecasting, urban planning, and epidemic modeling. In this survey, we review the approaches developed to reproduce various mobility patterns, with the main focus on recent developments. This review can be used both as an introduction to the fundamental modeling principles of human mobility, and as a collection of technical methods applicable to specific mobility-related problems. The review organizes the subject by differentiating between individual and population mobility and also between short-range and long-range mobility. Throughout the text the description of the theory is intertwined with real-world applications.

  6. Survival analysis models and applications

    CERN Document Server

    Liu, Xian

    2012-01-01

    Survival analysis concerns sequential occurrences of events governed by probabilistic laws.  Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading onto some of the most advanced topics in survival analysis.Assumes only a minimal knowledge of SAS whilst enablin

  7. Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA

    Science.gov (United States)

    Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.

    2018-03-01

    Comparison with a reference method is one of the necessary requirements for the validation of non-standard methods. This comparison was made using an experimental design analysed with two-way ANOVA, in which the results obtained using the EDXRF method under validation were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium and vanadium. For all elements, the F-tests indicate that the null hypothesis (H0) is not rejected; consequently, there is no significant difference between the methods compared. Therefore, according to this study, the EDXRF method passes this method-comparison requirement.
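A two-way ANOVA without replication, of the kind used to compare a candidate method against a reference across several elements, can be sketched as follows (rows play the role of elements, columns the role of methods; the data are illustrative). If the column F statistic stays below the critical F value, the methods do not differ significantly:

```python
def two_way_anova(table):
    """Two-way ANOVA without replication.

    table[i][j]: measurement for row factor level i, column factor level j.
    Returns (F_rows, F_cols) against the interaction/residual mean square.
    """
    r, c = len(table), len(table[0])
    grand = sum(map(sum, table)) / (r * c)
    row_means = [sum(row) / c for row in table]
    col_means = [sum(table[i][j] for i in range(r)) / r for j in range(c)]
    ss_rows = c * sum((m - grand) ** 2 for m in row_means)
    ss_cols = r * sum((m - grand) ** 2 for m in col_means)
    ss_tot = sum((table[i][j] - grand) ** 2
                 for i in range(r) for j in range(c))
    ss_err = ss_tot - ss_rows - ss_cols          # residual sum of squares
    ms_err = ss_err / ((r - 1) * (c - 1))
    return (ss_rows / (r - 1)) / ms_err, (ss_cols / (c - 1)) / ms_err
```

In the method-comparison setting, the column factor F is the one of interest: failing to reject its null hypothesis is the outcome the abstract reports for all seven elements.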

  8. ANOVA-Based Approach for Efficient Customer Recognition: Dealing with Common Names

    OpenAIRE

    Saberi , Morteza; Saberi , Zahra

    2015-01-01

    Part 2: Artificial Intelligence for Knowledge Management; International audience; This study proposes an analysis of variance (ANOVA) technique that focuses on the efficient recognition of customers with common names. The continuous improvement of information and communications technologies (ICT) has led customers to have new expectations of, and concerns about, the organizations they deal with. These new expectations create various difficulties for organizations' help desks in meeting their customers' needs....

  9. Behavior Modeling -- Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes revised selected papers from the six International Workshops on Behavior Modelling - Foundations and Applications, BM-FA, which took place annually between 2009 and 2014. The 9 papers presented in this volume were carefully reviewed and selected from a total of 58 papers...

  10. Association models for petroleum applications

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios

    2013-01-01

    Thermodynamics plays an important role in many applications in the petroleum industry, both upstream and downstream, ranging from flow assurance, (enhanced) oil recovery and control of chemicals to meet production and environmental regulations. There are many different applications in the oil & gas...... industry, thus thermodynamic data (phase behaviour, densities, speed of sound, etc) are needed to study a very diverse range of compounds in addition to the petroleum ones (CO2, H2S, water, alcohols, glycols, mercaptans, mercury, asphaltenes, waxes, polymers, electrolytes, biofuels, etc) within a very....... Such association models have been, especially over the last 20 years, proved to be very successful in predicting many thermodynamic properties in the oil & gas industry. They have not so far replaced cubic equations of state, but the results obtained by using these models are very impressive in many cases, e...

  11. Selected Tether Applications Cost Model

    Science.gov (United States)

    Keeley, Michael G.

    1988-01-01

    Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.

  12. ANOVA IN MARKETING RESEARCH OF CONSUMER BEHAVIOR OF DIFFERENT CATEGORIES IN GEORGIAN MARKET

    Directory of Open Access Journals (Sweden)

    NUGZAR TODUA

    2015-03-01

    Consumer behavior research was conducted on bank services and non-alcoholic soft drinks. Based on four different currencies and ten services, analyses were made of bank clients' distribution by bank services and currencies, percentage distribution by bank services, and percentage distribution of bank services by currencies. Similar results were also obtained for ten soft drinks with five characteristics: consumer quantities split by types of soft drinks and attributes; attribute percentages split by types of soft drinks; and percentages of types of soft drinks split by attributes. Using ANOVA on the marketing research outcomes, it is concluded that the unknown population mean scores for the bank clients do not differ from each other, whereas in the soft drinks case the unknown population mean scores vary by characteristic.

  13. Integration of design applications with building models

    DEFF Research Database (Denmark)

    Eastman, C. M.; Jeng, T. S.; Chowdbury, R.

    1997-01-01

    This paper reviews various issues in the integration of applications with a building model... (Truncated.)

  14. The impact of sample non-normality on ANOVA and alternative methods.

    Science.gov (United States)

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
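The Kruskal-Wallis test recommended above replaces the observations by their pooled ranks, so its H statistic depends only on the rank sums of the groups; this is why it is insensitive to the degree of sample normality. A minimal sketch (assumes no tied observations, so the tie correction is omitted):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic; assumes no tied observations."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}   # rank in pooled sample
    n = len(pooled)
    return (12 / (n * (n + 1))
            * sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
            - 3 * (n + 1))
```

Under the null hypothesis, H is approximately chi-squared with k−1 degrees of freedom for k groups, which is how the test is referred for significance.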

  15. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  16. Investigation of flood pattern using ANOVA statistic and remote sensing in Malaysia

    International Nuclear Information System (INIS)

    Ya'acob, Norsuzila; Ismail, Nor Syazwani; Mustafa, Norfazira; Yusof, Azita Laily

    2014-01-01

    A flood is an overflow or inundation from a river or other body of water that causes or threatens damage. In Malaysia there is no formal categorization of floods, but they are often broadly categorized as monsoonal, flash or tidal floods. This project focuses on floods caused by the monsoon. Over the last few years a number of extreme floods occurred and brought great economic impact, with extreme weather patterns as the main contributing factor. In 2010, several districts in the state of Kedah and neighbouring states were hit by floods caused by extreme weather. During this event the rainfall volume was not uniform across regions, and flooding occurred when the amount of water increased rapidly and began to overflow. This is the main motivation for carrying out this project, with data analysed from August until October 2010. The investigation sought possible correlation patterns among parameters related to the flood. ANOVA statistics were used to calculate the percentage contribution of the parameters involved, regression and correlation measured the strength of association among flood-related parameters, and remote sensing imagery was used to validate the accuracy of the calculations. According to the results, the prediction is successful, as the correlation coefficient for the flood event is 0.912, confirmed by a Terra-SAR image of 4 November 2010. Changes in weather patterns thus have a direct impact on flooding.

  17. Chemistry Teachers' Knowledge and Application of Models

    Science.gov (United States)

    Wang, Zuhao; Chi, Shaohui; Hu, Kaiyan; Chen, Wenting

    2014-01-01

    Teachers' knowledge and application of model play an important role in students' development of modeling ability and scientific literacy. In this study, we investigated Chinese chemistry teachers' knowledge and application of models. Data were collected through test questionnaire and analyzed quantitatively and qualitatively. The result indicated…

  18. Application of regression model on stream water quality parameters

    International Nuclear Information System (INIS)

    Suleman, M.; Maqbool, F.; Malik, A.H.; Bhatti, Z.A.

    2012-01-01

    Statistical analysis was conducted to evaluate the effect of solid waste leachate from the open solid waste dumping site of Salhad on stream water quality. Five sites were selected along the stream: two prior to the mixing of leachate with the surface water, one of the leachate itself, and two affected by leachate. Samples were analyzed for pH, water temperature, electrical conductivity (EC), total dissolved solids (TDS), biological oxygen demand (BOD), chemical oxygen demand (COD), dissolved oxygen (DO) and total bacterial load (TBL). In this study, correlation coefficients r among the water quality parameters of the various sites were calculated using the Pearson model, and the average of each correlation between two parameters was also calculated. These show that TDS with EC and pH with BOD have significantly positive r values, while temperature with TDS, temperature with EC, DO with TBL, and DO with COD have negative r values. Single-factor ANOVA at the 5% level of significance shows that EC, TDS, TBL and COD differ significantly among the sites. Through the application of these two statistical approaches, TDS and EC show a strong positive correlation because the ions from the dissolved solids in water influence the ability of that water to conduct an electrical current. These two parameters vary significantly among the five sites, which is further confirmed using linear regression. (author)
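The Pearson correlation coefficient used above to relate pairs of water quality parameters can be computed directly from paired measurements (the data below are illustrative, not the study's):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 (as for TDS against EC) indicates the two parameters rise together; a value near −1 (as for DO against COD) indicates one falls as the other rises.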

  19. Extensions and applications of degradation modeling

    International Nuclear Information System (INIS)

    Hsu, F.; Subudhi, M.; Samanta, P.K.; Vesely, W.E.

    1991-01-01

    Component degradation modeling, being developed to understand the aging process, can have many applications with potential advantages. Previous work has focused on developing the basic concepts and mathematical development of a simple degradation model. Using this simple model, times of degradation and failure occurrences were analyzed for standby components to detect indications of aging and to infer the effectiveness of maintenance in preventing age-related degradations from transforming into failures. Degradation modeling approaches can have broader applications in aging studies, and in this paper the authors discuss some of the extensions and applications of degradation modeling. The extensions and applications discussed are: (a) theoretical developments to study the reliability effects of different maintenance strategies and policies, (b) relating the aging-failure rate to the degradation rate, and (c) application to a continuously operating component.

  20. A Classification of PLC Models and Applications

    NARCIS (Netherlands)

    Mader, Angelika H.; Boel, R.; Stremersch, G.

    In recent years there has been increasing interest in analysing PLC applications with formal methods. The first step to this end is to obtain formal models of PLC applications. Meanwhile, various models for PLCs have already been introduced in the literature. In our paper we discuss several

  1. Homogeneity tests for variances and mean test under heterogeneity conditions in a single way ANOVA method

    International Nuclear Information System (INIS)

    Morales P, J.R.; Avila P, P.

    1996-01-01

    Considering the maximum permissible levels established for the case of oysters, it is found that collecting oysters must be prohibited at the four stations of the El Chijol Channel (Veracruz, Mexico), as well as along the channel itself, because the concentrations of the metals studied exceed these limits. In this case the application of Welch tests was not necessary. For the water hyacinth, the treatment means were unequal in Fe, Cu, Ni, and Zn. This case is more illustrative, since the conclusion was reached through the application of Welch tests to treatments with heterogeneous variances. (Author)
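When variances are heterogeneous, the Welch test mentioned above replaces the pooled-variance comparison with a variance-weighted statistic and the Welch–Satterthwaite degrees of freedom. A minimal two-sample sketch in plain Python (the concentration values are hypothetical, not from the study):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom,
    for comparing two means when the variances are heterogeneous."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical metal concentrations for two treatments with unequal spread.
t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
print(round(t, 3), round(df, 2))  # -1.897 5.88
```

Note that df ≈ 5.88 is well below the pooled-variance value of 8, which is exactly the correction the Welch procedure makes when the variances differ.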

  2. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  3. Applications and extensions of degradation modeling

    International Nuclear Information System (INIS)

    Hsu, F.; Subudhi, M.; Samanta, P.K.; Vesely, W.E.

    1991-01-01

    Component degradation modeling, being developed to understand the aging process, can have many applications with potential advantages. Previous work has focused on developing the basic concepts and mathematical formulation of a simple degradation model. Using this simple model, times of degradation and failure occurrences were analyzed for standby components to detect indications of aging and to infer the effectiveness of maintenance in preventing age-related degradations from transforming into failures. Degradation modeling approaches can have broader applications in aging studies, and in this paper we discuss some of the extensions and applications of degradation modeling. The application and extension of the degradation modeling approaches presented in this paper cover two aspects: (1) application to a continuously operating component, and (2) extension of the approach to analyze the degradation-failure rate relationship. The application of the modeling approach to a continuously operating component (namely, air compressors) shows the usefulness of this approach in studying aging effects and the role of maintenance in this type of component. In this case, aging effects in air compressors are demonstrated by the increase in both the degradation rate and the failure rate; the faster increase in the failure rate compared to the degradation rate shows the ineffectiveness of the existing maintenance practices. The degradation-failure rate relationship was analyzed using data from residual heat removal system pumps. A simple linear model with a time lag between these two parameters was studied; the application in this case showed a time lag of 2 years for degradations to affect failure occurrences. 2 refs
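The "simple linear model with a time lag" can be sketched as an ordinary least-squares fit of failure rate at time t against degradation rate at time t − lag, selecting the lag that minimizes the residual error. The yearly rates below are synthetic illustrations, not the pump data from the paper:

```python
def ols(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def best_lag(deg, fail, max_lag):
    """Pick the lag (in periods) minimizing the residual sum of squares of
    fail[t] ~ a + b * deg[t - lag]."""
    best = None
    for lag in range(max_lag + 1):
        x = deg[:len(deg) - lag]
        y = fail[lag:]
        a, b = ols(x, y)
        sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
        if best is None or sse < best[0]:
            best = (sse, lag, a, b)
    return best[1], best[2], best[3]

# Synthetic yearly rates in which failures follow degradations after 2 years:
deg = [1.0, 2.0, 2.0, 3.0, 4.0, 4.0, 5.0, 6.0]
fail = [0.1, 0.2, 1.5, 2.0, 2.0, 2.5, 3.0, 3.0]  # built as 1 + 0.5*deg[t-2]
lag, a, b = best_lag(deg, fail, 3)
print(lag)  # recovers the 2-period lag planted in the synthetic data
```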

  6. Application of some turbulence models

    International Nuclear Information System (INIS)

    Ushijima, Sho; Kato, Masanobu; Fujimoto, Ken; Moriya, Shoichi

    1985-01-01

    In order to predict numerically the thermal stratification and thermal striping phenomena in pool-type FBRs, it is necessary to simulate the various turbulence properties of flows adequately with good turbulence models. This report presents numerical simulations of two-dimensional isothermal steady flows in a rectangular plenum using three turbulence models: the standard k-ε model and two Reynolds stress models. The agreement of their results with experiment is examined and the properties of the models are compared. The main results are summarized as follows. (1) Concerning the mean velocity distributions, although small differences exist, the results of all three models agree with experimental values. (2) The non-isotropy of the normal Reynolds stress (u'², v'²) distributions is simulated quite well by the two Reynolds stress models, but not adequately by the k-ε model; the shear Reynolds stress (-u'v') distributions of the three models show little difference and agree well with experiments. (3) The balances of the various terms of the Reynolds stress equations are examined. Comparing the results obtained by the analyses with those of previous experiments, both distributions show qualitative agreement. (author)

  7. Photocell modelling for thermophotovoltaic applications

    Energy Technology Data Exchange (ETDEWEB)

    Mayor, J -C; Durisch, W; Grob, B; Panitz, J -C [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    The goal of the modelling described here is the extrapolation of the performance characteristics of solar photocells to TPV working conditions. The model accounts for the higher flux of radiation and for the higher temperatures reached in TPV converters. (author) 4 figs., 1 tab., 2 refs.

  8. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods
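The classical discrete-time theory sketched in Chapter 1 of such texts centers on the transition matrix and its stationary distribution. A minimal pure-Python illustration via power iteration follows; the two-state transition matrix is made up for the example, not taken from the book:

```python
def stationary(P, iters=200):
    """Approximate the stationary distribution of a finite Markov chain
    by repeatedly applying the transition matrix (power iteration)."""
    n = len(P)
    pi = [1.0 / n] * n                  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical two-state chain (each row sums to 1).
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)
print([round(p, 4) for p in pi])  # [0.8333, 0.1667]
```

The result matches the exact solution of pi = pi * P, namely (5/6, 1/6), illustrating the matrix-theory connection the book highlights.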

  9. Mathematical Ship Modeling for Control Applications

    DEFF Research Database (Denmark)

    Perez, Tristan; Blanke, Mogens

    2002-01-01

    In this report, we review the models for describing the motion of a ship in four degrees of freedom suitable for control applications. We present the hydrodynamic models of two ships: a container and a multi-role naval vessel. The models are based on experimental results in the four degrees...

  10. The MVP Model: Overview and Application

    Science.gov (United States)

    Keller, John M.

    2017-01-01

    This chapter contains an overview of the MVP model that is used as a basis for the other chapters in this issue. It also contains a description of key steps in the ARCS-V design process that is derived from the MVP model and a summary of a design-based research study illustrating the application of the ARCS-V model.

  11. The Influencing Factor Analysis on the Performance Evaluation of Assembly Line Balancing Problem Level 1 (SALBP-1) Based on ANOVA Method

    Science.gov (United States)

    Chen, Jie; Hu, Jiangnan

    2017-06-01

    Industry 4.0 and lean production have become the focus of manufacturing, and a current issue is to analyse the performance of assembly line balancing. This study focuses on distinguishing the factors that influence assembly line balancing. The one-way ANOVA method is applied to explore the significance of the distinguished factors, and a regression model is built to identify the key ones. The maximal task time (tmax), the quantity of tasks (n), and the degree of convergence of the precedence graph (conv) are critical for the performance of assembly line balancing. These conclusions will benefit lean production in manufacturing.

  12. Dynamic programming models and applications

    CERN Document Server

    Denardo, Eric V

    2003-01-01

    Introduction to sequential decision processes covers use of dynamic programming in studying models of resource allocation, methods for approximating solutions of control problems in continuous time, production control, more. 1982 edition.

  13. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of the drug development in the pharmaceutical industry and continued research in statistical methodology within...... or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug is a broader area since drugs may affect...... for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically the Markov models considered for sleep states are closely related to the PK models based on SDEs as both models share the Markov property. When the models...

  14. Constrained statistical inference : sample-size tables for ANOVA and regression

    NARCIS (Netherlands)

    Vanbrabant, Leonard; Van De Schoot, Rens; Rosseel, Yves

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3} and

  15. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics and describes the technologies with an emphasis on how they work and their key benefits.

  16. HTGR Application Economic Model Users' Manual

    International Nuclear Information System (INIS)

    Gandrik, A.M.

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR) or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This users' manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the HTGR Application Economic Model. This model was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
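The core engineering-economics calculation the manual describes, solving for the IRR given a cash-flow stream, can be sketched as a root-find on the net present value. The plant cash flows below are invented round numbers for illustration, not outputs of the HTGR model:

```python
def npv(rate, flows):
    """Net present value of yearly cash flows, with flows[0] at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-9):
    """Internal rate of return by bisection; assumes NPV changes sign
    exactly once on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid          # still profitable: the IRR is higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical plant: capital cost of 100 now, net revenue of 60 for 2 years.
flows = [-100.0, 60.0, 60.0]
print(round(irr(flows), 4))  # 0.1307, i.e. a 13.07% IRR
```

Running the same routine at a market selling price, versus solving prices for a target IRR, mirrors the model's two operating modes.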

  17. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  18. Registry of EPA Applications, Models, and Databases

    Data.gov (United States)

    U.S. Environmental Protection Agency — READ is EPA's authoritative source for information about Agency information resources, including applications/systems, datasets and models. READ is one component of...

  19. Formal models, languages and applications

    CERN Document Server

    Rangarajan, K; Mukund, M

    2006-01-01

    A collection of articles by leading experts in theoretical computer science, this volume commemorates the 75th birthday of Professor Rani Siromoney, one of the pioneers in the field in India. The articles span the vast range of areas that Professor Siromoney has worked in or influenced, including grammar systems, picture languages and new models of computation. Sample Chapter(s). Chapter 1: Finite Array Automata and Regular Array Grammars (150 KB). Contents: Finite Array Automata and Regular Array Grammars (A Atanasiu et al.); Hexagonal Contextual Array P Systems (K S Dersanambika et al.); Con

  20. Applications of computer modeling to fusion research

    International Nuclear Information System (INIS)

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: Development and application of gyrokinetic particle codes to tokamak transport, development of techniques to take advantage of parallel computers; model dynamo and bootstrap current drive; and in general maintain our broad-based program in basic plasma physics and computer modeling

  1. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

    Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.
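The critical-path idea underlying this kind of analysis is the longest-duration path through a task dependency graph, which bounds parallel run time. A toy sketch (the task names and durations are hypothetical; real tools operate on traced MPI tasks, not hand-written dictionaries):

```python
def critical_path(durations, deps):
    """Longest (critical) path length through a task DAG.
    durations: task -> time; deps: task -> list of prerequisite tasks."""
    memo = {}
    def finish(task):                       # earliest finish time of `task`
        if task not in memo:
            start = max((finish(p) for p in deps.get(task, [])), default=0.0)
            memo[task] = start + durations[task]
        return memo[task]
    return max(finish(t) for t in durations)

# Hypothetical task graph: A feeds B and C, and both feed D.
durations = {"A": 3.0, "B": 2.0, "C": 5.0, "D": 4.0}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(critical_path(durations, deps))  # 12.0, via A -> C -> D
```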

  2. Vacation queueing models theory and applications

    CERN Document Server

    Tian, Naishuo

    2006-01-01

    A classical queueing model consists of three parts - arrival process, service process, and queue discipline. However, a vacation queueing model has an additional part - the vacation process which is governed by a vacation policy - that can be characterized by three aspects: 1) vacation start-up rule; 2) vacation termination rule, and 3) vacation duration distribution. Hence, vacation queueing models are an extension of classical queueing theory. Vacation Queueing Models: Theory and Applications discusses systematically and in detail the many variations of vacation policy. By allowing servers to take vacations makes the queueing models more realistic and flexible in studying real-world waiting line systems. Integrated in the book's discussion are a variety of typical vacation model applications that include call centers with multi-task employees, customized manufacturing, telecommunication networks, maintenance activities, etc. Finally, contents are presented in a "theorem and proof" format and it is invaluabl...

  3. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
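The Erlangian queueing analysis used for each layer can be illustrated with the textbook M/M/c Erlang C formula, which yields the probability a request must queue and the mean waiting delay. The arrival and service rates below are hypothetical placeholders, not figures from the paper:

```python
import math

def erlang_c(lam, mu, c):
    """Erlang C queueing probability and mean waiting time for an M/M/c
    queue with arrival rate lam, per-server service rate mu, c servers."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-server utilisation
    assert rho < 1, "queue is unstable"
    top = (a ** c / math.factorial(c)) / (1 - rho)
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    p_wait = top / bottom             # probability an arrival must queue
    wq = p_wait / (c * mu - lam)      # mean time spent waiting in queue
    return p_wait, wq

# Hypothetical firewall layer: 0.5 requests/s arriving, 1 request/s served.
p_wait, wq = erlang_c(0.5, 1.0, 1)
print(round(p_wait, 3), round(wq, 3))  # 0.5 1.0
```

Sweeping `c` across the layers under a total resource budget is, in spirit, the allocation search the paper performs.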

  5. Degenerate RFID Channel Modeling for Positioning Applications

    Directory of Open Access Journals (Sweden)

    A. Povalac

    2012-12-01

    Full Text Available This paper introduces the theory of channel modeling for positioning applications in UHF RFID. It explains basic parameters for channel characterization from both the narrowband and wideband point of view. More details are given about ranging and direction finding. Finally, several positioning scenarios are analyzed with developed channel models. All the described models use a degenerate channel, i.e. combined signal propagation from the transmitter to the tag and from the tag to the receiver.
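In the narrowband free-space case, a degenerate backscatter channel's loss is the product of the forward (transmitter-to-tag) and backward (tag-to-receiver) link losses, i.e. their sum in dB. A minimal sketch, assuming free-space propagation and ignoring antenna gains and tag modulation loss (the 915 MHz frequency and 3 m range are arbitrary example values):

```python
import math

def one_way_fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB for one hop (Friis)."""
    lam = 3e8 / freq_hz               # wavelength, assuming c = 3e8 m/s
    return 20 * math.log10(4 * math.pi * dist_m / lam)

def degenerate_channel_loss_db(freq_hz, d_tx_tag, d_tag_rx):
    """Loss of the degenerate backscatter channel: the product of the
    transmitter->tag and tag->receiver links, hence a sum in dB."""
    return one_way_fspl_db(freq_hz, d_tx_tag) + one_way_fspl_db(freq_hz, d_tag_rx)

# Hypothetical monostatic UHF RFID reader at 915 MHz with the tag 3 m away.
loss = degenerate_channel_loss_db(915e6, 3.0, 3.0)
print(round(loss, 1))  # 82.4 dB round trip
```

The steep round-trip loss (twice the one-way dB figure) is why ranging and positioning with passive tags is power-limited at modest distances.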

  6. Modelling of Tape Casting for Ceramic Applications

    DEFF Research Database (Denmark)

    Jabbari, Masoud

    was increased by improving the steady state model with a quasi-steady state analytical model. In order to control the most important process parameter, tape thickness, the two-doctor blade configuration was also modeled analytically. The model was developed to control the tape thickness based on the machine...... for magnetic refrigeration applications. Numerical models were developed to track the migration of the particles inside the ceramic slurry. The results showed the presence of some areas inside the ceramic in which the concentration of the particles is higher compared to other parts, creating the resulting...

  7. Modeling Medical Services with Mobile Health Applications

    Directory of Open Access Journals (Sweden)

    Zhenfei Wang

    2018-01-01

    Full Text Available The rapid development of mobile health technology (m-Health) provides unprecedented opportunities for improving health services. As the bridge between doctors and patients, mobile health applications enable patients to communicate with doctors through their smartphones, which is becoming more and more popular among people. To evaluate the influence of m-Health applications on the medical service market, we propose a medical service equilibrium model. The model can balance the supply of doctors and the demand of patients and reflect possible options for both doctors and patients with or without m-Health applications in the medical service market. In the meantime, we analyze the behavior of patients and the activities of doctors to minimize patients’ full costs of healthcare and doctors’ futility. We then provide a resolution algorithm through mathematical reasoning. Lastly, based on an artificially generated dataset, experiments are conducted to evaluate the medical services of m-Health applications.

  8. Optimum Combination and Effect Analysis of Piezoresistor Dimensions in Micro Piezoresistive Pressure Sensor Using Design of Experiments and ANOVA: a Taguchi Approach

    Directory of Open Access Journals (Sweden)

    Kirankumar B. Balavalad

    2017-04-01

    Full Text Available Piezoresistive (PZR) pressure sensors have gained importance because of their robust construction, high sensitivity and good linearity. The conventional PZR pressure sensor consists of four piezoresistors placed on a diaphragm and connected in the form of a Wheatstone bridge. These sensors convert the stress applied to them into a change in resistance, which is quantified into a voltage using the Wheatstone bridge mechanism. It is observed from the literature that the dimensions of the piezoresistors are crucial to the performance of a piezoresistive pressure sensor. This paper presents a novel mechanism for finding the best combinations and the effect of the individual piezoresistor dimensions, viz. length, width and thickness, using DoE and the ANOVA (analysis of variance) method, following a Taguchi experimentation approach. The paper presents a unique method to find the optimum combination of piezoresistor dimensions and also clearly illustrates the effect of the dimensions on the output of the sensor. The optimum combinations and the output response of the sensor are predicted using DoE, and a validation simulation is performed. The result of the validation simulation is compared with the predicted value of the sensor response, V: the predicted value of V is 1.074 V, and the validation simulation gave a response for V of 1.19 V. This validates that the model (DoE and ANOVA) is adequate in describing V in terms of the variables defined.
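The first step of such a Taguchi/DoE analysis is computing main effects: the average response at each level of each factor across the orthogonal array. A sketch on a hypothetical two-factor L4 array (the run layout and voltages are invented; they are not the sensor data from the paper):

```python
def main_effects(levels, response):
    """Average response at each level of each factor in a designed
    experiment. levels[i] lists the factor levels used in run i."""
    n_factors = len(levels[0])
    effects = []
    for f in range(n_factors):
        per_level = {}
        for run, y in zip(levels, response):
            per_level.setdefault(run[f], []).append(y)
        effects.append({lv: sum(ys) / len(ys) for lv, ys in per_level.items()})
    return effects

# Hypothetical L4 array: factors = (length, width), two levels each;
# response = simulated sensor output voltage in volts.
runs = [(1, 1), (1, 2), (2, 1), (2, 2)]
volts = [1.00, 1.10, 1.20, 1.30]
for name, eff in zip(("length", "width"), main_effects(runs, volts)):
    print(name, {lv: round(v, 2) for lv, v in eff.items()})
```

Here the length effect (1.25 − 1.05 = 0.20 V between levels) dominates the width effect (0.10 V), the kind of ranking ANOVA then tests for significance.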

  9. Artificial Immune Networks: Models and Applications

    Directory of Open Access Journals (Sweden)

    Xian Shen

    2008-06-01

    Full Text Available Artificial Immune Systems (AIS), which are inspired by the natural immune system, have been applied to solving complex computational problems in classification, pattern recognition, and optimization. In this paper, the theory of the natural immune system is first briefly introduced. Next, we compare some well-known AIS and their applications. Several representative artificial immune network models are also discussed. Moreover, we demonstrate the applications of artificial immune networks in various engineering fields.

  10. A model for assessment of telemedicine applications

    DEFF Research Database (Denmark)

    Kidholm, Kristian; Ekeland, Anne Granstrøm; Jensen, Lise Kvistgaard

    2012-01-01

    Telemedicine applications could potentially solve many of the challenges faced by the healthcare sectors in Europe. However, a framework for assessment of these technologies is needed by decision makers to assist them in choosing the most efficient and cost-effective technologies. Therefore in 2009 the European Commission initiated the development of a framework for assessing telemedicine applications, based on the users' need for information for decision making. This article presents the Model for ASsessment of Telemedicine applications (MAST) developed in this study.

  11. Application Note: Power Grid Modeling With Xyce.

    Energy Technology Data Exchange (ETDEWEB)

    Sholander, Peter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce(TM) Parallel Electronic Simulator developed at Sandia National Labs. This application note provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples, such as the IEEE 14-bus test case.

  12. Sparse Multivariate Modeling: Priors and Applications

    DEFF Research Database (Denmark)

    Henao, Ricardo

    This thesis presents a collection of statistical models that attempt to take advantage of every piece of prior knowledge available to provide the models with as much structure as possible. The main motivation for introducing these models is interpretability since in practice we want to be able...... a general yet self-contained description of every model in terms of generative assumptions, interpretability goals, probabilistic formulation and target applications. Case studies, benchmark results and practical details are also provided as appendices published elsewhere, containing reprints of peer...

  13. Crop modeling applications in agricultural water management

    Science.gov (United States)

    Kisekka, Isaya; DeJonge, Kendall C.; Ma, Liwang; Paz, Joel; Douglas-Mankin, Kyle R.

    2017-01-01

    This article introduces the fourteen articles that comprise the “Crop Modeling and Decision Support for Optimizing Use of Limited Water” collection. This collection was developed from a special session on crop modeling applications in agricultural water management held at the 2016 ASABE Annual International Meeting (AIM) in Orlando, Florida. In addition, other authors who were not able to attend the 2016 ASABE AIM were also invited to submit papers. The articles summarized in this introductory article demonstrate a wide array of applications in which crop models can be used to optimize agricultural water management. The following section titles indicate the topics covered in this collection: (1) evapotranspiration modeling (one article), (2) model development and parameterization (two articles), (3) application of crop models for irrigation scheduling (five articles), (4) coordinated water and nutrient management (one article), (5) soil water management (two articles), (6) risk assessment of water-limited irrigation management (one article), and (7) regional assessments of climate impact (two articles). Changing weather and climate, increasing population, and groundwater depletion will continue to stimulate innovations in agricultural water management, and crop models will play an important role in helping to optimize water use in agriculture.
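Evapotranspiration modeling, the first topic in the collection, often starts from simple temperature-based reference equations. A sketch of one widely used form, the Hargreaves equation, with extraterrestrial radiation Ra already converted to mm/day of evaporation equivalent; the day's temperatures and Ra value below are hypothetical:

```python
import math

def hargreaves_et0(t_mean, t_max, t_min, ra_mm_day):
    """Reference evapotranspiration (mm/day) from the Hargreaves equation,
    assuming Ra is supplied in evaporation-equivalent mm/day."""
    return 0.0023 * (t_mean + 17.8) * math.sqrt(t_max - t_min) * ra_mm_day

# Hypothetical summer day: Ra = 15 mm/day equivalent, 18-30 degC range.
et0 = hargreaves_et0(24.0, 30.0, 18.0, 15.0)
print(round(et0, 2))  # about 5.0 mm/day
```

Daily ET0 estimates like this, scaled by crop coefficients, feed directly into the irrigation-scheduling applications the collection surveys.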

  14. The influence of speed abilities and technical skills in early adolescence on adult success in soccer: A long-term prospective analysis using ANOVA and SEM approaches

    Science.gov (United States)

    2017-01-01

    Several talent development programs in youth soccer have implemented motor diagnostics measuring performance factors. However, the predictive value of such tests for adult success is a controversial topic in talent research. This prospective cohort study evaluated the long-term predictive value of 1) motor tests and 2) players’ speed abilities (SA) and technical skills (TS) in early adolescence. The sample consisted of 14,178 U12 players from the German talent development program. Five tests (sprint, agility, dribbling, ball control, shooting) were conducted and players’ height, weight as well as relative age were assessed at nationwide diagnostics between 2004 and 2006. In the 2014/15 season, the players were then categorized as professional (n = 89), semi-professional (n = 913), or non-professional players (n = 13,176), indicating their adult performance level (APL). The motor tests’ prognostic relevance was determined using ANOVAs. Players’ future success was predicted by a logistic regression threshold model. This structural equation model comprised a measurement model with the motor tests and two correlated latent factors, SA and TS, with simultaneous consideration for the manifest covariates height, weight and relative age. Each motor predictor and anthropometric characteristic discriminated significantly between the APL (p < .001; η2 ≤ .02). The threshold model significantly predicted the APL (R2 = 24.8%), and in early adolescence the factor TS (p < .001) seems to have a stronger effect on adult performance than SA (p < .05). Both approaches (ANOVA, SEM) verified the diagnostics’ predictive validity over a long-term period (≈ 9 years). However, because of the limited effect sizes, the motor tests’ prognostic relevance remains ambiguous. A challenge for future research lies in the integration of different (e.g., person-oriented or multilevel) multivariate approaches that expand beyond the “traditional” topic of single tests’ predictive
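    A minimal sketch of the first approach named above, a one-way ANOVA with an eta-squared effect size, applied to invented sprint times for three performance levels (the values and group sizes are illustrative only, not the study's data):

```python
# Sketch: one-way ANOVA with eta-squared effect size, the kind of analysis
# used to compare motor-test scores across adult performance levels (APL).
# The data below are invented illustration values, not the study's data.

def one_way_anova(groups):
    """Return (F, eta_squared) for a list of samples."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)  # variance explained
    return f_stat, eta_sq

# Hypothetical 30m sprint times (s) for non-, semi- and professional players
non_pro = [4.9, 5.0, 5.1, 5.2, 4.8]
semi_pro = [4.7, 4.8, 4.9, 4.7, 4.8]
pro = [4.5, 4.6, 4.6, 4.7, 4.5]
F, eta2 = one_way_anova([non_pro, semi_pro, pro])
```

    In the study itself the effect sizes were small (η² ≤ .02); the invented data here exaggerate the group separation so the mechanics are visible.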

  15. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns.

    Directory of Open Access Journals (Sweden)

    Mohammad Manir Hossain Mollah

    Full Text Available Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and proposed) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large
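    The described β-weight function can be sketched as follows; the Gaussian form and the scaling are assumptions for illustration, not the paper's exact minimum β-divergence estimator:

```python
# Sketch of a beta-weight function of the kind described above: weights in
# (0, 1], close to 1 for typical expressions and close to 0 for outliers.
# The Gaussian form below is an assumption for illustration only.
import math

def beta_weight(x, mu, sigma, beta=0.2):
    """Down-weight observations far from mu; beta controls robustness."""
    return math.exp(-0.5 * beta * ((x - mu) / sigma) ** 2)

typical = beta_weight(10.2, mu=10.0, sigma=1.0)   # near 1
outlier = beta_weight(25.0, mu=10.0, sigma=1.0)   # near 0
```

    Comparing each observed weight against a cut-off derived from the weight distribution then flags outlying expressions, as the abstract describes.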

  16. Deformation Models Tracking, Animation and Applications

    CERN Document Server

    Torres, Arnau; Gómez, Javier

    2013-01-01

    The computational modelling of deformations has been actively studied for the last thirty years. This is mainly due to its large range of applications that include computer animation, medical imaging, shape estimation, face deformation as well as other parts of the human body, and object tracking. In addition, these advances have been supported by the evolution of computer processing capabilities, enabling realism in a more sophisticated way. This book encompasses relevant works of expert researchers in the field of deformation models and their applications.  The book is divided into two main parts. The first part presents recent object deformation techniques from the point of view of computer graphics and computer animation. The second part of this book presents six works that study deformations from a computer vision point of view with a common characteristic: deformations are applied in real world applications. The primary audience for this work is researchers from different multidisciplinary fields, s...

  17. Ionospheric Modeling for Precise GNSS Applications

    NARCIS (Netherlands)

    Memarzadeh, Y.

    2009-01-01

    The main objective of this thesis is to develop a procedure for modeling and predicting ionospheric Total Electron Content (TEC) for high precision differential GNSS applications. As the ionosphere is a highly dynamic medium, we believe that to have a reliable procedure it is necessary to transfer

  18. Modelling Safe Interface Interactions in Web Applications

    Science.gov (United States)

    Brambilla, Marco; Cabot, Jordi; Grossniklaus, Michael

    Current Web applications embed sophisticated user interfaces and business logic. The original interaction paradigm of the Web based on static content pages that are browsed by hyperlinks is, therefore, not valid anymore. In this paper, we advocate a paradigm shift for browsers and Web applications, that improves the management of user interaction and browsing history. Pages are replaced by States as basic navigation nodes, and Back/Forward navigation along the browsing history is replaced by a full-fledged interactive application paradigm, supporting transactions at the interface level and featuring Undo/Redo capabilities. This new paradigm offers a safer and more precise interaction model, protecting the user from unexpected behaviours of the applications and the browser.
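    The State-based navigation with Undo/Redo described above can be sketched as a minimal two-stack history (class and state names are hypothetical; the paper's model additionally covers transactions and safety properties):

```python
# Sketch: application States as navigation nodes, with true Undo/Redo
# replacing Back/Forward. A minimal two-stack history; names are invented.
class History:
    def __init__(self, initial):
        self.current = initial
        self._undo, self._redo = [], []

    def visit(self, state):
        """Navigate to a new state (a user action)."""
        self._undo.append(self.current)
        self.current = state
        self._redo.clear()          # a new action invalidates redo history

    def undo(self):
        if self._undo:
            self._redo.append(self.current)
            self.current = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.current)
            self.current = self._redo.pop()

h = History("cart")
h.visit("checkout")
h.undo()            # back to "cart"
h.redo()            # forward to "checkout" again
```

    Unlike plain Back/Forward over pages, undo here can be bound to reversing application-level effects attached to each state transition.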

  19. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  20. GSTARS computer models and their applications, Part II: Applications

    Science.gov (United States)

    Simoes, F.J.M.; Yang, C.T.

    2008-01-01

    In part 1 of this two-paper series, a brief summary of the basic concepts and theories used in developing the Generalized Stream Tube model for Alluvial River Simulation (GSTARS) computer models was presented. Part 2 provides examples that illustrate some of the capabilities of the GSTARS models and how they can be applied to solve a wide range of river and reservoir sedimentation problems. Laboratory and field case studies are used and the examples show representative applications of the earlier and of the more recent versions of GSTARS. Some of the more recent capabilities implemented in GSTARS3, one of the latest versions of the series, are also discussed here with more detail. ?? 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  1. Absolute variation of the mechanical characteristics of halloysite reinforced polyurethane nanocomposites complemented by Taguchi and ANOVA approaches

    Science.gov (United States)

    Gaaz, Tayser Sumer; Sulong, Abu Bakar; Kadhum, Abdul Amir H.; Nassir, Mohamed H.; Al-Amiery, Ahmed A.

    The variation of the results of the mechanical properties of halloysite nanotubes (HNTs) reinforced thermoplastic polyurethane (TPU) at different HNTs loadings was implemented as a tool for analysis. The preparation of HNTs-TPU nanocomposites was performed under four controlled parameters of mixing temperature, mixing speed, mixing time, and HNTs loading at three levels each to satisfy Taguchi method orthogonal array L9 aiming to optimize these parameters for the best measurements of tensile strength, Young's modulus, and tensile strain (known as responses). The maximum variation of the experimental results for each response was determined and analysed based on the optimized results predicted by Taguchi method and ANOVA. It was found that the maximum absolute variations of the three mentioned responses are 69%, 352%, and 126%, respectively. The analysis has shown that the preparation of the optimized tensile strength requires 1 wt.% HNTs loading (excluding 2 wt.% and 3 wt.%), mixing temperature of 190 °C (excluding 200 °C and 210 °C), and mixing speed of 30 rpm (excluding 40 rpm and 50 rpm). In addition, the analysis has determined that the mixing time at 20 min has no effect on the preparation. The mentioned analysis was fortified by ANOVA, images of FESEM, and DSC results. Seemingly, the agglomeration and distribution of HNTs in the nanocomposite play an important role in the process. The outcome of the analysis could be considered as a very important step towards the reliability of Taguchi method.
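    The Taguchi L9 design and the main-effects comparison behind the reported optimal levels can be sketched as follows; the response values are invented for illustration and the factor-to-column assignment is an assumption:

```python
# Sketch: a standard L9(3^4) orthogonal array and a main-effects summary,
# the kind of analysis behind the optimized parameter levels reported above.
# The response values are invented, not the paper's measurements.
L9 = [  # columns: temperature, speed, time, HNTs loading (levels 0..2)
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
response = [30.1, 32.4, 28.7, 33.0, 35.2, 31.8, 29.5, 30.9, 27.4]  # e.g. MPa

def main_effect(factor):
    """Mean response at each level of one factor (column index)."""
    means = []
    for level in range(3):
        vals = [r for row, r in zip(L9, response) if row[factor] == level]
        means.append(sum(vals) / len(vals))
    return means

temp_means = main_effect(0)  # mean tensile strength per temperature level
best_temp_level = temp_means.index(max(temp_means))
```

    Each column of the array contains every level three times and every pair of columns contains each level combination once, which is what lets nine runs estimate the main effects of four three-level factors; ANOVA on the same table then apportions the variation among the factors.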

  2. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  3. PRACTICAL APPLICATION OF A MODEL FOR ASSESSING

    Directory of Open Access Journals (Sweden)

    Petr NOVOTNÝ

    2015-12-01

    Full Text Available Rail transport is an important sub-sector of transport infrastructure. Disruption of its operation due to emergencies can result in a reduction in functional parameters of provided services with consequent impacts on society. Identification of critical elements of this system enables its timely and effective protection. On that ground, the article presents a draft model for assessing the criticality of railway infrastructure elements. This model uses a systems approach and multicriteria semi-quantitative analysis with weighted criteria for calculating the criticality of individual elements of the railway infrastructure. In the conclusion, it presents a practical application of the proposed model including the discussion of results.

  4. Multilevel modelling: Beyond the basic applications.

    Science.gov (United States)

    Wright, Daniel B; London, Kamala

    2009-05-01

    Over the last 30 years statistical algorithms have been developed to analyse datasets that have a hierarchical/multilevel structure. Particularly within developmental and educational psychology these techniques have become common where the sample has an obvious hierarchical structure, like pupils nested within a classroom. We describe two areas beyond the basic applications of multilevel modelling that are important to psychology: modelling the covariance structure in longitudinal designs and using generalized linear multilevel modelling as an alternative to methods from signal detection theory (SDT). Detailed code for all analyses is described using packages for the freeware R.
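    The case for multilevel modelling can be illustrated by the intraclass correlation (ICC), the share of outcome variance lying between clusters such as classrooms rather than within them; the one-way random-effects estimator below uses invented, balanced data:

```python
# Sketch: the intraclass correlation (ICC) that motivates multilevel
# modelling. One-way random-effects ANOVA estimator for balanced clusters;
# the classroom data are invented for illustration.
def icc_oneway(groups):
    k = len(groups)
    n = len(groups[0])                      # assumes balanced clusters
    grand = sum(sum(g) for g in groups) / (k * n)
    ms_between = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum((x - sum(g) / n) ** 2
                    for g in groups for x in g) / (k * (n - 1))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

classrooms = [[12, 14, 13, 15], [18, 20, 19, 17], [9, 11, 10, 12]]
icc = icc_oneway(classrooms)  # a high value means clustering matters
```

    When the ICC is non-negligible, single-level regression understates standard errors, which is exactly the situation the multilevel machinery (and its R implementations mentioned above) is built for.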

  5. Mixed models theory and applications with R

    CERN Document Server

    Demidenko, Eugene

    2013-01-01

    Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g

  6. Teaching renewable energy using online PBL in investigating its effect on behaviour towards energy conservation among Malaysian students: ANOVA repeated measures approach

    Science.gov (United States)

    Nordin, Norfarah; Samsudin, Mohd Ali; Hadi Harun, Abdul

    2017-01-01

    This research aimed to investigate whether an online problem based learning (PBL) approach to teaching the renewable energy topic improves students’ behaviour towards energy conservation. A renewable energy online problem based learning (REePBaL) instruction package was developed based on the theory of constructivism and adaptation of the online learning model. This study employed a single group quasi-experimental design to ascertain the change in students’ behaviour towards energy conservation after the intervention. The study involved 48 secondary school students in a Malaysian public school. The ANOVA Repeated Measures technique was employed in order to compare scores of students’ behaviour towards energy conservation before and after the intervention. Based on the findings, students’ behaviour towards energy conservation improved after the intervention.
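    A minimal sketch of the repeated-measures comparison described above, computing the within-subject F statistic for invented pre/post scores (two time points, one group):

```python
# Sketch: one-way repeated-measures ANOVA for pre/post scores of the same
# students, the design described above. Scores are invented illustrations.
def rm_anova(data):
    """data: one score list per subject, one entry per time point.
    Returns the F statistic for the within-subject time effect."""
    n = len(data)            # subjects
    k = len(data[0])         # time points
    grand = sum(sum(row) for row in data) / (n * k)
    time_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]
    ss_time = n * sum((m - grand) ** 2 for m in time_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_time - ss_subj   # subject variance removed
    df_time, df_error = k - 1, (n - 1) * (k - 1)
    return (ss_time / df_time) / (ss_error / df_error)

scores = [[55, 70], [60, 72], [48, 65], [52, 63], [58, 71]]  # [pre, post]
F_time = rm_anova(scores)
```

    Removing the between-subject sum of squares is what distinguishes this from an independent-groups ANOVA: each student serves as their own control, so only within-student variation enters the error term. With two time points the F statistic equals the square of the paired t statistic.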

  7. Reputation based security model for android applications

    OpenAIRE

    Tesfay, Welderufael Berhane; Booth, Todd; Andersson, Karl

    2012-01-01

    The market for smart phones has been booming in the past few years. There are now over 400,000 applications on the Android market. Over 10 billion Android applications have been downloaded from the Android market. Due to the Android popularity, there are now a large number of malicious vendors targeting the platform. Many honest end users are being successfully hacked on a regular basis. In this work, a cloud based reputation security model has been proposed as a solution which greatly mitiga...

  8. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  9. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2017-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  10. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  11. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  12. Absolute variation of the mechanical characteristics of halloysite reinforced polyurethane nanocomposites complemented by Taguchi and ANOVA approaches

    Directory of Open Access Journals (Sweden)

    Tayser Sumer Gaaz

    Full Text Available The variation of the results of the mechanical properties of halloysite nanotubes (HNTs) reinforced thermoplastic polyurethane (TPU) at different HNTs loadings was implemented as a tool for analysis. The preparation of HNTs-TPU nanocomposites was performed under four controlled parameters of mixing temperature, mixing speed, mixing time, and HNTs loading at three levels each to satisfy Taguchi method orthogonal array L9 aiming to optimize these parameters for the best measurements of tensile strength, Young’s modulus, and tensile strain (known as responses). The maximum variation of the experimental results for each response was determined and analysed based on the optimized results predicted by Taguchi method and ANOVA. It was found that the maximum absolute variations of the three mentioned responses are 69%, 352%, and 126%, respectively. The analysis has shown that the preparation of the optimized tensile strength requires 1 wt.% HNTs loading (excluding 2 wt.% and 3 wt.%), mixing temperature of 190 °C (excluding 200 °C and 210 °C), and mixing speed of 30 rpm (excluding 40 rpm and 50 rpm). In addition, the analysis has determined that the mixing time at 20 min has no effect on the preparation. The mentioned analysis was fortified by ANOVA, images of FESEM, and DSC results. Seemingly, the agglomeration and distribution of HNTs in the nanocomposite play an important role in the process. The outcome of the analysis could be considered as a very important step towards the reliability of Taguchi method. Keywords: Nanocomposite, Design-of-experiment, Taguchi optimization method, Mechanical properties

  13. Application of photometric models to asteroids

    International Nuclear Information System (INIS)

    Bowell, E.; Dominque, D.; Hapke, B.

    1989-01-01

    The way an asteroid or other atmosphereless solar system body varies in brightness in response to changing illumination and viewing geometry depends in a very complicated way on the physical and optical properties of its surface and on its overall shape. The authors summarize the formulation and application of recent photometric models by Hapke and by Lumme and Bowell. In both models, the brightness of a rough and porous surface is parametrized in terms of the optical properties of individual particles, by shadowing between particles, and by the way in which light is scattered among collections of particles. Both models succeed in their goal of fitting the observed photometric behavior of a wide variety of bodies, but neither has led to a very complete understanding of the properties of asteroid regoliths, primarily because in most cases the parameters in the present models cannot be adequately constrained by observations of integral brightness alone over a restricted range of phase angles

  14. Models and applications of the UEDGE code

    International Nuclear Information System (INIS)

    Rensink, M.E.; Knoll, D.A.; Porter, G.D.; Rognlien, T.D.; Smith, G.R.; Wising, F.

    1996-09-01

    The transport of particles and energy from the core of a tokamak to nearby material surfaces is an important problem for understanding present experiments and for designing reactor-grade devices. A number of fluid transport codes have been developed to model the plasma in the edge and scrape-off layer (SOL) regions. This report will focus on recent model improvements and illustrative results from the UEDGE code. Some geometric and mesh considerations are introduced, followed by a general description of the plasma and neutral fluid models. A few comments on computational issues are given and then two important applications are illustrated concerning benchmarking and the ITER radiative divertor. Finally, we report on some recent work to improve the models in UEDGE by coupling to a Monte Carlo neutrals code and by utilizing an adaptive grid

  15. Chapter 5: Summary of model application

    International Nuclear Information System (INIS)

    1995-01-01

    This chapter provides a brief summary of the model applications described in Volume III of the Final Report. These applications addressed the selected water management regimes; ground water flow regimes; agriculture; ground water quality; hydrodynamics, sediment transport and water quality in the Danube; hydrodynamics, sediment transport and water quality in the river branch system; hydrodynamics, sediment transport and water quality in the Hrusov reservoir; and the ecology of this Danube area

  16. Applications of model theory to functional analysis

    CERN Document Server

    Iovino, Jose

    2014-01-01

    During the last two decades, methods that originated within mathematical logic, particularly set theory and model theory, have exhibited powerful applications to Banach space theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the

  17. Wave model downscaling for coastal applications

    Science.gov (United States)

    Valchev, Nikolay; Davidan, Georgi; Trifonova, Ekaterina; Andreeva, Nataliya

    2010-05-01

    Downscaling is a suitable technique for obtaining high-resolution estimates from relatively coarse-resolution global models. Dynamical and statistical downscaling has been applied to multidecadal simulations of ocean waves. Even if large-scale variability can be plausibly estimated from these simulations, their value for small-scale applications such as the design of coastal protection structures and coastal risk assessment is limited due to their relatively coarse spatial and temporal resolutions. Another advantage of high-resolution wave modeling is that it accounts for shallow water effects. Therefore, it can be used for both wave forecasting at specific coastal locations and engineering applications that require knowledge about extreme wave statistics at or near coastal facilities. In the present study, downscaling is applied to both ECMWF and NCEP/NCAR global reanalyses of atmospheric pressure over the Black Sea with 2.5 degrees spatial resolution. A simplified regional atmospheric model is employed for calculation of the surface wind field at 0.5 degrees resolution that serves as forcing for the wave models. Further, a high-resolution suite of nested WAM/SWAN wave models is applied for spatial downscaling. It aims at resolving the wave conditions in a limited area in close proximity to the shore. The pilot site is located in the northern part of the Bulgarian Black Sea shore. The system involves the WAM wave model adapted for basin-scale simulation at 0.5 degrees spatial resolution. The WAM output for significant wave height, mean wave period and mean angle of wave approach is used as external boundary conditions for the SWAN wave model, which is set up for the western Black Sea shelf at 4 km resolution. The same model, set up at about 400 m resolution, is nested within the first SWAN run. In this case the SWAN 2D spectral output provides boundary conditions for the high-resolution model run. The models are implemented for a

  18. Neural Network Based Models for Fusion Applications

    Science.gov (United States)

    Meneghini, Orso; Tema Biwole, Arsene; Luda, Teobaldo; Zywicki, Bailey; Rea, Cristina; Smith, Sterling; Snyder, Phil; Belli, Emily; Staebler, Gary; Canty, Jeff

    2017-10-01

    Whole device modeling, engineering design, experimental planning and control applications demand models that are simultaneously physically accurate and fast. This poster reports on the ongoing effort towards the development and validation of a series of models that leverage neural-network (NN) multidimensional regression techniques to accelerate some of the most mission-critical first-principles models for the fusion community, such as: the EPED workflow for prediction of the H-Mode and Super H-Mode pedestal structure; the TGLF and NEO models for the prediction of the turbulent and neoclassical particle, energy and momentum fluxes; and the NEO model for the drift-kinetic solution of the bootstrap current. We also applied NNs to DIII-D experimental data for disruption prediction and for quantifying the effect of RMPs on the pedestal and ELMs. All of these projects were supported by the infrastructure provided by the OMFIT integrated modeling framework. Work supported by US DOE under DE-SC0012656, DE-FG02-95ER54309, DE-FC02-04ER54698.

  19. Recent developments in volatility modeling and applications

    Directory of Open Access Journals (Sweden)

    A. Thavaneswaran

    2006-01-01

    Full Text Available In financial modeling, it has been constantly pointed out that volatility clustering and conditional non-normality induce the leptokurtosis observed in high-frequency data. Financial time series data are not adequately modeled by the normal distribution, and empirical evidence on the non-normality assumption is well documented in the financial literature (details are illustrated by Engle (1982) and Bollerslev (1986)). An ARMA representation was used by Thavaneswaran et al. in 2005 to derive the kurtosis of various classes of GARCH models such as power GARCH, non-Gaussian GARCH, nonstationary and random coefficient GARCH. Several empirical studies have shown that mixture distributions are more likely to capture the heteroskedasticity observed in high-frequency data than the normal distribution. In this paper, some results on moment properties are generalized to stationary ARMA processes with GARCH errors. Applications to volatility forecasts and option pricing are also discussed in some detail.
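    Volatility clustering and the resulting excess kurtosis can be illustrated by simulating a standard GARCH(1,1) process; the parameter values below are illustrative, not estimates from data:

```python
# Sketch: simulating a GARCH(1,1) process to illustrate the volatility
# clustering and excess kurtosis discussed above. Parameters are invented.
import math
import random

random.seed(0)
omega, alpha, beta = 0.1, 0.15, 0.80    # alpha + beta < 1 -> stationary
n = 20000
h = omega / (1 - alpha - beta)          # start at unconditional variance
returns = []
for _ in range(n):
    eps = math.sqrt(h) * random.gauss(0, 1)   # Gaussian innovation
    returns.append(eps)
    h = omega + alpha * eps ** 2 + beta * h   # conditional variance recursion

m2 = sum(r ** 2 for r in returns) / n
m4 = sum(r ** 4 for r in returns) / n
kurtosis = m4 / m2 ** 2                  # > 3 indicates fat tails
```

    Even though each innovation is Gaussian, the time-varying conditional variance makes the unconditional distribution leptokurtic, which is the moment property the ARMA-representation results above generalize.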

  20. A comprehensive model for piezoceramic actuators: modelling, validation and application

    International Nuclear Information System (INIS)

    Quant, Mario; Elizalde, Hugo; Flores, Abiud; Ramírez, Ricardo; Orta, Pedro; Song, Gangbing

    2009-01-01

    This paper presents a comprehensive model for piezoceramic actuators (PAs), which accounts for hysteresis, non-linear electric field and dynamic effects. The hysteresis model is based on the widely used general Maxwell slip model, while an enhanced electro-mechanical non-linear model replaces the linear constitutive equations commonly used. Further on, a linear second order model compensates the frequency response of the actuator. Each individual model is fully characterized from experimental data yielded by a specific PA, then incorporated into a comprehensive 'direct' model able to determine the output strain based on the applied input voltage, fully compensating the aforementioned effects, where the term 'direct' represents an electrical-to-mechanical operating path. The 'direct' model was implemented in a Matlab/Simulink environment and successfully validated via experimental results, exhibiting higher accuracy and simplicity than many published models. This simplicity would allow a straightforward inclusion of other behaviour such as creep, ageing, material non-linearity, etc, if such parameters are important for a particular application. Based on the same formulation, two other models are also presented: the first is an 'alternate' model intended to operate within a force-controlled scheme (instead of a displacement/position control), thus able to capture the complex mechanical interactions occurring between a PA and its host structure. The second development is an 'inverse' model, able to operate within an open-loop control scheme, that is, yielding a 'linearized' PA behaviour. The performance of the developed models is demonstrated via a numerical sample case simulated in Matlab/Simulink, consisting of a PA coupled to a simple mechanical system, aimed at shifting the natural frequency of the latter
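    The general Maxwell slip hysteresis model named above can be sketched as a parallel bank of elasto-slide elements; the stiffnesses and breakaway forces below are invented, and the paper's identified model is considerably richer:

```python
# Sketch: a minimal Maxwell slip hysteresis operator -- a parallel bank of
# elasto-slide elements. Stiffnesses k and breakaway forces f_max are
# invented illustration values, not identified actuator parameters.
def maxwell_slip(inputs, k=(1.0, 0.5, 0.25), f_max=(0.2, 0.4, 0.8)):
    """Return the hysteretic output for an input sequence."""
    x_el = [0.0] * len(k)               # elastic deflection of each element
    out, prev = [], 0.0
    for u in inputs:
        du = u - prev
        total = 0.0
        for i in range(len(k)):
            x_el[i] += du
            limit = f_max[i] / k[i]     # element slips beyond this deflection
            x_el[i] = max(-limit, min(limit, x_el[i]))
            total += k[i] * x_el[i]
        out.append(total)
        prev = u
    return out

up = [j * 0.1 for j in range(21)]        # ramp 0 -> 2
down = [2 - j * 0.1 for j in range(21)]  # ramp 2 -> 0
loop = maxwell_slip(up + down)
# The output at the end of the down ramp differs from the starting output:
# the up/down branches do not coincide, which is hysteresis.
```

    In the comprehensive model described above, an operator of this type is combined with a non-linear electro-mechanical map and a second-order dynamic compensation to form the 'direct' voltage-to-strain model.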

  1. Modelling and application of stochastic processes

    CERN Document Server

    1986-01-01

    The subject of modelling and application of stochastic processes is too vast to be exhausted in a single volume. In this book, attention is focused on a small subset of this vast subject. The primary emphasis is on realization and approximation of stochastic systems. Recently there has been considerable interest in the stochastic realization problem, and hence, an attempt has been made here to collect in one place some of the more recent approaches and algorithms for solving the stochastic realization problem. Various different approaches for realizing linear minimum-phase systems, linear nonminimum-phase systems, and bilinear systems are presented. These approaches range from time-domain methods to spectral-domain methods. An overview of the chapter contents briefly describes these approaches. Also, in most of these chapters special attention is given to the problem of developing numerically efficient algorithms for obtaining reduced-order (approximate) stochastic realizations. On the application side,...

  2. Semi-empirical prediction of moisture build-up in an electronic enclosure using analysis of variance (ANOVA)

    DEFF Research Database (Denmark)

    Shojaee Nasirabadi, Parizad; Conseil, Helene; Mohanty, Sankhya

    2016-01-01

    Electronic systems are exposed to harsh environmental conditions such as high humidity in many applications. Moisture transfer into electronic enclosures and subsequent condensation can cause several problems, such as material degradation and corrosion. Therefore, it is important to control the moisture content...... and the relative humidity inside electronic enclosures. In this work, moisture transfer into a typical polycarbonate electronic enclosure with a cylindrical opening is studied. The effects of four influential parameters, namely the initial relative humidity inside the enclosure, the radius and length of the opening...... and temperature are studied. A set of experiments was carried out based on a fractional factorial design in order to estimate the time constant for moisture transfer into the enclosure by fitting the experimental data to an analytical quasi-steady-state model. According to the statistical analysis, temperature...
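In a quasi-steady-state picture like the one described, the internal humidity approaches ambient exponentially with a single time constant. A hedged sketch of estimating that time constant by a log-linear least-squares fit, using synthetic data rather than the paper's measurements (the parameter values are invented):

```python
import math

def fit_time_constant(times, rh, rh_ambient, rh0):
    """Estimate tau in RH(t) = RH_amb - (RH_amb - RH0) * exp(-t / tau)
    via a log-linear least-squares fit through the origin."""
    y = [math.log((rh_ambient - r) / (rh_ambient - rh0)) for r in rh]
    slope = sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)
    return -1.0 / slope

# synthetic measurements generated from a known time constant (illustrative)
tau_true = 12.0                      # hours, hypothetical
ts = [1, 2, 4, 8, 16, 24]
rhs = [70 - (70 - 30) * math.exp(-t / tau_true) for t in ts]

tau_hat = fit_time_constant(ts, rhs, rh_ambient=70, rh0=30)   # recovers ~12.0
```

With real data the fit would be applied to measured RH traces per factorial-design run, and the fitted time constants fed into the ANOVA.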

  3. Application of Chaos Theory to Psychological Models

    Science.gov (United States)

    Blackerby, Rae Fortunato

    This dissertation shows that an alternative theoretical approach from physics--chaos theory--offers a viable basis for improved understanding of human beings and their behavior. Chaos theory provides achievable frameworks for potential identification, assessment, and adjustment of human behavior patterns. Most current psychological models fail to address the metaphysical conditions inherent in the human system, thus bringing deep errors to psychological practice and empirical research. Freudian, Jungian and behavioristic perspectives are inadequate psychological models because they assume, either implicitly or explicitly, that the human psychological system is a closed, linear system. On the other hand, Adlerian models that require open systems are likely to be empirically tenable. Logically, models will hold only if the model's assumptions hold. The innovative application of chaotic dynamics to psychological behavior is a promising theoretical development because the application asserts that human systems are open, nonlinear and self-organizing. Chaotic dynamics use nonlinear mathematical relationships among factors that influence human systems. This dissertation explores these mathematical relationships in the context of a sample model of moral behavior using simulated data. Mathematical equations with nonlinear feedback loops describe chaotic systems. Feedback loops govern the equations' value in subsequent calculation iterations. For example, changes in moral behavior are affected by an individual's own self-centeredness, family and community influences, and previous moral behavior choices that feed back to influence future choices. When applying these factors to the chaos equations, the model behaves like other chaotic systems. For example, changes in moral behavior fluctuate in regular patterns, as determined by the values of the individual, family and community factors. 
In some cases, these fluctuations converge to one value; in other cases, they diverge in
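The convergence-versus-divergence behavior described can be illustrated with the logistic map, a canonical nonlinear feedback equation; this is a generic stand-in, not the dissertation's actual moral-behavior equations:

```python
def iterate(r, x0, n):
    """Iterate the logistic map x <- r * x * (1 - x), a minimal nonlinear
    feedback loop: each value feeds back into the next iteration."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

settled = iterate(2.8, 0.3, 200)[-1]   # converges to the fixed point 1 - 1/r
chaotic = iterate(3.9, 0.3, 200)[-1]   # wanders aperiodically within (0, 1)
```

The feedback parameter r plays the role of the individual, family and community influence factors: below a threshold the trajectory settles to one value, above it the fluctuations never converge.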

  4. Genetic demographic networks: Mathematical model and applications.

    Science.gov (United States)

    Kimmel, Marek; Wojdyła, Tomasz

    2016-10-01

    Recent improvement in the quality of genetic data obtained from extinct human populations and their ancestors encourages searching for answers to basic questions regarding human population history. The most common and successful are model-based approaches, in which genetic data are compared to the data obtained from the assumed demography model. Using such approach, it is possible to either validate or adjust assumed demography. Model fit to data can be obtained based on reverse-time coalescent simulations or forward-time simulations. In this paper we introduce a computational method based on mathematical equation that allows obtaining joint distributions of pairs of individuals under a specified demography model, each of them characterized by a genetic variant at a chosen locus. The two individuals are randomly sampled from either the same or two different populations. The model assumes three types of demographic events (split, merge and migration). Populations evolve according to the time-continuous Moran model with drift and Markov-process mutation. This latter process is described by the Lyapunov-type equation introduced by O'Brien and generalized in our previous works. Application of this equation constitutes an original contribution. In the result section of the paper we present sample applications of our model to both simulated and literature-based demographies. Among other we include a study of the Slavs-Balts-Finns genetic relationship, in which we model split and migrations between the Balts and Slavs. We also include another example that involves the migration rates between farmers and hunters-gatherers, based on modern and ancient DNA samples. This latter process was previously studied using coalescent simulations. Our results are in general agreement with the previous method, which provides validation of our approach. Although our model is not an alternative to simulation methods in the practical sense, it provides an algorithm to compute pairwise
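The time-continuous Moran model underlying the method can be caricatured with a simple discrete birth-death simulation. This is an illustrative sketch only, not the authors' Lyapunov-equation computation, and all parameter values are invented:

```python
import random

def moran(n_pop, i0, steps, mu=0.0, rng=None):
    """Neutral two-allele Moran model: at each event one individual reproduces
    (the offspring may mutate with probability mu) and one dies, both chosen
    uniformly. Returns the allele-A count after `steps` events."""
    rng = rng or random.Random(0)
    i = i0
    for _ in range(steps):
        offspring = 1 if rng.random() < i / n_pop else 0
        if mu and rng.random() < mu:
            offspring = 1 - offspring        # Markov-process mutation
        dies = 1 if rng.random() < i / n_pop else 0
        i += offspring - dies                # drift: count moves by at most 1
    return i

count = moran(n_pop=50, i0=25, steps=5000, mu=0.01)
```

The paper's contribution is precisely to avoid such forward simulation by solving for the joint pairwise distributions analytically.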

  5. Language Modelling for Collaborative Filtering: Application to Job Applicant Matching

    OpenAIRE

    Schmitt , Thomas; Gonard , François; Caillou , Philippe; Sebag , Michèle

    2017-01-01

    International audience; This paper addresses a collaborative retrieval problem , the recommendation of job ads to applicants. Specifically, two proprietary databases are considered. The first one focuses on the context of unskilled low-paid jobs/applicants; the second one focuses on highly qualified jobs/applicants. Each database includes the job ads and applicant resumes together with the collaborative filtering data recording the applicant clicks on job ads. The proposed approach, called LA...

  6. Hydrodynamic Modeling and Its Application in AUC.

    Science.gov (United States)

    Rocco, Mattia; Byron, Olwyn

    2015-01-01

    The hydrodynamic parameters measured in an AUC experiment, s(20,w) and D(t)(20,w)(0), can be used to gain information on the solution structure of (bio)macromolecules and their assemblies. This entails comparing the measured parameters with those that can be computed from usually "dry" structures by "hydrodynamic modeling." In this chapter, we will first briefly put hydrodynamic modeling in perspective and present the basic physics behind it as implemented in the most commonly used methods. The important "hydration" issue is also touched upon, and the distinction between rigid bodies versus those for which flexibility must be considered in the modeling process is then made. The available hydrodynamic modeling/computation programs, HYDROPRO, BEST, SoMo, AtoB, and Zeno, the latter four all implemented within the US-SOMO suite, are described and their performance evaluated. Finally, some literature examples are presented to illustrate the potential applications of hydrodynamics in the expanding field of multiresolution modeling. © 2015 Elsevier Inc. All rights reserved.

  7. Agent Based Modeling Applications for Geosciences

    Science.gov (United States)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include significant computational requirements in order to keep track of thousands to millions of agents, a lack of methods and strategies for model validation, and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications.
A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in
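A toy version of the resource-demand use case described above, with a hypothetical behavioral rule; every class name, rule and number here is invented for illustration, not taken from the abstract:

```python
import random

class WaterUser:
    """Hypothetical agent: cuts back its demand as the shared reservoir empties."""
    def __init__(self, base_demand):
        self.base = base_demand

    def demand(self, reservoir, capacity):
        # simple behavioral rule: usage scales with reservoir level, floored at 20%
        return self.base * max(0.2, reservoir / capacity)

def simulate(n_agents=100, capacity=1000.0, inflow=60.0, steps=50, seed=1):
    """System-wide behavior emerges from many agents applying one simple rule."""
    rng = random.Random(seed)
    agents = [WaterUser(rng.uniform(0.5, 1.5)) for _ in range(n_agents)]
    reservoir = capacity
    levels = []
    for _ in range(steps):
        total = sum(a.demand(reservoir, capacity) for a in agents)
        reservoir = min(capacity, max(0.0, reservoir + inflow - total))
        levels.append(reservoir)
    return levels

levels = simulate()   # reservoir drains until reduced demand balances inflow
```

Even this minimal rule set produces the kind of emergent equilibrium the "bottom-up" approach is meant to expose; real geoscience applications replace the rule with physically grounded agent behavior.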

  8. A conceptual holding model for veterinary applications

    Directory of Open Access Journals (Sweden)

    Nicola Ferrè

    2014-05-01

    Full Text Available Spatial references are required when geographical information systems (GIS are used for the collection, storage and management of data. In the veterinary domain, the spatial component of a holding (of animals is usually defined by coordinates, and no other relevant information needs to be interpreted or used for manipulation of the data in the GIS environment provided. Users trying to integrate or reuse spatial data organised in such a way, frequently face the problem of data incompatibility and inconsistency. The root of the problem lies in differences with respect to syntax as well as variations in the semantic, spatial and temporal representations of the geographic features. To overcome these problems and to facilitate the inter-operability of different GIS, spatial data must be defined according to a “schema” that includes the definition, acquisition, analysis, access, presentation and transfer of such data between different users and systems. We propose an application “schema” of holdings for GIS applications in the veterinary domain according to the European directive framework (directive 2007/2/EC - INSPIRE. The conceptual model put forward has been developed at two specific levels to produce the essential and the abstract model, respectively. The former establishes the conceptual linkage of the system design to the real world, while the latter describes how the system or software works. The result is an application “schema” that formalises and unifies the information-theoretic foundations of how to spatially represent a holding in order to ensure straightforward information-sharing within the veterinary community.

  9. Application of model systems in nanobiotechnology safety

    International Nuclear Information System (INIS)

    Khalilov, R.I.; Aliev, E.Sh.; Khudaverdieva, S.R.

    2010-11-01

    Full text : Over the last 10-15 years, human civilization has faced new biological hazards as a result of the rapid development of biotechnology, the occurrence of new and known illnesses, and the growing danger of bioterrorism. The need for biological measures to prevent such hazards is now widely recognized. Nanobiotechnological research, and proposals for applying the scientific results achieved in this area, dominate all others. Yet in many cases the possible harmful effects of applying nanoparticles in practice are either left out of consideration entirely or are studied only at the subcellular level. Adequate results can be obtained only if such research is carried out at the organism level. Model systems based on cultures of unicellular green algae, on which we have been studying the effects of ionizing radiation, hold great promise in this area, because such cultures simultaneously represent the cellular, organism and population levels of structural organization. Optimal laboratory methods for maintaining and propagating this unicellular green alga have already been developed. The proposed approach is to study, at the cellular-organism level of structural organization, the effects of nanoparticles (especially those proposed for pharmaceutical applications) on vital systems using cultures of the unicellular green alga Chlamydomonas reinhardtii. The genes of many enzymes of this eukaryotic alga have been established, and its promise for the biological synthesis of hydrogen has been demonstrated. Studying the negative effects of nanoparticles on an object whose molecular features have been investigated in such detail will make it possible to establish safety limits for all biosystems.

  10. Emulsification: modelling, technologies and applications (preface)

    International Nuclear Information System (INIS)

    Sheibat-Othman, Nida; Charton, Sophie

    2014-01-01

    This special issue section offers an overview of the papers presented at the conference 'Emulsification: Modelling, Technologies and Applications' held in Lyon, France from 19 to 21 November 2012. The conference was part of the 'Vingt cinquiemes Entretiens du Centre Jacques Cartier', a series of meetings organised by the Centre Jacques Cartier and chaired by Dr. Alain Bideau. The symposium dealt with the topic of Emulsification, highlighting the common issues shared by different sectors of activity, including the chemical, petrochemical, nuclear, cosmetics, and food industries. Despite the recent significant advances, the research presented in this special issue section highlights the inadequacies of our knowledge of the complex, and often coupled, phenomena involved in the emulsification process. Indeed in order to understand how emulsions are created, it is necessary to determine how the droplet size and size distribution are related to the relevant fields of Physics, and in particular one can identify fluid dynamics and interfacial chemistry as the key disciplines. Furthermore, due to a lack of appropriate and accurate measurements of the important physical properties of emulsions, modelling and numerical simulation appear as essential tools for R and D in emulsion processes and control. This of course implies that physically realistic models are developed and implemented. The knowledge and control of the concentration and drop size distribution (DSD) in a given apparatus are indeed of primary importance for optimal process performances and minimal environmental impact. In order to address some of these needs, the current special issue section focuses on the main chemical engineering aspects of emulsification systems. It gathers papers that treat the generation of emulsions, their implementation and characterisation, as well as the current research studies regarding interfacial chemistry and dynamics, and the basic models of breakage/coalescence, without

  11. Optimizing Injection Molding Parameters of Different Halloysites Type-Reinforced Thermoplastic Polyurethane Nanocomposites via Taguchi Complemented with ANOVA

    Directory of Open Access Journals (Sweden)

    Tayser Sumer Gaaz

    2016-11-01

    Taguchi and ANOVA approaches. Evidently, mHNTs have played a very important role in the resulting product.

  12. Validation of nuclear models used in space radiation shielding applications

    International Nuclear Information System (INIS)

    Norman, Ryan B.; Blattnig, Steve R.

    2013-01-01

    A program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as these models are developed over time. In this work, simple validation metrics applicable to testing both model accuracy and consistency with experimental data are developed. The developed metrics treat experimental measurement uncertainty as an interval and are therefore applicable to cases in which epistemic uncertainty dominates the experimental data. To demonstrate the applicability of the metrics, nuclear physics models used by NASA for space radiation shielding applications are compared to an experimental database consisting of over 3600 experimental cross sections. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by examining subsets of the model parameter space.
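The interval treatment of experimental uncertainty can be sketched as follows: the discrepancy of a model point is its distance to the experimental interval, zero whenever the model falls inside it, and cumulative/median summaries play the roles of the two metrics mentioned. The numbers below are invented, not from the 3600-cross-section database:

```python
import statistics

def interval_discrepancies(model, exp, unc):
    """Distance from each model value to the experimental interval
    [exp - unc, exp + unc]; zero when the model lies inside the interval
    (experimental uncertainty treated as an interval, not a distribution)."""
    return [max(0.0, abs(m - e) - u) for m, e, u in zip(model, exp, unc)]

model = [10.2, 8.9, 15.0, 7.1]   # hypothetical model cross sections
exp   = [10.0, 9.5, 14.0, 7.0]   # hypothetical measurements
unc   = [0.5, 0.4, 0.3, 0.5]     # measurement uncertainties (half-widths)

d = interval_discrepancies(model, exp, unc)
cumulative = sum(d)                  # overall-accuracy style summary
median     = statistics.median(d)    # robust, model-development style summary
```

Points 1 and 4 contribute nothing because the model sits inside the experimental interval; only genuine disagreement accumulates.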

  13. Surface effects in solid mechanics models, simulations and applications

    CERN Document Server

    Altenbach, Holm

    2013-01-01

    This book reviews current understanding, and future trends, of surface effects in solid mechanics. It covers elasticity, plasticity and viscoelasticity; modeling based on continuum theories and molecular modeling; and applications of the different modeling approaches.

  14. Development of a Mobile Application for Building Energy Prediction Using Performance Prediction Model

    Directory of Open Access Journals (Sweden)

    Yu-Ri Kim

    2016-03-01

    Full Text Available Recently, the Korean government has enforced disclosure of building energy performance, so that such information can help owners and prospective buyers to make suitable investment plans. Such a building energy performance policy of the government makes it mandatory for the building owners to obtain engineering audits and thereby evaluate the energy performance levels of their buildings. However, to calculate energy performance levels (i.e., asset rating methodology), a qualified expert needs to have access to at least the full project documentation and/or conduct an on-site inspection of the buildings. Energy performance certification costs a lot of time and money. Moreover, the database of certified buildings is still actually quite small. A need, therefore, is increasing for a simplified and user-friendly energy performance prediction tool for non-specialists. Also, a database which allows building owners and users to compare best practices is required. In this regard, the current study developed a simplified performance prediction model through experimental design, energy simulations and ANOVA (analysis of variance). Furthermore, using the new prediction model, a related mobile application was also developed.

  15. Novel applications of the dispersive optical model

    Science.gov (United States)

    Dickhoff, W. H.; Charity, R. J.; Mahzoon, M. H.

    2017-03-01

    A review of recent developments of the dispersive optical model (DOM) is presented. Starting from the original work of Mahaux and Sartor, several necessary steps are developed and illustrated which increase the scope of the DOM allowing its interpretation as generating an experimentally constrained functional form of the nucleon self-energy. The method could therefore be renamed as the dispersive self-energy method. The aforementioned steps include the introduction of simultaneous fits of data for chains of isotopes or isotones allowing a data-driven extrapolation for the prediction of scattering cross sections and level properties in the direction of the respective drip lines. In addition, the energy domain for data was enlarged to include results up to 200 MeV where available. An important application of this work was implemented by employing these DOM potentials to the analysis of the (d, p) transfer reaction using the adiabatic distorted wave approximation. We review these calculations which suggest that physically meaningful results are easier to obtain by employing DOM ingredients as compared to the traditional approach which relies on a phenomenologically-adjusted bound-state wave function combined with a global (nondispersive) optical-model potential. Application to the exotic 132Sn nucleus also shows great promise for the extrapolation of DOM potentials towards the drip line with attendant relevance for the physics of FRIB. We note that the DOM method combines structure and reaction information on the same footing providing a unique approach to the analysis of exotic nuclei. We illustrate the importance of abandoning the custom of representing the non-local Hartree-Fock (HF) potential in the DOM by an energy-dependent local potential as it impedes the proper normalization of the solution of the Dyson equation. This important step allows for the interpretation of the DOM potential as representing the nucleon self-energy permitting the calculations of

  16. Building adaptable and reusable XML applications with model transformations

    NARCIS (Netherlands)

    Ivanov, Ivan; van den Berg, Klaas

    2005-01-01

    We present an approach in which the semantics of an XML language is defined by means of a transformation from an XML document model (an XML schema) to an application specific model. The application specific model implements the intended behavior of documents written in the language. A transformation

  17. Business model framework applications in health care: A systematic review.

    Science.gov (United States)

    Fredriksson, Jens Jacob; Mazzocato, Pamela; Muhammed, Rafiq; Savage, Carl

    2017-11-01

    It has proven to be a challenge for health care organizations to achieve the Triple Aim. In the business literature, business model frameworks have been used to understand how organizations are aligned to achieve their goals. We conducted a systematic literature review with an explanatory synthesis approach to understand how business model frameworks have been applied in health care. We found a large increase in applications of business model frameworks during the last decade. E-health was the most common context of application. We identified six applications of business model frameworks: business model description, financial assessment, classification based on pre-defined typologies, business model analysis, development, and evaluation. Our synthesis suggests that the choice of business model framework and constituent elements should be informed by the intent and context of application. We see a need for harmonization in the choice of elements in order to increase generalizability, simplify application, and help organizations realize the Triple Aim.

  18. Computational nanotechnology modeling and applications with MATLAB

    National Research Council Canada - National Science Library

    Musa, Sarhan M

    2012-01-01

    .... Offering thought-provoking perspective on the developments that are poised to revolutionize the field, the author explores both existing and future nanotechnology applications, which hold great...

  19. Template for Conceptual Model Construction: Model Review and Corps Applications

    National Research Council Canada - National Science Library

    Henderson, Jim E; O'Neil, L. J

    2007-01-01

    .... The template will expedite conceptual model construction by providing users with model parameters and potential model components, building on a study team's knowledge and experience, and promoting...

  20. Acceptance of health information technology in health professionals: an application of the revised technology acceptance model.

    Science.gov (United States)

    Ketikidis, Panayiotis; Dimitrovski, Tomislav; Lazuras, Lambros; Bath, Peter A

    2012-06-01

    The response of health professionals to the use of health information technology (HIT) is an important research topic that can partly explain the success or failure of any HIT application. The present study applied a modified version of the revised technology acceptance model (TAM) to assess the relevant beliefs and acceptance of HIT systems in a sample of health professionals (n = 133). Structured anonymous questionnaires were used and a cross-sectional design was employed. The main outcome measure was the intention to use HIT systems. ANOVA was employed to examine differences in TAM-related variables between nurses and medical doctors, and no significant differences were found. Multiple linear regression analysis was used to assess the predictors of HIT usage intentions. The findings showed that perceived ease of use, but not usefulness, relevance and subjective norms directly predicted HIT usage intentions. The present findings suggest that a modification of the original TAM approach is needed to better understand health professionals' support and endorsement of HIT. Perceived ease of use, relevance of HIT to the medical and nursing professions, as well as social influences, should be tapped by information campaigns aiming to enhance support for HIT in healthcare settings.
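The ANOVA comparison between the two professional groups can be illustrated with a plain one-way F statistic. The scores below are hypothetical, not the study's data; with near-identical group means, F comes out near zero, the "no significant differences" situation the study reports:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square divided
    by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical TAM-scale scores for two professional groups
nurses  = [3.1, 3.5, 2.9, 3.3, 3.2]
doctors = [3.0, 3.4, 3.1, 3.2, 3.3]
f = one_way_anova_f(nurses, doctors)   # near zero: the group means barely differ
```

The same function applied to clearly separated groups (e.g. `[1, 2, 3]` vs `[7, 8, 9]`) returns a large F, which would be compared against the F distribution with (k - 1, n - k) degrees of freedom.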

  1. Application of multidimensional IRT models to longitudinal data

    NARCIS (Netherlands)

    te Marvelde, J.M.; Glas, Cornelis A.W.; Van Landeghem, Georges; Van Damme, Jan

    2006-01-01

    The application of multidimensional item response theory (IRT) models to longitudinal educational surveys where students are repeatedly measured is discussed and exemplified. A marginal maximum likelihood (MML) method to estimate the parameters of a multidimensional generalized partial credit model

  2. A biological compression model and its applications.

    Science.gov (United States)

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  3. Simple Spreadsheet Thermal Models for Cryogenic Applications

    Science.gov (United States)

    Nash, Alfred

    1995-01-01

    Self-consistent circuit analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the SIRTF Telescope Test Facility (STTF). The facility has been brought on line for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison between the models' predictions and the actual performance of this facility will be presented.
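The circuit-analog idea, a lumped thermal capacitance discharging through a thermal resistance with one spreadsheet row per time step, can be sketched in a few lines. All parameter values are illustrative, not STTF data, and the real models add temperature-dependent conduction and radiation terms:

```python
def cooldown(t_start, t_sink, r_th, c_th, dt, steps):
    """Explicit-Euler cooldown of one lumped node of heat capacity c_th (J/K)
    through a thermal resistance r_th (K/W) to a fixed-temperature sink,
    exactly the update a spreadsheet would apply row by row."""
    temps = [t_start]
    for _ in range(steps):
        t = temps[-1]
        t += dt * (t_sink - t) / (r_th * c_th)   # dT = dt * (T_sink - T) / (R * C)
        temps.append(t)
    return temps

# illustrative numbers: a 300 K mass cooling toward a 4 K cryogen bath
temps = cooldown(t_start=300.0, t_sink=4.0, r_th=50.0, c_th=200.0,
                 dt=100.0, steps=500)
```

The step size must satisfy dt < R*C for the explicit update to stay stable; here dt/(R*C) = 0.01, so the computed temperature decays monotonically toward the sink.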

  4. Surface Flux Modeling for Air Quality Applications

    Directory of Open Access Journals (Sweden)

    Limei Ran

    2011-08-01

    Full Text Available For many gases and aerosols, dry deposition is an important sink of atmospheric mass. Dry deposition fluxes are also important sources of pollutants to terrestrial and aquatic ecosystems. The surface fluxes of some gases, such as ammonia, mercury, and certain volatile organic compounds, can be upward into the air as well as downward to the surface and therefore should be modeled as bi-directional fluxes. Model parameterizations of dry deposition in air quality models have been represented by simple electrical resistance analogs for almost 30 years. Uncertainties in surface flux modeling in global to mesoscale models are being slowly reduced as more field measurements provide constraints on parameterizations. However, at the same time, more chemical species are being added to surface flux models as air quality models are expanded to include more complex chemistry and are being applied to a wider array of environmental issues. Since surface flux measurements of many of these chemicals are still lacking, resistances are usually parameterized using simple scaling by water or lipid solubility and reactivity. Advances in recent years have included bi-directional flux algorithms that require a shift from pre-computation of deposition velocities to fully integrated surface flux calculations within air quality models. Improved modeling of the stomatal component of chemical surface fluxes has resulted from improved evapotranspiration modeling in land surface models and closer integration between meteorology and air quality models. Satellite-derived land use characterization and vegetation products and indices are improving model representation of spatial and temporal variations in surface flux processes. This review describes the current state of chemical dry deposition modeling, recent progress in bi-directional flux modeling, synergistic model development research with field measurements, and coupling with meteorological land surface models.
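The electrical resistance analog mentioned here computes a downward deposition velocity as the reciprocal of resistances in series. A minimal sketch with illustrative resistance values (typical daytime magnitudes over vegetation, not values from this review):

```python
def deposition_velocity(r_a, r_b, r_c):
    """Dry deposition velocity (m/s) from the classic series-resistance analog:
    aerodynamic (r_a), quasi-laminar boundary-layer (r_b) and surface (r_c)
    resistances, all in s/m, add like resistors in series."""
    return 1.0 / (r_a + r_b + r_c)

# illustrative values (s/m)
v_d = deposition_velocity(r_a=30.0, r_b=20.0, r_c=100.0)   # ~0.0067 m/s
```

The flux to the surface is then v_d times the ambient concentration; bi-directional schemes replace this one-way formula with a compensation-point concentration so the net flux can change sign.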

  5. Adaptive streaming applications : analysis and implementation models

    NARCIS (Netherlands)

    Zhai, Jiali Teddy

    2015-01-01

    This thesis presents a highly automated design framework, called DaedalusRT, and several novel techniques. As the foundation of the DaedalusRT design framework, two types of dataflow Models-of-Computation (MoC) are used, one as timing analysis model and another one as the implementation model. The

  6. Pinna Model for Hearing Instrument Applications

    DEFF Research Database (Denmark)

    Kammersgaard, Nikolaj Peter Iversen; Kvist, Søren Helstrup; Thaysen, Jesper

    2014-01-01

    A novel model of the pinna (outer ear) is presented. This is to increase the understanding of the effect of the pinna on the on-body radiation pattern of an antenna placed inside the ear. Simulations of the model and of a realistically shaped ear are compared to validate the model. The radiation...

  7. Identifiability of PBPK Models with Applications to ...

    Science.gov (United States)

    Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss different types of identifiability that occur in PBPK models and give reasons why they occur. We particularly focus on how the mathematical structure of a PBPK model and lack of appropriate data can lead to statistical models in which it is impossible to estimate at least some parameters precisely. Methods are reviewed which can determine whether a purely linear PBPK model is globally identifiable. We propose a theorem which determines when identifiability at a set of finite and specific values of the mathematical PBPK model (global discrete identifiability) implies identifiability of the statistical model. However, we are unable to establish conditions that imply global discrete identifiability, and conclude that the only safe approach to analysis of PBPK models involves Bayesian analysis with truncated priors. Finally, computational issues regarding posterior simulations of PBPK models are discussed. The methodology is very general and can be applied to numerous PBPK models which can be expressed as linear time-invariant systems. A real data set of a PBPK model for exposure to dimethyl arsinic acid (DMA(V)) is presented to illustrate the proposed methodology. We consider statistical analy

  8. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  9. Structural Equation Modeling with Mplus Basic Concepts, Applications, and Programming

    CERN Document Server

    Byrne, Barbara M

    2011-01-01

    Modeled after Barbara Byrne's other best-selling structural equation modeling (SEM) books, this practical guide reviews the basic concepts and applications of SEM using Mplus Versions 5 & 6. The author reviews SEM applications based on actual data taken from her own research. Using non-mathematical language, it is written for the novice SEM user. With each application chapter, the author "walks" the reader through all steps involved in testing the SEM model including: an explanation of the issues addressed, illustrated and annotated testing of the hypothesized and post hoc models…

  10. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of an autoregressive (AR) model to estimating noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in the nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently more correct results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of the generalised Yule-Walker equations is derived from the equation of an ARMA model, then a method for its solution is given. Numerical results show the applications of the method proposed. (author)
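
As a minimal sketch of the idea behind the record above, the classical Yule-Walker system for a pure AR(p) model can be solved directly from sample autocovariances; the generalised Yule-Walker equations the paper derives extend this to the moving-average part of an ARMA model. The synthetic series and model order below are illustrative assumptions:

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients by solving the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances r[0..p]
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])
    # Toeplitz system R a = r[1:], with R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) process: x[t] = 0.6 x[t-1] - 0.3 x[t-2] + e[t]
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]

a = yule_walker(x, 2)  # estimates close to (0.6, -0.3)
```

For an ARMA process this AR-only system is biased, which is exactly why the record argues for the generalised equations.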

  11. Nonlinear dynamics new directions models and applications

    CERN Document Server

    Ugalde, Edgardo

    2015-01-01

    This book, along with its companion volume, Nonlinear Dynamics New Directions: Theoretical Aspects, covers topics ranging from fractal analysis to very specific applications of the theory of dynamical systems to biology. This second volume contains mostly new applications of the theory of dynamical systems to both engineering and biology. The first volume is devoted to fundamental aspects and includes a number of important new contributions as well as some review articles that emphasize new development prospects. The topics addressed in the two volumes include a rigorous treatment of fluctuations in dynamical systems, topics in fractal analysis, studies of the transient dynamics in biological networks, synchronization in lasers, and control of chaotic systems, among others. This book also: ·         Develops applications of nonlinear dynamics on a diversity of topics such as patterns of synchrony in neuronal networks, laser synchronization, control of chaotic systems, and the study of transient dynam...

  12. Application of Actuarial Modelling in Insurance Industry

    OpenAIRE

    Burcă Ana-Maria; Bătrînca Ghiorghe

    2011-01-01

    In insurance industry, the financial stability of insurance companies represents an issue of vital importance. In order to maintain the financial stability and meet minimum regulatory requirements, actuaries apply actuarial modeling. Modeling has been at the center of actuarial science and of all the sciences from the beginning of their journey. In insurance industry, actuarial modeling creates a framework that allows actuaries to identify, understand, quantify and manage a wide range of risk...

  13. Applications: simple models and difficult theorems

    NARCIS (Netherlands)

    Litvak, Nelli; van de Geer, Sara; Wegkamp, Marten

    2012-01-01

    In this short article I will discuss three papers written by Willem van Zwet with three different co-authors: Mathisca de Gunst, Marta Fiocco, and myself. Each of the papers focuses on one particular application: growth of the number of biological cells [3], spreading of an infection [7], and the

  14. New advances in statistical modeling and applications

    CERN Document Server

    Santos, Rui; Oliveira, Maria; Paulino, Carlos

    2014-01-01

    This volume presents selected papers from the XIXth Congress of the Portuguese Statistical Society, held in the town of Nazaré, Portugal, from September 28 to October 1, 2011. All contributions were selected after a thorough peer-review process. It covers a broad range of papers in the areas of statistical science, probability and stochastic processes, extremes and statistical applications.

  15. Optimization of Process Parameters During Drilling of Glass-Fiber Polyester Reinforced Composites Using DOE and ANOVA

    Directory of Open Access Journals (Sweden)

    N.S. Mohan

    2010-09-01

    Full Text Available Polymer-based composite material possesses superior properties such as a high strength-to-weight ratio, stiffness-to-weight ratio and good corrosion resistance, and is therefore attractive for high-performance applications such as in the aerospace, defense and sporting goods industries. Drilling is one of the indispensable methods for building products with composite panels. Surface quality and dimensional accuracy play an important role in the performance of a machined component. In machining processes, however, the quality of the component is greatly influenced by the cutting conditions, tool geometry, tool material, machining process, chip formation, workpiece material, tool wear and vibration during cutting. Drilling tests were conducted on glass fiber reinforced plastic composite [GFRP] laminates using an instrumented CNC milling center. A series of experiments was conducted using a TRIAC VMC CNC machining center to correlate the cutting parameters and material parameters with the cutting thrust, torque and surface roughness. The measured results were collected and analyzed with the help of the commercial software packages MINITAB14 and Taly Profile. The surface roughness of the drilled holes was measured using a Rank Taylor Hobson Surtronic 3+ instrument. The method could be useful in predicting thrust, torque and surface roughness parameters as a function of the process variables. The main objective is to optimize the process parameters to achieve low cutting thrust, low torque and good surface roughness. From the analysis it is evident that, among the significant parameters, speed and drill size have the greatest influence on cutting thrust, while drill size and specimen thickness most influence the torque and surface roughness. It was also found that feed rate does not have a significant influence on the characteristic outputs of the drilling process.
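
To illustrate the kind of ANOVA screening used in the study above, here is a minimal one-way ANOVA computed from scratch; the three "drill size" groups and their responses are made-up stand-ins, not the paper's data:

```python
import numpy as np

def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA over a list of samples."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)  # between-group
    ssw = sum(((g - np.mean(g)) ** 2).sum() for g in groups)       # within-group
    return (ssb / (k - 1)) / (ssw / (n_total - k))

rng = np.random.default_rng(1)
# Hypothetical thrust measurements for three drill sizes; the third clearly differs
g1 = 10 + rng.standard_normal(20)
g2 = 10 + rng.standard_normal(20)
g3 = 14 + rng.standard_normal(20)

F = one_way_anova([g1, g2, g3])  # a large F suggests drill size matters
```

The same F statistic, together with a p-value, is returned by `scipy.stats.f_oneway`.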

  16. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into an viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). Subse...

  17. Model Driven Architecture: Foundations and Applications

    NARCIS (Netherlands)

    Rensink, Arend

    The OMG's Model Driven Architecture (MDA) initiative has been the focus of much attention in both academia and industry, due to its promise of more rapid and consistent software development through the increased use of models. In order for MDA to reach its full potential, the ability to manipulate

  18. Super-Hubbard models and applications

    International Nuclear Information System (INIS)

    Drummond, James M.; Feverati, Giovanni; Frappat, Luc; Ragoucy, Eric

    2007-01-01

    We construct XX- and Hubbard-like models based on unitary superalgebras gl(N/M) generalising Shastry's and Maassarani's approach of the algebraic case. We introduce the R-matrix of the gl(N/M) XX model and that of the Hubbard model defined by coupling two independent XX models. In both cases, we show that the R-matrices satisfy the Yang-Baxter equation, we derive the corresponding local Hamiltonian in the transfer matrix formalism and we determine the symmetry of the Hamiltonian. Explicit examples are worked out. In the cases of the gl(1/2) and gl(2/2) Hubbard models, a perturbative calculation at two loops à la Klein and Seitz is performed

  19. Mobile Application Identification based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Yang Xinyan

    2018-01-01

    Full Text Available With the increasing number of mobile applications, network management has become more challenging, and users face security issues when using mobile Internet applications. Identifying the applications that generate network traffic can help network operators perform network management effectively. Existing mobile application recognition techniques have two problems: they cannot recognize applications that use encryption protocols, and their scalability is poor. In this paper, a mobile application identification method based on a Hidden Markov Model (HMM) is proposed. Defined statistical characteristics are extracted from the different network flows generated when each application starts; the corresponding time series is obtained from the timing information of these flows, and an HMM is then built for each application to be identified. We use 10 common applications to test the proposed method. The test results show that it has high accuracy and good generalization ability.
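
The classification step the abstract describes (score a traffic trace under each application's HMM and pick the best) can be sketched with the scaled forward algorithm; the two tiny models and the symbol sequence below are illustrative assumptions, not the paper's trained parameters:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        c = alpha.sum()
        ll += np.log(c)                 # accumulate scaling factors
        alpha = alpha / c
    return ll

A = np.array([[0.7, 0.3], [0.3, 0.7]])       # shared transition matrix
pi = np.array([0.5, 0.5])
B_app1 = np.array([[0.9, 0.1], [0.8, 0.2]])  # app 1 mostly emits symbol 0
B_app2 = np.array([[0.1, 0.9], [0.2, 0.8]])  # app 2 mostly emits symbol 1

obs = [0, 0, 1, 0, 0]  # flow-feature symbols from an unknown app
scores = {"app1": forward_loglik(obs, pi, A, B_app1),
          "app2": forward_loglik(obs, pi, A, B_app2)}
best = max(scores, key=scores.get)  # -> "app1"
```

In the paper's setting the emission symbols would be discretised flow statistics, and one HMM per application would be trained before scoring.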

  20. Development and application of air quality models at the US ...

    Science.gov (United States)

    Overview of the development and application of air quality models at the U.S. EPA, particularly focused on the development and application of the Community Multiscale Air Quality (CMAQ) model developed within the Computational Exposure Division (CED) of the National Exposure Research Laboratory (NERL). This presentation will provide a simple overview of air quality model development and application geared toward a non-technical student audience. The NERL CED develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  1. Modelling for Bio-,Agro- and Pharma-Applications

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Singh, Ravendra; Cameron, Ian

    2011-01-01

    This chapter considers a range of modelling applications drawn from biological, agrochemical and pharma fields. Microcapsule controlled release of an active ingredient is considered through a time dependent model. Burst-time and lag-time effects are considered and the model adopts a multiscale...... of a milling process within pharmaceutical production as well as a dynamic model representing a fluidised granulation bed for pharma products. The final model considers the tablet pressing process....

  2. Multilevel Models: Conceptual Framework and Applicability

    Directory of Open Access Journals (Sweden)

    Roxana-Otilia-Sonia Hrițcu

    2015-10-01

    Full Text Available Individuals and the social or organizational groups they belong to can be viewed as a hierarchical system situated on different levels. Individuals are situated on the first level of the hierarchy and they are nested together on the higher levels. Individuals interact with the social groups they belong to and are influenced by these groups. Traditional methods that study the relationships between data, like simple regression, do not take into account the hierarchical structure of the data and the effects of a group membership and, hence, results may be invalidated. Unlike standard regression modelling, the multilevel approach takes into account the individuals as well as the groups to which they belong. To take advantage of the multilevel analysis it is important that we recognize the multilevel characteristics of the data. In this article we introduce the outlines of multilevel data and we describe the models that work with such data. We introduce the basic multilevel model, the two-level model: students can be nested into classes, individuals into countries and the general two-level model can be extended very easily to several levels. Multilevel analysis has begun to be extensively used in many research areas. We present the most frequent study areas where multilevel models are used, such as sociological studies, education, psychological research, health studies, demography, epidemiology, biology, environmental studies and entrepreneurship. We support the idea that since hierarchies exist everywhere, multilevel data should be recognized and analyzed properly by using multilevel modelling.
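
A minimal way to see why the group level matters, as argued above, is the intraclass correlation (ICC) of a simulated two-level (random-intercept) data set; the group counts and variances below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per = 200, 20
group_eff = rng.standard_normal(n_groups)  # level-2 (group) effects, variance 1
# Each row is one group; add level-1 (individual) noise, variance 1
y = group_eff[:, None] + rng.standard_normal((n_groups, n_per))

# One-way random-effects ANOVA estimator of the intraclass correlation
grand = y.mean()
msb = n_per * ((y.mean(axis=1) - grand) ** 2).sum() / (n_groups - 1)
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_groups * (n_per - 1))
icc = (msb - msw) / (msb + (n_per - 1) * msw)  # true value here is 0.5
```

An ICC well above zero signals that single-level regression understates uncertainty; packages such as statsmodels (`MixedLM`) or R's lme4 fit the full two-level model.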

  3. Fuzzy modeling and control theory and applications

    CERN Document Server

    Matía, Fernando; Jiménez, Emilio

    2014-01-01

    Much work on fuzzy control, covering research, development and applications, has been carried out in Europe since the 90's. Nevertheless, the existing books in the field are compilations of articles without interconnection or logical structure, or they express the personal point of view of the author. This book compiles the developments of researchers with demonstrated experience in the field of fuzzy control, following a logical structure and a unified style. The first chapters of the book are dedicated to the introduction of the main fuzzy logic techniques, while the following chapters focus on concrete applications. This book is supported by the EUSFLAT and CEA-IFAC societies, which include a large number of researchers in the field of fuzzy logic and control. The central topic of the book, fuzzy control, is one of the main research and development lines covered by these associations.

  4. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  5. Some Issues of Biological Shape Modelling with Applications

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Hilger, Klaus Baggesen; Skoglund, Karl

    2003-01-01

    This paper illustrates current research at Informatics and Mathematical Modelling at the Technical University of Denmark within biological shape modelling. We illustrate a series of generalizations to, modifications to, and applications of the elements of constructing models of shape or appearance...

  6. Application of capital replacement models with finite planning horizons

    NARCIS (Netherlands)

    Scarf, P.A.; Christer, A.H.

    1997-01-01

    Capital replacement models with finite planning horizons can be used to model replacement policies in complex operational contexts. They may also be used to investigate the cost consequences of technological change. This paper reviews the application of these models in various such contexts. We also

  7. Simple mathematical models of symmetry breaking. Application to particle physics

    International Nuclear Information System (INIS)

    Michel, L.

    1976-01-01

    Some mathematical facts relevant to symmetry breaking are presented. A first mathematical model deals with the smooth action of compact Lie groups on real manifolds, a second model considers linear action of any group on real or complex finite dimensional vector spaces. Application of the mathematical models to particle physics is considered. (B.R.H.)

  8. Modeling Students' Memory for Application in Adaptive Educational Systems

    Science.gov (United States)

    Pelánek, Radek

    2015-01-01

    Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory setting under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…

  9. Application of various FLD modelling approaches

    Science.gov (United States)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  10. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place. 407 refs., 4 figs., 2 tabs.

  12. Mathematical modeling and applications in nonlinear dynamics

    CERN Document Server

    Merdan, Hüseyin

    2016-01-01

    The book covers nonlinear physical problems and mathematical modeling, including molecular biology, genetics, neurosciences, and artificial intelligence, along with classical problems in mechanics, astronomy and physics. The chapters present nonlinear mathematical modeling in life science and physics through nonlinear differential equations, nonlinear discrete equations and hybrid equations. Such modeling can be effectively applied to a wide spectrum of nonlinear physical problems, including Kolmogorov-Arnold-Moser (KAM) theory, singular differential equations, impulsive dichotomous linear systems, analytical bifurcation trees of periodic motions, and almost or pseudo-almost periodic solutions in nonlinear dynamical systems. Provides methods for mathematical models with switching, thresholds, and impulses, each of particular importance for discontinuous processes Includes qualitative analysis of behaviors on Tumor-Immune Systems and methods of analysis for DNA, neural networks and epidemiology Introduces...

  13. System identification application using Hammerstein model

    Indian Academy of Sciences (India)

    Saban Ozer

    The results of the Hammerstein model are the focus of this study.

  14. Models in Science Education: Applications of Models in Learning and Teaching Science

    Science.gov (United States)

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  15. Property Modelling for Applications in Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    such as database, property model library, model parameter regression, and, property-model based product-process design will be presented. The database contains pure component and mixture data for a wide range of organic chemicals. The property models are based on the combined group contribution and atom...... is missing, the atom connectivity based model is employed to predict the missing group interaction. In this way, a wide application range of the property modeling tool is ensured. Based on the property models, targeted computer-aided techniques have been developed for design and analysis of organic chemicals......, polymers, mixtures as well as separation processes. The presentation will highlight the framework (ICAS software) for property modeling, the property models and issues such as prediction accuracy, flexibility, maintenance and updating of the database. Also, application issues related to the use of property...
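
As a toy illustration of the group-contribution idea behind such property models, here is a Joback-style estimate of a normal boiling point; the constant and the two contribution values follow the widely used Joback method, but treat the whole calculation (and the example molecule) as illustrative rather than as the ICAS property models:

```python
# Joback-style group-contribution estimate of the normal boiling point:
# Tb [K] ~ 198.2 + sum of group contributions (values from the Joback method).
CONTRIB_TB = {"-CH3": 23.58, "-CH2-": 22.88}

def boiling_point(groups):
    """groups: mapping of functional-group name -> occurrence count."""
    return 198.2 + sum(CONTRIB_TB[g] * n for g, n in groups.items())

tb_propane = boiling_point({"-CH3": 2, "-CH2-": 1})  # CH3-CH2-CH3
```

When a group-pair interaction is missing, the text above describes falling back to an atom-connectivity model; the table lookup here would simply be replaced by that prediction.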

  16. The Channel Network model and field applications

    International Nuclear Information System (INIS)

    Khademi, B.; Moreno, L.; Neretnieks, I.

    1999-01-01

    The Channel Network model describes the fluid flow and solute transport in fractured media. The model is based on field observations, which indicate that flow and transport take place in a three-dimensional network of connected channels. The channels are generated in the model from observed stochastic distributions and solute transport is modeled taking into account advection and rock interactions, such as matrix diffusion and sorption within the rock. The most important site-specific data for the Channel Network model are the conductance distribution of the channels and the flow-wetted surface. The latter is the surface area of the rock in contact with the flowing water. These parameters may be estimated from hydraulic measurements. For the Aespoe site, several borehole data sets are available, where a packer distance of 3 meters was used. Numerical experiments were performed in order to study the uncertainties in the determination of the flow-wetted surface and conductance distribution. Synthetic data were generated along a borehole and hydraulic tests with different packer distances were simulated. The model has previously been used to study the Long-term Pumping and Tracer Test (LPT2) carried out in the Aespoe Hard Rock Laboratory (HRL) in Sweden, where the distance travelled by the tracers was of the order hundreds of meters. Recently, the model has been used to simulate the tracer tests performed in the TRUE experiment at HRL, with travel distance of the order of tens of meters. Several tracer tests with non-sorbing and sorbing species have been performed

  17. Integrated identification, modeling and control with applications

    Science.gov (United States)

    Shi, Guojun

    This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy is investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee the closed loop stability. A closed loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of the integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed, which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing

  18. Recognizing textual entailment models and applications

    CERN Document Server

    Dagan, Ido; Sammons, Mark

    2013-01-01

    In the last few years, a number of NLP researchers have developed and participated in the task of Recognizing Textual Entailment (RTE). This task encapsulates Natural Language Understanding capabilities within a very simple interface: recognizing when the meaning of a text snippet is contained in the meaning of a second piece of text. This simple abstraction of an exceedingly complex problem has broad appeal partly because it can be conceived also as a component in other NLP applications, from Machine Translation to Semantic Search to Information Extraction. It also avoids commitment to any…

  19. HTGR Application Economic Model Users' Manual

    Energy Technology Data Exchange (ETDEWEB)

    A.M. Gandrik

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR) or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the HTGR Application Economic Model. This model was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
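
The core calculation such an economic model performs, finding the selling price at which a project earns exactly a target IRR, has a simple closed form for level annual cash flows; all numbers below are made-up placeholders, not HTGR figures:

```python
def required_price(capex, opex_per_year, units_per_year, irr, years):
    """Price at which the project NPV, discounted at the target IRR, is zero."""
    annuity = (1 - (1 + irr) ** -years) / irr  # present value of 1 per year
    return (capex / annuity + opex_per_year) / units_per_year

def npv(price, capex, opex_per_year, units_per_year, rate, years):
    """Net present value of the project at the given price and discount rate."""
    cash = price * units_per_year - opex_per_year
    return -capex + sum(cash / (1 + rate) ** t for t in range(1, years + 1))

# Placeholder inputs: capital cost, O&M cost, annual output, target IRR, lifetime
p = required_price(1000.0, 10.0, 100.0, 0.10, 20)
check = npv(p, 1000.0, 10.0, 100.0, 0.10, 20)  # numerically zero by construction
```

The spreadsheet model additionally varies project phase, module count and rating; each combination just changes the cash-flow inputs to the same calculation.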

  20. Linear accelerator modeling: development and application

    International Nuclear Information System (INIS)

    Jameson, R.A.; Jule, W.D.

    1977-01-01

    Most of the parameters of a modern linear accelerator can be selected by simulating the desired machine characteristics in a computer code and observing how the parameters affect the beam dynamics. The code PARMILA is used at LAMPF for the low-energy portion of linacs. Collections of particles can be traced with a free choice of input distributions in six-dimensional phase space. Random errors are often included in order to study the tolerances which should be imposed during manufacture or in operation. An outline is given of the modifications made to the model, the results of experiments which indicate the validity of the model, and the use of the model to optimize the longitudinal tuning of the Alvarez linac

  1. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes
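
The Latin hypercube sampling offered above for sensitivity analyses is easy to sketch: stratify each input into n equal-probability bins, take one point per bin, and shuffle the bins independently per dimension. This is a generic sketch, not the report's implementation:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0, 1)^d with exactly one point in each of n strata per dimension."""
    # One uniform draw inside each of the n equal strata, per dimension
    samples = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):  # decouple the strata across dimensions
        samples[:, j] = rng.permutation(samples[:, j])
    return samples

rng = np.random.default_rng(0)
pts = latin_hypercube(10, 3, rng)
# Each column hits every decile [i/10, (i+1)/10) exactly once
```

For the multiplicative chain with independent lognormal inputs analyzed in the report, each column would be mapped through the corresponding lognormal inverse CDF before evaluating the model; `scipy.stats.qmc.LatinHypercube` offers a maintained implementation.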

  2. Simulation models generator. Applications in scheduling

    Directory of Open Access Journals (Sweden)

    Omar Danilo Castrillón

    2013-08-01

Rev. Mate. Teor. Aplic. (ISSN 1409-2433), Vol. 20(2): 231–241, July 2013. The generator is intended to approximate reality so that decisions can be evaluated and made more assertively. To test the prototype, a production system with 9 machines and 5 jobs in a job-shop configuration was used as the modeling example, testing stochastic processing times and machine stoppages to measure machine utilization rates and the average time of jobs in the system as measures of system performance. This test shows the goodness of the prototype, saving the user the work of building the simulation model

  3. Optical Coherence Tomography: Modeling and Applications

    DEFF Research Database (Denmark)

    Thrane, Lars

this system. A demonstration of the imaging capabilities of the OCT system is given. Moreover, a novel true-reflection OCT imaging algorithm, based on the new OCT model presented in this thesis, is demonstrated. Finally, a theoretical analysis of the Wigner phase-space distribution function for the OCT...... geometry, i.e., reflection geometry, is developed. As in the new OCT model, multiple scattered photons have been taken into account together with multiple scattering effects. As an important result, a novel method of creating images based on measurements of the momentum width of the Wigner phase...

  4. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture

  5. Tire Models for Use in Braking Applications

    OpenAIRE

    Svendenius, Jacob

    2003-01-01

The tire is a significant component in the control of a vehicle. For a well-working brake system, the contact properties between the tire and the ground are the limiting factor for safe braking. To get optimal performance it is important that the system can utilize all friction resources. The brush tire model was a popular method in the 1960's and 1970's before the empirical approaches became dominating. The brush model gives an educational interpretation of the physics behind the tire behavi...

  6. A cutting force model for micromilling applications

    DEFF Research Database (Denmark)

    Bissacco, Giuliano; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2006-01-01

In micro milling the maximum uncut chip thickness is often smaller than the cutting edge radius. This paper introduces a new cutting force model for ball nose micro milling that is capable of taking into account the effect of the edge radius.

  7. Modeling Answer Change Behavior: An Application of a Generalized Item Response Tree Model

    Science.gov (United States)

    Jeon, Minjeong; De Boeck, Paul; van der Linden, Wim

    2017-01-01

    We present a novel application of a generalized item response tree model to investigate test takers' answer change behavior. The model allows us to simultaneously model the observed patterns of the initial and final responses after an answer change as a function of a set of latent traits and item parameters. The proposed application is illustrated…

  8. Bacteriophages: update on application as models for viruses in water

    African Journals Online (AJOL)

    Bacteriophages: update on application as models for viruses in water. ... the resistance of human viruses to water treatment and disinfection processes. ... highly sensitive molecular techniques viruses have been detected in drinking water ...

  9. Application of the Technology Acceptance Model (TAM) in electronic ...

    African Journals Online (AJOL)

    Application of the Technology Acceptance Model (TAM) in electronic ticket purchase for ... current study examined the perceived usefulness and ease of use of online technology ... The findings are discussed in the light of these perspectives.

  10. An application of artificial intelligence for rainfall–runoff modeling

    Indian Academy of Sciences (India)

    This study proposes an application of two techniques of artificial intelligence (AI) ... (2006) applied rainfall–runoff modeling using ANN ... in artificial intelligence, engineering and science .... usually be estimated from a sample of observations.

  11. Application of the rainfall infiltration breakthrough (RIB) model for ...

    African Journals Online (AJOL)

    2012-05-23

    May 23, 2012 ... In this paper, the physical meaning of parameters in the CRD and previous ... ity; the utility of the RIB model for application in different climatic areas under ...... TMG Aquifer feasibility study and pilot project ecological and.

  12. An ontology model for execution records of Grid scientific applications

    NARCIS (Netherlands)

    Baliś, B.; Bubak, M.

    2008-01-01

    Records of past application executions are particularly important in the case of loosely-coupled, workflow driven scientific applications which are used to conduct in silico experiments, often on top of Grid infrastructures. In this paper, we propose an ontology-based model for storing and querying

  13. Turbulence models development and engineering applications

    International Nuclear Information System (INIS)

    Groetzbach, G.; Ammann, T.; Dorr, B.; Hiltner, I.; Hofmann, S.; Kampczyk, M.; Kimhi, Y.; Seiter, C.; Woerner, M.; Alef, M.; Hennemuth, A.

    1995-01-01

    The FLUTAN code is used for analyzing the decay heat removal in new reactor concepts. The turbulence models applied in FLUTAN are improved by the development of the TURBIT code. TURBIT serves for a numerical simulation of turbulent channel flow. (orig.)

  14. Part 7: Application of the IAWQ model

    African Journals Online (AJOL)

    drinie

    Model predictions and observed data in respect of polyphosphate (polyP) and suspended solids are also compared and ... that investigation with the aim of evaluating the predictive power .... comparing it to the behaviour of the test unit with metal salt ... Based on the measured RBCOD during these periods and subtracting.

  15. A universal throw model and its applications

    NARCIS (Netherlands)

    Voort, M.M. van der; Doormaal, J.C.A.M. van; Verolme, E.K.; Weerheijm, J.

    2008-01-01

    A deterministic model has been developed that describes the throw of debris or fragments from a source with an arbitrary geometry and for arbitrary initial conditions. The initial conditions are defined by the distributions of mass, launch velocity and launch direction. The item density in an
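
A heavily simplified sketch of this kind of throw calculation is given below. It is not the deterministic model of the paper: it assumes drag-free ballistic flight over flat ground, a normally distributed launch speed, and a uniform launch elevation angle, with all numbers invented for illustration:

```python
import math
import random

def debris_ranges(n, v_mean, v_sd, seed=3):
    """Monte Carlo throw distances for point debris under strong,
    assumed simplifications: drag-free ballistic flight over flat
    ground, normal launch speed, uniform elevation angle in (0, 90) deg."""
    rng = random.Random(seed)
    g = 9.81  # gravitational acceleration, m/s^2
    ranges = []
    for _ in range(n):
        v = max(0.0, rng.gauss(v_mean, v_sd))          # launch speed, m/s
        theta = math.radians(rng.uniform(0.0, 90.0))   # launch elevation
        ranges.append(v * v * math.sin(2.0 * theta) / g)  # ballistic range
    return ranges

ranges = debris_ranges(10_000, v_mean=30.0, v_sd=5.0)
mean_range = sum(ranges) / len(ranges)
```

Sampling the launch-condition distributions in this way turns the deterministic trajectory formula into an item-density estimate over the ground, which is the quantity of interest for risk assessment.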

  16. Integrated Safety Culture Model and Application

    Institute of Scientific and Technical Information of China (English)

    汪磊; 孙瑞山; 刘汉辉

    2009-01-01

A new safety culture model is constructed and applied to analyze the correlations between safety culture and SMS. On the basis of previous typical definitions, models and theories of safety culture, an in-depth analysis of safety culture's structure, composing elements and their correlations was conducted. A new definition of safety culture was proposed from the perspective of sub-culture. 7 types of safety sub-culture were defined: safety priority culture, standardizing culture, flexible culture, learning culture, teamwork culture, reporting culture and justice culture. Then the integrated safety culture model (ISCM) was put forward based on the definition. The model divides safety culture into an intrinsic latency level and an extrinsic indication level and explains the potential relationship between safety sub-cultures and all safety culture dimensions. Finally, in analyzing safety culture and SMS, it is concluded that positive safety culture is the basis of implementing SMS effectively and that an advanced SMS will improve safety culture all around.

  17. A marketing model: applications for dietetic professionals.

    Science.gov (United States)

    Parks, S C; Moody, D L

    1986-01-01

    Traditionally, dietitians have communicated the availability of their services to the "public at large." The expectation was that the public would respond favorably to nutrition programs simply because there was a consumer need for them. Recently, however, both societal and consumer needs have changed dramatically, making old communication strategies ineffective and obsolete. The marketing discipline has provided a new model and new decision-making tools for many health professionals to use to more effectively make their services known to multiple consumer groups. This article provides one such model as applied to the dietetic profession. The model explores a definition of the business of dietetics, how to conduct an analysis of the environment, and, finally, the use of both in the choice of new target markets. Further, the model discusses the major components of developing a marketing strategy that will help the practitioner to be competitive in the marketplace. Presented are strategies for defining and re-evaluating the mission of the profession, for using future trends to identify new markets and roles for the profession, and for developing services that make the profession more competitive by better meeting the needs of the consumer.

  18. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous time optimization models, and much more. Problems and references at chapter ends. "Excellent introduction." - Journal of the American Statistical Association. Bibliography. 1970 edition.

  19. Modelling and Generating Ajax Applications : A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction

  20. Handbook of mixed membership models and their applications

    CERN Document Server

    Airoldi, Edoardo M; Erosheva, Elena A; Fienberg, Stephen E

    2014-01-01

    In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology.Through examples using real data sets, yo

  1. Adaptive Networks Theory, Models and Applications

    CERN Document Server

    Gross, Thilo

    2009-01-01

    With adaptive, complex networks, the evolution of the network topology and the dynamical processes on the network are equally important and often fundamentally entangled. Recent research has shown that such networks can exhibit a plethora of new phenomena which are ultimately required to describe many real-world networks. Some of those phenomena include robust self-organization towards dynamical criticality, formation of complex global topologies based on simple, local rules, and the spontaneous division of "labor" in which an initially homogenous population of network nodes self-organizes into functionally distinct classes. These are just a few. This book is a state-of-the-art survey of those unique networks. In it, leading researchers set out to define the future scope and direction of some of the most advanced developments in the vast field of complex network science and its applications.

  2. Phenomenological BRDF modeling for engineering applications

    Science.gov (United States)

    Jafolla, James C.; Stokes, Jeffrey A.; Sullivan, Robert J.

    1997-09-01

The application of analytical light scattering techniques for virtual prototyping the optical performance of paint coatings provides an effective tool for optimizing paint design for specific optical requirements. This paper describes the phenomenological basis for the scattering coatings computer aided design (ScatCad) code. The ScatCad code predicts the bidirectional reflectance distribution function (BRDF) and the hemispherical directional reflectance (HDR) of pigmented paint coatings for the purpose of coating design optimization. The code uses techniques for computing the pigment single scattering phase function, multiple scattering radiative transfer, and rough surface scattering to calculate the BRDF and HDR based on the fundamental optical properties of the pigment(s) and binder, pigment number density and size distribution, and surface roughness of the binder-interface and substrate. This is a significant enhancement to the two-flux Kubelka-Munk analysis that has traditionally been used in the coatings industry. Example calculations and comparison with measurements are also presented.

  3. Application of Prognostic Mesoscale Modeling in the Southeast United States

    International Nuclear Information System (INIS)

    Buckley, R.L.

    1999-01-01

    A prognostic model is being used to provide regional forecasts for a variety of applications at the Savannah River Site (SRS). Emergency response dispersion models available at SRS use the space and time-dependent meteorological data provided by this model to supplement local and regional observations. Output from the model is also used locally to aid in forecasting at SRS, and regionally in providing forecasts of the potential time and location of hurricane landfall within the southeast United States

  4. Desublimation process: verification and applications of a theoretical model

    International Nuclear Information System (INIS)

    Eby, R.S.

    1979-01-01

    A theoretical model simulating the simultaneous heat and mass transfer which takes place during the desublimation of a gas to a solid is presented. Desublimer column loading profiles to experimentally verify the model were obtained using a gamma scintillation technique. The data indicate that, if the physical parameters of the desublimed frost material are known, the model can accurately predict the desublimation phenomenon. The usefulness of the model in different engineering applications is also addressed

  5. Business model driven service architecture design for enterprise application integration

    OpenAIRE

    Gacitua-Decar, Veronica; Pahl, Claus

    2008-01-01

    Increasingly, organisations are using a Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI), which is required for the automation of business processes. This paper presents an architecture development process which guides the transition from business models to a service-based software architecture. The process is supported by business reference models and patterns. Firstly, the business process models are enhanced with domain model elements, applicat...

  6. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  7. Fuzzy Stochastic Optimization Theory, Models and Applications

    CERN Document Server

    Wang, Shuming

    2012-01-01

    Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies.   The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...

  8. Atmospheric dispersion models for environmental pollution applications

    International Nuclear Information System (INIS)

    Gifford, F.A.

    1976-01-01

Pollutants are introduced into the air by many of man's activities. The potentially harmful effects these can cause are, broadly speaking, of two kinds: long-term, possibly large-scale and wide-spread chronic effects, including long-term effects on the earth's climate; and acute, short-term effects such as those associated with urban air pollution. This section is concerned with mathematical cloud or plume models describing the role of the atmosphere, primarily in relation to the second of these, the acute effects of air pollution, i.e., those arising from comparatively high concentration levels. The need for such air pollution modeling studies has increased spectacularly as a result of the National Environmental Policy Act of 1969 and, especially, two key court decisions: the Calvert Cliffs decision, and the Sierra Club ruling on environmental non-degradation

  9. Application of Digital Terrain Model to volcanology

    Directory of Open Access Journals (Sweden)

    V. Achilli

    2006-06-01

Three-dimensional reconstruction of the ground surface (Digital Terrain Model, DTM), derived from airborne GPS photogrammetric surveys, is a powerful tool for implementing morphological analysis in remote areas. Highly accurate 3D models, with submeter elevation accuracy, can be obtained from images acquired at photo scales between 1:5000 and 1:20000. Multitemporal DTMs acquired periodically over a volcanic area allow the monitoring of areas affected by crustal deformations and the evaluation of mass balance when large instability phenomena or lava flows have occurred. The work describes the results obtained from the analysis of photogrammetric data collected over Vulcano Island from 1971 to 2001. The data, processed by means of the Digital Photogrammetry Workstation DPW 770, provided DTMs with accuracies ranging from a few centimeters to a few decimeters depending on the geometric image resolution, terrain configuration and quality of the photographs.

  10. Application of pyrolysis models in COCOSYS

    International Nuclear Information System (INIS)

    Klein-Hessling, W.; Roewekamp, M.; Allelein, H.J.

    2001-01-01

For the assessment of the efficiency of severe accident management measures, the simulation of severe accident development, progression and potential consequences in containments of nuclear power plants is required under conditions as realistic as possible. Therefore, the containment code system (COCOSYS) has been developed by GRS. The main objective is to provide a code system on the basis of mechanistic models for the comprehensive simulation of all relevant processes and plant states during severe accidents in the containment of light water reactors, also covering the design basis accidents. In this context the simulation of oil and cable fires is of high priority. These processes strongly depend on the thermal hydraulic boundary conditions. An input definition of the pyrolysis rate by the user is not consistent with the philosophy of COCOSYS. Therefore, a first attempt has been made at the code-internal simulation of the pyrolysis rate and the subsequent combustion process for oil and cable fires. The oil fire model used has been tested against the HDR E41.7 experiment. Because the cable fire model is still under development, a so-called 'simplified cable burning' model has been implemented in COCOSYS and tested against the HDR E42 cable fire experiments. Furthermore, in the frame of the bilateral German-Ukrainian government project INT9131 in the field of fire safety at nuclear power plants (NPP), an exemplary fire hazard analysis (FHA) has been carried out for the cable spreading rooms below the unit control room of a VVER-1000/W-320 type reference plant. (authors)

  11. Applicability Of Resources Optimization Model For Mitigating

    African Journals Online (AJOL)

    Dr A.B.Ahmed

    previous work. The entire model can be summarized as algorithm below. F u ll L en g th. Research. Article. 1 .... performance metric used is the total sum of utilities of all the peers in the system at .... Hua, J. S., Huang, D. C. Yen, S. M. and Chena, C. W. (2012) “A dynamic ... Workshop on Quality of Service: 174-192. Yahaya ...

  12. Applications of Molecular and Materials Modeling

    Science.gov (United States)

    2002-01-01

Chimica Industriale - molecular modeling of solvation, Prof. Jacopo Tomasi (http://www.dcci.unipi.it/attivita/attivita.html; http://www.dcci.unipi.it...solutions/cases/notes/scale.html); BNFL - sorption of gases in zeolites, Dr. Scott L. Owens (http://www.bnfl.co.uk/); BAE (British Aerospace Engineering) - Rare...; permeation of gases; adhesion and interfacial interactions of siloxane networks; chemical reactivity and catalysis; environmental and cosmetics

  13. Robot modelling; Control and applications with software

    Energy Technology Data Exchange (ETDEWEB)

    Ranky, P G; Ho, C Y

    1985-01-01

This book provides a "picture" of robotics covering both the theoretical aspects of modeling and the practical and design aspects of: robot programming; robot tooling and automated hand changing; implementation planning; testing; and software design for robot systems. The authors present an introduction to robotics with a systems approach. They describe not only the tasks relating to a single robot (or arm) but also systems of robots working together on a product or several products.

  14. Development and application of earth system models.

    Science.gov (United States)

    Prinn, Ronald G

    2013-02-26

    The global environment is a complex and dynamic system. Earth system modeling is needed to help understand changes in interacting subsystems, elucidate the influence of human activities, and explore possible future changes. Integrated assessment of environment and human development is arguably the most difficult and most important "systems" problem faced. To illustrate this approach, we present results from the integrated global system model (IGSM), which consists of coupled submodels addressing economic development, atmospheric chemistry, climate dynamics, and ecosystem processes. An uncertainty analysis implies that without mitigation policies, the global average surface temperature may rise between 3.5 °C and 7.4 °C from 1981-2000 to 2091-2100 (90% confidence limits). Polar temperatures, absent policy, are projected to rise from about 6.4 °C to 14 °C (90% confidence limits). Similar analysis of four increasingly stringent climate mitigation policy cases involving stabilization of greenhouse gases at various levels indicates that the greatest effect of these policies is to lower the probability of extreme changes. The IGSM is also used to elucidate potential unintended environmental consequences of renewable energy at large scales. There are significant reasons for attention to climate adaptation in addition to climate mitigation that earth system models can help inform. These models can also be applied to evaluate whether "climate engineering" is a viable option or a dangerous diversion. We must prepare young people to address this issue: The problem of preserving a habitable planet will engage present and future generations. Scientists must improve communication if research is to inform the public and policy makers better.

  15. Voronoi cell patterns: Theoretical model and applications

    Science.gov (United States)

    González, Diego Luis; Einstein, T. L.

    2011-11-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
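
For the 1D case, the Voronoi cell of an interior point simply runs from the midpoint with its left neighbour to the midpoint with its right neighbour. The sketch below is a minimal illustration of that geometric fact, not the authors' fragmentation model; the point count is arbitrary:

```python
import random

def voronoi_cell_sizes_1d(points):
    """Sizes of the 1D Voronoi cells of the interior points: each
    cell spans from the midpoint with the left neighbour to the
    midpoint with the right neighbour, i.e. (right - left) / 2."""
    pts = sorted(points)
    return [(right - left) / 2.0
            for left, _centre, right in zip(pts, pts[1:], pts[2:])]

rng = random.Random(42)
points = [rng.random() for _ in range(10_000)]   # homogeneous points on [0, 1)
sizes = voronoi_cell_sizes_1d(points)
mean_size = sum(sizes) / len(sizes)              # approximately 1 / n
```

A histogram of `sizes` is exactly the kind of cell-size distribution the model above is built to describe analytically.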

  16. Nonlinear Inertia Classification Model and Application

    Directory of Open Access Journals (Sweden)

    Mei Wang

    2014-01-01

The support vector machine (SVM) classification model copes well with large numbers of samples, but the kernel parameter and the punishment factor have a great influence on the quality of the SVM model. Particle swarm optimization (PSO) is an evolutionary search algorithm based on swarm intelligence, which is suitable for parameter optimization. Accordingly, a nonlinear inertia convergence classification model (NICCM) is proposed after the nonlinear inertia convergence PSO (NICPSO) is developed in this paper. The velocity of NICPSO is first defined as the weighted velocity of the inertia PSO, and the inertia factor is selected to be a nonlinear function. NICPSO is used to optimize the kernel parameter and the punishment factor of the SVM. Then, the NICCM classifier is trained by using the optimal punishment factor and the optimal kernel parameter that come from the optimal particle. Finally, NICCM is applied to the classification of the normal state and fault states of an online power cable. It is experimentally proved that the number of iterations for the proposed NICPSO to reach the optimal position decreases from 15 to 5 compared with PSO; the training duration is decreased by 0.0052 s and the recognition precision is increased by 4.12% compared with SVM.
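
A generic PSO with a nonlinearly decaying inertia weight can be sketched as follows. This is an illustrative stand-in, not the paper's NICPSO: the inertia schedule, swarm size, and acceleration coefficients are assumptions, and the quadratic `surrogate` merely stands in for SVM cross-validation error over hypothetical (log C, log gamma) coordinates:

```python
import random

def pso_nonlinear_inertia(objective, bounds, n_particles=20, n_iter=60, seed=1):
    """Minimise `objective` over a box using PSO with a nonlinearly
    (quadratically) decaying inertia weight, from 0.9 down to 0.4."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(n_iter):
        frac = t / (n_iter - 1)
        w = 0.4 + 0.5 * (1.0 - frac) ** 2  # nonlinear inertia schedule
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for SVM cross-validation error over (log C, log gamma);
# its minimum sits at (1.0, -2.0).
def surrogate(p):
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

best, best_val = pso_nonlinear_inertia(surrogate, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In practice `surrogate` would be replaced by an actual cross-validation run of the SVM at the candidate hyperparameters.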

  17. Modeling protein structures: construction and their applications.

    Science.gov (United States)

    Ring, C S; Cohen, F E

    1993-06-01

    Although no general solution to the protein folding problem exists, the three-dimensional structures of proteins are being successfully predicted when experimentally derived constraints are used in conjunction with heuristic methods. In the case of interleukin-4, mutagenesis data and CD spectroscopy were instrumental in the accurate assignment of secondary structure. In addition, the tertiary structure was highly constrained by six cysteines separated by many residues that formed three disulfide bridges. Although the correct structure was a member of a short list of plausible structures, the "best" structure was the topological enantiomer of the experimentally determined conformation. For many proteases, other experimentally derived structures can be used as templates to identify the secondary structure elements. In a procedure called modeling by homology, the structure of a known protein is used as a scaffold to predict the structure of another related protein. This method has been used to model a serine and a cysteine protease that are important in the schistosome and malarial life cycles, respectively. The model structures were then used to identify putative small molecule enzyme inhibitors computationally. Experiments confirm that some of these nonpeptidic compounds are active at concentrations of less than 10 microM.

  18. Ocean modelling aspects for drift applications

    Science.gov (United States)

    Stephane, L.; Pierre, D.

    2010-12-01

Nowadays, many authorities in charge of rescue-at-sea operations lean on operational oceanography products to outline search perimeters. Moreover, current fields estimated with sophisticated ocean forecasting systems can be used as input data for oil spill and drifting-object fate models. This emphasises the necessity of an accurate sea-state forecast with a well-characterized level of reliability. This work focuses on several problems inherent to drift modeling, dealing in the first place with the fidelity of the oceanic current field representation. As we want to discriminate the relevance of a particular physical process or modeling option, the idea is to generate series of current fields with different characteristics and then qualify them in terms of drift prediction efficiency. Benchmark drift scenarios were set up from real surface drifter data collected in the Mediterranean Sea and off the coasts of Angola. The time and space scales that we are interested in are about 72 hr forecasts (the typical timescale communicated in case of crisis), with distance errors that we hope to keep to a few dozen km around the forecast (acceptable for reconnaissance by aircraft). For the ocean prediction, we used regional oceanic configurations based on the NEMO 2.3 code, nested into the Mercator 1/12° operational system. Drift forecasts were computed offline with Mothy (Météo France's oil spill modeling system) and Ariane (B. Blanke, 1997), a Lagrangian diagnostic tool. We were particularly interested in the importance of the horizontal resolution, vertical mixing schemes, and any processes that may impact the surface layer. The aim of the study is to ultimately point at the most suitable set of parameters for drift forecast use inside operational oceanic systems. We are also interested in assessing the relevance of ensemble forecasts compared with deterministic predictions. Several tests showed that mis-described observed trajectories can finally be modelled statistically by using uncertainties

  19. An introduction to queueing theory modeling and analysis in applications

    CERN Document Server

    Bhat, U Narayan

    2015-01-01

    This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a wide interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features: • An introductory chapter including a historical account of the growth of queueing theory over more than 100 years. • A modeling-based approach with emphasis on identification of models. • Rigorous treatment of the foundations of basic models commonly used in applications with appropriate references for advanced topics. • Applications in manufacturing and in computer and communication systems. • A chapter on ...

  20. Initiating Events Modeling for On-Line Risk Monitoring Application

    International Nuclear Information System (INIS)

    Simic, Z.; Mikulicic, V.

    1998-01-01

    In order to make the on-line risk monitoring application of Probabilistic Risk Assessment more complete and realistic, special attention needs to be dedicated to initiating event modeling. Two different issues are of special importance: the first is how to model initiating event frequencies according to the current plant configuration (equipment alignment and out-of-service status) and operating conditions (weather and various activities), and the second is how to preserve dependencies between the initiating event model and the rest of the PRA model. First, the paper will discuss how initiating events can be treated in an on-line risk monitoring application. Second, a practical example of initiating event modeling in EPRI's Equipment Out of Service on-line monitoring tool will be presented. Gains from the application and possible improvements will be discussed in the conclusion. (author)

  1. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including the comparison of constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.

  2. Model-Driven Approach for Body Area Network Application Development.

    Science.gov (United States)

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-05-12

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and re-generation process for the concrete BAN application.

  3. Model-Driven Approach for Body Area Network Application Development

    Science.gov (United States)

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-01-01

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and re-generation process for the concrete BAN application. PMID:27187394

  4. Model-Driven Approach for Body Area Network Application Development

    Directory of Open Access Journals (Sweden)

    Algimantas Venčkauskas

    2016-05-01

    Full Text Available This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and re-generation process for the concrete BAN application.

  5. Dimensions for hearing-impaired mobile application usability model

    Science.gov (United States)

    Nathan, Shelena Soosay; Hussain, Azham; Hashim, Nor Laily; Omar, Mohd Adan

    2017-10-01

    This paper discusses the dimensions derived for a usability model for hearing-impaired mobile applications. General usability models consist of generic dimensions for evaluating mobile applications; the requirements of the hearing-impaired, however, are overlooked and often neglected. As a result, mobile applications developed for the hearing-impaired are left unused. It is also apparent that these usability models do not consider accessibility dimensions matched to the requirements of these special users. This complicates the work of usability practitioners, as well as of academics conducting usability research, when applications are developed for specific user needs. To overcome this issue, the dimensions chosen for the hearing-impaired were checked for alignment with the real needs of hearing-impaired mobile application users. Besides literature studies, requirements for hearing-impaired mobile applications were identified through interviews conducted with hearing-impaired mobile application users, recorded as video outputs and analyzed using NVivo. Finally, a total of 6 of the 15 dimensions gathered were chosen for the proposed model and are presented.

  6. Development and application of degradation modeling to define maintenance practices

    International Nuclear Information System (INIS)

    Stock, D.; Samanta, P.; Vesely, W.

    1994-06-01

    This report presents the development and application of component degradation modeling to analyze degradation effects on reliability and to identify aspects of maintenance practices that mitigate degradation and aging effects. Using continuous time Markov approaches, a component degradation model is discussed that includes information about degradation and maintenance. The component model commonly used in probabilistic risk assessments is a simple case of this general model. The parameters used in the general model have engineering interpretations and can be estimated using data and engineering experience. The generation of equations for specific models, the solution of these equations, and a methodology for estimating the needed parameters are all discussed. Applications in this report show how these models can be used to quantitatively assess the benefits that are expected from maintaining a component, the effects of different maintenance efficiencies, the merits of different maintenance policies, and the interaction of surveillance test intervals with maintenance practices
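The continuous-time Markov approach described above can be sketched in a few lines. In this minimal sketch the three states, the transition rates, and all numerical values are illustrative assumptions, not parameters from the report:

```python
import numpy as np
from scipy.linalg import expm

# Generator matrix Q for a 3-state degradation model (rows sum to zero):
# state 0 = good, 1 = degraded, 2 = failed (absorbing here).
lam_d = 0.01   # degradation rate, good -> degraded [1/h] (illustrative)
lam_f = 0.005  # failure rate, degraded -> failed [1/h]
mu_m = 0.02    # maintenance/repair rate, degraded -> good [1/h]

Q = np.array([
    [-lam_d,           lam_d,   0.0],
    [ mu_m, -(mu_m + lam_f), lam_f],
    [  0.0,             0.0,   0.0],
])

def state_probs(t, p0=(1.0, 0.0, 0.0)):
    """Transient state probabilities P(t) = p0 * expm(Q t)."""
    return np.asarray(p0) @ expm(Q * t)

p = state_probs(1000.0)
print(p)  # probabilities of good / degraded / failed after 1000 h
```

Raising the repair rate `mu_m` (a more efficient maintenance policy) shifts probability mass back toward the good state, which is the kind of quantitative comparison the report describes.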

  7. Molecular modeling and multiscaling issues for electronic material applications

    CERN Document Server

    Iwamoto, Nancy; Yuen, Matthew; Fan, Haibo

    Volume 1 : Molecular Modeling and Multiscaling Issues for Electronic Material Applications provides a snapshot on the progression of molecular modeling in the electronics industry and how molecular modeling is currently being used to understand material performance to solve relevant issues in this field. This book is intended to introduce the reader to the evolving role of molecular modeling, especially seen through the eyes of the IEEE community involved in material modeling for electronic applications.  Part I presents  the role that quantum mechanics can play in performance prediction, such as properties dependent upon electronic structure, but also shows examples how molecular models may be used in performance diagnostics, especially when chemistry is part of the performance issue.  Part II gives examples of large-scale atomistic methods in material failure and shows several examples of transitioning between grain boundary simulations (on the atomistic level)and large-scale models including an example ...

  8. Humanized mouse models: Application to human diseases.

    Science.gov (United States)

    Ito, Ryoji; Takahashi, Takeshi; Ito, Mamoru

    2018-05-01

    Humanized mice are superior to rodents for preclinical evaluation of the efficacy and safety of drug candidates using human cells or tissues. During the past decade, humanized mouse technology has been greatly advanced by the establishment of novel platforms of genetically modified immunodeficient mice. Several human diseases can be recapitulated using humanized mice due to the improved engraftment and differentiation capacity of human cells or tissues. In this review, we discuss current advanced humanized mouse models that recapitulate human diseases including cancer, allergy, and graft-versus-host disease. © 2017 Wiley Periodicals, Inc.

  9. Application of evolutionary games to modeling carcinogenesis.

    Science.gov (United States)

    Swierniak, Andrzej; Krzeslak, Michal

    2013-06-01

    We review a large body of literature on the mathematical modelling of processes related to carcinogenesis and the growth of cancer cell populations based on the theory of evolutionary games. This review, although partly idiosyncratic, covers such major areas of cancer-related phenomena as production of cytotoxins, avoidance of apoptosis, production of growth factors, motility and invasion, and intra- and extracellular signaling. We discuss the results of other authors and append to them some additional results of our own simulations dealing with the possible dynamics and/or spatial distribution of the processes discussed.

  10. Watershed modeling applications in south Texas

    Science.gov (United States)

    Pedraza, Diana E.; Ockerman, Darwin J.

    2012-01-01

    Watershed models can be used to simulate natural and human-altered processes including the flow of water and associated transport of sediment, chemicals, nutrients, and microbial organisms within a watershed. Simulation of these processes is useful for addressing a wide range of water-resource challenges, such as quantifying changes in water availability over time, understanding the effects of development and land-use changes on water resources, quantifying changes in constituent loads and yields over time, and quantifying aquifer recharge temporally and spatially throughout a watershed.

  11. Management Model Applicable to Metallic Materials Industry

    Directory of Open Access Journals (Sweden)

    Adrian Ioana

    2013-02-01

    Full Text Available This paper presents an algorithmic analysis of the marketing mix in metallurgy. It also analyzes the main correlations and the possibilities of optimizing them through efficient management. Thus, both the effect and the importance of the marketing mix components (the four "P"s) are analyzed in the materials industry, as well as their correlations, with the goal of optimizing the specific management. The main correlations between the four marketing mix components for a product within the materials industry are briefly presented, including aspects regarding specific management. Keywords: Management Model, Materials Industry, Marketing Mix, Correlations.

  12. Nuclear physics for applications. A model approach

    International Nuclear Information System (INIS)

    Prussin, S.G.

    2007-01-01

    Written by a researcher and teacher with experience at top institutes in the US and Europe, this textbook provides advanced undergraduates minoring in physics with working knowledge of the principles of nuclear physics. Simplifying models and approaches reveal the essence of the principles involved, with the mathematical and quantum mechanical background integrated in the text where it is needed and not relegated to the appendices. The practicality of the book is enhanced by numerous end-of-chapter problems and solutions available on the Wiley homepage. (orig.)

  13. Applications of Historical Analyses in Combat Modelling

    Science.gov (United States)

    2011-12-01

    causes of those results [2]. Models can be classified into three descriptive types [8], according to the degree of abstraction required: iconic ...

  14. Hydromechanical modelling with application in sealing for underground waste deposition

    Energy Technology Data Exchange (ETDEWEB)

    Hasal, Martin, E-mail: martin.hasal@vsb.cz; Michalec, Zdeněk; Blaheta, Radim [Institute of Geonics AS CR, Studentska 1768, 70800 Ostrava-Poruba (Czech Republic)

    2015-03-10

    Hydro-mechanical models appear in the simulation of many environmental problems related to the construction of engineered barriers against contaminant spreading. The presented work aims at modelling bentonite-sand barriers, which can be used for nuclear waste isolation and similar problems. In particular, we use a hydro-mechanical model coupling unsaturated flow and (nonlinear) elasticity, implement this model in the COMSOL software, and show its application in the simulation of an infiltration test (2D axisymmetric model) and the SEALEX Water test WT1 experiment (3D model). Finally, we discuss the needs and possibilities of parallel high-performance computing.

  15. [Watershed water environment pollution models and their applications: a review].

    Science.gov (United States)

    Zhu, Yao; Liang, Zhi-Wei; Li, Wei; Yang, Yi; Yang, Mu-Yi; Mao, Wei; Xu, Han-Li; Wu, Wei-Xiang

    2013-10-01

    Watershed water environment pollution models are important tools for studying watershed environmental problems. Through quantitative description of the complicated pollution processes of a whole watershed system and its parts, such a model can identify the main sources and migration pathways of pollutants, estimate pollutant loadings, and evaluate their impacts on the water environment, providing a basis for watershed planning and management. This paper reviewed the watershed water environment models widely applied at home and abroad, with a focus on models of pollutant loading (GWLF and PLOAD), water quality of receiving water bodies (QUAL2E and WASP), and watershed models integrating pollutant loadings and water quality (HSPF, SWAT, AGNPS, AnnAGNPS, and SWMM), and introduced the structures, principles, and main characteristics of these models, as well as the limitations in their practical applications. Other water quality models (CE-QUAL-W2, EFDC, and AQUATOX) and watershed models (GLEAMS and MIKE SHE) were also briefly introduced. Through case analysis of the applications of single models and integrated models, the development trend and application prospects of watershed water environment pollution models were discussed.

  16. NUMERICAL MODEL APPLICATION IN ROWING SIMULATOR DESIGN

    Directory of Open Access Journals (Sweden)

    Petr Chmátal

    2016-04-01

    Full Text Available The aim of the research was to carry out a hydraulic design of a rowing/sculling and paddling simulator. Nowadays there are two main approaches in simulator design. The first one includes static water with no artificial movement and counts on specially cut oars to provide the same resistance as in moving water. The second approach, on the other hand, uses pumps or similar devices to force the water to circulate, but both designs share many problems. Such problems affect already built facilities and can be summarized as an unrealistic feeling, unwanted turbulent flow, and a bad velocity profile. Therefore, the goal was to design a new rowing simulator that would provide nature-like conditions for the racers and an unmatched experience. In order to accomplish this challenge, it was decided to use in-depth numerical modeling to solve the hydraulic problems. The general measures for the design were taken in accordance with the space available in the simulator's housing. The entire research was coordinated with other stages of the construction using BIM. The detailed geometry was designed using a numerical model in Ansys Fluent and parametric auto-optimization tools, which led to minimal negative hydraulic phenomena and decreased investment and operational costs due to the decreased hydraulic losses in the system.

  17. Simulation and Modeling Application in Agricultural Mechanization

    Directory of Open Access Journals (Sweden)

    R. M. Hudzari

    2012-01-01

    Full Text Available This experiment was conducted to determine the equations relating the Hue digital values of the fruit surface of the oil palm to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated using the highest-frequency values of the R, G, and B color components from histogram analysis software. A new procedure for monitoring the image pixel values of the oil palm fruit surface color during real-time growth to maturity was developed. The predicted harvest day was calculated based on the developed model of the relationship between Hue values and mesocarp oil content. The simulation model is regressed and predicts the day of harvesting, or the number of days before harvest, of FFB. The results of the mesocarp oil content experiments can be used for real-time oil content determination with the MPOB color meter. A graph to determine the day of harvesting the FFB is presented in this research. The oil was found to start developing in the mesocarp at 65 days before the ripe maturity stage of 75% oil to dry mesocarp.
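The histogram-mode Hue computation described above can be sketched as follows; the `modal_hue` helper and the sample pixel values are hypothetical illustrations, not the authors' code:

```python
import colorsys
from collections import Counter

def modal_hue(pixels):
    """Return the most frequent hue (degrees, 0-360) over RGB pixels.

    `pixels` is an iterable of (r, g, b) tuples with components in 0-255,
    mirroring the histogram-mode approach described in the abstract.
    """
    hues = Counter()
    for r, g, b in pixels:
        h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hues[round(h * 360)] += 1
    return hues.most_common(1)[0][0]

# Mostly-orange sample patch (ripe-fruit-like colours, illustrative values)
patch = [(200, 90, 30)] * 8 + [(180, 60, 20)] * 3
print(modal_hue(patch))
```

In a real pipeline the patch would come from the segmented fruit-surface region of each FFB image, and the modal hue would feed the regression against mesocarp oil content.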

  18. Application of the RADTRAN 5 stop model

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Kanipe, R.L.; Weiner, R.F.

    1997-01-01

    A number of environmental impact analyses with the RADTRAN computer code have shown that dose to persons at stops is one of the largest components of incident-free dose during overland carriage of spent fuel and other radioactive materials (e.g., USDOE, 1994). The input data used in these analyses were taken from a 1983 study that reports actual observations of spent fuel shipments by truck. Early RADTRAN stop models, however, were insufficiently flexible to take advantage of the detailed information in the study. A more recent study of gasoline service stations that specialize in servicing large trucks, which are the most likely stop locations for shipments of Type B packages in the United States, has provided additional, detailed data on refueling/meal stops. The RADTRAN 5 computer code for transportation risk analysis allows exposures at stops to be more fully modeled than have previous releases of the code and is able to take advantage of detailed data. It is the intent of this paper first to compare results from RADTRAN 4 and RADTRAN 5 for the old, low-resolution form of input data, and then to demonstrate what effect the new data and input format have on stop-dose estimates for an individual stop and for a hypothetical shipment route. Finally, these estimated public doses will be contrasted with doses calculated for a special population group -- inspectors

  19. Application of the radtran 5 stop model

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Kanipe, R.L.; Weiner, R.F.

    1998-01-01

    A number of environmental impact analyses with the RADTRAN computer code have shown that dose to persons at stops is one of the largest components of incident-free dose during overland carriage of spent fuel and other radioactive materials. The input data used in these analyses were taken from a 1983 study that reports actual observations of spent fuel shipments by truck. Early RADTRAN stop models, however, were insufficiently flexible to take advantage of the detailed information in the study. A more recent study of gasoline service stations that specialize in servicing large trucks, which are the most likely stop locations for shipments of Type B packages in the United States, has provided additional, detailed data on refueling/meal stops. The RADTRAN 5 computer code for transportation risk analysis allows exposures at stops to be more fully modelled than have previous releases of the code and is able to take advantage of detailed data. It is the intent of this paper first to compare results from RADTRAN 4 and RADTRAN 5 for the old, low-resolution form of input data, and then to demonstrate what effect the new data and input format have on stop-dose estimates for an individual stop and for a hypothetical shipment route. Finally, these estimated public doses will be contrasted with doses calculated for a special population group -- inspectors. (authors)

  20. Radiation repair models for clinical application.

    Science.gov (United States)

    Dale, Roger G

    2018-02-28

    A number of newly emerging clinical techniques involve non-conventional patterns of radiation delivery which require an appreciation of the role played by radiation repair phenomena. This review outlines the main models of radiation repair, focussing on those which are of greatest clinical usefulness and which may be incorporated into biologically effective dose assessments. The need to account for the apparent "slowing-down" of repair rates observed in some normal tissues is also examined, along with a comparison of the relative merits of the formulations which can be used to account for such phenomena. Jack Fowler brought valuable insight to the understanding of radiation repair processes and this article includes reference to his important contributions in this area.
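The repair models Dale reviews are commonly expressed through the linear-quadratic biologically effective dose (BED) with an incomplete-repair correction; a standard mono-exponential formulation (sketched here for two fractions per day, with notation chosen for illustration) is:

```latex
% BED for n fractions of dose d (linear-quadratic model):
\mathrm{BED} = n d \left( 1 + \frac{d}{\alpha/\beta} \right)

% Mono-exponential repair with half-time T_{1/2}:
\mu = \frac{\ln 2}{T_{1/2}}, \qquad \varphi = e^{-\mu\,\Delta t}

% Incomplete repair between two fractions per day separated by an
% interval \Delta t inflates the dose-squared term (here h_2 = \varphi):
\mathrm{BED} = n d \left( 1 + \frac{d\,(1 + h_2)}{\alpha/\beta} \right)
```

Formulations that account for the apparent slowing-down of repair mentioned in the review replace the single rate constant μ above with, for example, reciprocal-time or multi-exponential repair kernels.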

  1. Application of Generic Disposal System Models

    Energy Technology Data Exchange (ETDEWEB)

    Mariner, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Glenn Edward [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sevougian, S. David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Emily [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    This report describes specific GDSA activities in fiscal year 2015 (FY2015) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code (Hammond et al., 2011) and the Dakota uncertainty sampling and propagation code (Adams et al., 2013). Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through the engineered barriers and natural geologic barriers to a well location in an overlying or underlying aquifer. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.

  2. Applications of species distribution modeling to paleobiology

    DEFF Research Database (Denmark)

    Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine Ann

    2011-01-01

    Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i) quantitative and potentially high-resolution predictions of the past organism distributions, (ii) statistically formulated, testable ecological hypotheses regarding past distributions and communities, and (iii) statistical assessment of range determinants. In this article, we provide an overview of applications spanning late-Pleistocene megafaunal extinctions, past community assembly, human paleobiogeography, Holocene paleoecology, and even deep-time biogeography (notably, providing insights into biogeographic dynamics >400 million years ago). We discuss important assumptions and uncertainties that affect the SDM approach to paleobiology.

  3. A review of thermoelectric cooling: Materials, modeling and applications

    International Nuclear Information System (INIS)

    Zhao, Dongliang; Tan, Gang

    2014-01-01

    This study reviews the recent advances of thermoelectric materials, modeling approaches, and applications. Thermoelectric cooling systems have advantages over conventional cooling devices, including compact in size, light in weight, high reliability, no mechanical moving parts, no working fluid, being powered by direct current, and easily switching between cooling and heating modes. In this study, historical development of thermoelectric cooling has been briefly introduced first. Next, the development of thermoelectric materials has been given and the achievements in past decade have been summarized. To improve thermoelectric cooling system's performance, the modeling techniques have been described for both the thermoelement modeling and thermoelectric cooler (TEC) modeling including standard simplified energy equilibrium model, one-dimensional and three-dimensional models, and numerical compact model. Finally, the thermoelectric cooling applications have been reviewed in aspects of domestic refrigeration, electronic cooling, scientific application, and automobile air conditioning and seat temperature control, with summaries for the commercially available thermoelectric modules and thermoelectric refrigerators. It is expected that this study will be beneficial to thermoelectric cooling system design, simulation, and analysis. - Highlights: •Thermoelectric cooling has great prospects with thermoelectric material's advances. •Modeling techniques for both thermoelement and TEC have been reviewed. •Principle thermoelectric cooling applications have been reviewed and summarized
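The "standard simplified energy equilibrium model" mentioned above reduces to two algebraic balances for cooling power and electrical input; a minimal sketch, with illustrative module-level parameters (not values from the review):

```python
def tec_performance(I, T_c, T_h, alpha=0.053, R=1.6, K=0.7):
    """Standard simplified energy-equilibrium model of a Peltier module.

    alpha: Seebeck coefficient [V/K], R: electrical resistance [ohm],
    K: thermal conductance [W/K] -- illustrative module-level values.
    """
    dT = T_h - T_c
    # Cooling power: Peltier heat pumping minus half the Joule heating
    # minus heat conducted back from the hot side.
    Qc = alpha * T_c * I - 0.5 * I**2 * R - K * dT
    # Electrical input power: Seebeck back-EMF work plus Joule heating.
    P = alpha * dT * I + I**2 * R
    cop = Qc / P if P > 0 else float("nan")
    return Qc, P, cop

Qc, P, cop = tec_performance(I=3.0, T_c=288.0, T_h=308.0)
print(f"Qc={Qc:.1f} W, P={P:.1f} W, COP={cop:.2f}")
```

Sweeping the current `I` at a fixed temperature difference reproduces the familiar trade-off: cooling power peaks at a higher current than the coefficient of performance does.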

  4. Models of the Organizational Life Cycle: Applications to Higher Education.

    Science.gov (United States)

    Cameron, Kim S.; Whetten, David A.

    1983-01-01

    A review of models of group and organization life cycle development is provided and the applicability of those models for institutions of higher education are discussed. An understanding of the problems and characteristics present in different life cycle stages can help institutions manage transitions more effectively. (Author/MLW)

  5. IVIM: modeling, experimental validation and application to animal models

    International Nuclear Information System (INIS)

    Fournet, Gabrielle

    2016-01-01

    This PhD thesis is centered on the study of the IVIM ('Intravoxel Incoherent Motion') MRI sequence. This sequence allows for the study of the blood microvasculature, such as the capillaries, arterioles and venules. To be sensitive only to moving groups of spins, diffusion gradients are added before and after the 180 degrees pulse of a spin echo (SE) sequence. The signal component corresponding to spins diffusing in the tissue can be separated from the one related to spins travelling in the blood vessels, which is called the IVIM signal. These two components are weighted by f_IVIM, which represents the volume fraction of blood inside the tissue. The IVIM signal is usually modelled by a mono-exponential (ME) function and characterized by a pseudo-diffusion coefficient, D*. We propose instead a bi-exponential IVIM model consisting of a slow pool, characterized by F_slow and D*_slow, corresponding to the capillaries as in the ME model, and a fast pool, characterized by F_fast and D*_fast, related to larger vessels such as medium-size arterioles and venules. This model was validated experimentally, and more information was retrieved by comparing the experimental signals to a dictionary of simulated IVIM signals. The influence of the pulse sequence, the repetition time and the diffusion encoding time was also studied. Finally, the IVIM sequence was applied to the study of an animal model of Alzheimer's disease. (author)
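A minimal sketch of the bi-exponential IVIM signal model described above; the parameter values are illustrative assumptions, not those fitted in the thesis:

```python
import numpy as np

def ivim_signal(b, f_ivim=0.1, D=1e-3, F_fast=0.4,
                D_slow=10e-3, D_fast=100e-3):
    """Bi-exponential IVIM signal, normalised so S(b=0) = 1.

    Tissue diffusion (coefficient D) plus two perfusion pools: a slow
    pool (~capillaries) and a fast pool (~larger vessels), with
    F_slow = 1 - F_fast. Coefficients are in mm^2/s (illustrative).
    """
    b = np.asarray(b, dtype=float)
    perfusion = ((1 - F_fast) * np.exp(-b * D_slow)
                 + F_fast * np.exp(-b * D_fast))
    return (1 - f_ivim) * np.exp(-b * D) + f_ivim * perfusion

b_values = [0, 10, 50, 200, 800]  # typical diffusion weightings [s/mm^2]
print(ivim_signal(b_values))
```

The perfusion terms decay much faster with b than the tissue term, which is why the IVIM component is estimated from the low-b portion of the signal curve.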

  6. Application of SPSS in ANOVA of biological statistics

    Institute of Scientific and Technical Information of China (English)

    高忠江; 施树良; 李钰

    2008-01-01

    Analysis of variance (ANOVA) is a method commonly used in biological statistics. Using statistical analysis software to perform ANOVA, so as to process research results quickly and scientifically and to reach correct conclusions, is an important step in biological research. Through worked examples, this paper introduces how to carry out ANOVA with the SPSS (Statistical Package for the Social Sciences, or Statistical Products and Service Solutions) data analysis tool, making data analysis and processing fast, accurate and intuitive. Compared with Excel, SPSS offers far more powerful statistical analysis functions, which both improves the efficiency of data processing and reduces experimental cost.
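    The one-way ANOVA computation that the paper carries out in SPSS can be reproduced in a few lines; a minimal sketch in Python with hypothetical three-group data (the levels and values are illustrative, not the paper's):

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a single factor with one observation list per level."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = data.mean()
    k, n = len(groups), data.size
    # Between-group (treatment) sum of squares
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group (error) sum of squares
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    # Mean squares and the F ratio
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f([[4.1, 3.9, 4.3, 4.0],   # level 1
                          [5.2, 5.0, 5.4, 5.1],   # level 2
                          [4.2, 4.1, 4.4, 4.0]])  # level 3
```

    As in SPSS, f_stat would then be compared against the F(k-1, n-k) critical value, or converted to a p-value, to decide whether the level means differ.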

  7. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  8. Solutions manual to accompany finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    A solutions manual to accompany Finite Mathematics: Models and Applications In order to emphasize the main concepts of each chapter, Finite Mathematics: Models and Applications features plentiful pedagogical elements throughout such as special exercises, end notes, hints, select solutions, biographies of key mathematicians, boxed key principles, a glossary of important terms and topics, and an overview of use of technology. The book encourages the modeling of linear programs and their solutions and uses common computer software programs such as LINDO. In addition to extensive chapters on pr

  9. Computer-aided modelling template: Concept and application

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    2015-01-01

    Modelling is an important enabling technology in modern chemical engineering applications. A template-based approach is presented in this work to facilitate the construction and documentation of the models and enable their maintenance for reuse in a wider application range. Based on a model decomposition technique which identifies generic steps and workflow involved, the computer-aided template concept has been developed. This concept is implemented as a software tool, which provides a user-friendly interface for following the workflow steps and guidance through the steps providing additional...

  10. Elastic models application for thorax image registration

    International Nuclear Information System (INIS)

    Correa Prado, Lorena S; Diaz, E Andres Valdez; Romo, Raul

    2007-01-01

    This work consists of the implementation and evaluation of elastic alignment algorithms for biomedical images, which were taken at thorax level and simulated with the 4D NCAT digital phantom. Radial Basis Function (RBF) spatial transformations, a kind of spline which allows carrying out not only global rigid deformations but also local elastic ones, were applied using a point-matching method. The functions applied were Thin Plate Spline (TPS), Multiquadric (MQ), Gaussian and B-Spline, which were evaluated and compared by calculating the Target Registration Error and similarity measures between the registered images (the sum of squared intensity differences (SSD) and the correlation coefficient (CC)). In order to assess the user error incurred in the point-matching and segmentation tasks, two algorithms were also designed to calculate the Fiducial Localization Error. TPS and MQ were demonstrated to perform better than the others. RBFs were shown to represent an adequate model for approximating the deformable behaviour of the thorax. Validation algorithms showed that the user error was not significant

  11. Steam generator asset management model application

    International Nuclear Information System (INIS)

    Pop, M. G.; Shoemaker, P.; Colgan, K.; Griffith, J.

    2008-01-01

    An advanced economic model in SG Asset Management, called B-Factor Methodology Tool, was developed by AREVA NP (Patent Pending), and used during the summer of 2006. The Tool allowed prediction of the future cost for a selected combination of mitigation techniques at a utility, while considering the tube deposit evolution and its effects on their particular SGs. The Tool, which was presented in its basic theoretical form at the ICAPP Meeting in Nice in 2007, has been greatly improved and was applied again at the end of 2007 at another utility. The elements of the B-Factor Methodology are the annual and cumulative net present value, and the annual escalated direct and indirect costs/benefits. All these are relative to a base case for a selected combination of mitigation techniques, considering the tube deposit evolution and its effects on the SG tubing area and SG pressure losses and ultimately on the plant power production. This paper will present the actual progress made in improving the Tool during 2007 so that it successfully predicts the optimum combination of various SG maintenance activities for any given utility. Simulated results of the operation of the B-Factor Methodology Tool for a complex scenario of Asset Management reasoning are also presented. (authors)

  12. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used by finance practitioners for evaluating companies, through simple cross-section regression models which estimate the parameters associated with each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis segregating the sample companies by sector. Extrapolating simple multiples evaluation standards from analysts of the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing-error reduction. The results found, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given the samples' limitations.

  13. A Surface Modeling Paradigm for Electromagnetic Applications in Aerospace Structures

    OpenAIRE

    Jha, RM; Bokhari, SA; Sudhakar, V; Mahapatra, PR

    1989-01-01

    A systematic approach has been developed to model the surfaces encountered in aerospace engineering for EM applications. The basis of this modeling is the quadric canonical shapes which are the coordinate surfaces of the Eisenhart coordinate systems. The building blocks are visualized as sections of quadric cylinders and surfaces of revolution. These truncated quadrics can successfully model realistic aerospace structures, which are termed as hybrid quadrics, of which the satellite launch veh...

  14. Functional Modelling for Fault Diagnosis and its application for NPP

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large-scale systems like nuclear plants is explained. The diagnosis task is analyzed and it is demon...... operating modes. The FBR example illustrates how the modeling development effort can be managed by proper strategies including decomposition and reuse....

  15. Code Development for Control Design Applications: Phase I: Structural Modeling

    International Nuclear Information System (INIS)

    Bir, G. S.; Robinson, M.

    1998-01-01

    The design of integrated controls for a complex system like a wind turbine relies on a system model in an explicit format, e.g., state-space format. Current wind turbine codes focus on turbine simulation and not on system characterization, which is desired for controls design as well as applications like operating turbine model analysis, optimal design, and aeroelastic stability analysis. This paper reviews structural modeling that comprises three major steps: formation of component equations, assembly into system equations, and linearization

  16. Top-Down Enterprise Application Integration with Reference Models

    Directory of Open Access Journals (Sweden)

    Willem-Jan van den Heuvel

    2000-11-01

    Full Text Available For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference model's specification. The actual linking of reference and legacy models is done with a methodology for connecting (new) business objects with (old) legacy systems.

  17. An investigation of modelling and design for software service applications.

    Science.gov (United States)

    Anjum, Maria; Budgen, David

    2017-01-01

    Software services offer the opportunity to use a component-based approach for the design of applications. However, this needs a deeper understanding of how to develop service-based applications in a systematic manner, and of the set of properties that need to be included in the 'design model'. We have used a realistic application to explore systematically how service-based designs can be created and described. We first identified the key properties of an SOA (service oriented architecture) and then undertook a single-case case study to explore its use in the development of a design for a large-scale application in energy engineering, modelling this with existing notations wherever possible. We evaluated the resulting design model using two walkthroughs with both domain and application experts. We were able to successfully develop a design model around the ten properties identified, and to describe it by adapting existing design notations. A component-based approach to designing such systems does appear to be feasible. However, it needs the assistance of a more integrated set of notations for describing the resulting design model.

  18. The use of the barbell cluster ANOVA design for the assessment of Environmental Pollution (1987): a case study, Wigierski National Park, NE Poland

    Energy Technology Data Exchange (ETDEWEB)

    Migaszewski, Zdzislaw M. [Pedagogical University, Institute of Chemistry, Geochemistry and the Environment Div., ul. Checinska 5, 25-020 Kielce (Poland)]. E-mail: zmig@pu.kielce.pl; Galuszka, Agnieszka [Pedagogical University, Institute of Chemistry, Geochemistry and the Environment Div., ul. Checinska 5, 25-020 Kielce (Poland); Paslaski, Piotr [Central Chemical Laboratory of the Polish Geological Institute, ul. Rakowiecka 4, 00-975 Warsaw (Poland)

    2005-01-01

    This report presents an assessment of chemical variability in natural ecosystems of Wigierski National Park (NE Poland) derived from the calculation of geochemical baselines using a barbell cluster ANOVA design. This method enabled us to obtain statistically valid information with a minimum number of samples collected. Results of summary statistics are presented for elemental concentrations in the soil horizons-O (Ol + Ofh), -A and -B, 1- and 2-year old Pinus sylvestris L. (Scots pine) needles, pine bark and Hypogymnia physodes (L.) Nyl. (lichen) thalli, as well as pH and TOC. The scope of this study also encompassed S and C stable isotope determinations and SEM examinations on Scots pine needles. The variability for S and trace metals in soils and plant bioindicators is primarily governed by parent material lithology and to a lesser extent by anthropogenic factors. This fact enabled us to study concentrations that are close to regional background levels. - The barbell cluster ANOVA design allowed the number of samples collected to be reduced to a minimum.
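    The core of any cluster ANOVA design is partitioning total variability into hierarchical levels, so that the sampling effort can be minimized once the dominant level is known. A toy two-level example (hypothetical sites and values, far simpler than the actual barbell layout) shows the computation:

```python
import numpy as np

# Hypothetical two-level hierarchy: 4 sampling sites, 2 replicates each
data = {
    "site1": [10.1, 10.3],
    "site2": [12.0, 11.8],
    "site3": [9.5, 9.7],
    "site4": [11.2, 11.0],
}

values = np.array([v for g in data.values() for v in g])
grand_mean = values.mean()
# Between-site variability (e.g. lithology, anthropogenic factors)
ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2
                 for g in data.values())
# Within-site variability (local heterogeneity, sampling error)
ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum()
                for g in data.values())
```

    Here nearly all of the total sum of squares sits between sites, which in a real design would justify collecting fewer replicates per site, exactly the kind of sample-number reduction the abstract reports.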

  19. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
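    The attenuation of correlations mentioned above follows the classical result r_observed = r_true * sqrt(reliability_x * reliability_y). A small simulation (purely illustrative, not data from the paper) makes the effect concrete:

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Two perfectly correlated true scores, each observed with unit-variance
# measurement noise, giving a reliability of about 0.5 per scale
true_scores = [random.gauss(0, 1) for _ in range(5000)]
x_obs = [t + random.gauss(0, 1) for t in true_scores]
y_obs = [t + random.gauss(0, 1) for t in true_scores]

r_true = corr(true_scores, true_scores)  # 1.0 by construction
r_obs = corr(x_obs, y_obs)               # attenuated toward ~0.5
```

    Even though the underlying constructs are perfectly correlated, the observed correlation drops to roughly sqrt(0.5 * 0.5) = 0.5, which is the attenuation the paper warns about.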

  20. Studying and modelling variable density turbulent flows for industrial applications

    Energy Technology Data Exchange (ETDEWEB)

    Chabard, J.P.; Simonin, O.; Caruso, A.; Delalondre, C.; Dalsecco, S.; Mechitoua, N.

    1996-07-01

    Industrial applications are presented in the various fields of interest for EDF. A first example deals with transferred electric arcs couplings flow and thermal transfer in the arc and in the bath of metal and is related with applications of electricity. The second one is the combustion modelling in burners of fossil power plants. The last one comes from the nuclear power plants and concerns the stratified flows in a nuclear reactor building. (K.A.). 18 refs.

  1. Nuclear model developments in FLUKA for present and future applications

    Science.gov (United States)

    Cerutti, Francesco; Empl, Anton; Fedynitch, Anatoli; Ferrari, Alfredo; Ruben, GarciaAlia; Sala, Paola R.; Smirnov, George; Vlachoudis, Vasilis

    2017-09-01

    The FLUKA code [1-3] is used in research laboratories all around the world for challenging applications spanning a very wide range of energies, projectiles and targets. FLUKA is also extensively used in hadrontherapy research studies and clinical planning systems. In this paper some of the recent developments in the FLUKA nuclear physics models of relevance for very different application fields, including medical physics, are presented. A few examples are shown demonstrating the effectiveness of the upgraded code.

  2. Studying and modelling variable density turbulent flows for industrial applications

    International Nuclear Information System (INIS)

    Chabard, J.P.; Simonin, O.; Caruso, A.; Delalondre, C.; Dalsecco, S.; Mechitoua, N.

    1996-07-01

    Industrial applications are presented in the various fields of interest for EDF. A first example deals with transferred electric arcs couplings flow and thermal transfer in the arc and in the bath of metal and is related with applications of electricity. The second one is the combustion modelling in burners of fossil power plants. The last one comes from the nuclear power plants and concerns the stratified flows in a nuclear reactor building. (K.A.)

  3. Systems modeling and simulation applications for critical care medicine

    Science.gov (United States)

    2012-01-01

    Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area. PMID:22703718

  4. Systems modeling and simulation applications for critical care medicine.

    Science.gov (United States)

    Dong, Yue; Chbat, Nicolas W; Gupta, Ashish; Hadzikadic, Mirsad; Gajic, Ognjen

    2012-06-15

    Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area.

  5. AUTOMOTIVE APPLICATIONS OF EVOLVING TAKAGI-SUGENO-KANG FUZZY MODELS

    Directory of Open Access Journals (Sweden)

    Radu-Emil Precup

    2017-08-01

    Full Text Available This paper presents theoretical and application results concerning the development of evolving Takagi-Sugeno-Kang fuzzy models for two dynamic systems, which are viewed as controlled processes, in the field of automotive applications. The two dynamic systems are the nonlinear dynamics of the longitudinal slip in Anti-lock Braking Systems (ABS) and of the vehicle speed in vehicles with Continuously Variable Transmission (CVT) systems. The evolving Takagi-Sugeno-Kang fuzzy models are obtained as discrete-time fuzzy models by incremental online identification algorithms. The fuzzy models are validated against experimental results in the case of the ABS and against first-principles simulation results in the case of the vehicle with the CVT.
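    A generic Takagi-Sugeno-Kang model of the kind identified in the paper combines rule membership functions with local linear consequents. The sketch below is a textbook first-order TSK evaluation with Gaussian memberships and made-up rule parameters, not the identified ABS or CVT model:

```python
import numpy as np

def tsk_output(x, centers, widths, coeffs):
    """First-order TSK model: Gaussian memberships weight local linear
    consequents y_i = a_i * x + b_i, combined by weighted average."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # Rule firing strengths from Gaussian membership functions
    w = np.exp(-0.5 * ((x[:, None] - centers) / widths) ** 2)
    # Local linear consequents, one per rule
    y_local = coeffs[:, 0] * x[:, None] + coeffs[:, 1]
    # Weighted-average defuzzification
    return (w * y_local).sum(axis=1) / w.sum(axis=1)

# Hypothetical rule base with three rules
centers = np.array([-1.0, 0.0, 1.0])
widths = np.array([0.5, 0.5, 0.5])
coeffs = np.array([[1.0, 0.0],   # rule 1: y = x
                   [2.0, 0.5],   # rule 2: y = 2x + 0.5
                   [1.0, 1.0]])  # rule 3: y = x + 1
y = tsk_output([0.0], centers, widths, coeffs)
```

    An evolving variant would additionally add, merge, or re-estimate rules online as new input-output samples arrive, which is what the incremental identification algorithms in the paper do.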

  6. Algebraic Modeling of Topological and Computational Structures and Applications

    CERN Document Server

    Theodorou, Doros; Stefaneas, Petros; Kauffman, Louis

    2017-01-01

    This interdisciplinary book covers a wide range of subjects, from pure mathematics (knots, braids, homotopy theory, number theory) to more applied mathematics (cryptography, algebraic specification of algorithms, dynamical systems) and concrete applications (modeling of polymers and ionic liquids, video, music and medical imaging). The main mathematical focus throughout the book is on algebraic modeling with particular emphasis on braid groups. The research methods include algebraic modeling using topological structures, such as knots, 3-manifolds, classical homotopy groups, and braid groups. The applications address the simulation of polymer chains and ionic liquids, as well as the modeling of natural phenomena via topological surgery. The treatment of computational structures, including finite fields and cryptography, focuses on the development of novel techniques. These techniques can be applied to the design of algebraic specifications for systems modeling and verification. This book is the outcome of a w...

  7. Reviewing model application to support animal health decision making.

    Science.gov (United States)

    Singer, Alexander; Salman, Mo; Thulke, Hans-Hermann

    2011-04-01

    Animal health is of societal importance as it affects human welfare, and anthropogenic interests shape decision making to assure animal health. Scientific advice to support decision making is manifold. Modelling, as one piece of the scientific toolbox, is appreciated for its ability to describe and structure data, to give insight into complex processes and to predict future outcomes. In this paper we study the application of scientific modelling to support practical animal health decisions. We reviewed the 35 animal health related scientific opinions adopted by the Animal Health and Animal Welfare Panel of the European Food Safety Authority (EFSA). Thirteen of these documents were based on the application of models. The review took two viewpoints, the decision maker's need and the modeller's approach. In the reviewed material three types of modelling questions were addressed by four specific model types. The correspondence between tasks and models underpinned the importance of the modelling question in triggering the modelling approach. End point quantifications were the dominating request from decision makers, implying that prediction of risk is a major need. However, due to knowledge gaps, corresponding modelling studies often shied away from providing exact numbers. Instead, comparative scenario analyses were performed, furthering the understanding of the decision problem and the effects of alternative management options. In conclusion, the most adequate scientific support for decision making - including available modelling capacity - might be expected if the required advice is clearly stated. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    Science.gov (United States)

    Nault, Isaac Michael

    This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or mesoscopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single-particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation can be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elastoplastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.

  9. Hydraulic modeling development and application in water resources engineering

    Science.gov (United States)

    Simoes, Francisco J.; Yang, Chih Ted; Wang, Lawrence K.

    2015-01-01

    The use of modeling has become widespread in water resources engineering and science to study rivers, lakes, estuaries, and coastal regions. For example, computer models are commonly used to forecast anthropogenic effects on the environment, and to help provide advanced mitigation measures against catastrophic events such as natural and dam-break floods. Linking hydraulic models to vegetation and habitat models has expanded their use in multidisciplinary applications to the riparian corridor. Implementation of these models in software packages on personal desktop computers has made them accessible to the general engineering community, and their use has been popularized by the need of minimal training due to intuitive graphical user interface front ends. Models are, however, complex and nontrivial, to the extent that even common terminology is sometimes ambiguous and often applied incorrectly. In fact, many efforts are currently under way in order to standardize terminology and offer guidelines for good practice, but none has yet reached unanimous acceptance. This chapter provides a view of the elements involved in modeling surface flows for the application in environmental water resources engineering. It presents the concepts and steps necessary for rational model development and use by starting with the exploration of the ideas involved in defining a model. Tangible form of those ideas is provided by the development of a mathematical and corresponding numerical hydraulic model, which is given with a substantial amount of detail. The issues of model deployment in a practical and productive work environment are also addressed. The chapter ends by presenting a few model applications highlighting the need for good quality control in model validation.

  10. A complex autoregressive model and application to monthly temperature forecasts

    Directory of Open Access Journals (Sweden)

    X. Gu

    2005-11-01

    Full Text Available A complex autoregressive model was established based on the mathematical derivation of least squares for the complex number domain, which is referred to as complex least squares. The model differs from the conventional approach in which the real and imaginary parts are calculated separately. An application of this new model shows a better forecast than forecasts from other conventional statistical models in predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
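    The idea of fitting an autoregression directly in the complex domain, rather than splitting real and imaginary parts, can be sketched with numpy, whose least-squares solver handles complex arrays natively. The AR(1) order, coefficient and noise level below are illustrative assumptions, not the paper's 160-station setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a complex AR(1) series z_t = a * z_{t-1} + complex noise
a_true = 0.8 + 0.3j  # |a| < 1, so the process is stable
z = np.zeros(2000, dtype=complex)
for t in range(1, z.size):
    noise = rng.normal(scale=0.1) + 1j * rng.normal(scale=0.1)
    z[t] = a_true * z[t - 1] + noise

# Complex least squares in one step: np.linalg.lstsq works directly on
# complex arrays, with no separate handling of real and imaginary parts
X = z[:-1].reshape(-1, 1)
a_hat = np.linalg.lstsq(X, z[1:], rcond=None)[0][0]
```

    The single complex coefficient a_hat captures both an amplitude and a phase rotation per step, information that is lost when the real and imaginary series are regressed independently.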

  11. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with applications in several fields of engineering, like automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, in a multidisciplinary approach. Authors from five countries and 16 different research centers contribute with their expertise in both the fundamentals and real-problem applications, based upon their strong background in modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation, where tools such as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  12. Applications of Nonlinear Dynamics Model and Design of Complex Systems

    CERN Document Server

    In, Visarath; Palacios, Antonio

    2009-01-01

    This edited book is aimed at interdisciplinary, device-oriented applications of nonlinear science theory and methods in complex systems. In particular, applications directed to nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. "Applications of Nonlinear Dynamics: Model and Design of Complex Systems" brings together the work of scientists and engineers that are applying ideas and methods from nonlinear dynamics to design and fabricate complex systems.

  13. An investigation of modelling and design for software service applications

    Science.gov (United States)

    2017-01-01

    Software services offer the opportunity to use a component-based approach for the design of applications. However, this needs a deeper understanding of how to develop service-based applications in a systematic manner, and of the set of properties that need to be included in the ‘design model’. We have used a realistic application to explore systematically how service-based designs can be created and described. We first identified the key properties of an SOA (service oriented architecture) and then undertook a single-case case study to explore its use in the development of a design for a large-scale application in energy engineering, modelling this with existing notations wherever possible. We evaluated the resulting design model using two walkthroughs with both domain and application experts. We were able to successfully develop a design model around the ten properties identified, and to describe it by adapting existing design notations. A component-based approach to designing such systems does appear to be feasible. However, it needs the assistance of a more integrated set of notations for describing the resulting design model. PMID:28489905

  14. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and they quantify utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization, and empirical/analytical modeling.
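    As a rough illustration of the kind of counter-based derivation the abstract describes, an overall CPI and a memory-free CPI_0 can be computed from cycle and instruction totals. The counter names and the stall estimate below are hypothetical, not the authors' actual parameters or formulas:

```python
# Hypothetical hardware-counter totals for one application run
counters = {
    "cycles": 1_500_000_000,
    "instructions": 1_000_000_000,
    "mem_stall_cycles": 300_000_000,  # assumed memory-stall estimate
}

# Overall cycles per instruction
cpi = counters["cycles"] / counters["instructions"]
# CPI_0: CPI with the memory effect removed, as in the abstract
cpi0 = ((counters["cycles"] - counters["mem_stall_cycles"])
        / counters["instructions"])
```

    Comparing cpi against cpi0 separates the architectural (pipeline) component of execution time from the memory component, which is the diagnostic split the paper's formulas aim at.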

  15. How Participatory Should Environmental Governance Be? Testing the Applicability of the Vroom-Yetton-Jago Model in Public Environmental Decision-Making

    Science.gov (United States)

    Lührs, Nikolas; Jager, Nicolas W.; Challies, Edward; Newig, Jens

    2018-02-01

    Public participation is potentially useful to improve public environmental decision-making and management processes. In corporate management, the Vroom-Yetton-Jago normative decision-making model has served as a tool to help managers choose appropriate degrees of subordinate participation for effective decision-making given varying decision-making contexts. But does the model recommend participatory mechanisms that would actually benefit environmental management? This study empirically tests the improved Vroom-Jago version of the model in the public environmental decision-making context. To this end, the key variables of the Vroom-Jago model are operationalized and adapted to a public environmental governance context. The model is tested using data from a meta-analysis of 241 published cases of public environmental decision-making, yielding three main sets of findings: (1) The Vroom-Jago model proves limited in its applicability to public environmental governance due to limited variance in its recommendations. We show that adjustments to key model equations make it more likely to produce meaningful recommendations. (2) We find that in most of the studied cases, public environmental managers (implicitly) employ levels of participation close to those that would have been recommended by the model. (3) An ANOVA revealed that such cases, which conform to model recommendations, generally perform better on stakeholder acceptance and environmental standards of outputs than those that diverge from the model. Public environmental management thus benefits from carefully selected and context-sensitive modes of participation.
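    The ANOVA in finding (3) can be sketched in miniature as a one-way F test comparing an outcome score between cases that conform to the model's recommendation and cases that diverge from it. The scores below are invented for illustration, not the study's data.

    ```python
    def one_way_anova(*groups):
        """F statistic for a one-way ANOVA over the given sample groups."""
        k = len(groups)
        n = sum(len(g) for g in groups)
        grand = sum(sum(g) for g in groups) / n
        means = [sum(g) / len(g) for g in groups]
        ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))    # between
        ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means)) # within
        return (ssb / (k - 1)) / (ssw / (n - k))

    conforming = [7.1, 6.8, 7.5, 7.0, 6.9, 7.3]  # invented acceptance scores
    diverging = [6.2, 6.0, 6.5, 5.9, 6.4, 6.1]
    f_stat = one_way_anova(conforming, diverging)  # compare against F(1, 10)
    ```

    A large F relative to the F(k − 1, n − k) critical value indicates that group means differ more than within-group noise would explain.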

  16. Animal models of enterovirus 71 infection: applications and limitations

    Science.gov (United States)

    2014-01-01

    Human enterovirus 71 (EV71) has emerged as a neuroinvasive virus that is responsible for several outbreaks in the Asia-Pacific region over the past 15 years. Appropriate animal models are needed to understand EV71 neuropathogenesis better and to facilitate the development of effective vaccines and drugs. Non-human primate models have been used to characterize and evaluate the neurovirulence of EV71 after the early outbreaks in the late 1990s. However, these models were not suitable for assessing the neurovirulence level of the virus and were associated with ethical and economic difficulties in terms of broad application. Several strategies have been applied to develop mouse models of EV71 infection, including strategies that employ virus adaption and immunodeficient hosts. Although these mouse models do not closely mimic human disease, they have been applied to studies of the pathogenesis, treatment and prevention of the disease. EV71 receptor-transgenic mouse models have recently been developed and have significantly advanced our understanding of the biological features of the virus and the host-parasite interactions. Overall, each of these models has advantages and disadvantages, and these models are differentially suited for studies of EV71 pathogenesis and/or the pre-clinical testing of antiviral drugs and vaccines. In this paper, we review the characteristics, applications and limitations of these EV71 animal models, including non-human primate and mouse models. PMID:24742252

  17. Electromagnetic Modelling of MMIC CPWs for High Frequency Applications

    Science.gov (United States)

    Sinulingga, E. P.; Kyabaggu, P. B. K.; Rezazadeh, A. A.

    2018-02-01

    The theoretical electrical characteristics of components can be realised through modelling with computer-aided design (CAD) simulation tools. If the simulation model provides the expected characteristics, the fabrication process of a Monolithic Microwave Integrated Circuit (MMIC) can be performed for experimental verification purposes, and improvements can be suggested before mass fabrication takes place. This research concentrates on development of MMIC technology by providing accurate predictions of the characteristics of MMIC components using an improved Electromagnetic (EM) modelling technique. The knowledge acquired from the modelling and characterisation process in this work can be adopted by circuit designers for various high frequency applications.

  18. Modelling of a Hybrid Energy System for Autonomous Application

    Directory of Open Access Journals (Sweden)

    Yang He

    2013-10-01

    A hybrid energy system (HES) is a trending power supply solution for autonomous devices. With the help of an accurate system model, HES development can be efficient and well directed. Despite the availability of various precise unit models, complete HES system models are rarely developed. This paper proposes a system modelling approach which applies power flux conservation as the governing equation and adapts and modifies unit models of solar cells, piezoelectric generators, a Li-ion battery and a super-capacitor. A generalized power harvest, storage and management strategy is also suggested to adapt to various application scenarios.
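    The power-flux-conservation governing equation can be sketched as a time-stepped energy balance: harvested power minus load power changes the state of charge of the storage, clipped to its physical limits. The unit models are abstracted away and all numbers below are placeholders, not the paper's.

    ```python
    def simulate_soc(harvest_w, load_w, capacity_wh=1.0, soc0=0.5, dt_h=1.0):
        """Step a storage state of charge under power flux conservation."""
        soc = soc0
        for p_h, p_l in zip(harvest_w, load_w):
            soc += (p_h - p_l) * dt_h / capacity_wh  # net power flux into storage
            soc = min(1.0, max(0.0, soc))            # clip to physical limits
        return soc

    final_soc = simulate_soc(harvest_w=[0.2, 0.3, 0.0], load_w=[0.1, 0.1, 0.2])
    ```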

  19. Constitutive Modeling of Geomaterials Advances and New Applications

    CERN Document Server

    Zhang, Jian-Min; Zheng, Hong; Yao, Yangping

    2013-01-01

    The Second International Symposium on Constitutive Modeling of Geomaterials: Advances and New Applications (IS-Model 2012), is to be held in Beijing, China, during October 15-16, 2012. The symposium is organized by Tsinghua University, the International Association for Computer Methods and Advances in Geomechanics (IACMAG), the Committee of Numerical and Physical Modeling of Rock Mass, Chinese Society for Rock Mechanics and Engineering, and the Committee of Constitutive Relations and Strength Theory, China Institution of Soil Mechanics and Geotechnical Engineering, China Civil Engineering Society. This Symposium follows the first successful International Workshop on Constitutive Modeling held in Hong Kong, which was organized by Prof. JH Yin in 2007.   Constitutive modeling of geomaterials has been an active research area for a long period of time. Different approaches have been used in the development of various constitutive models. A number of models have been implemented in the numerical analyses of geote...

  20. Distributionally Robust Return-Risk Optimization Models and Their Applications

    Directory of Open Access Journals (Sweden)

    Li Yang

    2014-01-01

    Based on the risk control of conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. It is difficult to solve them directly. Using the conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the investment strategy is safe.
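    The controlled risk measure, conditional value-at-risk, has a simple empirical form worth keeping in mind: the mean of the worst (1 − α) fraction of losses. The sample below is invented; the paper's contribution is bounding this quantity over a whole family of distributions rather than a single sample.

    ```python
    def cvar(losses, alpha=0.95):
        """Empirical CVaR: average loss in the worst (1 - alpha) tail."""
        tail_n = max(1, int(round(len(losses) * (1 - alpha))))
        tail = sorted(losses, reverse=True)[:tail_n]
        return sum(tail) / len(tail)

    losses = [-0.02, 0.01, 0.03, -0.01, 0.05, 0.08, 0.00, 0.02, 0.04, 0.10]
    risk = cvar(losses, alpha=0.80)  # mean of the two worst losses
    ```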

  1. 12th Workshop on Stochastic Models, Statistics and Their Applications

    CERN Document Server

    Rafajłowicz, Ewaryst; Szajowski, Krzysztof

    2015-01-01

    This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.

  2. Mathematical and numerical foundations of turbulence models and applications

    CERN Document Server

    Chacón Rebollo, Tomás

    2014-01-01

    With applications to climate, technology, and industry, the modeling and numerical simulation of turbulent flows are rich with history and modern relevance. The complexity of the problems that arise in the study of turbulence requires tools from various scientific disciplines, including mathematics, physics, engineering, and computer science. Authored by two experts in the area with a long history of collaboration, this monograph provides a current, detailed look at several turbulence models from both the theoretical and numerical perspectives. The k-epsilon, large-eddy simulation, and other models are rigorously derived and their performance is analyzed using benchmark simulations for real-world turbulent flows. Mathematical and Numerical Foundations of Turbulence Models and Applications is an ideal reference for students in applied mathematics and engineering, as well as researchers in mathematical and numerical fluid dynamics. It is also a valuable resource for advanced graduate students in fluid dynamics,...

  3. Applicability of models to estimate traffic noise for urban roads.

    Science.gov (United States)

    Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M

    2015-01-01

    Traffic noise is a highly relevant environmental impact in cities. Models to estimate traffic noise, in turn, can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are more appropriate to estimate traffic noise in urban areas, since several available models were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count and speed were carried out on five arterial urban roads of a Brazilian city. Together with geometric measurements of width of lanes and distance from noise meter to lanes, these data were input into several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing mean differences in noise between estimations and measurements is presented, to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed, while other models are identified as applicable to noise estimations on urban roads in a condition of continuous flow. Key issues to apply such models to urban roads are highlighted.
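    Because Leq is an energy average, estimation errors in dB(A) must be compared on a logarithmic-energy scale rather than arithmetically. A minimal sketch of the Leq computation (the sample values are invented, not the study's measurements):

    ```python
    import math

    def leq(levels_db):
        """Equivalent continuous sound level: energy average of dB(A) samples."""
        mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
        return 10 * math.log10(mean_energy)

    samples = [69.0, 72.0, 75.0, 79.0]  # invented short-term levels for one road
    leq_road = leq(samples)  # dominated by the loudest intervals
    ```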

  4. Applications of system dynamics modelling to support health policy.

    Science.gov (United States)

    Atkinson, Jo-An M; Wells, Robert; Page, Andrew; Dominello, Amanda; Haines, Mary; Wilson, Andrew

    2015-07-09

    The value of systems science modelling methods in the health sector is increasingly being recognised. Of particular promise is the potential of these methods to improve operational aspects of healthcare capacity and delivery, analyse policy options for health system reform and guide investments to address complex public health problems. Because it lends itself to a participatory approach, system dynamics modelling has been a particularly appealing method that aims to align stakeholder understanding of the underlying causes of a problem and achieve consensus for action. The aim of this review is to determine the effectiveness of system dynamics modelling for health policy, and explore the range and nature of its application. A systematic search was conducted to identify articles published up to April 2015 from the PubMed, Web of Knowledge, Embase, ScienceDirect and Google Scholar databases. The grey literature was also searched. Papers eligible for inclusion were those that described applications of system dynamics modelling to support health policy at any level of government. Six papers were identified, comprising eight case studies of the application of system dynamics modelling to support health policy. No analytic studies were found that examined the effectiveness of this type of modelling. Only three examples engaged multidisciplinary stakeholders in collective model building. Stakeholder participation in model building reportedly facilitated development of a common 'mental map' of the health problem, resulting in consensus about optimal policy strategy and garnering support for collaborative action. The paucity of relevant papers indicates that, although the volume of descriptive literature advocating the value of system dynamics modelling is considerable, its practical application to inform health policy making is yet to be routinely applied and rigorously evaluated. 
Advances in software are allowing the participatory model building approach to be extended to

  5. Applications of computational modeling in metabolic engineering of yeast

    DEFF Research Database (Denmark)

    Kerkhoven, Eduard J.; Lahtvee, Petri-Jaan; Nielsen, Jens

    2015-01-01

    a preferred flux distribution. These methods point to strategies for altering gene expression; however, fluxes are often controlled by post-transcriptional events. Moreover, GEMs are usually not taking into account metabolic regulation, thermodynamics and enzyme kinetics. To facilitate metabolic engineering......, it is necessary to expand the modeling of metabolism to consider kinetics of individual processes. This review will give an overview about models available for metabolic engineering of yeast and discusses their applications....

  6. Application of postured human model for SAR measurements

    Science.gov (United States)

    Vuchkovikj, M.; Munteanu, I.; Weiland, T.

    2013-07-01

    In the last two decades, the increasing number of electronic devices used in day-to-day life led to a growing interest in the study of the electromagnetic field interaction with biological tissues. The design of medical devices and wireless communication devices such as mobile phones benefits greatly from bio-electromagnetic simulations in which digital human models are used. The digital human models currently available have an upright position, which limits the research activities in realistic scenarios, where postured human bodies must be considered. For this reason, a software application called "BodyFlex for CST STUDIO SUITE" was developed. In its current version, this application can deform the voxel-based human model named HUGO (Dipp GmbH, 2010) to allow the generation of common postures that people use in normal life, ensuring the continuity of tissues and conserving the mass to an acceptable level. This paper describes the enhancement of the "BodyFlex" application, which is related to the movements of the forearm and the wrist of a digital human model. One electromagnetic application in which forearm and wrist movement of a voxel-based human model is significant is the measurement of the specific absorption rate (SAR) when the model is exposed to a radio-frequency electromagnetic field produced by a mobile phone. Current SAR measurements of the exposure from mobile phones are performed with the SAM (Specific Anthropomorphic Mannequin) phantom, which is filled with a dispersive but homogeneous material. We are interested in how the SAR values change if a realistic inhomogeneous human model is used. To this aim, two human models, a homogeneous and an inhomogeneous one, are used in two simulation scenarios to examine the differences in the resulting SAR values.

  7. On the limits of application of the Stephens model

    International Nuclear Information System (INIS)

    Issa, A.; Piepenbring, R.

    1977-01-01

    The limits of the rotation alignment model of Stephens are studied. The conditions of applicability of the assumption of constant j for a unique parity isolated sub-shell (extended to N=4 and N=3) are discussed and explanations are given. A correct treatment of the eigenstates of the intrinsic motion allows however a simple extension of the model to non-isolated sub-shells without enlarging the basis of diagonalisation [fr

  8. Polynomial model inversion control: numerical tests and applications

    OpenAIRE

    Novara, Carlo

    2015-01-01

    A novel control design approach for general nonlinear systems is described in this paper. The approach is based on the identification of a polynomial model of the system to control and on the on-line inversion of this model. Extensive simulations are carried out to test the numerical efficiency of the approach. Numerical examples of applicative interest are presented, concerned with control of the Duffing oscillator, control of a robot manipulator and insulin regulation in a type 1 diabetic p...

  9. DSC, FT-IR, NIR, NIR-PCA and NIR-ANOVA for determination of chemical stability of diuretic drugs: impact of excipients

    Directory of Open Access Journals (Sweden)

    Gumieniczek Anna

    2018-03-01

    It is well known that drugs can directly react with excipients. In addition, excipients can be a source of impurities that either directly react with drugs or catalyze their degradation. Thus, binary mixtures of three diuretics, torasemide, furosemide and amiloride, with different excipients, i.e. citric acid anhydrous, povidone K25 (PVP), magnesium stearate (Mg stearate), lactose, D-mannitol, glycine, calcium hydrogen phosphate anhydrous (CaHPO4) and starch, were examined to detect interactions. High temperature and humidity or UV/VIS irradiation were applied as stressing conditions. Differential scanning calorimetry (DSC), FT-IR and NIR were used to adequately collect information. In addition, chemometric assessments of NIR signals with principal component analysis (PCA) and ANOVA were applied.
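    The chemometric step, PCA of NIR signals, projects each spectrum onto directions of maximal variance. A minimal two-band sketch using the closed-form eigendecomposition of a 2×2 covariance matrix; the two "absorbance" series below are invented toy data, not NIR measurements.

    ```python
    import math

    def first_pc(xs, ys):
        """Leading eigenvector and eigenvalue of the 2x2 sample covariance matrix."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
        syy = sum((y - my) ** 2 for y in ys) / (n - 1)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
        # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
        lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
        vx, vy = lam - syy, sxy  # unnormalized eigenvector for lam
        norm = math.hypot(vx, vy)
        return (vx / norm, vy / norm), lam

    # Perfectly correlated toy bands: the first PC should point along (1, 2).
    (pc_x, pc_y), variance = first_pc([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
    ```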

  10. Geospatial application of the Water Erosion Prediction Project (WEPP) model

    Science.gov (United States)

    D. C. Flanagan; J. R. Frankenberger; T. A. Cochrane; C. S. Renschler; W. J. Elliot

    2013-01-01

    At the hillslope profile and/or field scale, a simple Windows graphical user interface (GUI) is available to easily specify the slope, soil, and management inputs for application of the USDA Water Erosion Prediction Project (WEPP) model. Likewise, basic small watershed configurations of a few hillslopes and channels can be created and simulated with this GUI. However,...

  11. Application of Simple CFD Models in Smoke Ventilation Design

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter Vilhelm; la Cour-Harbo, Hans

    2004-01-01

    The paper examines the possibilities of using simple CFD models in practical smoke ventilation design. The aim is to assess if it is possible with a reasonable accuracy to predict the behaviour of smoke transport in case of a fire. A CFD code mainly applicable for “ordinary” ventilation design...

  12. Application of mixed models for the assessment genotype and ...

    African Journals Online (AJOL)

    Application of mixed models for the assessment genotype and environment interactions in cotton ( Gossypium hirsutum ) cultivars in Mozambique. ... The cultivars ISA 205, STAM 42 and REMU 40 showed superior productivity when they were selected by the Harmonic Mean of Genotypic Values (HMGV) criterion in relation ...

  13. Application of multilinear regression analysis in modeling of soil ...

    African Journals Online (AJOL)

    The application of Multi-Linear Regression Analysis (MLRA) model for predicting soil properties in Calabar South offers a technical guide and solution in foundation designs problems in the area. Forty-five soil samples were collected from fifteen different boreholes at a different depth and 270 tests were carried out for CBR, ...

  14. Mathematical annuity models application in cash flow analysis ...

    African Journals Online (AJOL)

    Mathematical annuity models application in cash flow analysis. ... We also compare the cost efficiency between Amortisation and Sinking fund loan repayment as prevalent in financial institutions. Keywords: Annuity, Amortisation, Sinking Fund, Present and Future Value Annuity, Maturity date and Redemption value.

  15. Application of the rainfall infiltration breakthrough (RIB) model for ...

    African Journals Online (AJOL)

    Application of the rainfall infiltration breakthrough (RIB) model for groundwater recharge estimation in west coastal South Africa. ... the data from Oudebosch with different rainfall and groundwater abstraction inputs are simulated to explore individual effects on water levels as well as recharge rate estimated on a daily basis.

  16. Application of Markovian model to school enrolment projection ...

    African Journals Online (AJOL)

    Application of Markovian model to school enrolment projection process. VU Ekhosuehi, AA Osagiede. Abstract. No Abstract. Global Journal of Mathematical Sciences Vol. 5(1) 2006: 9-16.

  17. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  18. Credibilistic programming an introduction to models and applications

    CERN Document Server

    2014-01-01

    It provides a fuzzy programming approach to solve real-life decision problems in a fuzzy environment. Within the framework of credibility theory, it provides a self-contained, comprehensive and up-to-date presentation of fuzzy programming models, algorithms and applications in portfolio analysis.

  19. Applicability of the PROSPECT model for Norway spruce needles

    NARCIS (Netherlands)

    Malenovsky, Z.; Albrechtova, J.; Lhotakova, Z.; Zurita Milla, R.; Clevers, J.G.P.W.; Schaepman, M.E.; Cudlin, P.

    2006-01-01

    The potential applicability of the leaf radiative transfer model PROSPECT (version 3.01) was tested for Norway spruce (Picea abies (L.) Karst.) needles collected from stress resistant and resilient trees. Direct comparison of the measured and simulated leaf optical properties between 450–1000 nm

  20. An application of artificial intelligence for rainfall–runoff modeling

    Indian Academy of Sciences (India)

    This study proposes an application of two techniques of artificial intelligence (AI) for rainfall–runoff modeling: the artificial neural networks (ANN) and the evolutionary computation (EC). Two different ANN techniques, the feed forward back propagation (FFBP) and generalized regression neural network (GRNN) methods ...

  1. An Application Of Receptor Modeling To Identify Airborne Particulate ...

    African Journals Online (AJOL)

    An Application Of Receptor Modeling To Identify Airborne Particulate Sources In Lagos, Nigeria. FS Olise, OK Owoade, HB Olaniyi. Abstract. There have been no clear demarcations between industrial and residential areas of Lagos with focus on industry as the major source. There is need to identify potential source types in ...

  2. Crop model usefulness in drylands of southern Africa: an application ...

    African Journals Online (AJOL)

    Data limitations in southern Africa frequently hinder adequate assessment of crop models before application. ... three locations to represent varying cropping and physical conditions in southern Africa, i.e. maize and sorghum (Mohale's Hoek, Lesotho and Big Bend, Swaziland) and maize and groundnut (Lilongwe, Malawi).

  3. A Case Study Application Of Time Study Model In Paint ...

    African Journals Online (AJOL)

    This paper presents a case study in the development and application of a time study model in a paint manufacturing company. The organization specializes in the production of different grades of paint and paint containers. The paint production activities include; weighing of raw materials, drying of raw materials, dissolving ...

  4. A spatial haplotype copying model with applications to genotype imputation.

    Science.gov (United States)

    Yang, Wen-Yun; Hormozdiari, Farhad; Eskin, Eleazar; Pasaniuc, Bogdan

    2015-05-01

    Ever since its introduction, the haplotype copy model has proven to be one of the most successful approaches for modeling genetic variation in human populations, with applications ranging from ancestry inference to genotype phasing and imputation. Motivated by coalescent theory, this approach assumes that any chromosome (haplotype) can be modeled as a mosaic of segments copied from a set of chromosomes sampled from the same population. At the core of the model is the assumption that any chromosome from the sample is equally likely to contribute a priori to the copying process. Motivated by recent works that model genetic variation in a geographic continuum, we propose a new spatial-aware haplotype copy model that jointly models geography and the haplotype copying process. We extend hidden Markov models of haplotype diversity such that at any given location, haplotypes that are closest in the genetic-geographic continuum map are a priori more likely to contribute to the copying process than distant ones. Through simulations starting from the 1000 Genomes data, we show that our model achieves superior accuracy in genotype imputation over the standard spatial-unaware haplotype copy model. In addition, we show the utility of our model in selecting a small personalized reference panel for imputation that leads to both improved accuracy as well as to a lower computational runtime than the standard approach. Finally, we show our proposed model can be used to localize individuals on the genetic-geographical map on the basis of their genotype data.
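    The copying process can be sketched as a tiny Li-and-Stephens-style HMM forward pass with a uniform switch prior over reference haplotypes, i.e. the "spatial-unaware" baseline that the paper extends by re-weighting that prior with genetic-geographic distance. All haplotypes and parameter values below are toy choices for illustration.

    ```python
    def copying_likelihood(target, refs, switch=0.1, miscopy=0.05):
        """Forward likelihood of a target haplotype as a mosaic of references."""
        k = len(refs)
        emit = lambda a, b: 1 - miscopy if a == b else miscopy
        f = [emit(r[0], target[0]) / k for r in refs]  # uniform prior over refs
        for s in range(1, len(target)):
            tot = sum(f)
            # either stay on the same reference, or switch uniformly to any reference
            f = [(f[j] * (1 - switch) + tot * switch / k) * emit(refs[j][s], target[s])
                 for j in range(k)]
        return sum(f)

    refs = ["0101", "1100"]
    well_explained = copying_likelihood("0101", refs)   # matches refs[0] exactly
    poorly_explained = copying_likelihood("1010", refs) # mismatches both refs
    ```

    The spatial-aware variant would replace the uniform `switch / k` term with per-reference switch probabilities that decay with geographic distance.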

  5. Degradation modeling with application to aging and maintenance effectiveness evaluations

    International Nuclear Information System (INIS)

    Samanta, P.K.; Hsu, F.; Subudhi, M.; Vesely, W.E.

    1990-01-01

    This paper describes a modeling approach to analyze component degradation and failure data to understand the aging process of components. As used here, degradation modeling is the analysis of information on component degradation in order to develop models of the process and its implications. This particular modeling focuses on the analysis of the times of component degradations, to model how the rate of degradation changes with the age of the component. The methodology presented also discusses the effectiveness of maintenance as applicable to aging evaluations. The specific applications which are performed show quantitative models of component degradation rates and component failure rates from plant-specific data. The statistical techniques which are developed and applied allow aging trends to be effectively identified in the degradation data, and in the failure data. Initial estimates of the effectiveness of maintenance in limiting degradations from becoming failures also are developed. These results are important first steps in degradation modeling, and show that degradation can be modeled to identify aging trends. 2 refs., 8 figs
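    At its simplest, the aging-trend identification described here amounts to a least-squares slope of degradation occurrences against component age; a positive slope flags an increasing degradation rate. The counts below are invented, not plant-specific data.

    ```python
    def slope(xs, ys):
        """Ordinary least-squares slope of ys on xs."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    ages = [1, 2, 3, 4, 5, 6]      # component age in years
    counts = [2, 3, 3, 5, 6, 8]    # degradation events observed per year
    trend = slope(ages, counts)    # > 0 suggests an aging trend
    ```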

  6. Degradation modeling with application to aging and maintenance effectiveness evaluations

    International Nuclear Information System (INIS)

    Samanta, P.K.; Vesely, W.E.; Hsu, F.; Subudhi, M.

    1991-01-01

    This paper describes a modeling approach to analyze light water reactor component degradation and failure data to understand the aging process of components. As used here, degradation modeling is the analysis of information on component degradation in order to develop models of the process and its implications. This particular modeling focuses on the analysis of the times of component degradations, to model how the rate of degradation changes with the age of the component. The methodology presented also discusses the effectiveness of maintenance as applicable to aging evaluations. The specific applications which are performed show quantitative models of component degradation rates and component failure rates from plant-specific data. The statistical techniques which are developed and applied allow aging trends to be effectively identified in the degradation data, and in the failure data. Initial estimates of the effectiveness of maintenance in limiting degradations from becoming failures also are developed. These results are important first steps in degradation modeling, and show that degradation can be modeled to identify aging trends

  7. FUNCTIONAL MODELLING FOR FAULT DIAGNOSIS AND ITS APPLICATION FOR NPP

    Directory of Open Access Journals (Sweden)

    MORTEN LIND

    2014-12-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions which is represented in functional modelling. Multilevel flow modelling (MFM), which is a method for functional modelling, is introduced briefly and illustrated with a cooling system example. The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFMSuite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about operating modes. The FBR example illustrates how the modelling development effort can be managed by proper strategies including decomposition and reuse.

  8. Functional Modelling for fault diagnosis and its application for NPP

    Energy Technology Data Exchange (ETDEWEB)

    Lind, Morten; Zhang, Xin Xin [Dept. of Electrical Engineering, Technical University of Denmark, Kongens Lyngby (Denmark)

    2014-12-15

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions which is represented in functional modelling. Multilevel flow modelling (MFM), which is a method for functional modelling, is introduced briefly and illustrated with a cooling system example. The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFMSuite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about operating modes. The FBR example illustrates how the modelling development effort can be managed by proper strategies including decomposition and reuse.

  9. Functional Modelling for fault diagnosis and its application for NPP

    International Nuclear Information System (INIS)

    Lind, Morten; Zhang, Xin Xin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions which is represented in functional modelling. Multilevel flow modelling (MFM), which is a method for functional modelling, is introduced briefly and illustrated with a cooling system example. The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFMSuite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about operating modes. The FBR example illustrates how the modelling development effort can be managed by proper strategies including decomposition and reuse.

  10. Cellular potts models multiscale extensions and biological applications

    CERN Document Server

    Scianna, Marco

    2013-01-01

    A flexible, cell-level, and lattice-based technique, the cellular Potts model accurately describes the phenomenological mechanisms involved in many biological processes. Cellular Potts Models: Multiscale Extensions and Biological Applications gives an interdisciplinary, accessible treatment of these models, from the original methodologies to the latest developments. The book first explains the biophysical bases, main merits, and limitations of the cellular Potts model. It then proposes several innovative extensions, focusing on ways to integrate and interface the basic cellular Potts model at the mesoscopic scale with approaches that accurately model microscopic dynamics. These extensions are designed to create a nested and hybrid environment, where the evolution of a biological system is realistically driven by the constant interplay and flux of information between the different levels of description. Through several biological examples, the authors demonstrate a qualitative and quantitative agreement with t...

  11. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, were presented in the first part of the paper. In this part, they are applied for testing modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools were first used to identify the parts of the model that can really be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement was finally obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  12. Application of Markowitz Model on Romanian Stock Market

    Directory of Open Access Journals (Sweden)

    Zavera Ioana Coralia

    2017-04-01

    Full Text Available Performance evaluation of financial instruments has become a concern for more and more economists, while security trading activities have developed over time. "Modern portfolio theory" comprises statistical and mathematical models which describe various ways to evaluate and, especially, to analyse the profitability and risk of these portfolios. This article offers an application of this type of model on the Romanian stock market, the Markowitz model, by focusing on portfolios comprising three securities, and determining the efficient frontier and the minimum variance portfolio.
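
As a sketch of the minimum-variance calculation the abstract refers to, the closed-form weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1) can be computed directly. The covariance matrix below is illustrative, not the paper's Romanian stock data.

```python
import numpy as np

# Hypothetical annualized covariance matrix for three securities
# (illustrative values only; the paper's actual market data are not given here).
cov = np.array([
    [0.040, 0.006, 0.012],
    [0.006, 0.025, 0.004],
    [0.012, 0.004, 0.030],
])

ones = np.ones(3)
inv = np.linalg.inv(cov)

# Global minimum-variance portfolio: w = inv(Cov) @ 1 / (1' @ inv(Cov) @ 1)
w = inv @ ones / (ones @ inv @ ones)
variance = float(w @ cov @ w)

print("weights:", w.round(4), "variance:", round(variance, 6))
```

Tracing out the efficient frontier then amounts to repeating this optimisation under a sequence of target-return constraints.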

  13. Equicontrollability and its application to model-following and decoupling.

    Science.gov (United States)

    Curran, R. T.

    1971-01-01

    Discussion of 'model following,' a term used to describe a class of problems characterized by having two dynamic systems, generically known as the 'plant' and the 'model,' it being required to find a controller to attach to the plant so as to make the resultant compensated system behave, in an input/output sense, in the same way as the model. The approach presented to the problem takes a structural point of view. The result is a complex but informative definition which solves the problem as posed. The application of both the algorithm and its basis, equicontrollability, to the decoupling problem is considered.

  14. Modelling and simulation of diffusive processes methods and applications

    CERN Document Server

    Basu, SK

    2014-01-01

    This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport

  15. New applications of a model of electromechanical impedance for SHM

    Science.gov (United States)

    Pavelko, Vitalijs

    2014-03-01

    The paper focuses on the further development of a model of the electromechanical impedance (EMI) of a piezoceramic transducer (PZT) and its application to aircraft structural health monitoring (SHM). An expression for the electromechanical impedance was obtained that is common to models of any dimension (1D, 2D, 3D) and directly independent of the imposed constraints. Determination of the dynamic response of the "host structure - PZT" system, which is crucial for practical application, presupposes the use of modal analysis. This yields a general tool for determining the EMI regardless of the specific features of a particular application. Earlier, a technology of separate determination of the dynamic response for the PZT and the structural element was considered. Here another version is presented that involves joint modal analysis of the entire "host structure - PZT" system. As a result, the dynamic response is obtained in the form of a modal decomposition of the transducer mechanical strains. The use of models for the free and constrained transducer and an analysis of the impact of the adhesive layer on the EMI are demonstrated. In all cases the influence of the dimension of the model (2D and 3D) was analyzed. The validity of the model is confirmed by experimental studies. The correlation between fatigue crack length in a thin-walled Al plate and the EMI of an embedded PZT was simulated and compared with test results.

  16. Statistical modelling for recurrent events: an application to sports injuries.

    Science.gov (United States)

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-09-01

    Injuries are often recurrent, with subsequent injuries influenced by previous occurrences, and hence the correlation between events needs to be taken into account when analysing such data. This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
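
The distinction between the WLW total-time and PWP-GT gap-time formulations comes down to how each player's risk intervals are laid out before model fitting. A minimal sketch with a hypothetical injury history (not the paper's NRL data):

```python
# One player's recurrent injury history laid out for the WLW total-time and
# PWP-GT gap-time models (hypothetical match numbers, not the paper's data).
injury_matches = [5, 12, 20]   # matches at which injuries occurred
season_end = 29                # follow-up ends after match 29

# WLW total time: every risk interval is measured from the season start.
wlw = [(0, t) for t in injury_matches] + [(0, season_end)]

# PWP gap time: the clock restarts after each injury, and each interval is
# conditional on the previous event having occurred.
bounds = [0] + injury_matches + [season_end]
pwp = [(0, b - a) for a, b in zip(bounds, bounds[1:])]

print(wlw)  # [(0, 5), (0, 12), (0, 20), (0, 29)]
print(pwp)  # [(0, 5), (0, 7), (0, 8), (0, 9)]
```

The A-G and frailty models use the same counting-process intervals as PWP but differ in how they handle within-player correlation (a shared random effect in the frailty case).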

  17. Systems and models with anticipation in physics and its applications

    International Nuclear Information System (INIS)

    Makarenko, A

    2012-01-01

    Investigations of recent physical processes and real applications of models require new, ever more refined models involving new properties. One such property is anticipation (that is, taking into account certain advanced effects). A special kind of advanced system is considered, namely the strong anticipatory systems introduced by D. Dubois. Some definitions, examples and peculiarities of the solutions are described. The main feature is the presumable multivaluedness of the solutions. Possible physical examples of such systems are proposed: self-organization problems; dynamical chaos; synchronization; advanced potentials; structures at the micro-, meso- and macro-levels; cellular automata; computing; neural network theory. Some applications to the modelling of social, economic, technical and natural systems are also described.

  18. Modelling of a cross flow evaporator for CSP application

    DEFF Research Database (Denmark)

    Sørensen, Kim; Franco, Alessandro; Pelagotti, Leonardo

    2016-01-01

    Heat exchangers consisting of bundles of horizontal plain tubes with boiling on the shell side are widely used in industrial and energy systems applications. A recent specific interest in this type of heat exchanger arises in connection with Concentrated Solar Power (CSP) applications. Heat transfer and pressure drop prediction methods are an important tool for the design and modelling of diabatic, two-phase, shell-side flow over a horizontal plain tube bundle in a vertical up-flow evaporator. With the objective of developing a model for a specific type of cross flow evaporator, the study reviews the available correlations for two-phase flow heat transfer, void fraction and pressure drop in connection with the operation of steam generators, and focuses attention on a comparison of the results obtained using several different models resulting from different combinations of correlations.

  19. Model Oriented Application Generation for Industrial Control Systems

    CERN Document Server

    Copy, B; Blanco Vinuela, E; Fernandez Adiego, B; Nogueira Ferandes, R; Prieto Barreiro, I

    2011-01-01

    The CERN Unified Industrial Control Systems framework (UNICOS) is a software generation methodology and a collection of development tools that standardizes the design of industrial control applications [1]. A Software Factory, named the UNICOS Application Builder (UAB) [2], was introduced to ease extensibility and maintenance of the framework, introducing a stable metamodel, a set of platform-independent models and platform-specific configurations against which code generation plugins and configuration generation plugins can be written. Such plugins currently target PLC programming environments (Schneider and SIEMENS PLCs) as well as SIEMENS WinCC Open Architecture SCADA (previously known as ETM PVSS) but are being expanded to cover more and more aspects of process control systems. We present what constitutes the UNICOS metamodel and the models in use, how these models can be used to capture knowledge about industrial control systems and how this knowledge can be leveraged to generate both code and configuration...

  20. Modelling of Electrokinetic Processes in Civil and Environmental Engineering Applications

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.

    2011-01-01

    A mathematical model for electrokinetic phenomena is described, and numerical simulations of different applications of electrokinetic techniques in the fields of civil and environmental engineering are included, showing the versatility and consistency of the model. The Poisson-Nernst-Planck system of equations, accounting for ionic migration, chemical diffusion and advection, is used for modeling the transport process. The advection term contribution is studied by including in the system the water transport through the porous media, mainly due to electroosmosis. Equilibrium conditions are assumed between the aqueous species and the solid matrix for a set of feasible chemical equilibrium reactions defined for each specific application, and a module for re-establishing the chemical equilibrium has been developed and included in the system for this purpose. Changes in the porosity and in the pore solution filling it are also taken into account.
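
The transport law at the core of such models can be illustrated with a one-species, one-dimensional Nernst-Planck flux combining diffusion, electromigration and advection. All parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

# One-dimensional Nernst-Planck flux for a single ionic species:
#   J = -D dc/dx - (z F / (R T)) D c dphi/dx + c v
# Illustrative parameter values; not taken from the paper.
F, R, T = 96485.0, 8.314, 298.15   # Faraday constant, gas constant, temperature
D, z, v = 2.0e-9, 1, 1.0e-8        # diffusivity (m^2/s), charge number, advection velocity (m/s)

x = np.linspace(0.0, 0.01, 101)    # 1 cm domain
c = 10.0 * (1.0 - x / 0.01)        # linear concentration profile (mol/m^3)
phi = -5.0 * x / 0.01              # applied potential drop (V), illustrative

dcdx = np.gradient(c, x)
dphidx = np.gradient(phi, x)

# Diffusive, migrational and advective contributions summed pointwise
J = -D * dcdx - (z * F / (R * T)) * D * c * dphidx + c * v
print(J[:3])
```

With these signs both the concentration gradient and the potential drop drive the cation in the +x direction, so the total flux is positive everywhere on the domain.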

  1. Modelling, simulation and applications of longitudinal train dynamics

    Science.gov (United States)

    Cole, Colin; Spiryagin, Maksym; Wu, Qing; Sun, Yan Quan

    2017-10-01

    Significant developments in longitudinal train simulation and an overview of the approaches to train models and modelling vehicle force inputs are first presented. The most important modelling task, that of the wagon connection, consisting of energy absorption devices such as draft gears and buffers, draw gear stiffness, coupler slack and structural stiffness, is then presented. Detailed attention is given to the modelling approaches for friction wedge damped and polymer draft gears. A significant issue in longitudinal train dynamics is the modelling and calculation of the input forces - the co-dimensional problem. The need to push traction performance higher has led to research and improvement in the accuracy of traction modelling, which is discussed. A co-simulation method that combines longitudinal train simulation, locomotive traction control and locomotive vehicle dynamics is presented. The modelling of other forces - braking, propulsion resistance, curve drag and grade forces - is also discussed. As extensions to conventional longitudinal train dynamics, lateral forces and coupler impacts are examined in regard to interaction with wagon lateral and vertical dynamics. Various applications of longitudinal train dynamics are then presented. As an alternative to the traditional single wagon mass approach to longitudinal train dynamics, an example incorporating fully detailed wagon dynamics is presented for a crash analysis problem. Further applications of starting traction, air braking, distributed power, energy analysis and tippler operation are also presented.
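
The core of a longitudinal train simulation - wagon masses coupled by draft-gear connections and integrated in time - can be sketched with a linearised spring-damper coupler model. Masses, stiffness and traction force below are illustrative, not from the paper.

```python
import numpy as np

# Minimal longitudinal train model: a locomotive and two wagons joined by
# linearised draft-gear connections (spring + damper). All values illustrative.
m = np.array([120e3, 80e3, 80e3])   # vehicle masses (kg)
k, c = 5.0e6, 1.0e5                 # coupler stiffness (N/m), damping (N s/m)
F_traction = 150e3                  # constant traction force on the lead vehicle (N)

x = np.zeros(3)                     # positions (m)
v = np.zeros(3)                     # velocities (m/s)
dt, steps = 1e-3, 5000              # 5 s of simulated time, semi-implicit Euler

for _ in range(steps):
    # Coupler forces between consecutive vehicles (positive = tension)
    f = k * (x[:-1] - x[1:]) + c * (v[:-1] - v[1:])
    a = np.zeros(3)
    a[0] = (F_traction - f[0]) / m[0]
    a[1] = (f[0] - f[1]) / m[1]
    a[2] = f[1] / m[2]
    v += a * dt                     # update velocities first (symplectic Euler)
    x += v * dt

print(v)   # all vehicles moving forward after 5 s
```

Real draft gears are strongly nonlinear (friction wedges, polymer hysteresis, slack), which is exactly where the modelling effort the abstract describes goes; the linear coupler here only fixes the simulation skeleton.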

  2. Semantic Information Modeling for Emerging Applications in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qunzhi; Natarajan, Sreedhar; Simmhan, Yogesh; Prasanna, Viktor

    2012-04-16

    Smart Grid modernizes the power grid by integrating digital and information technologies. Millions of smart meters, intelligent appliances and communication infrastructures are under deployment, allowing advanced IT applications to be developed to secure and manage power grid operations. Demand response (DR) is one such emerging application, optimizing electricity demand by curtailing/shifting power load when peak load occurs. Existing DR approaches are mostly based on static plans such as pricing policies and load shedding schedules. However, improvements to power management applications rely on data emanating from existing and new information sources with the growth of the Smart Grid information space. In particular, dynamic DR algorithms depend on information from smart meters that report interval-based power consumption measurements, HVAC systems that monitor buildings' heat and humidity, and even weather forecast services. In order for emerging Smart Grid applications to take advantage of the diverse data influx, extensible information integration is required. In this paper, we develop an integrated Smart Grid information model using Semantic Web techniques and present case studies of using semantic information for dynamic DR. We show that the semantic model facilitates information integration and knowledge representation for developing the next generation of Smart Grid applications.

  3. Medical applications of model-based dynamic thermography

    Science.gov (United States)

    Nowakowski, Antoni; Kaczmarek, Mariusz; Ruminski, Jacek; Hryciuk, Marcin; Renkielska, Alicja; Grudzinski, Jacek; Siebert, Janusz; Jagielak, Dariusz; Rogowski, Jan; Roszak, Krzysztof; Stojek, Wojciech

    2001-03-01

    The proposal to use active thermography in medical diagnostics is promising in some applications concerning investigation of directly accessible parts of the human body. The combination of dynamic thermograms with thermal models of the investigated structures offers an attractive possibility of reconstructing internal structure based on the different thermal properties of biological tissues. Measurements of temperature distribution synchronized with external light excitation allow registration of dynamic changes of local temperature dependent on heat-exchange conditions. Preliminary results of active thermography applications in medicine are discussed. For skin and under-skin tissues an equivalent thermal model may be determined. For the assumed model, its effective parameters may be reconstructed based on the results of transient thermal processes. For known thermal diffusivity and conductivity of specific tissues, the local thickness of a two- or three-layer structure may be calculated. Results of some medical cases as well as reference data from in vivo studies on animals are presented. The method was also applied to evaluate the state of the human heart during open-chest cardio-surgical interventions. Reference studies of evoked heart infarct in pigs are reported, too. We see the proposed technique, new to medical applications, as a promising diagnostic tool. It is a fully non-invasive, clean, handy, fast and affordable method giving not only a qualitative view of the investigated surfaces but also an objective quantitative measurement result, accurate enough for many applications including fast screening of affected tissues.
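
The layer-thickness idea in the abstract rests on diffusion scaling: a layer of thermal diffusivity α whose transient response shows an effective time constant τ has thickness of order sqrt(ατ). A hedged numerical sketch, using a typical literature diffusivity for soft tissue and a hypothetical measured time constant:

```python
import math

# First-order layer-thickness estimate from the diffusion scaling L ~ sqrt(alpha * tau).
# alpha is a typical literature figure for soft tissue; tau is hypothetical,
# standing in for a time constant fitted to a measured transient thermogram.
alpha_skin = 1.1e-7      # thermal diffusivity of skin (m^2/s), approximate
tau = 8.0                # observed thermal time constant (s), illustrative

L = math.sqrt(alpha_skin * tau)
print(f"estimated layer thickness = {L * 1000:.2f} mm")
```

In practice the reconstruction described in the abstract fits a multi-layer thermal model to the full transient, rather than using a single scaling estimate, but the order of magnitude is set by this relation.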

  4. Challenges of Microgrids in Remote Communities: A STEEP Model Application

    Directory of Open Access Journals (Sweden)

    Daniel Akinyele

    2018-02-01

    Full Text Available There is a growing interest in the application of microgrids around the world because of their potential for achieving a flexible, reliable, efficient and smart electrical grid system and supplying energy to off-grid communities, as well as because of their economic benefits. Several research studies have examined the application issues of microgrids. However, a lack of in-depth consideration of the enabling planning conditions has been identified as a major reason why microgrids fail in several off-grid communities. This development requires research efforts that consider better strategies and frameworks for sustainable microgrids in remote communities. This paper first presents a comprehensive review of microgrid technologies and their applications. It then proposes the STEEP model to examine critically the failure factors based on the social, technical, economic, environmental and policy (STEEP) perspectives. The model details the key dimensions and actions necessary for addressing the challenge of microgrid failure in remote communities. The study uses remote communities within Nigeria, West Africa, as case studies and demonstrates the need for the STEEP approach for better understanding of microgrid planning and development. Better insights into microgrid systems are expected to address the drawbacks and improve the situation that can lead to widespread and sustainable applications in off-grid communities around the world in the future. The paper introduces the sustainable planning framework (SPF) based on the STEEP model, which can form a general basis for planning microgrids in any remote location.

  5. Application distribution model and related security attacks in VANET

    Science.gov (United States)

    Nikaein, Navid; Kanti Datta, Soumya; Marecar, Irshad; Bonnet, Christian

    2013-03-01

    In this paper, we present a model for application distribution and related security attacks in dense vehicular ad hoc networks (VANETs) and sparse VANETs, which form a delay-tolerant network (DTN). We study the vulnerabilities of VANETs to evaluate the attack scenarios and introduce a new attacker's model as an extension to the work done in [6]. Then a VANET model is proposed that supports application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution are studied in detail. We have identified key attacks (e.g. malware, spamming and phishing, software attacks and threats to location privacy) for dense VANETs and two attack scenarios for sparse VANETs. It is shown that attacks can be launched by distributing malicious applications and injecting malicious code into the On Board Unit (OBU) by exploiting OBU software security holes. Consequences of such security attacks are described. Finally, countermeasures, including the concept of a sandbox, are also presented in depth.

  6. Human eye modelling for ophthalmic simulators project for clinic applications

    International Nuclear Information System (INIS)

    Sanchez, Andrea; Santos, Adimir dos; Yoriyaz, Helio

    2002-01-01

    Most eye tumors are treated by surgical means, which involves the enucleation of the affected eye. For treatment and control of the disease there is brachytherapy, which often utilizes small applicators of Co-60, I-125, Ru-106, Ir-192, etc. These methods are shown to be very efficient but highly costly. The objective of this work is to propose a detailed simulator model for eye characterization. Additionally, this study can contribute to the design and construction of a new applicator in order to reduce the cost and to allow more patients to be treated.

  7. Application of online modeling to the operation of SLC

    International Nuclear Information System (INIS)

    Woodley, M.D.; Sanchez-Chopitea, L.; Shoaee, H.

    1987-01-01

    Online computer models of first order beam optics have been developed for the commissioning, control and operation of the entire SLC including Damping Rings, Linac, Positron Return Line and Collider Arcs. A generalized online environment utilizing these models provides the capability for interactive selection of a desired optics configuration and for the study of its properties. Automated procedures have been developed which calculate and load beamline component set-points and which can scale magnet strengths to achieve desired beam properties for any Linac energy profile. Graphic displays facilitate comparison of design, desired and actual optical characteristics of the beamlines. Measured beam properties, such as beam emittance and dispersion, can be incorporated interactively into the models and used for beam matching and optimization of injection and extraction efficiencies and beam transmission. The online optics modeling facility also serves as the foundation for many model-driven applications such as autosteering, calculation of beam launch parameters, emittance measurement and dispersion correction

  8. Application of online modeling to the operation of SLC

    International Nuclear Information System (INIS)

    Woodley, M.D.; Sanchez-Chopitea, L.; Shoaee, H.

    1987-02-01

    Online computer models of first order beam optics have been developed for the commissioning, control and operation of the entire SLC including Damping Rings, Linac, Positron Return Line and Collider Arcs. A generalized online environment utilizing these models provides the capability for interactive selection of a desired optics configuration and for the study of its properties. Automated procedures have been developed which calculate and load beamline component set-points and which can scale magnet strengths to achieve desired beam properties for any Linac energy profile. Graphic displays facilitate comparison of design, desired and actual optical characteristics of the beamlines. Measured beam properties, such as beam emittance and dispersion, can be incorporated interactively into the models and used for beamline matching and optimization of injection and extraction efficiencies and beam transmission. The online optics modeling facility also serves as the foundation for many model-driven applications such as autosteering, calculation of beam launch parameters, emittance measurement and dispersion correction
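
The first-order optics underlying such online models reduces to 2×2 transfer matrices acting on the trace-space vector (x, x′), composed element by element. A sketch with illustrative element values (not SLC parameters):

```python
import numpy as np

# First-order (linear) beam optics: each element acts on the trace-space
# vector (x, x') by a 2x2 transfer matrix. Element values are illustrative.
def drift(L):
    """Field-free drift of length L (metres)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (metres), focusing for f > 0."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A simple line: drift, focusing quad, drift (rightmost matrix acts first)
M = drift(2.0) @ thin_quad(1.5) @ drift(2.0)

x0 = np.array([0.002, 0.0])   # ray with 2 mm offset, zero angle
print(M @ x0)                 # ~ [-0.000667, -0.001333]: deflected across the axis
```

Online model-driven applications like autosteering amount to inverting such composed matrices to find the corrector settings that map measured beam coordinates onto the desired ones.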

  9. Structural Equation Modeling: Theory and Applications in Forest Management

    Directory of Open Access Journals (Sweden)

    Tzeng Yih Lam

    2012-01-01

    Full Text Available Forest ecosystem dynamics are driven by a complex array of simultaneous cause-and-effect relationships. Understanding this complex web requires specialized analytical techniques such as Structural Equation Modeling (SEM). The SEM framework and implementation steps are outlined in this study, and we then demonstrate the technique by application to overstory-understory relationships in mature Douglas-fir forests in the northwestern USA. An SEM model was formulated with (1) a path model representing the effects of successively higher layers of vegetation on late-seral herbs through processes such as light attenuation and (2) a measurement model accounting for measurement errors. The fitted SEM model suggested a direct negative effect of light attenuation on late-seral herb cover but a direct positive effect of northern aspect. Moreover, many processes have indirect effects mediated through midstory vegetation. SEM is recommended as a forest management tool for designing silvicultural treatments and systems for attaining complex arrays of management objectives.

  10. Road Assessment Model and Pilot Application in China

    Directory of Open Access Journals (Sweden)

    Tiejun Zhang

    2014-01-01

    Full Text Available Risk assessment of roads is an effective approach for road agencies to determine safety improvement investments. It can increase the cost-effective returns in crash and injury reductions. To obtain a powerful risk assessment model for China, the Research Institute of Highway (RIOH) is developing the China Road Assessment Programme (ChinaRAP) model to address traffic crashes in China, in partnership with the International Road Assessment Programme (iRAP). The ChinaRAP model is based upon RIOH's achievements and the iRAP models. This paper documents part of ChinaRAP's research work, mainly including the RIOH model and its pilot application in a province in China.

  11. Application of 3-dimensional CAD modeling system in nuclear plants

    International Nuclear Information System (INIS)

    Suwa, Minoru; Saito, Shunji; Nobuhiro, Minoru

    1990-01-01

    Until now, the preliminary work of arranging components in nuclear plants was carried out using plastic models. Recently, with the development of computer graphics techniques, the components can be displayed on a graphics terminal, which serves better than plastic models or the actual plant. The computer model can be handled both telescopically and microscopically. A computer technique called the 3-dimensional CAD modeling system was used for this preliminary work and as a design system. Through application of this system, a database for nuclear plants was completed at the arrangement step. The data can be used for piping design, stress analysis, shop production, testing and site construction, at every step. In addition, the data can be used for various planning tasks, even after the start of plant operation. This paper describes the outline of the 3-dimensional CAD modeling system. (author)

  12. Application of Metamodels to Identification of Metallic Materials Models

    Directory of Open Access Journals (Sweden)

    Maciej Pietrzyk

    2016-01-01

    Full Text Available Improvement of the efficiency of the inverse analysis (IA) for various material tests was the objective of the paper. Flow stress models and microstructure evolution models of various complexity of mathematical formulation were considered. Different types of experiments were performed and the results were used for the identification of the models. Sensitivity analysis was performed for all the models and the importance of the parameters in these models was evaluated. Metamodels based on an artificial neural network were proposed to simulate experiments in the inverse solution. The performed analysis has shown that a significant decrease in computing time can be achieved when metamodels substitute for the finite element model in the inverse analysis, which is the case in the identification of flow stress models. Application of metamodels gave good results for flow stress models based on closed-form equations accounting for the influence of temperature, strain, and strain rate (4 coefficients), additionally for softening due to recrystallization (5 coefficients), and for softening and saturation (7 coefficients). Good accuracy and high efficiency of the IA were confirmed. On the contrary, identification of microstructure evolution models, including phase transformation models, did not give a noticeable reduction of the computing time.
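
The metamodel idea - precompute the expensive model's responses once, then run the inverse search against the cheap surrogate - can be sketched with a toy flow stress law standing in for the finite element simulation. All parameters, the grid, and the noise level are illustrative:

```python
import numpy as np

# Toy metamodel-assisted inverse analysis. The "expensive" model is a
# Hollomon-type flow stress law sigma = K * eps**n, standing in for a finite
# element simulation; parameter values and noise level are illustrative.
rng = np.random.default_rng(0)

def expensive_model(K, n, eps):
    return K * eps ** n

eps = np.linspace(0.05, 0.5, 10)                  # strain levels "measured" in a test
K_true, n_true = 600.0, 0.2
measured = expensive_model(K_true, n_true, eps) * (1 + 0.01 * rng.standard_normal(eps.size))

# Metamodel: evaluate the model once over a coarse parameter grid; the
# inverse search then queries this table instead of re-running the model.
K_grid = np.linspace(400.0, 800.0, 81)
n_grid = np.linspace(0.1, 0.3, 41)
table = {(K, n): expensive_model(K, n, eps) for K in K_grid for n in n_grid}

best = min(table, key=lambda p: float(np.sum((table[p] - measured) ** 2)))
print(best)   # close to the true (600.0, 0.2)
```

In the paper's setting the tabulated surrogate is replaced by a trained neural network, so the inverse optimiser can query arbitrary parameter values rather than grid points, but the cost structure is the same: the expensive model runs only during surrogate construction.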

  13. Overview on available animal models for application in leukemia research

    International Nuclear Information System (INIS)

    Borkhardt, A.; Sanchez-Garcia, I.; Cobaleda, C.; Hauer, J.

    2015-01-01

    The term "leukemia" encompasses a group of diseases with a variable clinical and pathological presentation. Its cellular origin, its biology and the underlying molecular genetic alterations determine the very variable and individual disease phenotype. The focus of this review is to discuss the most important guidelines to be taken into account when we aim at developing an "ideal" animal model to study leukemia. The animal model should mimic all the clinical, histological and molecular genetic characteristics of the human phenotype and should be applicable as a clinically predictive model. It should meet all the requirements to be used as a standardized model adaptable to basic research as well as to pharmaceutical practice. Furthermore, it should fulfill all the criteria to investigate environmental risk factors and the role of genomic mutations, and be applicable for therapeutic testing. These constraints limit the usefulness of some existing animal models, which are however very valuable for basic research. Hence in this review we will primarily focus on genetically engineered mouse models (GEMMs) to study the most frequent types of childhood leukemia. GEMMs are robust models with relatively low site-specific variability which can, with the help of the latest gene-modulating tools, be adapted to individual clinical and research questions. Moreover, they offer the possibility to restrict oncogene expression to a defined target population and to regulate its expression level as well as its timely activity. Until recently it was only possible in individual cases to develop a murine model which fulfills the above-mentioned requirements. Hence the development of new regulatory elements to control targeted oncogene expression should be a priority. Tightly controlled and cell-specific oncogene expression can then be combined with a knock-in approach and will yield a robust murine model, which enables almost physiologic oncogene

  14. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in the early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete the model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectrotemporal modulation transfer functions (MTFs) of the model are investigated and shown to be consistent with the salient trends in human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. The model is therefore used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noisy and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional...

  15. Risk Measurement and Risk Modelling Using Applications of Vine Copulas

    Directory of Open Access Journals (Sweden)

    David E. Allen

    2017-09-01

    This paper features an application of Regular Vine copulas, a recently developed statistical and mathematical tool for assessing composite financial risk. Copula-based dependence modelling is a popular tool in financial applications, but it is usually applied to pairs of securities. By contrast, Vine copulas provide greater flexibility and permit the modelling of complex dependency patterns, using the rich variety of bivariate copulas which may be arranged and analysed in a tree structure to explore multiple dependencies. The paper features the use of Regular Vine copulas in an analysis of the co-dependencies of 10 major European stock markets, as represented by individual market indices and the composite STOXX 50 index. The sample runs from 2005 to the end of 2013 to permit an exploration of how correlations change in different economic circumstances, using three sample periods: pre-GFC (January 2005-July 2007), GFC (July 2007-September 2009), and post-GFC (September 2009-December 2013). The empirical results suggest that the dependencies change in a complex manner and are subject to change in different economic circumstances. One of the attractions of this approach to risk modelling is the flexibility in the choice of distributions used to model co-dependencies. The practical application of Regular Vine metrics is demonstrated via an example of the calculation of the VaR of a portfolio made up of the indices.
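
    The vine construction itself is beyond a short sketch, but the core mechanics the paper builds on (a copula linking marginal return models, simulated to obtain a portfolio VaR) can be illustrated for the simplest case, a single bivariate Gaussian copula. All parameters below (correlation, means, volatilities) are invented for illustration, not taken from the paper:

```python
import math
import random

def portfolio_var(rho=0.6, n=50_000, alpha=0.95, seed=42):
    """Empirical 95% VaR of an equally weighted two-index portfolio
    whose dependence is a bivariate Gaussian copula (one edge of what
    a vine copula generalises to many assets)."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        # Correlated standard normals via a 2x2 Cholesky factor.
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        # Hypothetical marginal daily-return models for two indices.
        r1 = 0.0002 + 0.010 * z1
        r2 = 0.0001 + 0.015 * z2
        losses.append(-(0.5 * r1 + 0.5 * r2))
    losses.sort()
    return losses[int(alpha * n)]   # empirical alpha-quantile of loss

var95 = portfolio_var()
```

    A vine model replaces the single Gaussian link with a tree of bivariate copulas of different families, but the simulation-then-quantile step for VaR is the same.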

  16. Structural equation modeling with EQS basic concepts, applications, and programming

    CERN Document Server

    Byrne, Barbara M

    2013-01-01

    Readers who want a less mathematical alternative to the EQS manual will find exactly what they're looking for in this practical text. Written specifically for those with little to no knowledge of structural equation modeling (SEM) or EQS, the author's goal is to provide a non-mathematical introduction to the basic concepts of SEM by applying these principles to EQS, Version 6.1. The book clearly demonstrates a wide variety of SEM/EQS applications that include confirmatory factor analytic and full latent variable models. Written in a "user-friendly" style, the author "walks" the reader through the varied steps involved in the process of testing SEM models: model specification and estimation, assessment of model fit, EQS output, and interpretation of findings. Each of the book's applications is accompanied by: a statement of the hypothesis being tested, a schematic representation of the model, explanations of the EQS input and output files, tips on how to use the pull-down menus, and the data file upon which ...

  17. MOGO: Model-Oriented Global Optimization of Petascale Applications

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D.; Shende, Sameer S.

    2012-09-14

    The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale Systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework in which empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO was able to make a reasonable impact on existing DOE applications and systems. New tools and techniques were developed which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.

  18. Modelling and application of the inactivation of microorganism

    International Nuclear Information System (INIS)

    Oğuzhan, P.; Yangılar, F.

    2013-01-01

    Preventing the consumption of food contaminated with toxic, infection-causing microorganisms, and considering food protection and new microbial inactivation methods, are obligatory. Food microbiology is mainly concerned with unwanted microorganisms that spoil food during processing and transport and cause disease. Detecting pathogenic microorganisms is important for human health, to define and prevent hazards and to extend shelf life. Inactivation of pathogenic microorganisms can provide food security and reduce nutrient losses. Microbial inactivation is achieved using food protection methods that preserve food safety and freshness. To this end, various methods are used, such as classical thermal processes (pasteurisation, sterilisation), pulsed electric fields (PEF), ionising radiation, high pressure, ultrasonic waves and plasma sterilisation. Microbial inactivation modelling is a secure and effective method in food production. A new microbiological application can give useful results for risk assessment in food, inactivation of microorganisms and improvement of shelf life. Application and control methods should be developed and supported by scientific research and industrial applications.

  19. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  20. Beginning SQL Server Modeling Model-driven Application Development in SQL Server

    CERN Document Server

    Weller, Bart

    2010-01-01

    Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. Beginning SQL Server Modeling will help you gain a comprehensive understanding of how to apply DSLs and other modeling components in the development of SQL Server implementations. Most importantly, after reading the book and working through the examples, you will have considerable experience using SQL M

  1. Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling

    Science.gov (United States)

    Huber, I.; Archontoulis, S.

    2017-12-01

    In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge; regional-scale assessments are needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that gain the most environmentally from biochar application, as well as areas where our model predicts a notable yield increase due to the addition of biochar. We present the modifications to both the APSIM biochar and pSIMS components that were necessary to facilitate these large-scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional-scale simulation analysis is in progress. Preliminary results showed that the model predicts that high-quality soils (particularly those common to Iowa cropping systems) receive little, if any, production benefit from biochar. However, soils with low soil organic matter (below 0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific, increasing in some areas and decreasing in others due to biochar application. In contrast, we found increases in soil organic carbon and plant-available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%), and also dependent on biochar...

  2. Handbook of EOQ inventory problems stochastic and deterministic models and applications

    CERN Document Server

    Choi, Tsan-Ming

    2013-01-01

    This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.

  3. Bifurcation software in Matlab with applications in neuronal modeling.

    Science.gov (United States)

    Govaerts, Willy; Sautois, Bart

    2005-02-01

    Many biological phenomena, notably in neuroscience, can be modeled by dynamical systems. We describe a recent improvement of a Matlab software package for dynamical systems with applications to modeling single neurons and all-to-all connected networks of neurons. The new software features consist of an object-oriented approach to bifurcation computations and the partial inclusion of C-code to speed up the computation. As an application, we study the origin of the spiking behaviour of neurons when the equilibrium state is destabilized by an incoming current. We show that Class II behaviour, i.e. firing with a finite frequency, is possible even if the destabilization occurs through a saddle-node bifurcation. Furthermore, we show that synchronization of an all-to-all connected network of such neurons with only excitatory connections is also possible in this case.
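
    The firing-onset scenario described above can be illustrated, independently of the Matlab package, with a toy quadratic integrate-and-fire neuron: for a negative input current a stable and an unstable equilibrium coexist, they collide in a saddle-node bifurcation as the current crosses zero, and beyond it the neuron spikes repetitively. This is a generic textbook sketch, not the authors' model or software:

```python
def qif_spike_count(I, v0=-2.0, dt=1e-4, t_end=3.0,
                    v_peak=30.0, v_reset=-2.0):
    """Euler integration of a quadratic integrate-and-fire neuron,
    dv/dt = v**2 + I.  For I < 0 a stable/unstable equilibrium pair
    exists; at I = 0 they collide in a saddle-node bifurcation, and
    for I > 0 the neuron fires repetitively."""
    v, spikes = v0, 0
    for _ in range(int(t_end / dt)):
        v += dt * (v * v + I)
        if v >= v_peak:        # spike: register it and reset
            v = v_reset
            spikes += 1
    return spikes
```

    With I = -1 the trajectory settles at the stable equilibrium v = -1 and never fires; with I = 5 it fires with a finite period.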

  4. Instructional Storytelling: Application of the Clinical Judgment Model in Nursing.

    Science.gov (United States)

    Timbrell, Jessica

    2017-05-01

    Little is known about the teaching and learning implications of instructional storytelling (IST) in nursing education or its potential connection to nursing theory. The literature establishes storytelling as a powerful teaching-learning method in the educational, business, humanities, and health sectors, but little exploration exists that is specific to nursing. An example of a story demonstrating application of the domains of Tanner's clinical judgment model links storytelling with learning outcomes appropriate for the novice nursing student. Application of Tanner's clinical judgment model offers consistency of learning experience while preserving the creativity inherent in IST. Further research into student learning outcomes achievement using IST is warranted as a step toward establishing best practices with IST in nursing education. [J Nurs Educ. 2017;56(5):305-308.]. Copyright 2017, SLACK Incorporated.

  5. Powder consolidation using cold spray process modeling and emerging applications

    CERN Document Server

    Moridi, Atieh

    2017-01-01

    This book first presents different approaches to modeling of the cold spray process with the aim of extending current understanding of its fundamental principles and then describes emerging applications of cold spray. In the coverage of modeling, careful attention is devoted to the assessment of critical and erosion velocities. In order to reveal the phenomenological characteristics of interface bonding, severe, localized plastic deformation and material jet formation are studied. Detailed consideration is also given to the effect of macroscopic defects such as interparticle boundaries and subsequent splat boundary cracking on the mechanical behavior of cold spray coatings. The discussion of applications focuses in particular on the repair of damaged parts and additive manufacturing in various disciplines from aerospace to biomedical engineering. Key aspects include a systematic study of defect shape and the ability of cold spray to fill the defect, examination of the fatigue behavior of coatings for structur...

  6. Knowledge governance model and its applications in OTRIs: two cases

    International Nuclear Information System (INIS)

    Bueno Campos, E.; Plaz Landela, R.; Albert Berenguer, J.

    2007-01-01

    The importance of R&D and knowledge transfer in European economies, and of technology and innovation in Spain, is a key issue in achieving the European target for 2010 of becoming the European knowledge society for growth. This article shows, in some detail, the structure and functions of the MTT model used as a reference for the processes needed to fulfil the mission of an OTRI and its knowledge transfer function. Two concrete applications show the effectiveness and functionality of the model; the CARTA and PRISMA applications are case studies of the MTT implementation process. They represent a first step towards new developments that are being carried out in other OTRIs. (Author) 35 refs.

  7. On Helical Projection and Its Application in Screw Modeling

    Directory of Open Access Journals (Sweden)

    Riliang Liu

    2014-04-01

    As helical surfaces, in their many and varied forms, are finding more and more applications in engineering, new approaches to their efficient design and manufacture are desired. To that end, the helical projection method that uses curvilinear projection lines to map a space object to a plane is examined in this paper, focusing on its mathematical model and characteristics in terms of graphical representation of helical objects. A number of interesting projective properties are identified in regard to straight lines, curves, and planes, and then the method is further investigated with respect to screws. The result shows that the helical projection of a cylindrical screw turns out to be a Jordan curve, which is determined by the screw's axial profile and number of flights. Based on the projection theory, a practical approach to the modeling of screws and helical surfaces is proposed and illustrated with examples, and its possible application in screw manufacturing is discussed.
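
    The idea of projecting along helical rather than straight lines can be sketched with one plausible formalisation: rotate each point by the angle a helix of given pitch sweeps over that point's height, then drop the height coordinate. The formula and pitch below are assumptions chosen for illustration, not taken from the paper:

```python
import math

def helical_project(x, y, z, pitch):
    """Map (x, y, z) to the z = 0 plane along a helix of the given
    pitch: rotate by the angle the helix sweeps over height z, then
    drop z.  (One plausible formalisation, assumed for illustration.)"""
    theta = -2.0 * math.pi * z / pitch    # unwind the helical motion
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

# Every point of a helix with matching pitch collapses to one plane
# point, which is what makes the projection useful for screw profiles.
pitch = 4.0
helix = [(math.cos(2 * math.pi * z / pitch),
          math.sin(2 * math.pi * z / pitch), z) for z in (0.0, 1.0, 2.5)]
images = [helical_project(x, y, z, pitch) for x, y, z in helix]
```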

  8. Modeling Types of Pedal Applications Using a Driving Simulator.

    Science.gov (United States)

    Wu, Yuqing; Boyle, Linda Ng; McGehee, Daniel; Roe, Cheryl A; Ebe, Kazutoshi; Foley, James

    2015-11-01

    The aim of this study was to examine variations in drivers' foot behavior and identify factors associated with pedal misapplications. Few studies have focused on foot behavior in the vehicle and the mishaps a driver can encounter during a potentially hazardous situation. A driving simulation study was used to understand how drivers move their right foot toward the pedals. The study included data from 43 drivers as they responded to a series of rapid traffic signal phase changes. Pedal application types were classified as (a) direct hit, (b) hesitated, (c) corrected trajectory, and (d) pedal errors (incorrect trajectories, misses, slips, or pressing both pedals). A mixed-effects multinomial logit model was used to predict the likelihood of each of these pedal applications, and linear mixed models with repeated measures were used to examine response time and pedal duration under the various experimental conditions (stimulus color and location). Younger drivers had higher probabilities of direct hits than other age groups. Participants tended to have more pedal errors when responding to a red signal or when the signal appeared closer. Traffic signal phases and locations were associated with pedal response time and duration. The response time and pedal duration affected the likelihood of being in one of the four pedal application types. Findings from this study suggest that age-related and situational factors may play a role in pedal errors, and that stimulus location can affect the type of pedal application. © 2015, Human Factors and Ergonomics Society.
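
    For fixed effects only, the multinomial logit above reduces to a softmax over one linear predictor per pedal-application class. The predictor coding and coefficients below are invented for illustration, not estimated from the study:

```python
import math

def multinomial_logit(x, coef):
    """P(class k | x) as a softmax over one linear predictor per
    pedal-application class (fixed-effects part of the model only)."""
    scores = [sum(b * xi for b, xi in zip(beta, x)) for beta in coef]
    m = max(scores)                      # stabilise the exponentials
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# x = (intercept, young_driver, signal_near)  -- hypothetical coding
coef = [(0.8, 0.5, -0.2),    # direct hit
        (0.1, -0.1, 0.1),    # hesitated
        (0.0, -0.1, 0.2),    # corrected trajectory
        (-1.0, -0.3, 0.5)]   # pedal error
probs = multinomial_logit((1.0, 1.0, 0.0), coef)
```

    With this made-up coefficient set, a young driver facing a far signal gets the highest probability for a direct hit, mirroring the direction of the reported finding.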

  9. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    Science.gov (United States)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty expressed as a percent of full scale. However, in some applications we seek enhanced performance at the low end of the range, and expressing the accuracy as a percent of reading should therefore be considered as a modeling strategy. For example, it is common to want to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in all types of transducer applications where low-range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage-based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
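
    The shift from percent-of-full-scale to percent-of-reading accuracy can be expressed as a weighting choice in least squares: dividing each squared residual by y squared makes relative (percent-of-reading) error the quantity being minimised, so low-range points count as much, relatively, as full-scale ones. A minimal sketch with made-up calibration data, not the paper's methodology in detail:

```python
def fit_line_weighted(xs, ys, ws):
    """Closed-form weighted least squares for y = a + b*x."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    sxy = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical transducer calibration readings (x = applied, y = read).
xs = [1, 2, 5, 10, 50, 100]
ys = [1.02, 1.98, 5.1, 9.9, 50.8, 98.5]

# Percent-of-full-scale: uniform weights (ordinary least squares).
a_ols, b_ols = fit_line_weighted(xs, ys, [1.0] * len(xs))
# Percent-of-reading: weight each squared residual by 1 / y**2.
a_por, b_por = fit_line_weighted(xs, ys, [1.0 / y ** 2 for y in ys])
```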

  10. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and related urban objects such as buildings, trees, vegetation and man-made features. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Generally, three main geomatics approaches are used to generate virtual 3D city models: the first uses conventional techniques such as vector map data, DEMs and aerial images; the second is based on high-resolution satellite images with laser scanning; and the third uses terrestrial images, applying close-range photogrammetry with DSM and texture mapping. This paper begins with an introduction to the various geomatics techniques for 3D city modeling. These techniques divide into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and another based on data-input techniques (photogrammetry and laser techniques). The paper gives an overview of techniques for generating virtual 3D city models using geomatics, the applications of such models, and present trends, closing with conclusions, justification and analysis. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Every technique and method has advantages and drawbacks; the point cloud model is a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3...

  11. Identifying influences on model uncertainty: an application using a forest carbon budget model

    Science.gov (United States)

    James E. Smith; Linda S. Heath

    2001-01-01

    Uncertainty is an important consideration for both developers and users of environmental simulation models. Establishing quantitative estimates of uncertainty for deterministic models can be difficult when the underlying bases for such information are scarce. We demonstrate an application of probabilistic uncertainty analysis that provides for refinements in...

  12. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models (using, for example, regression techniques) for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not use an RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by applying the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.
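
    The abstract does not spell out the new method, but one generic way to assess sensitivity to input-distribution assumptions while reusing the original code runs, in the spirit described, is importance reweighting of an existing Monte Carlo sample. The model and distributions below are stand-ins for illustration:

```python
import math
import random

def model(x1, x2):
    """Stand-in for an expensive code (e.g. an aerosol-removal model)."""
    return math.exp(0.5 * x1) + x2 ** 2

def normal_pdf(x, mu, sigma=1.0):
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))

rng = random.Random(0)
# "Original" runs: inputs sampled as x1 ~ N(0, 1), x2 ~ N(0, 1).
samples = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(20_000)]
outputs = [model(x1, x2) for x1, x2 in samples]
base_mean = sum(outputs) / len(outputs)

# Sensitivity to the input assumption x1 ~ N(0.5, 1): reweight the
# *same* runs by the density ratio instead of rerunning the code.
weights = [normal_pdf(x1, 0.5) / normal_pdf(x1, 0.0) for x1, _ in samples]
shifted_mean = sum(w * y for w, y in zip(weights, outputs)) / sum(weights)
```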

  13. A sample application of nuclear power human resources model

    International Nuclear Information System (INIS)

    Gurgen, A.; Ergun, S.

    2016-01-01

    One of the most important issues for a newcomer country initiating nuclear power plant projects is to have both quantitative and qualitative models for human resources development. For the quantitative model of human resources development for Turkey, the "Nuclear Power Human Resources (NPHR) Model" developed by Los Alamos National Laboratory was used to determine the number of people that will be required from different professional or occupational fields in the planning of human resources for the Akkuyu, Sinop and third nuclear power plant projects. The numbers of people required in different professions were calculated for the Nuclear Energy Project Implementation Department, the regulatory authority, project companies, construction, the nuclear power plants and academia. In this study, a sample application of the human resources model is presented, with the results of the first attempts to calculate the human resource needs of Turkey. Keywords: Human Resources Development, Newcomer Country, NPHR Model

  14. Mathematical modeling for surface hardness in investment casting applications

    International Nuclear Information System (INIS)

    Singh, Rupinder

    2012-01-01

    Investment casting (IC) has many potential engineering applications, yet little work has hitherto been reported on modeling the surface hardness (SH) in IC of industrial components. In the present study, the outcome of a Taguchi-based macro model has been used to develop a mathematical model for SH using Buckingham's π theorem. Three input parameters, namely the volume/surface-area (V/A) ratio of the cast components, the slurry layer combination (LC) and the molten metal pouring temperature, were selected to give output in the form of SH. This study provides the main effects of these variables on SH and sheds light on the SH mechanism in IC. The comparison with experimental results also serves as further validation of the model.
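
    Buckingham's π theorem fixes the number of independent dimensionless groups as n minus the rank of the dimensional-exponent matrix of the n variables. The paper's own variable dimensions are not given in the abstract, so the sketch below applies the counting step to the textbook drag-force example:

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank over the rationals by Gaussian elimination."""
    m = [[Fraction(v) for v in row] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    rank = 0
    for col in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if m[r][col] != 0),
                     None)
        if pivot is None:
            continue                      # no pivot in this column
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(n_rows):
            if r != rank and m[r][col] != 0:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
        if rank == n_rows:
            break
    return rank

# Columns = F, rho, v, d ; rows = exponents of M, L, T.
dims = [[1, 1, 0, 0],      # mass
        [1, -3, 1, 1],     # length
        [-2, 0, -1, 0]]    # time
n_groups = 4 - matrix_rank(dims)   # Buckingham pi theorem: n - rank
```

    Here the single dimensionless group is the familiar drag coefficient; the same counting applies to the casting variables once their dimensions are tabulated.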

  15. Modern methodology and applications in spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

    This book provides a modern introductory tutorial on specialized methodological and applied aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter deals with non-parametric Bayesian inference via a recently developed framework known as kernel mean embedding which has had a significant influence in machine learning disciplines. The second chapter takes up non-parametric statistical methods for spatial field reconstruction and exceedance probability estimation based on Gaussian process-based models in the context of wireless sensor network data. The third chapter presents signal-processing methods applied to acoustic mood analysis based on music signal analysis. The fourth chapter covers models that are applicable to time series modeling in the domain of speech and language processing. This includes aspects of factor analysis, independent component an...

  16. Overview of the EPRI CONTRACTMIX model for natural gas applications

    International Nuclear Information System (INIS)

    1993-09-01

    The Contract Mix Model (CONTRACTMIX) is designed to assist gas supply planners in analyzing the costs and risks of alternative supply strategies. By explicitly incorporating uncertainty about gas demand and market conditions into the analysis, the methodology permits the analyst to compare contracting strategies on the basis of cost and risk and to assess the value of flexible strategies and contracts. The model is applicable to purchase decisions for natural gas and other fuels. CONTRACTMIX may be used at all phases of supply decision-making, from broad strategy formulation to detailed contract design and evaluation. This document introduces the prospective user to the model's capability for analysis of gas supply contracting decisions. The document describes the types of problems CONTRACTMIX is designed to address as well as the model's structure, inputs, outputs, and unique features

  17. Quantum-mechanical multi-step direct models for nuclear data applications

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-10-01

    Various multi-step direct models have been derived and compared on a theoretical level. Subsequently, these models have been implemented in the computer code system KAPSIES, enabling a consistent comparison on the basis of the same set of nuclear parameters and the same set of numerical techniques. Continuum cross sections in the energy region between 10 and several hundreds of MeV have been successfully analysed. Both angular distributions and energy spectra can be predicted in an essentially parameter-free manner. It is demonstrated that the quantum-mechanical MSD models (in particular the FKK model) give an improved prediction of pre-equilibrium angular distributions compared to the experiment-based systematics of Kalbach. This makes KAPSIES a reliable tool for nuclear data applications in the aforementioned energy region. (author). 10 refs., 2 figs

  18. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strength, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanically built-up dual parameters, the model forms a dual method of multiple-functional prediction, which has far more versatile and exceptional applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with data on bond properties and satisfactorily explains chemical observations of elements throughout the periodic table.

  19. Development and application of new quality model for software projects.

    Science.gov (United States)

    Karnavel, K; Dillibabu, R

    2014-01-01

    The IT industry employs a number of models to identify defects in the construction of software projects. In this paper, we present COQUALMO and its limitations, and aim to increase quality without increasing cost and time. The computation time, cost and effort to predict residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted, and the application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows a significant improvement in the quality of the software projects.
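
    The abstract does not give the STDCM equations, so as a hedged illustration of the same goal (estimating residual defects), here is the classic Lincoln-Petersen capture-recapture estimator applied to two independent test teams; it is a well-known alternative, not the STDCM itself:

```python
def estimated_remaining(found_a, found_b, found_both):
    """Lincoln-Petersen capture-recapture estimate of residual defects:
    two teams test independently; the overlap in their findings implies
    a total-defect estimate, from which found defects are subtracted."""
    total_est = found_a * found_b / found_both   # estimated total defects
    found_union = found_a + found_b - found_both # distinct defects found
    return total_est - found_union

# Team A finds 30 defects, team B finds 20, with 12 found by both:
# estimated total = 30 * 20 / 12 = 50, so about 12 defects remain.
remaining = estimated_remaining(30, 20, 12)
```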

  20. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  1. Generalisation of geographic information cartographic modelling and applications

    CERN Document Server

    Mackaness, William A; Sarjakoski, L Tiina

    2011-01-01

    Theoretical and Applied Solutions in Multi-Scale Mapping. Users have come to expect instant access to up-to-date geographical information with global coverage, presented at widely varying levels of detail, as digital and paper products; customisable data that can readily be combined with other geographic information. These requirements present an immense challenge to those supporting the delivery of such services (National Mapping Agencies (NMAs), government departments, and private businesses). Generalisation of Geographic Information: Cartographic Modelling and Applications provides a detailed review

  2. Modeling of bubble dynamics in relation to medical applications

    Energy Technology Data Exchange (ETDEWEB)

    Amendt, P.A.; London, R.A. [Lawrence Livermore National Lab., CA (United States)]; Strauss, M. [California Univ., Davis, CA (United States); Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev] [and others]

    1997-03-12

    In various pulsed-laser medical applications, strong stress transients can be generated in advance of vapor bubble formation. To better understand the evolution of stress transients and subsequent formation of vapor bubbles, two-dimensional simulations are presented in channel or cylindrical geometry with the LATIS (LAser TISsue) computer code. Differences with one-dimensional modeling are explored, and simulated experimental conditions for vapor bubble generation are presented and compared with data. 22 refs., 8 figs.

  3. Modeling of bubble dynamics in relation to medical applications

    International Nuclear Information System (INIS)

    Amendt, P.A.; London, R.A.; Strauss, M.; Israel Atomic Energy Commission, Beersheba

    1997-01-01

    In various pulsed-laser medical applications, strong stress transients can be generated in advance of vapor bubble formation. To better understand the evolution of stress transients and subsequent formation of vapor bubbles, two-dimensional simulations are presented in channel or cylindrical geometry with the LATIS (LAser TISsue) computer code. Differences with one-dimensional modeling are explored, and simulated experimental conditions for vapor bubble generation are presented and compared with data. 22 refs., 8 figs

  4. Defined Contribution Model: Definition, Theory and an Application for Turkey

    OpenAIRE

    Metin Ercen; Deniz Gokce

    1998-01-01

    Based on a numerical application that employs social and economic parameters of the Turkish economy, this study attempts to demonstrate that the current collapse in the Turkish social security system is not unavoidable. The present social security system in Turkey is based on the defined benefit model of pension provision. On the other hand, recent proposals for reform in the social security system are based on a multipillar system, where one of the alternatives is a defined contribution pens...

  5. Ozone modeling within plasmas for ozone sensor applications

    OpenAIRE

    Arshak, Khalil; Forde, Edward; Guiney, Ivor

    2007-01-01

    peer-reviewed Ozone (O3) is potentially hazardous to human health, and accurate prediction and measurement of this gas is essential in addressing its associated health risks. This paper presents theory to predict the levels of ozone concentration emitted from a dielectric barrier discharge (DBD) plasma for ozone sensing applications. This is done by postulating the kinetic model for ozone generation, with a DBD plasma at atmospheric pressure in air, in the form of a set of rate equations....

  6. Generalized Bogoliubov Polariton Model: An Application to Stock Exchange Market

    International Nuclear Information System (INIS)

    Anh, Chu Thuy; Anh, Truong Thi Ngoc; Lan, Nguyen Tri; Viet, Nguyen Ai

    2016-01-01

    A generalized Bogoliubov method for the investigation of non-simple and complex systems was developed. We take a two-branch polariton Hamiltonian model in second-quantization representation and replace the energies of quasi-particles by two distribution functions of the research objects. Application to the stock exchange market was taken as an example, where the change of the form of return distribution functions from Boltzmann-like to Gaussian-like was studied. (paper)

  7. Modelling in waters geochemistry. Concepts and applications in environment

    International Nuclear Information System (INIS)

    Windt, L. de; Lee, J.V.D.; Schmitt, J.M.

    2005-01-01

    The aim of this work is to present the main points of the physico-chemical concepts and of the mathematical laws on which the geochemical modelling of waters is based, while presenting concrete and typical application examples for problems of the environment and of water resources management. A table (Doc. AF 6530) gathers the distribution sources of software and of thermodynamic data banks. (O.M.)

  8. Sensors advancements in modeling, design issues, fabrication and practical applications

    CERN Document Server

    Mukhopadhyay, Subhash Chandra

    2008-01-01

    Sensors are the most important component in any system and engineers in any field need to understand the fundamentals of how these components work, how to select them properly and how to integrate them into an overall system. This book has outlined the fundamentals, analytical concepts, modelling and design issues, technical details and practical applications of different types of sensors, electromagnetic, capacitive, ultrasonic, vision, Terahertz, displacement, fibre-optic and so on. The book: addresses the identification, modeling, selection, operation and integration of a wide variety of se

  9. Synthesis of industrial applications of local approach to fracture models

    International Nuclear Information System (INIS)

    Eripret, C.

    1993-03-01

    This report gathers different applications of local approach to fracture models to various industrial configurations, such as nuclear pressure vessel steel, cast duplex stainless steels, or primary circuit welds such as bimetallic welds. As soon as models are developed on the basis of microstructural observations, damage mechanism analyses, and the fracture process, the local approach to fracture proves able to solve problems where classical fracture mechanics concepts fail. Therefore, the local approach appears to be a powerful tool, which complements the standard fracture criteria used in the nuclear industry by exhibiting where and why those classical concepts become invalid. (author). 1 tab., 18 figs., 25 refs

  10. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  11. Models and applications of chaos theory in modern sciences

    CERN Document Server

    Zeraoulia, Elhadj

    2011-01-01

    This book presents a select group of papers that provide a comprehensive view of the models and applications of chaos theory in medicine, biology, ecology, economy, electronics, mechanical, and the human sciences. Covering both the experimental and theoretical aspects of the subject, it examines a range of current topics of interest. It considers the problems arising in the study of discrete and continuous time chaotic dynamical systems modeling the several phenomena in nature and society-highlighting powerful techniques being developed to meet these challenges that stem from the area of nonli

  12. Recent Advances in Material and Geometrical Modelling in Dental Applications

    Directory of Open Access Journals (Sweden)

    Waleed M. S. Al Qahtani

    2018-06-01

    Full Text Available This article touches briefly on the recent advances in dental materials and geometric modelling in dental applications. The most common categories of dental materials, such as metallic alloys, composites, ceramics and nanomaterials, are briefly demonstrated. Nanotechnology has improved the quality of dental biomaterials: this new technology improves many existing material properties and also introduces new materials with superior properties that cover a wide range of applications in dentistry. Geometric modelling is discussed as a concept, with examples, within this article. Geometric modelling with engineering Computer-Aided-Design (CAD) system(s) is highly satisfactory for further analysis or Computer-Aided-Manufacturing (CAM) processes. Geometric modelling extracted from Computed-Tomography (CT) images (or similar techniques) for the sake of CAM has also reached a sufficient level of accuracy, while obtaining efficient solid modelling without huge effort on body surfaces, faces, and gap healing is still doubtful. This article is merely a compilation of knowledge learned from lectures, workshops, books, journal articles, articles from the internet, dental forums, and scientific groups' discussions.

  13. Model oriented application generation for industrial control systems

    International Nuclear Information System (INIS)

    Copy, B.; Barillere, R.; Blanco, E.; Fernandez Adiego, B.; Nogueira Fernandes, R.; Prieto Barreiro, I.

    2012-01-01

    The CERN Unified Industrial Control Systems framework (UNICOS) is a software generation methodology and a collection of development tools that standardizes the design of industrial control applications. A Software Factory, named the UNICOS Application Builder (UAB), was introduced to ease extensibility and maintenance of the framework, introducing a stable meta-model, a set of platform-independent models and platform-specific configurations against which code generation plug-ins and configuration generation plug-ins can be written. Such plug-ins currently target PLC programming environments (Schneider and SIEMENS PLCs) as well as SIEMENS WinCC Open Architecture SCADA (previously known as ETM PVSS) but are being expanded to cover more and more aspects of process control systems. We present what constitutes the UNICOS meta-model and the models in use, how these models can be used to capture knowledge about industrial control systems and how this knowledge can be used to generate both code and configuration for a variety of target usages. (authors)

  14. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicates improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream

  15. Dosimetric applications of the new ICRP lung model

    International Nuclear Information System (INIS)

    James, A.C.

    1994-06-01

    The International Commission on Radiological Protection (ICRP) has adopted a new dosimetric model of the human respiratory tract, to be issued as ICRP Publication 66. This chapter presents a summary of the main features of the new model. The model is a general update of that in Publication 30, but is significantly broader in scope. It applies explicitly to workers and all members of the public: for inhalation of particles, gases and vapors; evaluation of dose per unit intake or exposure; and interpretation of bioassay data. The approach is fundamentally different from the Publication 30 model, which calculates only the average dose to the lungs. The new model takes account of differences in radiosensitivity of respiratory tract tissues, and the wide range of doses they may receive, and calculates specific tissue doses. The model readily incorporates specific information related to the subject (age, physical activity, smoking or health status) or the exposure (aerosol size and chemical form). The application of the new model to calculate equivalent lung dose and effective dose per unit intake is illustrated for several α- and β-emitting radionuclides, and the new values obtained are compared with those given by the ICRP Publication 30 lung model

  16. Integrated climate modelling at the Kiel Institute for World Economics: The DART Model and its applications

    OpenAIRE

    Deke, Oliver; Peterson, Sonja

    2003-01-01

    The aim of this paper is to give an overview over the DART model and its applications. The main focus is on the implementation of climate impacts into DART in the course of coupling DART to the ocean-atmosphere model and on the associated empirical problems. The basic DART model and some applications are presented in the next section. Section 3 describes in detail how the economic impacts of climate change on the agricultural sector and the impact of sea level rise are implemented in DART. Se...

  17. Modeling and identification of induction micromachines in microelectromechanical systems applications

    Energy Technology Data Exchange (ETDEWEB)

    Lyshevski, S.E. [Purdue University at Indianapolis (United States). Dept. of Electrical and Computer Engineering

    2002-11-01

    Microelectromechanical systems (MEMS), which integrate motion microstructures, radiating-energy microdevices, and controlling and signal-processing integrated circuits (ICs), are widely used. Rotational and translational electromagnetic-based micromachines are used in MEMS as actuators and sensors. Brushless high-performance micromachines are the preferable choice in different MEMS applications, and therefore synchronous and induction micromachines are the best candidates. Affordability, good performance characteristics (efficiency, controllability, robustness, reliability, power and torque densities, etc.) and expanded operating envelopes result in a strong interest in the application of induction micromachines. In addition, induction micromachines can be easily fabricated using surface micromachining and high-aspect-ratio fabrication technologies. Thus, it is anticipated that induction micromachines, controlled using different control algorithms implemented using ICs, will be widely used in MEMS. Controllers can be implemented using specifically designed ICs to attain superior performance, maximize efficiency and controllability, minimize losses and electromagnetic interference, reduce noise and vibration, etc. In order to design controllers, the induction micromachine must be modeled, and its mathematical model parameters must be identified. Using microelectromechanics, nonlinear mathematical models are derived. This paper illustrates the application of nonlinear identification methods to identify the unknown parameters of three-phase induction micromachines. Two identification methods are studied: in particular, a nonlinear error mapping technique and least-squares identification are researched. Analytical and numerical results, as well as practical capabilities and effectiveness, are illustrated by identifying the unknown parameters of a three-phase brushless induction micromotor. Experimental results fully support the identification methods. (author)
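
Of the two methods named in the record, least-squares identification is the easier to sketch. The example below recovers the parameters of a deliberately simplified linear-in-parameters model from noisy measurements; the real induction-micromachine models in the paper are nonlinear and far richer, so this is only an illustration of the estimation step, with all names and values invented.

```python
# A generic least-squares parameter identification sketch (illustrative only;
# not the paper's micromachine model). We fit y = a*x1 + b*x2 from noisy data.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 2.5, -0.7                  # "unknown" parameters to recover
x = rng.uniform(-1.0, 1.0, size=(200, 2))   # regressor samples (hypothetical)
y = x @ np.array([a_true, b_true]) + rng.normal(0.0, 0.01, 200)  # noisy output

# Least-squares estimate: minimize ||X theta - y||^2
theta_hat, *_ = np.linalg.lstsq(x, y, rcond=None)
print(f"estimated a ~ {theta_hat[0]:.3f}, b ~ {theta_hat[1]:.3f}")
```

With low measurement noise the estimates land very close to the true values; for the nonlinear micromachine equations the same idea is applied after expressing the model linearly in its unknown parameters, or via iterative nonlinear least squares.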

  18. Research on mixed network architecture collaborative application model

    Science.gov (United States)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, the rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (such as Client/Server or Browser/Server models) does not support this well. Collaborative application is one good resolution. Collaborative applications have four main problems to resolve: consistency and co-edit conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation. Agents bring new methods for cooperation and for access to spatial data. A multi-level cache is a part of the full data; it reduces the network load and improves the access and handling of spatial data, especially when editing the spatial data. With agent technology, we make full use of its intelligent characteristics for managing the cache and cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  19. 2D modelling and its applications in engineering

    International Nuclear Information System (INIS)

    Altinbalik, M. Tahir; İRSEL, Gürkan

    2013-01-01

    A model, in computer-aided engineering applications, may be created using either a two-dimensional or a three-dimensional design, depending on the purpose of the design. What matters most in this regard is the selection of the right method to meet system solution requirements in the most economical way. Manufacturability of a design developed by utilising computer-aided engineering is important, but usability of the data obtained in the course of design work in production is equally important. In applications consisting of such production operations as CNC or plasma cutting, two-dimensional designs can be used directly in production. These machines are equipped with interfaces which convert two-dimensional drawings into codes. In this way, a design can be transferred directly to production, and any arrangements during the production process can be evaluated synchronously. As a result, investment expenses will be lowered, and thus costs can be reduced to some extent. In the presented study, we have studied two-dimensional design applications and requirements. We created a two-dimensional design for a part for which a three-dimensional model had previously been generated; we then transferred this design to a plasma cutting machine, and thus the operation was realized experimentally. Key words: Plasma Cutting, 2D modelling, flexibility

  20. 2D modelling and its applications in engineering

    Energy Technology Data Exchange (ETDEWEB)

    Altinbalik, M. Tahir; İRSEL, Gürkan [Trakya University, Faculty of Engineering and Architecture Mechanical Engineering Department, Edİrne (Turkey)

    2013-07-01

    A model, in computer-aided engineering applications, may be created using either a two-dimensional or a three-dimensional design, depending on the purpose of the design. What matters most in this regard is the selection of the right method to meet system solution requirements in the most economical way. Manufacturability of a design developed by utilising computer-aided engineering is important, but usability of the data obtained in the course of design work in production is equally important. In applications consisting of such production operations as CNC or plasma cutting, two-dimensional designs can be used directly in production. These machines are equipped with interfaces which convert two-dimensional drawings into codes. In this way, a design can be transferred directly to production, and any arrangements during the production process can be evaluated synchronously. As a result, investment expenses will be lowered, and thus costs can be reduced to some extent. In the presented study, we have studied two-dimensional design applications and requirements. We created a two-dimensional design for a part for which a three-dimensional model had previously been generated; we then transferred this design to a plasma cutting machine, and thus the operation was realized experimentally. Key words: Plasma Cutting, 2D modelling, flexibility.

  1. How to apply the bivariate parametric Student's t and ANOVA tests in SPSS: a practical case

    Directory of Open Access Journals (Sweden)

    María-José Rubio-Hurtado

    2012-07-01

    Full Text Available Parametric tests are a type of statistical significance test that quantifies the association or independence between a quantitative variable and a categorical one. Parametric tests impose certain prerequisites for their application: a Normal distribution of the quantitative variable in the groups being compared, homogeneity of variances in the populations from which the groups are drawn, and a sample size n of no less than 30. When these are not met, non-parametric statistical tests must be used instead. Parametric tests fall into two classes: the t test (for one sample, or for two related or independent samples) and ANOVA (for more than two independent samples).
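
To make the tests this record describes concrete, here is a minimal sketch in Python with SciPy (an assumption on my part: the record itself works in SPSS, not Python). The three groups, their means, and the sample sizes are synthetic illustrations; `stats.levene` checks the homogeneity-of-variances prerequisite, and `stats.f_oneway` runs the one-way ANOVA for more than two independent samples.

```python
# One-way ANOVA on three synthetic independent groups (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Three hypothetical samples, each with n >= 30 as the prerequisites require
group_a = rng.normal(loc=50.0, scale=5.0, size=30)
group_b = rng.normal(loc=52.0, scale=5.0, size=30)
group_c = rng.normal(loc=58.0, scale=5.0, size=30)

# Check the homogeneity-of-variances prerequisite (Levene's test)
levene_stat, levene_p = stats.levene(group_a, group_b, group_c)

# One-way ANOVA: H0 = all group means are equal
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"Levene p = {levene_p:.3f}, ANOVA F = {f_stat:.2f}, p = {p_value:.4f}")
```

If Levene's test rejected the equal-variances prerequisite, a non-parametric alternative (e.g. Kruskal-Wallis) would be the fallback, exactly as the abstract notes.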

  2. Applications of the k – ω Model in Stellar Evolutionary Models

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yan, E-mail: ly@ynao.ac.cn [Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216 (China)

    2017-05-20

    The k – ω model for turbulence was first proposed by Kolmogorov. A new k – ω model for stellar convection was developed by Li, which could reasonably describe turbulent convection not only in the convectively unstable zone, but also in the overshooting regions. We revised the k – ω model by improving several model assumptions (including the macro-length of turbulence, convective heat flux, and turbulent mixing diffusivity, etc.), making it applicable not only to convective envelopes, but also to convective cores. Eight parameters are introduced in the revised k – ω model. It should be noted that the Reynolds stress (turbulent pressure) is neglected in the equation of hydrostatic support. We applied the model to solar models and 5 M⊙ stellar models to calibrate the eight model parameters, as well as to investigate the effects of convective overshooting on the Sun and on intermediate-mass stellar models.

  3. Computer models for fading channels with applications to digital transmission

    Science.gov (United States)

    Loo, Chun; Secord, Norman

    1991-11-01

    The authors describe computer models for Rayleigh, Rician, log-normal, and land-mobile-satellite fading channels. All of the computer models for the fading channels are based on the manipulation of a white Gaussian random process, approximated by a sum of sinusoids with random phase angles. These models compare very well with analytical models in terms of the probability distributions of the envelope and phase of the fading signal. For the land-mobile-satellite fading channel, results for the level crossing rate and average fade duration are given. These results show that the computer models can provide a good coarse estimate of the time statistics of the faded signal. Also, for the land-mobile-satellite fading channel, the results show that a 3-pole Butterworth shaping filter should be used with the model. An example of the application of the land-mobile-satellite fading-channel model to predict the performance of a differential phase-shift-keying signal is described.
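
The sum-of-sinusoids construction mentioned in the abstract can be sketched as follows. This is a generic illustration rather than the authors' code: the number of sinusoids, Doppler frequency, and sampling rate are assumed values. Two quadrature Gaussian-like processes, each built from random-phase sinusoids, combine into a Rayleigh-distributed envelope.

```python
# Rayleigh fading via sum-of-sinusoids: two approximate Gaussian processes
# (in-phase and quadrature), each a sum of N random-phase sinusoids.
import numpy as np

rng = np.random.default_rng(0)
N = 64                        # sinusoids per quadrature branch (assumed)
f_d = 100.0                   # assumed maximum Doppler frequency (Hz)
t = np.arange(0.0, 0.1, 1e-4) # 0.1 s at an assumed 10 kHz sampling rate

def gaussian_approx(rng, t):
    """Approximate a zero-mean, unit-variance Gaussian process by N sinusoids."""
    theta = rng.uniform(0.0, 2 * np.pi, N)  # random arrival angles
    phi = rng.uniform(0.0, 2 * np.pi, N)    # random phases
    doppler = f_d * np.cos(theta)           # per-path Doppler shifts
    return np.sqrt(2.0 / N) * np.sum(
        np.cos(2 * np.pi * doppler[:, None] * t[None, :] + phi[:, None]), axis=0)

i_branch = gaussian_approx(rng, t)        # in-phase component
q_branch = gaussian_approx(rng, t)        # quadrature component
envelope = np.hypot(i_branch, q_branch)   # Rayleigh-distributed envelope
print(f"mean envelope over {t.size} samples: {envelope.mean():.3f}")
```

A Rician channel follows by adding a constant line-of-sight term to one branch, and the shaping filter the abstract recommends would be applied to the branch processes before forming the envelope.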

  4. New Trends in Model Coupling Theory, Numerics and Applications

    International Nuclear Information System (INIS)

    Coquel, F.; Godlewski, E.; Herard, J. M.; Segre, J.

    2010-01-01

    This special issue comprises selected papers from the workshop New Trends in Model Coupling, Theory, Numerics and Applications (NTMC'09), which took place in Paris, September 2 - 4, 2009. The search for optimal technological solutions in a large number of industrial systems requires numerical simulations of complex phenomena which are often characterized by the coupling of models related to various space and/or time scales. Thus, so-called multi-scale modelling has been a thriving scientific activity which connects applied mathematics and other disciplines such as physics, chemistry, biology or even the social sciences. To illustrate the variety of fields concerned by the natural occurrence of model coupling we may quote: meteorology, where it is required to take into account several turbulence scales or the interaction between oceans and atmosphere, but also regional models in a global description; solid mechanics, where a thorough understanding of complex phenomena such as the propagation of cracks requires coupling various models from the atomistic level to the macroscopic level; plasma physics for fusion energy, for instance, where dense plasmas and collisionless plasmas coexist; multiphase fluid dynamics, when several types of flow corresponding to several types of models are present simultaneously in complex circuits; and social behaviour analysis, with interaction between individual actions and collective behaviour. (authors)

  5. AGR core models and their application to HTRs and RBMKs

    International Nuclear Information System (INIS)

    Baylis, Samuel

    2014-01-01

    EDF Energy operates 14 AGRs, commissioned between 1976 and 1989. The graphite moderators of these gas cooled reactors are subjected to a number of ageing processes under fast neutron irradiation in a high temperature CO2 environment. As the graphite ages, continued safe operation requires an advanced whole-core modeling capability to enable accurate assessments of the core’s ability to fulfil fundamental nuclear safety requirements. This is also essential in evaluating the reactor's remaining economic lifetime, and similar assessments are useful for HTRs in the design stage. A number of computational and physical models of AGR graphite cores have been developed or are in development, allowing simulation of the reactors in normal, fault and seismic conditions. Many of the techniques developed are applicable to other graphite moderated reactors. Modeling of the RBMK allows validation against a core in a more advanced state of ageing than the AGRs, while there is also an opportunity to adapt the models for high temperature reactors. As an example, a finite element model of the HTR-PM side reflector based on rigid bodies and nonlinear springs is developed, allowing rapid assessments of distortion in the structure to be made. A model of the RBMK moderator has also been produced using an established AGR code based on similar methods. In addition, this paper discusses the limitations of these techniques and the development of more complex core models that address these limitations, along with the lessons that can be applied to HTRs. (author)

  6. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

    Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computing is implemented and a data-driven smoothing parameter is nicely incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model to applications outside of yield forecasting.

  7. Optimizing a gap conductance model applicable to VVER-1000 thermal–hydraulic model

    International Nuclear Information System (INIS)

    Rahgoshay, M.; Hashemi-Tilehnoee, M.

    2012-01-01

    Highlights: ► Two known conductance models for application in a VVER-1000 thermal–hydraulic code are examined. ► An optimized gap conductance model is developed which can predict the gap conductance in good agreement with FSAR data. ► The licensed thermal–hydraulic code is coupled externally with the gap conductance model predictor. -- Abstract: The modeling of gap conductance for application in VVER-1000 thermal–hydraulic codes is addressed. Two known models, namely the CALZA-BINI and RELAP5 gap conductance models, are examined. By externally linking the gap conductance models with the COBRA-EN thermal–hydraulic code, the acceptable range of each model is specified. The result of each gap conductance model versus linear heat rate has been compared with FSAR data. A linear heat rate of about 9 kW/m is the boundary for the optimization process. Since each gap conductance model has its advantages and limitations, the optimized gap conductance model can predict the gap conductance better than either of the two other models individually.

  8. Application of model based control to robotic manipulators

    Science.gov (United States)

    Petrosky, Lyman J.; Oppenheim, Irving J.

    1988-01-01

    A robot that can duplicate human motion capabilities in such activities as balancing, reaching, lifting, and moving has been built and tested. These capabilities are achieved through the use of real time Model-Based Control (MBC) techniques which have recently been demonstrated. MBC accounts for all manipulator inertial forces and provides stable manipulator motion control even at high speeds. To effectively demonstrate the unique capabilities of MBC, an experimental robotic manipulator was constructed, which stands upright, balancing on a two wheel base. The mathematical modeling of dynamics inherent in MBC permits the control system to perform functions that are impossible with conventional non-model based methods. These capabilities include: (1) Stable control at all speeds of operation; (2) Operations requiring dynamic stability such as balancing; (3) Detection and monitoring of applied forces without the use of load sensors; (4) Manipulator safing via detection of abnormal loads. The full potential of MBC has yet to be realized. The experiments performed for this research are only an indication of the potential applications. MBC has no inherent stability limitations, and its range of applicability is limited only by the attainable sampling rate, modeling accuracy, and sensor resolution. Manipulators could be designed to operate at the highest speed mechanically attainable without being limited by control inadequacies. Manipulators capable of operating many times faster than current machines would certainly increase productivity for many tasks.
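
    The computed-torque law at the heart of model-based control can be illustrated compactly: the controller uses the manipulator's dynamic model to cancel gravity and friction and impose linear error dynamics. Below is a minimal single-joint (pendulum) sketch; the 1-DOF simplification, parameters, and gains are illustrative assumptions, not the paper's balancing robot.

```python
import math

# Minimal computed-torque (model-based) control sketch for a 1-DOF arm
# (a simple pendulum); parameters and gains are illustrative only.
m, l, g, b = 1.0, 0.5, 9.81, 0.1   # mass, length, gravity, viscous friction

def dynamics(q, qd, tau):
    # I*qdd = tau - m*g*l*sin(q) - b*qd, with I = m*l^2
    I = m * l * l
    return (tau - m * g * l * math.sin(q) - b * qd) / I

def computed_torque(q, qd, q_des, Kp=100.0, Kd=20.0):
    # Feedback linearization: cancel gravity/friction terms using the
    # model, then impose the desired closed-loop error dynamics.
    I = m * l * l
    qdd_cmd = Kp * (q_des - q) + Kd * (0.0 - qd)
    return I * qdd_cmd + m * g * l * math.sin(q) + b * qd

q, qd, dt, q_des = 0.0, 0.0, 1e-3, 1.0
for _ in range(5000):              # 5 s of simulated time
    tau = computed_torque(q, qd, q_des)
    qdd = dynamics(q, qd, tau)
    qd += qdd * dt
    q += qd * dt

print(abs(q - q_des))              # tracking error is small
```

    Because the controller and the simulated plant share the same model, the cancellation is exact and the closed loop behaves like a linear second-order system, which is why MBC's range of applicability is limited mainly by sampling rate and model accuracy.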

  9. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
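
    The probabilistic Markov simulation described above can be sketched outside WinBUGS as well. The following minimal Python cohort model draws transition probabilities from Beta distributions (standing in for Bayesian posteriors) and propagates costs over yearly cycles; all states, rates, and costs are assumed for illustration.

```python
import random

# Minimal probabilistic Markov cohort model: three states (Well, Sick,
# Dead), Beta-distributed transition probabilities standing in for
# Bayesian posterior draws. All numbers are illustrative assumptions.
random.seed(1)

def run_model(p_well_sick, p_sick_dead, cost_sick=1000.0, cycles=20):
    well, sick, dead = 1.0, 0.0, 0.0
    total_cost = 0.0
    for _ in range(cycles):
        new_sick = well * p_well_sick + sick * (1 - p_sick_dead)
        new_dead = dead + sick * p_sick_dead
        well, sick, dead = well * (1 - p_well_sick), new_sick, new_dead
        total_cost += sick * cost_sick   # yearly cost accrues in Sick
    return total_cost

# Probabilistic sensitivity analysis: one model run per posterior draw.
draws = [run_model(random.betavariate(2, 18),    # ~10% yearly Well->Sick
                   random.betavariate(3, 27))    # ~10% yearly Sick->Dead
         for _ in range(1000)]
mean_cost = sum(draws) / len(draws)
print(round(mean_cost, 1))
```

    Each posterior draw produces one run of the Markov model, so the spread of `draws` directly expresses parameter uncertainty in the economic output.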

  10. Towards Industrial Application of Damage Models for Sheet Metal Forming

    Science.gov (United States)

    Doig, M.; Roll, K.

    2011-05-01

    Due to global warming and the financial situation, the demand to reduce CO2 emissions and production costs drives the continual development of new materials. In the automotive industry, occupant safety is an additional constraint. Bringing these arguments together, the preferred approach for lightweight design of car components, especially for body-in-white, is the use of modern steels. Such steel grades, also called advanced high strength steels (AHSS), exhibit high strength as well as high formability. Not only the material behavior but also the damage behavior of AHSS differs from that of standard steels. Conventional methods for damage prediction in industry, like the forming limit curve (FLC), are not reliable for AHSS. Physically based damage models are often used in crash and bulk forming simulations. The still open question is the industrial application of these models to sheet metal forming. This paper evaluates the Gurson-Tvergaard-Needleman (GTN) model and the model of Lemaitre within commercial codes, with the goal of industrial application.

  11. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above described precipitation field. The simulated hydrographs obtained in the above described three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well on dense precipitation information, while a model calibrated on dense precipitation information fails on sparse precipitation information.
Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
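
    The regression-based infilling step can be sketched as follows: coefficients are fitted on the period with complete records, then used to estimate missing values at one gauge from its neighbours. The data and the underlying relation are synthetic assumptions for illustration.

```python
import numpy as np

# Sketch of infilling missing rainfall at one gauge via multiple linear
# regression on neighbouring gauges. All data below are synthetic.
rng = np.random.default_rng(0)
n = 200
g1 = rng.gamma(2.0, 3.0, n)            # neighbouring gauge 1 (mm/day)
g2 = rng.gamma(2.0, 3.0, n)            # neighbouring gauge 2 (mm/day)
target = 0.6 * g1 + 0.3 * g2 + rng.normal(0, 0.5, n)  # gauge to be infilled

# Fit regression coefficients on the period with complete data.
X = np.column_stack([np.ones(n), g1, g2])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)

# Infill a "missing" day from the neighbouring observations alone.
estimate = beta @ [1.0, 10.0, 5.0]
print(beta.round(2), round(float(estimate), 2))
```

    In the study's setting the same idea is applied per gauge location before re-running the calibrated model over the validation period.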

  12. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

    Temporal and spatiotemporal data form an inherent fabric of society, as we are faced with streams of data coming from numerous sensors, data feeds, and recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge of developing models that are both accurate and user-friendly (interpretable). The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies by bringing together ideas, algorithms, and numeric studies, which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  13. Clinical application of the five-factor model.

    Science.gov (United States)

    Widiger, Thomas A; Presnall, Jennifer Ruth

    2013-12-01

    The Five-Factor Model (FFM) has become the predominant dimensional model of general personality structure. The purpose of this paper is to suggest a clinical application. A substantial body of research indicates that the personality disorders included within the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) can be understood as extreme and/or maladaptive variants of the FFM (the acronym "DSM" refers to any particular edition of the APA DSM). In addition, the current proposal for the forthcoming fifth edition of the DSM (i.e., DSM-5) is shifting toward an FFM dimensional trait model of personality disorder. Advantages of this shifting conceptualization are discussed, including treatment planning. © 2012 Wiley Periodicals, Inc.

  14. Improved dual sided doped memristor: modelling and applications

    Directory of Open Access Journals (Sweden)

    Anup Shrivastava

    2014-05-01

    Full Text Available The memristor, a novel and emerging electronic device with a vast range of applications, suffers from poor frequency response and limited saturation length. In this paper, the authors present a novel and innovative device structure for the memristor with two active layers, along with its non-linear ionic drift model, for improved frequency response and saturation length. The authors investigated and compared the I–V characteristics of the proposed model with those of conventional memristors and found better results in each case (i.e., for different window functions) for the proposed dual-sided doped memristor. For circuit-level simulation, they developed a SPICE model of the proposed memristor and designed some logic gates based on hybrid complementary metal oxide semiconductor memristive logic (memristor ratioed logic). The proposed memristor yields improved results in terms of noise margin, delay time and dynamic hazards compared with conventional (single active layer) memristors.
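
    For context, the widely used single-active-layer nonlinear ionic drift model (with a Joglekar-type window function) can be simulated in a few lines. This is a sketch of the conventional baseline that such work compares against, not the dual-layer device, and all device parameters are illustrative assumptions.

```python
import math

# Minimal single-layer memristor simulation: HP-style nonlinear ionic
# drift with a Joglekar window. Parameters are illustrative only.
Ron, Roff = 100.0, 16e3        # on/off resistances (ohms)
D, mu_v = 10e-9, 1e-14         # film thickness (m), ion mobility (m^2/(V s))
p = 2                          # window-function exponent

def window(x):
    # Joglekar window f(x) = 1 - (2x - 1)^(2p): drift vanishes at boundaries.
    return 1.0 - (2.0 * x - 1.0) ** (2 * p)

x, dt = 0.1, 1e-6              # normalized doped-region width, time step
ivals, vvals = [], []
for k in range(20000):         # 20 ms of a 1 kHz, 1 V sinusoidal drive
    v = 1.0 * math.sin(2 * math.pi * 1e3 * k * dt)
    M = Ron * x + Roff * (1.0 - x)            # instantaneous memristance
    i = v / M
    x += mu_v * Ron / D**2 * i * window(x) * dt
    x = min(max(x, 0.0), 1.0)                 # keep state bounded
    ivals.append(i)
    vvals.append(v)

print(min(ivals), max(ivals))
```

    The window function forces the ionic drift to vanish at the film boundaries, which produces the saturation behaviour that the proposed dual-layer structure aims to improve.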

  15. Application of a leakage model to assess exfiltration from sewers.

    Science.gov (United States)

    Karpf, C; Krebs, P

    2005-01-01

    The exfiltration of wastewater from sewer systems in urban areas causes a deterioration of soil and possibly groundwater quality. Besides the simulation of transport and degradation processes in the unsaturated zone and in the aquifer, the analysis of the potential impact requires the estimation of the quantity and temporal variation of wastewater exfiltration. Exfiltration can be assessed by the application of a leakage model. The hydrological approach, originally developed to simulate the interactions between groundwater and surface water, was adapted to allow for modelling of interactions between groundwater and the sewer system. In order to approximate the exfiltration-specific model parameters, infiltration-specific parameters were used as a basis. Scenario analysis of the exfiltration in the City of Dresden from 1997 to 1999 and during the flood event in August 2002 shows the variation and the extent of exfiltration rates.
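
    The leakage approach reduces to a simple head-difference relation: exfiltration is proportional to the difference between the in-sewer water level and the groundwater head, scaled by a leakage factor and the defect area. The formulation and parameter values below are an illustrative sketch, not the calibrated Dresden model.

```python
# Sketch of a hydrological leakage approach for sewer exfiltration.
# The leakage factor and defect geometry are assumed values.

def exfiltration_rate(h_sewer, h_gw, leakage_factor=1e-6, area=0.05):
    """Exfiltration (m^3/s) through a pipe defect of given area (m^2).

    leakage_factor (1/s) lumps the hydraulic conductivity and thickness
    of the clogging (colmation) layer in this simple formulation.
    """
    head_diff = h_sewer - h_gw
    if head_diff <= 0:
        return 0.0          # groundwater above sewage level: infiltration instead
    return leakage_factor * area * head_diff

# Daily variation of the in-sewer water level drives a varying rate.
levels = [0.05, 0.12, 0.30, 0.18]   # sewage depth above pipe invert (m)
gw = 0.0                            # groundwater head relative to invert (m)
rates = [exfiltration_rate(h, gw) for h in levels]
print([f"{r:.2e}" for r in rates])
```

    The one-sided condition captures the asymmetry noted in the abstract: the same defect acts as an exfiltration path when sewage stands above the groundwater table and as an infiltration path otherwise.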

  16. Explicit Nonlinear Model Predictive Control Theory and Applications

    CERN Document Server

    Grancharova, Alexandra

    2012-01-01

    Nonlinear Model Predictive Control (NMPC) has become the accepted methodology to solve complex control problems related to process industries. The main motivation behind explicit NMPC is that an explicit state feedback law avoids the need for executing a numerical optimization algorithm in real time. The benefits of an explicit solution, in addition to the efficient on-line computations, include also verifiability of the implementation and the possibility to design embedded control systems with low software and hardware complexity. This book considers the multi-parametric Nonlinear Programming (mp-NLP) approaches to explicit approximate NMPC of constrained nonlinear systems, developed by the authors, as well as their applications to various NMPC problem formulations and several case studies. The following types of nonlinear systems are considered, resulting in different NMPC problem formulations: Ø  Nonlinear systems described by first-principles models and nonlinear systems described by black-box models; ...

  17. The Logistic Maturity Model: Application to a Fashion Company

    Directory of Open Access Journals (Sweden)

    Claudia Battista

    2013-08-01

    Full Text Available This paper describes the structure of the logistic maturity model (LMM) in detail and shows the possible improvements that can be achieved by using this model, in terms of identifying the most appropriate actions to be taken in order to increase the performance of the logistics processes in industrial companies. The paper also gives an example of the LMM’s application to a well-known Italian women’s fashion firm, which decided to use the model as a guideline for the optimization of its supply chain. Relying on a 5-level maturity staircase, specific achievement indicators as well as key performance indicators and best practices are defined and related to each logistics area/process/sub-process, allowing any user to easily and rapidly understand the most critical logistical issues in terms of process immaturity.

  18. On the application of copula in modeling maintenance contract

    International Nuclear Information System (INIS)

    Iskandar, B P; Husniah, H

    2016-01-01

    This paper deals with the application of copula in maintenance contracts for a repairable item. Failures of the item are modeled using a two-dimensional approach involving the age and usage of the item, which requires a bivariate distribution to model failures. When the item fails, corrective maintenance (CM) in the form of minimal repair is performed. CM can be outsourced to an external agent or done in-house. The decision problem for the owner is to find the maximum total profit, whilst for the agent it is to determine the optimal price of the contract. We obtain the mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation. (paper)
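
    The bivariate failure modeling via copula can be illustrated with a small sketch: a Gaussian copula couples Weibull marginals for age and usage at failure. The marginals, scales, and correlation are assumed values, not parameters from the paper.

```python
import math
import random

# Gaussian-copula sketch of two-dimensional (age, usage) failure modeling.
# Marginals and dependence strength are illustrative assumptions.
random.seed(42)
rho = 0.7                      # dependence between age and usage

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def weibull_inv(u, shape, scale):
    # Inverse CDF of the Weibull distribution.
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

samples = []
for _ in range(5000):
    # Correlated standard normals -> uniforms -> Weibull marginals.
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho**2) * random.gauss(0, 1)
    age = weibull_inv(norm_cdf(z1), shape=2.0, scale=5.0)      # years
    usage = weibull_inv(norm_cdf(z2), shape=1.5, scale=100.0)  # 10^3 km
    samples.append((age, usage))

mean_age = sum(a for a, _ in samples) / len(samples)
print(round(mean_age, 2))      # close to the Weibull mean 5*Gamma(1.5)
```

    Sampling from the copula rather than from independent marginals preserves the dependence between age and usage at failure, which is exactly what the two-dimensional approach is meant to capture.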

  19. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    Science.gov (United States)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF-inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation for the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  20. Application of Z-Number Based Modeling in Psychological Research

    Directory of Open Access Journals (Sweden)

    Rafik Aliev

    2015-01-01

    Full Text Available Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measurement of psychological parameters is performed using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger’s Anxiety Test, completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from completed questionnaires. The basic steps of Z-number based modeling with numerical solutions are presented.

  1. A review of visual MODFLOW applications in groundwater modelling

    Science.gov (United States)

    Hariharan, V.; Shankar, M. Uma

    2017-11-01

    Visual MODFLOW is a Graphical User Interface for the USGS MODFLOW. It is commercial software that is popular among hydrogeologists for its user-friendly features. The software is mainly used for groundwater flow and contaminant transport models under different conditions. This article is intended to review the versatility of its applications in groundwater modelling over the last 22 years. Agriculture, airfields, constructed wetlands, climate change, drought studies, Environmental Impact Assessment (EIA), landfills, mining operations, river and flood plain monitoring, salt water intrusion, soil profile surveys, watershed analyses, etc., are the areas where the software has reportedly been used to date. The review provides clarity on the scope of the software in groundwater modelling and research.

  2. Polycrystalline CVD diamond device level modeling for particle detection applications

    Science.gov (United States)

    Morozzi, A.; Passeri, D.; Kanxheri, K.; Servoli, L.; Lagomarsino, S.; Sciortino, S.

    2016-12-01

    Diamond is a promising material whose excellent physical properties foster its use for radiation detection applications, in particular in those hostile operating environments where the behavior of silicon-based detectors is limited due to the high radiation fluence. Within this framework, the application of Technology Computer Aided Design (TCAD) simulation tools is highly envisaged for the study, optimization and predictive analysis of sensing devices. Given the novelty of using diamond in electronics, this material is not included in the library of commercial, state-of-the-art TCAD software tools. In this work, we propose the development, application and validation of numerical models to simulate the electrical behavior of polycrystalline (pc)CVD diamond conceived for diamond sensors for particle detection. The model focuses on the characterization of a physically-based pcCVD diamond bandgap, taking into account deep-level defects acting as recombination centers and/or trap states. While a definite picture of the polycrystalline diamond band-gap is still debated, the effect of the main parameters (e.g. trap densities, capture cross-sections, etc.) can be deeply investigated thanks to the simulation approach. The charge collection efficiency due to β-particle irradiation of diamond materials, provided by different vendors and with different electrode configurations, has been selected as the figure of merit for the model validation. The good agreement between measurements and simulation findings, with the trap density as the only fitting parameter, confirms the suitability of the TCAD modeling approach as a predictive tool for the design and optimization of diamond-based radiation detectors.
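
    The charge-collection-efficiency figure of merit can be related to the classic Hecht picture, here averaged numerically over a uniform ionization track (as for a traversing β particle). This is a textbook sketch with assumed carrier drift lengths, far simpler than the paper's TCAD trap model.

```python
import math

# Mean charge collection efficiency (CCE) for a planar detector with
# uniform charge deposition, from the per-depth Hecht contributions of
# electrons and holes. Drift lengths below are illustrative assumptions.
def cce_uniform(d, lam_e, lam_h, n=10000):
    """Mean CCE for thickness d (um) and drift lengths lam_e, lam_h (um)."""
    total = 0.0
    for k in range(n):
        x = (k + 0.5) / n * d          # depth of the e-h pair (midpoint rule)
        eta_e = lam_e / d * (1 - math.exp(-(d - x) / lam_e))
        eta_h = lam_h / d * (1 - math.exp(-x / lam_h))
        total += eta_e + eta_h
    return total / n

# Polycrystalline material: drift lengths shorter than thickness -> CCE well below 1.
print(round(cce_uniform(500.0, 150.0, 120.0), 3))
# Near-ideal material: drift lengths much longer than thickness -> CCE approaches 1.
print(round(cce_uniform(500.0, 5000.0, 5000.0), 3))
```

    In the trap-limited regime, shorter drift lengths (higher trap densities) directly depress the CCE, which is why the trap density works as the single fitting parameter in the validation described above.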

  3. Applications of the SWAT Model Special Section: Overview and Insights.

    Science.gov (United States)

    Gassman, Philip W; Sadeghi, Ali M; Srinivasan, Raghavan

    2014-01-01

    The Soil and Water Assessment Tool (SWAT) model has emerged as one of the most widely used water quality watershed- and river basin-scale models worldwide, applied extensively for a broad range of hydrologic and/or environmental problems. The international use of SWAT can be attributed to its flexibility in addressing water resource problems, extensive networking via dozens of training workshops and the several international conferences that have been held during the past decade, comprehensive online documentation and supporting software, and an open source code that can be adapted by model users for specific application needs. The catalyst for this special collection of papers was the 2011 International SWAT Conference & Workshops held in Toledo, Spain, which featured over 160 scientific presentations representing SWAT applications in 37 countries. This special collection presents 22 specific SWAT-related studies, most of which were presented at the 2011 SWAT Conference; it represents SWAT applications on five different continents, with the majority of studies being conducted in Europe and North America. The papers cover a variety of topics, including hydrologic testing at a wide range of watershed scales, transport of pollutants in northern European lowland watersheds, data input and routing method effects on sediment transport, development and testing of potential new model algorithms, and description and testing of supporting software. In this introduction to the special section, we provide a synthesis of these studies within four main categories: (i) hydrologic foundations, (ii) sediment transport and routing analyses, (iii) nutrient and pesticide transport, and (iv) scenario analyses. We conclude with a brief summary of key SWAT research and development needs. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  4. Polycrystalline CVD diamond device level modeling for particle detection applications

    International Nuclear Information System (INIS)

    Morozzi, A.; Passeri, D.; Kanxheri, K.; Servoli, L.; Lagomarsino, S.; Sciortino, S.

    2016-01-01

    Diamond is a promising material whose excellent physical properties foster its use for radiation detection applications, in particular in those hostile operating environments where the behavior of silicon-based detectors is limited due to the high radiation fluence. Within this framework, the application of Technology Computer Aided Design (TCAD) simulation tools is highly envisaged for the study, optimization and predictive analysis of sensing devices. Given the novelty of using diamond in electronics, this material is not included in the library of commercial, state-of-the-art TCAD software tools. In this work, we propose the development, application and validation of numerical models to simulate the electrical behavior of polycrystalline (pc)CVD diamond conceived for diamond sensors for particle detection. The model focuses on the characterization of a physically-based pcCVD diamond bandgap, taking into account deep-level defects acting as recombination centers and/or trap states. While a definite picture of the polycrystalline diamond band-gap is still debated, the effect of the main parameters (e.g. trap densities, capture cross-sections, etc.) can be deeply investigated thanks to the simulation approach. The charge collection efficiency due to β-particle irradiation of diamond materials, provided by different vendors and with different electrode configurations, has been selected as the figure of merit for the model validation. The good agreement between measurements and simulation findings, with the trap density as the only fitting parameter, confirms the suitability of the TCAD modeling approach as a predictive tool for the design and optimization of diamond-based radiation detectors.

  5. Application of declarative modeling approaches for external events

    International Nuclear Information System (INIS)

    Anoba, R.C.

    2005-01-01

    Probabilistic Safety Assessments (PSAs) are increasingly being used as a tool for supporting the acceptability of design, procurement, construction, operation, and maintenance activities at Nuclear Power Plants. Since the issuance of Generic Letter 88-20 and subsequent IPE/IPEEE assessments, the NRC has issued several Regulatory Guides, such as RG 1.174, to describe the use of PSA in risk-informed regulation activities. Most PSAs have the capability to address internal events, including internal floods. As more demands are placed on using the PSA to support risk-informed applications, there has been a growing need to integrate other external events (seismic, fire, etc.) into the logic models. Most external events involve spatial dependencies and usually impact the logic models at the component level. Therefore, manual insertion of external event impacts into a complex integrated fault tree model may be too cumbersome for routine uses of the PSA. Within the past year, a declarative modeling approach has been developed to automate the injection of external events into the PSA. The intent of this paper is to introduce the concept of declarative modeling in the context of external event applications. A declarative modeling approach involves the definition of rules for the injection of external event impacts into the fault tree logic. A software tool such as EPRI's XInit program can be used to interpret the pre-defined rules and automatically inject external event elements into the PSA. The injection process can easily be repeated, as required, to address plant changes, sensitivity issues, changes in boundary conditions, etc. External event elements may include fire initiating events, seismic initiating events, seismic fragilities, fire-induced hot short events, special human failure events, etc. This approach has been applied at a number of US nuclear power plants, including a nuclear power plant in Romania. (authors)
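
    The declarative idea, rules that match parts of the fault tree and inject external-event elements, can be sketched with a toy tree. The tree encoding, rule format, and event names below are illustrative assumptions, not the XInit syntax.

```python
# Toy sketch of rule-driven injection of external events into fault
# tree logic. Gate and event names are hypothetical.
fault_tree = {"PUMP_FAILS": {"gate": "OR",
                             "inputs": ["PUMP_MECH_FAIL", "POWER_LOSS"]}}

# Each rule: inject a new basic event under every gate that contains a
# matching basic event (e.g., a seismic fragility for the pump).
rules = [{"match": "PUMP_MECH_FAIL", "inject": "SEISMIC_PUMP_FAIL"}]

def inject_external_events(tree, rules):
    for gate in tree.values():
        for rule in rules:
            if rule["match"] in gate["inputs"] and rule["inject"] not in gate["inputs"]:
                gate["inputs"].append(rule["inject"])
    return tree

inject_external_events(fault_tree, rules)
print(fault_tree["PUMP_FAILS"]["inputs"])
```

    Because the rules, not hand edits, carry the external-event knowledge, the injection is repeatable: re-running it after a plant change leaves already-injected gates unchanged.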

  6. Modelling of Argon Cold Atmospheric Plasmas for Biomedical Applications

    Science.gov (United States)

    Atanasova, M.; Benova, E.; Degrez, G.; van der Mullen, J. A. M.

    2018-02-01

    Plasmas for biomedical applications are one of the newest fields of plasma utilization. Interest in the use of plasma in medicine is especially high. Promising results have been achieved in blood coagulation, wound healing, and the treatment of some forms of cancer, diabetic complications, etc. However, the investigations of biomedical applications from a biological and medical viewpoint are much more advanced than the studies of the dynamics of the plasma. In this work we aim to address some specific challenges in the field of plasma modelling arising from biomedical applications: what are the spatial distributions of the plasma's reactive species and electric fields, and what are their production mechanisms; what fluxes and energies do the various components of the plasma deliver to the treated surfaces; and what is the gas flow pattern? The focus is on two devices, namely the capacitively coupled plasma jet and the microwave surface wave sustained discharge. The devices are representatives of the so-called cold atmospheric plasmas (CAPs): discharges characterized by low gas temperature (less than 40°C at the point of application) and non-equilibrium chemistry.

  7. Current developments in soil organic matter modeling and the expansion of model applications: a review

    International Nuclear Information System (INIS)

    Campbell, Eleanor E; Paustian, Keith

    2015-01-01

    Soil organic matter (SOM) is an important natural resource. It is fundamental to soil and ecosystem functions across a wide range of scales, from site-specific soil fertility and water holding capacity to global biogeochemical cycling. It is also a highly complex material that is sensitive to direct and indirect human impacts. In SOM research, simulation models play an important role by providing a mathematical framework to integrate, examine, and test the understanding of SOM dynamics. Simulation models of SOM are also increasingly used in more ‘applied’ settings to evaluate human impacts on ecosystem function, and to manage SOM for greenhouse gas mitigation, improved soil health, and sustainable use as a natural resource. Within this context, there is a need to maintain a robust connection between scientific developments in SOM modeling approaches and SOM model applications. This need forms the basis of this review. In this review we first provide an overview of SOM modeling, focusing on SOM theory, data-model integration, and model development as evidenced by a quantitative review of SOM literature. Second, we present the landscape of SOM model applications, focusing on examples in climate change policy. We conclude by discussing five areas of recent developments in SOM modeling including: (1) microbial roles in SOM stabilization; (2) modeling SOM saturation kinetics; (3) temperature controls on decomposition; (4) SOM dynamics in deep soil layers; and (5) SOM representation in earth system models. Our aim is to comprehensively connect SOM model development to its applications, revealing knowledge gaps in need of focused interdisciplinary attention and exposing pitfalls that, if avoided, can lead to best use of SOM models to support policy initiatives and sustainable land management solutions. (topical review)
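
    A minimal multi-pool, first-order decomposition sketch (in the spirit of models such as Century or RothC) shows the basic machinery discussed above: pool-specific rate constants, a humification transfer, and a temperature rate modifier. The pool sizes, rate constants, transfer fraction, and Q10 value below are assumed.

```python
# Two-pool, first-order SOM decomposition sketch with a Q10 temperature
# rate modifier. All parameter values are illustrative assumptions.
def simulate_som(fast0=2.0, slow0=50.0, k_fast=0.5, k_slow=0.02,
                 input_c=1.0, humify=0.3, temp=15.0, q10=2.0, years=100):
    """Returns (fast, slow) pool sizes (kg C/m^2) after `years` annual steps."""
    f_temp = q10 ** ((temp - 15.0) / 10.0)   # temperature rate modifier
    fast, slow = fast0, slow0
    for _ in range(years):
        d_fast = k_fast * f_temp * fast      # annual decomposition fluxes
        d_slow = k_slow * f_temp * slow
        fast += input_c - d_fast             # litter input enters fast pool
        slow += humify * d_fast - d_slow     # a fraction is humified to slow
    return fast, slow

base = simulate_som()
warm = simulate_som(temp=20.0)               # +5 C warming scenario
print(round(sum(base), 1), round(sum(warm), 1))
```

    Even this toy model reproduces a qualitative result relevant to the climate-policy applications discussed above: warming accelerates decomposition and lowers the equilibrium carbon stock.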

  8. Optimization of friction welding by taguchi and ANOVA method on commercial aluminium tube to Al 2025 tube plate with backing block using an external tool

    International Nuclear Information System (INIS)

    Kanna, S.; Kumaraswamidhs, L. A.; Kumaran, S. Senthil

    2016-01-01

    The aim of the present work is to optimize friction welding of tube to tube plate using an external tool (FWTPET), with a clearance fit, for a commercial aluminium tube and an Al 2025 tube plate with a backing block. Conventional friction welding is suitable for welding only symmetrical joints, either tube to tube or rod to rod, but in this research, with the help of an external tool, welding of the unsymmetrical tube-to-tube-plate configuration has also been achieved. In this investigation, the welding parameters tool rotating speed (rpm), projection of tube (mm) and depth of cut (mm) are determined according to a Taguchi L9 orthogonal array. Two conditions were considered in this experiment: condition 1 is a flat plate with a plain tube without holes [WOH] on the circumference of its surface, and condition 2 is a flat plate with a plain tube with holes [WH] on the circumference of its surface. The Taguchi L9 orthogonal array was utilized to find the most significant control factors which will yield better joint strength. In addition, the most influential process parameter has been determined using statistical analysis of variance (ANOVA). Finally, the results for the two conditions have been compared by means of percentage of contribution and regression analysis. The general regression equation is formulated, better strength is obtained, and it is validated by means of a confirmation test. The optimal welded joint strengths for the tube without holes and the tube with holes were observed to be 319.485 MPa and 264.825 MPa, respectively.
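
    The percentage-of-contribution computation used in such a Taguchi/ANOVA analysis can be sketched directly: each factor's sum of squares, taken over the L9 column-level means, is expressed as a share of the total sum of squares. The joint-strength responses below are made-up numbers for illustration, not the paper's measurements.

```python
# ANOVA percentage-of-contribution sketch on a Taguchi L9 orthogonal
# array with three 3-level factors. Response values are illustrative.
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]
strength = [245, 260, 252, 280, 310, 295, 270, 285, 265]  # MPa (made up)

grand = sum(strength) / len(strength)
ss_total = sum((y - grand) ** 2 for y in strength)

def ss_factor(col):
    # Between-level sum of squares for one factor (one array column).
    ss = 0.0
    for level in (1, 2, 3):
        ys = [y for run, y in zip(L9, strength) if run[col] == level]
        ss += len(ys) * (sum(ys) / len(ys) - grand) ** 2
    return ss

for name, col in [("rotating speed", 0), ("tube projection", 1), ("depth of cut", 2)]:
    contribution = 100.0 * ss_factor(col) / ss_total
    print(f"{name}: {contribution:.1f}% of total variation")
```

    Because the array columns are orthogonal, the three factor sums of squares plus the residual decompose the total exactly, so the printed percentages never exceed 100 in aggregate.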

  9. Optimization of friction welding by taguchi and ANOVA method on commercial aluminium tube to Al 2025 tube plate with backing block using an external tool

    Energy Technology Data Exchange (ETDEWEB)

    Kanna, S.; Kumaraswamidhs, L. A. [Indian Institute of Technology, Dhanbad (India)]; Kumaran, S. Senthil [RVS School of Engineering and Technology, Dindigul (India)]

    2016-05-15

    The aim of the present work is to optimize friction welding of tube to tube plate using an external tool (FWTPET), with a clearance fit, for a commercial aluminium tube and an Al 2025 tube plate with a backing block. Conventional friction welding is suitable for welding only symmetrical joints, either tube to tube or rod to rod, but in this research, with the help of an external tool, welding of the unsymmetrical tube-to-tube-plate configuration has also been achieved. In this investigation, the welding parameters tool rotating speed (rpm), projection of tube (mm) and depth of cut (mm) are determined according to a Taguchi L9 orthogonal array. Two conditions were considered in this experiment: condition 1 is a flat plate with a plain tube without holes [WOH] on the circumference of its surface, and condition 2 is a flat plate with a plain tube with holes [WH] on the circumference of its surface. The Taguchi L9 orthogonal array was utilized to find the most significant control factors which will yield better joint strength. In addition, the most influential process parameter has been determined using statistical analysis of variance (ANOVA). Finally, the results for the two conditions have been compared by means of percentage of contribution and regression analysis. The general regression equation is formulated, better strength is obtained, and it is validated by means of a confirmation test. The optimal welded joint strengths for the tube without holes and the tube with holes were observed to be 319.485 MPa and 264.825 MPa, respectively.

  10. Stochastic linear hybrid systems: Modeling, estimation, and application

    Science.gov (United States)

    Seah, Chze Eng

    Hybrid systems are dynamical systems with interacting continuous states and a discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are illustrated to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent on its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS

  11. Temperature modulated differential scanning calorimetry. Modelling and applications

    International Nuclear Information System (INIS)

    Jiang, Z.

    2000-01-01

    The research focused on the TMDSC technique with respect to both theoretical problems and applications. Modelling has been performed to address the effects of heat transfer on the quantitative measurements of TMDSC experiments. A procedure has been suggested to correct the effect on the phase angle obtained by dynamic TMDSC. The effects under quasi-isothermal conditions have been investigated using improved models in terms of various heat transfer interface qualities, sample properties and sensor properties. The contributions of the sensor's properties to the heat transfer are, for the first time, separated from the overall effects. All the modelling results are compared with the corresponding experimental data and are in good agreement. Ripples and fluctuations on the experimental signals during some transitions have been simulated using a simple model and shown to be artefacts of the Fourier transformation process. Applications of TMDSC to both research and commercial samples are reported, varying either the experimental conditions or the thermal history of the sample, for studies of the glass transition, cold crystallisation, the melting transition, the clearing transition of a liquid crystal polymer, and the vitrification of an epoxy resin under quasi-isothermal conditions. The results show that the interpretation of some quantities obtained by TMDSC for some physical transitions still needs to be clarified by further work. The applications also show the ability of TMDSC to combine the sensitivity of a measurement at a high instantaneous heating rate with the resolution obtained by measuring at a low underlying heating rate, and to measure the heat capacity of the sample and its variation under quasi-isothermal conditions. The frequency-dependent complex heat capacity during the glass transition provides a window to measure the apparent activation energy of the transition, which is different from the window used by conventional

  12. Technology, applications and modelling of ohmic heating: a review.

    Science.gov (United States)

    Varghese, K Shiby; Pandey, M C; Radhakrishna, K; Bawa, A S

    2014-10-01

    Ohmic heating or Joule heating has immense potential for achieving rapid and uniform heating in foods, providing microbiologically safe and high quality foods. This review discusses the technology behind ohmic heating, the current applications and thermal modeling of the process. The success of ohmic heating depends on the rate of heat generation in the system, the electrical conductivity of the food, electrical field strength, residence time and the method by which the food flows through the system. Ohmic heating is appropriate for processing of particulate and protein rich foods. A vast amount of work is still necessary to understand food properties in order to refine system design and maximize performance of this technology in the field of packaged foods and space food product development. Various economic studies will also play an important role in understanding the overall cost and viability of commercial application of this technology in food processing. Some of the demerits of the technology are also discussed.
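    The dependence of ohmic heating on electrical conductivity and field strength noted above can be made concrete with a minimal adiabatic energy balance: volumetric heat generation is Q = sigma * E^2, so the temperature rises at a rate Q / (rho * cp). All property values below are illustrative assumptions, not measured food data:

```python
# Minimal adiabatic sketch of ohmic (Joule) heating.
sigma = 1.5      # electrical conductivity of the food, S/m (assumed)
E = 1000.0       # electric field strength, V/m (assumed)
rho = 1050.0     # density, kg/m^3 (assumed)
cp = 3900.0      # specific heat, J/(kg K) (assumed)

Q = sigma * E**2                 # volumetric heat generation, W/m^3
rate = Q / (rho * cp)            # adiabatic heating rate, K/s
t_needed = (90.0 - 20.0) / rate  # seconds to heat from 20 C to 90 C
print(f"heating rate = {rate:.3f} K/s, time 20->90 C = {t_needed:.0f} s")
```

    In practice the conductivity of most foods increases with temperature, so the heating rate is not constant; a real model would update sigma(T) at each step.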

  13. Rectangular amplitudes, conformal blocks, and applications to loop models

    Energy Technology Data Exchange (ETDEWEB)

    Bondesan, Roberto, E-mail: roberto.bondesan@cea.fr [LPTENS, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Institute de Physique Theorique, CEA Saclay, F-91191 Gif-sur-Yvette (France); Jacobsen, Jesper L. [LPTENS, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Universite Pierre et Marie Curie, 4 place Jussieu, 75252 Paris (France); Saleur, Hubert [Institute de Physique Theorique, CEA Saclay, F-91191 Gif-sur-Yvette (France); Physics Department, USC, Los Angeles, CA 90089-0484 (United States)

    2013-02-21

    In this paper we continue the investigation of partition functions of critical systems on a rectangle initiated in [R. Bondesan, et al., Nucl. Phys. B 862 (2012) 553-575]. Here we develop a general formalism of rectangle boundary states using conformal field theory, adapted to describe geometries supporting different boundary conditions. We discuss the computation of rectangular amplitudes and their modular properties, presenting explicit results for the case of free theories. In the second part of the paper we focus on applications to loop models, discussing in detail lattice discretizations using both numerical and analytical calculations. These results allow a geometrical interpretation of conformal blocks, and as an application we derive new probability formulas for self-avoiding walks.

  14. Twin support vector machines models, extensions and applications

    CERN Document Server

    Jayadeva; Chandra, Suresh

    2017-01-01

    This book provides a systematic and focused study of the various aspects of twin support vector machines (TWSVM) and related developments for classification and regression. In addition to presenting most of the basic models of TWSVM and twin support vector regression (TWSVR) available in the literature, it also discusses the important and challenging applications of this new machine learning methodology. A chapter on “Additional Topics” has been included to discuss kernel optimization and support tensor machine topics, which are comparatively new but have great potential in applications. It is primarily written for graduate students and researchers in the area of machine learning and related topics in computer science, mathematics, electrical engineering, management science and finance.

  15. Application of product modelling - seen from a work preparation viewpoint

    DEFF Research Database (Denmark)

    Hvam, Lars

    Manufacturing companies spend an increasing amount of their total work resources in the manufacturing planning system on activities such as specifying products and methods, scheduling, procurement etc. By this the potential for obtaining increased productivity moves from the direct costs...... and methods, as only a minor part of the engineering work in these functions in the planning system has until now been supported with IT. The aim is to develop methods for analysing which activities to support with IT and, in relation to this, to define the context and structure of the IT-systems to support them..., over building a model, and on to the final programming of an application. It has been stressed to carry out all the phases of the outlined procedure in the empirical work, one of the reasons being to prove that it is possible, with a reasonable consumption of resources, to build an application......

  16. A Global Modeling Framework for Plasma Kinetics: Development and Applications

    Science.gov (United States)

    Parsey, Guy Morland

    The modern study of plasmas, and applications thereof, has developed synchronously with computer capabilities since the mid-1950s. Complexities inherent to these charged-particle, many-body systems have resulted in the development of multiple simulation methods (particle-in-cell, fluid, global modeling, etc.) in order to both explain observed phenomena and predict outcomes of plasma applications. Recognizing that different algorithms are chosen to best address specific topics of interest, this thesis centers around the development of an open-source global model framework for the focused study of non-equilibrium plasma kinetics. After verification and validation of the framework, it was used to study two physical phenomena: plasma-assisted combustion and the recently proposed optically-pumped rare gas metastable laser. Global models permeate chemistry and plasma science, relying on spatial averaging to focus attention on the dynamics of reaction networks. Defined by a set of species continuity and energy conservation equations, the required data and constructed systems are conceptually similar across most applications, providing a light platform for exploratory and result-search parameter scanning. Unfortunately, it is common practice for custom code to be developed for each application, an enormous duplication of effort which negatively affects the quality of the software produced. Presented herein, the Python-based Kinetic Global Modeling framework (KGMf) was designed to support all modeling phases: collection and analysis of reaction data, construction of an exportable system of model ODEs, and a platform for interactive evaluation and post-processing analysis. A symbolic ODE system is constructed for interactive manipulation and generation of a Jacobian, both of which are compiled as operation-optimized C-code. Plasma-assisted combustion and ignition (PAC/PAI) embody the modernization of burning fuel by opening up new avenues of control and optimization
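    The symbolic-ODE-plus-Jacobian workflow described above can be sketched with SymPy. The two-species reaction network below is an assumed toy example (not taken from the KGMf itself), and the KGMf's compilation to optimized C is replaced here by `lambdify`:

```python
import sympy as sp

# Toy reaction network (assumed): A -> B with rate k1*A,
# and B + B -> A with rate k2*B**2.
A, B, k1, k2 = sp.symbols("A B k1 k2", positive=True)
rhs = sp.Matrix([-k1*A + k2*B**2,
                  k1*A - 2*k2*B**2])

# Symbolic Jacobian of the ODE right-hand side, as a global-model
# framework would generate before compiling to fast code.
J = rhs.jacobian([A, B])
print(J)

# Compile the symbolic system to a numerical function.
f = sp.lambdify((A, B, k1, k2), rhs, "numpy")
print(f(1.0, 0.5, 2.0, 0.1))
```

    An implicit stiff ODE solver can then consume both the compiled right-hand side and the compiled Jacobian, which is the main payoff of keeping the system symbolic.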

  17. Modeling Phosphorous Losses from Seasonal Manure Application Schemes

    Science.gov (United States)

    Menzies, E.; Walter, M. T.

    2015-12-01

    Excess nutrient loading, especially of nitrogen and phosphorus, to surface waters is a common and significant problem throughout the United States. While pollution remediation efforts are continuously improving, the most effective treatment remains limiting the source. Appropriate timing of fertilizer application to reduce nutrient losses is currently a hotly debated topic in the Northeastern United States; winter spreading of manure is under special scrutiny. We plan to evaluate the loss of phosphorus to surface waters from agricultural systems under varying seasonal fertilization schemes, in an effort to determine the impacts of fertilizers applied throughout the year. The Cayuga Lake basin, located in the Finger Lakes region of New York State, is a watershed dominated by agriculture where a wide array of land management strategies can be found. The evaluation will be conducted on the Fall Creek Watershed, a large sub-basin of the Cayuga Lake Watershed. The Fall Creek Watershed covers approximately 33,000 ha in central New York State, approximately 50% of which is used for agriculture. We plan to use the Soil and Water Assessment Tool (SWAT) to model a number of seasonal fertilization regimes, such as summer-only spreading and year-round spreading (including winter applications), among others. We will use the model to quantify the phosphorus load to surface waters from these different fertilization schemes and determine the impacts of manure applied at different times throughout the year. More detailed knowledge about how seasonal fertilization schemes impact phosphorus losses will provide stakeholders with more information concerning the impacts of agriculture on surface water quality. Our results will help farmers and extensionists make more informed decisions about appropriate timing of manure application to reduce phosphorus losses and surface water degradation, as well as aid lawmakers in improving policy surrounding manure application.

  18. Bilayer Graphene Application on NO2 Sensor Modelling

    Directory of Open Access Journals (Sweden)

    Elnaz Akbari

    2014-01-01

    Full Text Available Graphene is a carbon allotrope: a single-atom-thick layer of sp2-hybridized carbon with a two-dimensional (2D) honeycomb structure. As an outstanding material exhibiting unique mechanical, electrical, and chemical characteristics including high strength, high conductivity, and high surface area, graphene has earned a remarkable position in today’s experimental and theoretical studies as well as industrial applications. One such application incorporates the idea of using graphene to achieve accuracy and higher speed in detection devices utilized in cases where gas sensing is required. Although there are plenty of experimental studies in this field, the lack of analytical models is felt deeply. As a starting point for modelling, a field-effect transistor (FET) based structure has been chosen as the platform, and the effect of NO2 injection on the bilayer graphene density of states is discussed. The chemical reaction between graphene and the gas creates new carriers in the graphene, which changes the carrier density and eventually the carrier velocity. In the presence of NO2 gas, electrons are donated to the FET channel, which is employed as the sensing mechanism. In order to evaluate the accuracy of the proposed models, the results obtained are compared with existing experimental data and acceptable agreement is reported.

  19. Supply chain management models, applications, and research directions

    CERN Document Server

    Pardalos, Panos; Romeijn, H

    2005-01-01

    This work brings together some of the most up-to-date research in the application of operations research and mathematical modeling techniques to problems arising in supply chain management and e-Commerce. While research in the broad area of supply chain management encompasses a wide range of topics and methodologies, we believe this book provides a good snapshot of current quantitative modeling approaches, issues, and trends within the field. Each chapter is a self-contained study of a timely and relevant research problem in supply chain management. The individual works place a heavy emphasis on the application of modeling techniques to real world management problems. In many instances, the actual results from applying these techniques in practice are highlighted. In addition, each chapter provides important managerial insights that apply to general supply chain management practice. The book is divided into three parts. The first part contains chapters that address the new and rapidly growing role of the inte...

  20. PNN-based Rockburst Prediction Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-07-01

    Full Text Available Rock burst is one of the main engineering geological problems, significantly threatening the safety of construction. Prediction of rock burst is always an important issue concerning the safety of workers and equipment in tunnels. In this paper, a novel PNN-based rock burst prediction model is proposed to determine whether rock burst will happen in underground rock projects and what its intensity will be. The probabilistic neural network (PNN) is developed based on Bayesian criteria of multivariate pattern classification. Because PNN has the advantages of low training complexity, high stability, quick convergence, and simple construction, it can be applied well to the prediction of rock burst. Some main control factors, such as the rock's maximum tangential stress, its uniaxial compressive strength, its uniaxial tensile strength, and its elastic energy index, are chosen as the characteristic vector of the PNN. The PNN model is obtained by training on data sets of rock burst samples from underground rock projects at home and abroad. Other samples are tested with the model, and the testing results agree with the practical records. At the same time, two real-world applications are used to verify the proposed method. The predictions match the results of existing methods and what actually happened on site, which verifies the effectiveness and applicability of the proposed work.
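    The Bayes-decision core of a PNN can be sketched in a few lines: each class density is estimated with a Gaussian (Parzen) kernel over that class's training samples, and the class with the largest average kernel activation wins. The two-dimensional feature vectors below are illustrative stand-ins for the control factors named above, not field data:

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.3):
    """Probabilistic neural network: Gaussian-kernel (Parzen) density
    estimate per class, Bayes decision on the class with the largest
    average kernel activation."""
    scores = {}
    for cls in np.unique(y_train):
        P = X_train[y_train == cls]
        d2 = ((P - x) ** 2).sum(axis=1)          # squared distances
        scores[cls] = np.exp(-d2 / (2 * sigma**2)).mean()
    return max(scores, key=scores.get)

# Hypothetical 2-D samples (e.g. normalized stress ratio and energy
# index) for "no burst" (0) and "burst" (1) classes.
X = np.array([[0.2, 0.1], [0.3, 0.2], [0.25, 0.15],
              [0.8, 0.9], [0.7, 0.8], [0.9, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])

print(pnn_predict(X, y, np.array([0.28, 0.18])))  # near the class-0 cluster
print(pnn_predict(X, y, np.array([0.75, 0.82])))  # near the class-1 cluster
```

    The smoothing parameter sigma plays the role of the PNN's spread constant; in practice it is tuned by cross-validation.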

  1. Application of Interval Predictor Models to Space Radiation Shielding

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

    2016-01-01

    This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point, are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
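    The "minimal spread while containing all the data" optimization has a compact linear-programming form. The sketch below uses affine bounds rather than the paper's radial basis functions, and synthetic data, so it illustrates the idea rather than reproducing the authors' method:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, 30)   # synthetic input-output data

# Affine interval model [l(x), u(x)], l(x)=a_l+b_l*x, u(x)=a_u+b_u*x.
# Decision vector z = [a_l, b_l, a_u, b_u]; minimize the mean spread
# subject to l(x_i) <= y_i <= u(x_i) for every observation.
Phi = np.column_stack([np.ones_like(x), x])
c = np.concatenate([-Phi.mean(axis=0), Phi.mean(axis=0)])  # mean(u - l)

A_ub = np.block([[Phi, np.zeros_like(Phi)],      #  l(x_i) <= y_i
                 [np.zeros_like(Phi), -Phi]])    # -u(x_i) <= -y_i
b_ub = np.concatenate([y, -y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
theta_l, theta_u = res.x[:2], res.x[2:]
lower, upper = Phi @ theta_l, Phi @ theta_u
print(f"mean spread = {res.fun:.3f}")
```

    The constraints for regulating oscillation and centering the spread described in the paper would enter this program as additional linear rows.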

  2. Multivariate Birnbaum-Saunders Distributions: Modelling and Applications

    Directory of Open Access Journals (Sweden)

    Robert G. Aykroyd

    2018-03-01

    Full Text Available Since its origins and numerous applications in material science, the Birnbaum–Saunders family of distributions has now found widespread uses in some areas of the applied sciences such as agriculture, environment and medicine, as well as in quality control, among others. It is able to model varied data behaviour and hence provides a flexible alternative to the most usual distributions. The family includes Birnbaum–Saunders and log-Birnbaum–Saunders distributions in univariate and multivariate versions. There are now well-developed methods for estimation and diagnostics that allow in-depth analyses. This paper gives a detailed review of existing methods and of relevant literature, introducing properties and theoretical results in a systematic way. To emphasise the range of suitable applications, full analyses are included of examples based on regression and diagnostics in material science, spatial data modelling in agricultural engineering and control charts for environmental monitoring. However, potential future uses in new areas such as business, economics, finance and insurance are also discussed. This work is presented to provide a full tool-kit of novel statistical models and methods to encourage other researchers to implement them in these new areas. It is expected that the methods will have the same positive impact in the new areas as they have had elsewhere.
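    For readers wanting to experiment with the univariate member of this family, the Birnbaum-Saunders distribution is available in SciPy under the name `fatiguelife` (shape c corresponds to alpha, scale to beta). The parameter values below are arbitrary illustrations:

```python
import numpy as np
from scipy import stats

# Birnbaum-Saunders(alpha, beta) == scipy.stats.fatiguelife(c=alpha, scale=beta).
alpha, beta = 0.5, 2.0
bs = stats.fatiguelife(alpha, scale=beta)

rng = np.random.default_rng(42)
sample = bs.rvs(size=50_000, random_state=rng)

# Theoretical mean of BS(alpha, beta) is beta * (1 + alpha**2 / 2).
print(sample.mean(), beta * (1 + alpha**2 / 2))

# Maximum-likelihood refit (location fixed at 0) recovers the parameters.
c_hat, loc_hat, scale_hat = stats.fatiguelife.fit(sample, floc=0)
print(c_hat, scale_hat)
```

    The same normal-representation ideas underlie the multivariate and log-Birnbaum-Saunders versions reviewed in the paper, which are not shipped with SciPy.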

  3. Materials modelling - a possible design tool for advanced nuclear applications

    International Nuclear Information System (INIS)

    Hoffelner, W.; Samaras, M.; Bako, B.; Iglesias, R.

    2008-01-01

    The design of components for power plants is usually based on codes, standards and design rules or code cases. However, it is very difficult to get the necessary experimental data to prove these lifetime assessment procedures for long-term applications in environments where complex damage interactions (temperature, stress, environment, irradiation) can occur. The rules used are often very simple and do not have a basis which takes physical damage into consideration. The linear life fraction rule for creep and fatigue interaction can be taken as a prominent example. Materials modelling based on a multi-scale approach in principle provides a tool to convert microstructural findings into mechanical response and therefore has the capability of providing a set of tools for the improvement of design life assessments. The strength of current multi-scale modelling efforts is the insight they offer as regards experimental phenomena. To obtain an understanding of these phenomena it is important to focus on issues which are important at the various time and length scales of the modelling code. In this presentation the multi-scale path will be demonstrated with a few recent examples which focus on VHTR applications. (authors)
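    The linear life-fraction rule cited above as a prominent example has a one-line arithmetic form: creep time fractions and fatigue cycle fractions are summed, and failure is predicted when the total reaches one. The operating history below is invented for illustration, not design data:

```python
def life_fraction_damage(creep_hours, creep_rupture_life,
                         fatigue_cycles, fatigue_life):
    """Linear life-fraction (time/cycle fraction) rule: total damage is
    the sum of creep time fractions t_i/t_ri and fatigue cycle
    fractions n_j/N_fj; failure is predicted when the sum reaches 1."""
    d_creep = sum(t / tr for t, tr in zip(creep_hours, creep_rupture_life))
    d_fatigue = sum(n / nf for n, nf in zip(fatigue_cycles, fatigue_life))
    return d_creep + d_fatigue

# Two creep regimes and one fatigue regime (assumed numbers).
D = life_fraction_damage(creep_hours=[10_000, 5_000],
                         creep_rupture_life=[100_000, 20_000],
                         fatigue_cycles=[2_000],
                         fatigue_life=[10_000])
print(f"accumulated damage D = {D:.2f}")  # D < 1: life not exhausted
```

    The simplicity of this rule, i.e. ignoring creep-fatigue interaction entirely, is exactly the weakness the multi-scale modelling discussed above aims to address.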

  4. Building energy modeling for green architecture and intelligent dashboard applications

    Science.gov (United States)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied a passive architectural technique, the roof solar chimney, for reducing the cooling load in homes. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytical model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it reliably for building simulation. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high- and low-efficiency constructions and three ventilation strategies. The results showed that in four US climates, the roof solar chimney yields significant cooling load energy savings of up to 90%. After developing this new method for the small-scale representation of a passive architecture technique in BEM, the study expanded its scope to address a fundamental issue in modeling: the representation of uncertainty in, and improvement of, occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy-efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  5. Model-generated air quality statistics for application in vegetation response models in Alberta

    International Nuclear Information System (INIS)

    McVehil, G.E.; Nosal, M.

    1990-01-01

    To test and apply vegetation response models in Alberta, air pollution statistics representative of various parts of the Province are required. At this time, air quality monitoring data of the requisite accuracy and time resolution are not available for most parts of Alberta. Therefore, there exists a need to develop appropriate air quality statistics. The objectives of the work reported here were to determine the applicability of model-generated air quality statistics and to develop, by modelling, realistic and representative time series of hourly SO2 concentrations that could be used to generate the statistics demanded by vegetation response models.

  6. Modular coupling of transport and chemistry: theory and model applications

    International Nuclear Information System (INIS)

    Pfingsten, W.

    1994-06-01

    For the description of complex processes in the near-field of a radioactive waste repository, the coupling of transport and chemistry is necessary. One reason for the relatively minor use of coupled codes in this area is the large amount of computer time and storage capacity required by conventional codes, together with a lack of available data. The simple application of the sequentially coupled code MCOTAC, which couples one-dimensional advective, dispersive and diffusive transport with chemical equilibrium complexation and precipitation/dissolution reactions in a porous medium, shows some promising features with respect to applicability to relevant problems. Transport, described by a random walk of multi-species particles, and chemical equilibrium calculations are solved separately, coupled only by an exchange term to ensure mass conservation. The modular-structured code was applied to three problems: a) incongruent dissolution of hydrated silicate gels, b) dissolution of portlandite and c) calcite dissolution with hypothetical dolomite precipitation. This allows for a comparison with other codes and their applications. The incongruent dissolution of cement phases, important for the degradation of cementitious materials in a repository, can be included in the model without the problems which occur with a directly coupled code. The handling of a sharp multi-mineral front system showed a much faster calculation time compared to a directly coupled code application. Altogether, the results are in good agreement with other code calculations. Hence, the chosen modular concept makes MCOTAC open to easy extension with additional processes like sorption, kinetically controlled processes and transport in two or three spatial dimensions, and to adaptation to new developments in computing (hardware and software), an important factor for applicability. (author) figs., tabs., refs
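    The random-walk transport step described above can be illustrated in one dimension: each particle takes an advective step v*dt plus a Gaussian dispersive step with standard deviation sqrt(2*D*dt). All parameter values below are assumed for illustration, and the chemistry step that MCOTAC interleaves between transport steps is omitted:

```python
import numpy as np

# One-dimensional advective-dispersive random walk of particles,
# in the spirit of a sequentially coupled transport step.
rng = np.random.default_rng(1)
n, v, D, dt, steps = 20_000, 1e-5, 1e-8, 3600.0, 24   # SI units (assumed)

x = np.zeros(n)                       # all particles start at the inlet
for _ in range(steps):
    x += v * dt + rng.normal(0.0, np.sqrt(2 * D * dt), size=n)

# After t = steps*dt, theory gives mean v*t and variance 2*D*t.
t = steps * dt
print(x.mean(), v * t)                # both ~0.864 m
print(x.var(), 2 * D * t)             # both ~1.73e-3 m^2
```

    Binning the particle positions into cells yields concentrations, on which an equilibrium chemistry solver would then act before the next transport step.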

  7. On the applicability of models for outdoor sound

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as Parabolic Equation methods, FFP methods and Boundary Element Methods all have advantages in certain situations. These approaches are accurate and predict not only sound pressure levels but also phase information. Such methods are, however, not always able to predict the sound field for more complicated scenarios involving terrain features, atmospheric wind and temperature gradients and turbulence. Another class of methods is based upon approximate theory......

  8. On the applicability of models for outdoor sound (A)

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as parabolic equation methods, FFP methods, and boundary element methods all have advantages in certain situations. These approaches are accurate and predict not only sound pressure levels but also phase information. Such methods are, however, not always able to predict the sound field for more complicated scenarios involving terrain features, atmospheric wind and temperature gradients, and turbulence. Another class of methods is based upon approximate theory......

  9. Application of a mathematical model for ergonomics in lean manufacturing.

    Science.gov (United States)

    Botti, Lucia; Mora, Cristina; Regattieri, Alberto

    2017-10-01

    The data presented in this article are related to the research article "Integrating ergonomics and lean manufacturing principles in a hybrid assembly line" (Botti et al., 2017) [1]. The results refer to the application of the mathematical model for the design of lean processes in hybrid assembly lines, meeting both the lean principles and the ergonomic requirements for safe assembly work. Data show that the success of a lean strategy is possible when ergonomics of workers is a parameter of the assembly process design.

  10. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

    The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples...... with the posterior score function. From an application point of view this methodology is easy to apply, since the optimal estimating function G(; Xt1, ..., Xtn) is equal to the classical optimal estimating function plus a correction term which takes into account the prior information. The methodology is particularly...

  11. Modeling & imaging of bioelectrical activity principles and applications

    CERN Document Server

    He, Bin

    2010-01-01

    Over the past several decades, much progress has been made in understanding the mechanisms of electrical activity in biological tissues and systems, and for developing non-invasive functional imaging technologies to aid clinical diagnosis of dysfunction in the human body. The book will provide full basic coverage of the fundamentals of modeling of electrical activity in various human organs, such as heart and brain. It will include details of bioelectromagnetic measurements and source imaging technologies, as well as biomedical applications. The book will review the latest trends in

  12. Study about Markowitz Model Applicability on Romanian Stock Exchange Market

    Directory of Open Access Journals (Sweden)

    Leonardo Badea

    2006-11-01

    Full Text Available This paper applies portfolio analysis to a portfolio made up of eight securities, determining the portfolio with minimum absolute variance and the efficient frontier. Using the Markowitz model, the weights of the minimum-variance portfolio could be established, but its return was smaller than the return offered by the BET index. The best strategy in this respect would have been a passive one: replicating the structure of the index in the portfolio, obtaining a good return but with a higher risk.
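    The minimum-variance portfolio used in this analysis has a closed form when shorting is allowed: w* = Sigma^-1 1 / (1' Sigma^-1 1). The three-asset covariance matrix below is invented for illustration and is unrelated to the eight securities studied in the paper:

```python
import numpy as np

# Minimum-variance portfolio (fully invested, shorting allowed):
# w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1). Illustrative covariance matrix.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
ones = np.ones(3)

w = np.linalg.solve(Sigma, ones)   # unnormalized weights Sigma^{-1} 1
w /= w.sum()                       # enforce full investment, sum(w) = 1

var_p = w @ Sigma @ w
print("weights:", np.round(w, 3))
print("portfolio variance:", var_p)
```

    By construction the resulting variance can never exceed that of the least risky single asset; tracing the efficient frontier additionally requires a target-return constraint.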

  14. Cognitive interference modeling with applications in power and admission control

    KAUST Repository

    Mahmood, Nurul Huda

    2012-10-01

    One of the key design challenges in a cognitive radio network is controlling the interference generated at coexisting primary receivers. In order to design efficient cognitive radio systems and to minimize their unwanted consequences, it is necessary to effectively control the secondary interference at the primary receivers. In this paper, a generalized framework is presented for the interference analysis of a cognitive radio network in which the secondary transmitters may transmit with different powers and transmission probabilities, and various applications of this interference model are demonstrated. The findings of the analytical performance analyses are confirmed through selected Monte-Carlo computer simulations. © 2012 IEEE.
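
    The kind of aggregate-interference statistic analyzed here can be estimated with a short Monte-Carlo sketch. The transmit powers, activity probabilities, mean channel gains and the Rayleigh-fading (exponential gain) assumption below are illustrative choices, not the paper's model:

```python
import random

def interference_samples(powers, probs, gains, n=20000, seed=0):
    """Monte-Carlo samples of the aggregate secondary interference.

    Secondary transmitter i is active with probability probs[i], transmits
    with power powers[i], and sees an exponentially distributed
    (Rayleigh-fading) channel gain with mean gains[i] to the primary receiver.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        total = 0.0
        for p, q, g in zip(powers, probs, gains):
            if rng.random() < q:
                total += p * rng.expovariate(1.0 / g)
        out.append(total)
    return out

samples = interference_samples(powers=[1.0, 2.0], probs=[0.5, 0.3],
                               gains=[0.1, 0.05])
mean_i = sum(samples) / len(samples)
outage = sum(s > 0.3 for s in samples) / len(samples)  # P(interference > threshold)
```

    The sample mean should agree with the analytic mean Σ qᵢ Pᵢ gᵢ, and the empirical exceedance probability is the quantity a power/admission-control rule would constrain.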

  15. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  16. Nonequilibrium thermodynamic models and applications to hydrogen plasma

    International Nuclear Information System (INIS)

    Cho, K.Y.

    1988-01-01

    A generalized multithermal equilibrium (GMTE) thermodynamic model is developed and presented with applications to hydrogen. A new chemical equilibrium equation for GMTE is obtained without the ensemble temperature concept used by a previous MTE model. The effects of the GMTE model on the derivation and calculation of the thermodynamic, transport, and radiative properties are presented, and significant differences from local thermal equilibrium (LTE) and two-temperature models are discussed. When the electron translational temperature (Te) is higher than the translational temperature of the heavy particles, the effects of hydrogen molecular species on the properties are significant at high Te compared with LTE results. The densities of minor species vary by orders of magnitude under kinetic nonequilibrium at a constant electron temperature. A collisional-radiative model is also developed with the GMTE chemical equilibrium equation to study the effects of radiative transfer and ambipolar diffusion on the population distribution of the excited atoms. The nonlocal radiative transfer effect is parameterized by an absorption factor, defined as the ratio of the absorbed intensity to the spontaneous emission coefficient.

  17. Jet Noise Modeling for Supersonic Business Jet Application

    Science.gov (United States)

    Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.

    2004-01-01

    This document describes the development of an improved predictive model for coannular jet noise, including noise suppression modifications applicable to small supersonic-cruise aircraft such as the Supersonic Business Jet (SBJ), for NASA Langley Research Center (LaRC). For such aircraft a wide range of propulsion and integration options are under consideration. Thus there is a need for very versatile design tools, including a noise prediction model. The approach used is similar to that used with great success by the Modern Technologies Corporation (MTC) in developing a noise prediction model for two-dimensional mixer ejector (2DME) nozzles under the High Speed Research Program and in developing a more recent model for coannular nozzles over a wide range of conditions. If highly suppressed configurations are ultimately required, the 2DME model is expected to provide reasonable prediction for these smaller scales, although this has not been demonstrated. It is considered likely that more modest suppression approaches, such as dual stream nozzles featuring chevron or chute suppressors, perhaps in conjunction with inverted velocity profiles (IVP), will be sufficient for the SBJ.

  18. Some hybrid models applicable to dose-response relationships

    International Nuclear Information System (INIS)

    Kumazawa, Shigeru

    1992-01-01

    A new type of model of dose-response relationships has been studied as an initial step toward a reliable extrapolation of relationships determined from high-dose data to the low-dose range relevant to radiation protection. The approach uses a 'hybrid scale' of linear and logarithmic scales; the first model is that the normalized surviving fraction (ρS > 0) on a hybrid scale decreases linearly with dose on a linear scale, and the second is that the induction on a log scale increases linearly with the normalized dose (τD > 0) on a hybrid scale. The hybrid scale may reflect an overall effectiveness of a complex system against adverse events caused by various agents. Some data on leukemia in the atomic bomb survivors and from rodent experiments were used to show the applicability of hybrid-scale models. The results proved that the proposed models fit these data no less well than the popular linear-quadratic models, providing a possible interpretation of the shapes of dose-response curves, e.g. shouldered survival curves varied by recovery time. (author)
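
    The 'hybrid scale' idea can be illustrated with a toy implementation. Here I assume the simple form u(x) = x + ln x, which behaves logarithmically for small x and linearly for large x; the paper's exact definition may differ. The first model then makes the surviving fraction linear in dose on this scale:

```python
import math

def hyb(x):
    """Assumed hybrid of linear and log scales: u = x + ln(x), x > 0."""
    return x + math.log(x)

def inv_hyb(u):
    """Invert u = x + ln(x) by bisection (hyb is strictly increasing)."""
    lo, hi = 1e-12, 1e6
    for _ in range(200):
        mid = (lo + hi) / 2
        if hyb(mid) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def surviving_fraction(dose, a=1.0, b=0.5):
    """Model 1: the surviving fraction is linear in dose on the hybrid scale,
    i.e. hyb(S) = a - b * dose; invert to recover S."""
    return inv_hyb(a - b * dose)
```

    With a = 1 the survival at zero dose is exactly 1 (since 1 + ln 1 = 1), and the resulting curve has the shallow low-dose shoulder the abstract mentions.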

  19. Development of dynamic Bayesian models for web application test management

    Science.gov (United States)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
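
    The core computation behind such dynamic Bayesian testing models is forward filtering over time slices. A minimal two-state sketch; the states, transition and emission probabilities are invented for illustration, and a real test-management network would have many more variables per slice:

```python
def forward_step(belief, transition, emission, obs):
    """One time slice of the forward algorithm on a dynamic Bayesian network:
    predict through the transition model, then condition on the observation."""
    n = len(belief)
    pred = [sum(belief[i] * transition[i][j] for i in range(n)) for j in range(n)]
    post = [pred[j] * emission[j][obs] for j in range(n)]
    z = sum(post)
    return [p / z for p in post]

# states: 0 = module clean, 1 = module buggy; observations: 0 = pass, 1 = fail
T = [[0.95, 0.05],   # a clean module rarely becomes buggy between builds
     [0.10, 0.90]]   # a buggy module usually stays buggy
E = [[0.90, 0.10],   # clean: tests mostly pass
     [0.30, 0.70]]   # buggy: tests mostly fail
belief = [0.8, 0.2]
for obs in [1, 1, 1]:          # three consecutive failing test runs
    belief = forward_step(belief, T, E, obs)
```

    After three failures the posterior mass shifts decisively toward the buggy state, which is the kind of analytical conclusion the abstract describes drawing from test results.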

  20. A modified elastic foundation contact model for application in 3D models of the prosthetic knee.

    Science.gov (United States)

    Pérez-González, Antonio; Fenollosa-Esteve, Carlos; Sancho-Bru, Joaquín L; Sánchez-Marín, Francisco T; Vergara, Margarita; Rodríguez-Cervantes, Pablo J

    2008-04-01

    Different models have been used in the literature for the simulation of surface contact in biomechanical knee models. However, there is a lack of systematic comparisons of these models applied to the simulation of a common case, which will provide relevant information about their accuracy and suitability for application in models of the implanted knee. In this work a comparison of the Hertz model (HM), the elastic foundation model (EFM) and the finite element model (FEM) for the simulation of the elastic contact in a 3D model of the prosthetic knee is presented. From the results of this comparison it is found that although the nature of the EFM offers advantages when compared with that of the HM for its application to realistic prosthetic surfaces, and when compared with the FEM in CPU time, its predictions can differ from FEM in some circumstances. These differences are considerable if the comparison is performed for prescribed displacements, although they are less important for prescribed loads. To solve these problems a new modified elastic foundation model (mEFM) is proposed that maintains basically the simplicity of the original model while producing much more accurate results. In this paper it is shown that this new mEFM calculates pressure distribution and contact area with accuracy and short computation times for toroidal contacting surfaces. Although further work is needed to confirm its validity for more complex geometries the mEFM is envisaged as a good option for application in 3D knee models to predict prosthetic knee performance.
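
    The elastic foundation model treats the contact layer as a bed of independent springs, p = k·δ, with no elastic coupling between neighbouring points, which is what makes it so much cheaper than FEM. A sketch for the simplest case of a sphere pressed into a plane (spherical rather than toroidal geometry, and invented stiffness numbers, so only the structure of the computation carries over):

```python
import math

def efm_pressure_map(R, d, k, n=400):
    """Elastic foundation ('bed of springs') contact of a sphere on a plane.

    R: sphere radius, d: approach depth, k: foundation modulus per unit area.
    Each grid point contributes p = k * penetration independently.
    Returns (total load, peak pressure, contact radius).
    """
    a = math.sqrt(2 * R * d)          # contact radius (parabolic approximation)
    h = 2 * a / n
    total, pmax = 0.0, 0.0
    for i in range(n):
        for j in range(n):
            x = -a + (i + 0.5) * h
            y = -a + (j + 0.5) * h
            pen = d - (x * x + y * y) / (2 * R)
            if pen > 0:
                p = k * pen
                total += p * h * h
                pmax = max(pmax, p)
    return total, pmax, a

F, pmax, a = efm_pressure_map(R=0.05, d=1e-4, k=1e12)
```

    For a sphere the EFM total load has the closed form F = πkRd², which the grid sum reproduces; for true toroidal implant surfaces the penetration function δ(x, y) changes but the per-point spring law stays the same.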

  1. Applications of CIVA NDE 10 on Eddy Current Modeling

    International Nuclear Information System (INIS)

    Nurul Ain Ahmad Latif; Ilham Mukhriz Zainal Abidin; Abdul Razak Hamzah

    2011-01-01

    CIVA NDE 10 is simulation software used as the platform to develop models dedicated to Eddy Current testing (ET). It supports various semi-analytical modeling approaches. The focus of this paper is to simulate the signal response from a 40 % external groove in Inconel 600 heat exchanger tubes with an outside diameter of 22.22 mm. The inspections were simulated using a 17 mm outside diameter differential probe at testing frequencies of 100 kHz and 500 kHz. All simulation results were validated against the experimental results integrated in the CIVA software. The configurations of the probe and the tube containing the flaw show good agreement between the experimental and the simulated data. (author)

  2. Solid modeling and applications rapid prototyping, CAD and CAE theory

    CERN Document Server

    Um, Dugan

    2016-01-01

    The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), the essentials of Rapid Prototyping, as well as practical skills needed to apply this understanding in real world design and manufacturing settings. The book includes three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concept of geometric modeling, Hermite and Bezier Spline curves theory, and 3-dimensional surface theories as well as rendering theory. The CAE section explores mesh generation theory, matrix notion for FEM, the stiffness method, and truss Equations. And in Rapid Prototyping, the author illustrates stereo lithographic theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...

  3. Applications of one-dimensional models in simplified inelastic analyses

    International Nuclear Information System (INIS)

    Kamal, S.A.; Chern, J.M.; Pai, D.H.

    1980-01-01

    This paper presents an approximate inelastic analysis based on geometric simplification with emphasis on its applicability, modeling, and the method of defining the loading conditions. Two problems are investigated: a one-dimensional axisymmetric model of generalized plane strain thick-walled cylinder is applied to the primary sodium inlet nozzle of the Clinch River Breeder Reactor Intermediate Heat Exchanger (CRBRP-IHX), and a finite cylindrical shell is used to simulate the branch shell forging (Y) junction. The results are then compared with the available detailed inelastic analyses under cyclic loading conditions in terms of creep and fatigue damages and inelastic ratchetting strains per the ASME Code Case N-47 requirements. In both problems, the one-dimensional simulation is able to trace the detailed stress-strain response. The quantitative comparison is good for the nozzle, but less satisfactory for the Y junction. Refinements are suggested to further improve the simulation
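
    The one-dimensional axisymmetric cylinder model used for the nozzle has a classical elastic counterpart in the Lamé thick-walled-cylinder equations, which make a compact sanity check for any such simplified model. The dimensions and pressure below are illustrative, not CRBRP-IHX values:

```python
def lame_stresses(a, b, p, r):
    """Radial and hoop stress in a thick-walled cylinder under internal pressure p.

    a, b: inner and outer radii; r: radius at which to evaluate (a <= r <= b).
    sigma_r = p a^2/(b^2-a^2) (1 - b^2/r^2), sigma_t = p a^2/(b^2-a^2) (1 + b^2/r^2).
    """
    k = p * a * a / (b * b - a * a)
    sigma_r = k * (1.0 - b * b / (r * r))   # radial stress (equals -p at the bore)
    sigma_t = k * (1.0 + b * b / (r * r))   # hoop (circumferential) stress
    return sigma_r, sigma_t

# illustrative numbers: 50 mm bore, 70 mm outer radius, 20 MPa internal pressure
sr_in, st_in = lame_stresses(a=0.050, b=0.070, p=20e6, r=0.050)
sr_out, st_out = lame_stresses(a=0.050, b=0.070, p=20e6, r=0.070)
```

    The boundary conditions σ_r(a) = -p and σ_r(b) = 0 fall out automatically, and the hoop stress peaks at the bore, which is where a simplified inelastic analysis would track ratchetting strain.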

  4. Application of Service Quality Model in Education Environment

    Directory of Open Access Journals (Sweden)

    Ting Ding Hooi

    2016-02-01

    Full Text Available Most of the ideas on service quality stem from the West. The importance of the massive research developments in the West is undeniable. This leads to the generation and development of new ideas, which were subsequently channeled to developing countries. The ideas obtained were then formulated and used by these developing countries in order to obtain a better approach to delivering service quality. There is much to be learnt from the SERVQUAL service quality model, which has attained high acceptance in the West. Service quality in the education system is important to guarantee the effectiveness and quality of education. Effective and quality education will produce quality graduates, who will contribute to the development of the nation. This paper discusses the application of the SERVQUAL model to the education environment.
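
    Operationally, SERVQUAL scores service quality as the gap between perceptions and expectations across five dimensions (tangibles, reliability, responsiveness, assurance, empathy). A sketch of the gap computation; the survey averages are invented:

```python
DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

def servqual_gaps(expectations, perceptions, weights=None):
    """Per-dimension gap = perception - expectation; a negative gap means the
    service falls short of what respondents expected."""
    gaps = {d: perceptions[d] - expectations[d] for d in DIMENSIONS}
    if weights is None:
        weights = {d: 1.0 / len(DIMENSIONS) for d in DIMENSIONS}
    overall = sum(weights[d] * gaps[d] for d in DIMENSIONS)
    return gaps, overall

# invented 7-point Likert survey averages for a teaching department
expect = {"tangibles": 5.8, "reliability": 6.4, "responsiveness": 6.1,
          "assurance": 6.0, "empathy": 5.9}
perceive = {"tangibles": 5.2, "reliability": 5.5, "responsiveness": 5.9,
            "assurance": 6.1, "empathy": 5.4}
gaps, overall = servqual_gaps(expect, perceive)
```

    Here only assurance shows a (small) positive gap; the negative overall score flags where an education provider would prioritize improvement.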

  5. Stochastic Modelling of Linear Programming Application to Brewing Operational Systems

    Directory of Open Access Journals (Sweden)

    Akanbi O.P.

    2014-07-01

    Full Text Available In systems where a large number of interrelated operations exist, a technically-based operational mechanism is required to realize their full potential. An intuitive solution, which is common practice in most breweries, may not uncover the optimal solution, as there is hardly any guarantee that it satisfies the best policy. High foreign exchange costs are involved in the procurement of imported raw materials, which increases the cost of production, while available locally-sourced raw materials are abandoned or poorly utilized. This study focuses on approaches which highlight the steps and mechanisms involved in optimizing the wort extract by the use of different types of adjuncts, and on formulating wort production models which are useful in proffering the expected solutions. Optimization techniques, the generalized models and an overview of typical brewing processes are considered.
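
    A wort-blending decision of the kind described reduces to a small linear program. With only two decision variables it can even be solved by enumerating the vertices of the feasible polygon; the extract yields, costs and capacities below are invented for illustration:

```python
def lp2_max(c, constraints):
    """Maximize c·(x, y) subject to a*x + b*y <= r for each (a, b, r), x, y >= 0,
    by enumerating all vertices of the feasible polygon (2-variable LP)."""
    cons = constraints + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]  # x >= 0, y >= 0
    best = None
    for i in range(len(cons)):
        for j in range(i + 1, len(cons)):
            a1, b1, r1 = cons[i]
            a2, b2, r2 = cons[j]
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-12:
                continue
            x = (r1 * b2 - r2 * b1) / det
            y = (a1 * r2 - a2 * r1) / det
            if all(a * x + b * y <= r + 1e-9 for a, b, r in cons):
                v = c[0] * x + c[1] * y
                if best is None or v > best[0]:
                    best = (v, x, y)
    return best

# x tonnes malted barley, y tonnes local adjunct (extract units per tonne invented)
extract = (0.80, 0.75)
plan = lp2_max(extract, [(1.0, 1.0, 10.0),        # brew-house capacity: 10 t
                         (300.0, 150.0, 2400.0)])  # raw-material budget
```

    The optimum here blends 6 t of malt with 4 t of the cheaper local adjunct, exactly the locally-sourced substitution the abstract argues for.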

  6. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language, Klaim-DB, for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.

  7. Economic model predictive control theory, formulations and chemical process applications

    CERN Document Server

    Ellis, Matthew; Christofides, Panagiotis D

    2017-01-01

    This book presents general methods for the design of economic model predictive control (EMPC) systems for broad classes of nonlinear systems that address key theoretical and practical considerations including recursive feasibility, closed-loop stability, closed-loop performance, and computational efficiency. Specifically, the book proposes: Lyapunov-based EMPC methods for nonlinear systems; two-tier EMPC architectures that are highly computationally efficient; and EMPC schemes handling explicitly uncertainty, time-varying cost functions, time-delays and multiple-time-scale dynamics. The proposed methods employ a variety of tools ranging from nonlinear systems analysis, through Lyapunov-based control techniques to nonlinear dynamic optimization. The applicability and performance of the proposed methods are demonstrated through a number of chemical process examples. The book presents state-of-the-art methods for the design of economic model predictive control systems for chemical processes. In addition to being...
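
    The defining feature of EMPC is that the receding-horizon optimization minimizes an economic stage cost directly rather than a distance to a set-point. A toy sketch for a scalar linear process with a state constraint; the dynamics, cost and bounds are invented, and a real design would add the Lyapunov-based stability constraints the book develops:

```python
import itertools

def empc_step(x, horizon=3, u_grid=None, x_max=4.0):
    """One receding-horizon EMPC move by brute-force search over input sequences.

    Dynamics: x+ = 0.9 x + u.  Economic stage cost: u^2 - x (input effort traded
    against the economic value of holding material in the process).
    Sequences that violate the state constraint x <= x_max are discarded.
    """
    if u_grid is None:
        u_grid = [i * 0.1 for i in range(11)]          # u in [0, 1]
    best_u, best_cost = None, None
    for seq in itertools.product(u_grid, repeat=horizon):
        xp, cost, ok = x, 0.0, True
        for u in seq:
            xp = 0.9 * xp + u
            cost += u * u - xp
            if xp > x_max:
                ok = False
                break
        if ok and (best_cost is None or cost < best_cost):
            best_cost, best_u = cost, seq[0]
    return best_u

# closed-loop simulation: apply only the first input, then re-optimize
x, traj = 0.0, []
for _ in range(20):
    u = empc_step(x)
    x = 0.9 * x + u
    traj.append(x)
```

    The closed loop drives the state up to, but never past, the constraint, illustrating recursive feasibility in the simplest possible setting.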

  8. Modelling climate change policies : an application of ENERGY2020

    International Nuclear Information System (INIS)

    Timilsina, G.; Bhargava, A.; Backus, G.

    2005-01-01

    Researchers and policy-makers are increasingly analyzing the economic impacts of the Kyoto Protocol at national, regional and global levels. The analyses are generally based on numerical models integrating energy, environment and the economy. Most models range from partial equilibrium types to complex multi-sector general equilibrium models, and typically represent the energy sector at an aggregate level, which limits their ability to reflect details of different sectors. In Canada, a model called ENERGY2020 has been widely used by the federal and provincial governments to analyze the sectoral and provincial impacts of implementing the Kyoto Protocol. ENERGY2020 uses stocks and flows simulation that captures the physical aspects of the processes utilizing energy, as well as the qualitative choice theory which captures human behavioural aspects. The model also has a database containing 20 years of time-series on all economic, environmental and energy variables, enabling the model to derive most parameters endogenously through econometric estimations. It has the capacity to analyze consumer and business responses over a wide range of policy initiatives such as energy environment taxes, regulatory standards for buildings, equipment and motor vehicles, grants, rebates and subsidy initiatives, consumer awareness initiatives, technology improvements, moratoriums and mandated cut-backs. It is also capable of producing long-term energy market forecasts as well as analyzing the impacts of policies in the markets. It was concluded that the model's application will serve as a useful analytical tool for a range of issues, and may be useful to developing countries and economies in transition. 6 refs., 5 figs

  9. Chemical kinetic modeling of H2 applications

    Energy Technology Data Exchange (ETDEWEB)

    Marinov, N.M.; Westbrook, C.K.; Cloutman, L.D. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1995-09-01

    Work being carried out at LLNL has concentrated on studies of the role of chemical kinetics in a variety of problems related to hydrogen combustion in practical combustion systems, with an emphasis on vehicle propulsion. Use of hydrogen offers significant advantages over fossil fuels, and computer modeling provides advantages when used in concert with experimental studies. Many numerical "experiments" can be carried out quickly and efficiently, reducing the cost and time of system development, and many new and speculative concepts can be screened to identify those with sufficient promise to pursue experimentally. This project uses chemical kinetic and fluid dynamic computational modeling to examine the combustion characteristics of systems burning hydrogen, either as the only fuel or mixed with natural gas. Oxidation kinetics are combined with pollutant formation kinetics, including formation of oxides of nitrogen but also including air toxics in natural gas combustion. We have refined many of the elementary kinetic reaction steps in the detailed reaction mechanism for hydrogen oxidation. To extend the model to pressures characteristic of internal combustion engines, it was necessary to apply theoretical pressure falloff formalisms for several key steps in the reaction mechanism. We have continued development of simplified reaction mechanisms for hydrogen oxidation, we have implemented those mechanisms into multidimensional computational fluid dynamics models, and we have used models of chemistry and fluid dynamics to address selected application problems. At the present time, we are using computed high pressure flame, and auto-ignition data to further refine the simplified kinetics models that are then to be used in multidimensional fluid mechanics models. Detailed kinetics studies have investigated hydrogen flames and ignition of hydrogen behind shock waves, intended to refine the detailed reactions mechanisms.

  10. Structure model of energy efficiency indicators and applications

    International Nuclear Information System (INIS)

    Wu, Li-Ming; Chen, Bai-Sheng; Bor, Yun-Chang; Wu, Yin-Chin

    2007-01-01

    For the purposes of energy conservation and environmental protection, the government of Taiwan has instigated long-term policies to continuously encourage and assist industry in improving the efficiency of energy utilization. While multiple actions have led to practical energy saving to a limited extent, no strong evidence of improvement in energy efficiency was observed from the energy efficiency indicators (EEI) system, according to the annual national energy statistics and survey. A structural analysis of EEI is needed in order to understand the role that energy efficiency plays in the EEI system. This work uses the Taylor series expansion to develop a structure model for EEI at the level of the process sector of industry. The model is developed on the premise that the design parameters of the process are used in comparison with the operational parameters for energy differences. The utilization index of production capability and the variation index of energy utilization are formulated in the model to describe the differences between EEIs. Both qualitative and quantitative methods for the analysis of energy efficiency and energy savings are derived from the model. Through structural analysis, the model showed that, while the performance of EEI is proportional to the process utilization index of production capability, it is possible that energy may develop in a direction opposite to that of EEI. This helps to explain, at least in part, the inconsistency between EEI and energy savings. An energy-intensive steel plant in Taiwan was selected to show the application of the model. The energy utilization efficiency of the plant was evaluated and the amount of energy that had been saved or over-used in the production process was estimated. Some insights gained from the model outcomes are helpful to further enhance energy efficiency in the plant
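
    The structure model's two indices can be captured with back-of-envelope arithmetic: comparing actual operation against design, a utilization index of production capability u = P_act/P_des and a variation index of energy utilization v = E_act/E_des decompose the change in the energy-intensity EEI. The functional form here is my simplification of the paper's Taylor-series development, with invented plant numbers:

```python
def eei(energy, production):
    """Energy efficiency indicator as energy intensity: energy per unit product."""
    return energy / production

def decompose(e_act, p_act, e_des, p_des):
    """Split the actual/design EEI ratio into the two structural indices."""
    u = p_act / p_des          # utilization index of production capability
    v = e_act / e_des          # variation index of energy utilization
    ratio = eei(e_act, p_act) / eei(e_des, p_des)   # algebraically equals v / u
    return u, v, ratio

# invented steel-plant numbers: energy in TJ, production in kt
u, v, ratio = decompose(e_act=10500.0, p_act=900.0, e_des=10000.0, p_des=1000.0)
```

    Here ratio = v/u ≈ 1.167: the plant uses about 16.7 % more energy per tonne than design, and the decomposition shows most of that comes from under-utilized capacity (u = 0.9) rather than from extra energy draw (v = 1.05); this kind of separation is what explains how EEI can worsen even while absolute energy savings are achieved.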

  11. Surrogate Model for Recirculation Phase LBLOCA and DET Application

    International Nuclear Information System (INIS)

    Fynan, Douglas A; Ahn, Kwang-Il; Lee, John C.

    2014-01-01

    In the nuclear safety field, response surfaces were used in the first demonstration of the code scaling, applicability, and uncertainty (CSAU) methodology to quantify the uncertainty of the peak clad temperature (PCT) during a large-break loss-of-coolant accident (LBLOCA). Surrogates could have applications in other nuclear safety areas such as dynamic probabilistic safety assessment (PSA). Dynamic PSA attempts to couple the probabilistic nature of failure events, component transitions, and human reliability to deterministic calculations of time-dependent nuclear power plant (NPP) responses usually through the use of thermal-hydraulic (TH) system codes. The overall mathematical complexity of the dynamic PSA architectures with many embedded computational expensive TH code calculations with large input/output data streams have limited realistic studies of NPPs. This paper presents a time-dependent surrogate model for the recirculation phase of a hot leg LBLOCA in the OPR-1000. The surrogate model is developed through the ACE algorithm, a powerful nonparametric regression technique, trained on RELAP5 simulations of the LBLOCA. Benchmarking of the surrogate is presented and an application to a simplified dynamic event tree (DET). A time-dependent surrogate model to predict core subcooling during the recirculation phase of a hot leg LBLOCA in the OPR-1000 has been developed. The surrogate assumed the structure of a general discrete time dynamic model and learned the nonlinear functional form by performing nonparametric regression on RELAP5 simulations with the ACE algorithm. The surrogate model input parameters represent mass and energy flux terms to the RCS that appeared as user supplied or code calculated boundary conditions in the RELAP5 model. The surrogate accurately predicted the TH behavior of the core for a variety of HPSI system performance and containment conditions when compared with RELAP5 simulations. The surrogate was applied in a DET application replacing
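
    The surrogate's structure, y(t+1) = f(y(t), u(t)) learned by regression on code runs, can be sketched with ordinary least squares standing in for the ACE algorithm. The "TH code" here is a fake one-equation heating/cooling balance, not RELAP5, and the linear model form is my simplification of the nonparametric fit:

```python
import math

def truth(y, u):
    """Stand-in for the expensive TH code: one step of a heat balance."""
    return 0.995 * y + 2.0 * u + 0.5

def solve3(a, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(a)
    xs = []
    for i in range(3):
        m = [row[:] for row in a]
        for r in range(3):
            m[r][i] = b[r]
        xs.append(det3(m) / d)
    return xs

# generate training trajectories from the "code"
ys, us, nxt = [], [], []
y = 500.0
for t in range(200):
    u = 1.0 + 0.5 * math.sin(0.3 * t)   # varying boundary-condition input
    ys.append(y)
    us.append(u)
    y = truth(y, u)
    nxt.append(y)

# least squares for y+ = a*y + b*u + c via the normal equations
cols = [ys, us, [1.0] * len(ys)]
G = [[sum(ci * cj for ci, cj in zip(c1, c2)) for c2 in cols] for c1 in cols]
rhs = [sum(ci * t for ci, t in zip(c, nxt)) for c in cols]
a_hat, b_hat, c_hat = solve3(G, rhs)
```

    Once trained, the surrogate replaces the code inside a dynamic event tree: each branch steps the cheap recursion instead of launching a full system-code run.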

  12. Applications of a simulation model to decisions in mallard management

    Science.gov (United States)

    Cowardin, L.M.; Johnson, D.H.; Shaffer, T.L.; Sparling, D.W.

    1988-01-01

    A system comprising simulation models and data bases for habitat availability and nest success rates was used to predict results from a mallard (Anas platyrhynchos) management plan and to compare six management methods with a control. Individual treatments in the applications included land purchase for waterfowl production, wetland easement purchase, lease of uplands for waterfowl management, cropland retirement, use of no-till winter wheat, delayed cutting of alfalfa, installation of nest baskets, nesting island construction, and use of predator-resistant fencing.The simulations predicted that implementation of the management plan would increase recruits by 24%. Nest baskets were the most effective treatment, accounting for 20.4% of the recruits. No-till winter wheat was the second most effective, accounting for 5.9% of the recruits. Wetland loss due to drainage would cause an 11% loss of breeding population in 10 years.The models were modified to account for migrational homing. The modification indicated that migrational homing would enhance the effects of management. Nest success rates were critical contributions to individual management methods. The most effective treatments, such as nest baskets, had high success rates and affected a large portion of the breeding population.Economic analyses indicated that nest baskets would be the most economical of the three techniques tested. The applications indicated that the system is a useful tool to aid management decisions, but data are scarce for several important variables. Basic research will be required to adequately model the effect of migrational homing and density dependence on production. The comprehensive nature of predictions desired by managers will also require that production models like the one described here be extended to encompass the entire annual cycle of waterfowl.

  13. Open Data in Mobile Applications, New Models for Service Information

    Directory of Open Access Journals (Sweden)

    Manuel GÉRTRUDIX BARRIO

    2016-06-01

    Full Text Available The combination of open data generated by government and the proliferation of mobile devices enables the creation of new information services and improved delivery of existing ones. Significantly, it gives citizens simple, quick and effective access to information. Free applications that use open data provide useful information in real time, tailored to the user experience and/or geographic location. This changes the concept of "service information". Both the infomediary sector and citizens now have new models for the production and dissemination of this type of information. From a theoretical contextualization of aspects such as the datification of reality, the mobile registration of everyday experience, and the reinterpretation of the service information, we analyze the role of open data in the public sector in Spain and its concrete application in building apps based on these data sets. The findings indicate that this is a phenomenon that will continue to grow, because these applications provide useful and efficient information for decision-making in everyday life.

  14. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; students at an advanced undergraduate, master’s level in information systems, business administration, or the application of computer science.  
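
    The essential bi-level structure, a leader optimizing while anticipating a follower's best response, can be shown with a tiny Stackelberg example solved by grid search. The quadratic follower payoff and the leader's revenue objective are invented for illustration:

```python
def follower_best_response(x, step=0.001):
    """Follower maximizes (1 - x) * y - 0.5 * y^2 over y in [0, 1],
    taking the leader's decision x as given."""
    best_y, best_v = 0.0, float("-inf")
    y = 0.0
    while y <= 1.0 + 1e-12:
        v = (1.0 - x) * y - 0.5 * y * y
        if v > best_v:
            best_v, best_y = v, y
        y += step
    return best_y

def leader_solve(step=0.01):
    """Leader maximizes x * y(x), anticipating the follower's reaction y(x)."""
    best = (float("-inf"), None)
    x = 0.0
    while x <= 1.0 + 1e-12:
        y = follower_best_response(x)
        best = max(best, (x * y, x))
        x += step
    return best

value, x_star = leader_solve()
```

    Analytically the follower reacts with y(x) = 1 - x, so the leader maximizes x(1 - x) and settles at x = 0.5 with value 0.25; nested grid search recovers this, and the same nesting pattern generalizes to the multi-leader, multi-follower cases the monograph treats.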

  15. Active Brownian motion models and applications to ratchets

    Science.gov (United States)

    Fiasconaro, A.; Ebeling, W.; Gudowska-Nowak, E.

    2008-10-01

    We give an overview of recent studies on the model of Active Brownian Motion (ABM) coupled to reservoirs providing free energy which may be converted into kinetic energy of motion. First, we present an introduction to the general concept of active Brownian particles, which are capable of taking up energy from a source and transforming part of it in order to perform various activities. In the second part of our presentation we consider applications of ABM to ratchet systems with different forms of differentiable potentials. Both analytical and numerical evaluations are discussed for three cases of sinusoidal, staircase-like and Mateos ratchet potentials, also with additional loads modelled by a tilted potential structure. In addition, the stochastic character of the kinetics is investigated by considering perturbation by Gaussian white noise, which is shown to be responsible for driving the directionality of the asymptotic flux in the ratchet. This stochastically driven directionality effect is visualized as a strong nonmonotonic dependence of the statistics of the right versus left trajectories of motion, leading to a net current of particles. Possible applications of ratchet systems to molecular motors are also briefly discussed.
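
    A minimal overdamped Langevin sketch of the tilted-ratchet case: the deterministic drift is the load F minus the slope of an asymmetric two-harmonic ratchet potential, and Gaussian white noise enters through an Euler-Maruyama term. Parameter values are invented; with this F the tilt exceeds the maximum barrier slope, so a net rightward current is guaranteed:

```python
import math
import random

def simulate_ratchet(F=2.0, a=1.0, b=0.25, D=0.1, dt=0.01, steps=5000,
                     n_particles=50, seed=1):
    """Overdamped motion dx = (F - V'(x)) dt + sqrt(2 D dt) * N(0, 1),
    with ratchet slope V'(x) = a*sin(2*pi*x) + b*sin(4*pi*x)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_particles):
        x = 0.0
        for _ in range(steps):
            drift = F - a * math.sin(2 * math.pi * x) - b * math.sin(4 * math.pi * x)
            x += drift * dt + math.sqrt(2 * D * dt) * rng.gauss(0.0, 1.0)
        finals.append(x)
    return finals

finals = simulate_ratchet()
mean_disp = sum(finals) / len(finals)
```

    Removing the tilt (F = 0) and instead switching the potential on and off periodically turns this into a flashing ratchet, where the asymmetry b ≠ 0 alone sets the direction of the net current.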

  16. Validation of CFD models for hydrogen safety application

    International Nuclear Information System (INIS)

    Nikolaeva, Anna; Skibin, Alexander; Krutikov, Alexey; Golibrodo, Luka; Volkov, Vasiliy; Nechaev, Artem; Nadinskiy, Yuriy

    2015-01-01

    Most accidents involving hydrogen begin with its leakage and spreading in the air, followed by spontaneous detonation, which is accompanied by fire and/or deflagration of the hydrogen-air mixture with heat and/or shocks that may cause harm to life and equipment. Outflow of hydrogen into a confined volume and its propagation in the volume is the worst case because of the impact of the confinement on the detonation process. According to the safety requirements for handling hydrogen, specialized systems (ventilation, sprinklers, burners, etc.) are required to keep the hydrogen concentration below the critical value, to eliminate the possibility of detonation and flame propagation. In this study, a simulation of helium propagation in a confined space with different methods of injection and ventilation of helium is presented; helium is used as a safe replacement for hydrogen in experimental studies. Five experiments were simulated in the range from laminar to developed turbulent flow, with different Froude numbers determining the regime of the helium outflow into the air. The processes of stratification and erosion of the stratified helium layer were investigated. The study includes some results of the OECD/NEA-PSI PANDA benchmark and some results of the Gamelan project. An analysis of the applicability of various turbulence models, which are used to close the system of momentum transport equations, implemented in the commercial codes STAR CD, STAR CCM+ and ANSYS CFX, was conducted for different mesh types (polyhedral and hexahedral). A comparison of the computational results with experimental data showed good agreement. In particular, for transition and turbulent regimes the error of the numerical results lies in the range from 5 to 15% for all turbulence models considered. This indicates the applicability of the methods considered for some hydrogen safety problems. However, it should be noted that more validation studies should be performed before CFD is used in hydrogen safety applications with a wide

  17. Applications of the Heisenberg magnetic model in nanoscience

    International Nuclear Information System (INIS)

    Labuz, M.; Kuzma, M.; Wal, A.

    2003-01-01

The theoretical Heisenberg magnet model and its solution given by Bethe and Hulthen (B.H.), known as the Bethe Ansatz (BA), is widely applied in physics (solid state physics, quantum dots, statistical physics, high-temperature superconductivity, low-dimensional systems, etc.), chemistry (polymers, organic metals and magnets), biology (biological molecular arrays and chains), etc. In most of the applications, the Heisenberg model is applied to infinite chains (the asymptotic case), which is a good approximation of reality for objects of macroscopic size. In such a case, the solutions of the model are well known. However, for objects of nanoscale size, one has to find solutions of the Heisenberg model for a finite chain consisting of N nodes. For such a chain, the problem of solving the B.H. equations is very complicated (because of the strong nonlinearity of these equations) even for very small numbers of nodes N (combinatorial explosion). In such cases, even numerical methods are helpless. In our paper, we propose an approach in which numerical methods could be adapted to such a large numerical problem as the B.H. solutions for objects consisting of N>100 nodes, which corresponds to nanoscale physical or biological objects. This method is based on the 'experimental' observation that B.H. solutions change in a quasi-continuous way with respect to N.
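For chain lengths small enough to diagonalize directly, the finite-chain spectrum discussed above can be checked without the Bethe Ansatz at all. The sketch below is an exact-diagonalization alternative, not the B.H. approach itself; the chain length N=4 and coupling J=1 are illustrative choices:

```python
import numpy as np

# Exact diagonalization of a finite spin-1/2 Heisenberg chain,
# H = J * sum_i S_i . S_{i+1} with periodic boundary conditions.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at position i in an n-site chain."""
    mats = [I2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_hamiltonian(n, J=1.0):
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n):
        j = (i + 1) % n  # periodic boundary
        for op in (sx, sy, sz):
            H += J * site_op(op, i, n) @ site_op(op, j, n)
    return H

H = heisenberg_hamiltonian(4)
E0 = np.linalg.eigvalsh(H)[0].real
print(E0)  # ground-state energy of the N=4 ring: -2.0 for J=1
```

The cost of building the `2**N`-dimensional Hamiltonian is exactly the combinatorial explosion the abstract mentions, which is why approaches that scale with N rather than `2**N` are needed for N>100.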

  18. [Virtual audiovisual talking heads: articulatory data and models--applications].

    Science.gov (United States)

    Badin, P; Elisei, F; Bailly, G; Savariaux, C; Serrurier, A; Tarabalka, Y

    2007-01-01

In the framework of experimental phonetics, our approach to the study of speech production is based on the measurement, analysis and modeling of orofacial articulators such as the jaw, the face and lips, the tongue and the velum. We therefore present in this article experimental techniques that allow the shape and movement of speech articulators to be characterised (static and dynamic MRI, computed tomodensitometry, electromagnetic articulography, video recording). We then describe the linear models of the various organs that we can elaborate from speaker-specific articulatory data. We show that these models, which exhibit good geometrical resolution, can be controlled from articulatory data with good temporal resolution and can thus permit the reconstruction of high-quality animations of the articulators. These models, which we have integrated in a virtual talking head, can produce augmented audiovisual speech. In this framework, we have assessed the natural tongue-reading capabilities of human subjects by means of audiovisual perception tests. We conclude by suggesting a number of other applications of talking heads.
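Linear articulator models of this kind are typically built by principal component analysis of many measured shapes. A minimal sketch of the idea on synthetic contour data (the data, dimensions, and two-component structure are invented for illustration, not real MRI-derived tongue contours):

```python
import numpy as np

# Synthetic "articulator contours": a mean shape plus two latent
# deformation patterns driven by per-frame control parameters.
rng = np.random.default_rng(0)
n_frames, n_points = 50, 20
basis = rng.normal(size=(2, n_points))      # two deformation modes
params = rng.normal(size=(n_frames, 2))     # per-frame articulatory controls
mean_shape = np.linspace(0.0, 1.0, n_points)
contours = mean_shape + params @ basis      # rank-2 data by construction

# PCA via SVD of the centered data recovers a compact linear model
centered = contours - contours.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Reconstruct every contour from just the first two components
recon = contours.mean(axis=0) + (U[:, :2] * s[:2]) @ Vt[:2]
print(explained[:3])  # first two components carry essentially all variance
```

With real data the first few components would carry most, not all, of the variance; the component scores then serve as the low-dimensional control parameters of the talking head.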

  19. Modeling multibody systems with uncertainties. Part II: Numerical applications

    International Nuclear Information System (INIS)

    Sandu, Corina; Sandu, Adrian; Ahmadian, Mehdi

    2006-01-01

This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects'. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter-car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to those obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties.
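The efficiency claim (spectral methods beating Monte Carlo for smooth responses) can be illustrated in one dimension. The sketch below is not the paper's multibody code: the response function is a made-up smooth stand-in, and Gauss-Hermite quadrature plays the role of the collocation machinery underlying polynomial chaos:

```python
import numpy as np

# Propagate X ~ N(0,1) through a smooth response and estimate E[f(X)]
# two ways: Gauss-Hermite quadrature (12 evaluations) vs Monte Carlo
# (100,000 evaluations). The response function is illustrative.
def response(x):
    return np.exp(-0.5 * x) * np.cos(x)

# hermegauss uses the probabilists' weight exp(-x^2/2), so normalize
# by sqrt(2*pi) to get an expectation under the standard normal.
nodes, weights = np.polynomial.hermite_e.hermegauss(12)
mean_quad = weights @ response(nodes) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(0)
mean_mc = response(rng.standard_normal(100_000)).mean()

print(mean_quad, mean_mc)  # both near exp(-3/8)*cos(1/2), about 0.6032
```

Twelve deterministic evaluations reach machine-level accuracy here, while the Monte Carlo estimate still carries sampling error of order 1/sqrt(100000); the same trade-off drives the paper's conclusion, with the caveat that quadrature-type methods lose this edge for rough or high-dimensional responses.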

  20. Modeling multibody systems with uncertainties. Part II: Numerical applications

    Energy Technology Data Exchange (ETDEWEB)

    Sandu, Corina, E-mail: csandu@vt.edu; Sandu, Adrian; Ahmadian, Mehdi [Virginia Polytechnic Institute and State University, Mechanical Engineering Department (United States)

    2006-04-15

This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects'. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter-car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to those obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties.

  1. Applications of computational modeling in metabolic engineering of yeast.

    Science.gov (United States)

    Kerkhoven, Eduard J; Lahtvee, Petri-Jaan; Nielsen, Jens

    2015-02-01

Generally, a microorganism's phenotype can be described by its pattern of metabolic fluxes. Although fluxes cannot be measured directly, inference of fluxes is well established. In biotechnology the aim is often to increase the capacity of specific fluxes. For this, metabolic engineering methods have been developed and applied extensively. Many of these rely on balancing of intracellular metabolites, redox, and energy fluxes, using genome-scale models (GEMs) that, in combination with appropriate objective functions and constraints, can be used to predict potential gene targets for obtaining a preferred flux distribution. These methods point to strategies for altering gene expression; however, fluxes are often controlled by post-transcriptional events. Moreover, GEMs usually do not take into account metabolic regulation, thermodynamics and enzyme kinetics. To facilitate metabolic engineering, tools from synthetic biology have emerged, enabling integration and assembly of naturally nonexistent, but well-characterized, components into a living organism. Kinetic models are often used to describe these systems, and to integrate them with the standard metabolic engineering approach it is necessary to expand the modeling of metabolism to consider the kinetics of individual processes. This review gives an overview of the models available for metabolic engineering of yeast and discusses their applications. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permission@oup.com.
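The GEM-based prediction described above boils down to a linear program: maximize an objective flux subject to steady-state mass balance (S·v = 0) and flux bounds. A toy flux-balance-analysis sketch on a hypothetical one-metabolite network (a real GEM has thousands of reactions; the network and bounds here are invented):

```python
import numpy as np
from scipy.optimize import linprog

# Minimal flux-balance analysis (FBA) on a toy network.
# Columns: v1 (substrate uptake -> A), v2 (A -> biomass), v3 (A -> byproduct)
S = np.array([[1.0, -1.0, -1.0]])         # steady-state balance of metabolite A
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 units

# Maximize biomass flux v2 (linprog minimizes, so negate the objective)
res = linprog(c=[0, -1, 0], A_eq=S, b_eq=[0], bounds=bounds)
v1, v2, v3 = res.x
print(v2)  # optimal biomass flux: all available uptake routed to biomass
```

Swapping the objective vector or tightening bounds is how gene-deletion and overexpression strategies are screened in practice, typically with a dedicated package rather than raw `linprog`.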

  2. Validation of elastic cross section models for space radiation applications

    Energy Technology Data Exchange (ETDEWEB)

    Werneth, C.M., E-mail: charles.m.werneth@nasa.gov [NASA Langley Research Center (United States); Xu, X. [National Institute of Aerospace (United States); Norman, R.B. [NASA Langley Research Center (United States); Ford, W.P. [The University of Tennessee (United States); Maung, K.M. [The University of Southern Mississippi (United States)

    2017-02-01

    The space radiation field is composed of energetic particles that pose both acute and long-term risks for astronauts in low earth orbit and beyond. In order to estimate radiation risk to crew members, the fluence of particles and biological response to the radiation must be known at tissue sites. Given that the spectral fluence at the boundary of the shielding material is characterized, radiation transport algorithms may be used to find the fluence of particles inside the shield and body, and the radio-biological response is estimated from experiments and models. The fidelity of the radiation spectrum inside the shield and body depends on radiation transport algorithms and the accuracy of the nuclear cross sections. In a recent study, self-consistent nuclear models based on multiple scattering theory that include the option to study relativistic kinematics were developed for the prediction of nuclear cross sections for space radiation applications. The aim of the current work is to use uncertainty quantification to ascertain the validity of the models as compared to a nuclear reaction database and to identify components of the models that can be improved in future efforts.

  3. Analysis and application of opinion model with multiple topic interactions.

    Science.gov (United States)

    Xiong, Fei; Liu, Yun; Wang, Liang; Wang, Ximeng

    2017-08-01

To reveal the heterogeneous behaviors of opinion evolution in different scenarios, we propose an opinion model with topic interactions. Individual opinions and topic features are represented by a multidimensional vector. We measure an agent's action towards a specific topic by the product of opinion and topic feature. When pairs of agents interact on a topic, their actions are introduced into opinion updates with bounded confidence. Simulation results show that a transition from a disordered state to a consensus state occurs at a critical point of the tolerance threshold, which depends on the opinion dimension. The critical point increases as the dimension of opinions increases. Multiple topics promote opinion interactions and lead to the formation of macroscopic opinion clusters. In addition, more topics accelerate the evolutionary process and weaken the effect of network topology. We use two sets of large-scale real data to evaluate the model, and the results prove its effectiveness in characterizing a real evolutionary process. Our model achieves high performance in individual action prediction and even outperforms state-of-the-art methods, while having much smaller computational complexity. This paper provides a demonstration of possible practical applications of theoretical opinion dynamics.
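A stripped-down version of the bounded-confidence mechanism can be simulated in a few lines. This Deffuant-style sketch uses vector opinions but omits the paper's topic-feature weighting; all parameter values are illustrative:

```python
import numpy as np

# Bounded-confidence dynamics with vector opinions: agents interact
# pairwise and average their opinions only when their opinion distance
# is below the tolerance threshold eps.
rng = np.random.default_rng(42)
n_agents, dim = 100, 3
eps, n_steps = 0.8, 20_000
opinions = rng.uniform(-1, 1, size=(n_agents, dim))
initial_spread = opinions.std(axis=0)

for _ in range(n_steps):
    i, j = rng.choice(n_agents, size=2, replace=False)
    if np.linalg.norm(opinions[i] - opinions[j]) < eps:
        midpoint = 0.5 * (opinions[i] + opinions[j])
        opinions[i] = midpoint  # full convergence on contact
        opinions[j] = midpoint

spread = opinions.std(axis=0)
print(initial_spread, spread)  # per-dimension spread shrinks as clusters form
```

Sweeping `eps` in such a simulation reproduces the qualitative transition the abstract describes: below a critical tolerance the population fragments into clusters, above it a single consensus cluster forms.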

  4. Educational and Scientific Applications of Climate Model Diagnostic Analyzer

    Science.gov (United States)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Zhang, J.; Bao, Q.

    2016-12-01

Climate Model Diagnostic Analyzer (CMDA) is a web-based information system designed for the climate modeling and model analysis community to analyze climate data from models and observations. CMDA provides tools to diagnostically analyze climate data for model validation and improvement, and to systematically manage analysis provenance for sharing results with other investigators. CMDA utilizes cloud computing resources, multi-threaded computing, machine-learning algorithms, web service technologies, and provenance-supporting technologies to address technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. As the CMDA infrastructure and technology have matured, we have developed educational and scientific applications of CMDA. Educationally, CMDA has supported the summer school of the JPL Center for Climate Sciences for three years since 2014. In the summer school, the students work on group research projects for which CMDA provides datasets and analysis tools. Each student is assigned a virtual machine with CMDA installed in Amazon Web Services. A provenance management system for CMDA has been developed to keep track of students' usage of CMDA and to recommend datasets and analysis tools for their research topics. The provenance system also allows students to revisit their analysis results and share them with their group. Scientifically, we have developed several science use cases of CMDA covering various topics, datasets, and analysis types. Each use case is described and listed in terms of a scientific goal, the datasets used, the analysis tools used, the scientific results discovered, an analysis result such as output plots and data files, and a link to the exact analysis service call with all the input arguments filled in. For example, one science use case is the evaluation of the NCAR CAM5 model with MODIS total cloud fraction. The analysis service used is the Difference Plot Service of

  5. Brookhaven Regional Energy Facility Siting Model (REFS): model development and application

    Energy Technology Data Exchange (ETDEWEB)

    Meier, P.; Hobbs, B.; Ketcham, G.; McCoy, M.; Stern, R.

    1979-06-01

A siting methodology developed specifically to bridge the gap between regional energy system scenarios and environmental transport models is documented. Development of the model is described in Chapter 1. Chapter 2 describes the basic structure of such a model. Additional chapters on model development cover: generation, transmission, demand disaggregation, the interface to other models, computational aspects, the coal sector, water resource considerations, and air quality considerations. These subjects comprise Part I. Part II, Model Applications, covers: analysis of water resource constraints, water resource issues in the New York Power Pool, water resource issues in the New England Power Pool, water resource issues in the Pennsylvania-Jersey-Maryland Power Pool, and a summary of water resource constraint analysis. (MCW)

  6. Conceptual study of the application software manager using the Xlet model in the nuclear fields

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon Koo; Park, Heui Youn; Koo, In Soo [KAERI, Taejon (Korea, Republic of); Park, Hee Seok; Kim, Jung Seon; Sohn, Chang Ho [Samchang Enterprise Co., Ltd., Taejon (Korea, Republic of)

    2003-10-01

In order to reduce the cost of software maintenance, including software modification, we suggest an object-oriented program that checks the version of the application program using the Java language, together with a technique for executing the downloaded application program over the network using an application manager. To change the traditional scheduler into the application manager, we have adopted the Xlet concept, used over the network, in the nuclear field. Usually, an Xlet is a Java application that runs on a digital television receiver. The Java TV Application Program Interface (API) defines an application model called the Xlet application lifecycle; Java applications that use this lifecycle model are called Xlets. The Xlet application lifecycle is compatible with the existing application environment and virtual machine technology. The Xlet application lifecycle model defines the dialog (protocol) between an Xlet and its environment.

  7. Conceptual study of the application software manager using the Xlet model in the nuclear fields

    International Nuclear Information System (INIS)

    Lee, Joon Koo; Park, Heui Youn; Koo, In Soo; Park, Hee Seok; Kim, Jung Seon; Sohn, Chang Ho

    2003-01-01

In order to reduce the cost of software maintenance, including software modification, we suggest an object-oriented program that checks the version of the application program using the Java language, together with a technique for executing the downloaded application program over the network using an application manager. To change the traditional scheduler into the application manager, we have adopted the Xlet concept, used over the network, in the nuclear field. Usually, an Xlet is a Java application that runs on a digital television receiver. The Java TV Application Program Interface (API) defines an application model called the Xlet application lifecycle; Java applications that use this lifecycle model are called Xlets. The Xlet application lifecycle is compatible with the existing application environment and virtual machine technology. The Xlet application lifecycle model defines the dialog (protocol) between an Xlet and its environment.

  8. [Application of three compartment model and response surface model to clinical anesthesia using Microsoft Excel].

    Science.gov (United States)

    Abe, Eiji; Abe, Mari

    2011-08-01

With the spread of total intravenous anesthesia, clinical pharmacology has become more important. We report a Microsoft Excel file applying a three-compartment model and a response surface model to clinical anesthesia. On the Microsoft Excel sheet, propofol, remifentanil and fentanyl effect-site concentrations are predicted (three-compartment model), and the probabilities of no response to prodding, shaking, surrogates of painful stimuli, and laryngoscopy are calculated using the predicted effect-site drug concentrations. Time-dependent changes in these calculated values are shown graphically. Recent developments in anesthetic drug interaction studies are remarkable, and this Excel file makes their application to clinical anesthesia simple and helpful.
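The pharmacokinetic core of such a spreadsheet is a three-compartment mammillary model with an added effect-site compartment, stepped forward in time. A sketch using simple Euler integration (all rate constants, the central volume, and the bolus dose are illustrative placeholders, not a validated parameter set for any drug):

```python
import numpy as np

# Three-compartment PK model with an effect-site compartment.
k10, k12, k13 = 0.12, 0.11, 0.04    # 1/min: elimination and distribution
k21, k31, ke0 = 0.055, 0.003, 0.26  # 1/min: return and effect-site rates
V1 = 16.0                           # litres, central volume (illustrative)

dt, t_end = 0.01, 60.0              # minutes
steps = int(t_end / dt)
A1, A2, A3, Ce = 100.0, 0.0, 0.0, 0.0  # mg bolus into the central compartment

ce_history = np.empty(steps)
for n in range(steps):
    C1 = A1 / V1
    dA1 = -(k10 + k12 + k13) * A1 + k21 * A2 + k31 * A3
    dA2 = k12 * A1 - k21 * A2
    dA3 = k13 * A1 - k31 * A3
    dCe = ke0 * (C1 - Ce)
    A1 += dA1 * dt
    A2 += dA2 * dt
    A3 += dA3 * dt
    Ce += dCe * dt
    ce_history[n] = Ce

# After a bolus the effect-site concentration peaks with a lag, then declines
print(ce_history.argmax() * dt, ce_history.max())
```

The same recurrence can be written directly into spreadsheet rows, one row per time step, which is essentially what the reported Excel file does; the response-surface step then maps the effect-site concentrations to response probabilities.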

  9. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem through viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, coupling procedures, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  10. Exploring the Application of Capital Facility Investment Justification Model

    Directory of Open Access Journals (Sweden)

    Marijan Karić

    2013-07-01

Full Text Available For decades now, models for identifying and quantifying the level of risk of investment projects and evaluating investment justification have been the subject of investigation by members of professional and research communities. It is important to quantify the level of risk because, by evaluating investment justification in terms of the risks involved, the decision-maker (investor) is able to choose from the available alternatives the one that will achieve the most favourable ratio of expected profit to assumed risk. In this way, the economic entity can raise its productivity, profitability and the quality of business operation in general. The aim of this paper was to investigate the extent to which medium-sized and large companies have been using modern methods of investment justification evaluation in their decision-making process and to determine the quality of the application of the selected methods in practice. The study was conducted on a sample of medium-sized and large enterprises in eastern Croatia during 2011 and 2012, and it was established that, despite the fact that a large number of modern investment project profitability and risk assessment models have been developed, the level of their application in practice is not high enough. The analyzed investment proposals included only basic methods of capital budgeting without risk assessment. Hence, it was concluded that individual investors were presented with low-quality and incomplete investment justification evaluation results, on the basis of which decisions of key importance for the development of the economic entity as a whole were made. This paper aims to underline the need for financial managers to inform and educate themselves about contemporary investment project profitability and risk assessment models, as well as the need to create educational programmes and computer solutions that will encourage key people in companies to acquire new knowledge and apply modern
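For reference, the "basic capital budgeting" the analyzed proposals relied on is essentially a net-present-value calculation; even a crude risk assessment can be layered on top by re-evaluating pessimistic cash-flow scenarios. All figures below are hypothetical:

```python
# Worked NPV example with a simple scenario-based risk check:
# re-discount the project under pessimistic cash flows and see
# whether the investment still clears the hurdle rate.
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial (negative) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

base_case = [-1000.0, 400.0, 400.0, 400.0, 400.0]
pessimistic = [-1000.0, 300.0, 300.0, 300.0, 300.0]

print(round(npv(0.10, base_case), 2))    # 267.95 -> accept in the base case
print(round(npv(0.10, pessimistic), 2))  # -49.04 -> rejected if demand falls 25%
```

The gap between the two results is exactly the information the paper finds missing from the analyzed proposals: a base-case NPV alone says nothing about how close the project is to destroying value.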

  11. GSTARS computer models and their applications, part I: theoretical development

    Science.gov (United States)

    Yang, C.T.; Simoes, F.J.M.

    2008-01-01

GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer applications, with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with a graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  12. Modelling a New Product Model on the Basis of an Existing STEP Application Protocol

    Directory of Open Access Journals (Sweden)

    B.-R. Hoehn

    2005-01-01

Full Text Available During the last years, a great range of computer-aided tools has been generated to support the development process of various products. The goal of a continuous data flow, needed for high efficiency, requires powerful standards for data exchange. At the FZG (Gear Research Centre of the Technical University of Munich) there was a need for a common gear data format for data exchange between gear calculation programs. The STEP standard ISO 10303 was developed for this type of purpose, but a suitable definition of gear data was still missing, even in the Application Protocol AP 214, developed for the design process in the automotive industry. The creation of a new STEP Application Protocol, or the extension of an existing protocol, would be a very time-consuming normative process. So a new method was introduced by FZG. Some very general definitions of an Application Protocol (here AP 214) were used to determine rules for an exact specification of the required kind of data. In this case a product model for gear units was defined based on elements of AP 214. Therefore no change of the Application Protocol is necessary. Meanwhile the product model for gear units has been published as a VDMA paper and successfully introduced for data exchange within the German gear industry associated with the FVA (German Research Organisation for Gears and Transmissions). This method can also be adopted for other applications not yet sufficiently defined by STEP.

  13. Three essays on multi-level optimization models and applications

    Science.gov (United States)

    Rahdar, Mohammad

The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the value of the decision variables may also impact the objective functions of other levels. A two-level model is called a bilevel model and can be considered a Stackelberg game with a leader and a follower: the leader anticipates the response of the follower and optimizes its objective function, and then the follower reacts to the leader's action. The multi-level decision-making model has many real-world applications, such as government decisions, energy policies, market economics, and network design. However, there is a lack of capable algorithms to solve medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States, and other interactions between the two policies over the next twenty years, are investigated. This problem mainly has two levels of decision makers: the government/policy makers and the biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansion, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model within the concept of a rolling-horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation
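The Stackelberg structure described above can be made concrete with a tiny discrete example solved by brute force. The payoff functions are made up, and enumeration is exactly what fails to scale, which is the gap that dedicated bilevel algorithms such as branch and bound target:

```python
# Toy bilevel (Stackelberg) game: the leader picks x anticipating the
# follower's best response y*(x), then collects leader_payoff(x, y*(x)).
leader_actions = [0, 1, 2]
follower_actions = [0, 1, 2]

def follower_payoff(x, y):
    return -(y - x) ** 2 + y   # follower prefers y near x, nudged upward

def leader_payoff(x, y):
    return 3 * x - 2 * y       # leader likes high x and a low follower response

best = None
for x in leader_actions:
    # anticipate the follower: best response to this leader action
    y_star = max(follower_actions, key=lambda y: follower_payoff(x, y))
    value = leader_payoff(x, y_star)
    if best is None or value > best[2]:
        best = (x, y_star, value)

print(best)  # (leader action, follower response, leader payoff)
```

With continuous variables and thousands of actions per level this enumeration is hopeless, and the lower-level optimality conditions must instead be embedded into the upper-level problem, which is what makes even bilevel linear programs hard.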

  14. Testing simulation and structural models with applications to energy demand

    Science.gov (United States)

    Wolff, Hendrik

    2007-12-01

    This dissertation deals with energy demand and consists of two parts. Part one proposes a unified econometric framework for modeling energy demand and examples illustrate the benefits of the technique by estimating the elasticity of substitution between energy and capital. Part two assesses the energy conservation policy of Daylight Saving Time and empirically tests the performance of electricity simulation. In particular, the chapter "Imposing Monotonicity and Curvature on Flexible Functional Forms" proposes an estimator for inference using structural models derived from economic theory. This is motivated by the fact that in many areas of economic analysis theory restricts the shape as well as other characteristics of functions used to represent economic constructs. Specific contributions are (a) to increase the computational speed and tractability of imposing regularity conditions, (b) to provide regularity preserving point estimates, (c) to avoid biases existent in previous applications, and (d) to illustrate the benefits of our approach via numerical simulation results. The chapter "Can We Close the Gap between the Empirical Model and Economic Theory" discusses the more fundamental question of whether the imposition of a particular theory to a dataset is justified. I propose a hypothesis test to examine whether the estimated empirical model is consistent with the assumed economic theory. Although the proposed methodology could be applied to a wide set of economic models, this is particularly relevant for estimating policy parameters that affect energy markets. This is demonstrated by estimating the Slutsky matrix and the elasticity of substitution between energy and capital, which are crucial parameters used in computable general equilibrium models analyzing energy demand and the impacts of environmental regulations. Using the Berndt and Wood dataset, I find that capital and energy are complements and that the data are significantly consistent with duality

  15. Can We Trust Computational Modeling for Medical Applications?

    Science.gov (United States)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Myers, Jerry

    2015-01-01

Operations in extreme environments such as spaceflight pose human health risks that are currently not well understood and potentially unanticipated. In addition, there are limited clinical and research data to inform the development and implementation of therapeutics for these unique health risks. In this light, NASA's Human Research Program (HRP) is leveraging biomedical computational models and simulations (M&S) to help inform, predict, assess and mitigate spaceflight health and performance risks, and enhance countermeasure development. To ensure that these M&S can be applied with confidence to the space environment, it is imperative to incorporate rigorous verification, validation and credibility assessment (VV&C) processes to ensure that the computational tools are sufficiently reliable to answer questions within their intended use domain. In this presentation, we will discuss how NASA's Integrated Medical Model (IMM) and Digital Astronaut Project (DAP) have successfully adapted NASA's Standard for Models and Simulations, NASA-STD-7009, to achieve this goal. These VV&C methods are also being leveraged by organizations such as the Food and Drug Administration (FDA), the National Institutes of Health (NIH) and the American Society of Mechanical Engineers (ASME) to establish new M&S VV&C standards and guidelines for healthcare applications. Similarly, we hope to provide some insight to the greater aerospace medicine community on how to develop and implement M&S with sufficient confidence to augment medical research and operations.

  16. Computational multiscale modeling of fluids and solids theory and applications

    CERN Document Server

    Steinhauser, Martin Oliver

    2017-01-01

The idea of the book is to provide a comprehensive overview of computational physics methods and techniques that are used for materials modeling on different length and time scales. Each chapter first provides an overview of the basic physical principles which are the basis for the numerical and mathematical modeling on the respective length scale. The book includes the micro-scale, the meso-scale and the macro-scale, and the chapters follow this classification. The book explains in detail many tricks of the trade of some of the most important methods and techniques that are used to simulate materials on the respective levels of spatial and temporal resolution. Case studies are included to further illustrate some methods or theoretical considerations. Example applications for all techniques are provided, some of which are from the author's own contributions to some of the research areas. The second edition has been expanded by new sections in computational models on meso/macroscopic scales for ocean and a...

  17. Application of Molecular Modeling to Urokinase Inhibitors Development

    Directory of Open Access Journals (Sweden)

    V. B. Sulimov

    2014-01-01

    Full Text Available Urokinase-type plasminogen activator (uPA) plays an important role in the regulation of diverse physiologic and pathologic processes. Experimental research has shown that elevated uPA expression is associated with cancer progression, metastasis, and shortened survival in patients, whereas suppression of the proteolytic activity of uPA leads to an evident decrease of metastasis. Therefore, uPA has been considered a promising molecular target for the development of anticancer drugs. The present study sets out to develop new selective uPA inhibitors using computer-aided structure-based drug design methods. The investigation involves the following stages: computer modeling of the protein active site; development and validation of computer molecular modeling methods: docking (SOL program), postprocessing (DISCORE program), direct generalized docking (FLM program), and the application of quantum chemical calculations (MOPAC package); search for uPA inhibitors among molecules from databases of ready-made compounds; and design of new chemical structures and their optimization and experimental examination. On the basis of known uPA inhibitors and modeling results, 18 new compounds have been designed, calculated using the programs mentioned above, synthesized, and tested in vitro. Eight of them display inhibitory activity and two of them display activity of about 10 μM.

  18. A clinically applicable six-segmented foot model.

    Science.gov (United States)

    De Mits, Sophie; Segers, Veerle; Woodburn, Jim; Elewaut, Dirk; De Clercq, Dirk; Roosen, Philip

    2012-04-01

    We describe a multi-segmented foot model comprising lower leg, rearfoot, midfoot, lateral forefoot, medial forefoot, and hallux for routine use in a clinical setting. The Ghent Foot Model describes the kinematic patterns of functional units of the foot, especially the midfoot, to investigate patient populations where midfoot deformation or dysfunction is an important feature, for example, rheumatoid arthritis patients. Data were obtained from surface markers by a 6 camera motion capture system at 500 Hz. Ten healthy subjects walked barefoot along a 12 m walkway at self-selected speed. Joint angles (rearfoot to shank, midfoot to rearfoot, lateral and medial forefoot to midfoot, and hallux to medial forefoot) in the sagittal, frontal, and transverse plane are reported according to anatomically based reference frames. These angles were calculated and reported during the foot rollover phases in stance, detected by synchronized plantar pressure measurements. Repeated measurements of each subject revealed low intra-subject variability, varying between 0.7° and 2.3° for the minimum values, between 0.5° and 2.1° for the maximum values, and between 0.8° and 5.8° for the ROM. The described movement patterns were repeatable and consistent with biomechanical and clinical knowledge. As such, the Ghent Foot model permits intersegment, in vivo motion measurement of the foot, which is crucial for both clinical and research applications. Copyright © 2011 Orthopaedic Research Society.

  19. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  20. Predictive modeling of addiction lapses in a mobile health application.

    Science.gov (United States)

    Chih, Ming-Yuan; Patton, Timothy; McTavish, Fiona M; Isham, Andrew J; Judkins-Fisher, Chris L; Atwood, Amy K; Gustafson, David H

    2014-01-01

    The chronically relapsing nature of alcoholism leads to substantial personal, family, and societal costs. The Addiction-Comprehensive Health Enhancement Support System (A-CHESS) is a smartphone application that aims to reduce relapse. To offer targeted support to patients who are at risk of lapses within the coming week, a Bayesian network model to predict such events was constructed using responses on 2,934 weekly surveys (called the Weekly Check-in) from 152 alcohol-dependent individuals who had recently completed residential treatment. The Weekly Check-in is a self-monitoring service, provided in A-CHESS, to track patients' recovery progress. The model showed good predictability, with an area under the receiver operating characteristic curve of 0.829 in 10-fold cross-validation and 0.912 in external validation. The sensitivity/specificity table assists the tradeoff decisions necessary to apply the model in practice. This study moves us closer to the goal of providing lapse prediction so that patients might receive more targeted and timely support. © 2013.
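    The cross-validated discrimination measure quoted above can be reproduced in outline. The sketch below estimates a 10-fold cross-validated AUC for a lapse-style binary classifier; the synthetic features and the logistic-regression stand-in are illustrative assumptions, not the authors' Bayesian network or their data.

```python
# Hedged sketch: 10-fold cross-validated AUC for a binary "lapse" classifier.
# The synthetic data and the logistic-regression model are stand-ins, not the
# A-CHESS Bayesian network described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2934                      # number of Weekly Check-in responses in the study
X = rng.normal(size=(n, 5))   # hypothetical survey-derived features
# make the label weakly depend on the features so the AUC is informative
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.5).astype(int)

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=10, scoring="roc_auc")
print(f"10-fold CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```

    On real data, the per-fold scores returned by `cross_val_score` also show how stable the discrimination estimate is across folds.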

  1. Application of modeling to local chemistry in PWR steam generators

    International Nuclear Information System (INIS)

    Fauchon, C.; Millett, P.J.; Ollar, P.

    1998-01-01

    Localized corrosion of the SG tubes and other components is due to the presence of an aggressive environment in local crevices and occluded regions. In crevices and on vertical and horizontal tube surfaces, corrosion products and particulate matter can accumulate in the form of porous deposits. The SG water contains impurities at extremely low levels (ppb). Low levels of non-volatile impurities, however, can be efficiently concentrated in crevices and sludge piles by a thermal-hydraulic mechanism. The temperature gradient across the SG tube, coupled with local flow starvation, produces local boiling in the sludge and crevices. Since mass transfer processes are inhibited in these geometries, the residual liquid becomes enriched in many of the species present in the SG water. The resulting concentrated solutions have been shown to be aggressive and can corrode the SG materials. This corrosion may occur under various conditions which result in different types of attack, such as pitting, stress corrosion cracking, wastage and denting. A major goal of EPRI's research program has been the development of models of the concentration process and the resulting chemistry. An improved understanding should eventually allow utilities to reduce or eliminate the corrosion by appropriate manipulation of the steam generator water chemistry and/or crevice conditions. The application of these models to experimental data obtained for prototypical SG tube support crevices is described in this paper. The models adequately describe the key features of the experimental data, allowing extrapolations to be made to plant conditions. (author)

  2. A novel modular multilevel converter modelling technique based on semi-analytical models for HVDC application

    Directory of Open Access Journals (Sweden)

    Ahmed Zama

    2016-12-01

    Full Text Available Thanks to its scalability, performance and efficiency, the Modular Multilevel Converter (MMC) has, since its invention, become an attractive topology in industrial applications such as high voltage direct current (HVDC) transmission systems. However, modelling challenges arise from the high number of switching elements in the MMC when such systems are integrated into large simulated networks for stability analysis or protection algorithm testing. In this work, novel dynamic models for the MMC, based on semi-analytical formulations, are proposed to address these challenges. The models can be easily used to simulate the converter for stability analysis or for testing protection algorithms for HVDC grids.

  3. Using the object modeling system for hydrological model development and application

    Directory of Open Access Journals (Sweden)

    S. Kralisch

    2005-01-01

    Full Text Available State-of-the-art challenges in the sustainable management of water resources have created demand for integrated, flexible and easy-to-use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus cannot be easily adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS, GPSRU (Fort Collins, CO, USA), USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows the implementation of single process components to be compiled and applied as custom-tailored model assemblies. This paper describes the basic principles of the OMS and its main components and explains in more detail how the problems arising during the coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example, the implementation of the hydrological model J2000 is discussed.

  4. Selected developments and applications of Leontief models in industrial ecology

    International Nuclear Information System (INIS)

    Stroemman, Anders Hammer

    2005-01-01

    extended for this study through the application of multi-objective optimization techniques and is used to explore efficient trade-offs between reducing CO2 emissions and increasing global factor costs. Concluding Remarks: It has been the scope of this work to contribute to mapping the interdisciplinary landscape between input-output analysis and industrial ecology. The first three papers enter this landscape from the Industrial Ecology side, more specifically from the Life Cycle Assessment platform, and the two latter from the input-output paradigm. The fundamental learning obtained is that the linear section of this landscape is described by Leontief models. Life Cycle Assessment, Mass Flow Analysis, Substance Flow Analysis and related methods can all be represented in the mathematical form proposed by Leontief. The input-output framework offers a well-developed set of methodologies that can bridge the various sub-fields of industrial ecology addressing questions related to inter-process flows. It seems that an acknowledgement of Leontief models as the base framework for the family of linear models in industrial ecology would be beneficial. Following the acknowledgement of Leontief's work comes that of Dantzig and the development of linear programming. In investigating alternative arrangements of production and combinations of technologies to produce a given good, the common practice in LCA has been total enumeration of all scenarios. This might be feasible, and for that sake desirable, for a limited number of combinations. However, as the complexity and the number of alternatives increase, this will not be feasible. Dantzig invented linear programming to address exactly this type of problem. The scientific foundation provided by Leontief and Dantzig has been crucial to the work in this thesis. It is my belief that the impact of their legacy on industrial ecology will increase further in the years to come. (Author)
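    The Leontief quantity model referred to above can be stated compactly: total output x satisfies x = Ax + d, so x = (I - A)^-1 d. The sketch below solves this for a hypothetical three-sector economy; the technology matrix and final demand are illustrative numbers, not data from the thesis.

```python
# Hedged sketch of the Leontief quantity model underlying the linear models
# discussed in the abstract. A and d are illustrative, not from the thesis.
import numpy as np

A = np.array([[0.1, 0.2, 0.0],     # inter-industry requirements per unit output
              [0.3, 0.1, 0.2],
              [0.0, 0.1, 0.1]])
d = np.array([100.0, 50.0, 25.0])  # final demand per sector

# total output x solves (I - A) x = d
x = np.linalg.solve(np.eye(3) - A, d)
print("total output per sector:", np.round(x, 2))
```

    The same (I - A)^-1 structure is what lets LCA inventory calculations be written as a Leontief system, which is the observation the thesis builds on.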

  5. On the limits of application of the nonlocal quark model

    International Nuclear Information System (INIS)

    Efimov, G.V.; Ivanov, M.A.; Novitsyn, E.A.; Ryabtsev, A.D.

    1983-01-01

    The possibility of application of the nonlocal quark model (NQM) to the physics of mesons containing charmed quarks is considered. A method for the description of states with identical quantum numbers is suggested. In order to distinguish between such states, different quark currents are introduced with an additional condition of ''orthogonality'' implied. The latter allows one to neglect nondiagonal off-shell matrix elements in the compositeness condition for coupling constants. In the framework of the NQM with the additional assumptions mentioned, several decay widths of vector charmonium states have been computed, namely the leptonic widths of J/psi(3100), psi'(3685) and psi(3770), and the decay width of psi(3770) into charmed D-mesons, psi(3770) → D anti D. It is shown that the two-parametric freedom of the model is not sufficient to fit the experimental data. It is concluded that a revision of the basic concepts of the NQM is necessary in the physics of mesons containing c-quarks

  6. Modelling of gecko foot for future robot application

    Science.gov (United States)

    Kamaruddin, A.; Ong, N. R.; Aziz, M. H. A.; Alcain, J. B.; Haimi, W. M. W. N.; Sauli, Z.

    2017-09-01

    Every gecko has approximately a million microscale hairs called setae which make it easy for the animal to cling to different surfaces at any orientation, with the Van der Waals force as the primary mechanism used to adhere to any contact surface. In this paper, a strain simulation using COMSOL Multiphysics software was conducted on a 3D MEMS model of an actuated gecko foot, with the aim of achieving optimal sticking with various polymeric materials for future robot applications. Based on the stress and strain analyses done on the seven different polymers, it was found that polysilicon had the best result, which was nearest to 0%, indicating the strongest elasticity among the others. PDMS, on the other hand, failed in the simulation due to its bulk-like nature. Thus, PDMS is not suitable for further study on a gecko-foot robot.

  7. Urban design and modeling: applications and perspectives on GIS

    Directory of Open Access Journals (Sweden)

    Roberto Mingucci

    2013-05-01

    Full Text Available In recent years, GIS systems have evolved because of technological advancements that make possible the simultaneous management of large amounts of information. Interesting aspects of their application concern site documentation at the territorial scale, taking advantage of CAD/BIM systems, which usually work at the building scale instead. In this sense, surveys using sophisticated equipment such as laser scanners or UAV drones quickly capture data that can be accessed even through new "mobile" technologies operating in the context of web-based information systems. This paper aims to investigate the use and perspectives of geographic information technologies, analysis and design tools meant for modeling at different scales, referring to the results of research experiences conducted at the University of Bologna.

  8. Mathematical modeling of control subsystems for CELSS: Application to diet

    Science.gov (United States)

    Waleh, Ahmad; Nguyen, Thoi K.; Kanevsky, Valery

    1991-01-01

    The dynamic control of a Closed Ecological Life Support System (CELSS) in a closed space habitat is of critical importance. The development of a practical method of control is also a necessary step for the selection and design of realistic subsystems and processors for a CELSS. Diet is one of the dynamic factors that strongly influences, and is influenced by, the operational states of all major CELSS subsystems. Solutions to the problems of designing and maintaining a stable diet must be obtained from well-characterized expert subsystems. The general description of a mathematical model that forms the basis of an expert control program for a CELSS is given. The formulation is expressed in terms of a complete set of time-dependent canonical variables. The system representation is dynamic and includes time-dependent storage buffers. The details of the algorithm are described. The steady-state results of the application of the method for representative diets made from wheat, potato, and soybean are presented.

  9. Dynamic behaviours of mix-game model and its application

    Institute of Scientific and Technical Information of China (English)

    Gou Cheng-Ling

    2006-01-01

    In this paper a minority game (MG) is modified by adding to it some agents who play a majority game. Such a game is referred to as a mix-game. The highlight of this model is that the two groups of agents in the mix-game have different bounded abilities to deal with historical information and to count their own performance. Through simulations, it is found that the local volatilities change considerably when some agents who play the majority game are added to the MG, and that the change of local volatilities greatly depends on the different combinations of historical memories of the two groups. Furthermore, the underlying mechanisms for this finding are analysed. An example application of the mix-game model is also given.
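    As background to the mix-game abstract above, the standard minority game it modifies can be sketched minimally as follows; all parameter values are illustrative and not those used in the paper.

```python
# Hedged sketch of a basic minority game (the base model the mix-game paper
# modifies). Parameter values are illustrative, not those of the paper.
import numpy as np

rng = np.random.default_rng(1)
N, M, S, T = 101, 3, 2, 2000    # agents, memory length, strategies/agent, steps
P = 2 ** M                      # number of distinct histories
# each strategy maps a history index to an action in {-1, +1}
strategies = rng.choice([-1, 1], size=(N, S, P))
scores = np.zeros((N, S))
history = 0
attendance = []

for _ in range(T):
    # each agent plays its currently best-scoring strategy
    best = scores.argmax(axis=1)
    actions = strategies[np.arange(N), best, history]
    A = actions.sum()
    attendance.append(A)
    minority = -np.sign(A) if A != 0 else rng.choice([-1, 1])
    # virtual payoff: reward every strategy that would have picked the minority side
    scores += (strategies[:, :, history] == minority)
    history = ((history << 1) | int(minority == 1)) % P

vol = np.var(attendance) / N    # volatility per agent, sigma^2 / N
print(f"volatility sigma^2/N = {vol:.2f}")
```

    The "local volatilities" studied in the paper are this same variance computed over sliding windows of the attendance series; the mix-game replaces the minority payoff with a majority payoff for a subset of agents.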

  10. High speed railway track dynamics models, algorithms and applications

    CERN Document Server

    Lei, Xiaoyan

    2017-01-01

    This book systematically summarizes the latest research findings on high-speed railway track dynamics, made by the author and his research team over the past decade. It explores cutting-edge issues concerning the basic theory of high-speed railways, covering the dynamic theories, models, algorithms and engineering applications of the high-speed train and track coupling system. Presenting original concepts, systematic theories and advanced algorithms, the book places great emphasis on the precision and completeness of its content. The chapters are interrelated yet largely self-contained, allowing readers to either read through the book as a whole or focus on specific topics. It also combines theories with practice to effectively introduce readers to the latest research findings and developments in high-speed railway track dynamics. It offers a valuable resource for researchers, postgraduates and engineers in the fields of civil engineering, transportation, highway & railway engineering.

  11. Low Dimensional Semiconductor Structures Characterization, Modeling and Applications

    CERN Document Server

    Horing, Norman

    2013-01-01

    Starting with the first transistor in 1949, the world has experienced a technological revolution which has permeated most aspects of modern life, particularly over the last generation. Yet another such revolution looms up before us with the newly developed capability to control matter on the nanometer scale. A truly extraordinary research effort, by scientists, engineers, technologists of all disciplines, in nations large and small throughout the world, is directed and vigorously pressed to develop a full understanding of the properties of matter at the nanoscale and its possible applications, to bring to fruition the promise of nanostructures to introduce a new generation of electronic and optical devices. The physics of low dimensional semiconductor structures, including heterostructures, superlattices, quantum wells, wires and dots is reviewed and their modeling is discussed in detail. The truly exceptional material, Graphene, is reviewed; its functionalization and Van der Waals interactions are included h...

  12. Directed Abelian algebras and their application to stochastic models.

    Science.gov (United States)

    Alcaraz, F C; Rittenberg, V

    2008-10-01

    With each directed acyclic graph (this includes some D-dimensional lattices) one can associate some Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z=D. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random walker universality class (critical exponent σ_τ = 3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we did extensive Monte Carlo simulations and found σ_τ = 1.780 ± 0.005.
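    The random-walker universality class mentioned above can be illustrated numerically: first-return times of a symmetric 1-D random walk follow a power law with exponent 3/2, the value quoted for one-dimensional avalanches. The sketch below demonstrates that classical result; it is an illustration, not a reimplementation of the paper's sandpile model.

```python
# Hedged illustration of the random-walker universality class: first-return
# times of a symmetric 1-D random walk are power-law distributed with
# exponent 3/2 (the sigma_tau quoted in the abstract for 1-D avalanches).
import numpy as np

rng = np.random.default_rng(2)
n_walks, max_steps = 20_000, 2_000
steps = rng.choice(np.array([-1, 1], dtype=np.int8), size=(n_walks, max_steps))
pos = steps.cumsum(axis=1, dtype=np.int16)   # walk positions; start implicit at 0
hit = pos == 0
returned = hit.any(axis=1)
times = hit.argmax(axis=1)[returned] + 1     # first step at which the walk is back at 0

# crude exponent estimate from a log-log fit to the empirical distribution
t_vals, counts = np.unique(times, return_counts=True)
mask = (t_vals >= 2) & (t_vals <= 200)
slope, _ = np.polyfit(np.log(t_vals[mask]), np.log(counts[mask]), 1)
print(f"estimated return-time exponent ~ {-slope:.2f} (theory: 3/2)")
```

    Walks that have not returned within the step cap are discarded; since the fit uses only t ≤ 200, this censoring does not bias the estimated exponent there.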

  13. Modelling and Designing Cryogenic Hydrogen Tanks for Future Aircraft Applications

    Directory of Open Access Journals (Sweden)

    Christopher Winnefeld

    2018-01-01

    Full Text Available In the near future, the challenges of reducing the economic and social dependency on fossil fuels must be faced increasingly. A sustainable and efficient energy supply based on renewable energies enables large-scale applications of electro-fuels in, e.g., the transport sector. Its high gravimetric energy density makes liquefied hydrogen a reasonable candidate for energy storage in a light-weight application such as aviation. Current aircraft structures are designed to accommodate jet fuel and gas turbines, allowing only limited retrofitting. New designs, such as the blended-wing-body, enable a more flexible integration of new storage technologies and energy converters, e.g., cryogenic hydrogen tanks and fuel cells. Against this background, a tank-design model is formulated which considers geometrical, mechanical and thermal aspects as well as specific mission profiles, assuming power supply by a fuel cell. This design approach enables the determination of the required tank mass and storage density. A new evaluation metric is defined that includes the hydrogen mass vented throughout the flight, enabling more transparent insight into mass shares. Subsequently, a systematic approach to tank partitioning reveals the associated compromises regarding tank weight. The analysis shows that cryogenic hydrogen tanks are highly competitive with kerosene tanks in terms of overall mass, which is further improved by the use of a fuel cell.
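    The storage-density and vented-mass bookkeeping described above can be sketched as a simple figure-of-merit calculation; all masses below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the kind of figure of merit a tank-design model evaluates:
# gravimetric storage density, plus an "effective" value that charges vented
# (boiled-off) hydrogen against the mission. All numbers are illustrative.
m_h2 = 1000.0      # kg of liquid hydrogen loaded (assumed)
m_tank = 350.0     # kg of tank structure and insulation (assumed)
m_vented = 40.0    # kg of hydrogen vented over the mission (assumed)

eta = m_h2 / (m_h2 + m_tank)                   # conventional gravimetric storage density
eta_eff = (m_h2 - m_vented) / (m_h2 + m_tank)  # counts only hydrogen usable by the fuel cell

print(f"gravimetric storage density: {eta:.3f}")
print(f"effective (vented mass charged): {eta_eff:.3f}")
```

    Partitioning the tank changes m_tank (more walls and insulation surface) and m_vented (different boil-off), which is the trade-off the abstract's systematic partitioning study explores.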

  14. Semantic Technologies for Nuclear Knowledge Modelling and Applications

    International Nuclear Information System (INIS)

    Beraha, D.; Gladyshev, M.

    2016-01-01

    Full text: The IAEA has been engaged in working with Member States to preserve and enhance nuclear knowledge, and in supporting the wide dissemination of safety-related technical and technological information enhancing nuclear safety. Knowledge organization systems (KOS) such as ontologies, taxonomies and thesauri provide one means to model and structure a given knowledge domain. The significance of KOS has been greatly enhanced by the evolution of semantic technologies, enabling machines to “understand” the concepts described in a KOS and to use them in a variety of applications. Over recent years, semantic technologies have emerged as an efficient means to improve access to information and knowledge. The Semantic Web standards play an important role in creating an infrastructure of interoperable data sources based on the principles of Linked Data. The status of utilizing semantic technologies in the nuclear domain is briefly reviewed, noting that such technologies are in an early stage of adoption, and considering some aspects which are specific to nuclear knowledge management. Several areas are described where semantic technologies are already deployed, and other areas are indicated where applications based on semantic technologies will have a strong impact on nuclear knowledge management in the near future. (author)

  15. Potential biodefense model applications for portable chlorine dioxide gas production.

    Science.gov (United States)

    Stubblefield, Jeannie M; Newsome, Anthony L

    2015-01-01

    Development of decontamination methods and strategies to address potential infectious disease outbreaks and bioterrorism events is pertinent to this nation's biodefense strategies and general biosecurity. Chlorine dioxide (ClO2) gas has a history of use as a decontamination agent in response to an act of bioterrorism. However, more widespread use of ClO2 gas to meet current and unforeseen decontamination needs has been hampered because the gas is too unstable for shipment and must be prepared at the application site. Newer technology allows for easy, onsite gas generation without the need for dedicated equipment, electricity, water, or personnel with advanced training. In a laboratory model system, 2 unique applications (personal protective equipment [PPE] and animal skin) were investigated in the context of potential development of decontamination protocols. Such protocols could serve to reduce human exposure to bacteria in a decontamination response effort. Chlorine dioxide gas was capable of reducing, by 2-7 logs of vegetative and spore-forming bacteria, and in some instances eliminating, culturable bacteria from difficult-to-clean areas on PPE facepieces. The gas was effective in eliminating naturally occurring bacteria on animal skin and also on skin inoculated with Bacillus spores. The culturable bacteria, including Bacillus spores, were eliminated in a time- and dose-dependent manner. The results of these studies suggested that portable, easily used ClO2 gas generation systems have excellent potential for protocol development to contribute to biodefense strategies and decontamination responses to infectious disease outbreaks or other biothreat events.

  16. Application of the evolution theory in modelling of innovation diffusion

    Directory of Open Access Journals (Sweden)

    Krstić Milan

    2016-01-01

    Full Text Available The theory of evolution has found numerous analogies and applications in scientific disciplines other than biology. In that sense, the so-called 'memetic evolution' has today been widely accepted. Memes represent a complex adaptive system, where one 'meme' represents an evolutionary cultural element, i.e. the smallest unit of information which can be identified and used in order to explain the evolution process. Among others, the field of innovation has proved to be a suitable area where the theory of evolution can also be successfully applied. In this work the authors have started from the assumption that it is also possible to apply the theory of evolution in modelling the process of innovation diffusion. Based on the conducted theoretical research, the authors conclude that the process of innovation diffusion, in the interpretation of a 'meme', is actually the process of imitation of the 'meme' of innovation. Since during the process of replication certain 'memes' show greater success than others, this eventually leads to their natural selection. For the survival of innovation 'memes', their manifestations are of key importance in the sense of their longevity, fruitfulness and faithful replication. The results of the conducted research have confirmed the assumption that the theory of evolution can be applied to innovation diffusion with the help of innovation 'memes', which opens up perspectives for new research on the subject.

  17. Parametric Model for Astrophysical Proton-Proton Interactions and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Niklas [KTH Royal Institute of Technology, Stockholm (Sweden)

    2007-01-01

    Observations of gamma rays have been made from celestial sources such as active galaxies, gamma-ray bursts and supernova remnants, as well as the Galactic ridge. The study of gamma rays can provide information about production mechanisms and cosmic-ray acceleration. In the high-energy regime, one of the dominant mechanisms for gamma-ray production is the decay of neutral pions produced in interactions of ultra-relativistic cosmic-ray nuclei and interstellar matter. Presented here is a parametric model for calculations of inclusive cross sections and transverse momentum distributions for secondary particles--gamma rays, e±, ν_e, ν̄_e, ν_μ and ν̄_μ--produced in proton-proton interactions. This parametric model is derived from the proton-proton interaction model proposed by Kamae et al.; it includes the diffraction dissociation process, Feynman-scaling violation and the logarithmically rising inelastic proton-proton cross section. To improve fidelity to experimental data at lower energies, two baryon resonance excitation processes were added: one representing the Δ(1232) and the other multiple resonances with masses around 1600 MeV/c2. The model predicts the power-law spectral index for all secondary particles to be about 0.05 lower in absolute value than that of the incident proton, and their inclusive cross sections to be larger than those predicted by previous models based on the Feynman-scaling hypothesis. The applications of the presented model in astrophysics are plentiful. It has been implemented into the Galprop code to calculate the contribution due to pion decays in the Galactic plane. The model has also been used to estimate the cosmic-ray flux in the Large Magellanic Cloud based on HI, CO and gamma-ray observations. The transverse momentum distributions enable calculations when the proton distribution is anisotropic. It is shown that the gamma-ray spectrum and flux due to a

  18. Numerical modeling for an electric-field hyperthermia applicator

    Science.gov (United States)

    Wu, Te-Kao; Chou, C. K.; Chan, K. W.; Mcdougall, J.

    1993-01-01

    Hyperthermia, in conjunction with radiation and chemotherapy for the treatment of cancers, is an area of current interest. Experiments have shown that hyperthermia can increase the potency of many chemotherapy drugs and the effectiveness of radiation for treating cancer. A combination of whole-body or regional hyperthermia with chemotherapy or radiation should improve treatment results. Conventional methods for inducing whole-body hyperthermia, such as placing a patient in a radiant cabinet or under a hot-water blanket, conduct heat very slowly from the skin to the body core. Thus a more efficient system, such as the three-plate electric-field hyperthermia applicator (EHA), was developed. This three-plate EHA has one top plate over and two lower plates beneath the patient. It is driven at 27.12 MHz with 500 Watts through a matching circuit. Using this applicator, a 50 kg pig was successfully heated to 42 °C within 45 minutes. However, phantom and animal studies have indicated non-uniform heating near the side of the body. In addition, changes in the size of and distance between the electrode plates can affect the heating (or electromagnetic field) pattern. Therefore, numerical models using the method of moments (MOM) or the finite-difference time-domain (FDTD) technique are developed to optimize the heating pattern of this EHA before it is used for human trials. The accuracy of the numerical modeling has been established by the good agreement between the MOM and FDTD results for the three-plate EHA without a biological body. The versatile FDTD technique is then applied to optimize the EHA design with a human body. Both the numerical and measured data in phantom blocks will be presented. The results of this study will be used to design an optimized system for whole-body or regional hyperthermia.
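    As an illustration of the FDTD technique mentioned above, the sketch below steps a one-dimensional vacuum Yee scheme with a soft Gaussian source; it is a toy example, far removed from the 3-D body-and-applicator model of the paper.

```python
# Hedged sketch of the FDTD idea: a 1-D vacuum Yee update for Ez/Hy in
# normalized units, with a soft Gaussian source at the center. Purely
# illustrative; the paper's model is 3-D and includes a biological body.
import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)        # E-field samples on integer grid points
hy = np.zeros(nz - 1)    # H-field samples on half grid points
courant = 0.5            # normalized time step; 1-D stability requires <= 1

for n in range(nt):
    hy += courant * np.diff(ez)                    # update H from the curl of E
    ez[1:-1] += courant * np.diff(hy)              # update E from the curl of H
    ez[nz // 2] += np.exp(-((n - 60) / 15) ** 2)   # soft Gaussian source injection

print(f"max |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```

    The fixed ez[0] and ez[-1] act as perfect-conductor boundaries; real applicator models add material parameters per cell and absorbing boundaries instead.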

  19. X-ray ablation measurements and modeling for ICF applications

    International Nuclear Information System (INIS)

    Anderson, A.T.

    1996-09-01

    X-ray ablation of material from the first wall and other components of an ICF (Inertial Confinement Fusion) chamber is a major threat to the laser final optics. Material condensing on these optics after a shot may cause damage with subsequent laser shots. To ensure the successful operation of the ICF facility, removal rates must be predicted accurately. The goal of this dissertation is to develop an experimentally validated x-ray response model, with particular application to the National Ignition Facility (NIF). Accurate knowledge of the x-ray and debris emissions from ICF targets is a critical first step in the process of predicting the performance of the target chamber system. A number of 1-D numerical simulations of NIF targets have been run to characterize target output in terms of energy, angular distribution, spectrum, and pulse shape. Scaling of output characteristics with variations of both target yield and hohlraum wall thickness is also described. Experiments have been conducted at the Nova laser on the effects of relevant x-ray fluences on various materials. The response was diagnosed using post-shot examinations of the surfaces with scanning electron microscope and atomic force microscope instruments. Judgments were made about the dominant removal mechanism for each material, and removal depths were measured to provide data for the modeling. The finite-difference ablation code developed here (ABLATOR) combines the thermomechanical response of materials to x-rays with models of various removal mechanisms. The former aspect refers to energy deposition in such small characteristic depths (∼ micron) that thermal conduction and hydrodynamic motion are significant effects on the nanosecond time scale. The material removal models use the resulting time histories of temperature and pressure profiles, along with ancillary local conditions, to predict rates of surface vaporization and the onset of conditions that would lead to spallation
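The micron-scale deposition depths described above can be sketched with a simple Beer-Lambert profile: incident fluence attenuated exponentially over a characteristic length gives the volumetric energy density that an ablation code like ABLATOR must resolve. The fluence and attenuation length below are illustrative assumptions, not NIF or Nova values.

```python
import numpy as np

fluence = 1.0        # J/cm^2, hypothetical incident x-ray fluence
atten_len = 1e-4     # cm, ~1 micron characteristic deposition depth

def deposited_energy_density(z):
    """Beer-Lambert deposition: (F / lambda) * exp(-z / lambda), in J/cm^3."""
    return (fluence / atten_len) * np.exp(-z / atten_len)

depths = np.linspace(0.0, 5e-4, 6)        # 0 to 5 microns, in cm
profile = deposited_energy_density(depths)
```

Because the energy lands within a micron or so of the surface, the local temperature rise is large and fast, which is why conduction and hydrodynamic motion matter even on nanosecond time scales.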

  20. Animal models of osteogenesis imperfecta: applications in clinical research

    Directory of Open Access Journals (Sweden)

    Enderli TA

    2016-09-01

    Full Text Available Tanya A Enderli, Stephanie R Burtch, Jara N Templet, Alessandra Carriero Department of Biomedical Engineering, Florida Institute of Technology, Melbourne, FL, USA Abstract: Osteogenesis imperfecta (OI), commonly known as brittle bone disease, is a genetic disease characterized by extreme bone fragility and consequent skeletal deformities. This connective tissue disorder is caused by mutations affecting the quality and quantity of collagen, which in turn compromise the overall mechanical integrity of the bone, increasing its vulnerability to fracture. Animal models of the disease have played a critical role in the understanding of the pathology and causes of OI and in the investigation of a broad range of clinical therapies for the disease. Currently, at least 20 animal models have been officially recognized to represent the phenotype and biochemistry of the 17 different types of OI in humans. These include mice, dogs, and fish. Here, we describe each of the animal models and the type of OI they represent, and present their application in clinical research for treatments of OI, such as drug therapies (ie, bisphosphonates and sclerostin) and mechanical therapies (ie, vibrational loading). In the future, different dosages and lengths of treatment need to be further investigated on different animal models of OI using potentially promising treatments, such as cellular and chaperone therapies. A combination of therapies may also offer a viable treatment regime to improve bone quality and reduce fragility in animals before being introduced into clinical trials for OI patients. Keywords: OI, brittle bone, clinical research, mouse, dog, zebrafish