WorldWideScience

Sample records for metrazol models comparing

  1. Comparing the Discrete and Continuous Logistic Models

    Science.gov (United States)

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
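
    To make the contrast concrete, the following minimal Python sketch (not the article's spreadsheet) iterates the discrete logistic difference equation and evaluates the closed-form solution of the continuous logistic ODE at the same times; the values of r, K and p0 are illustrative assumptions.

      # Discrete vs. continuous logistic growth; parameters are illustrative.
      import math

      r, K, p0, steps = 0.8, 100.0, 5.0, 20

      # Discrete model: P[n+1] = P[n] + r * P[n] * (1 - P[n] / K)
      discrete = [p0]
      for _ in range(steps):
          p = discrete[-1]
          discrete.append(p + r * p * (1 - p / K))

      # Continuous model: P(t) = K / (1 + ((K - p0) / p0) * exp(-r * t))
      for t in range(steps + 1):
          cont = K / (1 + ((K - p0) / p0) * math.exp(-r * t))
          print(f"t={t:2d}  discrete={discrete[t]:7.2f}  continuous={cont:7.2f}")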

  2. A Comparative Study of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    Many business process modelling techniques are now in use. This article investigates the differences between them: for each technique, the definition and structure are explained. The paper presents a comparative analysis of several popular business process modelling techniques, using a framework based on two criteria: notation, and how the technique works when applied to Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends the business process modelling techniques that are easiest to use and serves as a basis for evaluating further modelling techniques.

  3. COMPARATIVE ANALYSIS OF SOFTWARE DEVELOPMENT MODELS

    OpenAIRE

    Sandeep Kaur

    2017-01-01

    No developer is unfamiliar with the concept of the software development life cycle (SDLC). This research deals with the main SDLC models: waterfall, spiral, iterative, agile, V-shaped, and prototype. In the modern era, no software system can be guaranteed free of faults, so this paper compares all aspects of the various models, including their pros and cons, to make it easier to choose a suitable model when the need arises.

  4. Is it Worth Comparing Different Bankruptcy Models?

    Directory of Open Access Journals (Sweden)

    Miroslava Dolejšová

    2015-01-01

    The aim of this paper is to compare the performance of small enterprises in the Zlín and Olomouc Regions. These enterprises were assessed using the Altman Z-Score model, the IN05 model, the Zmijewski model and the Springate model. The sample comprised 16 enterprises from the Zlín Region and 16 enterprises from the Olomouc Region, with financial statements from 2006 and 2010 subjected to the analysis. The statistical data analysis was performed using the one-sample z-test for proportions and the paired t-test. The evaluations produced by the Altman Z-Score model, the IN05 model and the Springate model revealed the enterprises to be financially sound, but the Zmijewski model identified them as insolvent. The one-sample z-test for proportions confirmed that at least 80% of these enterprises are in sound financial condition. A comparison of all models emphasized the substantial difference produced by the Zmijewski model. The paired t-test showed that the financial performance of small enterprises had remained the same during the years involved. It is recommended that small enterprises assess their financial performance using two different bankruptcy models; they may wish to combine the Zmijewski model with any other bankruptcy model (the Altman Z-Score model, the IN05 model or the Springate model) to ensure a proper method of analysis.
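
    For readers unfamiliar with the first of these models, the sketch below computes the classic Altman (1968) Z-Score for public manufacturing firms; the paper may well have applied a different variant, and the input figures are invented.

      # Hedged sketch of the classic Altman (1968) Z-Score; illustrative inputs.
      def altman_z(working_capital, retained_earnings, ebit,
                   market_equity, total_liabilities, sales, total_assets):
          x1 = working_capital / total_assets
          x2 = retained_earnings / total_assets
          x3 = ebit / total_assets
          x4 = market_equity / total_liabilities
          x5 = sales / total_assets
          return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

      z = altman_z(50, 120, 70, 400, 250, 900, 1000)  # made-up balance sheet
      zone = "safe" if z > 2.99 else "grey" if z > 1.81 else "distress"
      print(f"Z = {z:.2f} ({zone} zone)")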

  5. Wellness Model of Supervision: A Comparative Analysis

    Science.gov (United States)

    Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.

    2012-01-01

    This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…

  6. Comparing flood loss models of different complexity

    Science.gov (United States)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for developing flood loss models. Despite that, flood loss models remain an important source of uncertainty, and their temporal and spatial transferability is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood of 2002. Model predictions are validated against different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from post-flood surveys. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, and novel model approaches derived using the data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.
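
    As an illustration of the simplest model class compared here, a stage-damage model reduces to looking up a damage ratio against water depth; the curve below is a made-up placeholder, not the study's calibrated model.

      # Illustrative stage-damage model: piecewise-linear damage ratio vs. depth.
      import numpy as np

      depths_m = np.array([0.0, 0.5, 1.0, 2.0, 3.0])       # water depth (m), toy values
      damage_ratio = np.array([0.0, 0.15, 0.3, 0.6, 0.8])  # fraction of value lost

      def stage_damage(depth, asset_value):
          return np.interp(depth, depths_m, damage_ratio) * asset_value

      print(stage_damage(1.5, 200_000))  # expected loss for 1.5 m of flooding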

  7. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem of comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  8. Comparative study of void fraction models

    International Nuclear Information System (INIS)

    Borges, R.C.; Freitas, R.L.

    1985-01-01

    Some models for the calculation of void fraction in water in sub-cooled boiling and saturated vertical upward flow with forced convection have been selected and compared with experimental results in the pressure range of 1 to 150 bar. In order to know the axial void fraction distribution it is necessary to determine the net generation of vapour and the fluid temperature distribution in the slightly sub-cooled boiling region. It was verified that the net generation of vapour was well represented by the Saha-Zuber model. The selected models for the void fraction calculation give adequate results but tend to overestimate the experimental results, in particular the homogeneous models. The drift flux model is recommended, followed by the Armand and Smith models. (F.E.) [pt]

  9. Comparing coefficients of nested nonlinear probability models

    DEFF Research Database (Denmark)

    Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders

    2011-01-01

    In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability model.
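
    The decomposition idea can be sketched outside Stata. The Python below is a hedged re-implementation of the KHB logic (residualize the mediator on X, then compare X-coefficients across the nested logits so both share the same scale), not the khb program itself; the data are simulated.

      # Hedged sketch of the KHB-style decomposition on simulated data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 5000
      x = rng.normal(size=n)
      z = 0.5 * x + rng.normal(size=n)                     # mediator correlated with x
      y = (0.8 * x + 0.6 * z + rng.logistic(size=n) > 0).astype(int)

      full = sm.Logit(y, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)
      z_resid = z - sm.OLS(z, sm.add_constant(x)).fit().fittedvalues
      reduced = sm.Logit(y, sm.add_constant(np.column_stack([x, z_resid]))).fit(disp=0)

      # Difference of x-coefficients estimates the part of the effect mediated by z.
      print("total:", reduced.params[1], "direct:", full.params[1],
            "mediated:", reduced.params[1] - full.params[1])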

  10. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. In order to detect such potential, a company can utilize a bankruptcy prediction model, which can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, this paper presents a comparative study of several machine learning methods for bankruptcy prediction. The comparison of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression) shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
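
    A minimal sketch of the winning method, in the spirit of Keller-style fuzzy k-NN with inverse-distance memberships; the data, labels and parameters are toy values, not the paper's bankruptcy records.

      # Fuzzy k-NN sketch: class membership degrees from inverse-distance weights.
      import numpy as np

      def fuzzy_knn(X_train, y_train, x, k=3, m=2.0):
          d = np.linalg.norm(X_train - x, axis=1)
          nn = np.argsort(d)[:k]
          w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
          return {c: w[y_train[nn] == c].sum() / w.sum()   # membership in [0, 1]
                  for c in np.unique(y_train)}

      X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
      y = np.array([0, 0, 1, 1])                 # 0 = healthy, 1 = bankrupt (toy)
      print(fuzzy_knn(X, y, np.array([1.2, 2.1])))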

  11. Comparing numerically exact and modelled static friction

    Directory of Open Access Journals (Sweden)

    Krengel Dominik

    2017-01-01

    Currently there exists no mechanically consistent "numerically exact" implementation of static and dynamic Coulomb friction for general soft-particle simulations with arbitrary contact situations in two or three dimensions, but only along one dimension. We outline a differential-algebraic equation approach for a "numerically exact" computation of friction in two dimensions and compare its application to the Cundall-Strack model in some test cases.

  12. Comparative analysis of Goodwin's business cycle models

    Science.gov (United States)

    Antonova, A. O.; Reznik, S.; Todorov, M. D.

    2016-10-01

    We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) were proposed by N. Dharmaraj and K. Vela Velupillai [6] for investigating the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not the sawtooth modes. If the order of the Taylor series expansion exceeds 5, the approximate ODE becomes unstable independently of the time lag θ.

  13. Comparing Realistic Subthalamic Nucleus Neuron Models

    Science.gov (United States)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

    The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points to the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of firing patterns: silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above a threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with those of the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that the Mahalanobis distance, the Victor-Purpura metric, and the interspike interval distribution are sensitive to different firing regimes, whereas mutual information appears not to discriminate between these functional changes.
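
    Of the four measures, the Victor-Purpura metric is the most algorithmic. Below is a hedged sketch of its standard dynamic-programming formulation (the cost parameter q, in units of 1/s, is an assumption), not the authors' code.

      # Victor-Purpura spike-train distance via edit operations (insert, delete, shift).
      def victor_purpura(t1, t2, q=1.0):
          n, m = len(t1), len(t2)
          G = [[0.0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              G[i][0] = i
          for j in range(1, m + 1):
              G[0][j] = j
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  G[i][j] = min(G[i - 1][j] + 1,             # delete a spike
                                G[i][j - 1] + 1,             # insert a spike
                                G[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]))  # shift
          return G[n][m]

      print(victor_purpura([0.1, 0.5, 0.9], [0.12, 0.48, 1.3], q=2.0))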

  14. Comparing holographic dark energy models with statefinder

    International Nuclear Information System (INIS)

    Cui, Jing-Lei; Zhang, Jing-Fei

    2014-01-01

    We apply the statefinder diagnostic to the holographic dark energy models, including the original holographic dark energy (HDE) model, the new holographic dark energy model, the new agegraphic dark energy (NADE) model, and the Ricci dark energy model. In the low-redshift region the holographic dark energy models are degenerate with each other and with the ΛCDM model in the H(z) and q(z) evolutions. In particular, the HDE model is highly degenerate with the ΛCDM model, and in the HDE model the cases with different parameter values are also in strong degeneracy. Since the observational data are mainly within the low-redshift region, it is very important to break this low-redshift degeneracy in the H(z) and q(z) diagnostics by using some quantities with higher-order derivatives of the scale factor. It is shown that the statefinder diagnostic r(z) is very useful in breaking the low-redshift degeneracies. By employing the statefinder diagnostic the holographic dark energy models can be differentiated efficiently in the low-redshift region. The degeneracy between the holographic dark energy models and the ΛCDM model can also be broken by this method. Especially for the HDE model, all the previous strong degeneracies appearing in the H(z) and q(z) diagnostics are broken effectively. But for the NADE model, the degeneracy between the cases with different parameter values cannot be broken, even though the statefinder diagnostic is used. A direct comparison of the holographic dark energy models in the r-s plane is also made, in which the separations between the models (including the ΛCDM model) can be directly measured in light of the current values {r_0, s_0} of the models. (orig.)
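
    For reference, the statefinder pair is built from the third derivative of the scale factor; a standard statement of the definitions (assuming the usual Sahni et al. conventions, amsmath loaded for \dddot) is:

      % q is the deceleration parameter; for flat LambdaCDM the pair
      % stays fixed at {r, s} = {1, 0}.
      \[
        q \equiv -\frac{\ddot{a}}{aH^{2}}, \qquad
        r \equiv \frac{\dddot{a}}{aH^{3}}, \qquad
        s \equiv \frac{r - 1}{3\left(q - \frac{1}{2}\right)}
      \]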

  15. Comparative Analysis of Investment Decision Models

    Directory of Open Access Journals (Sweden)

    Ieva Kekytė

    2017-06-01

    The rapid development of financial markets has created new challenges for both investors and investment issues, increasing the demand for innovative, modern investment and portfolio management decisions adequate to market conditions. Financial markets receive special attention, with new models being created that include financial risk management and investment decision support systems. Researchers recognize the need to deal with financial problems using models consistent with reality and based on sophisticated quantitative analysis techniques; thus, the role of mathematical modeling in finance becomes important. This article deals with various investment decision-making models, which include forecasting, optimization, stochastic processes, artificial intelligence, etc., and which have become useful tools for investment decisions.

  16. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2012-03-01

    [Flattened comparison table of published cyber-attack models; column headings not recoverable. Rows include: Damballa, 2008 (crime; case studies; lone actor; no; no); Owens et al., 2009 (warfare; literature; group; yes; yes); Croom, 2010 (crime/APT; case studies; group; no; no); Dreijer, 2011 (warfare; previous models and case studies; group; yes; no); Van...] ...be needed by a geographically or functionally distributed group of attackers. While some of the models describe the installation of a backdoor or an advanced persistent threat (APT), none of them describe the behaviour involved in returning to a...

  17. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    ...would be needed by a Cyber Security Operations Centre in order to perform offensive cyber operations?". The analysis was performed using seven models of cyber-attack as a springboard, and resulted in the development of what is described as a canonical...

  18. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    In this paper we present a comparison of the distributions used in hazard analysis. Simulation is used to study the behavior of the hazard distribution models. The fundamentals of hazard analysis are discussed using failure criteria, and we present the flexibility with which the hazard modeling distribution approximates different distributions.
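
    A hazard comparison of this kind reduces to evaluating h(t) = f(t)/S(t) for candidate lifetime distributions; the sketch below does this for two common choices, with illustrative shape parameters rather than those simulated in the paper.

      # Hazard functions h(t) = pdf / survival for two common distributions.
      import numpy as np
      from scipy import stats

      t = np.linspace(0.1, 5.0, 5)

      def hazard(dist, t):
          return dist.pdf(t) / dist.sf(t)   # sf = survival function, 1 - CDF

      print("exponential:", hazard(stats.expon(scale=1.0), t))    # constant hazard
      print("weibull k=2:", hazard(stats.weibull_min(c=2.0), t))  # increasing hazard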

  19. Comparative Bicameral Models. Romania: Unicameralism versus Bicameralism

    Directory of Open Access Journals (Sweden)

    Cynthia Carmen CURT

    2007-06-01

    The paper attempts to evaluate the Romanian bicameral model and to identify and critically assess the options the country has in choosing between a unicameral and a bicameral system. The analysis examines the characteristics of Second Chambers related to Romanian bicameralism, either through their influence on the configuration of the Romanian bicameral legislature or through the constitutional mechanisms they devised that can be used to preserve an efficient bicameral formula. The alternative of giving up the bicameral formula, on arguments related to the simplification and efficiency of the legislative procedure, is also explored.

  20. A Model for Comparing Free Cloud Platforms

    Directory of Open Access Journals (Sweden)

    Radu LIXANDROIU

    2014-01-01

    VMware, VirtualBox, Virtual PC and other popular desktop virtualization applications are used by only a fraction of IT users. This article attempts to build a comparison model for choosing the best cloud platform. Many virtualization applications, such as VMware (VMware Player), Oracle VirtualBox and Microsoft Virtual PC, are free for home users. The main goal of virtualization software is to allow users to run multiple operating systems simultaneously in one virtual environment, using one desktop computer.

  1. COMPARING OF DEPOSIT MODEL AND LIFE INSURANCE MODEL IN MACEDONIA

    Directory of Open Access Journals (Sweden)

    TATJANA ATANASOVA-PACHEMSKA

    2016-02-01

    In conditions of continuously declining interest rates on bank deposits, and at a time when uncertainty about the future is increasing, natural and legal persons wonder how to secure their future, and how and where to invest their funds so as to grow their savings. Individuals usually choose either to deposit their savings in a bank for a certain period, receiving interest for that period, or to invest their savings in different types of life insurance, thereby providing for their life, their future and the future of their families. Many mathematical models have been developed for compounding and for insurance. This paper compares the deposit model and the life insurance model.
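
    The deposit side of such a comparison rests on compound interest; a minimal sketch with invented figures:

      # Terminal value of a deposit P at annual rate r, compounded n times
      # per year for t years: A = P * (1 + r/n)**(n*t). Figures are illustrative.
      def compound(P, r, n, t):
          return P * (1 + r / n) ** (n * t)

      print(compound(10_000, 0.03, 12, 10))  # monthly compounding over 10 years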

  2. A Comparative Study Of Stock Price Forecasting Using Nonlinear Models

    Directory of Open Access Journals (Sweden)

    Diteboho Xaba

    2017-03-01

    This study compared the in-sample forecasting accuracy of three nonlinear forecasting models: the Smooth Transition Regression (STR) model, the Threshold Autoregressive (TAR) model and the Markov-switching Autoregressive (MS-AR) model. Nonlinearity tests were used to confirm the validity of the assumptions of the study. The study used the model selection criterion SBC to select the optimal lag order and the appropriate models. The Mean Square Error (MSE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) served as the error measures in evaluating the forecasting ability of the models. The MS-AR models proved to perform well, with lower error measures than the LSTR and TAR models in most cases.
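
    The three error measures are one-liners; the sketch below computes them on placeholder forecasts rather than the study's stock-price series.

      # MSE, MAE and RMSE on toy actual/forecast arrays.
      import numpy as np

      actual = np.array([10.2, 10.5, 10.1, 10.8])
      forecast = np.array([10.0, 10.6, 10.3, 10.5])

      err = actual - forecast
      mse = np.mean(err ** 2)
      mae = np.mean(np.abs(err))
      rmse = np.sqrt(mse)
      print(f"MSE={mse:.4f}  MAE={mae:.4f}  RMSE={rmse:.4f}")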

  3. A Comprehensive Method for Comparing Mental Models of Dynamic Systems

    OpenAIRE

    Schaffernicht, Martin; Grösser, Stefan N.

    2011-01-01

    Mental models are the basis on which managers make decisions even though external decision support systems may provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to be at the foundation for improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure ...

  4. Comparative analysis of some existing kinetic models with proposed...

    African Journals Online (AJOL)

    IGNATIUS NWIDI

    two statistical parameters, namely the linear regression coefficient of correlation (R²) and ... Keywords: Heavy metals, Biosorption, Kinetics Models, Comparative analysis, Average Relative Error. 1. ... If the flow rate is low, a simple manual batch...

  5. Comparing Structural Brain Connectivity by the Infinite Relational Model

    DEFF Research Database (Denmark)

    Ambrosen, Karen Marie Sandø; Herlau, Tue; Dyrby, Tim

    2013-01-01

    The growing focus in neuroimaging on analyzing brain connectivity calls for powerful and reliable statistical modeling tools. We examine the Infinite Relational Model (IRM) as a tool to identify and compare structure in brain connectivity graphs by contrasting its performance on graphs from...

  6. Multi-criteria comparative evaluation of spallation reaction models

    Science.gov (United States)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE) and the results of such a comparison for 17 spallation reaction models in the presence of the interaction of high-energy protons with natPb.

  7. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  8. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon a set of incompletely formulated assumptions or value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of the probable diminishing returns of large generic comparisons. [fr]

  9. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
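
    The paper's rescaling can be sketched in a few lines; the relationship matrix and variance below are toy values, and D_k follows the definition quoted above (average self-relationship minus average overall relationship).

      # Rescale an estimated genetic variance to a reference population via D_k.
      import numpy as np

      K = np.array([[1.00, 0.25, 0.10],
                    [0.25, 1.00, 0.05],
                    [0.10, 0.05, 1.00]])   # relationship matrix of the reference set

      d_k = np.diag(K).mean() - K.mean()
      sigma2_hat = 40.0                    # variance component from the mixed model (toy)
      sigma2_ref = sigma2_hat * d_k        # variance referred to the reference population
      print(f"D_k = {d_k:.3f}, referred variance = {sigma2_ref:.1f}")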

  10. Comparing live and remote models in eating conformity research.

    Science.gov (United States)

    Feeney, Justin R; Polivy, Janet; Pliner, Patricia; Sullivan, Margot D

    2011-01-01

    Research demonstrates that people conform to how much other people eat. This conformity occurs in the presence of other people (live model) and when people view information about how much food prior participants ate (remote models). The assumption in the literature has been that remote models produce a similar effect to live models, but this has never been tested. To investigate this issue, we randomly paired participants with a live or remote model and compared their eating to those who ate alone. We found that participants exposed to both types of model differed significantly from those in the control group, but there was no significant difference between the two modeling procedures. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  11. Comparative calculations and validation studies with atmospheric dispersion models

    International Nuclear Information System (INIS)

    Paesler-Sauer, J.

    1986-11-01

    This report presents the results of an intercomparison of different mesoscale dispersion models and measured data from tracer experiments. The model types taking part in the intercomparison are Gaussian-type, numerical Eulerian, and Lagrangian dispersion models, all suited to calculating the atmospheric transport of radionuclides released from a nuclear installation. For the model intercomparison, artificial meteorological situations were defined and corresponding computational problems were formulated. For the purpose of model validation, real dispersion situations from tracer experiments were used as input data for model calculations; in these cases calculated and measured time-integrated concentrations close to the ground are compared. Finally, an evaluation of the models' efficiency in solving the problems is carried out with the aid of objective methods. (orig./HP) [de]

  12. A comparative review of radiation-induced cancer risk models

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Hee; Kim, Ju Youl [FNC Technology Co., Ltd., Yongin (Korea, Republic of); Han, Seok Jung [Risk and Environmental Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-06-15

    With the need for a domestic level 3 probabilistic safety assessment (PSA), it is essential to develop a Korea-specific code. Health effect assessments study radiation-induced impacts; in particular, long-term health effects are evaluated in terms of cancer risk. The objective of this study was to analyze the latest cancer risk models developed by foreign organizations and to compare the methodology of how they were developed. This paper also provides suggestions regarding the development of Korean cancer risk models. A review of cancer risk models was carried out targeting the latest models: the NUREG model (1993), the BEIR VII model (2006), the UNSCEAR model (2006), the ICRP 103 model (2007), and the U.S. EPA model (2011). The methodology of how each model was developed is explained, and the cancer sites, dose and dose rate effectiveness factor (DDREF) and mathematical models are also described in the sections presenting differences among the models. The NUREG model was developed by assuming that the risk was proportional to the risk coefficient and dose, while the BEIR VII, UNSCEAR, ICRP, and U.S. EPA models were derived from epidemiological data, principally from Japanese atomic bomb survivors. The risk coefficient does not consider individual characteristics, as the values were calculated in terms of population-averaged cancer risk per unit dose. However, the models derived by epidemiological data are a function of sex, exposure age, and attained age of the exposed individual. Moreover, the methodologies can be used to apply the latest epidemiological data. Therefore, methodologies using epidemiological data should be considered first for developing a Korean cancer risk model, and the cancer sites and DDREF should also be determined based on Korea-specific studies. This review can be used as a basis for developing a Korean cancer risk model in the future.

  13. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    Science.gov (United States)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

    Numerous numerical models have been developed to predict the long-range transport of hazardous air pollution following accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast; these instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model manages to predict the measured puff evolution, in shape and time of arrival, to a fairly high extent up to 60 h after the start of the release. The modeled puff is still too narrow in the advection direction.

  14. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10-min time step is often necessary. Such information is not always available, so rainfall disaggregation models have to be applied to produce that short-time-resolution information from coarser rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that disaggregate rainy hours into six 10-min rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10-min rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated using different graphical and numerical criteria. The performance of simple models presented in the literature or developed in the Hydram laboratory, as well as that of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. In increasing order of complexity, the compared models are: the constant model, the linear model (Ben Haha, 2001), the Ormsbee deterministic model (Ormsbee, 1989), an artificial neural network based model (Burian et al., 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), a multiplicative cascade based model (Olsson and Berndtsson, 1998), and the Ormsbee stochastic model (Ormsbee, 1989). The 625 rainy hours used for the evaluation (each with an hourly rainfall amount greater than 5 mm) were extracted from the 21-year chronological rainfall series (10-min time step) observed at the Pully meteorological station, Switzerland. The models were also evaluated when applied to different rainfall classes depending on the season first and on the
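
    The baseline against which all other models are judged, the constant disaggregation model, is trivial to state; a sketch with a placeholder hourly value:

      # Constant disaggregation: split a rainy hour into six equal 10-min amounts.
      def constant_disaggregation(hourly_mm):
          return [hourly_mm / 6.0] * 6

      print(constant_disaggregation(12.0))  # -> six 10-min amounts of 2 mm each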

  15. Comparing the line broadened quasilinear model to Vlasov code

    International Nuclear Information System (INIS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-01-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both as regards a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  16. Comparing the line broadened quasilinear model to Vlasov code

    Energy Technology Data Exchange (ETDEWEB)

    Ghantous, K. [Laboratoire de Physique des Plasmas, Ecole Polytechnique, 91128 Palaiseau Cedex (France); Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States); Berk, H. L. [Institute for Fusion Studies, University of Texas, 2100 San Jacinto Blvd, Austin, Texas 78712-1047 (United States); Gorelenkov, N. N. [Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States)

    2014-03-15

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both as regards a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  17. Comparing the line broadened quasilinear model to Vlasov code

    Science.gov (United States)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both as regards a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  18. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with problems of the takeover of employees in outsourcing. Its main purpose is to compare the staffing models of outsourcing in selected companies; for the comparison I chose multi-criteria analysis. The thesis is divided into six chapters. The first chapter is devoted to the theoretical part and describes basic concepts such as outsourcing, personnel aspects, phases of outsourcing projects, communications and culture. The rest of the thesis is devote...

  19. Comparative Assessment of Nonlocal Continuum Solvent Models Exhibiting Overscreening

    Directory of Open Access Journals (Sweden)

    Ren Baihua

    2017-01-01

    Nonlocal continua have been proposed to offer a more realistic model for the electrostatic response of solutions such as the electrolyte solvents prominent in biology and electrochemistry. In this work, we review three nonlocal models based on the Landau-Ginzburg framework which have been proposed but not directly compared previously, due to different expressions of the nonlocal constitutive relationship. To understand the relationships between these models and the underlying physical insights from which they derive, we situate them in a single, unified Landau-Ginzburg framework. One of the models offers the capacity to interpret how temperature changes affect dielectric response, and we note that the variations with temperature are qualitatively reasonable even though predictions at ambient temperatures are not quantitatively in agreement with experiment. Two of the models correctly reproduce overscreening (oscillations between positive and negative polarization charge densities), and we observe small differences between them when we simulate the potential between parallel plates held at constant potential. These computations require reformulating the two models as coupled systems of local partial differential equations (PDEs), and we use spectral methods to discretize both problems. We propose further assessments to discriminate between the models, particularly as regards establishing boundary conditions and comparing to explicit-solvent molecular dynamics simulations.

  20. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is used to predict the 3-dimensional conformation of a given protein (the target) based on its sequence alignment to an experimentally determined protein structure (the template). The use of this technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the sequence identity between the target protein and the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, model accuracy was measured by the root mean square deviation of Cα atoms between the target and template structures. Surprisingly, the results show that the sequence identity of the target protein to the template is not a good predictor of the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower target-template sequence identity led to a more accurate 3-D structure model. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646
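
    The accuracy measure used throughout, RMSD over Cα atoms, can be sketched as follows; the coordinates are invented and assumed to be already superposed and paired.

      # Root mean square deviation over paired Calpha coordinates.
      import numpy as np

      def rmsd(coords_a, coords_b):
          diff = coords_a - coords_b
          return np.sqrt((diff ** 2).sum(axis=1).mean())

      a = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.2, 0.1]])
      b = np.array([[0.1, 0.0, 0.0], [1.4, 0.1, 0.0], [3.1, 0.0, 0.2]])
      print(f"RMSD = {rmsd(a, b):.3f} Å")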

  1. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Directory of Open Access Journals (Sweden)

    Villemereuil Pierre de

    2012-06-01

    Background: Uncertainty in comparative analyses can come from at least two sources: (a) phylogenetic uncertainty in the tree topology or branch lengths, and (b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods: We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results: We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions: Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible...

  2. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Science.gov (United States)

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for

  3. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and related urban objects such as buildings, trees, vegetation and man-made features. The demand for 3D city modelling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modelling, procedural-grammar-based modelling, close-range-photogrammetry-based modelling, and approaches based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches respectively, each with different methods suitable for image-based 3D city modelling. The literature shows that, to date, no complete comparative study of this kind for creating full 3D city models from images is available. This paper gives a comparative assessment of these four image-based 3D modelling approaches, based mainly on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains the various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and offers comments on what can and cannot be done with each software package. The study concludes that every package has advantages and limitations, and that the choice of software depends on the user's requirements for the 3D project. For a normal visualization project, SketchUp is a good option; for 3D documentation records, Photomodeler gives good

  4. Comparing several boson mappings with the shell model

    International Nuclear Information System (INIS)

    Menezes, D.P.; Yoshinaga, Naotaka; Bonatsos, D.

    1990-01-01

    Boson mappings are an essential step in establishing a connection between the successful phenomenological interacting boson model and the shell model. The boson mapping developed by Bonatsos, Klein and Li is applied to a single j-shell, and the resulting energy levels and E2 transitions are shown for a pairing plus quadrupole-quadrupole Hamiltonian. The results are compared to the exact shell model calculation, as well as to those obtained through the Otsuka-Arima-Iachello mapping and the Zirnbauer-Brink mapping. In all cases good results are obtained for the spherical and near-vibrational cases.

  5. Towards consensus in comparative chemical characterization modeling for LCIA

    DEFF Research Database (Denmark)

    Hauschild, Michael Zwicky; Bachmann, Till; Huijbregts, Mark

    2006-01-01

    ...work within, for instance, the OECD, and guidance from a series of expert workshops held between 2002 and 2005, preliminary guidelines focusing on chemical fate and on human and ecotoxic effects were established. For further elaboration of the fate, exposure and effect sides of the modelling, six models ... by the Task Force and the model providers. While the compared models and their differences are important tools to further advance LCA science, the consensus model is intended to provide a generally agreed and scientifically sound method to calculate consistent characterization factors for use in LCA practice ... and to be the basis of the "recommended practice" for calculation of characterization factors for chemicals under the authority of the UNEP/SETAC Life Cycle Initiative.

  6. A framework for testing and comparing binaural models.

    Science.gov (United States)

    Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M

    2018-03-01

    Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: The experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.
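
    The interface idea can be sketched abstractly; the Python below is a hedged illustration of the three-component structure described (experiment software, auditory pathway model, artificial observer), with all class and method names being assumptions rather than the framework's actual API.

      # Hedged sketch of the three-component interface; names are invented.
      from abc import ABC, abstractmethod

      class AuditoryModel(ABC):
          """Auditory pathway model: waveform in, internal representation out."""
          @abstractmethod
          def process(self, stimulus): ...

      class ArtificialObserver(ABC):
          """Task-dependent decision stage, same output format as a test subject."""
          @abstractmethod
          def decide(self, representation): ...

      def run_trial(stimulus, model, observer):
          # The experiment software queries the model/observer pair exactly as
          # it would a human listener, so both face the same paradigm.
          return observer.decide(model.process(stimulus))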

  7. Comparative analysis of used car price evaluation models

    Science.gov (United States)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predicting used car prices in several articles; however, little has been published comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical, thorough comparison of two algorithms: linear regression and random forest. The two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet shows no obvious advantage for simple models with fewer variables.
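
    A minimal version of the comparison can be sketched with scikit-learn; the synthetic data below stand in for the real dealing records, and the two features (age, mileage) are assumptions.

      # Linear regression vs. random forest on synthetic used-car data.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_absolute_error
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n = 2000
      age = rng.uniform(0, 10, n)            # years
      mileage = rng.uniform(0, 200_000, n)   # km
      price = 20_000 * np.exp(-0.15 * age) - 0.02 * mileage + rng.normal(0, 500, n)

      X = np.column_stack([age, mileage])
      X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

      for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
          model.fit(X_tr, y_tr)
          print(type(model).__name__, mean_absolute_error(y_te, model.predict(X_te)))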

  8. A microbial model of economic trading and comparative advantage.

    Science.gov (United States)

    Enyeart, Peter J; Simpson, Zachary B; Ellington, Andrew D

    2015-01-07

    The economic theory of comparative advantage postulates that beneficial trading relationships can be arrived at by two self-interested entities producing the same goods as long as they have opposing relative efficiencies in producing those goods. The theory predicts that upon entering trade, in order to maximize consumption both entities will specialize in producing the good they can produce at higher efficiency, that the weaker entity will specialize more completely than the stronger entity, and that both will be able to consume more goods as a result of trade than either would be able to alone. We extend this theory to the realm of unicellular organisms by developing mathematical models of genetic circuits that allow trading of a common good (specifically, signaling molecules) required for growth in bacteria in order to demonstrate comparative advantage interactions. In Conception 1, the experimenter controls production rates via exogenous inducers, allowing exploration of the parameter space of specialization. In Conception 2, the circuits self-regulate via feedback mechanisms. Our models indicate that these genetic circuits can demonstrate comparative advantage, and that cooperation in such a manner is particularly favored under stringent external conditions and when the cost of production is not overly high. Further work could involve implementing the models in living bacteria and searching for naturally occurring cooperative relationships between bacteria that conform to the principles of comparative advantage. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
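
    The theory's predictions can be checked numerically with a toy version of the model; all production rates, the effort split f and the traded amounts below are invented, chosen only so that the two producers have opposing relative efficiencies.

      # Toy check: specialization plus trade raises both parties' consumption,
      # here scored with a Cobb-Douglas-style utility A*B.
      rates = {"strong": {"A": 10.0, "B": 8.0},   # absolute advantage in both goods
               "weak":   {"A": 2.0,  "B": 4.0}}   # comparative advantage in B

      def utility(bundle):
          return bundle["A"] * bundle["B"]

      # Autarky: each splits one unit of effort evenly between the goods.
      autarky = {p: {g: 0.5 * rates[p][g] for g in rates[p]} for p in rates}

      # Trade: the weak producer specializes fully in B; the strong one only
      # partially in A (f = 0.8), matching the "weaker specializes more
      # completely" prediction. They exchange 2.5 A for 3.0 B, a price between
      # the two opportunity costs (0.8 and 2.0 B per A).
      f = 0.8
      strong_made = {"A": f * rates["strong"]["A"], "B": (1 - f) * rates["strong"]["B"]}
      trade = {"strong": {"A": strong_made["A"] - 2.5, "B": strong_made["B"] + 3.0},
               "weak":   {"A": 2.5, "B": rates["weak"]["B"] - 3.0}}

      for p in rates:  # both utilities rise relative to autarky
          print(p, utility(autarky[p]), "->", utility(trade[p]))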

  9. Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models

    Science.gov (United States)

    Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.

    2014-12-01

    We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.
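
    The two agreement statistics reported are straightforward to compute; the yield arrays below are placeholders for the per-subbasin SWAT and InVEST estimates, not the study's data.

      # Pearson (quantitative) and Spearman (rank) agreement between models.
      import numpy as np
      from scipy import stats

      swat = np.array([120.0, 80.0, 150.0, 60.0, 95.0])    # mm/yr per subbasin (toy)
      invest = np.array([115.0, 90.0, 140.0, 70.0, 100.0])

      pearson_r, _ = stats.pearsonr(swat, invest)
      spearman_r, _ = stats.spearmanr(swat, invest)
      print(f"Pearson r = {pearson_r:.2f}, Spearman r = {spearman_r:.2f}")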

  10. Comparative modeling of InP solar cell structures

    Science.gov (United States)

    Jain, R. K.; Weinberg, I.; Flood, D. J.

    1991-01-01

    The comparative modeling of p(+)n and n(+)p indium phosphide solar cell structures is studied using the numerical program PC-1D. The optimal design study predicted that the p(+)n structure offers improved cell efficiencies compared to the n(+)p structure, due to higher open-circuit voltage. The cell material and process parameters required to achieve the maximum cell efficiencies are reported. The effect of some of the cell parameters on InP cell I-V characteristics was studied. The available radiation resistance data on n(+)p and p(+)p InP solar cells are also critically discussed.

  11. ANTICONVULSANT AND ANTIEPILEPTIC ACTIONS OF 2-DEOXY-D-GLUCOSE IN EPILEPSY MODELS

    Science.gov (United States)

    Stafstrom, Carl E.; Ockuly, Jeffrey C.; Murphree, Lauren; Valley, Matthew T.; Roopra, Avtar; Sutula, Thomas P.

    2009-01-01

    Objective Conventional anticonvulsants reduce neuronal excitability through effects on ion channels and synaptic function. Anticonvulsant mechanisms of the ketogenic diet remain incompletely understood. Since carbohydrates are restricted in patients on the ketogenic diet, we evaluated the effects of limiting carbohydrate availability by reducing glycolysis using the glycolytic inhibitor 2-deoxy-D-glucose (2DG) in experimental models of seizures and epilepsy. Methods Acute anticonvulsant actions of 2DG were assessed in vitro in rat hippocampal slices perfused with 7.5 mM [K+]o, 4-aminopyridine (4-AP), or bicuculline, and in vivo against seizures evoked by 6 Hz stimulation in mice, audiogenic stimulation in Frings mice, and maximal electroshock and subcutaneous Metrazol in rats. Chronic antiepileptic effects of 2DG were evaluated in rats kindled from the olfactory bulb or perforant path. Results 2DG (10 mM) reduced interictal epileptiform bursts induced by high [K+]o, 4-AP and bicuculline, and electrographic seizures induced by high [K+]o in CA3 of hippocampus. 2DG reduced seizures evoked by 6 Hz stimulation in mice (ED50 = 79.7 mg/kg) and audiogenic stimulation in Frings mice (ED50 = 206.4 mg/kg). 2DG exerted a chronic antiepileptic action by increasing afterdischarge thresholds in perforant path (but not olfactory bulb) kindling and caused a 2-fold slowing in the progression of kindled seizures at both stimulation sites. 2DG did not protect against maximal electroshock or Metrazol seizures. Interpretation The glycolytic inhibitor 2DG exerts acute anticonvulsant and chronic antiepileptic actions and has a novel pattern of effectiveness in preclinical screening models. These results identify metabolic regulation as a potential therapeutic target for seizure suppression and modification of epileptogenesis. PMID:19399874
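
    As a hedged illustration of how ED50 values like those above are obtained, the sketch below fits a two-parameter log-logistic (Hill) dose-response curve to invented protection fractions; the study's actual statistical procedure is not reproduced here.

```python
# Hedged sketch: estimate an ED50 by fitting a two-parameter log-logistic
# (Hill) curve to protection fractions. Doses and responses are invented;
# the study's actual statistical procedure is not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([25.0, 50.0, 100.0, 200.0, 400.0])       # mg/kg, hypothetical
frac_protected = np.array([0.10, 0.30, 0.60, 0.80, 0.95])

def hill(d, ed50, n):
    """Fraction protected as a function of dose d."""
    return d**n / (ed50**n + d**n)

(ed50, n), _ = curve_fit(hill, dose, frac_protected, p0=[100.0, 1.0])
print(f"estimated ED50 = {ed50:.1f} mg/kg (Hill slope n = {n:.2f})")
```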

  12. Elastic models: a comparative study applied to retinal images.

    Science.gov (United States)

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work various methods of parametric elastic models are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the method of the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme and minimum distance as the optimization criterion, which is more suitable for weak edge detection. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disc. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time, respectively. As a result, the method of the self-affine mapping system presents adequate segmentation time and segmentation accuracy, and significant independence from initialization.

  13. Comparative assessment of PV plant performance models considering climate effects

    DEFF Research Database (Denmark)

    Tina, Giuseppe; Ventura, Cristina; Sera, Dezso

    2017-01-01

    The methodological approach is based on comparative tests of the analyzed models applied to two PV plants, installed respectively in the north of Denmark (Aalborg) and in the south of Italy (Agrigento). The different ambient, operating and installation conditions allow one to understand how these factors impact the precision … the performance of the studied PV plants with others, the efficiency of the systems has been estimated by both the conventional Performance Ratio and the Corrected Performance Ratio …
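
    A minimal sketch of the conventional Performance Ratio mentioned above, computed as the final yield over the reference yield (following the usual IEC 61724-style definition); all plant numbers are placeholders, and the temperature-corrected Corrected Performance Ratio is not reproduced.

```python
# Minimal sketch of the conventional Performance Ratio: measured AC yield
# normalized by the yield expected from plane-of-array irradiation at STC.
# All numbers are placeholders, not data from the Aalborg or Agrigento plants.
E_ac = 1.35e6      # AC energy delivered over the period, kWh
P_stc = 1.0e3      # installed capacity at STC, kWp (a 1 MWp plant)
H_poa = 1500.0     # plane-of-array irradiation over the period, kWh/m^2
G_stc = 1.0        # reference irradiance, kW/m^2

final_yield = E_ac / P_stc          # kWh/kWp
reference_yield = H_poa / G_stc     # equivalent full-sun hours
pr = final_yield / reference_yield
print(f"Performance Ratio = {pr:.2%}")   # 90.00%
```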

  14. New tips for structure prediction by comparative modeling

    OpenAIRE

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of such techniques is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the degree of sequence identity between the target protein and the template. To assess the relationship between sequence iden...

  15. Comparing Neural Networks and ARMA Models in Artificial Stock Market

    Czech Academy of Sciences Publication Activity Database

    Krtek, Jiří; Vošvrda, Miloslav

    2011-01-01

    Roč. 18, č. 28 (2011), s. 53-65 ISSN 1212-074X R&D Projects: GA ČR GD402/09/H045 Institutional research plan: CEZ:AV0Z10750506 Keywords : neural networks * vector ARMA * artificial market Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2011/E/krtek-comparing neural networks and arma models in artificial stock market.pdf

  16. A comparative study of the constitutive models for silicon carbide

    Science.gov (United States)

    Ding, Jow-Lian; Dwivedi, Sunil; Gupta, Yogendra

    2001-06-01

    Most of the constitutive models for polycrystalline silicon carbide were developed and evaluated using data from either normal plate impact or Hopkinson bar experiments. At ISP, extensive efforts have been made to gain detailed insight into the shocked state of silicon carbide (SiC) using innovative experimental methods, viz., lateral stress measurements, in-material unloading measurements, and combined compression-shear experiments. The data obtained from these experiments provide some unique information for both developing and evaluating material models. In this study, these data for SiC were first used to evaluate some of the existing models to identify their strengths and possible deficiencies. Motivated by both the results of this comparative study and the experimental observations, an improved phenomenological model was developed. The model incorporates pressure dependence of strength, rate sensitivity, damage evolution under both tension and compression, the effect of pressure confinement on damage evolution, stiffness degradation due to damage, and pressure dependence of stiffness. The developed model captures most of the material features observed experimentally, but more work is needed to better match the experimental data quantitatively.

  17. Comparative analysis of existing models for power-grid synchronization

    International Nuclear Information System (INIS)

    Nishikawa, Takashi; Motter, Adilson E

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations. (paper)
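
    A toy sketch of the common framework described above: a network of coupled second-order phase oscillators with forcing (net injected power) and damping, integrated by forward Euler. The 3-node ring and all parameters are illustrative, not the paper's test systems or its MATLAB toolbox.

```python
# Toy network of coupled second-order phase oscillators with forcing and
# damping, integrated by forward Euler. The 3-node ring and all parameters
# are illustrative, not the paper's test systems.
import numpy as np

n = 3
K = 2.0 * (np.ones((n, n)) - np.eye(n))   # uniform coupling on a triangle
P = np.array([1.0, -0.5, -0.5])           # net injected power, sums to zero
m, d = 1.0, 0.5                           # inertia and damping constants

theta = np.zeros(n)                       # phase angles
omega = np.zeros(n)                       # frequency deviations
dt = 0.01
for _ in range(5000):
    # sum_j K_ij * sin(theta_j - theta_i) for each oscillator i
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    domega = (P - d * omega + coupling) / m
    theta, omega = theta + dt * omega, omega + dt * domega

print("steady frequency deviations:", np.round(omega, 3))  # ~equal => synchronized
```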

  18. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.

  19. COMPARATIVE STUDY ON MAIN SOLVENCY ASSESSMENT MODELS FOR INSURANCE FIELD

    Directory of Open Access Journals (Sweden)

    Daniela Nicoleta SAHLIAN

    2015-07-01

    Full Text Available During the recent financial crisis in the insurance domain, new aspects were imposed that have to be taken into account concerning risk management and surveillance activity. Insurance societies may develop internal models in order to determine the minimum capital requirement imposed by the new regulations to be adopted on 1 January 2016. In this respect, the purpose of this research paper is to offer a realistic presentation and comparison of the main solvency regulation systems used worldwide, the emphasis being on their common characteristics and current tendencies. Thereby, we would like to offer a better understanding of the similarities and differences between the existing solvency regimes in order to develop the best solvency regime for Romania within the Solvency II project. The study will show that there are clear differences between the existing Solvency I regime and the new risk-based approaches, and will also point out that even though the key principles supporting the new solvency regimes are convergent, there are many approaches to applying these principles. In this context, the questions we try to answer are "how could the global solvency models be useful for the financial surveillance authority of Romania for the implementation of the general model and for the development of internal solvency models according to the requirements of Solvency II" and "what would be the requirements for the implementation of this type of approach?". This makes the analysis of solvency models an interesting exercise.

  20. Atterberg Limits Prediction Comparing SVM with ANFIS Model

    Directory of Open Access Journals (Sweden)

    Mohammad Murtaza Sherzoy

    2017-03-01

    Full Text Available Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) analytical methods are both used to predict the values of Atterberg limits, such as the liquid limit, plastic limit and plasticity index. The main objective of this study is to make a comparison between the forecasts of the two methods (SVM & ANFIS). Data from 54 soil samples taken from the area of Peninsular Malaysia were used and tested for parameters comprising liquid limit, plastic limit, plasticity index and grain size distribution. The input parameters used in this case are the grain-size fractions, i.e. the percentages of silt, clay and sand. The actual and predicted values of the Atterberg limits obtained from the SVM and ANFIS models are compared using the coefficient of determination R2 and the root mean squared error (RMSE). The outcome of the study shows that the ANFIS model achieves higher accuracy than the SVM model for the liquid limit (R2 = 0.987), plastic limit (R2 = 0.949) and plasticity index (R2 = 0.966). The RMSE values obtained for both methods also show that the ANFIS model performs better than the SVM model in predicting the Atterberg limits as a whole.
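
    The comparison statistics used above are straightforward to compute; the sketch below evaluates R2 and RMSE for two hypothetical sets of predictions against measured values, standing in for the paper's SVM and ANFIS outputs.

```python
# Sketch of the comparison metrics used above: R2 and RMSE between measured
# and predicted values. The arrays are stand-ins for the paper's data.
import numpy as np

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

measured = np.array([42.0, 35.5, 51.2, 29.8, 46.1])   # e.g. liquid limit, %
predictions = {"SVM": np.array([40.1, 37.0, 48.9, 31.5, 44.0]),
               "ANFIS": np.array([41.6, 35.9, 50.5, 30.2, 45.7])}

for name, pred in predictions.items():
    print(f"{name}: R2 = {r2(measured, pred):.3f}, RMSE = {rmse(measured, pred):.2f}")
```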

  1. Dinucleotide controlled null models for comparative RNA gene prediction

    Directory of Open Access Journals (Sweden)

    Gesell Tanja

    2008-05-01

    Full Text Available Abstract Background Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model that includes stacking energies. As a consequence, there is a need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. Results We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic structure-based RNA gene finding program that is not biased by the dinucleotide content. Conclusion SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as a standalone RNA gene finding program. Other applications in comparative genomics that require …

  2. Dinucleotide controlled null models for comparative RNA gene prediction.

    Science.gov (United States)

    Gesell, Tanja; Washietl, Stefan

    2008-05-27

    Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model that includes stacking energies. As a consequence, there is a need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic structure-based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as a standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. SISSIz …
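
    SISSIz itself is not reimplemented here; as a minimal illustration of the quantity its null model controls, the sketch below counts dinucleotide frequencies in each sequence of a toy alignment (gaps skipped), which a dinucleotide-preserving randomization must hold fixed on average.

```python
# Not a reimplementation of SISSIz; this only illustrates the controlled
# quantity: dinucleotide counts per sequence of a toy alignment (gaps
# skipped), which a dinucleotide-preserving null model must hold fixed.
from collections import Counter

alignment = {"seq1": "ACGU-ACGGU",    # toy alignment, not real genomic data
             "seq2": "ACGUUACG-U"}

def dinucleotide_counts(seq):
    bases = seq.replace("-", "")      # ignore alignment gaps
    return Counter(bases[i:i + 2] for i in range(len(bases) - 1))

for name, seq in alignment.items():
    print(name, dict(dinucleotide_counts(seq)))
```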

  3. Comparative Proteomic Analysis of Two Uveitis Models in Lewis Rats.

    Science.gov (United States)

    Pepple, Kathryn L; Rotkis, Lauren; Wilson, Leslie; Sandt, Angela; Van Gelder, Russell N

    2015-12-01

    Inflammation generates changes in the protein constituents of the aqueous humor. Proteins that change in multiple models of uveitis may be good biomarkers of disease or targets for therapeutic intervention. The present study was conducted to identify differentially-expressed proteins in the inflamed aqueous humor. Two models of uveitis were induced in Lewis rats: experimental autoimmune uveitis (EAU) and primed mycobacterial uveitis (PMU). Differential gel electrophoresis was used to compare naïve and inflamed aqueous humor. Differentially-expressed proteins were separated by using 2-D gel electrophoresis and excised for identification with matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF). Expression of select proteins was verified by Western blot analysis in both the aqueous and vitreous. The inflamed aqueous from both models demonstrated an increase in total protein concentration when compared to naïve aqueous. Calprotectin, a heterodimer of S100A8 and S100A9, was increased in the aqueous in both PMU and EAU. In the vitreous, S100A8 and S100A9 were preferentially elevated in PMU. Apolipoprotein E was elevated in the aqueous of both uveitis models but was preferentially elevated in EAU. Beta-B2-crystallin levels decreased in the aqueous and vitreous of EAU but not PMU. The proinflammatory molecules S100A8 and S100A9 were elevated in both models of uveitis but may play a more significant role in PMU than EAU. The neuroprotective protein β-B2-crystallin was found to decline in EAU. Therapies to modulate these proteins in vivo may be good targets in the treatment of ocular inflammation.

  4. Comparing pharmacophore models derived from crystallography and NMR ensembles

    Science.gov (United States)

    Ghanakota, Phani; Carlson, Heather A.

    2017-11-01

    NMR and X-ray crystallography are the two most widely used methods for determining protein structures. Our previous study examining NMR versus X-Ray sources of protein conformations showed improved performance with NMR structures when used in our Multiple Protein Structures (MPS) method for receptor-based pharmacophores (Damm, Carlson, J Am Chem Soc 129:8225-8235, 2007). However, that work was based on a single test case, HIV-1 protease, because of the rich data available for that system. New data for more systems are available now, which calls for further examination of the effect of different sources of protein conformations. The MPS technique was applied to Growth factor receptor bound protein 2 (Grb2), Src SH2 homology domain (Src-SH2), FK506-binding protein 1A (FKBP12), and Peroxisome proliferator-activated receptor-γ (PPAR-γ). Pharmacophore models from both crystal and NMR ensembles were able to discriminate between high-affinity, low-affinity, and decoy molecules. As we found in our original study, NMR models showed optimal performance when all elements were used. The crystal models had more pharmacophore elements compared to their NMR counterparts. The crystal-based models exhibited optimum performance only when pharmacophore elements were dropped. This supports our assertion that the higher flexibility in NMR ensembles helps focus the models on the most essential interactions with the protein. Our studies suggest that the "extra" pharmacophore elements seen at the periphery in X-ray models arise as a result of decreased protein flexibility and make very little contribution to model performance.

  5. Comparative assessment of condensation models for horizontal tubes

    International Nuclear Information System (INIS)

    Schaffrath, A.; Kruessenberg, A.K.; Lischke, W.; Gocht, U.; Fjodorow, A.

    1999-01-01

    The condensation in horizontal tubes plays an important role, e.g. for the determination of the operation mode of horizontal steam generators of VVER reactors or of passive safety systems for the next generation of nuclear power plants. Two different approaches (HOTKON and KONWAR) for modeling this process have been undertaken by Forschungszentrum Juelich (FZJ) and the University of Applied Sciences Zittau/Goerlitz (HTWS) and implemented into the 1D thermohydraulic code ATHLET, which is developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH for the analysis of anticipated and abnormal transients in light water reactors. Although the improvements of the condensation models were developed for different applications (VVER steam generators - emergency condenser of the SWR1000) with strongly different operation conditions (e.g. the temperature difference over the tube wall in HORUS is up to 30 K and in NOKO up to 250 K; the heat flux density in HORUS is up to 40 kW/m² and in NOKO up to 1 GW/m²), both models are now compared and assessed by Forschungszentrum Rossendorf FZR e.V. Therefore, post-test calculations of selected HORUS experiments were performed with ATHLET/KONWAR and compared to existing ATHLET and ATHLET/HOTKON calculations of HTWS. It can be seen that the calculations with the extensions KONWAR as well as HOTKON significantly improve the agreement between computational and experimental data. (orig.) [de

  6. Comparative Modelling of the Spectra of Cool Giants

    Science.gov (United States)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.; et al.

    2012-01-01

    Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.

  7. Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies

    Science.gov (United States)

    Klopf, M.; Pietsch, S. A.; Hasenauer, H.

    2009-04-01

    The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities have been successively reduced, standing volume and coarse woody debris (CWD) have increased, and degraded soils have begun to recover. One option to study the rehabilitation process towards a natural virgin forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare the standing tree volume simulated by (i) the individual tree growth model MOSES, and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis 40 plots of this inventory were selected which comprise the required dead wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts

  8. Comparative dynamic analysis of the full Grossman model.

    Science.gov (United States)

    Ried, W

    1998-08-01

    The paper applies the method of comparative dynamic analysis to the full Grossman model. For a particular class of solutions, it derives the equations implicitly defining the complete trajectories of the endogenous variables. Relying on the concept of Frisch decision functions, the impact of any parametric change on an endogenous variable can be decomposed into a direct and an indirect effect. The focus of the paper is on marginal changes in the rate of health capital depreciation. It also analyses the impact of either initial financial wealth or the initial stock of health capital. While the direction of most effects remains ambiguous in the full model, the assumption of a zero consumption benefit of health is sufficient to obtain a definite sign for any direct or indirect effect.

  9. Comparing soil moisture memory in satellite observations and models

    Science.gov (United States)

    Stacke, Tobias; Hagemann, Stefan; Loew, Alexander

    2013-04-01

    A major obstacle to a correct parametrization of soil processes in large-scale global land surface models is the lack of long-term soil moisture observations for large parts of the globe. Currently, a compilation of soil moisture data derived from a range of satellites is being released by the ESA Climate Change Initiative (ECV_SM). Comprising the period from 1978 until 2010, it provides the opportunity to compute climatologically relevant statistics on a quasi-global scale and to compare these to the output of climate models. Our study is focused on the investigation of soil moisture memory in satellite observations and models. As a proxy for memory we compute the autocorrelation length (ACL) of the available satellite data and of the uppermost soil layer of the models. In addition to the ECV_SM data, AMSR-E soil moisture is used as an observational estimate. Simulated soil moisture fields are taken from the ERA-Interim reanalysis and generated with the land surface model JSBACH, which was driven with quasi-observational meteorological forcing data. The satellite data show ACLs between one week and one month for the greater part of the land surface, while the models simulate a longer memory of up to two months. Some patterns are similar in models and observations, e.g. a longer memory in the Sahel Zone and on the Arabian Peninsula, but the models are not able to reproduce regions with a very short ACL of just a few days. If the long-term seasonality is subtracted from the data, the memory is strongly shortened, indicating the importance of seasonal variations for the memory in most regions. Furthermore, we analyze the change of soil moisture memory in the different soil layers of the models to investigate to what extent the surface soil moisture includes information about the whole soil column. A first analysis reveals that the ACL increases for deeper layers. However, its increase is stronger in the soil moisture anomaly than in its absolute values, and the first even exceeds the
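
    A sketch of the memory proxy used above: the autocorrelation length of a synthetic AR(1) soil moisture series, taken here as the first lag at which the autocorrelation falls below 1/e; the study's exact ACL definition may differ.

```python
# Sketch of the memory proxy: the autocorrelation length (ACL) of a soil
# moisture series, taken here as the first lag where the autocorrelation
# falls below 1/e. The series is a synthetic AR(1) stand-in.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
sm = np.zeros(n)
for t in range(1, n):
    sm[t] = 0.95 * sm[t - 1] + rng.normal()   # daily AR(1) soil moisture proxy

anom = sm - sm.mean()
acf = np.correlate(anom, anom, mode="full")[n - 1:] / (anom @ anom)
acl = int(np.argmax(acf < np.exp(-1.0)))      # first lag with acf < 1/e
print(f"autocorrelation length = {acl} days")  # ~ -1/ln(0.95), about 20 days
```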

  10. Comparing Numerical Spall Simulations with a Nonlinear Spall Formation Model

    Science.gov (United States)

    Ong, L.; Melosh, H. J.

    2012-12-01

    Spallation accelerates lightly shocked ejecta fragments to speeds that can exceed the escape velocity of the parent body. We present high-resolution simulations of nonlinear shock interactions in the near surface. Initial results show the acceleration of near-surface material to velocities up to 1.8 times greater than the peak particle velocity in the detached shock, while experiencing little to no shock pressure. These simulations suggest a possible nonlinear spallation mechanism to produce the high-velocity, low-shock-pressure meteorites from other planets. Here we present the numerical simulations that test the production of spall through nonlinear shock interactions in the near surface, and compare the results with a model proposed by Kamegai (1986 Lawrence Livermore National Laboratory Report). We simulate near-surface shock interactions using the SALES_2 hydrocode and the Murnaghan equation of state. We model the shock interactions in two geometries: rectangular and spherical. In the rectangular case, we model a planar shock approaching the surface at a constant angle phi. In the spherical case, the shock originates at a point below the surface of the domain and radiates spherically from that point. The angle of the shock front with the surface is dependent on the radial distance of the surface point from the shock origin. We model the target as a solid with a nonlinear Murnaghan equation of state. This idealized equation of state supports nonlinear shocks but is temperature independent. We track the maximum pressure and maximum velocity attained in every cell in our simulations and compare them to the Hugoniot equations that describe the material conditions in front of and behind the shock. Our simulations demonstrate that nonlinear shock interactions in the near surface produce lightly shocked high-velocity material for both planar and cylindrical shocks. The spall is the result of the free-surface boundary condition, which forces a pressure gradient
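
    A sketch of the Murnaghan equation of state used as the idealized, temperature-independent material model above, in the form P = (K0/K0')[(rho/rho0)^K0' - 1]; the constants are placeholders, not the SALES_2 inputs.

```python
# Sketch of the Murnaghan equation of state, the idealized temperature-
# independent material model named above: P = (K0/K0')[(rho/rho0)^K0' - 1].
# The constants are placeholders, not the SALES_2 inputs.
K0 = 45.0e9   # bulk modulus at zero pressure, Pa (assumed)
K0p = 4.0     # pressure derivative of the bulk modulus (assumed)

def murnaghan_pressure(compression):
    """Pressure as a function of compression rho/rho0."""
    return (K0 / K0p) * (compression ** K0p - 1.0)

for x in (1.0, 1.05, 1.10, 1.20):
    print(f"rho/rho0 = {x:.2f} -> P = {murnaghan_pressure(x) / 1e9:.2f} GPa")
```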

  11. Comparative study of computational model for pipe whip analysis

    International Nuclear Information System (INIS)

    Koh, Sugoong; Lee, Young-Shin

    1993-01-01

    Many types of pipe whip restraints are installed to protect structural components from the anticipated pipe whip phenomena of high energy lines in nuclear power plants. It is necessary to investigate these phenomena accurately in order to evaluate the acceptability of the pipe whip restraint design. Various research programs have been conducted in many countries to develop analytical methods and to verify their validity. In this study, various calculational models in the ANSYS code and in the ADLPIPE code, both general-purpose finite element computer programs, were used to simulate the postulated pipe whips and obtain impact loads, and the calculated results were compared with the experimental results from the sample pipe whip test for U-shaped pipe whip restraints. Some calculational models, having a spring element between the pipe whip restraint and the pipe line, give reasonably good transient responses of the restraint forces compared with the experimental results, and could be useful in evaluating the acceptability of the pipe whip restraint design. (author)

  12. THE FLAT TAX - A COMPARATIVE STUDY OF THE EXISTING MODELS

    Directory of Open Access Journals (Sweden)

    Schiau (Macavei Laura - Liana

    2011-07-01

    Full Text Available In the last two decades flat tax systems have spread all around the globe, from Eastern and Central Europe to Asia and Central America. Many specialists consider this phenomenon a real fiscal revolution, but others see it as a mistake as long as the new systems are just a faint imitation of the true flat tax designed by the famous Stanford University professors Robert Hall and Alvin Rabushka. In this context this paper tries to determine which of the existing flat tax systems resemble the true flat tax model by comparing and contrasting their main characteristics with the features of the model proposed by Hall and Rabushka. The research also underlines the common features and the differences between the existing models. The idea of this kind of study is not really new; others have done it, but the comparison was limited to one country. For example, Emil Kalchev from New Bulgarian University has assessed the Bulgarian income system by comparing it with the flat tax, concluding that taxation in Bulgaria is not simple, neutral and non-distortive. Our research is based on several case studies and on compare-and-contrast qualitative and quantitative methods. The study starts from the fiscal design drawn by the two American professors in the book The Flat Tax. Four main characteristics of the flat tax system were chosen in order to build the comparison: fiscal design, simplicity, avoidance of double taxation and uniformity of the tax rates. The jurisdictions chosen for the case study are countries all around the globe with fiscal systems which are considered flat tax systems. The results obtained show that the fiscal design of Hong Kong is the only flat tax model built following an economic logic rather than a legal sense, being at the same time a simple and transparent system. Other countries, such as Slovakia, Albania and Macedonia in Central and Eastern Europe, fulfill the requirement regarding the uniformity of taxation. Other jurisdictions avoid the double

  13. Comparative benefit of malaria chemoprophylaxis modelled in United Kingdom travellers.

    Science.gov (United States)

    Toovey, Stephen; Nieforth, Keith; Smith, Patrick; Schlagenhauf, Patricia; Adamcova, Miriam; Tatt, Iain; Tomianovic, Danitza; Schnetzler, Gabriel

    2014-01-01

    .3% decrease in estimated infections. The number of travellers experiencing moderate adverse events (AE) or those requiring medical attention or drug withdrawal per case prevented is as follows: C ± P 170, Mq 146, Dx 114, AP 103. The model correctly predicted the number of malaria deaths, providing a robust and reliable estimate of the number of imported malaria cases in the UK, and giving a measure of benefit derived from chemoprophylaxis use against the likely adverse events generated. Overall numbers needed to prevent a malaria infection are comparable among the four options and are sensitive to changes in the background infection rates. Only a limited impact on the number of infections can be expected if Mq is substituted by AP.

  14. A comparative study of machine learning models for ethnicity classification

    Science.gov (United States)

    Trivedi, Advait; Bessie Amali, D. Geraldine

    2017-11-01

    This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta-information can be leveraged in several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression has been documented. Experimental results indicate that the logistic regression classifier provides a more accurate classification than the support vector machine.

  15. Comparative metabolomics of drought acclimation in model and forage legumes.

    Science.gov (United States)

    Sanchez, Diego H; Schwabe, Franziska; Erban, Alexander; Udvardi, Michael K; Kopka, Joachim

    2012-01-01

    Water limitation has become a major concern for agriculture. Such constraints reinforce the urgent need to understand the mechanisms by which plants cope with water deprivation. We used a non-targeted metabolomic approach to explore plastic systems responses to non-lethal drought in model and forage legume species of the Lotus genus. In the model legume Lotus japonicus, increased water stress caused gradual increases in most of the soluble small molecules profiled, reflecting a global and progressive reprogramming of metabolic pathways. The comparative metabolomic approach between Lotus species revealed conserved and unique metabolic responses to drought stress. Importantly, only a few drought-responsive metabolites were conserved among all species. We thus highlight a potential impediment to translational approaches that aim to engineer traits linked to the accumulation of compatible solutes. Finally, a broad comparison of the metabolic changes elicited by drought and salt acclimation revealed partial conservation of these metabolic stress responses within each of the Lotus species, but only a few salt- and drought-responsive metabolites were shared between all. The implications of these results are discussed with regard to current insights into legume water stress physiology. © 2011 Blackwell Publishing Ltd.

  16. Static response of deformable microchannels: a comparative modelling study

    Science.gov (United States)

    Shidhore, Tanmay C.; Christov, Ivan C.

    2018-02-01

    We present a comparative modelling study of fluid-structure interactions in microchannels. Through a mathematical analysis based on plate theory and the lubrication approximation for low-Reynolds-number flow, we derive models for the flow rate-pressure drop relation for long shallow microchannels with both thin and thick deformable top walls. These relations are tested against full three-dimensional two-way-coupled fluid-structure interaction simulations. Three types of microchannels, representing different elasticity regimes and having been experimentally characterized previously, are chosen as benchmarks for our theory and simulations. Good agreement is found in most cases for the predicted, simulated and measured flow rate-pressure drop relationships. The numerical simulations performed allow us to also carefully examine the deformation profile of the top wall of the microchannel in any cross section, showing good agreement with the theory. Specifically, the prediction that span-wise displacement in a long shallow microchannel decouples from the flow-wise deformation is confirmed, and the predicted scaling of the maximum displacement with the hydrodynamic pressure and the various material and geometric parameters is validated.

  17. COMPAR

    International Nuclear Information System (INIS)

    Kuefner, K.

    1976-01-01

    COMPAR works on FORTRAN arrays with four indices, A = A(i,j,k,l), where for each fixed k0, l0 only the 'plane' [A(i,j,k0,l0), i = 1..i_max, j = 1..j_max] is held in fast memory. Given two arrays A, B of this type, COMPAR has the capability to 1) re-norm A and B in different ways; 2) calculate the deviations epsilon defined as epsilon(i,j,k,l) := [A(i,j,k,l) - B(i,j,k,l)] / GEW(i,j,k,l), where GEW(i,j,k,l) may be chosen in three different ways; 3) calculate mean, standard deviation and maximum in the array epsilon (by several intermediate stages); 4) determine traverses in the array epsilon; 5) plot these traverses on a printer; 6) simplify plots of these traverses with the PLOTEASY system by creating input data blocks for it. The main application of COMPAR is given (so far) by the comparison of two- and three-dimensional multigroup neutron flux fields. (orig.) [de
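
    In NumPy terms, COMPAR's core comparison step reduces to forming the deviation array epsilon and summarizing it; the sketch below assumes the relative-error weighting GEW = B, one of the three options mentioned.

```python
# COMPAR's core comparison step in NumPy terms: the deviation array
# epsilon = (A - B) / GEW and its summary statistics. The weighting
# GEW = B (relative deviation) is one of the three options mentioned.
import numpy as np

rng = np.random.default_rng(2)
shape = (10, 10, 4, 3)    # indices (i, j, k, l), e.g. multigroup flux fields
A = rng.uniform(1.0, 2.0, size=shape)
B = A * (1.0 + rng.normal(0.0, 0.02, size=shape))   # perturbed copy of A

GEW = B                   # assumed weight choice: relative error
eps = (A - B) / GEW

print(f"mean = {eps.mean():+.4f}, std = {eps.std():.4f}, "
      f"max |eps| = {np.abs(eps).max():.4f}")
```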

  18. Comparing Transformation Possibilities of Topological Functioning Model and BPMN in the Context of Model Driven Architecture

    Directory of Open Access Journals (Sweden)

    Solomencevs Artūrs

    2016-05-01

    Full Text Available The approach called "Topological Functioning Model for Software Engineering" (TFM4SE) applies the Topological Functioning Model (TFM) for modelling the business system in the context of Model Driven Architecture. The TFM is a mathematically formal computation independent model (CIM). TFM4SE is compared to an approach that uses BPMN as a CIM. The comparison focuses on CIM modelling and on the transformation to a UML Sequence diagram at the platform independent model (PIM) level. The results show the advantages and drawbacks that the formalism of the TFM brings into the development.

  19. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adapted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks model (first and second versions), the Stewart model (first and second versions) and the Hall–Butcher model. Three sets of ...

  20. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  1. Jump Model / Comparability Ratio Model — Joinpoint Help System 4.4.0.0

    Science.gov (United States)

    The Jump Model / Comparability Ratio Model in the Joinpoint software provides a direct estimation of trend data (e.g. cancer rates) where there is a systematic scale change, which causes a “jump” in the rates, but is assumed not to affect the underlying trend.

  2. Characterizing Cavities in Model Inclusion Fullerenes: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Francisco Torrens

    2001-06-01

    Full Text Available Abstract: The fullerene-82 cavity is selected as a model system in order to test several methods for characterizing inclusion molecules. The methods are based on different technical foundations, such as square and triangular tessellation of the molecular surface, spherical tessellation of the molecular surface, numerical integration of the atomic volumes and surfaces, triangular tessellation of the molecular surface, and a cubic lattice approach to the molecular volume. Accurate measures of the molecular volume and surface area have been performed with the pseudorandom Monte Carlo (MCVS) and uniform Monte Carlo (UMCVS) methods. These calculations serve as a reference for the rest of the methods. The SURMO2 method does not recognize the cavity and may not be convenient for intercalation compounds. The programs that detect the cavities never exceed 1% deviation relative to the reference value for molecular volume and 5% for surface area. The GEPOL algorithm, alone or combined with TOPO, shows results in good agreement with those of the UMCVS reference. The uniform random number generator provides the fastest convergence for UMCVS and a correct estimate of the standard deviations. The effect of the internal cavity on the solvent-accessible surfaces has been calculated. Fullerene-82 is compared with fullerene-60 and -70.
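
    A sketch of uniform Monte Carlo volume estimation in the spirit of the UMCVS reference above: sample points uniformly in a bounding box and count hits. The "molecule" is an idealized hollow spherical shell, a crude stand-in for a fullerene cage with an internal cavity; the radii are assumed.

```python
# Uniform Monte Carlo volume estimation in the spirit of the UMCVS
# reference method: sample points uniformly in a bounding box and count
# hits. The "molecule" is an idealized hollow spherical shell, a crude
# stand-in for a fullerene cage with an internal cavity; radii are assumed.
import numpy as np

rng = np.random.default_rng(3)
r_outer, r_inner = 4.1, 3.3       # shell radii in angstroms (assumed)
n = 1_000_000

pts = rng.uniform(-r_outer, r_outer, size=(n, 3))
r = np.linalg.norm(pts, axis=1)
hits = np.sum((r >= r_inner) & (r <= r_outer))

box_volume = (2.0 * r_outer) ** 3
estimate = hits / n * box_volume
exact = 4.0 / 3.0 * np.pi * (r_outer**3 - r_inner**3)
print(f"MC shell volume = {estimate:.1f} A^3 (exact = {exact:.1f} A^3)")
```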

  3. Comparing Entrepreneurship Intention: A Multigroup Structural Equation Modeling Approach

    Directory of Open Access Journals (Sweden)

    Sabrina O. Sihombing

    2012-04-01

    Full Text Available Unemployment is one of the main social and economic problems that many countries face nowadays. One strategic way to overcome this problem is by fostering an entrepreneurial spirit, especially among unemployed graduates. Entrepreneurship is becoming an alternative job for students after they graduate. This is because entrepreneurship offers major benefits, such as setting up one's own business and the possibility of greater financial rewards than working for others. Entrepreneurship courses are therefore offered by many universities. This research applies the theory of planned behavior (TPB), incorporating attitude toward success as an antecedent variable of attitude, to examine students' intention to become entrepreneurs. The objective of this research is to compare entrepreneurship intention between business students and non-business students. A self-administered questionnaire was used to collect data for this study. Questionnaires were distributed to respondents by applying the drop-off/pick-up method. A total of 294 questionnaires were used in the analysis. Data were analyzed using structural equation modeling. Two out of four hypotheses were confirmed: the relationship between the attitude toward becoming an entrepreneur and the intention to try becoming an entrepreneur, and the relationship between perceived behavioral control and the intention to try becoming an entrepreneur. The paper also provides a discussion and offers directions for future research.

  4. Modeling Conformal Growth in Photonic Crystals and Comparing to Experiment

    Science.gov (United States)

    Brzezinski, Andrew; Chen, Ying-Chieh; Wiltzius, Pierre; Braun, Paul

    2008-03-01

    Conformal growth, e.g. atomic layer deposition (ALD), of materials such as silicon and TiO2 on three-dimensional (3D) templates is important for making photonic crystals. However, reliable calculations of optical properties as a function of the conformal growth, such as the optical band structure, are hampered by difficulty in accurately assessing a deposited material's spatial distribution. A widely used approximation ignores "pinch off" of precursor gas and assumes complete template infilling. Another approximation results in a non-uniform growth velocity by employing iso-intensity surfaces of the 3D interference pattern used to create the template. We have developed an accurate model of conformal growth in arbitrary 3D periodic structures, allowing for arbitrary surface orientation. Results are compared with the above approximations and with experimentally fabricated photonic crystals. We use an SU8 polymer template created by 4-beam interference lithography, onto which various amounts of TiO2 are grown by ALD. Characterization is performed by analysis of cross-sectional scanning electron micrographs and by solid-angle-resolved optical spectroscopy.

  6. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    Full Text Available When developing an information system, it is important to create clear models and to choose suitable modeling languages. This article analyzes the SRML, SBVR, PRR, SWRL and OCL rule specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. It presents a theoretical comparison of business rules and business process modeling languages. According to selected modeling aspects, the article compares the different business process modeling languages and business rule representation languages. Finally, the best-fitting set of languages is selected for a three-layer framework for business-rule-based software modeling.

  7. GEOQUIMICO : an interactive tool for comparing sorption conceptual models (surface complexation modeling versus K[D])

    International Nuclear Information System (INIS)

    Hammond, Glenn E.; Cygan, Randall Timothy

    2007-01-01

    Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K_D approach has been the method of choice due to its ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use, that is, when the material variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The tool currently supports the K_D and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO and a brief verification study comparing GEOQUIMICO results to data found in the literature is given
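
    A sketch contrasting the two sorption concepts: the linear K_D isotherm against a site-limited (Langmuir-type) isotherm, used here only as a crude surrogate for the saturation behavior an SCM can capture. A real SCM solves mass-action equations for surface species, and this is not GEOQUIMICO code (GEOQUIMICO itself is written in Java); all parameters are illustrative.

```python
# Contrast of the two sorption concepts: the linear K_D isotherm versus a
# site-limited (Langmuir-type) isotherm, used here only as a crude surrogate
# for the saturation behavior an SCM can capture. A real SCM solves
# mass-action equations for surface species; parameters are illustrative.
import numpy as np

Kd = 5.0         # L/kg, linear distribution coefficient (assumed)
K_L = 1.0e4      # L/mol, Langmuir affinity constant (assumed)
S_max = 1.0e-3   # mol/kg, finite density of sorption sites (assumed)

c = np.logspace(-6, -2, 5)   # aqueous concentration, mol/L

sorbed_kd = Kd * c                                        # unbounded, linear in c
sorbed_site_limited = S_max * K_L * c / (1.0 + K_L * c)   # saturates at S_max

for ci, s_lin, s_sat in zip(c, sorbed_kd, sorbed_site_limited):
    print(f"c = {ci:.0e} M: Kd -> {s_lin:.2e}, site-limited -> {s_sat:.2e} mol/kg")
```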

  8. Comparing satellite SAR and wind farm wake models

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Vincent, P.; Husson, R.

    2015-01-01

    These wakes extend several tens of kilometres downwind, e.g. 70 km. Other SAR wind maps show near-field fine-scale details of the wake behind rows of turbines. The satellite SAR wind farm wake cases are modelled by different wind farm wake models, including the PARK microscale model, the Weather Research and Forecasting (WRF) model in high resolution, and WRF with a coupled microscale parametrization …

  9. A Model of Comparative Ethics Education for Social Workers

    Science.gov (United States)

    Pugh, Greg L.

    2017-01-01

    Social work ethics education models have not effectively engaged social workers in practice in formal ethical reasoning processes, potentially allowing personal bias to affect ethical decisions. Using two of the primary ethical models from medicine, a new social work ethics model for education and practical application is proposed. The strengths…

  10. Models of Purposive Human Organization: A Comparative Study

    Science.gov (United States)

    1984-02-01

    Stated objectives include: (1) collect relational and object data for Dinnat-Murphree (D-M) model construction; (2) develop techniques for organizational diagnosis with the D-M model, to be followed by intervention by S-T methodology.

  11. Lithium-ion battery models: a comparative study and a model-based powerline communication

    Directory of Open Access Journals (Sweden)

    F. Saidani

    2017-09-01

    Full Text Available In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are formulated in either the thermal or the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters, offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
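
    A sketch of the kind of abstract model recommended above for battery management: a first-order Thevenin equivalent circuit (OCV source, series resistance R0, one RC pair) stepped with forward Euler under a constant 1C discharge. All parameters are illustrative, not fitted 20 Ah cell data.

```python
# First-order Thevenin equivalent circuit (OCV source, series resistance R0,
# one RC pair), the kind of abstract model recommended for battery
# management, stepped with forward Euler under a constant 1C discharge.
# All parameters are illustrative, not fitted 20 Ah cell data.
capacity_As = 20.0 * 3600.0            # 20 Ah cell, in ampere-seconds
R0, R1, C1 = 2.0e-3, 1.5e-3, 2.0e4     # ohm, ohm, farad (assumed)

def ocv(soc):
    """Crude open-circuit voltage vs. state of charge (assumed shape)."""
    return 3.0 + 1.2 * soc

i_load = 20.0                          # constant 1C discharge current, A
dt, soc, v_rc = 1.0, 1.0, 0.0
for _ in range(1800):                  # simulate 30 minutes
    soc -= i_load * dt / capacity_As
    v_rc += dt * (i_load / C1 - v_rc / (R1 * C1))   # RC branch dynamics
    v_term = ocv(soc) - i_load * R0 - v_rc          # terminal voltage

print(f"after 30 min: SOC = {soc:.2f}, terminal voltage = {v_term:.3f} V")
```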

  12. Comparing Intrinsic Connectivity Models for the Primary Auditory Cortices

    Science.gov (United States)

    Hamid, Khairiah Abdul; Yusoff, Ahmad Nazlim; Mohamad, Mazlyfarina; Hamid, Aini Ismafairus Abd; Manan, Hanani Abd

    2010-07-01

    This fMRI study is about modeling the intrinsic connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in the human primary auditory cortices. Ten healthy male subjects participated and were required to listen to a white-noise stimulus during the fMRI scans. Two intrinsic connectivity models comprising bilateral HG and STG were constructed using statistical parametric mapping (SPM) and dynamic causal modeling (DCM). Group Bayes factor (GBF), positive evidence ratio (PER) and Bayesian model selection (BMS) for group studies were used in model comparison. Group results indicated significant bilateral asymmetrical activation (p_uncorr < 0.001) in HG and STG. Comparison results showed strong evidence for Model 2 as the preferred model (STG as the input center), with a GBF value of 5.77 × 10^73. The model is preferred by 6 out of 10 subjects. The results were supported by the BMS results for group studies. A one-sample t-test on connection values obtained from Model 2 indicates unidirectional parallel connections from STG to bilateral HG (p < 0.05). Model 2 was determined to be the most probable intrinsic connectivity model between bilateral HG and STG when listening to white noise.
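
    For context (a minimal sketch under common conventions, not the study's SPM/DCM pipeline; the log-evidence values below are invented), the group Bayes factor is conventionally the product of per-subject Bayes factors, best accumulated in log space:

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical log model evidences for 10 subjects under two DCMs
        log_ev_m1 = rng.normal(-120.0, 5.0, size=10)
        log_ev_m2 = log_ev_m1 + rng.normal(17.0, 8.0, size=10)

        log_bf = log_ev_m2 - log_ev_m1           # per-subject log Bayes factors
        gbf = np.exp(log_bf.sum())               # group Bayes factor, Model 2 vs Model 1
        favoring = int((np.exp(log_bf) > 3.0).sum())  # subjects with positive evidence
        print(gbf, favoring)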

  13. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry-lab simulation practice. After a warm-up period of 2 h, tasks were performed on high-fidelity non-VR models (Uro-scopic Trainer(TM); Endo-Urologie-Modell(TM)) and a high-fidelity VR model (URO Mentor(TM)). The participants were divided equally into three batches, with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and global rating score (GRS). The participants rated a face validity questionnaire at the end of each session. Results: The GRS improved significantly at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model had more improvement in pass ratings on the second rotation, but this did not achieve statistical significance. Most of the realism domains were rated higher for the VR model as compared with the non-VR models, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings, irrespective of VR status.

  14. Comparative study between a QCD inspired model and a multiple diffraction model

    International Nuclear Information System (INIS)

    Luna, E.G.S.; Martini, A.F.; Menon, M.J.

    2003-01-01

    A comparative study between a QCD Inspired Model (QCDIM) and a Multiple Diffraction Model (MDM) is presented, with focus on the results for the pp differential cross section at √s = 52.8 GeV. It is shown that the MDM predictions are in agreement with experimental data, except for the dip region, and that the QCDIM describes only the diffraction peak region. Interpretations in terms of the corresponding eikonals are also discussed. (author)

  15. Mathematical model comparing of the multi-level economics systems

    Science.gov (United States)

    Brykalov, S. M.; Kryanev, A. V.

    2017-12-01

    A mathematical model (scheme) for multi-level comparison of economic systems characterized by systems of indices is worked out. In this model, expert estimates and forecasts of the indicators of the economic system under consideration can be used, and uncertainty in the estimated parameter values or expert estimations can be taken into account. The model uses a multi-criteria approach based on Pareto solutions.
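
    As a hedged illustration of the Pareto-based multi-criteria idea (a schematic sketch, not the authors' scheme; the scores are invented), a dominance filter keeps exactly those systems that no other system matches or beats on every index:

        import numpy as np

        def pareto_front(scores):
            """Indices of non-dominated rows; higher is better on every column."""
            front = []
            for i, row in enumerate(scores):
                dominated = any(
                    np.all(other >= row) and np.any(other > row)
                    for j, other in enumerate(scores) if j != i
                )
                if not dominated:
                    front.append(i)
            return front

        # Hypothetical index scores for five economic systems on three criteria
        scores = np.array([[0.8, 0.6, 0.7],
                           [0.5, 0.9, 0.6],
                           [0.4, 0.4, 0.5],
                           [0.8, 0.6, 0.6],
                           [0.6, 0.7, 0.9]])
        print(pareto_front(scores))   # -> [0, 1, 4]; systems 2 and 3 are dominated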

  16. Comparative study of Moore and Mealy machine models adaptation ...

    African Journals Online (AJOL)

    Information and Communications Technology has influenced the need for automated machines that can carry out important production procedures and, automata models are among the computational models used in design and construction of industrial processes. The production process of the popular African Black Soap ...

  17. Criteria for comparing economic impact models of tourism

    NARCIS (Netherlands)

    Klijs, J.; Heijman, W.J.M.; Korteweg Maris, D.; Bryon, J.

    2012-01-01

    There are substantial differences between models of the economic impacts of tourism. Not only do the nature and precision of results vary, but data demands, complexity and underlying assumptions also differ. Often, it is not clear whether the models chosen are appropriate for the specific situation

  18. Comparative study of Moore and Mealy machine models adaptation

    African Journals Online (AJOL)

    An automata model was developed for the ABS manufacturing process using Moore and Mealy Finite State Machines. Simulation ... The simulation results showed that the Mealy Machine is faster than the Moore ... random numbers from MATLAB.

  19. Cost Valuation: A Model for Comparing Dissimilar Aircraft Platforms

    National Research Council Canada - National Science Library

    Long, Eric J

    2006-01-01

    .... A demonstration of the model's validity using aircraft and cost data from the Predator UAV and the F-16 was then performed to illustrate how it can be used to aid comparisons of dissimilar aircraft...

  20. A Comparative Study of Three Methodologies for Modeling Dynamic Stall

    Science.gov (United States)

    Sankar, L.; Rhee, M.; Tung, C.; ZibiBailly, J.; LeBalleur, J. C.; Blaise, D.; Rouzaud, O.

    2002-01-01

    During the past two decades, there has been an increased reliance on the use of computational fluid dynamics methods for modeling rotors in high-speed forward flight. Computational methods are being developed for modeling the shock-induced loads on the advancing side, first-principles based modeling of the trailing wake evolution, and retreating blade stall. The retreating blade dynamic stall problem has received particular attention, because the large variations in lift and pitching moments encountered in dynamic stall can lead to blade vibrations and pitch link fatigue. Restricting attention to aerodynamics, the numerical prediction of dynamic stall is still a complex and challenging CFD problem that, even in two dimensions at low speed, gathers the major difficulties of aerodynamics, such as the grid resolution requirements for the viscous phenomena at leading-edge bubbles or in mixing layers and the bias of the numerical viscosity, together with the major difficulties of the physical modeling, such as the turbulence and transition models, whose determinant influences, already present in static maximum-lift or stall computations, are emphasized by the dynamic aspect of the phenomena.

  1. Comparing single- and dual-process models of memory development.

    Science.gov (United States)

    Hayes, Brett K; Dunn, John C; Joubert, Amy; Taylor, Robert

    2017-11-01

    This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds and adults, were presented with a list of pictures which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high-threshold signal detection model and several single-process models (equal-variance signal detection, unequal-variance signal detection, mixture signal detection) were fit to the developmental data. The unequal-variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development. © 2016 John Wiley & Sons Ltd.
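
    As a brief aside on the model family at issue (a textbook sketch of the equal-variance signal detection model, not the authors' fitting code; the counts are invented), recognition sensitivity reduces to d' = z(hit rate) - z(false-alarm rate):

        from statistics import NormalDist

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Equal-variance signal detection sensitivity from a 2x2 recognition table."""
            z = NormalDist().inv_cdf
            # Log-linear correction keeps observed rates away from 0 and 1
            h = (hits + 0.5) / (hits + misses + 1.0)
            f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return z(h) - z(f)

        # Hypothetical counts for one age group (80 old items, 80 new items)
        print(d_prime(hits=60, misses=20, false_alarms=15, correct_rejections=65))

    The unequal-variance and mixture variants that won out in this study relax the assumption that old-item and new-item strength distributions share one variance.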

  2. Comparative Analysis Of Three Largest World Models Of Business Excellence

    Directory of Open Access Journals (Sweden)

    Jasminka Samardžija

    2009-07-01

    Business excellence has become the strongest means of achieving competitive advantage for companies, while total quality management has become the road that ensures the support of excellent results, recognized by many world companies. Despite many differences, the models have many common elements. With the revision in 2005, the DP and MBNQA moved the focus from excellence of the product, i.e. service, onto the excellence of the entire organization's processes. Thus, quality acquired a strategic dimension instead of a technical one, and the accent passed from technical quality to the total excellence of all organizational processes. The joint movement goes in the direction of good management and an appreciation of systems thinking. The very structure of the EFQM model criteria is already adjusted to the strategic dimension of quality, which is why the model underwent only minor revisions within the criteria themselves. Essentially, the model remained unchanged. In all models, the accent is on the satisfaction of buyers, employees and the community. National quality awards play an important role in promoting and rewarding excellence in organizational performance. Moreover, they raise the quality standards of companies and the profile of the country as a whole. Considering its GDP per capita and the certification level of its companies, Croatia has all the predispositions for introducing the EFQM model of business excellence, with the basic aim of decreasing the foreign trade deficit and strengthening competitiveness as the necessary groundwork for entering the competitive EU market. Quality management has been introduced in many organizations. The methods used have developed over the years, and a continued evolution of both business excellence models and methods is to be expected.

  3. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC) that incorporates the latent variables in the focus of the analysis and the marginalized DIC (mDIC) that integrates them out. Regardless of the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
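
    For orientation (a minimal sketch under assumed conventions, not the authors' implementation; draw_latents and cond_log_lik are hypothetical callables supplied by the user), the DIC is assembled from MCMC draws, and the marginal log-likelihood needed for mDIC can be approximated by Monte Carlo over replicate draws of the latent variables, in the spirit of the method described above:

        import numpy as np

        def dic(log_lik_draws, log_lik_at_post_mean):
            """DIC from MCMC output: log p(y|theta_s) per draw s, and at the posterior mean."""
            d_bar = -2.0 * np.mean(log_lik_draws)     # posterior mean deviance
            d_hat = -2.0 * log_lik_at_post_mean       # deviance at the posterior mean
            return d_hat + 2.0 * (d_bar - d_hat)      # equals d_bar + p_D

        def marginal_log_lik(y, theta, draw_latents, cond_log_lik, m=200):
            """Monte Carlo estimate of log p(y|theta): average p(y|b_r, theta)
            over m replicate latent draws b_r ~ p(b|theta)."""
            lls = np.array([cond_log_lik(y, draw_latents(theta), theta)
                            for _ in range(m)])
            mx = lls.max()
            return mx + np.log(np.mean(np.exp(lls - mx)))   # log-sum-exp for stability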

  4. COMPARING FINANCIAL DISTRESS PREDICTION MODELS BEFORE AND DURING RECESSION

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2011-02-01

    The purpose of this paper is to design three separate financial distress prediction models that track the changes in the relative importance of financial ratios throughout three consecutive years. The models were based on financial data from 2,000 privately-owned small and medium-sized enterprises in Croatia from 2006 to 2009, and were developed by means of logistic regression. Macroeconomic conditions as well as market dynamics changed over this period. Financial ratios that were less important in one period became more important in the next. The composition of the model starting in 2006 changed in the following years, indicating which financial ratios become more important during an economic downturn. In addition, it helps us to understand the behavior of small and medium-sized enterprises in the pre-recession and recession periods.

  5. Energy modeling and comparative assessment beyond the market

    International Nuclear Information System (INIS)

    Rogner, H.-H.; Langlois, L.; McDonald, A.; Jalal, I.

    2004-01-01

    Market participants engage in constant comparative assessment of prices, available supplies and consumer options. Such implicit comparative assessment is a sine qua non for decision making in, and the smooth functioning of, competitive markets, but it is not always sufficient for policy makers who make decisions based on priorities other than, or in addition to, market prices. Supplementary mechanisms are needed to make explicit, to expose for consideration and to incorporate into their decision-making processes broader factors that are not necessarily reflected directly in the market price of a good or service. These would include, for example, employment, environment, national security or trade considerations. They would also include long-term considerations, e.g., global warming or greatly diminished future supplies of oil and gas. This paper explores different applications of comparative assessment beyond the market, reviews different approaches for accomplishing such evaluations, and presents some tools available for conducting various types of extra-market comparative assessment, including those currently in use by Member States of the IAEA. (author)

  6. A comparative study of independent particle model based ...

    Indian Academy of Sciences (India)

    We find that, among these three independent particle model based methods, the ss-VSCF method provides the most accurate results for the thermal averages, followed by t-SCF, while v-VSCF is the least accurate. However, ss-VSCF is found to be computationally very expensive for large molecules. The t-SCF gives ...

  7. Nature of Science and Models: Comparing Portuguese Prospective Teachers' Views

    Science.gov (United States)

    Torres, Joana; Vasconcelos, Clara

    2015-01-01

    Despite the relevance of nature of science and scientific models in science education, studies reveal that students do not possess adequate views regarding these topics. Bearing in mind that both teachers' views and knowledge strongly influence students' educational experiences, the main scope of this study was to evaluate Portuguese prospective…

  8. Classifying and comparing spatial models of fire dynamics

    Science.gov (United States)

    Geoffrey J. Cary; Robert E. Keane; Mike D. Flannigan

    2007-01-01

    Wildland fire is a significant disturbance in many ecosystems worldwide and the interaction of fire with climate and vegetation over long time spans has major effects on vegetation dynamics, ecosystem carbon budgets, and patterns of biodiversity. Landscape-Fire-Succession Models (LFSMs) that simulate the linked processes of fire and vegetation development in a spatial...

  9. Target normal sheath acceleration analytical modeling, comparative study and developments

    International Nuclear Information System (INIS)

    Perego, C.; Batani, D.; Zani, A.; Passoni, M.

    2012-01-01

    Ultra-intense laser interaction with solid targets appears to be an extremely promising technique for accelerating ions up to several MeV, producing beams that exhibit interesting properties for many foreseen applications. Nowadays, most of the published experimental results can be theoretically explained in the framework of the target normal sheath acceleration (TNSA) mechanism proposed by Wilks et al. [Phys. Plasmas 8(2), 542 (2001)]. As an alternative to numerical simulation, various analytical or semi-analytical TNSA models have been published in recent years, each of them trying to provide predictions for some of the ion beam features, given the initial laser and target parameters. However, the problem of developing a reliable model for the TNSA process is still open, which is why the purpose of this work is to clarify the present situation of TNSA modeling and experimental results by means of a quantitative comparison between measurements and theoretical predictions of the maximum ion energy. Moreover, in the light of such an analysis, some indications for the future development of the model proposed by Passoni and Lontano [Phys. Plasmas 13(4), 042102 (2006)] are presented.

  10. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  11. Activity Modelling and Comparative Evaluation of WSN MAC Security Attacks

    DEFF Research Database (Denmark)

    Pawar, Pranav M.; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2012-01-01

    and initiate security attacks that disturb the normal functioning of the network in a severe manner. Such attacks affect the performance of the network by increasing the energy consumption, by reducing throughput and by inducing long delays. Of all existing WSN attacks, MAC layer attacks are considered the most harmful as they directly affect the available resources and thus the nodes' energy consumption. The first endeavour of this paper is to model the activities of MAC layer security attacks to understand the flow of activities taking place when mounting the attack and when actually executing it. The second aim of the paper is to simulate these attacks on hybrid MAC mechanisms, which shows the performance degradation of a WSN under the considered attacks. The modelling and implementation of the security attacks give an actual view of the network which can be useful in further investigating secure...

  12. Comparing the MOLAP and the ROLAP storage models

    Directory of Open Access Journals (Sweden)

    Marysol N. Tamayo

    2006-09-01

    Data Warehouses (DWs), supported by OLAP, have played a key role in helping company decision-making during the last few years. DWs can be stored in ROLAP and/or MOLAP data storage systems. Data is stored in a relational database in ROLAP and in multidimensional matrices in MOLAP. This paper presents a comparative example, analysing the performance and the advantages and disadvantages of ROLAP and MOLAP in a specific database management system (DBMS). An overview of the DBMS is also given, to show how these technologies are being incorporated.

  13. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Background: The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2,600 gene families) in Buchnera aphidicola to large (around 43,000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion: Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.
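
    For intuition (a minimal sketch of a binomial mixture likelihood, not the authors' estimator; the data and component values are invented), each gene family's occurrence count across G genomes is modelled as a mixture of binomials with component-specific detection probabilities. Real pan-genome fits additionally truncate at zero, since never-observed families are missing by construction, which is where the capture-recapture ideas mentioned above enter:

        import numpy as np
        from scipy.stats import binom

        def mixture_log_lik(counts, weights, detect_probs, n_genomes):
            """Log-likelihood of gene-family occurrence counts under a binomial mixture."""
            comp = np.array([w * binom.pmf(counts, n_genomes, p)
                             for w, p in zip(weights, detect_probs)])
            return float(np.sum(np.log(comp.sum(axis=0))))

        # Hypothetical data: 1,000 gene families across 16 genomes, mixing a
        # near-universal "core" component with a rarely detected "shell" component
        rng = np.random.default_rng(42)
        counts = np.concatenate([rng.binomial(16, 0.98, 600),
                                 rng.binomial(16, 0.15, 400)])
        print(mixture_log_lik(counts, [0.6, 0.4], [0.98, 0.15], 16))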

  14. Comparative analysis of calculation models of railway subgrade

    Directory of Open Access Journals (Sweden)

    I.O. Sviatko

    2013-08-01

    Purpose: In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the nuances of its behaviour under load. When calculating the interaction between the soil subgrade and the upper track structure, it is very important to determine the shear resistance parameters and the parameters governing the development of deep deformations in foundation soils. The goal is a generalized numerical modeling method for embankment foundation soils that includes not only the analysis of the foundation stress state but also of its deformed state. Methodology: An analysis of existing modern and classical methods of numerical simulation of soil samples under static load was made. Findings: Under traditional methods of analysing the behaviour of soil masses, limiting and qualitatively estimating subgrade deformations is possible only indirectly, through estimating the stresses and comparing the obtained values with the boundary ones. Originality: A new computational model is proposed that applies not only the classical analysis of the soil subgrade stress state but also takes the deformed state into account. Practical value: The analysis showed that, for an accurate analysis of the behaviour of soil masses, it is necessary to develop a generalized methodology for analysing the rolling stock - railway subgrade interaction that uses not only the classical approach of analysing the soil subgrade stress state, but also takes into account its deformed state.

  15. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  16. Comparing droplet activation parameterisations against adiabatic parcel models using a novel inverse modelling framework

    Science.gov (United States)

    Partridge, Daniel; Morales, Ricardo; Stier, Philip

    2015-04-01

    Many previous studies have compared droplet activation parameterisations against adiabatic parcel models (e.g. Ghan et al., 2001). However, these have often involved comparisons for a limited number of parameter combinations based upon certain aerosol regimes. Recent studies (Morales et al., 2014) have used wider ranges when evaluating their parameterisations; however, no study has explored the full possible multi-dimensional parameter space that would be experienced by droplet activation within a global climate model (GCM). It is important to be able to efficiently highlight regions of the entire multi-dimensional parameter space in which we can expect the largest discrepancy between parameterisation and cloud parcel models, in order to ascertain which regions simulated by a GCM can be expected to be a less accurate representation of the process of cloud droplet activation. This study provides a new, efficient, inverse modelling framework for comparing droplet activation parameterisations to more complex cloud parcel models. To achieve this we couple a Markov Chain Monte Carlo algorithm (Partridge et al., 2012) to two independent adiabatic cloud parcel models and four droplet activation parameterisations. This framework is computationally faster than employing a brute-force Monte Carlo simulation, and allows us to transparently highlight which parameterisation provides the closest representation across all aerosol physicochemical and meteorological environments. The parameterisations are demonstrated to perform well for a large proportion of possible parameter combinations; however, for certain key parameters, most notably the vertical velocity and accumulation-mode aerosol concentration, large discrepancies are highlighted. These discrepancies correspond to parameter combinations that result in very high/low simulated values of maximum supersaturation. By identifying parameter interactions or regimes within the multi-dimensional parameter space we hope to guide

  17. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. RESULTS: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2,600 gene families) in Buchnera aphidicola to large (around 43,000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely...

  18. Comparative study of cost models for tokamak DEMO fusion reactors

    International Nuclear Information System (INIS)

    Oishi, Tetsutarou; Yamazaki, Kozo; Arimoto, Hideki; Ban, Kanae; Kondo, Takuya; Tobita, Kenji; Goto, Takuya

    2012-01-01

    Cost evaluation analysis of the tokamak-type demonstration reactor DEMO using the PEC (physics-engineering-cost) system code is underway to establish a cost evaluation model for the DEMO reactor design. As a reference case, a DEMO reactor referring to the SSTR (steady state tokamak reactor) was designed using the PEC code. The calculated total capital cost was of the same order as that proposed previously in cost evaluation studies for the SSTR. Design parameter scanning analysis and multi-regression analysis illustrated the effect of the parameters on the total capital cost. The capital cost was predicted to lie within the range of several thousand M$ in this study. (author)

  19. Comparative digital cartilage histology for human and common osteoarthritis models

    Directory of Open Access Journals (Sweden)

    Pedersen DR

    2013-02-01

    Douglas R Pedersen, Jessica E Goetz, Gail L Kurriger, James A Martin, Department of Orthopaedics and Rehabilitation, University of Iowa, Iowa City, IA, USA. Purpose: This study addresses the species-specific and site-specific details of weight-bearing articular cartilage zone depths and chondrocyte distributions among humans and common osteoarthritis (OA) animal models using contemporary digital imaging tools. Histological analysis is the gold-standard research tool for evaluating cartilage health, OA severity, and treatment efficacy. Historically, evaluations were made by expert analysts. However, state-of-the-art tools have been developed that allow for digitization of entire histological sections for computer-aided analysis. Large volumes of common digital cartilage metrics directly complement elucidation of trends in OA inducement and concomitant potential treatments. Materials and methods: Sixteen fresh human knees, 26 adult New Zealand rabbit stifles, and 104 bovine lateral plateaus were measured for four cartilage zones and the cell densities within each zone. Each knee was divided into four weight-bearing sites: the medial and lateral plateaus and femoral condyles. Results: One-way analysis of variance followed by pairwise multiple comparisons (Holm–Sidak method) at a significance of 0.05 clearly confirmed the variability between cartilage depths at each site, between sites in the same species, and between weight-bearing articular cartilage definitions in different species. Conclusion: The present study clearly demonstrates multisite, multispecies differences in normal weight-bearing articular cartilage, which can be objectively quantified by a common digital histology imaging technique. The clear site-specific differences in normal cartilage must be taken into consideration when characterizing the pathoetiology of OA models. Together, these provide a path to consistently analyze the volume and variety of histologic slides necessarily generated

  20. Comparing deep learning models for population screening using chest radiography

    Science.gov (United States)

    Sivaramakrishnan, R.; Antani, Sameer; Candemir, Sema; Xue, Zhiyun; Abuya, Joseph; Kohli, Marc; Alderson, Philip; Thoma, George

    2018-02-01

    According to the World Health Organization (WHO), tuberculosis (TB) remains the most deadly infectious disease in the world. In the 2015 global annual TB report, 1.5 million TB-related deaths were reported. The conditions worsened in 2016, with 1.7 million reported deaths and more than 10 million people infected with the disease. Analysis of frontal chest X-rays (CXR) is one of the most popular methods for initial TB screening; however, the method is impacted by the shortage of experts available to screen chest radiographs. Computer-aided diagnosis (CADx) tools have gained significance because they reduce the human burden in screening and diagnosis, particularly in countries that lack substantial radiology services. State-of-the-art CADx software is typically based on machine learning (ML) approaches that use hand-engineered features, demanding expertise in analyzing the input variances and accounting for changes in the size, background, angle, and position of the region of interest (ROI) in the underlying medical imagery. More automatic Deep Learning (DL) tools have demonstrated promising results in a wide range of ML applications. Convolutional Neural Networks (CNN), a class of DL models, have gained research prominence in image classification, detection, and localization tasks because they are highly scalable and deliver superior results with end-to-end feature extraction and classification. In this study, we evaluated the performance of CNN-based DL models for population screening using frontal CXRs. The results demonstrate that pre-trained CNNs are a promising feature extraction tool for medical imagery, including the automated diagnosis of TB from chest radiographs, but emphasize the importance of large data sets for the most accurate classification.

  1. The chemical induction of seizures in psychiatric therapy: were flurothyl (indoklon) and pentylenetetrazol (metrazol) abandoned prematurely?

    Science.gov (United States)

    Cooper, Kathryn; Fink, Max

    2014-10-01

    Camphor-induced and pentylenetetrazol-induced brain seizures were first used to relieve psychiatric illnesses in 1934. Electrical inductions (electroconvulsive therapy, ECT) followed in 1938. These were easier and less expensive to administer and quickly became the main treatment method. In 1957, seizure induction with the inhalant anesthetic flurothyl was tested and found to be clinically effective. For many decades, complaints of memory loss have stigmatized and inhibited ECT use. Many variations of the electricity in form, electrode placement, dosing, and stimulation method offered some relief, but complaints still limit its use. The experience with chemical inductions of seizures was reviewed, based on searches for reports on each agent in Medline and in the archival files of original studies by the early investigators. Camphor injections were inefficient and were rapidly replaced by pentylenetetrazol. These were effective but difficult to administer. Flurothyl inhalation-induced seizures were as clinically effective as electrical inductions, with lesser effects on memory functions. Flurothyl inductions were discarded because of the persistence of the ethereal aroma and the fear induced in the professional staff that they might seize. Persistent complaints of memory loss plague electricity-induced seizures. Flurothyl-induced seizures are clinically as effective, without the memory effects associated with electricity. Re-examination of seizure inductions using flurothyl in modern anesthesia facilities is encouraged, to relieve medication-resistant patients with mood disorders and catatonia.

  2. Comparative analysis of Klafki and Heimann's didactic models

    Directory of Open Access Journals (Sweden)

    Bojović Žana P.

    2016-01-01

    A comparative analysis of Klafki's didactic thinking, which is based on an analysis of different kinds of theories on the nature of education, and Heimann's didactics, which is based on the theory of teaching and learning, shows that both deal with teaching in its entirety. Both authors emphasize the role of contents, methods, procedures and resources for material and formal education, and both use anthropological and social reality as their starting point. According to Klafki, resources, procedures and methods stand in a relation of dependency, where it is important to know what should be learnt and why, whereas Heimann sees the same elements in a relation of interdependency. Each of the didactic conceptions, from its own point of view, defines the position of goals and tasks in education as well as how to achieve them. Determining and formulating objectives is a complex, responsible and very difficult task, and a goal must be clearly defined, because from it emanate the guidelines for the preparation and planning of didactic-methodological educational programs. The selection of content in didactic-methodological scenarios of education and learning is only possible if the knowledge, skills and abilities that a student is expected to develop are explicitly indicated. The question of educational goals is the main problem of didactics, for only a clearly defined objective implies the selection of appropriate methods and means for its achievement, and it should be a permanent task of current didactic conceptions, now and in the future.

  3. Characterizing cavities in model inclusion molecules: a comparative study.

    Science.gov (United States)

    Torrens, F; Sánchez-Marín, J; Nebot-Gil, I

    1998-04-01

    We have selected the fullerene-60 and -70 cavities as model systems in order to test several methods for characterizing inclusion molecules. The methods are based on different technical foundations, such as a square and triangular tessellation of the molecule taken as a unitary sphere, spherical tessellation of the molecular surface, numerical integration of the atomic volumes and surfaces, triangular tessellation of the molecular surface, and a cubic lattice approach to the molecular space. Accurate measures of the molecular volume and surface area have been performed with the pseudo-random Monte Carlo (MCVS) and uniform Monte Carlo (UMCVS) methods. These calculations serve as a reference for the rest of the methods. The SURMO2 and MS methods did not recognize the cavities and may not be convenient for intercalation compounds. The programs that detected the cavities never exceed 5% deviation relative to the reference values for molecular volume and surface area. The GEPOL algorithm, alone or combined with TOPO, shows results in good agreement with those of the UMCVS reference. The uniform random number generator provides the fastest convergence for UMCVS and a correct estimate of the standard deviations. The effect of the internal cavity on the accessible surfaces has been calculated.

  4. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    Science.gov (United States)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. Evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level result in different population level outcomes under different conditions. Extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes to find that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and sense-making of observed trends are of a different character. Builders notice rules through available blocks-based primitives, often bypassing their enactment while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre and posttests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both

  5. A comparative analysis of several vehicle emission models for road freight transportation

    NARCIS (Netherlands)

    Demir, E.; Bektas, T.; Laporte, G.

    2011-01-01

    Reducing greenhouse gas emissions in freight transportation requires using appropriate emission models in the planning process. This paper reviews and numerically compares several available freight transportation vehicle emission models and also considers their outputs in relation to field studies.

  6. Comparative analysis of methods and tools for open and closed fuel cycles modeling: MESSAGE and DESAE

    International Nuclear Information System (INIS)

    Andrianov, A.A.; Korovin, Yu.A.; Murogov, V.M.; Fedorova, E.V.; Fesenko, G.A.

    2006-01-01

    A comparative analysis of optimization and simulation methods, taking the MESSAGE and DESAE programs as examples, is carried out for modeling nuclear power prospects and advanced fuel cycles. Test calculations for an open and a two-component nuclear power system and a closed fuel cycle are performed. An auxiliary simulation-dynamic model is developed to pinpoint the differences between the MESSAGE and DESAE modeling approaches. A description of the model is given.

  7. Comparing of four IRT models when analyzing two tests for inductive reasoning

    NARCIS (Netherlands)

    de Koning, E.; Sijtsma, K.; Hamers, J.H.M.

    2002-01-01

    This article discusses the use of the nonparametric IRT Mokken models of monotone homogeneity and double monotonicity and the parametric Rasch and Verhelst models for the analysis of binary test data. First, the four IRT models are discussed and compared at the theoretical level, and for each model,

  8. Jackson System Development, Entity-relationship Analysis and Data Flow Models: a comparative study

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1994-01-01

    This report compares JSD with ER modeling and data flow modeling. It is shown that JSD can be combined with ER modeling and that the result is a richer method than either of the two. The resulting method can serve as a basis for a practical object-oriented modeling method and has some resemblance to

  9. A Comparative Study of Neural Networks and Fuzzy Systems in Modeling of a Nonlinear Dynamic System

    Directory of Open Access Journals (Sweden)

    Metin Demirtas

    2011-07-01

    The aim of this paper is to compare the neural network and fuzzy modeling approaches on a nonlinear system. We have taken Permanent Magnet Brushless Direct Current (PMBDC) motor data and have generated models using both approaches. The predictive performance of both methods was compared on the data set across model configurations. The paper describes the results of these tests and discusses the effects of changing model parameters on predictive and practical performance. Modeling sensitivity was used to compare the two methods.

  10. Comparing Apples to Apples: Paleoclimate Model-Data comparison via Proxy System Modeling

    Science.gov (United States)

    Dee, Sylvia; Emile-Geay, Julien; Evans, Michael; Noone, David

    2014-05-01

    The wealth of paleodata spanning the last millennium (hereinafter LM) provides an invaluable testbed for CMIP5-class GCMs. However, comparing GCM output to paleodata is non-trivial. High-resolution paleoclimate proxies generally contain a multivariate and non-linear response to regional climate forcing. Disentangling the multivariate environmental influences on proxies like corals, speleothems, and trees can be complex due to spatiotemporal climate variability, non-stationarity, and threshold dependence. Given these and other complications, many paleodata-GCM comparisons take a leap of faith, relating climate fields (e.g. precipitation, temperature) to geochemical signals in proxy data (e.g. δ18O in coral aragonite or ice cores) (e.g. Braconnot et al., 2012). Isotope-enabled GCMs are a step in the right direction, with water isotopes providing a connector point between GCMs and paleodata. However, such studies are still rare, and isotope fields are not archived as part of LM PMIP3 simulations. More importantly, much of the complexity in how proxy systems record and transduce environmental signals remains unaccounted for. In this study we use proxy system models (PSMs, Evans et al., 2013) to bridge this conceptual gap. A PSM mathematically encodes the mechanistic understanding of the physical, geochemical and, sometimes biological influences on each proxy. To translate GCM output to proxy space, we have synthesized a comprehensive, consistently formatted package of published PSMs, including δ18O in corals, tree ring cellulose, speleothems, and ice cores. Each PSM is comprised of three sub-models: sensor, archive, and observation. For the first time, these different components are coupled together for four major proxy types, allowing uncertainties due to both dating and signal interpretation to be treated within a self-consistent framework. The output of this process is an ensemble of many (say N = 1,000) realizations of the proxy network, all equally plausible

  11. Comparing i-Tree modeled ozone deposition with field measurements in a periurban Mediterranean forest

    Science.gov (United States)

    A. Morani; D. Nowak; S. Hirabayashi; G. Guidolotti; M. Medori; V. Muzzini; S. Fares; G. Scarascia Mugnozza; C. Calfapietra

    2014-01-01

    Ozone flux estimates from the i-Tree model were compared with ozone flux measurements using the Eddy Covariance technique in a periurban Mediterranean forest near Rome (Castelporziano). For the first time i-Tree model outputs were compared with field measurements in relation to dry deposition estimates. Results showed generally a...

  12. Comparing the Applicability of Commonly Used Hydrological Ecosystem Services Models for Integrated Decision-Support

    Directory of Open Access Journals (Sweden)

    Anna Lüke

    2018-01-01

    Different simulation models are used in science and practice in order to incorporate hydrological ecosystem services in decision-making processes. This contribution compares three simulation models: the Soil and Water Assessment Tool, a traditional hydrological model, and two ecosystem services models, the Integrated Valuation of Ecosystem Services and Trade-offs model and the Resource Investment Optimization System model. The three models are compared on a theoretical and conceptual basis as well as in a comparative case study application. The application of the models to a study area in Nicaragua reveals that there is generally a practical benefit in applying these models to different questions in decision-making. However, modelling of hydrological ecosystem services is associated with a high application effort and requires input data that may not always be available. The degree of detail in the temporal and spatial variability of ecosystem service provision is higher when using the Soil and Water Assessment Tool compared to the two ecosystem service models. In contrast, the ecosystem service models have lower requirements on input data and process knowledge. A relationship between service provision and beneficiaries is readily produced and can be visualized as a model output. The visualization is especially useful in a practical decision-making context.

  13. Comparative evaluation of life cycle assessment models for solid waste management

    International Nuclear Information System (INIS)

    Winkler, Joerg; Bilitewski, Bernd

    2007-01-01

    This publication compares a selection of six different models developed in Europe and America by research organisations, industry associations and governmental institutions. The comparison of the models reveals the variations in the results and the differences in the conclusions of an LCA study done with these models. The models are compared by modelling a specific case - the waste management system of Dresden, Germany - with each model and by an in-detail comparison of the life cycle inventory results. Moreover, a life cycle impact assessment shows whether the LCA results of each model allow for comparable and consistent conclusions that do not contradict the conclusions derived from the other models' results. Furthermore, the influence of different levels of detail in the life cycle inventory on the life cycle assessment is demonstrated. The model comparison revealed that the variations in the LCA results calculated by the models for this case are high and not negligible. In some cases the high variations in results lead to contradictory conclusions concerning the environmental performance of the waste management processes. The static, linear modelling approach chosen by all the models analysed is inappropriate for reflecting actual conditions. Moreover, it was found that although the models' approach to LCA is comparable on a general level, the level of detail implemented in the software tools is very different.

  14. A comparative study of two fast nonlinear free-surface water wave models

    DEFF Research Database (Denmark)

    Ducrozet, Guillaume; Bingham, Harry B.; Engsig-Karup, Allan Peter

    2012-01-01

    simply directly solves the three-dimensional problem. Both models have been well validated on standard test cases and shown to exhibit attractive convergence properties and an optimal scaling of the computational effort with increasing problem size. These two models are compared for solution of a typical...... used in OceanWave3D, the closer the results come to the HOS model....

  15. A comparative study of two phenomenological models of dephasing in series and parallel resistors

    International Nuclear Information System (INIS)

    Bandopadhyay, Swarnali; Chaudhuri, Debasish; Jayannavar, Arun M.

    2010-01-01

    We compare two recent phenomenological models of dephasing using a double barrier and a quantum ring geometry. While the stochastic absorption model generates controlled dephasing leading to Ohm's law for large dephasing strengths, a statistical model based on Gaussian random phases shows many inconsistencies.

  16. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    Science.gov (United States)

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  17. Comparing supply-side specifications in models of global agriculture and the food system

    NARCIS (Netherlands)

    Robinson, S.; Meijl, van J.C.M.; Willenbockel, D.; Valin, H.; Fujimori, S.; Masui, T.; Sands, R.; Wise, M.; Calvin, K.V.; Mason d'Croz, D.; Tabeau, A.A.; Kavallari, A.; Schmitz, C.; Dietrich, J.P.; Lampe, von M.

    2014-01-01

    This article compares the theoretical and functional specification of production in partial equilibrium (PE) and computable general equilibrium (CGE) models of the global agricultural and food system included in the AgMIP model comparison study. The two model families differ in their scope—partial

  18. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Science.gov (United States)

    Daminov, Ildar; Tarasova, Ekaterina; Andreeva, Tatyana; Avazov, Artur

    2016-02-01

    This paper compares smart meter deployment business models to determine the most suitable option for providing smart meter deployment. The authors consider three main business models: the distribution grid company, the energy supplier (energosbyt) and the metering company. The goal of the article is to compare the business models of power companies for a massive smart meter rollout in the power system of the Russian Federation.

  19. Prospective comparative effectiveness cohort study comparing two models of advance care planning provision for Australian community aged care clients.

    Science.gov (United States)

    Detering, Karen Margaret; Carter, Rachel Zoe; Sellars, Marcus William; Lewis, Virginia; Sutton, Elizabeth Anne

    2017-12-01

    To conduct a prospective comparative effectiveness cohort study comparing two models of advance care planning (ACP) provision in community aged care over a 6-month period: ACP conducted by the client's case manager (CM) ('Facilitator') and ACP conducted by an external ACP service ('Referral'). This Australian study involved CMs and their clients. Eligible CMs were English-speaking, ≥18 years, had expected availability for the trial and worked ≥3 days per week. CMs were recruited via their organisations, sequentially allocated to a group and received education based on the group allocation. They were expected to initiate ACP with all clients and to facilitate ACP or refer for ACP. Outcomes were the quantity of new ACP conversations and the quantity and quality of new advance care directives (ACDs). 30 CMs (16 Facilitator, 14 Referral) completed the study; all 784 clients' files (427 Facilitator, 357 Referral) were audited. ACP was initiated with 508 (65%) clients (293 Facilitator, 215 Referral; p<0.05); 89 (18%) of these (53 Facilitator, 36 Referral) and 41 (46%) (13 Facilitator, 28 Referral; p<0.005) completed ACDs. Most ACDs (71%) were of poor quality/not valid. A further 167 clients (124 Facilitator, 43 Referral; p<0.005) reported ACP was in progress at study completion. While there were some differences, overall the models achieved similar outcomes. ACP was initiated with 65% of clients. However, fewer clients completed ACP, the number of ACDs was low and document quality was generally poor. The findings raise questions for future implementation and research into community ACP provision. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. Case management: a randomized controlled study comparing a neighborhood team and a centralized individual model.

    OpenAIRE

    Eggert, G M; Zimmer, J G; Hall, W J; Friedman, B

    1991-01-01

    This randomized controlled study compared two types of case management for skilled nursing level patients living at home: the centralized individual model and the neighborhood team model. The team model differed from the individual model in that team case managers performed client assessments, care planning, some direct services, and reassessments; they also had much smaller caseloads and were assigned a specific catchment area. While patients in both groups incurred very high estimated healt...

  1. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    Science.gov (United States)

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best-suited freely available software for modelling of proteins, taking a few sample proteins. The proteins used ranged from small to big in size, with available crystal structures, for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-v2 and ModWeb were used for the comparison and model generation. The benchmarking process was done for four proteins: Icl, InhA and KatG of Mycobacterium tuberculosis, and RpoB of Thermus thermophilus, to identify the most suited software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model make good models of small and large proteins as compared to the other screened software. The other software was also good, but is often not very efficient in providing a full-length and properly folded structure.

  2. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in field study. These prediction models were....../s, the expectancy factors for the extended PMV model and the extended SET model were from 0.770 to 0.974 and from 1.330 to 1.363, and the adaptive coefficients for the adaptive PMV model and the adaptive SET model were from 0.029 to 0.167 and from-0.213 to-0.195. In addition, the difference in thermal sensation...... between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...

  3. Comparative studies on constitutive models for cohesive interface cracks of quasi-brittle materials

    International Nuclear Information System (INIS)

    Shen Xinpu; Shen Guoxiao; Zhou Lin

    2005-01-01

    Concerning the modelling of the quasi-brittle fracture process zone at interface cracks in quasi-brittle materials and structures, this paper compares typical constitutive models of interface cracks. Numerical calculations of the constitutive behaviour of the selected models were carried out at the local level. Aiming at the simulation of quasi-brittle fracture in concrete-like materials and structures, the qualitative comparison of the selected cohesive models emphasizes: (1) the fundamental mode I and mode II behaviour of the selected models; and (2) the dilatancy properties of the selected models under mixed-mode fracture loading conditions. (authors)

  4. Assessment and Challenges of Ligand Docking into Comparative Models of G-Protein Coupled Receptors

    DEFF Research Database (Denmark)

    Nguyen, E.D.; Meiler, J.; Norn, C.

    2013-01-01

    screening and to design and optimize drug candidates. However, low sequence identity between receptors, conformational flexibility, and chemical diversity of ligands present an enormous challenge to molecular modeling approaches. It is our hypothesis that rapid Monte-Carlo sampling of protein backbone...... extracellular loop. Furthermore, these models are consistently correlated with low Rosetta energy score. To predict their binding modes, ligand conformers of the 14 ligands co-crystalized with the GPCRs were docked against the top ranked comparative models. In contrast to the comparative models themselves...

  5. Modeling Mixed Bicycle Traffic Flow: A Comparative Study on the Cellular Automata Approach

    Directory of Open Access Journals (Sweden)

    Dan Zhou

    2015-01-01

    Full Text Available Simulation, as a powerful tool for evaluating transportation systems, has been widely used in transportation planning, management, and operations. Most of the simulation models are focused on motorized vehicles, and the modeling of nonmotorized vehicles is ignored. The cellular automata (CA model is a very important simulation approach and is widely used for motorized vehicle traffic. The Nagel-Schreckenberg (NS CA model and the multivalue CA (M-CA model are two categories of CA model that have been used in previous studies on bicycle traffic flow. This paper improves on these two CA models and also compares their characteristics. It introduces a two-lane NS CA model and M-CA model for both regular bicycles (RBs and electric bicycles (EBs. In the research for this paper, many cases, featuring different values for the slowing down probability, lane-changing probability, and proportion of EBs, were simulated, while the fundamental diagrams and capacities of the proposed models were analyzed and compared between the two models. Field data were collected for the evaluation of the two models. The results show that the M-CA model exhibits more stable performance than the two-lane NS model and provides results that are closer to real bicycle traffic.
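
    As an illustration of the update rule underlying NS-type models, the sketch below implements a minimal single-lane Nagel-Schreckenberg step in Python. It is not the paper's two-lane RB/EB model: the lane-changing rules and the multi-value occupancy of the M-CA variant are omitted, and all names and parameters here are illustrative assumptions; RBs and EBs would differ mainly in the maximum speed vmax passed in.

      import random

      def ns_step(positions, speeds, vmax, p_slow, length):
          # One Nagel-Schreckenberg update on a circular single lane.
          # positions: sorted cell indices of the bicycles;
          # speeds: their current speeds in cells/step;
          # p_slow: random slowdown probability.
          n = len(positions)
          new_speeds = []
          for i in range(n):
              # free cells to the next bicycle ahead (periodic boundary)
              gap = (positions[(i + 1) % n] - positions[i] - 1) % length
              v = min(speeds[i] + 1, vmax)          # acceleration
              v = min(v, gap)                       # collision avoidance
              if v > 0 and random.random() < p_slow:
                  v -= 1                            # random slowdown
              new_speeds.append(v)
          new_positions = [(x + v) % length for x, v in zip(positions, new_speeds)]
          return new_positions, new_speeds

    Iterating this step across a range of densities and recording mean flow is what produces the fundamental diagrams the paper uses for comparison.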

  6. Comparing Epileptiform Behavior of Mesoscale Detailed Models and Population Models of Neocortex

    NARCIS (Netherlands)

    Visser, S.; Meijer, Hil Gaétan Ellart; Lee, Hyong C.; van Drongelen, Wim; van Putten, Michel Johannes Antonius Maria; van Gils, Stephanus A.

    2010-01-01

    Two models of the neocortex are developed to study normal and pathologic neuronal activity. One model contains a detailed description of a neocortical microcolumn represented by 656 neurons, including superficial and deep pyramidal cells, four types of inhibitory neurons, and realistic synaptic

  7. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, Watts–Strogatz small world model, Albert–Barabási preferential attachment model, Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
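
    The core comparison step — computing the entropy of a centrality distribution and matching an empirical graph to the closest model — can be sketched in a few lines of Python with networkx. This is a simplification, not the authors' code: it uses only degree centrality, a single realization per model, and arbitrary stand-in graphs and parameters.

      import math
      from collections import Counter

      import networkx as nx

      def degree_entropy(g):
          # Shannon entropy (bits) of the degree distribution of g
          counts = Counter(d for _, d in g.degree())
          n = g.number_of_nodes()
          return -sum((c / n) * math.log2(c / n) for c in counts.values())

      empirical = nx.karate_club_graph()  # stand-in for an empirical graph
      candidates = {
          "erdos_renyi": nx.gnm_random_graph(34, 78, seed=1),
          "watts_strogatz": nx.watts_strogatz_graph(34, 4, 0.1, seed=1),
          "barabasi_albert": nx.barabasi_albert_graph(34, 2, seed=1),
      }
      target = degree_entropy(empirical)
      best = min(candidates, key=lambda k: abs(degree_entropy(candidates[k]) - target))
      print(best)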

  8. A comparative study on effective dynamic modeling methods for flexible pipe

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Ho; Hong, Sup; Kim, Hyung Woo [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of); Kim, Sung Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-07-15

    In this paper, in order to select a suitable method applicable to the large-deflection, small-strain problem of pipe systems in the deep-seabed mining system, the finite difference method with lumped mass, from the field of cable dynamics, and the substructure method, from the field of flexible multibody dynamics, were compared. Due to the difficulty of obtaining experimental results from an actual pipe system in the deep-seabed mining system, a thin cantilever beam model with experimental results was employed for the comparative study. The accuracy of the methods was investigated by comparing the experimental results with simulation results from the cantilever beam model using different numbers of elements. The efficiency of the methods was also examined by comparing the operational counts required for solving the equations of motion. Finally, this cantilever beam model, together with the comparative study results, can serve as a benchmark problem for flexible multibody dynamics.

  9. Comparative study of boron transport models in NRC Thermal-Hydraulic Code Trace

    Energy Technology Data Exchange (ETDEWEB)

    Olmo-Juan, Nicolás; Barrachina, Teresa; Miró, Rafael; Verdú, Gumersindo; Pereira, Claubia, E-mail: nioljua@iqn.upv.es, E-mail: tbarrachina@iqn.upv.es, E-mail: rmiro@iqn.upv.es, E-mail: gverdu@iqn.upv.es, E-mail: claubia@nuclear.ufmg.br [Institute for Industrial, Radiophysical and Environmental Safety (ISIRYM). Universitat Politècnica de València (Spain); Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2017-07-01

    Recently, interest in the study of various types of transients involving changes in the boron concentration inside the reactor has led to increased interest in developing and studying new models and tools that allow a correct study of boron transport. Accordingly, a significant variety of boron transport models and spatial difference schemes are available in thermal-hydraulic codes such as TRACE. In this work, the results obtained using the different boron transport models implemented in the NRC thermal-hydraulic code TRACE are compared. To do this, a set of models has been created using the different options and configurations that could influence boron transport. These models reproduce a simple event of filling or emptying the boron concentration in a long pipe. Moreover, with the aim of comparing the differences that arise when one-dimensional or three-dimensional components are chosen, many different cases were modeled using only pipe components or a mix of pipe and vessel components. In addition, the influence of the void fraction on boron transport has been studied and compared under conditions close to a commercial BWR model. Finally, the different cases and boron transport models are compared with each other and with the analytical solution provided by the Burgers equation. From this comparison, important conclusions are drawn that will be the basis for modeling boron transport in TRACE adequately. (author)
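
    As background to why such comparisons matter, the toy Python sketch below advects a sharp boron front with a first-order upwind scheme and measures its deviation from the exact solution for pure advection; the smearing it exhibits (numerical diffusion) is exactly the kind of scheme-dependent artifact that benchmarking against an analytical solution is meant to expose. Grid sizes and velocities are illustrative, not taken from the TRACE models.

      import numpy as np

      n_cells, n_steps = 200, 150
      dx, u, dt = 1.0, 1.0, 0.5          # CFL = u*dt/dx = 0.5 (stable)
      c = np.zeros(n_cells)
      c[:20] = 1.0                       # initial slug of borated water

      for _ in range(n_steps):
          c[1:] -= u * dt / dx * (c[1:] - c[:-1])   # first-order upwind update

      x = np.arange(n_cells) * dx
      front = 20 * dx + u * n_steps * dt            # exact front position
      exact = np.where(x < front, 1.0, 0.0)
      print("max |numerical - exact| =", np.abs(c - exact).max())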

  10. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Full Text Available Abstract Background Computational models of protein structures have been proven useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results For our dataset of 615 search models, the real local accuracy of a model increases the MR success ratio by 101% compared to corresponding polyalanine templates. On the contrary, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy, MetaMQAPclust, a “clustering MQAP”, was used. Conclusions Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict if an MR search with a template-based model for a given template is likely to find the correct solution.

  11. Animal Models for Evaluation of Bone Implants and Devices: Comparative Bone Structure and Common Model Uses.

    Science.gov (United States)

    Wancket, L M

    2015-09-01

    Bone implants and devices are a rapidly growing field within biomedical research, and implants have the potential to significantly improve human and animal health. Animal models play a key role in initial product development and are important components of nonclinical data included in applications for regulatory approval. Pathologists are increasingly being asked to evaluate these models at the initial developmental and nonclinical biocompatibility testing stages, and it is important to understand the relative merits and deficiencies of various species when evaluating a new material or device. This article summarizes characteristics of the most commonly used species in studies of bone implant materials, including detailed information about the relevance of a particular model to human bone physiology and pathology. Species reviewed include mice, rats, rabbits, guinea pigs, dogs, sheep, goats, and nonhuman primates. Ultimately, a comprehensive understanding of the benefits and limitations of different model species will aid in rigorously evaluating a novel bone implant material or device. © The Author(s) 2015.

  12. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Directory of Open Access Journals (Sweden)

    Erin E Poor

    Full Text Available Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.

  13. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Science.gov (United States)

    Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.
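
    A minimal version of the least-cost corridor idea can be sketched with networkx: treat the landscape as a grid graph, weight edges by a cost surface (in the paper, derived from Maxent or expert-based suitability), and extract the cheapest path between seasonal range endpoints. The random raster and endpoints below are placeholders, and the study widens such paths into corridors of varying width rather than using a single path.

      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(0)
      cost = rng.uniform(1.0, 10.0, size=(50, 50))   # stand-in for 1 - suitability

      g = nx.grid_2d_graph(*cost.shape)              # 4-connected landscape grid
      for a, b in g.edges():
          g.edges[a, b]["weight"] = 0.5 * (cost[a] + cost[b])  # mean cell cost

      # cheapest route between hypothetical seasonal range endpoints
      path = nx.shortest_path(g, source=(0, 0), target=(49, 49), weight="weight")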

  14. Effects of stimulus order on discrimination processes in comparative and equality judgements: data and models.

    Science.gov (United States)

    Dyjas, Oliver; Ulrich, Rolf

    2014-01-01

    In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.
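
    To make the weighted-difference idea concrete, the sketch below computes psychometric functions for a comparative duration judgement under a basic Sensation Weighting rule: the first-presented stimulus receives a slightly larger weight, so both the point of subjective equality and the slope depend on presentation order. All parameter values are illustrative assumptions, and the Internal Reference Model's trial-by-trial updating is not included.

      import numpy as np
      from scipy.stats import norm

      def p_comp_larger(c, s=500.0, w_first=1.05, w_second=1.0, sigma=50.0,
                        standard_first=True):
          # weighted-difference decision rule: the first-presented stimulus
          # gets weight w_first, the second w_second
          if standard_first:
              d = w_second * c - w_first * s
          else:
              d = w_first * c - w_second * s
          return norm.cdf(d / sigma)

      c = np.linspace(350.0, 650.0, 13)               # comparison durations (ms)
      pf_standard_first = p_comp_larger(c, standard_first=True)
      pf_comparison_first = p_comp_larger(c, standard_first=False)

    The two orders give different slopes (w_second/sigma versus w_first/sigma) and shifted points of subjective equality, i.e., a Type B effect together with a time-order error.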

  15. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  16. A comparative modeling study of a dual tracer experiment in a large lysimeter under atmospheric conditions

    Science.gov (United States)

    Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.

    2009-09-01

    In this paper, five model approaches with different physical and mathematical concepts, varying in their complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of the water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic approach (stream tube model), three lumped-parameter approaches (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes can also be described assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variable-saturation flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches gave reliable results.
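
    For the lumped-parameter end of this model spectrum, a breakthrough curve can be generated from the classical Ogata-Banks analytical solution of the 1D advection-dispersion equation, sketched below in Python; fitting its velocity and dispersion parameters to measured bromide or deuterium concentrations is essentially what the simpler approaches in the paper do. The depth, velocity and dispersion values are placeholders.

      import numpy as np
      from scipy.special import erfc

      def breakthrough(t, x, v, D, c0=1.0):
          # Ogata-Banks solution: concentration at depth x and time t
          # for a continuous step input of concentration c0
          t = np.asarray(t, dtype=float)
          a = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
          b = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
          return 0.5 * c0 * (a + b)

      t = np.linspace(1.0, 800.0, 400)            # days
      c = breakthrough(t, x=200.0, v=0.4, D=8.0)  # cm, cm/day, cm^2/day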

  17. What can be learned from computer modeling? Comparing expository and modeling approaches to teaching dynamic systems behavior

    NARCIS (Netherlands)

    van Borkulo, S.P.; van Joolingen, W.R.; Savelsbergh, E.R.; de Jong, T.

    2012-01-01

    Computer modeling has been widely promoted as a means to attain higher order learning outcomes. Substantiating these benefits, however, has been problematic due to a lack of proper assessment tools. In this study, we compared computer modeling with expository instruction, using a tailored assessment

  18. Comparing Video Modeling and Graduated Guidance Together and Video Modeling Alone for Teaching Role Playing Skills to Children with Autism

    Science.gov (United States)

    Akmanoglu, Nurgul; Yanardag, Mehmet; Batu, E. Sema

    2014-01-01

    Teaching play skills is important for children with autism. The purpose of the present study was to compare effectiveness and efficiency of providing video modeling and graduated guidance together and video modeling alone for teaching role playing skills to children with autism. The study was conducted with four students. The study was conducted…

  19. Experience gained with the application of the MODIS diffusion model compared with the ATMOS Gauss-function-based model

    International Nuclear Information System (INIS)

    Mueller, A.

    1985-01-01

    The advantage of the Gauss-function-based models doubtless consists in their proven propagation parameter sets and empirical stack plume rise formulas, and in their easy adaptability and handling. However, grid models based on the trace matter transport equation are more convincing in their fundamental principle. Grid models of the MODIS type can acquire a practical applicability comparable to Gauss models by developing techniques that allow the vertical self-movement of plumes to be considered in grid models and that secure an improved determination of the diffusion coefficients. (orig./PW) [de]
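
    For reference, the kind of calculation a Gauss-function-based model such as ATMOS performs can be sketched as the standard Gaussian plume formula with ground reflection; the sigma values would normally come from the proven propagation parameter sets mentioned above, evaluated at the downwind distance of interest. All numbers below are illustrative only.

      import numpy as np

      def gaussian_plume(y, z, q, u, h, sigma_y, sigma_z):
          # concentration (g/m^3) with total ground reflection; sigma_y and
          # sigma_z are dispersion parameters (m) already evaluated at the
          # downwind distance of interest
          lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
          vertical = (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                      + np.exp(-0.5 * ((z + h) / sigma_z) ** 2))
          return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

      c_ground = gaussian_plume(y=0.0, z=0.0, q=10.0, u=5.0, h=100.0,
                                sigma_y=80.0, sigma_z=40.0)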

  20. Model predictions of metal speciation in freshwaters compared to measurements by in situ techniques.

    NARCIS (Netherlands)

    Unsworth, Emily R; Warnken, Kent W; Zhang, Hao; Davison, William; Black, Frank; Buffle, Jacques; Cao, Jun; Cleven, Rob; Galceran, Josep; Gunkel, Peggy; Kalis, Erwin; Kistler, David; Leeuwen, Herman P van; Martin, Michel; Noël, Stéphane; Nur, Yusuf; Odzak, Niksa; Puy, Jaume; Riemsdijk, Willem van; Sigg, Laura; Temminghoff, Erwin; Tercier-Waeber, Mary-Lou; Toepperwien, Stefanie; Town, Raewyn M; Weng, Liping; Xue, Hanbin

    2006-01-01

    Measurements of trace metal species in situ in a softwater river, a hardwater lake, and a hardwater stream were compared to the equilibrium distribution of species calculated using two models, WHAM 6, incorporating humic ion binding model VI and visual MINTEQ incorporating NICA-Donnan. Diffusive

  1. The Development of Working Memory: Further Note on the Comparability of Two Models of Working Memory.

    Science.gov (United States)

    de Ribaupierre, Anik; Bailleux, Christine

    2000-01-01

    Summarizes similarities and differences between the working memory models of Pascual-Leone and Baddeley. Debates whether each model makes a specific contribution to explanation of Kemps, De Rammelaere, and Desmet's results. Argues for necessity of theoretical task analyses. Compares a study similar to that of Kemps et al. in which different…

  2. Comparative Effectiveness of Echoic and Modeling Procedures in Language Instruction With Culturally Disadvantaged Children.

    Science.gov (United States)

    Stern, Carolyn; Keislar, Evan

    In an attempt to explore a systematic approach to language expansion and improved sentence structure, echoic and modeling procedures for language instruction were compared. Four hypotheses were formulated: (1) children who use modeling procedures will produce better structured sentences than children who use echoic prompting, (2) both echoic and…

  3. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    Science.gov (United States)

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  4. Comparative nonlinear modeling of renal autoregulation in rats: Volterra approach versus artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Holstein-Rathlou, N H; Marsh, D J

    1998-01-01

    kernel estimation method based on Laguerre expansions. The results for the two types of artificial neural networks and the Volterra models are comparable in terms of normalized mean square error (NMSE) of the respective output prediction for independent testing data. However, the Volterra models obtained...
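
    The NMSE figure of merit used for this comparison can be computed as below; this is one common convention (normalizing the mean square prediction error by the output variance), and the paper's exact normalization may differ.

      import numpy as np

      def nmse(y_true, y_pred):
          # mean square prediction error normalized by the output variance
          y_true = np.asarray(y_true, dtype=float)
          y_pred = np.asarray(y_pred, dtype=float)
          return np.mean((y_true - y_pred) ** 2) / np.var(y_true)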

  5. COMPARING THE UTILITY OF MULTIMEDIA MODELS FOR HUMAN AND ECOLOGICAL EXPOSURE ANALYSIS: TWO CASES

    Science.gov (United States)

    A number of models are available for exposure assessment; however, few are used as tools for both human and ecosystem risks. This discussion will consider two modeling frameworks that have recently been used to support human and ecological decision making. The study will compare ...

  6. Comparing fire spread algorithms using equivalence testing and neutral landscape models

    Science.gov (United States)

    Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson

    2009-01-01

    We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...
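
    Equivalence testing inverts the usual hypothesis test: instead of asking whether two models differ, it asks whether their difference stays within a pre-set margin. A minimal two-one-sided-tests (TOST) sketch is given below; it uses a Welch-style standard error with a simplified degrees-of-freedom choice, and in the paper's setting x and y would be burn-pattern metrics from the two fire models on the same neutral landscapes, with delta an agreed equivalence margin.

      import numpy as np
      from scipy import stats

      def tost(x, y, delta):
          # Two one-sided tests for equivalence of means within +/- delta.
          # Returns the larger one-sided p-value; equivalence is declared
          # when it falls below alpha.
          x, y = np.asarray(x, float), np.asarray(y, float)
          diff = x.mean() - y.mean()
          se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
          df = len(x) + len(y) - 2                        # simplified df
          p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
          p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
          return max(p_lower, p_upper)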

  7. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum-information-entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. This choice is therefore critical for the robustness of structural model updating, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
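
    Strategy 3 above — treating the prediction-error variance as an uncertain parameter — can be illustrated with a toy Metropolis sampler in Python, where a scalar "model parameter" and the noise variance are updated jointly. The paper uses Transitional MCMC and finite-element models; everything below (data, flat priors, proposal widths) is an illustrative stand-in.

      import numpy as np

      rng = np.random.default_rng(1)
      y_obs = 2.0 + 0.3 * rng.standard_normal(20)   # synthetic data, true k = 2

      def log_post(k, s2):
          if s2 <= 0:
              return -np.inf
          # Gaussian likelihood with unknown variance, flat priors (illustrative)
          return -0.5 * len(y_obs) * np.log(s2) - 0.5 * np.sum((y_obs - k) ** 2) / s2

      k, s2 = 1.0, 1.0
      chain = []
      for _ in range(5000):
          k_p = k + 0.1 * rng.standard_normal()       # symmetric random-walk proposal
          s2_p = s2 + 0.05 * rng.standard_normal()
          if np.log(rng.uniform()) < log_post(k_p, s2_p) - log_post(k, s2):
              k, s2 = k_p, s2_p
          chain.append((k, s2))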

  8. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
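
    The surrogate idea reduces to fitting a fast regression on (input parameters, simulator output) pairs and querying it inside the optimization loop. Below is a hedged SVR sketch with scikit-learn; KELM has no standard scikit-learn implementation, and the synthetic function stands in for the expensive DNAPL transport simulator that would generate the training data in practice.

      import numpy as np
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 1.0, size=(200, 4))          # e.g., source location/flux
      y = np.sin(X @ np.array([3.0, 1.0, 2.0, 0.5]))    # stand-in for simulator output

      # parameter optimization via cross-validated grid search
      search = GridSearchCV(SVR(kernel="rbf"),
                            {"C": [1, 10, 100], "gamma": [0.1, 1, 10]}, cv=5)
      search.fit(X, y)
      y_new = search.predict(rng.uniform(0.0, 1.0, size=(5, 4)))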

  9. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing...... the probability of generating a given string, or computing the most likely path generating a given string. In this paper we consider the problem of computing the most likely string, or consensus string, generated by a given model, and its implications on the complexity of comparing hidden Markov models. We show...... that computing the consensus string, and approximating its probability within any constant factor, is NP-hard, and that the same holds for the closely related labeling problem for class hidden Markov models. Furthermore, we establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1...
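
    The distinction driving the hardness result is between the most likely path (computable efficiently by the Viterbi algorithm) and the most likely string, which sums over all paths. For a toy two-state HMM one can only brute-force the consensus string, as in the sketch below, which scores every binary string of a fixed, assumed length with the forward algorithm; the exponential enumeration is consistent with the NP-hardness shown in the paper.

      import itertools

      import numpy as np

      pi = np.array([0.6, 0.4])                  # initial state distribution
      A = np.array([[0.7, 0.3], [0.4, 0.6]])     # transition matrix
      B = np.array([[0.9, 0.1], [0.2, 0.8]])     # emissions over alphabet {0, 1}

      def forward_prob(obs):
          # probability of generating obs, summed over all state paths
          alpha = pi * B[:, obs[0]]
          for o in obs[1:]:
              alpha = (alpha @ A) * B[:, o]
          return alpha.sum()

      # consensus string among all strings of length 5 (exponential search)
      best = max(itertools.product([0, 1], repeat=5), key=forward_prob)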

  10. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Directory of Open Access Journals (Sweden)

    Daminov Ildar

    2016-01-01

    Full Text Available This paper presents a comparison of smart meter deployment business models to determine the most suitable option for providing smart meter deployment. The authors consider three main business models, in which the deploying company is a distribution grid company, an energy supplier (energosbyt) or a metering company. The goal of the article is to compare the business models of power companies for a massive smart metering roll-out in the power system of the Russian Federation.

  11. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  12. Comparative analysis of diffused solar radiation models for optimum tilt angle determination for Indian locations

    International Nuclear Information System (INIS)

    Yadav, P.; Chandel, S.S.

    2014-01-01

    Tilt angle and orientation greatly influence the performance of solar photovoltaic panels. The tilt angle of solar photovoltaic panels is one of the important parameters for the optimum sizing of solar photovoltaic systems. This paper analyses six different isotropic and anisotropic diffuse solar radiation models for optimum tilt angle determination. The predicted optimum tilt angles are compared with the experimentally measured values for the summer season under outdoor conditions. The Liu and Jordan model is found to exhibit the lowest error compared with the other models for the location. (author)
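
    Of the six models compared, the isotropic Liu and Jordan approach is the simplest; a sketch of its tilted-surface irradiance estimate is given below. The optimum tilt would be found by scanning the tilt angle over the period of interest; rb and the irradiance inputs are placeholder values, and anisotropic models modify the sky-diffuse term.

      import numpy as np

      def tilted_irradiance(beam_h, diffuse_h, rb, beta, albedo=0.2):
          # Liu-Jordan isotropic-sky estimate of irradiance on a tilted plane.
          # beam_h, diffuse_h: beam/diffuse irradiance on the horizontal (W/m^2)
          # rb: ratio of beam on the tilted plane to beam on the horizontal
          # beta: tilt angle (degrees)
          beta = np.radians(beta)
          sky = diffuse_h * (1.0 + np.cos(beta)) / 2.0            # isotropic diffuse
          ground = (beam_h + diffuse_h) * albedo * (1.0 - np.cos(beta)) / 2.0
          return beam_h * rb + sky + ground

      g_tilt = tilted_irradiance(beam_h=600.0, diffuse_h=200.0, rb=0.9, beta=30.0)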

  13. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This paper presents a comparative study of the velocity increments generated by the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into functions of the true anomaly. The equations of motion are integrated, and the responses, in terms of the velocity increment, of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity and flexibility of the tether have significant effects on the velocity increments of the tether.

  14. Antibiotic Resistances in Livestock: A Comparative Approach to Identify an Appropriate Regression Model for Count Data

    Directory of Open Access Journals (Sweden)

    Anke Hüls

    2017-05-01

    Full Text Available Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcomes. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance, as well as structural zero measurements on both the resistance-outcome and exposure sides, are challenges for the epidemiological model-building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model-building process by comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential for evaluating which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence of, and factors associated with, the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistance in livestock. After model comparison based on evaluation of model predictions, the Akaike information criterion and Pearson residuals, the hurdle model was judged to be the most appropriate
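
    A minimal version of such a model comparison in Python uses statsmodels: fit competing count models to the farm-level counts and compare information criteria (zero-inflated and hurdle variants would be assessed the same way, typically via additional statsmodels count models or R's pscl package). The covariates and counts below are synthetic stand-ins, not the study data.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 48                                          # e.g., one row per farm
      X = sm.add_constant(rng.uniform(size=(n, 2)))   # hypothetical farm covariates
      y = rng.negative_binomial(2, 0.4, size=n)       # synthetic resistant-sample counts

      poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      negbin = sm.NegativeBinomial(y, X).fit(disp=0)
      print("AIC Poisson:", poisson.aic, " AIC NegBin:", negbin.aic)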

  15. Canis familiaris As a Model for Non-Invasive Comparative Neuroscience.

    Science.gov (United States)

    Bunford, Nóra; Andics, Attila; Kis, Anna; Miklósi, Ádám; Gácsi, Márta

    2017-07-01

    There is an ongoing need to improve animal models for investigating human behavior and its biological underpinnings. The domestic dog (Canis familiaris) is a promising model in cognitive neuroscience. However, before it can contribute to advances in this field in a comparative, reliable, and valid manner, several methodological issues warrant attention. We review recent non-invasive canine neuroscience studies, primarily focusing on (i) variability among dogs and between dogs and humans in cranial characteristics, and (ii) generalizability across dog and dog-human studies. We argue not for methodological uniformity but for functional comparability between methods, experimental designs, and neural responses. We conclude that the dog may become an innovative and unique model in comparative neuroscience, complementing more traditional models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A Comparative Study of Theoretical Graph Models for Characterizing Structural Networks of Human Brain

    Directory of Open Access Journals (Sweden)

    Xiaojin Li

    2013-01-01

    Full Text Available Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs) are localized by a recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL) to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties of two graph models, namely, the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), which have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that, among the seven theoretical graph models compared in this study, the STICKY and SF-GD models have better performance in characterizing the structural human brain network.
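
    The global properties used in such comparisons can be computed directly with networkx, as in the sketch below, which contrasts a stand-in "brain-like" network with two candidate generative models. Neither STICKY nor SF-GD generators ship with networkx, so two common models serve as examples, and all sizes are illustrative.

      import networkx as nx

      # stand-in for a DTI-derived structural network (e.g., ~90 cortical ROIs)
      brain = nx.watts_strogatz_graph(90, 8, 0.2, seed=0)

      def summary(g):
          # assumes g is connected (true here with high probability)
          return {"clustering": nx.average_clustering(g),
                  "path_length": nx.average_shortest_path_length(g),
                  "assortativity": nx.degree_assortativity_coefficient(g)}

      models = {"erdos_renyi": nx.gnm_random_graph(90, brain.number_of_edges(), seed=0),
                "barabasi_albert": nx.barabasi_albert_graph(90, 4, seed=0)}
      print("brain-like:", summary(brain))
      for name, g in models.items():
          print(name, summary(g))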

  17. Comparative Analysis of River Flow Modelling by Using Supervised Learning Technique

    Science.gov (United States)

    Ismail, Shuhaida; Mohamad Pandiahi, Siraj; Shabri, Ani; Mustapha, Aida

    2018-04-01

    The goal of this research is to investigate the efficiency of three supervised learning algorithms for forecasting the monthly river flow of the Indus River in Pakistan, spread over 550 square miles or 1800 square kilometres. The algorithms are the Least Square Support Vector Machine (LSSVM), Artificial Neural Network (ANN) and Wavelet Regression (WR). Forecasts of monthly river flow were obtained from each of the three models individually, and the accuracy of all models was then compared. The results were statistically analysed, and this analytical comparison showed that the LSSVM model is the most precise for monthly river flow forecasting: LSSVM had the highest r, with a value of 0.934, compared to the other models. This indicates that LSSVM is more accurate and efficient than the ANN and WR models.

  18. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago.

    Science.gov (United States)

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E; Pellissier, Loïc

    2018-03-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns.

  19. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago

    Science.gov (United States)

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E.

    2018-01-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns. PMID:29657753

  20. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    Science.gov (United States)

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: (a) compare the range in projected impacts that arises from using different adaptation modeling methods; (b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; (c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  1. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fitting, assessing and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set, and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
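
    Of the performance measures mentioned, the Brier score is the simplest to state in code: the mean squared difference between predicted probabilities and binary outcomes, as in this small sketch with made-up values.

      import numpy as np

      def brier_score(y_true, p_pred):
          # mean squared difference between predicted probability and outcome
          y_true = np.asarray(y_true, dtype=float)
          p_pred = np.asarray(p_pred, dtype=float)
          return np.mean((p_pred - y_true) ** 2)

      print(brier_score([0, 1, 1, 0], [0.1, 0.8, 0.6, 0.3]))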

  2. A comparative analysis of diffusion and transport models applying to releases in the marine environment

    International Nuclear Information System (INIS)

    Mejon, M.J.

    1984-05-01

    This study is a contribution to the development of methodologies for assessing the radiological impact of liquid effluent releases from nuclear power plants. It first concerns hydrodynamic models and their application to the North Sea, which is of great interest to the European Community. Starting from the basic equations of geophysical fluid mechanics, the assumptions made at each step in order to simplify resolution are analysed and commented on. The published results on the application of the Liège University models (NIHOUL, RONDAY et al.) are compared with observations, both of tides and tempests and of the residual circulation, which is responsible for the long-term transport of pollutants. The results for residual circulation compare satisfactorily, and the expected accuracy of the other models is indicated. A dispersion model by the same authors is then studied, with a numerical integration method using a moving grid. Other models (Laboratoire National d'Hydraulique, EDF), used for the Channel, are also presented [fr]

  3. Comparing the engineering program feeders from SiF and convention models

    Science.gov (United States)

    Roongruangsri, Warawaran; Moonpa, Niwat; Vuthijumnonk, Janyawat; Sangsuwan, Kampanart

    2018-01-01

    This research compares two types of engineering program feeder models within the technical education system of Rajamangala University of Technology Lanna (RMUTL), Chiangmai, Thailand: the conventional model and the school-in-factory (SiF) model. The SiF model is developed through a collaborative educational process between industry, government and academia, using work-integrated learning. The two models are compared in terms of learning outcomes, study budget, and their advantages and disadvantages from the points of view of students, professors, the university, government and industrial partners. The results indicate that the SiF feeder model is the most pertinent one, as it meets the requirements of the university, the government and the industry. The SiF feeder model yielded positive learning outcomes with low expenditure per student for both the family and the university. In parallel, the sharing of knowledge between university and industry became increasingly important in the process, which resulted in the improvement of industrial skills for professors and an increase in industry-based research for the university. The SiF feeder model meets the public-policy demand for a skilled industrial workforce and could be an effective tool for the triple helix educational model of Thailand.

  4. Towards a systemic functional model for comparing forms of discourse in academic writing

    Directory of Open Access Journals (Sweden)

    Meriel Bloor

    2008-04-01

    Full Text Available This article reports on research into the variation of texts across disciplines and considers the implications of this work for the teaching of writing. The research was motivated by the need to improve students’ academic writing skills in English and the limitations of some current pedagogic advice. The analysis compares Methods sections of research articles across four disciplines, including applied and hard sciences, on a cline, or gradient, termed slow to fast. The analysis considers the characteristics the texts share, but more importantly identifies the variation between sets of linguistic features. Working within a systemic functional framework, the texts are analysed for length, sentence length, lexical density, readability, grammatical metaphor, Thematic choice, as well as various rhetorical functions. Contextually relevant reasons for the differences are considered and the implications of the findings are related to models of text and discourse. Recommendations are made for developing domain models that relate clusters of features to positions on a cline.
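
    Among the features analysed, lexical density is easy to operationalize: the proportion of content words among all tokens. The sketch below uses a tiny ad hoc function-word list as a stand-in for a proper lexicon, so it only approximates the measure used in the study.

      # a small hand-made function-word list; a real analysis would use a
      # fuller lexicon and a proper tokenizer
      FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "or",
                        "is", "are", "was", "were", "that", "this", "with", "for"}

      def lexical_density(text):
          tokens = [t.strip(".,;:!?()").lower() for t in text.split()]
          tokens = [t for t in tokens if t]
          content = [t for t in tokens if t not in FUNCTION_WORDS]
          return len(content) / len(tokens)

      print(lexical_density("The samples were dried in an oven for two hours."))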

  5. Impact of rotavirus vaccination on hospitalisations in Belgium: comparing model predictions with observed data.

    Directory of Open Access Journals (Sweden)

    Baudouin Standaert

    Full Text Available BACKGROUND: Published economic assessments of rotavirus vaccination typically use modelling, mainly static Markov cohort models with birth cohorts followed up to the age of 5 years. Rotavirus vaccination has now been available for several years in some countries, and data have been collected to evaluate the real-world impact of vaccination on rotavirus hospitalisations. This study compared the economic impact of vaccination between model estimates and observed data on disease-specific hospitalisation reductions in a country for which both modelled and observed datasets exist (Belgium). METHODS: A previously published Markov cohort model estimated the impact of rotavirus vaccination on the number of rotavirus hospitalisations in children aged <5 years in Belgium using vaccine efficacy data from clinical development trials. Data on the number of rotavirus-positive gastroenteritis hospitalisations in children aged <5 years between 1 June 2004 and 31 May 2006 (pre-vaccination study period) or 1 June 2007 to 31 May 2010 (post-vaccination study period) were analysed from nine hospitals in Belgium and compared with the modelled estimates. RESULTS: The model predicted a smaller decrease in hospitalisations over time, mainly explained by two factors. First, the observed data indicated indirect vaccine protection in children too old or too young for vaccination. This herd effect is difficult to capture in static Markov cohort models and therefore was not included in the model. Second, the model included a 'waning' effect, i.e. reduced vaccine effectiveness over time. The observed data suggested this waning effect did not occur during that period, and so the model systematically underestimated vaccine effectiveness during the first 4 years after vaccine implementation. CONCLUSIONS: Model predictions underestimated the direct medical economic value of rotavirus vaccination during the first 4 years of vaccination by approximately 10% when assessing
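
    As context for the record above, a static birth-cohort calculation of the kind it describes can be sketched in a few lines. This is a minimal illustration only: the monthly risk, coverage, effectiveness and waning values below are invented placeholders, not the Belgian model's inputs, and the herd effect revealed by the observed data is deliberately absent, which is exactly the limitation the study discusses.

```python
import numpy as np

# Toy static cohort: follow one birth cohort month by month to age 5 and
# accumulate expected rotavirus hospitalisations with and without vaccination.
# All parameter values are illustrative placeholders.
months = np.arange(60)
base_risk = np.full(60, 0.002)             # monthly hospitalisation risk, assumed
coverage = 0.90                            # vaccine coverage, assumed
ve = np.clip(0.96 - 0.005 * months, 0, 1)  # effectiveness with linear waning

no_vacc = base_risk.sum()
with_vacc = (base_risk * (1 - coverage * ve)).sum()
print(f"expected hospitalisations per child: {no_vacc:.4f} vs {with_vacc:.4f}")
```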

  6. Comparative Validation of Realtime Solar Wind Forecasting Using the UCSD Heliospheric Tomography Model

    Science.gov (United States)

    MacNeice, Peter; Taktakishvili, Alexandra; Jackson, Bernard; Clover, John; Bisi, Mario; Odstrcil, Dusan

    2011-01-01

    The University of California, San Diego 3D Heliospheric Tomography Model reconstructs the evolution of heliospheric structures, and can make forecasts of solar wind density and velocity up to 72 hours in the future. The latest model version, installed and running in realtime at the Community Coordinated Modeling Center (CCMC), analyzes scintillations of meter wavelength radio point sources recorded by the Solar-Terrestrial Environment Laboratory (STELab) together with realtime measurements of solar wind speed and density recorded by the Advanced Composition Explorer (ACE) Solar Wind Electron Proton Alpha Monitor (SWEPAM). The solution is reconstructed using tomographic techniques and a simple kinematic wind model. Since installation, the CCMC has been recording the model forecasts and comparing them with ACE measurements, and with forecasts made using other heliospheric models hosted by the CCMC. We report the preliminary results of this validation work and comparison with alternative models.

  7. A comparative study of the use of different risk-assessment models in Danish municipalities

    DEFF Research Database (Denmark)

    Sørensen, Kresta Munkholt

    2018-01-01

    Risk-assessment models are widely used in casework involving vulnerable children and families. Internationally, there are a number of different kinds of models with great variation in regard to the characteristics of factors that harm children. Lists of factors have been made but most of them give...... very little advice on how the factors should be weighted. This paper will address the use of risk-assessment models in six different Danish municipalities. The paper presents a comparative analysis and discussion of differences and similarities between three models: the Integrated Children’s System...... (ICS), the Signs of Safety (SoS) model and models developed by the municipalities themselves (MM). The analysis will answer the following two key questions: (i) to which risk and protective factors do the caseworkers give most weight in the risk assessment? and (ii) does each of the different models...

  8. COMPARATIVE EFFICIENCIES STUDY OF SLOT MODEL AND MOUSE MODEL IN PRESSURISED PIPE FLOW

    Directory of Open Access Journals (Sweden)

    Saroj K. Pandit

    2014-01-01

    Full Text Available The flow in sewers is unsteady and varies between free-surface and full-pipe pressurized flow. Sewers are designed on the basis of free-surface flow (gravity flow); however, they may carry pressurized flow. The Preissmann slot concept is a widely used numerical approach for unsteady free-surface-pressurized flow, as it provides the advantage of treating free-surface flow as a single flow type. The slot concept uses the Saint-Venant equations as the basic equations for one-dimensional unsteady free-surface flow. This paper includes two different numerical models using the Saint-Venant equations. In the first model, the Saint-Venant equations of continuity and momentum are solved by the Method of Characteristics and presented in forms for direct substitution into FORTRAN programming for numerical analysis. The MOUSE model carries out computation of unsteady flows founded on an implicit, finite difference numerical solution of the basic one-dimensional Saint-Venant equations of free-surface flow. The simulation results are compared to analyze the nature and degree of errors for further improvement.
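
    The key device named in this record, the Preissmann slot, reduces to a one-line formula: the hypothetical slot on the pipe crown is given the width that makes the free-surface gravity-wave celerity equal the desired pressure-wave celerity, so the free-surface equations remain valid under pressurized conditions. A minimal sketch, with illustrative parameter values:

```python
import math

def preissmann_slot_width(area_m2: float, celerity_ms: float, g: float = 9.81) -> float:
    """Slot width B such that sqrt(g*A/B), the gravity-wave celerity in the
    slot, equals the target pressure-wave celerity a, i.e. B = g*A/a**2."""
    return g * area_m2 / celerity_ms ** 2

# Example: 1 m diameter circular pipe, target pressure-wave celerity 200 m/s
full_area = math.pi * 0.5 ** 2
print(f"slot width: {preissmann_slot_width(full_area, 200.0) * 1000:.2f} mm")
```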

  9. COMPARATIVE ANALYSIS BETWEEN THE TRADITIONAL MODEL OF CORPORATE GOVERNANCE AND ISLAMIC MODEL

    Directory of Open Access Journals (Sweden)

    DAN ROXANA LOREDANA

    2016-08-01

    Full Text Available Corporate governance represents a set of processes and policies by which a company is administered, controlled and directed to achieve the predetermined management objectives set by the shareholders. The most important benefits of corporate governance to organisations are related to business success, investor confidence and the minimisation of wastage. For business, improved controls and decision-making will aid corporate success as well as growth in revenues and profits. For investor confidence, corporate governance will mean that investors are more likely to trust that the company is being well run. This will not only make it easier and cheaper for the company to raise finance, but also has a positive effect on the share price. When we talk about the minimisation of wastage, we refer to strong corporate governance that should help to minimise waste within the organisation, as well as corruption, risks and mismanagement. Thus, in our research, we try to determine the common elements and the differences that have occurred between two well-known models of corporate governance: the traditional Anglo-Saxon model and the Islamic model of corporate governance.

  10. A comparative study of turbulence models for dissolved air flotation flow analysis

    International Nuclear Information System (INIS)

    Park, Min A; Lee, Kyun Ho; Chung, Jae Dong; Seo, Seung Ho

    2015-01-01

    The dissolved air flotation (DAF) system is a water treatment process that removes contaminants by attaching micro bubbles to them, causing them to float to the water surface. In the present study, two-phase flow of air-water mixture is simulated to investigate changes in the internal flow analysis of DAF systems caused by using different turbulence models. Internal micro bubble distribution, velocity, and computation time are compared between several turbulence models for a given DAF geometry and condition. As a result, it is observed that the standard κ-ε model, which has been frequently used in previous research, predicts somewhat different behavior than other turbulence models

  11. Comparative study between single core model and detail core model of CFD modelling on reactor core cooling behaviour

    Science.gov (United States)

    Darmawan, R.

    2018-01-01

    The nuclear power industry has been facing uncertainties since the unfortunate accident at the Fukushima Daiichi Nuclear Power Plant. The issue of nuclear power plant safety has become the major hindrance in the planning of nuclear power programmes for new-build countries. Thus, understanding the behaviour of reactor systems is very important to ensure the continuous development and improvement of reactor safety. Throughout the development of nuclear reactor technology, investigation and analysis of reactor safety have gone through several phases. In the early days, analytical and experimental methods were employed. For the last four decades, 1D system-level codes have been widely used. The continuous development of nuclear reactor technology has brought about more complex systems and processes of nuclear reactor operation. More detailed, higher-dimensional simulation codes are needed to assess these new reactors. Recently, 2D and 3D system-level codes such as CFD are being explored. This paper discusses a comparative study of two different approaches to CFD modelling of reactor core cooling behaviour.

  12. DIDEM - An integrated model for comparative health damage costs calculation of air pollution

    Science.gov (United States)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara

    2018-01-01

    Air pollution represents a continuous hazard to human health. Administrations, companies and the population need efficient indicators of the possible effects of a change in decision, strategy or habit. The monetary quantification of the health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decisions and information at all levels. The development of modelling tools for the calculation of external costs can support analysts in producing consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta external costs of air pollution by comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper. A comparison with other existing models worldwide is reported.
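
    The impact pathway method on which DIDEM is based chains emissions, dispersion, exposure and valuation. The skeleton of a delta-external-cost calculation can be sketched as below; the concentration-response slope, population figures and unit cost are invented placeholders, not DIDEM or WHO values.

```python
def delta_external_cost(cells, crf_per_ug_m3, unit_cost_eur):
    """Sum health damage costs over receptor grid cells.

    cells: iterable of (delta concentration in ug/m3 between the two emission
    scenarios, exposed population); crf: cases per person per ug/m3."""
    return sum(dc * pop * crf_per_ug_m3 * unit_cost_eur for dc, pop in cells)

# Illustrative numbers only: two receptor cells, one pollutant
cells = [(0.8, 12_000), (0.3, 45_000)]
print(f"{delta_external_cost(cells, 6e-6, 60_000):,.0f} EUR/yr")
```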

  13. Comparative analysis of coupled creep-damage model implementations and application

    International Nuclear Information System (INIS)

    Bhandari, S.; Feral, X.; Bergheau, J.M.; Mottet, G.; Dupas, P.; Nicolas, L.

    1998-01-01

    Creep rupture of a reactor pressure vessel in a severe accident occurs after complex load and temperature histories leading to interactions between creep deformation, stress relaxation, material damage and plastic instability. The concepts of continuous damage introduced by Kachanov and Rabotnov allow models to be formulated that couple elasto-visco-plasticity and damage. However, the integration of such models in a finite element code creates difficulties related to the strong non-linearity of the constitutive equations. It was feared that different methods of implementing such a model might lead to different results, which, consequently, might limit its application and usefulness. The Commissariat a l'Energie Atomique (CEA), Electricite de France (EDF) and Framasoft (FRA) have worked out numerical solutions to implement such a model in the CASTEM 2000, ASTER and SYSTUS codes, respectively. A 'benchmark' was set up, chosen on the basis of a cylinder studied in the 'RUPTHER' programme. The aim of this paper is not to enter into the numerical details of the implementation of the model, but to present the results of the comparative study made using the three codes mentioned above on a case of engineering interest. The results of the coupled model are also compared to an uncoupled model to evaluate the differences one can obtain between a simple uncoupled model and a more sophisticated coupled model. The main conclusion drawn from this study is that the different numerical implementations used for the coupled damage-visco-plasticity model give quite consistent results. The numerical difficulty inherent in the integration of the strongly non-linear constitutive equations has been resolved using Runge-Kutta or mid-point rules. The usefulness of the coupled model comes from the fact that the uncoupled model leads to overly conservative results, at least in the example treated and in particular for the uncoupled analysis under the hypothesis of the small
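
    For readers unfamiliar with the Kachanov-Rabotnov coupling mentioned here, the uniaxial form of such a model is a pair of increasingly stiff ODEs that can be integrated with an adaptive Runge-Kutta scheme, consistent with the Runge-Kutta and mid-point integration named in the record. The material constants below are illustrative placeholders, not the RUPTHER data.

```python
from scipy.integrate import solve_ivp

A, n = 5e-18, 5.0     # Norton creep constants (illustrative placeholders)
B, r = 1e-14, 4.0     # Kachanov damage-rate constants (placeholders)
sigma = 100.0         # constant applied stress, MPa

def rhs(t, y):
    eps, D = y
    s_eff = sigma / (1.0 - D)             # effective (net-section) stress
    return [A * s_eff**n, B * s_eff**r]   # creep strain rate, damage rate

def near_rupture(t, y):                   # stop shortly before D -> 1
    return 0.99 - y[1]
near_rupture.terminal = True

sol = solve_ivp(rhs, [0.0, 1e7], [0.0, 0.0], events=near_rupture, rtol=1e-8)
print(f"time to near-rupture: {sol.t[-1]:.3e} h, creep strain: {sol.y[0, -1]:.3f}")
```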

  14. Comparative Study of Fatigue Damage Models Using Different Number of Classes Combined with the Rainflow Method

    Directory of Open Access Journals (Sweden)

    S. Zengah

    2013-06-01

    Full Text Available Fatigue damage increases with applied load cycles in a cumulative manner. Fatigue damage models play a key role in the life prediction of components and structures subjected to random loading. The aim of this paper is to examine the performance of the previously proposed and validated "Damaged Stress Model" against other fatigue models under random loading, before and after reconstruction of the load histories. To achieve this objective, several linear and nonlinear models proposed for fatigue life estimation are considered, and a batch of specimens made of 6082-T6 aluminum alloy is subjected to random loading. The damage was cumulated by Miner's rule, the Damaged Stress Model (DSM), the Henry model and the Unified Theory (UT), and random cycles were counted with a rainflow algorithm. Experimental data on high-cycle fatigue under complex loading histories with different mean and amplitude stress values are analyzed for life calculation, and model predictions are compared.
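
    The linear damage rule used in this record is simple enough to state in code: once a rainflow algorithm has reduced the random history to (amplitude, count) pairs, Miner's sum is the ratio of applied to allowable cycles. A sketch with a Basquin-type S-N curve and invented constants (not the 6082-T6 data):

```python
def sn_life(stress_amplitude: float, C: float = 1e12, m: float = 3.0) -> float:
    """Basquin-type S-N curve: allowable cycles N = C * S**(-m)."""
    return C * stress_amplitude ** (-m)

# (amplitude MPa, counted cycles) as produced by a rainflow count (placeholders)
counted = [(120.0, 850), (80.0, 4_300), (150.0, 120)]

damage = sum(n / sn_life(s) for s, n in counted)   # Miner's linear sum
print(f"Miner damage: {damage:.4f} (failure predicted when D >= 1)")
```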

  15. A comparative study of manhole hydraulics using stereoscopic PIV and different RANS models.

    Science.gov (United States)

    Beg, Md Nazmul Azim; Carvalho, Rita F; Tait, Simon; Brevis, Wernher; Rubinato, Matteo; Schellart, Alma; Leandro, Jorge

    2017-04-01

    Flows in manholes are complex and may include swirling and recirculating flow with significant turbulence and vorticity. However, how these complex 3D flow patterns generate different energy losses and so affect flow quantity in the wider sewer network is unknown. In this work, 2D3C stereo Particle Image Velocimetry measurements are made in a surcharged scaled circular manhole. A computational fluid dynamics (CFD) model in OpenFOAM® with four different Reynolds Averaged Navier-Stokes (RANS) turbulence models is constructed using a volume of fluid method to represent flows in this manhole. Velocity profiles and pressure distributions from the models are compared with the experimental data with a view to finding the best modelling approach. Among the four RANS models, it was found that the re-normalization group (RNG) k-ɛ and k-ω shear stress transport (SST) models gave a better approximation for velocity and pressure.

  16. Comparative evaluation of kinetic, equilibrium and semi-equilibrium models for biomass gasification

    Energy Technology Data Exchange (ETDEWEB)

    Buragohain, Buljit [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Chakma, Sankar; Kumar, Peeush [Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Mahanta, Pinakeswar [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Moholkar, Vijayanand S. [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India)

    2013-07-01

    Modeling of biomass gasification has been an active area of research for the past two decades. In the published literature, three approaches have been adopted for modeling this process, viz. thermodynamic equilibrium, semi-equilibrium and kinetic. In this paper, we attempt to present a comparative assessment of these three types of models for predicting the outcome of the gasification process in a circulating fluidized bed gasifier. Two model biomass feedstocks, viz. rice husk and wood particles, have been chosen for analysis, with air as the gasification medium. Although the trends in molar composition, net yield and LHV of the producer gas predicted by the three models are in concurrence, significant quantitative differences are seen in the results. Due to the rather slow kinetics of char gasification and tar oxidation, the carbon conversion achieved in a single pass of biomass through the gasifier, calculated using the kinetic model, is quite low, which adversely affects the yield and LHV of the producer gas. Although the equilibrium and semi-equilibrium models reveal relative insensitivity of producer gas characteristics towards temperature, the kinetic model shows a significant effect of temperature on the LHV of the gas at low air ratios. The kinetic model also reveals the volume of the gasifier to be an insignificant parameter, as the net yield and LHV of the gas resulting from 6 m and 10 m risers are the same. On the whole, the analysis presented in this paper indicates that thermodynamic models are useful tools for quantitative assessment of the gasification process, while kinetic models provide a physically more realistic picture.

  17. Comparative systems biology between human and animal models based on next-generation sequencing methods.

    Science.gov (United States)

    Zhao, Yu-Qi; Li, Gong-Hua; Huang, Jing-Fei

    2013-04-01

    Animal models provide myriad benefits to both experimental and clinical research. Unfortunately, in many situations, they fall short of expected results or provide contradictory results. In part, this can be the result of traditional molecular biological approaches that are relatively inefficient in elucidating underlying molecular mechanism. To improve the efficacy of animal models, a technological breakthrough is required. The growing availability and application of the high-throughput methods make systematic comparisons between human and animal models easier to perform. In the present study, we introduce the concept of the comparative systems biology, which we define as "comparisons of biological systems in different states or species used to achieve an integrated understanding of life forms with all their characteristic complexity of interactions at multiple levels". Furthermore, we discuss the applications of RNA-seq and ChIP-seq technologies to comparative systems biology between human and animal models and assess the potential applications for this approach in the future studies.

  18. The Comparative Study of Collaborative Learning and SDLC Model to develop IT Group Projects

    OpenAIRE

    Sorapak Pukdesree

    2017-01-01

    The main objectives of this research were to compare the attitudes of learners between applying the SDLC model with collaborative learning and the typical SDLC model, and to develop electronic courseware as group projects. The research was quasi-experimental. The population comprised students who took the Computer Organization and Architecture course in the academic year 2015. There were 38 students who participated in the research. The participants were divided voluntarily into two g...

  19. Efem vs. XFEM: a comparative study for modeling strong discontinuity in geomechanics

    OpenAIRE

    Das, Kamal C.; Ausas, Roberto Federico; Segura Segarra, José María; Narang, Ankur; Rodrigues, Eduardo; Carol, Ignacio; Lakshmikantha, Ramasesha Mookanahallipatna; Mello,, U.

    2015-01-01

    Modeling of large faults or weak planes with strong and weak discontinuities is of major importance for assessing the geomechanical behaviour of mining/civil tunnels, reservoirs, etc. For modelling fractures in geomechanics, prior art has been limited to interface elements, which suffer from numerical instability and require faults to be aligned with element edges. In this paper, we present a comparative study of finite elements for capturing strong discontinuities by means of elemental (EFEM)...

  20. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when used to solve large-scale scientific...... problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216

  1. The separatrix response of diverted TCV plasmas compared to the CREATE-L model

    International Nuclear Information System (INIS)

    Vyas, P.; Lister, J.B.; Villone, F.; Albanese, R.

    1997-11-01

    The response of Ohmic, single-null diverted, non-centred plasmas in TCV to poloidal field coil stimulation has been compared to the linear CREATE-L MHD equilibrium response model. The closed loop responses of directly measured quantities, reconstructed parameters, and the reconstructed plasma contour were all examined. Provided that the plasma position and shape perturbation were small enough for the linearity assumption to hold, the model-experiment agreement was good. For some stimulations the open loop vertical position instability growth rate changed significantly, illustrating the limitations of a linear model. A different model was developed with the assumption that the flux at the plasma boundary is frozen and was also compared with experimental results. It proved not to be as reliable as the CREATE-L model for some simulation parameters showing that the experiments were able to discriminate between different plasma response models. The closed loop response was also found to be sensitive to changes in the modelled plasma shape. It was not possible to invalidate the CREATE-L model despite the extensive range of responses excited by the experiments. (author) figs., tabs., 5 refs

  2. A comparative study to identify a suitable model of ownership for Iran football pro league clubs

    Directory of Open Access Journals (Sweden)

    Saeed Amirnejad

    2018-01-01

    Full Text Available Today, government ownership of professional football clubs is widely regarded as an outdated approach. Most sports clubs around the world are run by the private sector using different models of ownership. In Iran, access to government credit was the main reason professional sport was first developed by government firms and organizations; consequently, sports team ownership does not meet professionalization standards. The present comparative study examined the football club ownership structures of the top leagues and the current condition of Iran football pro league ownership, and then proposes a suitable ownership structure for Iranian football clubs to move beyond government club ownership. Among the initial 120 scientific texts, thirty-two items, including papers, books and reports, were found relevant to this study. We studied the prominence of ownership and several football club models of ownership, focusing on the stock-listing model, the private-investor model, the supporter-trust model and the Japan partnership model of ownership; theoretical concepts, empirical studies, main findings, strengths and weaknesses were covered in the analysis procedure. Across the various models of ownership in leagues and the models' productivity in football clubs, each model of ownership has strengths and weaknesses depending on national environmental, economic and social conditions. Therefore, we cannot present a definite model of ownership for Iran football pro league clubs due to the different micro-environments of Iranian clubs. Extensive planning is needed to provide a mixed supporter-investor model of ownership for Iranian clubs. Considering the strengths and weaknesses of the models of ownership as well as the micro and macro environment of Iranian football clubs, the German model and the Japan partnership model are offered as suitable bases for a probable new model of ownership in Iran pro league clubs. Consequently, more studies are required

  3. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

    Full Text Available Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement about which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address the effect that the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are
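
    The comparison framework described here, the same three learner families evaluated under a common cross-validation design, can be reproduced in outline with scikit-learn. The random data below stand in for the siRNA feature mappings; this is a sketch of the experimental design, not the paper's code.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for encoded siRNA sequence features
X, y = make_classification(n_samples=500, n_features=40, random_state=0)

learners = {
    "GLM": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in learners.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```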

  4. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.

  5. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-06-01

    Demand for automobiles has increased drastically in India over the last two and a half decades. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfil the demand of these customers. The United Nations Industrial Development Organization (UNIDO) has taken a proactive approach, in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, to assist Indian SMEs in various clusters since 1999 to make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation, and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the fields of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages, whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework provides greater insight into the implementation process at a more micro level as compared to the UNIDO-ACMA Model.

  6. Comparative study for different statistical models to optimize cutting parameters of CNC end milling machines

    International Nuclear Information System (INIS)

    El-Berry, A.; El-Berry, A.; Al-Bossly, A.

    2010-01-01

    In machining operations, the quality of the surface finish is an important requirement for many workpieces. Thus, it is very important to optimize cutting parameters for controlling the required manufacturing quality. The surface roughness parameter (Ra) of mechanical parts depends on the turning parameters used during the turning process. In the development of predictive models, the cutting parameters of feed, cutting speed and depth of cut are considered as model variables. For this purpose, this study compares various machining experiments using a CNC vertical machining center; the workpieces were aluminum 6061. Multiple regression models are used to predict the surface roughness in the different experiments.
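
    A first-order multiple regression of the kind this record uses can be fitted in a few lines; the synthetic data below stand in for the aluminum 6061 measurements and are not the study's values.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 27
speed = rng.uniform(100, 300, n)   # cutting speed, m/min (synthetic)
feed = rng.uniform(0.05, 0.3, n)   # feed, mm/rev (synthetic)
depth = rng.uniform(0.2, 1.5, n)   # depth of cut, mm (synthetic)
ra = 0.2 + 4.0*feed + 0.1*depth - 0.001*speed + rng.normal(0, 0.05, n)

X = sm.add_constant(np.column_stack([speed, feed, depth]))
fit = sm.OLS(ra, X).fit()
print(fit.params, f"R^2 = {fit.rsquared:.3f}")   # predicts Ra from parameters
```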

  7. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing......-norms. We discuss the applicability of the technique used for proving the hardness of comparing two hidden Markov models under the L1-norm to other measures of distance between probability distributions. In particular, we show that it cannot be used for proving NP-hardness of determining the Kullback...
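
    The quantities these hardness results concern are built from sequence likelihoods, which the forward algorithm computes efficiently for any single model; it is comparing two models over all sequences that is hard. A minimal forward-algorithm sketch for two toy models (all numbers arbitrary):

```python
import numpy as np

def forward_prob(pi, A, B, obs):
    """P(obs | HMM) via the forward algorithm.
    pi: initial state distribution, A: state transitions, B: emission
    probabilities (states x symbols), obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

pi = np.array([0.5, 0.5])
A1 = np.array([[0.9, 0.1], [0.2, 0.8]]); B1 = np.array([[0.7, 0.3], [0.1, 0.9]])
A2 = np.array([[0.5, 0.5], [0.5, 0.5]]); B2 = np.array([[0.6, 0.4], [0.4, 0.6]])

seq = [0, 0, 1, 1, 0]
print(forward_prob(pi, A1, B1, seq), forward_prob(pi, A2, B2, seq))
```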

  8. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used: the Virkler data (aluminium alloy) and data...... established at the Laboratory of Structural Engineering at Aalborg University, the AUC data (mild steel). The model, which is based on the assumption that the crack propagation process can be described by discrete-space Markov theory, is applicable to constant as well as random loading. It is shown...

  9. Comparative modeling of coevolution in communities of unicellular organisms: adaptability and biodiversity.

    Science.gov (United States)

    Lashin, Sergey A; Suslov, Valentin V; Matushkin, Yuri G

    2010-06-01

    We propose an original program "Evolutionary constructor" that is capable of computationally efficient modeling of both population-genetic and ecological problems, combining these directions in one model of required detail level. We also present results of comparative modeling of stability, adaptability and biodiversity dynamics in populations of unicellular haploid organisms which form symbiotic ecosystems. The advantages and disadvantages of two evolutionary strategies of biota formation--a few generalists' taxa-based biota formation and biodiversity-based biota formation--are discussed.

  10. Development of multivariate NTCP models for radiation-induced hypothyroidism: a comparative analysis

    International Nuclear Information System (INIS)

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2012-01-01

    Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with already existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin's lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters were collected, and their correlation to RHT was analyzed by Spearman's rank correlation coefficient (Rs). A multivariate logistic regression method using resampling (bootstrapping) was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. If the thyroid volume exceeding X Gy is expressed as a percentage (Vx(%)), a two-variable NTCP model including V30(%) and gender proved to be the optimal predictive model for RHT (Rs = 0.615, p < 0.001; AUC = 0.87). Conversely, if the absolute thyroid volume exceeding X Gy (Vx(cc)) was analyzed, an NTCP model based on 3 variables including V30(cc), thyroid gland volume and gender was selected as the most predictive model (Rs = 0.630, p < 0.001; AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volumes (AUC = 0.914, 95% CI 0.760-0.984). A comparable performance was found between our model and that proposed in the literature based on thyroid gland mean dose and volume (p = 0.264). The absolute volume of thyroid gland exceeding 30 Gy in combination with thyroid gland volume and gender provides an NTCP model for RHT with improved prediction capability, not only within our patient population but also in an
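
    The model-building loop this record describes, a logistic NTCP fit with bootstrap resampling and AUC scoring, has a compact skeleton. The synthetic cohort below (V30 and gender as predictors) only mirrors the structure of the analysis; the coefficients are invented, not the fitted values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 53
v30 = rng.uniform(0, 100, n)          # % thyroid volume above 30 Gy (synthetic)
female = rng.integers(0, 2, n)
p_true = 1 / (1 + np.exp(3.0 - 0.04*v30 - 0.8*female))   # assumed "true" model
y = (rng.random(n) < p_true).astype(int)

X = np.column_stack([v30, female])
fit = LogisticRegression().fit(X, y)
print("apparent AUC:", roc_auc_score(y, fit.predict_proba(X)[:, 1]))

aucs = []                              # bootstrap, as in the model selection
for _ in range(200):
    i = rng.integers(0, n, n)
    if y[i].min() == y[i].max():
        continue                       # skip one-class resamples
    m = LogisticRegression().fit(X[i], y[i])
    aucs.append(roc_auc_score(y[i], m.predict_proba(X[i])[:, 1]))
print(f"bootstrap AUC: {np.mean(aucs):.2f}")
```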

  11. MODELLING OF FINANCIAL EFFECTIVENESS AND COMPARATIVE ANALYSIS OF PUBLIC-PRIVATE PARTNERSHIP PROJECTS AND PUBLIC PROCUREMENT

    Directory of Open Access Journals (Sweden)

    Kuznetsov Aleksey Alekseevich

    2017-10-01

    Full Text Available The article substantiates the need to extend and develop tools for the methodological evaluation of the effectiveness of public-private partnership (PPP) projects, both individually and in comparison with other mechanisms of project realization, using traditional public procurement as the example. The author proposes an original technique for modelling the cash flows of the private and public partners when realizing projects based on PPP and on public procurement. The model enables us, promptly and with sufficient accuracy, to reveal the comparative advantages of the PPP and public procurement project forms, and to assess the financial effectiveness of PPP projects for each partner. The modelling is relatively straightforward and reliable. The model also enables us to evaluate the public partner's availability payments, and to find the terms and thresholds for the interest rates of financing attracted by the partners and for risk probabilities that ensure the comparative advantage of a PPP project. The proposed criteria of effectiveness are compared with the methodological recommendations provided by the Ministry of Economic Development of the Russian Federation. Subject: public and private organizations, financial institutions, development institutions and their theoretical and practical techniques for the effectiveness evaluation of public-private partnership (PPP) projects. The complexity of effectiveness evaluation and the lack of a unified and accepted methodology are among the factors that limit the development of PPP in the Russian Federation nowadays. Research objectives: development of methods for assessing the financial efficiency of PPP projects by creating and justifying the application of new principles and methods of modelling, as well as criteria for the effectiveness of PPP projects both individually and in comparison with public procurement. Materials and methods: an open database of ongoing PPP projects in the Russian Federation and abroad was used. The

  12. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    Science.gov (United States)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a widespread tool in the understanding and management of natural systems. Given the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy for tackling this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model, taken at several time steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the
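
    The core of POD as described here fits in a dozen lines: collect snapshots of the full model's state, take an SVD, keep the leading singular vectors, and project the system onto them. A generic sketch, with synthetic snapshots in place of a real groundwater model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_snaps = 10_000, 40
# Synthetic snapshots with low-rank structure plus noise, standing in for
# heads at all grid cells saved at selected time steps of the full model.
snapshots = rng.standard_normal((n_cells, 3)) @ rng.standard_normal((3, n_snaps))
snapshots += 0.01 * rng.standard_normal((n_cells, n_snaps))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1   # basis capturing 99.9% energy
Phi = U[:, :k]                                # POD basis, n_cells x k
print(f"reduced dimension: {k} (from {n_snaps} snapshots)")
# With h ~ Phi @ h_r, a linear system A h = b reduces to the k x k system
# (Phi.T @ A @ Phi) h_r = Phi.T @ b, which is where the speed-up comes from.
```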

  13. Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations.

    Science.gov (United States)

    Kooloos, Jan G M; Schepens-Franke, Annelieke N; Bergman, Esther M; Donders, Rogier A R T; Vorstenbosch, Marc A T M

    2014-01-01

    Clay modeling is increasingly used as a teaching method other than dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulations during exercises in the dissection room involving tissues and organs. We questioned this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observations (Experiment I) or video observations (Experiment II) of the clay-modeling exercise. The effects of learning were measured with multiple choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to elaborate the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises show less gain in anatomical knowledge than students who attentively observe the same exercise being carried out and (2) performing a clay-modeling exercise is better in anatomical knowledge gain compared to the study of a video of the recorded exercise. The most important learning effect seems to be the engagement in the exercise, focusing attention and stimulating time on task. © 2014 American Association of Anatomists.

  14. The new ICRP respiratory model for radiation protection (ICRP 66) : applications and comparative evaluations

    International Nuclear Information System (INIS)

    Castellani, C.M.; Luciani, A.

    1996-02-01

    The aim of this report is to present the new ICRP Respiratory Tract Model for Radiological Protection. The model takes anatomical and physiological characteristics into account, giving reference values for children aged 3 months, 1, 5, 10 and 15 years, and for adults; it also takes aerosol and gas characteristics into account. After a general description of the model structure, the deposition, clearance and dosimetric models are presented. To compare the new model with the previous one (ICRP 30), dose coefficients (committed effective dose per unit intake) for inhalation of radionuclides by workers are calculated for aerosol granulometries with activity median aerodynamic diameters of 1 and 5 μm, the reference values of the respective publications. Dose coefficients and annual limits on intake, relative to the respective dose limits (50 and 20 mSv for ICRP 26 and ICRP 60, respectively), are finally calculated for workers and for members of the population in the case of dispersion of fission product aerosols.

  15. Using ROC curves to compare neural networks and logistic regression for modeling individual noncatastrophic tree mortality

    Science.gov (United States)

    Susan L. King

    2003-01-01

    The performance of two classifiers, logistic regression and neural networks, is compared for modeling noncatastrophic individual tree mortality for 21 species of trees in West Virginia. The output of the classifier is usually a continuous number between 0 and 1. A threshold is selected between 0 and 1 and all of the trees below the threshold are classified as...
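
    The record's threshold-on-a-continuous-output setup maps directly onto a ROC computation. The sketch below compares the two classifier families on synthetic imbalanced data (mortality is rare) and picks a threshold by Youden's J, one common choice, though not necessarily the study's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=8, weights=[0.9],
                           random_state=0)          # label 1 = tree dies
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                  ("neural net", MLPClassifier(hidden_layer_sizes=(10,),
                                               max_iter=2000, random_state=0))]:
    p = clf.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    fpr, tpr, thr = roc_curve(yte, p)
    j = np.argmax(tpr - fpr)                        # Youden's J threshold
    print(f"{name}: AUC = {auc(fpr, tpr):.3f}, threshold = {thr[j]:.2f}")
```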

  16. Overview, comparative assessment and recommendations of forecasting models for short-term water demand prediction

    CSIR Research Space (South Africa)

    Anele, AO

    2017-11-01

    Full Text Available -term water demand (STWD) forecasts. In view of this, an overview of forecasting methods for STWD prediction is presented. Based on that, a comparative assessment of the performance of alternative forecasting models from the different methods is studied. Times...

  17. Comparing mixing-length models of the diabatic wind profile over homogeneous terrain

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Gryning, Sven-Erik; Hasager, Charlotte Bay

    2010-01-01

    Models of the diabatic wind profile over homogeneous terrain for the entire atmospheric boundary layer are developed using mixing-length theory and are compared to wind speed observations up to 300 m at the National Test Station for Wind Turbines at Høvsøre, Denmark. The measurements are performe...
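
    For orientation, the classical surface-layer diabatic profile that such mixing-length models reduce to near the ground is Monin-Obukhov similarity with a stability correction; the extension of the profile up to 300 m is what the paper adds and is not reproduced here. A sketch with Businger-Dyer corrections and illustrative parameter values:

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def psi_m(zeta):
    """Businger-Dyer stability correction for momentum, zeta = z/L."""
    if zeta >= 0:                                   # stable
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25                 # unstable
    return (2*np.log((1 + x)/2) + np.log((1 + x**2)/2)
            - 2*np.arctan(x) + np.pi/2)

def u(z, ustar, z0, L):
    """Diabatic surface-layer wind profile u(z)."""
    return ustar / KAPPA * (np.log(z / z0) - psi_m(z / L))

for L, label in [(-100.0, "unstable"), (1e12, "near-neutral"), (200.0, "stable")]:
    print(f"{label:>12}: u(100 m) = {u(100.0, 0.4, 0.02, L):.2f} m/s")
```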

  18. Feeding Behavior of Aplysia: A Model System for Comparing Cellular Mechanisms of Classical and Operant Conditioning

    Science.gov (United States)

    Baxter, Douglas A.; Byrne, John H.

    2006-01-01

    Feeding behavior of Aplysia provides an excellent model system for analyzing and comparing mechanisms underlying appetitive classical conditioning and reward operant conditioning. Behavioral protocols have been developed for both forms of associative learning, both of which increase the occurrence of biting following training. Because the neural…

  19. Writ in water, lines in sand: Ancient trade routes, models and comparative evidence

    Directory of Open Access Journals (Sweden)

    Eivind Heldaas Seland

    2015-12-01

    Full Text Available Historians and archaeologists often take connectivity for granted, and fail to address the problems of documenting patterns of movement. This article highlights the methodological challenges of reconstructing trade routes in prehistory and early history. The argument is made that these challenges are best met through the application of modern models of connectivity, in combination with the conscious use of comparative approaches.

  20. A Comparative Analysis of Spatial Visualization Ability and Drafting Models for Industrial and Technology Education Students

    Science.gov (United States)

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2014-01-01

    The main purpose of this study was to determine significant positive effects among the use of three different types of drafting models, and to identify whether any differences exist towards promotion of spatial visualization ability for students in Industrial Technology and Technology Education courses. In particular, the study compared the use of…

  1. Comparing Fuzzy Sets and Random Sets to Model the Uncertainty of Fuzzy Shorelines

    NARCIS (Netherlands)

    Dewi, Ratna Sari; Bijker, Wietske; Stein, Alfred

    2017-01-01

    This paper addresses uncertainty modelling of shorelines by comparing fuzzy sets and random sets. Both methods quantify extensional uncertainty of shorelines extracted from remote sensing images. Two datasets were tested: pan-sharpened Pleiades with four bands (Pleiades) and pan-sharpened Pleiades

  2. Numerical modeling of carrier gas flow in atomic layer deposition vacuum reactor: A comparative study of lattice Boltzmann models

    International Nuclear Information System (INIS)

    Pan, Dongqing; Chien Jen, Tien; Li, Tao; Yuan, Chris

    2014-01-01

    This paper characterizes the carrier gas flow in the atomic layer deposition (ALD) vacuum reactor by introducing Lattice Boltzmann Method (LBM) to the ALD simulation through a comparative study of two LBM models. Numerical models of gas flow are constructed and implemented in two-dimensional geometry based on lattice Bhatnagar–Gross–Krook (LBGK)-D2Q9 model and two-relaxation-time (TRT) model. Both incompressible and compressible scenarios are simulated and the two models are compared in the aspects of flow features, stability, and efficiency. Our simulation outcome reveals that, for our specific ALD vacuum reactor, TRT model generates better steady laminar flow features all over the domain with better stability and reliability than LBGK-D2Q9 model especially when considering the compressible effects of the gas flow. The LBM-TRT is verified indirectly by comparing the numerical result with conventional continuum-based computational fluid dynamics solvers, and it shows very good agreement with these conventional methods. The velocity field of carrier gas flow through ALD vacuum reactor was characterized by LBM-TRT model finally. The flow in ALD is in a laminar steady state with velocity concentrated at the corners and around the wafer. The effects of flow fields on precursor distributions, surface absorptions, and surface reactions are discussed in detail. Steady and evenly distributed velocity field contribute to higher precursor concentration near the wafer and relatively lower particle velocities help to achieve better surface adsorption and deposition. The ALD reactor geometry needs to be considered carefully if a steady and laminar flow field around the wafer and better surface deposition are desired
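
    A bare-bones single-relaxation-time LBGK-D2Q9 loop shows the structure being compared in this record; the TRT variant differs in relaxing even and odd moment combinations with two separate rates. This sketch uses an arbitrary grid, periodic boundaries and a density perturbation rather than an ALD reactor geometry.

```python
import numpy as np

# D2Q9 lattice: rest + 4 axis + 4 diagonal velocities, standard weights
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, u):
    cu = np.einsum("qd,xyd->qxy", C, u)
    usq = np.einsum("xyd,xyd->xy", u, u)
    return rho * W[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

nx, ny, tau = 64, 32, 0.8                 # arbitrary grid and relaxation time
rho = 1.0 + 0.01 * np.cos(2*np.pi*np.arange(nx)/nx)[:, None] * np.ones(ny)
u = np.zeros((nx, ny, 2))
f = equilibrium(rho, u)                   # initialize at equilibrium

for _ in range(200):
    rho = f.sum(axis=0)                                    # density moment
    u = np.einsum("qd,qxy->xyd", C, f) / rho[..., None]    # velocity moment
    f -= (f - equilibrium(rho, u)) / tau                   # BGK collision
    for q, (cx, cy) in enumerate(C):                       # periodic streaming
        f[q] = np.roll(np.roll(f[q], cx, axis=0), cy, axis=1)

print("mass conserved:", np.isclose(f.sum(), nx * ny))
```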

  3. Numerical modeling of carrier gas flow in atomic layer deposition vacuum reactor: A comparative study of lattice Boltzmann models

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Dongqing; Chien Jen, Tien [Department of Mechanical Engineering, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin 53201 (United States); Li, Tao [School of Mechanical Engineering, Dalian University of Technology, Dalian 116024 (China); Yuan, Chris, E-mail: cyuan@uwm.edu [Department of Mechanical Engineering, University of Wisconsin-Milwaukee, 3200 North Cramer Street, Milwaukee, Wisconsin 53211 (United States)

    2014-01-15

    This paper characterizes the carrier gas flow in the atomic layer deposition (ALD) vacuum reactor by introducing Lattice Boltzmann Method (LBM) to the ALD simulation through a comparative study of two LBM models. Numerical models of gas flow are constructed and implemented in two-dimensional geometry based on lattice Bhatnagar–Gross–Krook (LBGK)-D2Q9 model and two-relaxation-time (TRT) model. Both incompressible and compressible scenarios are simulated and the two models are compared in the aspects of flow features, stability, and efficiency. Our simulation outcome reveals that, for our specific ALD vacuum reactor, TRT model generates better steady laminar flow features all over the domain with better stability and reliability than LBGK-D2Q9 model especially when considering the compressible effects of the gas flow. The LBM-TRT is verified indirectly by comparing the numerical result with conventional continuum-based computational fluid dynamics solvers, and it shows very good agreement with these conventional methods. The velocity field of carrier gas flow through ALD vacuum reactor was characterized by LBM-TRT model finally. The flow in ALD is in a laminar steady state with velocity concentrated at the corners and around the wafer. The effects of flow fields on precursor distributions, surface absorptions, and surface reactions are discussed in detail. Steady and evenly distributed velocity field contribute to higher precursor concentration near the wafer and relatively lower particle velocities help to achieve better surface adsorption and deposition. The ALD reactor geometry needs to be considered carefully if a steady and laminar flow field around the wafer and better surface deposition are desired.

  4. Comparative analysis of the planar capacitor and IDT piezoelectric thin-film micro-actuator models

    International Nuclear Information System (INIS)

    Myers, Oliver J; Anjanappa, M; Freidhoff, Carl B

    2011-01-01

    A comparison of the analysis of similarly developed microactuators is presented. Accurate modeling and simulation techniques are vital for piezoelectrically actuated microactuators. Coupling analytical and numerical modeling techniques with variational design parameters, accurate performance predictions can be realized. Axi-symmetric two-dimensional and three-dimensional static deflection and harmonic models of a planar capacitor actuator are presented. Planar capacitor samples were modeled as unimorph diaphragms with sandwiched piezoelectric material. The harmonic frequencies were calculated numerically and compared well to predicted values and deformations. The finite element modeling reflects the impact of the d31 piezoelectric constant. Two-dimensional axi-symmetric models of circularly interdigitated piezoelectric membranes are also presented. The models include the piezoelectric material and properties, the membrane materials and properties, and incorporate various design considerations of the model. These models also include the electro-mechanical coupling for piezoelectric actuation and highlight a novel approach to take advantage of the higher d33 piezoelectric coupling coefficient. Performance is evaluated for varying parameters such as electrode pitch, electrode width, and piezoelectric material thickness. The models also showed that several of the design parameters were naturally coupled. The static numerical models correlate well with the maximum static deflection of the experimental devices. Finally, this paper deals with the development of numerical harmonic models of piezoelectrically actuated planar capacitor and interdigitated diaphragms. The models were able to closely predict the first two harmonics, conservatively predict the third through sixth harmonics and predict the estimated values of center deflection using plate theory. Harmonic frequency and deflection simulations need further correlation by conducting extensive iterative

  5. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    Full Text Available The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using 4 years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remaining produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79%. Besides, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87%. Thus, constraining the parameters to be fixed across all highway segments would lead to an inaccurate conclusion. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were observed to be significantly overestimated compared with those from the random-parameters negative binomial model.
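
    The fixed-parameters baseline in this record is a standard negative binomial count regression; a sketch with synthetic segment data follows. The random-parameters variant, which lets coefficients vary across segments, typically requires simulated maximum likelihood and is not shown.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 158                                   # highway segments (synthetic data)
exposure = rng.uniform(5, 80, n)          # stand-in traffic covariate
barrier = rng.integers(0, 2, n)           # noise barrier present?
mu = np.exp(0.5 + 0.02*exposure + 0.4*barrier)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))  # overdispersed crash counts

X = sm.add_constant(np.column_stack([exposure, barrier]))
fit = sm.NegativeBinomial(y, X).fit(disp=0)
print(fit.params)                         # coefficients on the log-mean scale
```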

  6. Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and ROMS

    Science.gov (United States)

    Haas, Kevin A.; Warner, John C.

    2009-01-01

    Predictions of nearshore and surf zone processes are important for determining coastal circulation, impacts of storms, navigation, and recreational safety. Numerical modeling of these systems facilitates advancements in our understanding of coastal changes and can provide predictive capabilities for resource managers. Many nearshore coastal circulation models exist; however, they are mostly limited to, or typically applied as, depth-integrated models. SHORECIRC is an established surf zone circulation model that is quasi-3D, allowing for the effect of the variability in the vertical structure of the currents while maintaining the computational advantage of a 2DH model. Here we compare SHORECIRC to ROMS, a fully 3D ocean circulation model which now includes a three-dimensional formulation for wave-driven flows. We compare the models with three different test applications: (i) spectral waves approaching a plane beach with an oblique angle of incidence; (ii) monochromatic waves driving longshore currents in a laboratory basin; and (iii) monochromatic waves on a barred beach with rip channels in a laboratory basin. Results identify that the models are very similar for the depth-integrated flows and qualitatively consistent for the vertically varying components. The differences are primarily the result of the vertically varying radiation stress utilized by ROMS and the use of long-wave theory for the radiation stress formulation in the vertically varying momentum balance by SHORECIRC. The quasi-3D model is faster, while the applicability of the fully 3D model allows it to extend over a broader range of processes, temporal, and spatial scales.

  7. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    Science.gov (United States)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently, separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches to landscapes where high-quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context, and ultimately to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished dataset from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate, and erosion rate. Predictive models for each site are built in ArcGIS using field-condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions from extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in production of hillslope

  8. A Comparative Assessment of Aerodynamic Models for Buffeting and Flutter of Long-Span Bridges

    Directory of Open Access Journals (Sweden)

    Igor Kavrakov

    2017-12-01

    Wind-induced vibrations commonly represent the leading criterion in the design of long-span bridges. The aerodynamic forces in bridge aerodynamics are mainly based on the quasi-steady and linear unsteady theories. This paper investigates different time-domain formulations of self-excited and buffeting forces by comparing the dynamic response of a multi-span cable-stayed bridge during the critical erection condition. The bridge is selected to represent a typical reference object with a bluff concrete box girder for large river crossings. The models are examined from the perspective of model complexity, comparing the influence on the bridge response of the aerodynamic properties they imply, such as aerodynamic damping and stiffness, fluid memory in the buffeting and self-excited forces, aerodynamic nonlinearity, and aerodynamic coupling. The selected models are studied for a wind-speed range typical of the construction stage, for two levels of turbulence intensity. Furthermore, a simplified method for the computation of buffeting forces including the aerodynamic admittance is presented, in which rational approximation is avoided. The critical flutter velocities are also compared for the selected models under laminar flow.
    Keywords: Buffeting, Flutter, Long-span bridges, Bridge aerodynamics, Bridge aeroelasticity, Erection stage

  9. THE STELLAR MASS COMPONENTS OF GALAXIES: COMPARING SEMI-ANALYTICAL MODELS WITH OBSERVATION

    International Nuclear Information System (INIS)

    Liu Lei; Yang Xiaohu; Mo, H. J.; Van den Bosch, Frank C.; Springel, Volker

    2010-01-01

    We compare the stellar masses of central and satellite galaxies predicted by three independent semi-analytical models (SAMs) with observational results obtained from a large galaxy group catalog constructed from the Sloan Digital Sky Survey. In particular, we compare the stellar mass functions of centrals and satellites, the relation between total stellar mass and halo mass, and the conditional stellar mass functions, Φ(M_*|M_h), which specify the average number of galaxies of stellar mass M_* that reside in a halo of mass M_h. The SAMs only predict the correct stellar masses of central galaxies within a limited mass range and all models fail to reproduce the sharp decline of stellar mass with decreasing halo mass observed at the low mass end. In addition, all models over-predict the number of satellite galaxies by roughly a factor of 2. The predicted stellar mass in satellite galaxies can be made to match the data by assuming that a significant fraction of satellite galaxies are tidally stripped and disrupted, giving rise to a population of intra-cluster stars (ICS) in their host halos. However, the amount of ICS thus predicted is too large compared to observation. This suggests that current galaxy formation models still have serious problems in modeling star formation in low-mass halos.

  10. Comparative Analysis of Soft Computing Models in Prediction of Bending Rigidity of Cotton Woven Fabrics

    Science.gov (United States)

    Guruprasad, R.; Behera, B. K.

    2015-10-01

    Quantitative prediction of fabric mechanical properties is an essential requirement for design engineering of textile and apparel products. In this work, the possibility of predicting the bending rigidity of cotton woven fabrics has been explored with the application of an Artificial Neural Network (ANN) and two hybrid methodologies, namely neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured, and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back-propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which a genetic algorithm was first used to optimize the number of neurons and the connection weights of the neural network; the GA-optimized network was then further trained with the back-propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the predictions of the neuro-genetic and ANFIS models were better than those of the back-propagation neural network model.
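
    A minimal sketch of the first (back-propagation) model with scikit-learn; the input features and data here are hypothetical placeholders, since the paper's fabric parameters are not reproduced in the abstract:

```python
# Back-propagation network regressing bending rigidity on mock fabric features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 4))   # e.g. ends/cm, picks/cm, warp/weft count
y = X @ np.array([0.8, 0.5, 0.3, 0.1]) + rng.normal(0, 0.05, 60)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(round(model.score(X, y), 3))   # training R^2
```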

  11. Comparing wall modeled LES and prescribed boundary layer approach in infinite wind farm simulations

    DEFF Research Database (Denmark)

    Sarlak, Hamid; Mikkelsen, Robert; Sørensen, Jens Nørkær

    2015-01-01

    This paper presents a simple and computationally fast method for simulation of the Atmospheric Boundary Layer (ABL) and compares the results with the commonly used wall-modelled Large Eddy Simulation (WMLES). The simple method, called Prescribed Mean Shear and Turbulence (PMST) hereafter, is based on imposing body forces over the whole domain to maintain a desired unsteady flow, where the ground is modeled as a slip-free boundary, which in turn removes the need for grid refinement and/or wall modeling close to the solid walls. Another strength of this method, besides being computationally fast, is that a desired turbulence field can be imposed to study the wake and dynamics of vortices. The methodology is used for simulation of the interactions of an infinitely long wind farm with the neutral ABL. Flow statistics are compared with the WMLES computations in terms of mean velocity as well as higher-order statistical moments. The results ...

  12. Does the Model Matter? Comparing Video Self-Modeling and Video Adult Modeling for Task Acquisition and Maintenance by Adolescents with Autism Spectrum Disorders

    Science.gov (United States)

    Cihak, David F.; Schrader, Linda

    2009-01-01

    The purpose of this study was to compare the effectiveness and efficiency of learning and maintaining vocational chain tasks using video self-modeling and video adult modeling instruction. Four adolescents with autism spectrum disorders were taught vocational and prevocational skills. Although both video modeling conditions were effective for…

  13. Comparative analysis of Bouc–Wen and Jiles–Atherton models under symmetric excitations

    Energy Technology Data Exchange (ETDEWEB)

    Laudani, Antonino, E-mail: alaudani@uniroma3.it; Fulginei, Francesco Riganti; Salvini, Alessandro

    2014-02-15

    The aim of the present paper is to validate the Bouc–Wen (BW) hysteresis model when it is applied to predict dynamic ferromagnetic loops. Although the Bouc–Wen model has attracted increasing interest in recent years, it is usually adopted for mechanical and structural systems and only rarely for magnetic applications. To address this goal, the Bouc–Wen model is compared with the dynamic Jiles–Atherton model, which was conceived specifically for simulating magnetic hysteresis. The comparative analysis involves saturated and symmetric hysteresis loops in ferromagnetic materials. In addition, to identify the Bouc–Wen parameters, a recent and very effective heuristic called Metric-Topological and Evolutionary Optimization (MeTEO) is utilized. It is based on a hybridization of three meta-heuristics: Flock-of-Starlings Optimization, Particle Swarm Optimization, and the Bacterial Chemotaxis Algorithm. Thanks to the specific properties of these heuristics, MeTEO achieves an effective identification of such models. Several hysteresis loops are used in the final validation tests to investigate whether the BW model can follow the different hysteresis behaviors of both static (quasi-static) and dynamic cases.
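
    The BW model's hysteretic variable obeys a first-order nonlinear ODE; a minimal sketch of integrating it for a sinusoidal input (parameter values are illustrative, not those identified by MeTEO in the paper):

```python
# Integrate the classic Bouc-Wen ODE:
#   dz/dt = A*dx/dt - beta*|dx/dt|*|z|**(n-1)*z - gamma*(dx/dt)*|z|**n
import numpy as np
from scipy.integrate import solve_ivp

A, beta, gamma, n = 1.0, 0.5, 0.5, 1.5

def bouc_wen(t, z, omega=2 * np.pi):
    dx = omega * np.cos(omega * t)          # input x(t) = sin(omega*t)
    zz = z[0]
    return [A * dx - beta * abs(dx) * abs(zz) ** (n - 1) * zz
            - gamma * dx * abs(zz) ** n]

sol = solve_ivp(bouc_wen, (0.0, 5.0), [0.0], dense_output=True, max_step=1e-3)
t = np.linspace(0.0, 5.0, 1000)
x, z = np.sin(2 * np.pi * t), sol.sol(t)[0]  # plotting z against x traces the loop
print(z.min(), z.max())
```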

  14. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models.

    Science.gov (United States)

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements on the precipitation input change. When rain gauge data are used as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes the rain cell to travel from the gauge to the catchment. Since this time displacement is not present in the system measurements, the data assimilation scheme might already have updated the model to include the impact of the particular rain cell by the time the rain data are forced upon the model, which will therefore end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input with that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
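
    A minimal sketch of this kind of experiment, using a toy time-area model (a convolution of rain with a response histogram); the rain series, histogram, and error measure are synthetic stand-ins, not the paper's data:

```python
# Compare runoff from a base rain series against shifted and biased inputs.
import numpy as np

rng = np.random.default_rng(5)
rain = np.clip(rng.normal(0.0, 1.0, 288), 0.0, None)  # 5-min steps, one day
time_area = np.array([0.1, 0.3, 0.4, 0.2])            # catchment response

def runoff(p):
    return np.convolve(p, time_area)[: p.size]

base = runoff(rain)
shifted = runoff(np.roll(rain, 1))   # rain displaced by one 5-min step
biased = runoff(rain * 1.6)          # rain with a constant +60% bias

for name, q in [("shifted", shifted), ("biased", biased)]:
    print(name, round(float(np.sqrt(np.mean((q - base) ** 2))), 3))
```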

  15. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency, and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease of implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements, while Sobol's method yielded more robust sensitivity rankings.
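
    A minimal sketch of the Sobol analysis step using the SALib package (assumed installed); the three parameter names and the algebraic stand-in for the hydrologic model are illustrative, not SAC-SMA's:

```python
# First-order and total-order Sobol indices for a toy 3-parameter model.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["capacity", "recession", "melt_factor"],  # hypothetical names
    "bounds": [[10, 300], [0.01, 0.5], [1, 8]],
}

X = saltelli.sample(problem, 1024)
# Stand-in for running the watershed model on each sampled parameter set:
Y = np.log(X[:, 0]) + 5.0 * X[:, 1] + 0.1 * X[:, 2] ** 2

Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])   # first-order and total-order indices
```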

  16. Case management: a randomized controlled study comparing a neighborhood team and a centralized individual model.

    Science.gov (United States)

    Eggert, G M; Zimmer, J G; Hall, W J; Friedman, B

    1991-10-01

    This randomized controlled study compared two types of case management for skilled nursing level patients living at home: the centralized individual model and the neighborhood team model. The team model differed from the individual model in that team case managers performed client assessments, care planning, some direct services, and reassessments; they also had much smaller caseloads and were assigned a specific catchment area. While patients in both groups incurred very high estimated health services costs, the average annual cost during 1983-85 for team cases was 13.6 percent less than that of individual model cases. While the team cases were 18.3 percent less expensive among "old" patients (patients who entered the study from the existing ACCESS caseload), they were only 2.7 percent less costly among "new" cases. The lower costs were due to reductions in hospital days and home care. Team cases averaged 26 percent fewer hospital days per year and 17 percent fewer home health aide hours. Nursing home use was 48 percent higher for the team group than for the individual model group. Mortality was almost exactly the same for both groups during the first year (about 30 percent), but was lower for team patients during the second year (11 percent as compared to 16 percent). Probable mechanisms for the observed results are discussed.

  17. Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies

    Science.gov (United States)

    Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.

    2017-12-01

    Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative of various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model, and of the baryonic model plus one of the three dark matter models, with observations on the assembled galaxy database. When a dark matter component improves the fit to the spectroscopic rotation curve, we rank the models according to their goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question of whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark
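
    As one concrete piece of such a fit, the NFW halo's circular-velocity curve has a closed form; a minimal sketch with illustrative parameter values (not fitted values from the paper):

```python
# NFW circular velocity: V(r)^2 = V200^2 * mu(c*x) / (x * mu(c)),
# with x = r/r200 and mu(t) = ln(1+t) - t/(1+t).
import numpy as np

def v_circ_nfw(r, v200, c, r200):
    """r and r200 in kpc, velocities in km/s."""
    mu = lambda t: np.log(1.0 + t) - t / (1.0 + t)
    x = r / r200
    return v200 * np.sqrt(mu(c * x) / (x * mu(c)))

r = np.linspace(1.0, 50.0, 5)
print(v_circ_nfw(r, v200=120.0, c=10.0, r200=150.0))
```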

  18. Comparing and Validating Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.

    Science.gov (United States)

    Lane, Thomas; Russo, Daniel P; Zorn, Kimberley M; Clark, Alex M; Korotcov, Alexandru; Tkachenko, Valery; Reynolds, Robert C; Perryman, Alexander L; Freundlich, Joel S; Ekins, Sean

    2018-04-26

    Tuberculosis is a global health dilemma. In 2016, the WHO reported 10.4 million incident cases and 1.7 million deaths. The need to develop new treatments for those infected with Mycobacterium tuberculosis (Mtb) has led to many large-scale phenotypic screens and many thousands of new active compounds identified in vitro. However, with limited funding, efforts to discover new active molecules against Mtb need to be more efficient. Several computational machine learning approaches have been shown to have good enrichment and hit rates. We have curated small-molecule Mtb data and developed new models with a total of 18,886 molecules at activity cutoffs of 10 μM, 1 μM, and 100 nM. These datasets were used to evaluate different machine learning methods (including deep learning) and metrics and to generate predictions for additional molecules published in 2017. One Mtb model, a combined in vitro and in vivo Bayesian model at the 100 nM activity cutoff, yielded the following metrics for 5-fold cross-validation: accuracy = 0.88, precision = 0.22, recall = 0.91, specificity = 0.88, kappa = 0.31, and MCC = 0.41. We have also curated an evaluation set (n = 153 compounds) published in 2017, and when used to test our model, it showed comparable statistics (accuracy = 0.83, precision = 0.27, recall = 1.00, specificity = 0.81, kappa = 0.36, and MCC = 0.47). We have also compared these models with additional machine learning algorithms, showing that Bayesian machine learning models constructed with literature Mtb data generated by different laboratories were generally equivalent to or outperformed deep neural networks on external test sets. Finally, we have compared our training and test sets to show that they were suitably diverse and distinct, and therefore represent useful evaluation sets. Such Mtb machine learning models could help prioritize compounds for testing in vitro and in vivo.
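
    A minimal sketch of a Bayesian (naive Bayes) activity classifier with 5-fold cross-validated metrics like those quoted above; the fingerprint bits and activity labels are random placeholders, not the curated Mtb data:

```python
# Bernoulli naive Bayes on mock molecular fingerprints, scored with
# accuracy, recall, and Matthews correlation coefficient (MCC).
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, recall_score, matthews_corrcoef

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(500, 128))       # mock 128-bit fingerprints
y = (X[:, :8].sum(axis=1) > 4).astype(int)    # synthetic "active" label

pred = cross_val_predict(BernoulliNB(), X, y, cv=5)
print(accuracy_score(y, pred), recall_score(y, pred),
      matthews_corrcoef(y, pred))
```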

  19. Comparing flow-through and static ice cave models for Shoshone Ice Cave

    Directory of Open Access Journals (Sweden)

    Kaj E. Williams

    2015-05-01

    In this paper we suggest a new ice cave type: the “flow-through” ice cave. In a flow-through ice cave, external winds blow into the cave and wet cave walls chill the incoming air to the wet-bulb temperature, thereby achieving extra cooling of the cave air. We have investigated an ice cave in Idaho, located in a lava tube that is reported to have airflow through porous, wet end-walls and could therefore be a flow-through cave. We have instrumented the site and collected data for one year. In order to determine the actual ice cave type present at Shoshone, we have constructed numerical models for static and flow-through caves (the dynamic type is not relevant here). The models are driven with exterior measurements of air temperature, relative humidity, and wind speed. The model output is interior air temperature and relative humidity. We then compare the output of both models to the measured interior air temperatures and relative humidity. While both the flow-through and static cave models are capable of preserving ice year-round (a net zero or positive ice mass balance), the two models show very different cave air temperature and relative humidity output. We find the empirical data support a hybrid of the static and flow-through models: permitting a static ice cave to have incoming air chilled to the wet-bulb temperature fits the data best for the Shoshone Ice Cave.
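
    The wet-bulb limit that caps the incoming air's cooling can be estimated from dry-bulb temperature and relative humidity; a minimal sketch using Stull's (2011) empirical approximation (our choice of illustration, not necessarily the formula used in the paper):

```python
# Stull (2011) wet-bulb approximation; T in deg C, RH in percent.
import numpy as np

def wet_bulb_stull(T, RH):
    return (T * np.arctan(0.151977 * np.sqrt(RH + 8.313659))
            + np.arctan(T + RH) - np.arctan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * np.arctan(0.023101 * RH)
            - 4.686035)

print(wet_bulb_stull(20.0, 50.0))   # about 13.7 deg C
```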

  20. Comparative study of chemo-electro-mechanical transport models for an electrically stimulated hydrogel

    International Nuclear Information System (INIS)

    Elshaer, S E; Moussa, W A

    2014-01-01

    The main objective of this work is to introduce a new expression for the hydrogel's hydration for use within the Poisson Nernst–Planck chemo-electro-mechanical (PNP CEM) transport models. This new contribution to the models supports large deformations by considering the higher-order terms in the Green–Lagrangian strain tensor. A detailed discussion of the CEM transport models using Poisson Nernst–Planck (PNP) and Poisson logarithmic Nernst–Planck (PLNP) equations for chemically and electrically stimulated hydrogels is presented. The assumptions made to simplify both CEM transport models for an applied electric field of the order of 0.833 kV/m and a highly diluted electrolyte solution (97% water) are explained. The PNP CEM model has been verified accurately against experimental and numerical results. In addition, different normalizations of the parameters are used to derive the dimensionless forms of both the PNP and PLNP CEM models. Four models, the PNP CEM, PLNP CEM, dimensionless PNP CEM, and dimensionless PLNP CEM transport models, were employed on an axially symmetric cylindrical hydrogel problem with an aspect ratio (diameter to thickness) of 175:3. The displacement and osmotic pressure obtained for the four models are compared against the variation of the number of elements in the finite element analysis, the simulation duration, and the solution rate when using the direct numerical solver.
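
    For reference, the higher-order terms in question are the quadratic terms of the full Green–Lagrange strain tensor, whose standard component form is (a textbook statement, not quoted from the paper):

```latex
E_{ij} = \frac{1}{2}\left(
    \frac{\partial u_i}{\partial X_j} + \frac{\partial u_j}{\partial X_i}
    + \frac{\partial u_k}{\partial X_i}\,\frac{\partial u_k}{\partial X_j}
\right)
```

    Small-strain theory drops the quadratic product term; retaining it is what allows a hydration expression of this kind to remain valid under large deformations.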

  1. Comparability of results from pair and classical model formulations for different sexually transmitted infections.

    Directory of Open Access Journals (Sweden)

    Jimmy Boon Som Ong

    The “classical model” for sexually transmitted infections treats partnerships as instantaneous events summarized by partner change rates, while individual-based and pair models explicitly account for time within partnerships and gaps between partnerships. We compared predictions from the classical and pair models over a range of partnership and gap combinations. While the former predicted similar or marginally higher prevalence at the shortest partnership lengths, the latter predicted self-sustaining transmission for gonorrhoea (GC) and Chlamydia (CT) over much broader partnership and gap combinations. Predictions of the critical level of condom use (C_c) required to prevent transmission also differed substantially when using the same parameters. When calibrated to give the same disease prevalence as the pair model by adjusting the infectious duration for GC and CT, and by adjusting transmission probabilities for HIV, the classical model then predicted much higher C_c values for GC and CT, while C_c predictions for HIV were fairly close. In conclusion, the two approaches give different predictions over potentially important combinations of partnership and gap lengths. Assuming that it is more correct to explicitly model partnerships and gaps, pair or individual-based models may be needed for GC and CT, since model calibration does not resolve the differences.

  2. Comparative Analysis of Bulge Deformation between 2D and 3D Finite Element Models

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2014-02-01

    Bulge deformation of the slab is one of the main factors that affect slab quality in continuous casting. This paper describes an investigation into bulge deformation using ABAQUS to model the solidification process. A three-dimensional finite element model of the slab solidification process was first established, because bulge deformation is closely related to the slab temperature distribution. Based on the slab temperature distributions, a three-dimensional thermomechanical coupling model including the slab, the rollers, and the dynamic contact between them was then constructed and applied to a case study. The thermomechanical coupling model produces outputs such as the pattern of bulge deformation. Moreover, the three-dimensional model was compared with a two-dimensional model to examine the differences between the two models in calculating bulge deformation. The results show that a platform zone exists on the wide side of the slab and that bulge deformation is strongly affected by the width-to-thickness ratio. They also indicate that the difference in bulge deformation between the two modeling approaches is small when the width-to-thickness ratio is larger than six.

  3. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    Energy Technology Data Exchange (ETDEWEB)

    Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.; Choux, Martin; Hovland, Geir [Department of Engineering Sciences, University of Agder, PO Box 509, N-4898 Grimstad (Norway)

    2016-06-08

    Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology is not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper verifies both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and result accuracy are evaluated to assess which modeling strategy is most suitable for its eventual application.

  4. Comparing ESC and iPSC—Based Models for Human Genetic Disorders

    Directory of Open Access Journals (Sweden)

    Tomer Halevy

    2014-10-01

    Traditionally, human disorders were studied using animal models or somatic cells taken from patients. Such studies enabled the analysis of the molecular mechanisms of numerous disorders, and led to the discovery of new treatments. Yet, these systems are limited or even irrelevant in modeling multiple genetic diseases. The isolation of human embryonic stem cells (ESCs) from diseased blastocysts, the derivation of induced pluripotent stem cells (iPSCs) from patients’ somatic cells, and the new technologies for genome editing of pluripotent stem cells have opened a new window of opportunities in the field of disease modeling, and enabled studying diseases that couldn’t be modeled in the past. Importantly, despite the high similarity between ESCs and iPSCs, there are several fundamental differences between these cells, which have important implications regarding disease modeling. In this review we compare ESC-based models to iPSC-based models, and highlight the advantages and disadvantages of each system. We further suggest a roadmap for how to choose the optimal strategy to model each specific disorder.

  5. Comparing ESC and iPSC-Based Models for Human Genetic Disorders.

    Science.gov (United States)

    Halevy, Tomer; Urbach, Achia

    2014-10-24

    Traditionally, human disorders were studied using animal models or somatic cells taken from patients. Such studies enabled the analysis of the molecular mechanisms of numerous disorders, and led to the discovery of new treatments. Yet, these systems are limited or even irrelevant in modeling multiple genetic diseases. The isolation of human embryonic stem cells (ESCs) from diseased blastocysts, the derivation of induced pluripotent stem cells (iPSCs) from patients' somatic cells, and the new technologies for genome editing of pluripotent stem cells have opened a new window of opportunities in the field of disease modeling, and enabled studying diseases that couldn't be modeled in the past. Importantly, despite the high similarity between ESCs and iPSCs, there are several fundamental differences between these cells, which have important implications regarding disease modeling. In this review we compare ESC-based models to iPSC-based models, and highlight the advantages and disadvantages of each system. We further suggest a roadmap for how to choose the optimal strategy to model each specific disorder.

  6. THE PROPAGATION OF UNCERTAINTIES IN STELLAR POPULATION SYNTHESIS MODELING. II. THE CHALLENGE OF COMPARING GALAXY EVOLUTION MODELS TO OBSERVATIONS

    International Nuclear Information System (INIS)

    Conroy, Charlie; Gunn, James E.; White, Martin

    2010-01-01

    Models for the formation and evolution of galaxies readily predict physical properties such as star formation rates, metal-enrichment histories, and, increasingly, gas and dust content of synthetic galaxies. Such predictions are frequently compared to the spectral energy distributions of observed galaxies via the stellar population synthesis (SPS) technique. Substantial uncertainties in SPS exist, and yet their relevance to the task of comparing galaxy evolution models to observations has received little attention. In the present work, we begin to address this issue by investigating the importance of uncertainties in stellar evolution, the initial stellar mass function (IMF), and dust and interstellar medium (ISM) properties on the translation from models to observations. We demonstrate that these uncertainties translate into substantial uncertainties in the ultraviolet, optical, and near-infrared colors of synthetic galaxies. Aspects that carry significant uncertainties include the logarithmic slope of the IMF above 1 M_⊙, dust attenuation law, molecular cloud disruption timescale, clumpiness of the ISM, fraction of unobscured starlight, and treatment of advanced stages of stellar evolution including blue stragglers, the horizontal branch, and the thermally pulsating asymptotic giant branch. The interpretation of the resulting uncertainties in the derived colors is highly non-trivial because many of the uncertainties are likely systematic, and possibly correlated with the physical properties of galaxies. We therefore urge caution when comparing models to observations.

  7. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. To address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE, and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B, and C (GRA, GRB, and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC, and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
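
    A minimal sketch of a Granger-Ramanathan combination: variant A amounts to an unconstrained least-squares regression of observations on the member simulations without an intercept (the data below are synthetic placeholders; variants B and C add constraints on the weights):

```python
# Granger-Ramanathan (variant A) weights and the resulting NSE.
import numpy as np

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 10.0, size=365)                        # "observed" flows
members = obs[:, None] * rng.uniform(0.6, 1.4, (365, 12))   # 12 simulations

w, *_ = np.linalg.lstsq(members, obs, rcond=None)   # unconstrained weights
combined = members @ w

nse = 1.0 - np.sum((obs - combined) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(w.round(3), round(float(nse), 4))
```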

  8. Comparing photo modeling methodologies and techniques: the instance of the Great Temple of Abu Simbel

    Directory of Open Access Journals (Sweden)

    Sergio Di Tondo

    2013-10-01

    Fifty years after the salvage of the Abu Simbel temples, it has become possible to experiment with contemporary photo-modeling tools starting from the original data of the photogrammetric survey carried out in the 1950s. This prompted a reflection on “image-based” methods and modeling techniques, comparing strict 3D digital photogrammetry with the latest Structure from Motion (SFM) systems. The topographic survey data, the original photogrammetric stereo pairs, the point coordinates, and their representation in contour lines allowed a model of the monument to be obtained in its configuration before the relocation of the temples. The impossibility of carrying out a direct survey led to the use of tourist photographs to create SFM models for geometric comparisons.

  9. Comparative study of wall-force models for the simulation of bubbly flows

    Energy Technology Data Exchange (ETDEWEB)

    Rzehak, Roland, E-mail: r.rzehak@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Krepper, Eckhard, E-mail: E.Krepper@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Lifante, Conxita, E-mail: Conxita.Lifante@ansys.com [ANSYS Germany GmbH, Staudenfeldweg 12, 83624 Otterfing (Germany)

    2012-12-15

    Highlights: • Comparison of common models for the wall force with an experimental database. • Identification of a suitable closure for bubbly flow. • Enables prediction of the location and height of the wall peak in void-fraction profiles. - Abstract: Accurate numerical prediction of void-fraction profiles in bubbly multiphase flow relies on suitable closure models for the momentum exchange between the liquid and gas phases. We here consider forces acting on the bubbles in the vicinity of a wall. A number of different models for this so-called wall force have been proposed in the literature and are implemented in widely used CFD codes. Simulations using a selection of these models are compared with a set of experimental data on bubbly air-water flow in round pipes of different diameters. Based on the results, recommendations on suitable closures are given.

  10. Prediction of paddy drying kinetics: A comparative study between mathematical and artificial neural network modelling

    Directory of Open Access Journals (Sweden)

    Beigi Mohsen

    2017-01-01

    The present study investigated deep-bed drying of rough rice kernels in various thin layers at different drying air temperatures and flow rates. A comparative study was performed between mathematical thin-layer models and artificial neural networks in estimating the drying curves of rough rice. The suitability of nine mathematical models in simulating the drying kinetics was examined, and the Midilli model was determined to be the best approach for describing the drying curves. Different feed-forward back-propagation artificial neural networks were examined to predict the moisture content variations of the grains. The ANN with a 4-18-18-1 topology, a hyperbolic tangent sigmoid transfer function, and a Levenberg-Marquardt back-propagation training algorithm provided the best results, with the maximum correlation coefficient and the minimum mean square error values. Furthermore, it was revealed that ANN modeling performed better in predicting the drying curves, with lower root mean square error values.
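
    The Midilli thin-layer model expresses the moisture ratio as MR(t) = a·exp(-k·t^n) + b·t; a minimal sketch of fitting it with scipy on synthetic drying data (the coefficients are illustrative, not the paper's):

```python
# Fit the Midilli model to a synthetic moisture-ratio series.
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    return a * np.exp(-k * t ** n) + b * t

t = np.linspace(0.0, 10.0, 30)                       # drying time, h
rng = np.random.default_rng(3)
mr_obs = midilli(t, 1.0, 0.35, 1.1, -0.002) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(midilli, t, mr_obs, p0=[1.0, 0.3, 1.0, 0.0])
print(dict(zip("aknb", popt.round(4))))
```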

  11. Metal accumulation in the earthworm Lumbricus rubellus. Model predictions compared to field data

    Science.gov (United States)

    Veltman, K.; Huijbregts, M.A.J.; Vijver, M.G.; Peijnenburg, W.J.G.M.; Hobbelen, P.H.F.; Koolhaas, J.E.; van Gestel, C.A.M.; van Vliet, P.C.J.; Jan, Hendriks A.

    2007-01-01

    The mechanistic bioaccumulation model OMEGA (Optimal Modeling for Ecotoxicological Applications) is used to estimate the accumulation of zinc (Zn), copper (Cu), cadmium (Cd), and lead (Pb) in the earthworm Lumbricus rubellus. Our validation against field accumulation data shows that the model accurately predicts internal cadmium concentrations. In addition, our results show that internal metal concentrations in the earthworm are less than linearly (slope < 1) related to the total concentration in soil, while risk assessment procedures often assume the biota-soil accumulation factor (BSAF) to be constant. Although predicted internal concentrations of all metals are generally within a factor of 5 of the field data, incorporation of regulation in the model is necessary to improve the predictability of essential metals such as zinc and copper. © 2006 Elsevier Ltd. All rights reserved.

  12. Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.

    Science.gov (United States)

    Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J

    2016-04-01

    The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods with three-dimensionally printed (3DP) fabrication methods using soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3dMD and three CBCT scans were imported to the 3DP, and six test models were made. Reversible hydrocolloid and dental stone were used to make three facial moulages of the CPM, and the impressions/casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were applied to all specimens. All measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparisons of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed a significant difference among the three methods (p < 0.05). Models fabricated using 3D-SPG showed a statistical difference in comparison to the models fabricated using the traditional method of facial moulage and 3DP models fabricated from CBCT imaging. 3DP models fabricated using 3D-SPG were less accurate than the CPM and the models fabricated using facial moulage and CBCT imaging techniques. © 2015 by the American College of Prosthodontists.

  13. Comparing models of rapidly rotating relativistic stars constructed by two numerical methods

    Science.gov (United States)

    Stergioulas, Nikolaos; Friedman, John L.

    1995-05-01

    We present the first direct comparison of codes based on two different numerical methods for constructing rapidly rotating relativistic stars. A code based on the Komatsu-Eriguchi-Hachisu (KEH) method (Komatsu et al. 1989), written by Stergioulas, is compared to the Butterworth-Ipser code (BI), as modified by Friedman, Ipser, & Parker. We compare models obtained by each method and evaluate the accuracy and efficiency of the two codes. The agreement is surprisingly good, and error bars in the published numbers for maximum frequencies based on BI are dominated not by the code inaccuracy but by the number of models used to approximate a continuous sequence of stars. The BI code is faster per iteration, and it converges more rapidly at low density, while KEH converges more rapidly at high density; KEH also converges in regions where BI does not, allowing one to compute some models unstable against collapse that are inaccessible to the BI code. A relatively large discrepancy recently reported (Eriguchi et al. 1994) for models based on the Friedman-Pandharipande equation of state is found to arise from the use of two different versions of the equation of state. For two representative equations of state, the two-dimensional space of equilibrium configurations is displayed as a surface in a three-dimensional space of angular momentum, mass, and central density. We find, for a given equation of state, that equilibrium models with maximum values of mass, baryon mass, and angular momentum are (generically) either all unstable to collapse or all stable. In the first case, the stable model with maximum angular velocity is also the model with maximum mass, baryon mass, and angular momentum. In the second case, the stable models with maximum values of these quantities are all distinct. Our implementation of the KEH method will be available as a public domain program for interested users.

  14. Comparing convective heat fluxes derived from thermodynamics to a radiative-convective model and GCMs

    Science.gov (United States)

    Dhara, Chirag; Renner, Maik; Kleidon, Axel

    2015-04-01

    The convective transport of heat and moisture plays a key role in the climate system, but this transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system, which describe how the temperature difference is affected by convective heat transport, yielding a maximum-power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey-atmosphere radiative-convective (RC) model as well as Global Circulation Model (GCM) simulations at the grid scale. We find that our simple expression as well as the RC model can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches. The RC model, however, shows a lower bias than our simple expression. We identify the use of the prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression. On the other hand, the comparison with the RC model indicates a way to improve the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes compare very well among the approaches, as does their sensitivity to surface warming. Our comparison suggests that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.

  15. A comparative study of the proposed models for the components of the national health information system.

    Science.gov (United States)

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    The national health information system plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using a national health information system, one can improve the quality of the health data, information, and knowledge used to support decision making at all levels and in all areas of the health sector. Since full identification of the components of this system seems necessary for better planning and for managing the factors that influence performance, this study comparatively explores different perspectives on its components. This is a descriptive, comparative study. The material comprises printed and electronic documents describing the components of the national health information system in three parts: input, process, and output. In this context, searches were conducted using library resources and the internet, and the data were analyzed using comparative tables and qualitative analysis. The findings showed three different perspectives on the components of the national health information system: the Lippeveld, Sauerborn, and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's model (2009). On the input side (resources and structure), all three models require components for management and leadership, program planning and design, staffing, and software and hardware facilities and equipment. In the process section, all three models emphasize actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the other two models consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models have had a brief discussion about the

  16. The Comparative Study of Collaborative Learning and SDLC Model to develop IT Group Projects

    Directory of Open Access Journals (Sweden)

    Sorapak Pukdesree

    2017-11-01

    The main objectives of this research were to compare the attitudes of learners between applying the SDLC model with collaborative learning and the typical SDLC model, and to develop electronic courseware as group projects. The study was quasi-experimental. The population consisted of students who took the Computer Organization and Architecture course in the academic year 2015; 38 students participated in the research. The participants were divided voluntarily into two groups: an experimental group of 28 students using the SDLC model with collaborative learning, and a control group of 10 students using the typical SDLC model. The research instruments were an attitude questionnaire, a semi-structured interview, and a self-assessment questionnaire. The collected data were analysed by arithmetic mean, standard deviation, and independent-sample t-test. The results of the questionnaire revealed that the attitudes of the learners differed statistically significantly between the experimental and control groups at a significance level of 0.05. The results of the interviews revealed that most of the learners shared the opinion that collaborative learning was very useful, rating it at the highest level of their attitudes compared with the previous methodology. Learners also left feedback that collaborative learning should be applied to other courses.

  17. Alfven waves in the auroral ionosphere: A numerical model compared with measurements

    International Nuclear Information System (INIS)

    Knudsen, D.J.; Kelley, M.C.; Vickrey, J.F.

    1992-01-01

    The authors solve a linear numerical model of Alfven waves reflecting from the high-latitude ionosphere, both to better understand the role of the ionosphere in the magnetosphere/ionosphere coupling process and to compare model results with in situ measurements. They use the model to compute the frequency-dependent amplitude and phase relations between the meridional electric and zonal magnetic fields due to Alfven waves. These relations are compared with measurements taken by an auroral sounding rocket flown in the morningside oval and by the HILAT satellite traversing the oval at local noon. The sounding rocket's trajectory was mostly parallel to the auroral oval, and it measured enhanced fluctuating field energy in regions of electron precipitation. The rocket-measured phase data are in excellent agreement with the Alfven wave model, while the fields measured by HILAT are related by the height-integrated Pedersen conductivity Σ_P, indicating that the measured field fluctuations were due mainly to structured field-aligned current systems. A reason for the relative lack of Alfven wave energy in the HILAT measurements could be the fact that the satellite traveled mostly perpendicular to the oval and therefore quickly traversed the narrow regions of electron precipitation and associated wave activity.

  18. A Comparative Study of CFD Models of a Real Wind Turbine in Solar Chimney Power Plants

    Directory of Open Access Journals (Sweden)

    Ehsan Gholamalizadeh

    2017-10-01

    A solar chimney power plant consists of four main parts: a solar collector, a chimney, an energy storage layer, and a wind turbine. So far, several investigations of the performance of the solar chimney power plant have been conducted, applying different approaches to model the turbine inside the system. In particular, a real wind turbine coupled to the system was simulated using computational fluid dynamics (CFD) in three investigations. Gholamalizadeh et al. simulated a wind turbine with the same blade profile as the Manzanares SCPP's turbine (FX W-151-A blade profile), while a CLARK Y blade profile was modelled by Guo et al. and Ming et al. In this study, simulations of the Manzanares prototype were carried out using the CFD model developed by Gholamalizadeh et al. Results obtained by modelling the different turbine blade profiles at different turbine rotational speeds were then compared. The results showed that a turbine with the CLARK Y blade profile significantly overestimates the pressure drop across the Manzanares prototype turbine compared to the FX W-151-A blade profile. In addition, modelling of both blade profiles led to very similar trends in the changes in turbine efficiency and power output with respect to rotational speed.

  19. Psychobiological model of temperament and character: Validation and cross-cultural comparisons

    Directory of Open Access Journals (Sweden)

    Džamonja-Ignjatović Tamara

    2005-01-01

    The paper presents research results regarding the Psychobiological Model of Personality by Robert Cloninger. The primary research goal was to test the new TCI-5 inventory and compare our results with US normative data. We also analyzed the factor structure of the model and the reliability of the basic TCI-5 scales and subscales. The sample consisted of 473 subjects from the normal population, aged 18-50 years. Results showed significant differences between the Serbian and American samples: Novelty seeking was higher in the Serbian sample, while Persistence, Self-directedness, and Cooperativeness were lower. For the most part, the results of the present study confirmed the seven-factor structure of the model, although some subscales did not load on the basic dimensions as predicted by the theoretical model. Certain theoretical revisions of the model are therefore required to fit the empirical findings. Similarly, a discrepancy between theory and data was noticed regarding the reliability of the TCI-5 scales, which also needs to be re-examined. The results showed satisfactory reliability of Persistence (.90), Self-directedness (.89), and Harm avoidance (.87), but low reliability of Novelty seeking (.78), Reward dependence (.79), and Self-transcendence (.78).

  20. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Directory of Open Access Journals (Sweden)

    Claudimar Pereira da Veiga

    2012-08-01

    The importance of demand forecasting as a management tool is a well-documented issue. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately captures the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the method currently used by the company. The results of this analysis also served as input to evaluate the influence of demand forecasting errors on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, all models tested presented good results (much better than the company's current forecasting method), with mean absolute percentage errors (MAPE) around 10%. The total financial impact for the company was 6.05% of annual sales.
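
    For reference, the MAPE statistic quoted above is straightforward to compute; a minimal sketch on made-up sales figures with a naive previous-month forecast:

```python
# Mean absolute percentage error (MAPE) of a naive one-step forecast.
import numpy as np

sales = np.array([120.0, 135.0, 128.0, 150.0, 160.0, 155.0])
forecast = sales[:-1]        # previous-month naive forecast
actual = sales[1:]

mape = np.mean(np.abs((actual - forecast) / actual)) * 100.0
print(round(float(mape), 2), "%")
```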

  1. Comparative analysis between Hec-RAS models and IBER in the hydraulic assessment of bridges

    OpenAIRE

    Rincón, Jean; Pérez, María; Delfín, Guillermo; Freitez, Carlos; Martínez, Fabiana

    2017-01-01

    This work performs a comparative analysis between the Hec-RAS and IBER models in the hydraulic evaluation of rivers with structures such as bridges. The case of application was La Guardia creek, located along the road connecting the cities of Barquisimeto and Quíbor, Venezuela. The first phase of the study consisted of a comparison of the models from a conceptual and operational point of view. The second phase focused on the case study and the comparison of ...

  2. Cold Nuclear Matter effects on J/psi production at RHIC: comparing shadowing models

    Energy Technology Data Exchange (ETDEWEB)

    Ferreiro, E.G.; /Santiago de Compostela U.; Fleuret, F.; /Ecole Polytechnique; Lansberg, J.P.; /SLAC; Rakotozafindrabe, A.; /SPhN, DAPNIA, Saclay

    2009-06-19

    We present a broad study comparing different shadowing models and their influence on J/ψ production. We have taken into account the possibility of different partonic processes for cc̄-pair production. We notice that the effect of shadowing corrections on J/ψ production clearly depends on the partonic process considered. Our results are compared to the available data on dAu collisions at RHIC energies. We try different break-up cross sections for each of the studied shadowing models.

  3. Extra-Tropical Cyclones at Climate Scales: Comparing Models to Observations

    Science.gov (United States)

    Tselioudis, G.; Bauer, M.; Rossow, W.

    2009-04-01

    Climate is often defined as the accumulation of weather, and weather is not the concern of climate models. Justification for this latter sentiment has long been hidden behind coarse model resolutions and blunt validation tools based on climatological maps. The spatial-temporal resolutions of today's climate models and observations are converging onto meteorological scales, however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough that its accumulation results in a robust climate simulation. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from observations and climate model output. These include the usual cyclone characteristics (centers, tracks), but also adaptive cyclone-centric composites. We have created a novel dataset, the MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6 hourly assessment of the areas under the influence of mid-latitude cyclones, using a search algorithm that delimits the boundaries of each system from the outer-most closed SLP contour. Using this we then extract composites of cloud, radiation, and precipitation properties from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. A demonstration of the potential usefulness of these tools in process-based climate model evaluation studies will be shown.

  4. A Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and MODIS Data

    Science.gov (United States)

    Hazaymeh, K.; Almagbile, A.

    2018-04-01

    In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair and time-series datasets from the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated from each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. The STI-FM produced more accurate reconstructions of both Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicated that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.

  5. A simplified MHD model of capillary Z-Pinch compared with experiments

    Energy Technology Data Exchange (ETDEWEB)

    Shapolov, A.A.; Kiss, M.; Kukhlevsky, S.V. [Institute of Physics, University of Pecs (Hungary)

    2016-11-15

    The most accurate models of the capillary Z-pinches used for excitation of soft X-ray lasers and photolithography XUV sources are currently based on magnetohydrodynamics (MHD) theory. The output of MHD-based models greatly depends on details in the mathematical description, such as initial and boundary conditions, approximations of plasma parameters, etc. Small experimental groups who develop soft X-ray/XUV sources often use the simplest Z-pinch models for analysis of their experimental results, even though these models are inconsistent with the MHD equations. In the present study, keeping only the essential terms in the MHD equations, we obtained a simplified MHD model of a cylindrically symmetric capillary Z-pinch. The model gives accurate results compared to experiments with argon plasmas, and provides a simple analysis of the temporal evolution of the main plasma parameters. The results clarify the influence of viscosity, heat flux and approximations of plasma conductivity on the dynamics of capillary Z-pinch plasmas. The model can be useful for researchers, especially experimentalists, who develop soft X-ray/XUV sources. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  6. A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations

    Science.gov (United States)

    Parrott, M. H.; Hinze, W. J.; Braile, L. W.; Vonfrese, R. R. B.

    1985-01-01

    Flat-Earth modeling is a desirable alternative to the complex spherical-Earth modeling process. These methods were compared using 2.5-dimensional flat-Earth and spherical modeling to compute gravity and scalar magnetic anomalies along profiles perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Comparison was achieved with percent error computations (spherical minus flat, divided by spherical) at critical anomaly points. At the peak gravity anomaly value, errors are less than ±5% for all prisms. At 1/2 and 1/10 of the peak, errors are generally less than 10% and 40% respectively, increasing to these values with longer and wider prisms at higher altitudes. For magnetics, the errors at critical anomaly points are less than -10% for all prisms, attaining these magnitudes with longer and wider prisms at higher altitudes. In general, in both gravity and magnetic modeling, errors increase greatly for prisms wider than 500 km, although gravity modeling is more sensitive than magnetic modeling to spherical-Earth effects. Preliminary modeling of both satellite gravity and magnetic anomalies using flat-Earth assumptions is justified considering the errors caused by uncertainties in isolating anomalies.

  7. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.
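
    The comparison logic can be made concrete with a toy penalized-likelihood version. The sketch below uses a crude two-part MDL code length (the paper's minimum description length comparison uses the more refined NML/Fisher-information complexity term, and real strategy models predict choices item by item); all numbers and the one-parameter error model are invented:

```python
import numpy as np

def two_part_mdl(neg_log_lik, n_params, n_obs):
    """Crude two-part MDL code length: fit cost plus parameter cost.
    (A stand-in for the NML/FIA complexity term used in the paper.)"""
    return neg_log_lik + 0.5 * n_params * np.log(n_obs)

def nll_bernoulli(k_adherent, n_trials, p_adherent):
    """-log likelihood of k strategy-adherent choices in n trials."""
    k, n, p = k_adherent, n_trials, p_adherent
    return -(k * np.log(p) + (n - k) * np.log(1 - p))

# 87 of 100 choices match the strategy's prediction (invented data).
k, n = 87, 100

# Deterministic strategy plus a fitted error rate (1 free parameter):
p_hat = k / n
mdl_det = two_part_mdl(nll_bernoulli(k, n, p_hat), 1, n)

# Probabilistic strategy predicting adherence 0.80 (0 free parameters):
mdl_prob = two_part_mdl(nll_bernoulli(k, n, 0.80), 0, n)

print("classify as:", "deterministic" if mdl_det < mdl_prob else "probabilistic")
```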

  8. Modelling and Comparative Performance Analysis of a Time-Reversed UWB System

    Directory of Open Access Journals (Sweden)

    Popovski K

    2007-01-01

    Full Text Available The effects of multipath propagation lead to a significant decrease in system performance in most of the proposed ultra-wideband communication systems. A time-reversed system utilises the multipath channel impulse response to decrease receiver complexity, through a prefiltering at the transmitter. This paper discusses the modelling and comparative performance of a UWB system utilising time-reversed communications. System equations are presented, together with a semianalytical formulation of the level of intersymbol interference and multiuser interference. The standardised IEEE 802.15.3a channel model is applied, and the estimated error performance is compared through simulation with the performance of both time-hopped time-reversed and RAKE-based UWB systems.
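
    The prefiltering idea is easy to demonstrate numerically: transmitting through the time-reversed channel impulse response turns the end-to-end channel into the CIR autocorrelation, which concentrates energy into one focusing peak. A sketch with an invented exponential-decay CIR (not the IEEE 802.15.3a model the paper uses):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multipath CIR with exponential power decay.
h = rng.standard_normal(40) * np.exp(-0.15 * np.arange(40))

symbols = rng.choice([-1.0, 1.0], size=8)
pulse = np.zeros(8 * 50)                  # one BPSK symbol every 50 samples
pulse[::50] = symbols

prefilter = h[::-1] / np.linalg.norm(h)   # time-reversed CIR
tx = np.convolve(pulse, prefilter)        # prefiltering at the transmitter
rx = np.convolve(tx, h)                   # propagation through the channel

# The equivalent channel h(-t)*h(t) is the CIR autocorrelation, with a
# strong focusing peak at delay len(h)-1, so a simple sampler suffices.
eq = np.convolve(prefilter, h)
detected = np.sign(rx[len(h) - 1 :: 50][: len(symbols)])
print(np.array_equal(detected, symbols))  # True: symbols recovered
```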

  9. A comparative study of the tail ion distribution with reduced Fokker-Planck models

    Science.gov (United States)

    McDevitt, C. J.; Tang, Xian-Zhu; Guo, Zehua; Berk, H. L.

    2014-03-01

    A series of reduced models are used to study the fast ion tail in the vicinity of a transition layer between plasmas at disparate temperatures and densities, which is typical of the gas and pusher interface in inertial confinement fusion targets. Emphasis is placed on utilizing progressively more comprehensive models in order to identify the essential physics for computing the fast ion tail at energies comparable to the Gamow peak. The resulting fast ion tail distribution is subsequently used to compute the fusion reactivity as a function of collisionality and temperature. While a significant reduction of the fusion reactivity in the hot spot compared to the nominal Maxwellian case is present, this reduction is found to be partially recovered by an increase of the fusion reactivity in the neighboring cold region.
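
    A toy calculation shows why the tail matters: the reactivity is an integral of cross-section times velocity over the distribution, and the Gamow-peak region sits far out on the tail. The sketch below compares a Maxwellian against an ad hoc tail-depleted distribution; the cross-section shape and every constant are invented, whereas the paper's reduced Fokker-Planck models compute the tail self-consistently:

```python
import numpy as np

def reactivity(energies, f, sigma):
    """<sigma*v> = integral of sigma(E) * v(E) * f(E) dE for a normalized
    1D energy distribution f; arbitrary units with v ~ sqrt(E)."""
    v = np.sqrt(energies)
    return np.trapz(sigma(energies) * v * f, energies)

# Toy Gamow-type cross-section (shape only; constants made up).
sigma = lambda E: np.exp(-44.0 / np.sqrt(E)) / E

E = np.linspace(0.1, 200.0, 4000)   # keV
T = 5.0                             # keV temperature

maxwellian = np.sqrt(E) * np.exp(-E / T)
maxwellian /= np.trapz(maxwellian, E)

depleted = maxwellian * np.minimum(1.0, (50.0 / E) ** 2)  # knocked-down tail
depleted /= np.trapz(depleted, E)

print("tail-depleted / Maxwellian reactivity:",
      reactivity(E, depleted, sigma) / reactivity(E, maxwellian, sigma))
```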

  10. Comparing different CFD wind turbine modelling approaches with wind tunnel measurements

    International Nuclear Information System (INIS)

    Kalvig, Siri; Hjertager, Bjørn; Manger, Eirik

    2014-01-01

    The performance of a model wind turbine is simulated with three different CFD methods: actuator disk, actuator line and a fully resolved rotor. The simulations are compared with each other and with measurements from a wind tunnel experiment. The actuator disk is the least accurate and most cost-efficient, and the fully resolved rotor is the most accurate and least cost-efficient. The actuator line method is believed to lie in between the two ends of the scale. The fully resolved rotor produces superior wake velocity results compared to the actuator models. On average it also produces better results for the force predictions, although the actuator line method had a slightly better match for the design tip speed. The open source CFD tool box, OpenFOAM, was used for the actuator disk and actuator line calculations, whereas the market leading commercial CFD code, ANSYS/FLUENT, was used for the fully resolved rotor approach
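
    For orientation, the actuator-disk concept replaces the rotor with a momentum sink described by 1D momentum theory; a minimal sketch of that textbook relation (turbine dimensions invented; this is not the OpenFOAM setup used in the paper):

```python
import numpy as np

def actuator_disk(a, u_inf=10.0, rho=1.225, radius=0.45):
    """1D momentum theory behind the actuator-disk idea: the rotor is a
    permeable disk extracting momentum, parameterized by induction a."""
    area = np.pi * radius ** 2
    ct = 4.0 * a * (1.0 - a)               # thrust coefficient
    cp = 4.0 * a * (1.0 - a) ** 2          # power coefficient
    thrust = 0.5 * rho * area * u_inf ** 2 * ct
    power = 0.5 * rho * area * u_inf ** 3 * cp
    u_wake = u_inf * (1.0 - 2.0 * a)       # far-wake velocity deficit
    return thrust, power, u_wake

# Betz-optimal induction a = 1/3 gives Cp = 16/27 ~ 0.593.
print(actuator_disk(1.0 / 3.0))
```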

  11. Feedforward Object-Vision Models Only Tolerate Small Image Variations Compared to Human

    Directory of Open Access Journals (Sweden)

    Masoud Ghodrati

    2014-07-01

    Full Text Available Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modelling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well when images with more complex variations of the same object are applied to them. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only under low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e. briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modelling. We show that this approach is not of significant help in solving the computational crux of object recognition (that is, invariant object recognition) when the identity-preserving image variations become more complex.

  12. Using the Landlab toolkit to evaluate and compare alternative geomorphic and hydrologic model formulations

    Science.gov (United States)

    Tucker, G. E.; Adams, J. M.; Doty, S. G.; Gasparini, N. M.; Hill, M. C.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.

    2016-12-01

    Developing a better understanding of catchment hydrology and geomorphology ideally involves quantitative hypothesis testing. Often one seeks to identify the simplest mathematical and/or computational model that accounts for the essential dynamics in the system of interest. Development of alternative hypotheses involves testing and comparing alternative formulations, but the process of comparison and evaluation is made challenging by the rigid nature of many computational models, which are often built around a single assumed set of equations. Here we review a software framework for two-dimensional computational modeling that facilitates the creation, testing, and comparison of surface-dynamics models. Landlab is essentially a Python-language software library. Its gridding module allows for easy generation of a structured (raster, hex) or unstructured (Voronoi-Delaunay) mesh, with the capability to attach data arrays to particular types of element. Landlab includes functions that implement common numerical operations, such as gradient calculation and summation of fluxes within grid cells. Landlab also includes a collection of process components, which are encapsulated pieces of software that implement a numerical calculation of a particular process. Examples include downslope flow routing over topography, shallow-water hydrodynamics, stream erosion, and sediment transport on hillslopes. Individual components share a common grid and data arrays, and they can be coupled through the use of a simple Python script. We illustrate Landlab's capabilities with a case study of Holocene landscape development in the northeastern US, in which we seek to identify a collection of model components that can account for the formation of a series of incised canyons that have developed since the Laurentide ice sheet last retreated. We compare sets of model ingredients related to (1) catchment hydrologic response, (2) hillslope evolution, and (3) stream channel and gully incision.
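
    As an illustration of the component coupling described above, a minimal driver script that links a flow router, a stream-power incision component and a hillslope diffuser on a raster grid (this assumes the Landlab 2.x API for the named components; the grid size, rates and time step are invented):

```python
import numpy as np
from landlab import RasterModelGrid
from landlab.components import FlowAccumulator, FastscapeEroder, LinearDiffuser

grid = RasterModelGrid((40, 60), xy_spacing=100.0)        # 100 m cells
z = grid.add_zeros("topographic__elevation", at="node")
z += np.random.default_rng(1).random(z.size)              # initial roughness

flow = FlowAccumulator(grid)                  # route flow over topography
incise = FastscapeEroder(grid, K_sp=1e-5)     # stream-power channel incision
diffuse = LinearDiffuser(grid, linear_diffusivity=0.01)   # hillslope creep

dt, uplift = 1000.0, 0.001                    # yr, m/yr
for _ in range(500):
    z[grid.core_nodes] += uplift * dt
    flow.run_one_step()
    incise.run_one_step(dt)
    diffuse.run_one_step(dt)
```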

  13. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment
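
    The conflict between metrics that motivates the bi-objective approach is easy to state in code: each parameter set gets a vector of efficiencies rather than one aggregated number. A sketch of the two metrics and the weighted sum the authors argue against (the high-flow/low-flow split, weights and series are invented; the study also pairs streamflow objectives with SWE):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def objectives(obs, sim, threshold):
    """Two conflicting metrics: NSE over high flows and over low flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    hi = obs >= threshold
    return nse(obs[hi], sim[hi]), nse(obs[~hi], sim[~hi])

def weighted_sum(obs, sim, threshold, w=0.5):
    """Aggregated objective: a single point on the high/low-flow tradeoff."""
    hi_nse, lo_nse = objectives(obs, sim, threshold)
    return w * hi_nse + (1.0 - w) * lo_nse

obs = np.array([1.2, 0.8, 5.0, 9.5, 0.6, 7.2])   # invented flows
sim = np.array([1.0, 1.1, 4.2, 10.4, 0.9, 6.5])
print(objectives(obs, sim, threshold=2.0), weighted_sum(obs, sim, 2.0))
```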

  14. Computations for the 1:5 model of the THTR pressure vessel compared with experimental results

    International Nuclear Information System (INIS)

    Stangenberg, F.

    1972-01-01

    In this report, experimental results measured in 1971 on the 1:5 model of the prestressed concrete pressure vessel of the THTR nuclear power station at Schmehausen are compared with the results of axisymmetric computations. Linear-elastic computations were performed, as well as approximate computations for overload pressures taking into consideration the influences of the load history (prestressing, temperature, creep) and the effects of the steel components. (orig.) [de

  15. Comparative Analysis of Photogrammetric Methods for 3D Models for Museums

    DEFF Research Database (Denmark)

    Hafstað Ármannsdottir, Unnur Erla; Antón Castro, Francesc/François; Mioc, Darka

    2014-01-01

    The goal of this paper is to make a comparative analysis and selection of methodologies for making 3D models of historical items, buildings and cultural heritage and how to preserve information such as temporary exhibitions and archaeological findings. Two of the methodologies analyzed correspond...... matrix has been used. Prototypes are made partly or fully and evaluated from the point of view of preservation of information by a museum....

  16. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    OpenAIRE

    Kirti AREKAR; Rinku JAIN

    2017-01-01

    Stock market volatility depends on three major features: complete volatility, volatility fluctuations, and volatility attention, which are calculated by statistical techniques. A comparative analysis of market volatility is carried out for two major indices, i.e. the banking and IT sectors of the Bombay Stock Exchange (BSE), using the average decline model. The average degeneration process in volatility has been used after very high and low stock returns. The results of this study explain significant decline in...

  17. Thermodynamic Molecular Switch in Sequence-Specific Hydrophobic Interaction: Two Computational Models Compared

    Directory of Open Access Journals (Sweden)

    Paul Chun

    2003-01-01

    Full Text Available We have shown in our published work the existence of a thermodynamic switch in biological systems wherein a change of sign in ΔCp°(T) of reaction leads to a true negative minimum in the Gibbs free energy change of reaction and, hence, a maximum in the related Keq. We have examined 35 pair-wise, sequence-specific hydrophobic interactions over the temperature range of 273-333 K, based on data reported by Nemethy and Scheraga in 1962. A closer look at a single example, the pair-wise hydrophobic interaction of leucine-isoleucine, will demonstrate the significant differences when the data are analyzed using the Nemethy-Scheraga model or treated by the Planck-Benzinger methodology which we have developed. The change in inherent chemical bond energy at 0 K, ΔH°(T0), is 7.53 kcal mol⁻¹ compared with 2.4 kcal mol⁻¹, while ⟨Ts⟩ is 365 K as compared with 355 K, for the Nemethy-Scheraga and Planck-Benzinger models, respectively. At ⟨Tm⟩, the thermal agitation energy is about five times greater than ΔH°(T0) in the Planck-Benzinger model, that is, 465 K compared to 497 K in the Nemethy-Scheraga model. The results imply that the negative Gibbs free energy minimum at a well-defined ⟨Ts⟩, where TΔS° = 0 at about 355 K, has its origin in the sequence-specific hydrophobic interactions, which are highly dependent on details of molecular structure. The Nemethy-Scheraga model shows no evidence of the thermodynamic molecular switch that we have found to be a universal feature of biological interactions. The Planck-Benzinger method is the best known for evaluating the innate temperature-invariant enthalpy, ΔH°(T0), and provides for better understanding of the heat of reaction for biological molecules.
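
    The temperature dependence behind the switch is standard thermodynamics, restated here for the reader (these relations are textbook material, not text from the record):

```latex
\Delta H^{\circ}(T) = \Delta H^{\circ}(T_0) + \int_{T_0}^{T} \Delta C_p^{\circ}(T')\,dT'
\qquad
\Delta S^{\circ}(T) = \Delta S^{\circ}(T_0) + \int_{T_0}^{T} \frac{\Delta C_p^{\circ}(T')}{T'}\,dT'

\Delta G^{\circ}(T) = \Delta H^{\circ}(T) - T\,\Delta S^{\circ}(T)
\qquad
\frac{d\,\Delta G^{\circ}}{dT} = -\Delta S^{\circ}(T)
```

    Since dΔG°/dT = -ΔS°(T), ΔG°(T) has a true extremum exactly where TΔS° = 0 (the ⟨Ts⟩ above); a sign change in ΔCp°(T) is what lets ΔS° pass through zero there, and the minimum is negative, maximizing Keq, when ΔH°(⟨Ts⟩) < 0.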

  18. Water Management in the Camargue Biosphere Reserve: Insights from Comparative Mental Models Analysis

    Directory of Open Access Journals (Sweden)

    Raphael Mathevet

    2011-03-01

    Full Text Available Mental models are the cognitive representations of the world that frame how people interact with the world. Learning implies changing these mental models. The successful management of complex social-ecological systems requires the coordination of actions to achieve shared goals. The coordination of actions requires a level of shared understanding of the system or situation: a shared or common mental model. We first describe the elicitation and analysis of mental models of different stakeholder groups associated with water management in the Camargue Biosphere Reserve in the Rhône River delta on the French Mediterranean coast. We use cultural consensus analysis to explore the degree to which different groups shared mental models of the whole system, of stakeholders, of resources, of processes, and of interactions among these last three. The analysis of the elicited data from this group structure enabled us to tentatively explore the evidence for learning in the nonstatutory Water Board, comprising important stakeholders related to the management of the central Rhône delta. The results indicate that learning does occur and results in richer mental models that are more likely to be shared among group members. However, the results also show lower than expected levels of agreement with these consensual mental models. Based on this result, we argue that a careful process and facilitation design can greatly enhance the functioning of the participatory process in the Water Board. We conclude that this methodology holds promise for eliciting and comparing mental models. It enriches group-model building and participatory approaches with a broader view of social learning and knowledge-sharing issues.

  19. A Comparative Study of Two Decision Models: Frisch’s model and a simple Dutch planning model

    NARCIS (Netherlands)

    J. Tinbergen (Jan)

    1951-01-01

    textabstractThe significance of Frisch's notion of decision models is, in the first place, that they draw full attention upon "inverted problems" which economic policy puts before us. In these problems the data are no longer those in the traditional economic problems, but partly the political

  20. A Field Guide to Extra-Tropical Cyclones: Comparing Models to Observations

    Science.gov (United States)

    Bauer, M.

    2008-12-01

    Climate, it is said, is the accumulation of weather. And weather is not the concern of climate models. Justification for this latter sentiment has long hidden behind coarse model resolutions and blunt validation tools based on climatological maps and the like. The spatial-temporal resolutions of today's models and observations are converging onto meteorological scales, however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough, or at least lacks perverting biases, such that its accumulation does in fact result in a robust climate prediction. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from climate model output. These include the usual cyclone distribution statistics (maps, histograms), but also adaptive cyclone-centric composites. We have also created a complementary dataset, The MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6-hourly assessment of the areas under the influence of mid-latitude cyclones based on Reanalysis products. Using this we then extract complementary composites from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. A demonstration of the potential usefulness of these tools will be shown. dime.giss.nasa.gov/mcms/mcms.html

  1. Comparing GIS-based habitat models for applications in EIA and SEA

    International Nuclear Information System (INIS)

    Gontier, Mikael; Moertberg, Ulla; Balfors, Berit

    2010-01-01

    Land use changes, urbanisation and infrastructure developments in particular, cause fragmentation of natural habitats and threaten biodiversity. Tools and measures must be adapted to assess and remedy the potential effects on biodiversity caused by human activities and developments. Within physical planning, environmental impact assessment (EIA) and strategic environmental assessment (SEA) play important roles in the prediction and assessment of biodiversity-related impacts from planned developments. However, adapted prediction tools to forecast and quantify potential impacts on biodiversity components are lacking. This study tested and compared four different GIS-based habitat models and assessed their relevance for applications in environmental assessment. The models were implemented in the Stockholm region in central Sweden and applied to data on the crested tit (Parus cristatus), a sedentary bird species of coniferous forest. All four models performed well and allowed the distribution of suitable habitats for the crested tit in the Stockholm region to be predicted. The models were also used to predict and quantify habitat loss for two regional development scenarios. The study highlighted the importance of model selection in impact prediction. Criteria that are relevant for the choice of model for predicting impacts on biodiversity were identified and discussed. Finally, the importance of environmental assessment for the preservation of biodiversity within the general frame of biodiversity conservation is emphasised.

  2. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction

    International Nuclear Information System (INIS)

    Wennberg, Berit M.; Baumann, Pia; Gagliardi, Giovanna

    2011-01-01

    Background. In SBRT of lung tumours no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this will improve knowledge of this relationship. Material and methods. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters used were: α/β = 3 Gy, D0 = 1.0 Gy, n = 10, α = 0.206 Gy⁻¹ and dT = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This might give an insight into the question of whether 'high doses to small volumes' or 'low doses to large volumes' are most important for lung toxicity. Results and Discussion. NTCP analysis with the LKB model using parameters m = 0.4, D50 = 30 Gy gave a volume-dependence parameter of n = 0.87 with LQ correction and n = 0.71 with USC correction. Using parameters m = 0.3, D50 = 20 Gy gave n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when using the USC model. Comparing NTCP modelling of SBRT data and data from breast cancer, lung cancer and whole-lung irradiation implies that the response of the lung is treatment specific. More data are however needed in order to have more reliable modelling.
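
    Both fractionation corrections feed the same LKB machinery. A minimal sketch of that chain using the LQ correction and a toy three-bin DVH (bin doses, volumes and fraction number are invented; the USC correction, not shown, replaces the LQ curve above its transition dose dT):

```python
import numpy as np
from math import erf, sqrt

def eqd2(dose_per_fx, n_fx, ab=3.0):
    """LQ fractionation correction to the 2-Gy-equivalent dose."""
    total = dose_per_fx * n_fx
    return total * (dose_per_fx + ab) / (2.0 + ab)

def lkb_ntcp(bin_doses, bin_volumes, n=0.87, m=0.4, td50=30.0):
    """Lyman-Kutcher-Burman NTCP from a fractionation-corrected DVH."""
    v = np.asarray(bin_volumes, float)
    v = v / v.sum()                                  # fractional volumes
    geud = np.sum(v * np.asarray(bin_doses) ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))          # standard normal CDF

# Toy 3-bin lung DVH delivered in 3 fractions, corrected with a/b = 3 Gy.
phys = np.array([45.0, 20.0, 5.0])                   # total dose per bin (Gy)
corr = np.array([eqd2(d / 3.0, 3) for d in phys])
print(lkb_ntcp(corr, [0.05, 0.15, 0.80]))
```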

  3. Comparing the reported burn conditions for different severity burns in porcine models: a systematic review.

    Science.gov (United States)

    Andrews, Christine J; Cuttle, Leila

    2017-12-01

    There are many porcine burn models that create burns using different materials (e.g. metal, water) and different burn conditions (e.g. temperature and duration of exposure). This review aims to determine whether a pooled analysis of these studies can provide insight into the burn materials and conditions required to create burns of a specific severity. A systematic review of 42 porcine burn studies describing the depth of burn injury with histological evaluation is presented. Inclusion criteria were: thermal burns, burns created with a novel method or material, histological evaluation within 7 days post-burn, and a specified method for depth of injury assessment. Conditions causing deep dermal scald burns compared to contact burns of equivalent severity were disparate, with lower temperatures and shorter durations reported for scald burns (83°C for 14 seconds) compared to contact burns (111°C for 23 seconds). A valuable archive of the different mechanisms and materials used for porcine burn models is presented to aid the design and optimisation of future models. Significantly, this review demonstrates the effect of the mechanism of injury on burn severity, and caution is recommended when burn conditions established by porcine contact burn models are used by regulators to guide scald burn prevention strategies. © 2017 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  4. @TOME-2: a new pipeline for comparative modeling of protein–ligand complexes

    Science.gov (United States)

    Pons, Jean-Luc; Labesse, Gilles

    2009-01-01

    @TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small-ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands, which is performed using protein–protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein–ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/ PMID:19443448

  5. @TOME-2: a new pipeline for comparative modeling of protein-ligand complexes.

    Science.gov (United States)

    Pons, Jean-Luc; Labesse, Gilles

    2009-07-01

    @TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small-ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands, which is performed using protein-protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein-ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/

  6. State regulation of nuclear sector: comparative study of Argentina and Brazil models

    International Nuclear Information System (INIS)

    Monteiro Filho, Joselio Silveira

    2004-08-01

    This research presents a comparative assessment of the regulation models of the nuclear sector in Argentina - under the responsibility of the Autoridad Regulatoria Nuclear (ARN) - and Brazil - under the responsibility of the Comissao Nacional de Energia Nuclear (CNEN) - trying to identify which model is more adequate for the safe use of nuclear energy. Due to the methodology adopted, the theoretical framework resulted in criteria of analysis that correspond to the characteristics of the Brazilian regulatory agencies created for other economic sectors during the State reform starting in the mid-nineties. Later, these criteria of analysis were used as comparison patterns between the regulation models of the nuclear sectors of Argentina and Brazil. The comparative assessment showed that the regulatory structure of the nuclear sector in Argentina seems to be more adequate, concerning the safe use of nuclear energy, than the model adopted in Brazil by CNEN, because it incorporates the criteria of functional, institutional and financial independence, competence definitions, technical excellence and transparency, indispensable to the development of its functions with autonomy, ethics, exemption and agility. (author)

  7. Is tuberculosis treatment really free in China? A study comparing two areas with different management models.

    Directory of Open Access Journals (Sweden)

    Sangsang Qiu

    Full Text Available China has implemented a free-service policy for tuberculosis. However, patients still have to pay a substantial proportion of their annual income for treatment of this disease. This study describes the economic burden on patients with tuberculosis; identifies related factors by comparing two areas with different management models; and provides policy recommendations for tuberculosis control reform in China. There are three tuberculosis management models in China: the tuberculosis dispensary model, the specialist model and the integrated model. We selected Zhangjiagang (ZJG) and Taixing (TX) as the study sites, which correspond to areas implementing the integrated model and the dispensary model, respectively. Patients diagnosed and treated for tuberculosis since January 2010 were recruited as study subjects. A total of 590 patients (316 from ZJG and 274 from TX) were interviewed, with a response rate of 81%. The economic burden attributed to tuberculosis, including direct costs and indirect costs, was estimated and compared between the two study sites. The Mann-Whitney U test was used to compare the cost differences between the two groups. Potential factors related to the total out-of-pocket costs were analyzed based on a step-by-step multivariate linear regression model after logarithmic transformation of the costs. The average (median; interquartile range) total cost was 18,793.33 (9,965; 3,200-24,400) CNY for patients in ZJG, which was significantly higher than for patients in TX (mean: 6,598.33; median: 2,263; interquartile range: 983-6,688) (Z = 10.42, P < 0.001). After excluding expenses covered by health insurance, the average out-of-pocket costs were 14,304.4 CNY in ZJG and 5,639.2 CNY in TX. Based on the multivariable linear regression analysis, factors related to the total out-of-pocket costs were study site, age, number of clinical visits, residence, diagnosis delay, hospitalization, intake of liver protective drugs and use of the second

  8. Comparative study: TQ and Lean Production ownership models in health services.

    Science.gov (United States)

    Eiro, Natalia Yuri; Torres-Junior, Alvair Silveira

    2015-01-01

    Objective: to compare the application of Total Quality (TQ) models used in the processes of a health service, cases of lean healthcare, and literature from another institution that has also applied this model. Method: this is qualitative research conducted through a descriptive case study. Results: through critical analysis of the institutions studied, it was possible to compare the traditional quality approach verified in one case with the theoretical and practical lean production approach used in the other case; the specifications are described below. Conclusion: the research identified that the lean model was better suited to people who work systemically and generate flow. It also pointed towards some potential challenges in the introduction and implementation of lean methods in health.

  9. Comparative analysis of turbulence models for flow simulation around a vertical axis wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Roy, S.; Saha, U.K. [Indian Institute of Technology Guwahati, Dept. of Mechanical Engineering, Guwahati (India)

    2012-07-01

    An unsteady computational investigation of the static torque characteristics of a drag-based vertical axis wind turbine (VAWT) has been carried out using the finite-volume-based computational fluid dynamics (CFD) software package Fluent 6.3. A comparative study among the various turbulence models was conducted in order to predict the flow over the turbine at static conditions, and the results are validated against the available experimental results. CFD simulations were carried out at different turbine angular positions between 0 deg. and 360 deg. in steps of 15 deg. Results have shown that due to high static pressure on the returning blade of the turbine, the net static torque is negative at angular positions of 105 deg.-150 deg. The realizable k-ε turbulence model has shown a better simulation capability than the other turbulence models for the analysis of the static torque characteristics of the drag-based VAWT. (Author)

  10. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
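
    The conversion-model idea reduces to fitting a line between paired ΣPCB values from the two analyses and then applying it to unpaired samples. A sketch with invented concentrations, where x and y stand for paired Σ119 and Σ209 measurements of the same samples (the last line estimates the kind of capture fraction reported above):

```python
import numpy as np

# Hypothetical paired samples analyzed both ways (ng/g, values invented).
x = np.array([12.0, 30.0, 55.0, 80.0, 150.0, 240.0])   # Σ119 congener sum
y = np.array([13.1, 32.5, 58.9, 86.0, 161.0, 259.0])   # Σ209 congener sum

slope, intercept = np.polyfit(x, y, 1)     # linear conversion model
predicted = slope * x + intercept

# Mean relative percent difference between model and measurement.
rpd = 100.0 * np.mean(np.abs(predicted - y) / ((predicted + y) / 2.0))
print(f"Σ209 ≈ {slope:.3f}·Σ119 + {intercept:.2f}  (mean RPD {rpd:.1f}%)")
print(f"Σ119 captures ~{100.0 * np.mean(x / y):.0f}% of Σ209")
```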

  11. The Evolution of the Solar Magnetic Field: A Comparative Analysis of Two Models

    Science.gov (United States)

    McMichael, K. D.; Karak, B. B.; Upton, L.; Miesch, M. S.; Vierkens, O.

    2017-12-01

    Understanding the complexity of the solar magnetic cycle is a task that has plagued scientists for decades. However, with the help of computer simulations, we have begun to gain more insight into possible solutions to the plethora of questions inside the Sun. STABLE (Surface Transport and Babcock Leighton) is a newly developed 3D dynamo model that can reproduce features of the solar cycle. In this model, the tilted bipolar sunspots are formed on the surface (based on the toroidal field at the bottom of the convection zone) and then decay and disperse, producing the poloidal field. Since STABLE is a 3D model, it is able to solve the full induction equation in the entirety of the solar convection zone as well as incorporate many free parameters (such as spot depth and turbulent diffusion) which are difficult to observe. In an attempt to constrain some of these free parameters, we compare STABLE to a surface flux transport model called AFT (Advective Flux Transport) which solves the radial component of the magnetic field on the solar surface. AFT is a state-of-the-art surface flux transport model that has a proven record of being able to reproduce solar observations with great accuracy. In this project, we implement synthetic bipolar sunspots into both models, using identical surface parameters, and run the models for comparison. We demonstrate that the 3D structure of the sunspots in the interior and the vertical diffusion of the sunspot magnetic field play an important role in establishing the surface magnetic field in STABLE. We found that when a sufficient amount of downward magnetic pumping is included in STABLE, the surface magnetic field from this model becomes insensitive to the internal structure of the sunspot and more consistent with that of AFT.

  12. Representing macropore flow at the catchment scale: a comparative modeling study

    Science.gov (United States)

    Liu, D.; Li, H. Y.; Tian, F.; Leung, L. R.

    2017-12-01

    Macropore flow is an important hydrological process that generally enhances the soil infiltration capacity and the velocity of subsurface water. Until now, macropore flow has mostly been simulated with high-resolution models. One possible drawback of this modeling approach is the difficulty of effectively representing the overall typology and connectivity of the macropore networks. We hypothesize that modeling macropore flow directly at the catchment scale may be complementary to the existing modeling strategy and offer some new insights. The Tsinghua Representative Elementary Watershed model (THREW) is a semi-distributed hydrology model, where the fundamental building blocks are representative elementary watersheds (REWs) linked by the river channel network. In THREW, all the hydrological processes are described with constitutive relationships established directly at the REW level, i.e., the catchment scale. In this study, the constitutive relationship of macropore flow drainage is established as part of THREW. The enhanced THREW model is then applied to two catchments with deep soils but distinct climates: the humid Asu catchment in the Amazon River basin, and the arid Wei catchment in the Yellow River basin. The Asu catchment has an area of 12.43 km2 with mean annual precipitation of 2442 mm. The larger Wei catchment has an area of 24800 km2 but with mean annual precipitation of only 512 mm. The rainfall-runoff processes are simulated at an hourly time step from 2002 to 2005 in the Asu catchment and from 2001 to 2012 in the Wei catchment. The role of macropore flow in catchment hydrology will be analyzed comparatively over the Asu and Wei catchments against the observed streamflow, evapotranspiration and other auxiliary data.

  13. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km2) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is preferred over SINMAP for evaluating slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
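
    All three models are built around a limit-equilibrium slope stability calculation. The generic infinite-slope form below (a textbook relation, not taken from the paper, with invented parameter values in the range quoted above) shows how a rising water table drives the factor of safety below 1:

```python
import numpy as np

def infinite_slope_fs(c, phi_deg, slope_deg, z, m,
                      gamma=18.5e3, gamma_w=9.81e3):
    """Factor of safety for an infinite slope of thickness z (m) with a
    water table at relative height m (0-1) above the failure plane.
    c in Pa, unit weights in N/m^3."""
    beta, phi = np.radians(slope_deg), np.radians(phi_deg)
    normal = (gamma - m * gamma_w) * z * np.cos(beta) ** 2
    resisting = c + normal * np.tan(phi)
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

# Low-cohesion colluvium (c < 2 kPa) on a 30-degree slope, 1.5 m thick:
for m in (0.0, 0.5, 1.0):   # water table rising toward the surface
    print(m, round(infinite_slope_fs(1.5e3, 33.0, 30.0, 1.5, m), 2))
```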

  14. Comparing Free-Free and Shaker Table Model Correlation Methods Using Jim Beam

    Science.gov (United States)

    Ristow, James; Smith, Kenneth Wayne, Jr.; Johnson, Nathaniel; Kinney, Jackson

    2018-01-01

    Finite element model correlation as part of a spacecraft program has always been a challenge. For any NASA mission, the coupled system response of the spacecraft and launch vehicle can be determined analytically through a Coupled Loads Analysis (CLA), as it is not possible to test the spacecraft and launch vehicle coupled system before launch. The value of the CLA is highly dependent on the accuracy of the frequencies and mode shapes extracted from the spacecraft model. NASA standards require the spacecraft model used in the final Verification Loads Cycle to be correlated by either a modal test or by comparison of the model with Frequency Response Functions (FRFs) obtained during the environmental qualification test. Due to budgetary and time constraints, most programs opt to correlate the spacecraft dynamic model during the environmental qualification test, conducted on a large shaker table. For any model correlation effort, the key has always been finding a proper definition of the boundary conditions. This paper is a correlation case study to investigate the difference in responses of a simple structure using a free-free boundary, a fixed boundary on the shaker table, and a base-drive vibration test, all using identical instrumentation. The NAVCON Jim Beam test structure, featured in the IMAC round robin modal test of 2009, was selected as a simple, well recognized and well characterized structure to conduct this investigation. First, a free-free impact modal test of the Jim Beam was done as an experimental control. Second, the Jim Beam was mounted to a large 20,000 lbf shaker, and an impact modal test in this fixed configuration was conducted. Lastly, a vibration test of the Jim Beam was conducted on the shaker table. The free-free impact test, the fixed impact test, and the base-drive test were used to assess the effect of the shaker modes, evaluate the validity of fixed-base modeling assumptions, and compare final model correlation results between these
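
    Although the abstract does not name it, the standard quantitative measure for this kind of mode-shape comparison is the Modal Assurance Criterion (MAC); a sketch with synthetic mode shapes (sizes and noise level invented):

```python
import numpy as np

def mac(phi_test, phi_fem):
    """Modal Assurance Criterion between two real mode-shape matrices
    (rows = measurement DOFs, columns = modes)."""
    num = np.abs(phi_test.T @ phi_fem) ** 2
    den = np.outer(np.sum(phi_test ** 2, axis=0), np.sum(phi_fem ** 2, axis=0))
    return num / den

# Hypothetical 3 modes at 8 DOFs from test vs. model; values near 1 on
# the diagonal indicate well-correlated mode shapes.
rng = np.random.default_rng(2)
phi_fem = rng.standard_normal((8, 3))
phi_test = phi_fem + 0.05 * rng.standard_normal((8, 3))
print(np.round(mac(phi_test, phi_fem), 3))
```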

  15. Bayesian meta-analysis models for microarray data: a comparative study

    Directory of Open Access Journals (Sweden)

    Song Joon J

    2007-03-01

    Full Text Available Abstract Background With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches for meta-analysis of microarrays include either combining gene expression measures across studies or combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods. Results Two Bayesian meta-analysis models for microarray data have recently been introduced. The first model combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Since the standardized expression integration model includes inter-study variability, it may improve accuracy of results versus the probability integration model. However, due to the small number of studies typical in microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model variability between studies, and thus its implementation is more straightforward. We found in simulations of two and five studies that combining probabilities outperformed combining standardized gene expression measures for three comparison values: the percent of true discovered genes in meta-analysis versus individual studies; the percent of true genes omitted in meta-analysis versus separate studies, and the number of true discovered genes for fixed levels of Bayesian false discovery. We identified similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was produced from the same microarray platform with only two conditions: a treatment and control, and that the data sets

  16. Comparative Study of Injury Models for Studying Muscle Regeneration in Mice.

    Directory of Open Access Journals (Sweden)

    David Hardy

    Full Text Available A longstanding goal in regenerative medicine is to reconstitute functional tissues or organs after injury or disease. Attention has focused on the identification and relative contribution of tissue-specific stem cells to the regeneration process. Relatively little is known about how the physiological process is regulated by other tissue constituents. Numerous injury models are used to investigate tissue regeneration; however, these models are often poorly understood. Specifically, for skeletal muscle regeneration several models are reported in the literature, yet their relative impact on muscle physiology and on the distinct cell types has not been extensively characterised. We have used transgenic Tg:Pax7nGFP and Flk1GFP/+ mouse models to respectively count the number of muscle stem (satellite) cells (SCs) and the number/shape of vessels by confocal microscopy. We performed histological analyses and immunostainings to assess the differences in the key regeneration steps. Infiltration of immune cells and production of chemokines and cytokines were assessed in vivo by Luminex®. We compared the 4 most commonly used injury models, i.e. freeze injury (FI), barium chloride (BaCl2), notexin (NTX) and cardiotoxin (CTX). The FI was the most damaging. In this model, up to 96% of the SCs are destroyed with their surrounding environment (basal lamina and vasculature), leaving a "dead zone" devoid of viable cells. The regeneration process itself is fulfilled in all 4 models with virtually no fibrosis 28 days post-injury, except in the FI model. Inflammatory cells return to basal levels in the CTX and BaCl2 models but are still significantly elevated 1 month post-injury in the FI and NTX models. Interestingly, the number of SCs returned to normal 1 month post-injury only in the FI model, with SCs still cycling up to 3 months after the induction of the injury in the other models. Our studies show that the nature of the injury model should be chosen carefully depending on the experimental design and desired

  17. Comparing the Goodness of Different Statistical Criteria for Evaluating the Soil Water Infiltration Models

    Directory of Open Access Journals (Sweden)

    S. Mirzaee

    2016-02-01

    Full Text Available Introduction: The infiltration process is one of the most important components of the hydrologic cycle. Quantifying the infiltration of water into soil is of great importance in watershed management. Prediction of flooding, erosion and pollutant transport all depends on the rate of runoff, which is directly affected by the rate of infiltration. Quantification of infiltration of water into soil is also necessary to determine the availability of water for crop growth and to estimate the amount of additional water needed for irrigation. Thus, an accurate model is required to estimate infiltration of water into soil. The ability of physical and empirical models to simulate soil processes is commonly measured through comparisons of simulated and observed values. For these reasons, a large variety of indices have been proposed and used over the years to compare models of infiltration of water into soil. Among the proposed indices, some are absolute criteria such as the widely used root mean square error (RMSE), while others are relative (i.e. normalized) criteria, such as the Nash and Sutcliffe (1970) efficiency criterion (NSE). Selecting and using appropriate statistical criteria to evaluate and interpret the results of infiltration models is essential, because each criterion focuses on specific types of errors. Also, descriptions of the various goodness-of-fit indices or indicators, including their advantages and shortcomings, and rigorous discussion of the suitability of each index are very important. The objective of this study is to compare the goodness of different statistical criteria for evaluating models of infiltration of water into soil. Comparison techniques were considered to define the best models: coefficient of determination (R2), root mean square error (RMSE), efficiency criterion (NSE) and its modified forms (such as NSEj, NSEsqrt, NSEln and NSEi). Comparatively little work has been carried out on the meaning and
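
    For concreteness, the absolute and relative criteria named above take only a few lines each; the transform argument below yields the modified NSE variants that reweight small values (the cumulative-infiltration numbers are invented):

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error (absolute criterion)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def nse(obs, sim, transform=lambda x: x):
    """Nash-Sutcliffe efficiency; pass np.log or np.sqrt as `transform`
    to obtain the NSEln / NSEsqrt style variants."""
    o = transform(np.asarray(obs, float))
    s = transform(np.asarray(sim, float))
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

infil_obs = np.array([2.0, 5.0, 9.0, 14.0, 20.0])   # cumulative mm
infil_sim = np.array([2.4, 4.6, 9.8, 13.1, 21.0])
print(rmse(infil_obs, infil_sim),
      nse(infil_obs, infil_sim),
      nse(infil_obs, infil_sim, np.log))
```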

  18. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351

  19. Comparing risk of failure models in water supply networks using ROC curves

    International Nuclear Information System (INIS)

    Debon, A.; Carrion, A.; Cabrera, E.; Solano, H.

    2010-01-01

    The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures could have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes. The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study has identified which network characteristics affect the risk of failure and which models better fit data to predict service breakdown. The comparison using the receiver operating characteristics (ROC) graph leads us to the conclusion that the best model is a generalized linear model. Also, we propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.
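
    The winning model class (a generalized linear model) and the ROC comparison can both be sketched in a few lines with scikit-learn; the pipe covariates and failure mechanism below are synthetic stand-ins for the company data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)

# Hypothetical pipe records: age (yr), diameter (mm), previous failures.
X = np.column_stack([rng.uniform(0, 60, 500),
                     rng.choice([80, 100, 150, 300], 500),
                     rng.poisson(0.3, 500)])
# Synthetic ground truth: older, smaller, failure-prone pipes break more.
logit = 0.05 * X[:, 0] - 0.01 * X[:, 1] + 0.8 * X[:, 2] - 1.5
y = rng.random(500) < 1.0 / (1.0 + np.exp(-logit))

glm = LogisticRegression().fit(X, y)       # a GLM, the study's best model
scores = glm.predict_proba(X)[:, 1]
fpr, tpr, _ = roc_curve(y, scores)         # the points of the ROC graph
print("AUC:", round(roc_auc_score(y, scores), 3))
```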

  20. Comparing personality disorder models: cross-method assessment of the FFM and DSM-IV-TR.

    Science.gov (United States)

    Samuel, Douglas B; Widiger, Thomas W

    2010-12-01

    The current edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR; American Psychiatric Association, 2000) defines personality disorders as categorical entities that are distinct from each other and from normal personality traits. However, many scientists now believe that personality disorders are best conceptualized using a dimensional model of traits that span normal and abnormal personality, such as the Five-Factor Model (FFM). However, if the FFM or any dimensional model is to be considered as a credible alternative to the current model, it must first demonstrate an increment in the validity of the assessment offered within a clinical setting. Thus, the current study extended previous research by comparing the convergent and discriminant validity of the current DSM-IV-TR model to the FFM across four assessment methodologies. Eighty-eight individuals receiving ongoing psychotherapy were assessed for the FFM and the DSM-IV-TR personality disorders using self-report, informant report, structured interview, and therapist ratings. The results indicated that the FFM had an appreciable advantage over the DSM-IV-TR in terms of discriminant validity and, at the domain level, convergent validity. Implications of the findings and directions for future research are discussed.

  1. Comparing risk of failure models in water supply networks using ROC curves

    Energy Technology Data Exchange (ETDEWEB)

    Debon, A., E-mail: andeau@eio.upv.e [Centro de Gestion de la Calidad y del Cambio, Dpt. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Carrion, A. [Centro de Gestion de la Calidad y del Cambio, Dpt. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Cabrera, E. [Dpto. De Ingenieria Hidraulica Y Medio Ambiente, Instituto Tecnologico del Agua, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Solano, H. [Universidad Diego Portales, Santiago (Chile)

    2010-01-15

    The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures could have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes. The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study has identified which network characteristics affect the risk of failure and which models better fit data to predict service breakdown. The comparison using the receiver operating characteristics (ROC) graph leads us to the conclusion that the best model is a generalized linear model. Also, we propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.

  2. Comparing two non-equilibrium approaches to modelling of a free-burning arc

    International Nuclear Information System (INIS)

    Baeva, M; Uhrlandt, D; Benilov, M S; Cunha, M D

    2013-01-01

    Two models of high-pressure arc discharges are compared with each other and with experimental data for an atmospheric-pressure free-burning arc in argon for arc currents of 20–200 A. The models account for space-charge effects and thermal and ionization non-equilibrium in somewhat different ways. One model considers space-charge effects, thermal and ionization non-equilibrium in the near-cathode region and thermal non-equilibrium in the bulk plasma. The other model considers thermal and ionization non-equilibrium in the entire arc plasma and space-charge effects in the near-cathode region. Both models are capable of predicting the arc voltage in fair agreement with experimental data. Differences are observed in the arc attachment to the cathode, which do not strongly affect the near-cathode voltage drop and the total arc voltage for arc currents exceeding 75 A. For lower arc currents the difference is significant but the arc column structure is quite similar and the predicted bulk plasma characteristics are relatively close to each other. (paper)

  3. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.

  4. Comparative Study on a Solving Model and Algorithm for a Flush Air Data Sensing System

    Directory of Open Access Journals (Sweden)

    Yanbin Liu

    2014-05-01

    Full Text Available With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering with large angles of attack and high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy the stricter control demands. In this paper, comparative studies on the solving model and algorithm for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms for FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure and static pressure. Afterwards, the evaluation criteria for the resulting models and algorithms are discussed with respect to the real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time features. The advantages of these models and algorithms under different flight conditions are also analyzed, and some suggestions for their engineering applications are proposed to help future research.

  5. In vitro radiosensitivity of six human cell lines. A comparative study with different statistical models

    International Nuclear Information System (INIS)

    Fertil, B.; Deschavanne, P.J.; Lachet, B.; Malaise, E.P.

    1980-01-01

    The intrinsic radiosensitivity of human cell lines (five tumor and one nontransformed fibroblastic) was studied in vitro. The survival curves were fitted by the single-hit multitarget, the two-hit multitarget, the single-hit multitarget with initial slope, and the quadratic models. The accuracy of the experimental results permitted evaluation of the various fittings. Both a statistical test (comparison of variances left unexplained by the four models) and a biological consideration (check for independence of the fitted parameters vis-a-vis the portion of the survival curve in question) were carried out. The quadratic model came out best with each of them. It described the low-dose effects satisfactorily, revealing a single-hit lethal component. This finding and the fact that the six survival curves displayed a continuous curvature ruled out the adoption of the target models as well as the widely used linear regression. As calculated by the quadratic model, the parameters of the six cell lines lead to the following conclusions: (a) the intrinsic radiosensitivity varies greatly among the different cell lines; (b) the interpretation of the fibroblast survival curve is not basically different from that of the tumor cell lines; and (c) the radiosensitivity of these human cell lines is comparable to that of other mammalian cell lines
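    The models compared in this record have standard closed forms: the quadratic (linear-quadratic) model S(D) = exp(-(alpha*D + beta*D^2)) and the single-hit multitarget model S(D) = 1 - (1 - exp(-D/D0))^n. A sketch of fitting both and comparing the variance left unexplained, using hypothetical dose-survival data:

```python
import numpy as np
from scipy.optimize import curve_fit

def lq(D, alpha, beta):
    # Quadratic (linear-quadratic) model: single-hit term alpha*D plus beta*D^2.
    return np.exp(-(alpha * D + beta * D ** 2))

def multitarget(D, D0, n):
    # Single-hit multitarget model: S = 1 - (1 - exp(-D/D0))^n.
    return 1 - (1 - np.exp(-D / D0)) ** n

dose = np.array([0.0, 1, 2, 4, 6, 8])                 # Gy, hypothetical
sf = np.array([1.0, 0.7, 0.45, 0.15, 0.04, 0.008])    # surviving fraction

for f, p0, name in [(lq, (0.2, 0.02), "quadratic"),
                    (multitarget, (1.5, 3.0), "multitarget")]:
    popt, _ = curve_fit(f, dose, sf, p0=p0, maxfev=10000)
    resid = sf - f(dose, *popt)
    print(name, popt.round(3), "unexplained variance:", round(np.sum(resid ** 2), 5))
```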

  6. A statin a day keeps the doctor away: comparative proverb assessment modelling study

    Science.gov (United States)

    Mizdrak, Anja; Scarborough, Peter

    2013-01-01

    Objective To model the effect on UK vascular mortality of all adults over 50 years old being prescribed either a statin or an apple a day. Design Comparative proverb assessment modelling study. Setting United Kingdom. Population Adults aged over 50 years. Intervention Either a statin a day for people not already taking a statin or an apple a day for everyone, assuming 70% compliance and no change in calorie consumption. The modelling used routinely available UK population datasets; parameters describing the relations between statins, apples, and health were derived from meta-analyses. Main outcome measure Mortality due to vascular disease. Results The estimated annual reduction in deaths from vascular disease of a statin a day, assuming 70% compliance and a reduction in vascular mortality of 12% (95% confidence interval 9% to 16%) per 1.0 mmol/L reduction in low density lipoprotein cholesterol, is 9400 (7000 to 12 500). The equivalent reduction from an apple a day, modelled using the PRIME model (assuming an apple weighs 100 g and that overall calorie consumption remains constant) is 8500 (95% credible interval 6200 to 10 800). Conclusions Both nutritional and pharmaceutical approaches to the prevention of vascular disease may have the potential to reduce UK mortality significantly. With similar reductions in mortality, a 150 year old health promotion message is able to match modern medicine and is likely to have fewer side effects.

  7. Comparative Analysis of Sectoral Innovation System and Diamond Model: The Case of Telecom Sector of Iran

    Directory of Open Access Journals (Sweden)

    Mohammad Hosein Rezazadeh Mehrizi

    2008-08-01

    Full Text Available Porter’s model of the competitive advantage of nations (known as the Diamond Model) has been widely used, and widely criticized, over the past two decades. On the other hand, non-mainstream economists have tried to propose new frameworks for industrial analysis; among them, the Sectoral Innovation System (SIS) is one of the most influential. After proposing an assessment framework, we use this framework to compare the SIS and Porter’s models and apply them to the case of the second mobile operator in Iran. Briefly, the SIS model sheds light on the innovation process and competence building and focuses on system failures that are of special importance in the context of developing countries, while the Diamond Model has the advantage of bringing the production process and the influential role of government into focus. However, each one has its own shortcomings for analyzing industrial development in developing countries, and both of them fail to pay enough attention to foreign relations and international linkages.

  8. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    Directory of Open Access Journals (Sweden)

    Christopher W. Walmsley

    2013-11-01

    Full Text Available Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading) for each feeding type. Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different

  9. Comparative analysis of detection methods for congenital cytomegalovirus infection in a Guinea pig model.

    Science.gov (United States)

    Park, Albert H; Mann, David; Error, Marc E; Miller, Matthew; Firpo, Matthew A; Wang, Yong; Alder, Stephen C; Schleiss, Mark R

    2013-01-01

    To assess the validity of the guinea pig as a model for congenital cytomegalovirus (CMV) infection by comparing the effectiveness of detecting the virus by real-time polymerase chain reaction (PCR) in blood, urine, and saliva. Case-control study. Academic research. Eleven pregnant Hartley guinea pigs. Blood, urine, and saliva samples were collected from guinea pig pups delivered from pregnant dams inoculated with guinea pig CMV. These samples were then evaluated for the presence of guinea pig CMV by real-time PCR assuming 100% transmission. Thirty-one pups delivered from 9 inoculated pregnant dams and 8 uninfected control pups underwent testing for guinea pig CMV and for auditory brainstem response hearing loss. Repeated-measures analysis of variance demonstrated no statistically significantly lower weight for the infected pups compared with the noninfected control pups. Six infected pups demonstrated auditory brainstem response hearing loss. The sensitivity and specificity of the real-time PCR assay on saliva samples were 74.2% and 100.0%, respectively. The sensitivity of the real-time PCR on blood and urine samples was significantly lower than that on saliva samples. Real-time PCR assays of blood, urine, and saliva revealed that saliva samples show high sensitivity and specificity for detecting congenital CMV infection in guinea pigs. This finding is consistent with recent screening studies in human newborns. The guinea pig may be a good animal model in which to compare different diagnostic assays for congenital CMV infection.

  10. Comparative analysis of elements and models of implementation in local-level spatial plans in Serbia

    Directory of Open Access Journals (Sweden)

    Stefanović Nebojša

    2017-01-01

    Full Text Available Implementation of local-level spatial plans is of paramount importance to the development of the local community. This paper aims to demonstrate the importance of and offer further directions for research into the implementation of spatial plans by presenting the results of a study on models of implementation. The paper describes the basic theoretical postulates of a model for implementing spatial plans. A comparative analysis of the application of elements and models of implementation of plans in practice was conducted based on the spatial plans for the local municipalities of Arilje, Lazarevac and Sremska Mitrovica. The analysis includes four models of implementation: the strategy and policy of spatial development; spatial protection; the implementation of planning solutions of a technical nature; and the implementation of rules of use, arrangement and construction of spaces. The main results of the analysis are presented and used to give recommendations for improving the elements and models of implementation. Final deliberations show that models of implementation are generally used in practice and combined in spatial plans. Based on the analysis of how models of implementation are applied in practice, a general conclusion concerning the complex character of the local level of planning is presented and elaborated. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR 36035: Spatial, Environmental, Energy and Social Aspects of Developing Settlements and Climate Change - Mutual Impacts and Grant no. III 47014: The Role and Implementation of the National Spatial Plan and Regional Development Documents in Renewal of Strategic Research, Thinking and Governance in Serbia]

  11. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and a statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.
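    For readers unfamiliar with FDCs, the sketch below constructs an empirical flow duration curve from a synthetic streamflow record using Weibull plotting positions; all data are invented.

```python
import numpy as np

def flow_duration_curve(q):
    """Empirical FDC: flows sorted descending versus exceedance probability.

    Uses the Weibull plotting position p = rank / (n + 1).
    """
    q = np.sort(np.asarray(q, float))[::-1]
    p = np.arange(1, len(q) + 1) / (len(q) + 1.0)
    return p, q

# Hypothetical daily streamflows (m^3/s) with a right-skewed distribution.
rng = np.random.default_rng(1)
flows = rng.lognormal(mean=1.0, sigma=0.8, size=365)
p, q = flow_duration_curve(flows)
print("Q50 ~", round(np.interp(0.5, p, q), 2), "m^3/s")  # flow exceeded 50% of the time
```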

  12. Comparing stream-specific to generalized temperature models to guide salmonid management in a changing climate

    Science.gov (United States)

    Andrew K. Carlson,; William W. Taylor,; Hartikainen, Kelsey M.; Dana M. Infante,; Beard, Douglas; Lynch, Abigail

    2017-01-01

    Global climate change is predicted to increase air and stream temperatures and alter thermal habitat suitability for growth and survival of coldwater fishes, including brook charr (Salvelinus fontinalis), brown trout (Salmo trutta), and rainbow trout (Oncorhynchus mykiss). In a changing climate, accurate stream temperature modeling is increasingly important for sustainable salmonid management throughout the world. However, finite resource availability (e.g. funding, personnel) drives a tradeoff between thermal model accuracy and efficiency (i.e. cost-effective applicability at management-relevant spatial extents). Using different projected climate change scenarios, we compared the accuracy and efficiency of stream-specific and generalized (i.e. region-specific) temperature models for coldwater salmonids within and outside the State of Michigan, USA, a region with long-term stream temperature data and productive coldwater fisheries. Projected stream temperature warming between 2016 and 2056 ranged from 0.1 to 3.8 °C in groundwater-dominated streams and 0.2–6.8 °C in surface-runoff dominated systems in the State of Michigan. Despite their generally lower accuracy in predicting exact stream temperatures, generalized models accurately projected salmonid thermal habitat suitability in 82% of groundwater-dominated streams, including those with brook charr (80% accuracy), brown trout (89% accuracy), and rainbow trout (75% accuracy). In contrast, generalized models predicted thermal habitat suitability in runoff-dominated streams with much lower accuracy (54%). These results suggest that, amidst climate change and constraints in resource availability, generalized models are appropriate to forecast thermal conditions in groundwater-dominated streams within and outside Michigan and inform regional-level salmonid management strategies that are practical for coldwater fisheries managers, policy makers, and the public. We recommend fisheries professionals reserve resource

  13. Comparative Performance and Model Agreement of Three Common Photovoltaic Array Configurations.

    Science.gov (United States)

    Boyd, Matthew T

    2018-02-01

    Three grid-connected monocrystalline silicon arrays on the National Institute of Standards and Technology (NIST) campus in Gaithersburg, MD have been instrumented and monitored for 1 yr, with only minimal gaps in the data sets. These arrays range from 73 kW to 271 kW, and all use the same module, but have different tilts, orientations, and configurations. One array is installed facing east and west over a parking lot, one in an open field, and one on a flat roof. Various measured relationships and calculated standard metrics have been used to compare the relative performance of these arrays in their different configurations. Comprehensive performance models have also been created in the modeling software pvsyst for each array, and its predictions using measured on-site weather data are compared to the arrays' measured outputs. The comparisons show that all three arrays typically have monthly performance ratios (PRs) above 0.75, but differ significantly in their relative output, strongly correlating to their operating temperature and to a lesser extent their orientation. The model predictions are within 5% of the monthly delivered energy values except during the winter months, when there was intermittent snow on the arrays, and during maintenance and other outages.
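    The monthly performance ratio (PR) quoted in this record is conventionally the measured energy divided by the energy the array would deliver at its nameplate rating under the measured in-plane irradiation. A sketch with hypothetical numbers:

```python
# Monthly performance ratio: measured AC energy divided by the energy the
# array would produce at nameplate efficiency under the measured irradiation.
def performance_ratio(e_ac_kwh, poa_irradiation_kwh_m2, p_stc_kw, g_stc_kw_m2=1.0):
    reference_yield = poa_irradiation_kwh_m2 / g_stc_kw_m2  # equivalent sun-hours
    return e_ac_kwh / (p_stc_kw * reference_yield)

# Hypothetical month for a 271 kW array: 33,500 kWh delivered, 160 kWh/m^2 in-plane.
print(round(performance_ratio(33_500, 160, 271), 3))  # ~0.77, i.e. PR above 0.75
```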

  14. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and to compare the performance of these methods, we proposed a novel data simulation method, which combined the population Hi-C data and single-cell Hi-C data without ad hoc parameters. Also, we designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform the algorithms in the literature.

  15. Comparing regional precipitation and temperature extremes in climate model and reanalysis products

    Directory of Open Access Journals (Sweden)

    Oliver Angélil

    2016-09-01

    Full Text Available A growing field of research aims to characterise the contribution of anthropogenic emissions to the likelihood of extreme weather and climate events. These analyses can be sensitive to the shapes of the tails of simulated distributions. If tails are found to be unrealistically short or long, the anthropogenic signal emerges more or less clearly, respectively, from the noise of possible weather. Here we compare the chance of daily land-surface precipitation and near-surface temperature extremes generated by three Atmospheric Global Climate Models typically used for event attribution, with distributions from six reanalysis products. The likelihoods of extremes are compared for area-averages over grid cell and regional sized spatial domains. Results suggest a bias favouring overly strong attribution estimates for hot and cold events over many regions of Africa and Australia, and a bias favouring overly weak attribution estimates over regions of North America and Asia. For rainfall, results are more sensitive to geographic location. Although the three models show similar results over many regions, they do disagree over others. Equally, results highlight the discrepancy amongst reanalyses products. This emphasises the importance of using multiple reanalysis and/or observation products, as well as multiple models in event attribution studies.

  16. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    Full Text Available In this paper, we evaluate the control performance of SSVEP- (steady-state visual evoked potential) and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a classification accuracy above 90.0% as a mandatory requirement for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracies over 90.0%. Therefore, four SSVEP stimuli were used to control four types of robot behavior, while six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded a faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
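    The ITR figures quoted above are of the kind given by the standard Wolpaw formula, sketched below; the exact values depend on which time intervals (cueing, gaze shifting, inter-trial pauses) are counted in the trial duration, so the numbers computed here land near, rather than exactly on, the reported ones.

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Wolpaw information transfer rate for an N-class BCI."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

print(round(itr_bits_per_min(4, 0.903, 3.65), 1))  # SSVEP-like settings
print(round(itr_bits_per_min(6, 0.913, 6.60), 1))  # P300-like settings
```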

  17. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  18. Comparative Study of Elastic Network Model and Protein Contact Network for Protein Complexes: The Hemoglobin Case

    Directory of Open Access Journals (Sweden)

    Guang Hu

    2017-01-01

    Full Text Available The overall topology and interfacial interactions play key roles in understanding structural and functional principles of protein complexes. The Elastic Network Model (ENM) and the Protein Contact Network (PCN) are two widely used methods for high-throughput investigation of structures and interactions within protein complexes. In this work, a comparative analysis of ENM and PCN applied to hemoglobin (Hb) was taken as a case study. We examine four types of structural and dynamical paradigms, namely, conformational change between different states of Hbs, modular analysis, allosteric mechanism studies, and interface characterization of Hb. The comparative study shows that ENM has an advantage in studying dynamical properties and protein-protein interfaces, while PCN is better for describing protein structures quantitatively at both local and global levels. We suggest that the integration of ENM and PCN would give a potential but powerful tool in structural systems biology.
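    As an illustration of the PCN side, the sketch below builds a contact network from C-alpha coordinates with a distance cutoff; the coordinates are random stand-ins for a parsed PDB structure, and the 8 Å cutoff is one common choice, not the paper's specific setting.

```python
import numpy as np

def contact_network(ca_coords, cutoff=8.0):
    """Adjacency matrix of a protein contact network: residues are nodes,
    and an edge joins residues whose C-alpha atoms lie within the cutoff."""
    d = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    A = (d < cutoff) & ~np.eye(len(ca_coords), dtype=bool)
    return A.astype(int)

rng = np.random.default_rng(3)
coords = rng.uniform(0, 30, size=(50, 3))  # 50 pseudo-residues, angstroms
A = contact_network(coords)
print("edges:", A.sum() // 2, "mean degree:", round(A.sum(1).mean(), 2))
```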

  19. Comparative Analysis of Pain Behaviours in Humanized Mouse Models of Sickle Cell Anemia.

    Directory of Open Access Journals (Sweden)

    Jianxun Lei

    Full Text Available Pain is a hallmark feature of sickle cell anemia (SCA), but management of chronic as well as acute pain remains a major challenge. Mouse models of SCA are essential to examine the mechanisms of pain and develop novel therapeutics. To facilitate this effort, we compared humanized homozygous BERK and Townes sickle mice for the effect of gender and age on pain behaviors. Similar to previously characterized BERK sickle mice, Townes sickle mice show more mechanical, thermal, and deep tissue hyperalgesia with increasing age. Female Townes sickle mice demonstrate more hyperalgesia compared to males, similar to that reported for BERK mice and patients with SCA. Mechanical, thermal and deep tissue hyperalgesia increased further after hypoxia/reoxygenation (H/R) treatment in Townes sickle mice. Together, these data show that BERK sickle mice exhibit a significantly greater degree of hyperalgesia for all behavioral measures as compared to gender- and age-matched Townes sickle mice. However, the genetically distinct "knock-in" strategy of human α and β transgene insertion in Townes mice, as compared to BERK mice, may provide a relative advantage for further genetic manipulations to examine specific mechanisms of pain.

  20. Comparing different methods to model scenarios of future glacier change for the entire Swiss Alps

    Science.gov (United States)

    Linsbauer, A.; Paul, F.; Haeberli, W.

    2012-04-01

    There is general agreement that observed climate change already has strong impacts on the cryosphere. The rapid shrinkage of glaciers during the past two decades, as observed in many mountain ranges globally and in particular in the Alps, is an impressive confirmation of a changed climate. With the expected future temperature increase, glacier shrinkage will likely accelerate further and the glaciers' role as an important water resource will increasingly diminish. To determine the future contribution of glaciers to run-off with hydrological models, the change in glacier area and/or volume must be considered. As these models operate at regional scales, simplified approaches to model the future development of all glaciers in a mountain range need to be applied. In this study we have compared different simplified approaches to model the area and volume evolution of all glaciers in the Swiss Alps over the 21st century according to given climate change scenarios. One approach is based on an upward shift of the ELA (by 150 m per degree of temperature increase) and the assumption that the glacier extent will shrink until the smaller accumulation area again covers 60% of the total glacier area. A second approach is based on observed elevation changes between 1985 and 2000, as derived from DEM differencing for all glaciers in Switzerland. With a related elevation-dependent parameterization of glacier thickness change and a modelled glacier thickness distribution, the 15-year trends in observed thickness loss are extrapolated into the future, with glacier area loss taking place where thickness becomes zero. The models show an overall glacier area reduction of 60-80% until 2100, with some ice remaining at the highest elevations. However, compared to the ongoing temperature increase, and considering that several reinforcing feedbacks (albedo lowering, lake formation) are not accounted for, the real area loss might be even stronger. Uncertainties in the modelled glacier thickness have only a
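    The first approach lends itself to a back-of-the-envelope calculation: shift the ELA upward by 150 m per degree of warming, then rescale the glacier so that the area remaining above the new ELA is again 60% of the total. A sketch with a purely hypothetical hypsometry:

```python
import numpy as np

def equilibrium_area(band_elevations_m, band_areas_km2, ela_m, warming_deg_c,
                     shift_m_per_deg=150.0, target_aar=0.6):
    """Total glacier area consistent with AAR = 0.6 after an ELA shift."""
    new_ela = ela_m + shift_m_per_deg * warming_deg_c
    elev = np.asarray(band_elevations_m, float)
    area = np.asarray(band_areas_km2, float)
    accumulation = area[elev >= new_ela].sum()  # area still above the new ELA
    return accumulation / target_aar

# Hypothetical hypsometry: 100 m elevation bands and their areas (km^2).
bands = np.arange(2500, 3600, 100)
areas = np.array([0.5, 0.8, 1.2, 1.5, 1.6, 1.4, 1.0, 0.7, 0.4, 0.2, 0.1])
print(equilibrium_area(bands, areas, ela_m=2900, warming_deg_c=3.0), "km^2")
```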

  1. Comparative Application of Capacity Models for Seismic Vulnerability Evaluation of Existing RC Structures

    International Nuclear Information System (INIS)

    Faella, C.; Lima, C.; Martinelli, E.; Nigro, E.

    2008-01-01

    Seismic vulnerability assessment of existing buildings is one of the most common tasks in which Structural Engineers are currently engaged. Since it is often a preliminary step in approaching the issue of how to retrofit non-seismically designed and detailed structures, it plays a key role in the successful choice of the most suitable strengthening technique. In this framework, the basic information for both seismic assessment and retrofitting is related to the formulation of capacity models for structural members. Plenty of proposals, often contradictory from the quantitative standpoint, are currently available in the technical and scientific literature for defining structural capacity in terms of forces and displacements, possibly with reference to different parameters representing the seismic response. The present paper briefly reviews some of the models for the capacity of RC members and compares them with reference to two case studies assumed as representative of a wide class of existing buildings.

  2. From information processing to decisions: Formalizing and comparing psychologically plausible choice models.

    Science.gov (United States)

    Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten

    2017-08-01

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can be compared directly using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
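    For concreteness, a minimal deterministic take-the-best can be sketched as follows; the probabilistic version studied in the paper additionally assigns each cue rank its own error probability, which is not reproduced here. Cue names and values are hypothetical.

```python
# Take-the-best (TTB): inspect cues in descending order of validity and
# decide as soon as one cue discriminates between the two options.
def take_the_best(cues_a, cues_b, validity_order):
    for cue in validity_order:          # most valid cue first
        if cues_a[cue] != cues_b[cue]:
            return "A" if cues_a[cue] > cues_b[cue] else "B"
    return "guess"                      # no cue discriminates

a = {"cue1": 1, "cue2": 0, "cue3": 1}
b = {"cue1": 1, "cue2": 1, "cue3": 0}
print(take_the_best(a, b, ["cue1", "cue2", "cue3"]))  # cue2 discriminates -> "B"
```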

  3. Physician-patient argumentation and communication, comparing Toulmin's model, pragma-dialectics, and American sociolinguistics.

    Science.gov (United States)

    Rivera, Francisco Javier Uribe; Artmann, Elizabeth

    2015-12-01

    This article discusses the application of theories of argumentation and communication to the field of medicine. Based on a literature review, the authors compare Toulmin's model, pragma-dialectics, and the work of Todd and Fisher, derived from American sociolinguistics. These approaches were selected because they belong to the pragmatic field of language. The main results were: pragma-dialectics characterizes medical reasoning more comprehensively, highlighting specific elements of the three disciplines of argumentation: dialectics, rhetoric, and logic; Toulmin's model helps substantiate the declaration of diagnostic and therapeutic hypotheses, and as part of an interpretive medicine, approximates the pragma-dialectical approach by including dialectical elements in the process of formulating arguments; Fisher and Todd's approach allows characterizing, from a pragmatic analysis of speech acts, the degree of symmetry/asymmetry in the doctor-patient relationship, while arguing the possibility of negotiating treatment alternatives.

  4. A Comparative study of two RVE modelling methods for chopped carbon fiber SMC

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zhangxing; Li, Yi; Shao, Yimin; Huang, Tianyu; Xu, Hongyi; Li, Yang; Chen, Wei; Zeng, Danielle; Avery, Katherine; Kang, HongTae; Su, Xuming

    2017-04-06

    To achieve vehicle light-weighting, chopped carbon fiber sheet molding compound (SMC) has been identified as a promising material to replace metals. However, there are no effective tools and methods to predict the mechanical properties of chopped carbon fiber SMC, owing to the high complexity of its microstructure features and its anisotropic properties. In this paper, the Representative Volume Element (RVE) approach is used to model the SMC microstructure. Two modeling methods, the Voronoi diagram-based method and the chip packing method, are developed for predicting material RVE properties. The two methods are compared in terms of the predicted elastic modulus, and the predictions are validated using Digital Image Correlation (DIC) tensile test results. Furthermore, the advantages and shortcomings of these two methods are discussed in terms of the required input information and the convenience of use in an integrated processing-microstructure-property analysis.

  5. Comparative analysis of hourly and dynamic power balancing models for validating future energy scenarios

    DEFF Research Database (Denmark)

    Pillai, Jayakrishnan R.; Heussen, Kai; Østergaard, Poul Alberg

    2011-01-01

    Energy system analyses on the basis of fast and simple tools have proven particularly useful for interdisciplinary planning projects with frequent iterations and re-evaluation of alternative scenarios. As such, the tool “EnergyPLAN” is used for hourly balanced and spatially aggregate annual energy system analyses. In this study, the model is verified on the basis of the existing energy mix on Bornholm as an islanded energy system. Future energy scenarios for the year 2030 are analysed to study a feasible technology mix for a higher share of wind power. Finally, the results of the hourly simulations are compared to dynamic frequency simulations incorporating the Vehicle-to-grid technology. The results indicate how the EnergyPLAN model may be improved in terms of intra-hour variability, stability and ancillary services to achieve a better reflection of energy and power capacity requirements.

  6. From neurons to nests: nest-building behaviour as a model in behavioural and comparative neuroscience.

    Science.gov (United States)

    Hall, Zachary J; Meddle, Simone L; Healy, Susan D

    Despite centuries of observing the nest building of most extant bird species, we know surprisingly little about how birds build nests and, specifically, how the avian brain controls nest building. Here, we argue that nest building in birds may be a useful model behaviour in which to study how the brain controls behaviour. Specifically, we argue that nest building as a behavioural model provides a unique opportunity to study not only the mechanisms through which the brain controls behaviour within individuals of a single species but also how evolution may have shaped the brain to produce interspecific variation in nest-building behaviour. In this review, we outline the questions in both behavioural and comparative neuroscience that nest building could be used to address, summarize recent findings regarding the neurobiology of nest building in lab-reared zebra finches and across species building different nest structures, and suggest some future directions for the neurobiology of nest building.

  7. Mobile Agent-Based Software Systems Modeling Approaches: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Aissam Belghiat

    2016-06-01

    Full Text Available Mobile agent-based applications are a special type of software system which takes advantage of mobile agents in order to provide a new beneficial paradigm for solving multiple complex problems in several fields and areas such as network management, e-commerce, e-learning, etc. At the same time, we note a lack of real applications based on this paradigm and a lack of serious evaluations of their modeling approaches. Hence, this paper provides a comparative study of modeling approaches for mobile agent-based software systems. The objective is to give the reader an overview and a thorough understanding of the work that has been done and where the gaps in the research are.

  8. How do farm models compare when estimating greenhouse gas emissions from dairy cattle production?

    DEFF Research Database (Denmark)

    Hutchings, Nicholas John; Özkan, Şeyda; de Haan, M

    2018-01-01

    The European Union Effort Sharing Regulation (ESR) will require a 30% reduction in greenhouse gas (GHG) emissions by 2030 compared with 2005 from the sectors not included in the European Emissions Trading Scheme, including agriculture. This will require the estimation of current and future emissions. GHG emissions from four farm-scale models (DairyWise, FarmAC, HolosNor and SFARMMOD) were calculated for eight dairy farming scenarios within a factorial design consisting of two climates (cool/dry and warm/wet)×two soil types (sandy and clayey)×two feeding systems (grass only and grass/maize). The milk yield per

  9. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    Directory of Open Access Journals (Sweden)

    Kirti AREKAR

    2017-12-01

    Full Text Available Stock market volatility depends on three major features: complete volatility, volatility fluctuations, and volatility attention, all of which are calculated using statistical techniques. We present a comparative analysis of market volatility for two major indices, the banking and IT sectors of the Bombay Stock Exchange (BSE), using an average decline model. The average decline process in volatility is examined after very high and very low stock returns. The results of this study show a significant decline in volatility fluctuations, attention, and level between the periods before and after particularly high stock returns.

  10. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts of 0.5 and above. Volume phase holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.
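    One widely used modelling technique for such gratings is Kogelnik's coupled-wave analysis; below is a sketch of its first-order diffraction efficiency for a lossless volume transmission phase grating at Bragg incidence (s-polarization). The grating parameters are purely hypothetical and are not taken from the BigBOSS designs.

```python
import math

def kogelnik_efficiency(delta_n, thickness_um, wavelength_um, bragg_angle_deg):
    """First-order Kogelnik efficiency, eta = sin^2(pi*dn*d / (lambda*cos(theta))),
    for a lossless volume transmission phase grating at the Bragg condition."""
    nu = math.pi * delta_n * thickness_um / (
        wavelength_um * math.cos(math.radians(bragg_angle_deg)))
    return math.sin(nu) ** 2

# Hypothetical grating: index modulation 0.036, 8 um thick, 600 nm, 15 degrees.
print(round(kogelnik_efficiency(0.036, 8.0, 0.6, 15.0), 3))  # near-peak efficiency
```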

  11. A comparative modeling and molecular docking study on Mycobacterium tuberculosis targets involved in peptidoglycan biosynthesis.

    Science.gov (United States)

    Fakhar, Zeynab; Naiker, Suhashni; Alves, Claudio N; Govender, Thavendran; Maguire, Glenn E M; Lameira, Jeronimo; Lamichhane, Gyanu; Kruger, Hendrik G; Honarparvar, Bahareh

    2016-11-01

    An alarming rise of multidrug-resistant Mycobacterium tuberculosis strains and the continuous high global morbidity of tuberculosis have reinvigorated the need to identify novel targets to combat the disease. The enzymes that catalyze the biosynthesis of peptidoglycan in M. tuberculosis are essential and noteworthy therapeutic targets. In this study, the biochemical function and homology modeling of MurI, MurG, MraY, DapE, DapA, Alr, and Ddl enzymes of the CDC1551 M. tuberculosis strain involved in the biosynthesis of peptidoglycan cell wall are reported. Generation of the 3D structures was achieved with Modeller 9.13. To assess the structural quality of the obtained homology modeled targets, the models were validated using PROCHECK, PDBsum, QMEAN, and ERRAT scores. Molecular dynamics simulations were performed to calculate root mean square deviation (RMSD) and radius of gyration (Rg) of MurI and MurG target proteins and their corresponding templates. For further model validation, RMSD and Rg for selected targets/templates were investigated to compare the close proximity of their dynamic behavior in terms of protein stability and average distances. To identify the potential binding mode required for molecular docking, binding site information of all modeled targets was obtained using two prediction algorithms. A docking study was performed for MurI to determine the potential mode of interaction between the inhibitor and the active site residues. This study presents the first accounts of the 3D structural information for the selected M. tuberculosis targets involved in peptidoglycan biosynthesis.

  12. In silico models for predicting ready biodegradability under REACH: a comparative study.

    Science.gov (United States)

    Pizzo, Fabiola; Lombardo, Anna; Manganaro, Alberto; Benfenati, Emilio

    2013-10-01

    REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) legislation is a new European law which aims to raise the level of protection of human health and the environment. Under REACH, all chemicals manufactured or imported at more than one ton per year must be evaluated for their ready biodegradability. Ready biodegradability is also used as a screening test for persistent, bioaccumulative and toxic (PBT) substances. REACH encourages the use of non-testing methods such as QSAR (quantitative structure-activity relationship) models in order to save money and time and to reduce the number of animals used for scientific purposes. Some QSAR models are available for predicting ready biodegradability. We used a dataset of 722 compounds to test four models: VEGA, TOPKAT, BIOWIN 5 and 6, and START, and compared their performance on the basis of the following parameters: accuracy, sensitivity, specificity and Matthews correlation coefficient (MCC). Performance was analyzed from different points of view. The first calculation was done on the whole dataset, and VEGA and TOPKAT gave the best accuracy (88% and 87% respectively). Then we considered the compounds inside and outside the training set: BIOWIN 6 and 5 gave the best results for accuracy (81%) outside the training set. Another analysis examined the applicability domain (AD). VEGA had the highest value for compounds inside the AD for all the parameters taken into account. Finally, compounds outside the training set and in the AD of the models were considered to assess predictive ability. VEGA gave the best accuracy results (99%) for this group of chemicals. Generally, the START model gave poor results. Since the BIOWIN, TOPKAT and VEGA models performed well, they may be used to predict ready biodegradability. Copyright © 2013 Elsevier B.V. All rights reserved.
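    The four performance parameters used in this comparison follow directly from a 2x2 confusion matrix; a sketch with hypothetical counts (not the paper's actual results):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and the Matthews correlation
    coefficient (MCC) from a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# Hypothetical counts for a ready-biodegradability model on 722 chemicals.
print([round(x, 3) for x in classification_metrics(tp=300, tn=330, fp=40, fn=52)])
```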

  13. One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.

    Science.gov (United States)

    Lhomme, J; Bouvier, C; Mignot, E; Paquier, A

    2006-01-01

    A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France), for mapping flow depths or velocities in the streets network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At the crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipes junctions, and has been modified here to fit X-shaped crossroads. The results were compared with the results of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreements can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given and some research possibilities are proposed.
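    A minimal sketch of explicit kinematic-wave routing of the kind described, assuming a wide-rectangular street section with Manning friction, so that discharge follows Q = alpha * A^m with m = 5/3; the geometry and hydrograph are invented, and the conceptual crossroads scheme is not reproduced.

```python
import numpy as np

def route_kinematic(inflow, dx, dt, slope, n_manning, width, nx):
    """Explicit upwind kinematic-wave routing along one street reach."""
    m = 5.0 / 3.0
    alpha = np.sqrt(slope) / (n_manning * width ** (2.0 / 3.0))
    A = np.zeros(nx)                      # wetted area in each cell
    out = []
    for q_in in inflow:                   # one upstream inflow per time step
        Q = alpha * A ** m
        Q_up = np.concatenate(([q_in], Q[:-1]))
        A = np.maximum(A + dt / dx * (Q_up - Q), 0.0)
        out.append(alpha * A[-1] ** m)    # outflow from the last cell
    return out

inflow = [0, 2, 5, 8, 5, 3, 1] + [0] * 33   # m^3/s, hypothetical hydrograph
out = route_kinematic(inflow, dx=50.0, dt=5.0, slope=0.01,
                      n_manning=0.015, width=8.0, nx=10)
print("peak inflow:", max(inflow), "m^3/s; peak outflow:", round(max(out), 2))
```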

  14. Bilateral Cavernous Nerve Crush Injury in the Rat Model: A Comparative Review of Pharmacologic Interventions.

    Science.gov (United States)

    Haney, Nora M; Nguyen, Hoang M T; Honda, Matthew; Abdel-Mageed, Asim B; Hellstrom, Wayne J G

    2018-04-01

    It is common for men to develop erectile dysfunction after radical prostatectomy. The anatomy of the rat allows the cavernous nerve (CN) to be identified, dissected, and injured in a controlled fashion. Therefore, bilateral CN injury (BCNI) in the rat model is routinely used to study post-prostatectomy erectile dysfunction. To compare and contrast the available literature on pharmacologic intervention after BCNI in the rat. A literature search was performed on PubMed for cavernous nerve and injury and erectile dysfunction and rat. Only articles with BCNI and pharmacologic intervention that could be grouped into categories of immune modulation, growth factor therapy, receptor kinase inhibition, phosphodiesterase type 5 inhibition, and anti-inflammatory and antifibrotic interventions were included. To assess outcomes of pharmaceutical intervention on erectile function recovery after BCNI in the rat model. The ratio of maximum intracavernous pressure to mean arterial pressure was the main outcome measure chosen for this analysis. All interventions improved erectile function recovery after BCNI based on the ratio of maximum intracavernous pressure to mean arterial pressure results. Additional end-point analysis examined the corpus cavernosa and/or the major pelvic ganglion and CN. There was extreme heterogeneity within the literature, making accurate comparisons between crush injury and therapeutic interventions difficult. BCNI in the rat is the accepted animal model used to study nerve-sparing post-prostatectomy erectile dysfunction. However, an important limitation is extreme variability. Efforts should be made to decrease this variability and increase the translational utility toward clinical trials in humans. Haney NM, Nguyen HMT, Honda M, et al. Bilateral Cavernous Nerve Crush Injury in the Rat Model: A Comparative Review of Pharmacologic Interventions. Sex Med Rev 2018;6:234-241. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier

  15. Comparative molecular analysis of early and late cancer cachexia-induced muscle wasting in mouse models.

    Science.gov (United States)

    Sun, Rulin; Zhang, Santao; Lu, Xing; Hu, Wenjun; Lou, Ning; Zhao, Yan; Zhou, Jia; Zhang, Xiaoping; Yang, Hongmei

    2016-12-01

    Cancer-induced muscle wasting, which commonly occurs in cancer cachexia, is characterized by impaired quality of life and poor patient survival. To identify an appropriate treatment, research on the mechanism underlying muscle wasting is essential. Thus far, studies on muscle wasting using cancer cachectic models have generally focused on early cancer cachexia (ECC), before severe body weight loss occurs. In the present study, we established models of ECC and late cancer cachexia (LCC) and compared different stages of cancer cachexia using two cancer cachectic mouse models induced by colon-26 (C26) adenocarcinoma or Lewis lung carcinoma (LLC). In each model, tumor-bearing (TB) and control (CN) mice were injected with cancer cells and PBS, respectively. The TB and CN mice, which were euthanized on the 24th day or the 36th day after injection, were defined as the ECC and ECC-CN mice or the LCC and LCC-CN mice. In addition, the tissues were harvested and analyzed. We found that both the ECC and LCC mice developed cancer cachexia. The amounts of muscle loss differed between the ECC and LCC mice. Moreover, the expression of some molecules was altered in the muscles from the LCC mice but not in those from the ECC mice compared with their CN mice. In conclusion, the molecules with altered expression in the muscles from the ECC and LCC mice were not exactly the same. These findings may provide some clues for therapy which could prevent the muscle wasting in cancer cachexia from progression to the late stage.

  16. Comparative modelling and molecular docking of nitrate reductase from Bacillus weihenstephanensis (DS45

    Directory of Open Access Journals (Sweden)

    R. Seenivasagan

    2016-07-01

    Full Text Available Nitrate reductase (NR) catalyses the oxidation of NAD(P)H and the reduction of nitrate to nitrite. NR serves as a central point for the integration of metabolic pathways by governing the flux of reduced nitrogen through several regulatory mechanisms in plants, algae and fungi. Bacteria express nitrate reductases that convert nitrate to nitrite, but mammals lack these specific enzymes. The microbial nitrate reductase reduces toxic compounds to nontoxic compounds with the help of NAD(P)H. In the present study, our results revealed that Bacillus weihenstephanensis expresses a nitrate reductase enzyme, whose sequence was used to generate the 3D structure of the enzyme. Six different modelling servers, namely Phyre2, RaptorX, M4T Server, HHpred, SWISS MODEL and Mod Web, were used for comparative modelling of the structure. The model was validated with standard parameters (PROCHECK and Verify 3D). This study will be useful in the functional characterization of the nitrate reductase enzyme and its docking with nitrate molecules, as well as for use with autodocking.

  17. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
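    As an illustration of the matrix-algebra method the macros implement, the sketch below computes the standard error of the difference between two fitted values from the coefficient covariance matrix. It uses Python/statsmodels rather than SPSS, and the model, data, and covariate profiles are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2 + 1.5 * x + 0.5 * x**2 + rng.normal(size=100)

# Model with a polynomial term, where coefficients alone do not
# answer every fitted-value comparison of interest
X = sm.add_constant(np.column_stack([x, x**2]))
fit = sm.OLS(y, X).fit()

d = np.array([1.0, 1.0, 1.0]) - np.array([1.0, -1.0, 1.0])  # profile x=1 minus profile x=-1
diff = d @ fit.params                                       # difference of the two fitted values
se = np.sqrt(d @ fit.cov_params() @ d)                      # SE from the coefficient covariance matrix
print(f"diff = {diff:.3f}, 95% CI = ({diff - 1.96*se:.3f}, {diff + 1.96*se:.3f})")
```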

  18. Genome-scale metabolic modeling of Mucor circinelloides and comparative analysis with other oleaginous species.

    Science.gov (United States)

    Vongsangnak, Wanwipa; Klanchui, Amornpan; Tawornsamretkit, Iyarest; Tatiyaborwornchai, Witthawin; Laoteng, Kobkul; Meechai, Asawin

    2016-06-01

    We present a novel genome-scale metabolic model iWV1213 of Mucor circinelloides, which is an oleaginous fungus for industrial applications. The model contains 1213 genes, 1413 metabolites and 1326 metabolic reactions across different compartments. We demonstrate that iWV1213 is able to accurately predict the growth rates of M. circinelloides on various nutrient sources and culture conditions using Flux Balance Analysis and Phenotypic Phase Plane analysis. Comparative analysis of three oleaginous genome-scale models, including M. circinelloides (iWV1213), Mortierella alpina (iCY1106) and Yarrowia lipolytica (iYL619_PCP) revealed that iWV1213 possesses a higher number of genes involved in carbohydrate, amino acid, and lipid metabolisms that might contribute to its versatility in nutrient utilization. Moreover, the identification of unique and common active reactions among the Zygomycetes oleaginous models using Flux Variability Analysis unveiled a set of gene/enzyme candidates as metabolic engineering targets for cellular improvement. Thus, iWV1213 offers a powerful metabolic engineering tool for multi-level omics analysis, enabling strain optimization as a cell factory platform of lipid-based production. Copyright © 2016 Elsevier B.V. All rights reserved.
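    The flux balance analysis underlying such growth predictions is a linear program: maximize an objective flux subject to steady-state mass balance and flux bounds. The sketch below solves a toy three-reaction network; a genome-scale model such as iWV1213 would normally be handled with a dedicated constraint-based modeling package, but the optimization structure is the same. All stoichiometry and bounds here are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake) -> A, R2: A -> B, R3: B -> biomass (objective)
# Rows of S are metabolites A and B; columns are reactions R1..R3.
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake limited to 10 units

# Maximize v3 subject to S v = 0 (steady state); linprog minimizes, so negate.
res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2),
              bounds=bounds, method="highs")
print("optimal biomass flux:", -res.fun, "| flux vector:", res.x)
```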

  19. Comparing distribution models for small samples of overdispersed counts of freshwater fish

    Science.gov (United States)

    Vaudor, Lise; Lamouroux, Nicolas; Olivier, Jean-Michel

    2011-05-01

    The study of species abundance often relies on repeated abundance counts whose number is limited by logistical or financial constraints. The distribution of abundance counts is generally right-skewed (i.e. with many zeros and few high values) and needs to be modelled for statistical inference. We used an extensive dataset involving about 100,000 fish individuals of 12 freshwater fish species collected in electrofishing points (7 m²) during 350 field surveys made in 25 stream sites, in order to compare the performance and the generality of four distribution models of counts (Poisson, negative binomial and their zero-inflated counterparts). The negative binomial distribution was the best model (Bayesian Information Criterion) for 58% of the samples (species-survey combinations) and was suitable for a variety of life histories, habitat, and sample characteristics. The performance of the models was closely related to sample statistics such as total abundance and variance. Finally, we illustrated the consequences of a distribution assumption by calculating confidence intervals around the mean abundance, either based on the most suitable distribution assumption or on an asymptotic, distribution-free (Student's) method. Student's method generally corresponded to narrower confidence intervals, especially when there were few (≤3) non-null counts in the samples.
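    A minimal sketch of this style of comparison, fitting intercept-only versions of the four count models to one simulated sample and ranking them by BIC; it assumes the count-model classes available in recent statsmodels releases, and the data are simulated rather than the study's fish counts.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                              ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(1)
counts = rng.negative_binomial(n=1, p=0.3, size=40)  # right-skewed simulated abundances
X = np.ones((counts.size, 1))                        # intercept-only design

fits = {
    "Poisson": sm.Poisson(counts, X).fit(disp=0),
    "NegBin": sm.NegativeBinomial(counts, X).fit(disp=0),
    "ZIP": ZeroInflatedPoisson(counts, X, exog_infl=X).fit(disp=0),
    "ZINB": ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X).fit(disp=0),
}
for name, res in fits.items():
    print(f"{name:8s} BIC = {res.bic:.1f}")   # lowest BIC wins, as in the study
```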

  20. Groundwater development stress: Global-scale indices compared to regional modeling

    Science.gov (United States)

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.

  1. Properties of the vacuum in models for QCD. Holography vs. resummed field theory. A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Zayakin, Andrey V.

    2011-01-17

    This Thesis is dedicated to a comparison of the two means of studying the electromagnetic properties of the QCD vacuum - holography and resummed field theory. I compare two classes of distinct models for the dynamics of the condensates. The first class consists of the so-called holographic models of QCD. Based upon the Maldacena conjecture, it tries to establish the properties of QCD correlation functions from the behavior of classical solutions of field equations in a higher-dimensional theory. Yet in many aspects the holographic approach has been found to be in excellent agreement with data. These successes are the prediction of the very small viscosity-to-entropy ratio and the predictions of meson spectra up to 5% accuracy in several models. On the other hand, the resummation methods in field theory have not been discarded so far. Both classes of methods have access to condensates. Thus a comprehensive study of condensates becomes possible, in which I compare my calculations in holography and resummed field theory with each other, as well as with lattice results, field theory and experiment. I prove that the low-energy theorems of QCD keep their validity in holographic models with a gluon condensate in a non-trivial way. I also show that the so-called decoupling relation holds in holography models with chiral and gluon condensates, whereas this relation fails in the Dyson-Schwinger approach. On the contrary, my results on the chiral magnetic effect in holography disagree with the weak-field prediction; the chiral magnetic effect (that is, the electric current generation in a magnetic field) is three times less than the current in weakly-coupled QCD. The chiral condensate behavior is found to be quadratic in the external field both in the Dyson-Schwinger approach and in holography, yet we know that in the exact limit the condensate must be linear, thus both classes of models are concluded to be deficient for establishing the correct condensate behaviour in the

  2. Properties of the vacuum in models for QCD. Holography vs. resummed field theory. A comparative study

    International Nuclear Information System (INIS)

    Zayakin, Andrey V.

    2011-01-01

    This Thesis is dedicated to a comparison of the two means of studying the electromagnetic properties of the QCD vacuum - holography and resummed field theory. I compare two classes of distinct models for the dynamics of the condensates. The first class consists of the so-called holographic models of QCD. Based upon the Maldacena conjecture, it tries to establish the properties of QCD correlation functions from the behavior of classical solutions of field equations in a higher-dimensional theory. Yet in many aspects the holographic approach has been found to be in excellent agreement with data. These successes are the prediction of the very small viscosity-to-entropy ratio and the predictions of meson spectra up to 5% accuracy in several models. On the other hand, the resummation methods in field theory have not been discarded so far. Both classes of methods have access to condensates. Thus a comprehensive study of condensates becomes possible, in which I compare my calculations in holography and resummed field theory with each other, as well as with lattice results, field theory and experiment. I prove that the low-energy theorems of QCD keep their validity in holographic models with a gluon condensate in a non-trivial way. I also show that the so-called decoupling relation holds in holography models with chiral and gluon condensates, whereas this relation fails in the Dyson-Schwinger approach. On the contrary, my results on the chiral magnetic effect in holography disagree with the weak-field prediction; the chiral magnetic effect (that is, the electric current generation in a magnetic field) is three times less than the current in weakly-coupled QCD. The chiral condensate behavior is found to be quadratic in the external field both in the Dyson-Schwinger approach and in holography, yet we know that in the exact limit the condensate must be linear, thus both classes of models are concluded to be deficient for establishing the correct condensate behaviour in the

  3. DISSECTING GALAXY FORMATION. II. COMPARING SUBSTRUCTURE IN PURE DARK MATTER AND BARYONIC MODELS

    International Nuclear Information System (INIS)

    Romano-Diaz, Emilio; Shlosman, Isaac; Heller, Clayton; Hoffman, Yehuda

    2010-01-01

    We compare the substructure evolution in pure dark matter (DM) halos with those in the presence of baryons, hereafter PDM and BDM models, respectively. The prime halos have been analyzed in the previous work. Models have been evolved from identical initial conditions which have been constructed by means of the constrained realization method. The BDM model includes star formation and feedback from stellar evolution onto the gas. A comprehensive catalog of subhalo populations has been compiled and individual and statistical properties of subhalos analyzed, including their orbital differences. We find that subhalo population mass functions in PDM and BDM are consistent with a single power law, M^α_sbh, for each of the models in the mass range of ∼2 × 10^8 M_sun - 2 × 10^11 M_sun. However, we detect a nonnegligible shift between these functions, the time-averaged α ∼ -0.86 for the PDM and -0.98 for the BDM models. Overall, α appears to be nearly constant in time, with variations of ±15%. Second, we find that the radial mass distribution of subhalo populations can be approximated by a power law, R^γ_sbh, with a steepening that occurs at the radius of maximal circular velocity, R_vmax, in the prime halos. Here we find that γ_sbh ∼ -1.5 for the PDM and -1 for the BDM models, when averaged over time inside R_vmax. The slope is steeper outside this region and approaches -3. We detect little spatial bias (less than 10%) between the subhalo populations and the DM distribution of the main halos. Also, the subhalo population exhibits much less triaxiality in the presence of baryons, in tandem with the shape of the prime halo. Finally, we find that, counter-intuitively, the BDM population is depleted at a faster rate than the PDM one within the central 30 kpc of the prime halo. The reason for this is that although the baryons provide a substantial glue to the subhalos, the main halo exhibits the same trend. This assures a more efficient tidal disruption of the

  4. A comparative study in the UNCITRAL model law about the independence of the arbitration clause

    Directory of Open Access Journals (Sweden)

    Atefeh Darami Zadeh

    2018-02-01

    Full Text Available The aim of the paper was to investigate the independence of the arbitration clause from the main contract in the International Commercial Arbitration Law of Iran with a comparative study in the UNCITRAL model law. The effectiveness of this type of procedure, its coordination with the specific objectives and the special status of international traders has led to their increasing willingness to use this legal solution. We use a comparative, quasi-experimental method to describe similarities and differences in variables in two or more existing groups in a natural setting; it resembles an experiment as it uses manipulation but lacks random assignment of individual subjects. This study begins by analyzing international arbitration and the UNCITRAL model rules (Chapters I to VI), then reviews national arbitration (Chapter V); thus, the effects of the principle of independence of the arbitration clause can be seen (Chapter VII) and, later, the problems that arise (Chapters VIII to X). Even so, the main conclusion is that the parties usually agree to resolve their international disputes through arbitration, which is judged privately and universally accepted.

  5. A New Framework to Compare Mass-Flux Schemes Within the AROME Numerical Weather Prediction Model

    Science.gov (United States)

    Riette, Sébastien; Lac, Christine

    2016-08-01

    In the Application of Research to Operations at Mesoscale (AROME) numerical weather forecast model used in operations at Météo-France, five mass-flux schemes are available to parametrize shallow convection at kilometre resolution. All but one are based on the eddy-diffusivity-mass-flux approach, and differ in entrainment/detrainment, the updraft vertical velocity equation and the closure assumption. The fifth is based on a more classical mass-flux approach. Screen-level scores obtained with these schemes show few discrepancies and are not sufficient to highlight behaviour differences. Here, we describe and use a new experimental framework, able to compare and discriminate among different schemes. For a year, daily forecast experiments were conducted over small domains centred on the five French metropolitan radio-sounding locations. Cloud base, planetary boundary-layer height and normalized vertical profiles of specific humidity, potential temperature, wind speed and cloud condensate were compared with observations, and with each other. The framework allowed the behaviour of the different schemes in and above the boundary layer to be characterized. In particular, the impact of the entrainment/detrainment formulation, closure assumption and cloud scheme were clearly visible. Differences mainly concerned the transport intensity thus allowing schemes to be separated into two groups, with stronger or weaker updrafts. In the AROME model (with all interactions and the possible existence of compensating errors), evaluation diagnostics gave the advantage to the first group.

  6. Tumor affinity of radiolabeled peanut agglutinin compared with that of Ga-67 citrate in animal models

    International Nuclear Information System (INIS)

    Yokoyama, K.; Aburano, T.; Watanabe, N.; Kawabata, S.; Ishida, H.; Mukai, K.; Tonami, N.; Hisada, K.

    1985-01-01

    Peanut agglutinin (PNA) binds avidly to the immunodominant group of the tumor-associated T antigen. The purpose of this study was to evaluate the oncodiagnostic potential of radiolabeled PNA in animal models. PNA was labeled with I-125 or I-131 by Iodogen and also with In-111 by cyclic DTPA anhydride. The biological activity of PNA was examined by a hemagglutination titer with a photometer before and after labeling. Animal tumor models used were Lewis Lung Cancer (LLC), B-16 Melanotic Melanoma (MM), Yoshida Sarcoma (YS), Ehrlich Ascites Tumor (EAT) and Hepatoma AH109A (HAH). Inflammatory tissue induced by turpentine oil was used as an abscess model. Serial scintigraphic images were obtained following IV injections of 100 μCi of I-131 or In-111-DTPA-PNA. The tumor affinity of Ga-67 citrate was studied for comparison with that of radiolabeled PNA. Tissue biodistribution was studied in EAT-bearing mice. All of these tumor models except HAH were clearly visible by radiolabeled PNA without subtraction techniques. In the models of LLC and EAT, PNA showed better accumulation into the tumor tissue than Ga-67 citrate. In YS and MM, PNA represented almost the same accumulation as Ga-67 citrate. Localization of PNA into abscess tissue was not found, although Ga-67 citrate markedly accumulated into abscess tissue as well as tumor tissue. The clearance of PNA from tumor was slower than those from any other organs. The tumor-to-muscle ratio was 5.1 at 48 hrs and the tumor-to-blood ratio increased with time to 2.3 at 96 hrs. These results suggested that radiolabeled PNA may have potential in the detection of tumors.

  7. Comparing potential recharge estimates from three Land Surface Models across the Western US

    Science.gov (United States)

    NIRAULA, REWATI; MEIXNER, THOMAS; AJAMI, HOORI; RODELL, MATTHEW; GOCHIS, DAVID; CASTRO, CHRISTOPHER L.

    2018-01-01

    Groundwater is a major source of water in the western US. However, there are limited recharge estimates available in this region due to the complexity of recharge processes and the challenge of direct observations. Land Surface Models (LSMs) could be a valuable tool for estimating current recharge and projecting changes due to future climate change. In this study, simulations of three LSMs (Noah, Mosaic and VIC) obtained from the North American Land Data Assimilation System (NLDAS-2) are used to estimate potential recharge in the western US. Modeled recharge was compared with published recharge estimates for several aquifers in the region. Annual recharge to precipitation ratios across the study basins varied from 0.01–15% for Mosaic, 3.2–42% for Noah, and 6.7–31.8% for VIC simulations. Mosaic consistently underestimates recharge across all basins. Noah captures recharge reasonably well in wetter basins, but overestimates it in drier basins. VIC slightly overestimates recharge in drier basins and slightly underestimates it for wetter basins. While the average annual recharge values vary among the models, the models were consistent in identifying high and low recharge areas in the region. The models agree on the seasonality of recharge, which occurs dominantly during the spring across the region. Overall, our results highlight that LSMs have the potential to capture the spatial and temporal patterns as well as the seasonality of recharge at large scales. Therefore, LSMs (specifically VIC and Noah) can be used as a tool for estimating future recharge rates in data-limited regions. PMID:29618845

  8. A comparative investigation of 18F kinetics in receptors: a compartment model analysis

    International Nuclear Information System (INIS)

    Tiwari, Anjani K.; Swatantra; Kaushik, A.; Mishra, A.K.

    2010-01-01

    Full text: Some authors reported that 18F kinetics might be useful for evaluation of neuroreceptors. We hypothesized that 18F kinetics may show some information about neuronal damage, and that each rate constant might have a statistically significant correlation with WO function. The purpose of this study was to investigate 99mTc-MIBI kinetics through a compartment model analysis. Each rate constant from compartment analysis was compared with WO, T1/2, and the (H/M) ratio in the early and delayed phases. Different animal models were studied. After an injection, dynamic planar imaging was performed on a dual-headed digital gamma camera system for 30 minutes. An ROI was drawn manually to assess the global kinetics of 18F. By using the time-activity curve (TAC) of the ROI as a response tissue function and the TAC of the aorta as an input function, we analysed 18F pharmacokinetics through a 2-compartment model. We defined k1 as the influx rate constant, k2 as the outflux rate constant and k3 as the specific uptake rate constant. And we calculated k1/k2 as distribution volume (Vd), k1k3/k2 as specific uptake (SU), and k1k3/(k2+k3) as clearance. For non-competitive affinity studies of PET, two modelling parameters, distribution volume (DV) and Bmax/Kd, are also calculated. Results: Statistically significant correlations were seen between k2 and T1/2 (P < 0.05), and the uptake of 18F at injection was related to its uptake at 30 minutes and 2 hours after the injection. Furthermore, some indexes had statistically significant correlations with DV and Bmax. These compartment model approaches may be useful for other related studies
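    A minimal sketch of the two-compartment analysis described above, assuming invented rate constants and a mono-exponential input function in place of the measured aorta TAC; the derived quantities follow the definitions given in the abstract (Vd = k1/k2, SU = k1k3/k2, clearance = k1k3/(k2+k3)).

```python
import numpy as np
from scipy.integrate import odeint

k1, k2, k3 = 0.6, 0.3, 0.1            # influx, outflux, specific-uptake rate constants (assumed)
cp = lambda t: np.exp(-0.2 * t)       # stand-in arterial input function (aorta TAC)

def rates(c, t):
    c1, c2 = c                        # free and specifically bound tracer
    return [k1 * cp(t) - (k2 + k3) * c1, k3 * c1]

t = np.linspace(0, 30, 301)           # minutes
c = odeint(rates, [0.0, 0.0], t)      # tissue time-activity curves

print("Vd =", k1 / k2)                # distribution volume
print("SU =", k1 * k3 / k2)           # specific uptake
print("clearance =", k1 * k3 / (k2 + k3))
```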

  9. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of recent decades, at the Chernobyl Nuclear Power Plant (ChNPP) (1986) and at the Fukushima Daiichi NPP (FDNPP) (2011), clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and along the waterways leading from it (the river-reservoir system after the Chernobyl accident; rivers and coastal marine waters after the Fukushima accident) has in both cases been one of the main sources of public concern about the accident consequences. The greater weight given to water contamination in public perception of the accident consequences, compared with the actual fraction of doses received via aquatic pathways relative to other dose components, is a specific feature of how environmental contamination is perceived. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accident radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over the past more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from new monitoring studies in Japan of the Abukuma River (the largest in the region, with a watershed area of 5400 km²), Kuchibuto River, Uta River, Niita River, Natsui River, and Same River, as well as studies on the specifics of 'water-sediment' 137Cs exchange in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied to more accurate prediction of water and sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and to comparative analyses of the efficiency of the post-accident measures to diminish the contamination of the water bodies.

  10. Interannual sedimentary effluxes of alkalinity in the southern North Sea: model results compared with summer observations

    Directory of Open Access Journals (Sweden)

    J. Pätsch

    2018-06-01

    Full Text Available For the sediments of the central and southern North Sea, different sources of alkalinity generation are quantified by a regional modelling system for the period 2000–2014. For this purpose a formerly global ocean sediment model coupled with a pelagic ecosystem model is adapted to shelf sea dynamics, where much larger turnover rates than in the open and deep ocean occur. To track alkalinity changes due to different nitrogen-related processes, the open ocean sediment model was extended by the state variables particulate organic nitrogen (PON) and ammonium. Directly measured alkalinity fluxes and those derived from Ra isotope flux observations from the sediment into the pelagic zone are reproduced by the model system, but calcite building and calcite dissolution are underestimated. Both fluxes cancel out in terms of alkalinity generation and consumption. Other simulated processes altering alkalinity in the sediment, like net sulfate reduction, denitrification, nitrification, and aerobic degradation, are quantified and compare well with corresponding fluxes derived from observations. Most of these fluxes exhibit a strong positive gradient from the open North Sea to the coast, where large rivers drain nutrients and organic matter. Atmospheric nitrogen deposition also shows a positive gradient from the open sea towards land and supports alkalinity generation in the sediments. An additional source of spatial variability is introduced by the use of a 3-D heterogeneous porosity field. Due to realistic porosity variations (0.3–0.5), the alkalinity fluxes vary by about 4%. The strongest impact on interannual variations of alkalinity fluxes is exhibited by the temporally varying nitrogen inputs from large rivers directly governing the nitrate concentrations in the coastal bottom water, thus providing the nitrate necessary for benthic denitrification. Over the time investigated, the alkalinity effluxes decrease due to the decrease in the nitrogen supply by the rivers.

  11. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    Science.gov (United States)

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
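    A minimal sketch of direct risk standardisation as described above, assuming per-patient risks from the casemix model are already estimated; the centre labels, risk distribution, and the five risk categories are illustrative choices, not the paper's.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "centre": rng.choice(["A", "B"], size=n),
    "risk": rng.beta(2, 8, size=n),            # per-patient risk from the casemix model
})
df["event"] = rng.random(n) < df["risk"]

df["risk_cat"] = pd.qcut(df["risk"], q=5, labels=False)       # define risk categories
weights = df["risk_cat"].value_counts(normalize=True).sort_index()

# Category-specific event rates per centre, then a common weighted sum
rates = df.groupby(["centre", "risk_cat"])["event"].mean().unstack()
drs = (rates * weights).sum(axis=1)
print(drs)   # directly risk-standardised rate for each centre
```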

  12. Comparative study of non-premixed and partially-premixed combustion simulations in a realistic Tay model combustor

    OpenAIRE

    Zhang, K.; Ghobadian, A.; Nouri, J. M.

    2017-01-01

    A comparative study of two combustion models, based on non-premixed and partially-premixed assumptions and using the overall models of the Zimont Turbulent Flame Speed Closure (ZTFSC) method and the Extended Coherent Flamelet Method (ECFM), is conducted through Reynolds stress turbulence modelling of the Tay model gas turbine combustor for the first time. The Tay model combustor retains all essential features of a realistic gas turbine combustor. It is seen that the non-premixed combustion model fa...

  13. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    Science.gov (United States)

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  14. Creating a Common Data Model for Comparative Effectiveness with the Observational Medical Outcomes Partnership.

    Science.gov (United States)

    FitzHenry, F; Resnic, F S; Robbins, S L; Denton, J; Nookala, L; Meeker, D; Ohno-Machado, L; Matheny, M E

    2015-01-01

    Adoption of a common data model across health systems is a key infrastructure requirement to allow large-scale distributed comparative effectiveness analyses. There are a growing number of common data models (CDM), such as the Mini-Sentinel and the Observational Medical Outcomes Partnership (OMOP) CDMs. In this case study, we describe the challenges and opportunities of a study-specific use of the OMOP CDM by two health systems and describe three comparative effectiveness use cases developed from the CDM. The project transformed two health system databases (using crosswalks provided) into the OMOP CDM. Cohorts were developed from the transformed CDMs for three comparative effectiveness use case examples. Administrative/billing, demographic, order history, medication, and laboratory data were included in the CDM transformation and cohort development rules. Record counts per person-month are presented for the eligible cohorts, highlighting differences between the civilian and federal datasets, e.g. the federal dataset had more outpatient visits per person-month (6.44 vs. 2.05 per person-month). The count of medications per person-month reflected the fact that one system's medications were extracted from orders while the other system had pharmacy fills and medication administration records. The federal system also had a higher prevalence of the conditions in all three use cases. Both systems required manual coding of some types of data to convert to the CDM. The data transformation to the CDM was time-consuming, and the resources required were substantial, beyond the requirements for collecting native source data. The need to manually code subsets of data limited the conversion. However, once the native data was converted to the CDM, both systems were then able to use the same queries to identify cohorts. Thus, the CDM minimized the effort to develop cohorts and analyze the results across the sites.
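    The per-person-month metrics reported above can be illustrated with a small sketch; it assumes a simplified CDM-style visit table and counts a person-month only when the person has at least one visit in that calendar month, which is one of several possible definitions.

```python
import pandas as pd

# Hypothetical CDM-style visit table
visits = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2],
    "visit_date": pd.to_datetime(["2015-01-03", "2015-01-20", "2015-02-02",
                                  "2015-01-10", "2015-03-05"]),
})

# Count person-months as distinct (person, calendar month) pairs with a visit
months = visits["visit_date"].dt.to_period("M")
person_months = visits.assign(month=months).groupby("person_id")["month"].nunique().sum()
print(f"visits per person-month: {len(visits) / person_months:.2f}")
```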

  15. Comparative and Evolutionary Analysis of Grass Pollen Allergens Using Brachypodium distachyon as a Model System.

    Directory of Open Access Journals (Sweden)

    Akanksha Sharma

    Full Text Available Comparative genomics has facilitated the mining of biological information from a genome sequence, through the detection of similarities and differences with genomes of closely or more distantly related species. By using such comparative approaches, knowledge can be transferred from model to non-model organisms and insights can be gained into the structural and evolutionary patterns of specific genes. In the absence of sequenced genomes for allergenic grasses, this study was aimed at understanding the structure, organisation and expression profiles of grass pollen allergens using the genomic data from Brachypodium distachyon, as it is phylogenetically related to the allergenic grasses. Combining genomic data with the anther RNA-Seq dataset revealed 24 pollen allergen genes belonging to eight allergen groups mapping on the five chromosomes in B. distachyon. High levels of anther-specific expression were observed for the 24 identified putative allergen-encoding genes in Brachypodium. The genomic evidence suggests that the gene encoding the group 5 allergen, the most potent trigger of hay fever and allergic asthma, originated as a pollen-specific orphan gene in a common grass ancestor of the Brachypodium and Triticeae clades. Gene structure analysis showed that the putative allergen-encoding genes in Brachypodium either lack or contain a reduced number of introns. Promoter analysis of the identified Brachypodium genes revealed the presence of specific cis-regulatory sequences likely responsible for high anther/pollen-specific expression. With the identification of putative allergen-encoding genes in Brachypodium, this study has also described some important plant gene families (e.g. the expansin superfamily, EF-hand family, profilins, etc.) for the first time in the model plant Brachypodium. Altogether, the present study provides new insights into the structural characterization and evolution of pollen allergens and will further serve as a base for their

  16. Comparing planar image quality of rotating slat and parallel hole collimation: influence of system modeling

    International Nuclear Information System (INIS)

    Holen, Roel van; Vandenberghe, Stefaan; Staelens, Steven; Lemahieu, Ignace

    2008-01-01

    The main remaining challenge for a gamma camera is to overcome the existing trade-off between collimator spatial resolution and system sensitivity. This problem, strongly limiting the performance of parallel hole collimated gamma cameras, can be overcome by applying new collimator designs such as rotating slat (RS) collimators, which have a much higher photon collection efficiency. The drawback of a RS collimated gamma camera is that, even for obtaining planar images, image reconstruction is needed, resulting in noise accumulation. However, nowadays iterative reconstruction techniques with accurate system modeling can provide better image quality. Because the impact of this modeling on image quality differs from one system to another, an objective assessment of the image quality obtained with a RS collimator is needed in comparison to classical projection images obtained using a parallel hole (PH) collimator. In this paper, a comparative study of image quality, achieved with system modeling, is presented. RS data are reconstructed to planar images using maximum likelihood expectation maximization (MLEM) with an accurate Monte Carlo derived system matrix while PH projections are deconvolved using a Monte Carlo derived point-spread function. Contrast-to-noise characteristics are used to show image quality for cold and hot spots of varying size. Influence of the object size and contrast is investigated using the optimal contrast-to-noise ratio (CNR_o). For a typical phantom setup, results show that cold spot imaging is slightly better for a PH collimator. For hot spot imaging, the CNR_o of the RS images is found to increase with increasing lesion diameter and lesion contrast while it decreases when background dimensions become larger. Only for very large background dimensions in combination with low contrast lesions, the use of a PH collimator could be beneficial for hot spot imaging. In all other cases, the RS collimator scores better. Finally, the simulation of a
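    The MLEM reconstruction mentioned above follows a simple multiplicative update. The sketch below applies it to a small random system matrix; a real RS reconstruction would use the Monte Carlo derived system matrix, and all dimensions and values here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((64, 32))              # system matrix: detector bins x image pixels (invented)
x_true = rng.random(32)
y = rng.poisson(A @ x_true * 50)      # noisy projection data

x = np.ones(32)                       # initial image estimate
sens = A.sum(axis=0)                  # sensitivity image, A^T 1
for _ in range(50):
    proj = A @ x                      # forward projection
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens   # multiplicative MLEM update
print(x.round(2))                     # converges toward 50 * x_true
```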

  17. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    Science.gov (United States)

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
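    A toy ABC-rejection sketch in the spirit of MECCA, assuming Brownian-motion trait evolution with the sample variance of tip traits as the summary statistic; the actual method couples a likelihood term with ABC-MCMC on a phylogeny, which this deliberately does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(4)
obs = rng.normal(0, np.sqrt(0.5), size=50)        # "observed" tip traits (true BM rate 0.5)
s_obs = obs.var()

accepted = []
for _ in range(20000):
    rate = rng.uniform(0.01, 2.0)                 # draw a BM rate from the prior
    sim = rng.normal(0, np.sqrt(rate), size=50)   # simulate tip traits under that rate
    if abs(sim.var() - s_obs) < 0.05:             # keep draws whose summary is close
        accepted.append(rate)
print("posterior mean rate:", round(float(np.mean(accepted)), 3))
```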

  18. sbml-diff: A Tool for Visually Comparing SBML Models in Synthetic Biology.

    Science.gov (United States)

    Scott-Brown, James; Papachristodoulou, Antonis

    2017-07-21

    We present sbml-diff, a tool that is able to read a model of a biochemical reaction network in SBML format and produce a range of diagrams showing different levels of detail. Each diagram type can be used to visualize a single model or to visually compare two or more models. The default view depicts species as ellipses, reactions as rectangles, rules as parallelograms, and events as diamonds. A cartoon view replaces the symbols used for reactions on the basis of the associated Systems Biology Ontology terms. An abstract view represents species as ellipses and draws edges between them to indicate whether a species increases or decreases the production or degradation of another species. sbml-diff is freely licensed under the three-clause BSD license and can be downloaded from https://github.com/jamesscottbrown/sbml-diff and used as a python package called from other software, as a free-standing command-line application, or online using the form at http://sysos.eng.ox.ac.uk/tebio/upload.

  19. Comparative study between 2 methods of mounting models in semiadjustable articulator for orthognathic surgery.

    Science.gov (United States)

    Mayrink, Gabriela; Sawazaki, Renato; Asprino, Luciana; de Moraes, Márcio; Fernandes Moreira, Roger William

    2011-11-01

    To compare the traditional method of mounting dental casts on a semiadjustable articulator with the new method suggested by Wolford and Galiano [1], analyzing the inclination of the maxillary occlusal plane in relation to the FHP. Two casts of 10 patients were obtained. One of them was used for mounting of models on a traditional articulator, by using a face bow transfer system, and the other one was used for mounting models on the Occlusal Plane Indicator platform (OPI), using the SAM articulator. After that, an analysis of the accuracy of mounting models was performed. The angle made by the occlusal plane and the FHP on the cephalogram should be equal to the angle between the occlusal plane and the upper member of the articulator. The measures were tabulated in Microsoft Excel® and calculated using a one-way analysis of variance. Statistically, the results did not reveal significant differences among the measures. OPI and face bow present similar results, but more studies are needed to verify its accuracy relative to the maxillary cant in OPI or to develop new techniques able to solve the disadvantages of each technique. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  20. The FluxCompensator: Making Radiative Transfer Models of Hydrodynamical Simulations Directly Comparable to Real Observations

    Energy Technology Data Exchange (ETDEWEB)

    Koepferl, Christine M.; Robitaille, Thomas P., E-mail: koepferl@usm.lmu.de [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany)

    2017-11-01

    When modeling astronomical objects throughout the universe, it is important to correctly treat the limitations of the data, for instance finite resolution and sensitivity. In order to simulate these effects, and to make radiative transfer models directly comparable to real observations, we have developed an open-source Python package called the FluxCompensator that enables the post-processing of the output of 3D Monte Carlo radiative transfer codes, such as Hyperion. With the FluxCompensator, realistic synthetic observations can be generated by modeling the effects of convolution with arbitrary point-spread functions, transmission curves, finite pixel resolution, noise, and reddening. Pipelines can be applied to compute synthetic observations that simulate observatories, such as the Spitzer Space Telescope or the Herschel Space Observatory. Additionally, this tool can read in existing observations (e.g., FITS format) and use the same settings for the synthetic observations. In this paper, we describe the package as well as present examples of such synthetic observations.

  1. The FluxCompensator: Making Radiative Transfer Models of Hydrodynamical Simulations Directly Comparable to Real Observations

    Science.gov (United States)

    Koepferl, Christine M.; Robitaille, Thomas P.

    2017-11-01

    When modeling astronomical objects throughout the universe, it is important to correctly treat the limitations of the data, for instance finite resolution and sensitivity. In order to simulate these effects, and to make radiative transfer models directly comparable to real observations, we have developed an open-source Python package called the FluxCompensator that enables the post-processing of the output of 3D Monte Carlo radiative transfer codes, such as Hyperion. With the FluxCompensator, realistic synthetic observations can be generated by modeling the effects of convolution with arbitrary point-spread functions, transmission curves, finite pixel resolution, noise, and reddening. Pipelines can be applied to compute synthetic observations that simulate observatories, such as the Spitzer Space Telescope or the Herschel Space Observatory. Additionally, this tool can read in existing observations (e.g., FITS format) and use the same settings for the synthetic observations. In this paper, we describe the package as well as present examples of such synthetic observations.

  2. Comparing the relative cost-effectiveness of diagnostic studies: a new model

    International Nuclear Information System (INIS)

    Patton, D.D.; Woolfenden, J.M.; Wellish, K.L.

    1986-01-01

    We have developed a model to compare the relative cost-effectiveness of two or more diagnostic tests. The model defines a cost-effectiveness ratio (CER) for a diagnostic test as the ratio of effective cost to base cost, only dollar costs considered. Effective cost includes base cost, cost of dealing with expected side effects, and wastage due to imperfect test performance. Test performance is measured by diagnostic utility (DU), a measure of test outcomes incorporating the decision-analytic variables sensitivity, specificity, equivocal fraction, disease probability, and outcome utility. Each of these factors affecting DU, and hence CER, is a local, not universal, value; these local values strongly affect CER, which in effect becomes a property of the local medical setting. When DU = +1 and there are no adverse effects, CER = 1 and the patient benefits from the test dollar for dollar. When there are adverse effects, effective cost exceeds base cost, and for an imperfect test DU < 1 and CER > 1. As DU approaches 0 (worthless test), CER approaches infinity (no effectiveness at any cost). If DU is negative, indicating that doing the test at all would be detrimental, CER also becomes negative. We conclude that the CER model is a useful preliminary method for ranking the relative cost-effectiveness of diagnostic tests, and that the comparisons would best be done using local values; different groups might well arrive at different rankings. (Author)
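    A minimal sketch of the CER arithmetic described above; since the paper's exact wastage formula is not given here, wastage is assumed proportional to (1 - DU), and the sign conventions follow the abstract.

```python
def cer(base_cost, side_effect_cost, du):
    """Cost-effectiveness ratio: effective cost / base cost (dollar costs only)."""
    if du == 0:
        return float("inf")                       # worthless test: no effectiveness at any cost
    wastage = base_cost * (1 - du) if du > 0 else base_cost   # assumed wastage term
    effective_cost = base_cost + side_effect_cost + wastage
    ratio = effective_cost / base_cost
    return ratio if du > 0 else -ratio            # detrimental test yields a negative CER

print(cer(100, 0, 1.0))    # perfect test, no side effects: CER = 1.0
print(cer(100, 20, 0.6))   # imperfect test with side effects: CER = 1.6
```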

  3. Determining the Best Arch/Garch Model and Comparing JKSE with Stock Index in Developed Countries

    Directory of Open Access Journals (Sweden)

    Kharisya Ayu Effendi

    2015-09-01

    Full Text Available The slow movement of Indonesian economic growth in 2014 was due to several factors: internally, the high interest rates in Indonesia, and externally, the expectation that the US would raise the fed rate this year. However, the JKSE shows a sharply increasing trend from the beginning of 2014 until the second quarter of 2015; it remains fluctuating, but insignificantly. The purpose of this research is to determine the best ARCH/GARCH model for the JKSE and for stock indices in developed countries (FTSE, Nasdaq and STI), and then to compare the JKSE with those stock indices. The results obtained in this study for the best ARCH/GARCH models are GARCH (1,2) for the JKSE, GARCH (2,2) for the FTSE, GARCH (1,1) for the NASDAQ, and GARCH (2,1) for the STI. The comparison of the JKSE with the FTSE, NASDAQ and STI shows that even though the JKSE fluctuates at moderate levels, its trend is upward, whereas the other stock indices fluctuate strongly and tend to have a downward trend.
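    A minimal sketch of the order-selection step, assuming the Python `arch` package and simulated placeholder returns in place of the actual index series; candidate GARCH(p,q) orders are ranked by BIC.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)
returns = rng.normal(0, 1, size=1000)    # placeholder for daily index returns (%)

best = None
for p in (1, 2):
    for q in (1, 2):
        res = arch_model(returns, vol="GARCH", p=p, q=q).fit(disp="off")
        if best is None or res.bic < best[0]:
            best = (res.bic, p, q)
print(f"best order by BIC: GARCH{best[1:]} (BIC = {best[0]:.1f})")
```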

  4. A comparative behavioural study of mechanical hypersensitivity in 2 pain models in rats and humans.

    Science.gov (United States)

    Reitz, Marie-Céline; Hrncic, Dragan; Treede, Rolf-Detlef; Caspani, Ombretta

    2016-06-01

    The assessment of pain sensitivity in humans has been standardized using quantitative sensory testing, whereas in animals mostly paw withdrawal thresholds to diverse stimuli are measured. This study directly compares tests used in quantitative sensory testing (pinpricks, pressure algometer) with tests used in animal studies (electronic von Frey test: evF), which we applied to the dorsal hind limbs of humans after high frequency stimulation and rats after tibial nerve transection. Both experimental models induce profound mechanical hypersensitivity. At baseline, humans and rats showed a similar sensitivity to evF with 0.2 mm diameter tips, but significant differences for other test stimuli (all P < 0.05). Mechanical hypersensitivity developed in both pain models (P < 0.05). These findings suggest that the two species can be compared directly for pain sensitivity, but probe size and shape should be standardized. Hypersensitivity to blunt pressure, the leading positive sensory sign after peripheral nerve injury in humans, is a novel finding in the tibial nerve transection model. By testing outside the primary zone of nerve damage (rat) or activation (humans), our methods likely involve effects of central sensitization in both species.

  5. Comparing Reasons for Quitting Substance Abuse with the Constructs of Behavioral Models: A Qualitative Study

    Directory of Open Access Journals (Sweden)

    Hamid Tavakoli Ghouchani

    2015-03-01

    Full Text Available Background and Objectives: The world population has reached over seven billion people. Of these, 230 million individuals abuse substances. Therefore, substance abuse prevention and treatment programs have received increasing attention during the past two decades. Understanding people’s motivations for quitting drug abuse is essential to the success of treatment. This study hence sought to identify major motivations for quitting and to compare them with the constructs of health education models. Materials and Methods: In the present study, qualitative content analysis was used to determine the main motivations for quitting substance abuse. Overall, 22 patients, physicians, and psychotherapists were selected from several addiction treatment clinics in Bojnord (Iran during 2014. Purposeful sampling method was applied and continued until data saturation was achieved. Data were collected through semi-structured, face-to-face interviews and field notes. All interviews were recorded and transcribed. Results: Content analysis revealed 33 sub-categories and nine categories including economic problems, drug-related concerns, individual problems, family and social problems, family expectations, attention to social status, beliefs about drug addiction, and valuing the quitting behavior. Accordingly, four themes, i.e. perceived threat, perceived barriers, attitude toward the behavior, and subjective norms, were extracted. Conclusion: Reasons for quitting substance abuse match the constructs of different behavioral models (e.g. the health belief model and the theory of planned behavior.

  6. Comparative analysis of different methods in mathematical modelling of the recuperative heat exchangers

    International Nuclear Information System (INIS)

    Debeljkovic, D.Lj.; Stevic, D.Z.; Simeunovic, G.V.; Misic, M.A.

    2015-01-01

    Heat exchangers are frequently used as constructive elements in various plants, and their dynamics are very important. Their operation is usually controlled by manipulating inlet fluid temperatures or mass flow rates. On the basis of the accepted and critically clarified assumptions, a linearized mathematical model of the cross-flow heat exchanger has been derived, taking into account the wall dynamics. The model is based on the fundamental law of energy conservation, covers all heat accumulation storages in the process, and leads to a set of partial differential equations (PDE), whose solution is not possible in closed form. In order to overcome the solution difficulties, this paper analyzes different methods for modeling the heat exchanger: an approach based on the Laplace transformation, approximation of the partial differential equations based on finite differences, the method of physical discretization, and the transport approach. Specifying the input temperatures and output variables, under constant initial conditions, the step transient responses have been simulated and presented in graphic form in order to compare the results of the four characteristic methods considered in this paper and analyze their practical significance. (author)
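    A minimal finite-difference sketch in the spirit of the methods compared above, for a single fluid stream exchanging heat with a wall held at constant temperature; the full cross-flow model couples both streams and the wall dynamics analogously, and all parameter values are invented.

```python
import numpy as np

n, L, v, dt = 50, 1.0, 0.5, 0.01   # cells, length (m), fluid velocity (m/s), time step (s)
dx = L / n
k = 0.8                            # wall-to-fluid heat-transfer term (1/s), assumed
T_wall, T_in = 80.0, 20.0
T = np.full(n, 20.0)               # initial fluid temperature profile (degC)

for step in range(2000):
    if step == 500:
        T_in = 40.0                # step change in inlet temperature
    upstream = np.concatenate(([T_in], T[:-1]))
    # dT/dt = -v dT/dx + k (T_wall - T), upwind-differenced in space
    T += dt * (-v * (T - upstream) / dx + k * (T_wall - T))
print(T.round(1))                  # spatial profile after the inlet step
```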

  7. Comparing Models GRM, Refraction Tomography and Neural Network to Analyze Shallow Landslide

    Directory of Open Access Journals (Sweden)

    Armstrong F. Sompotan

    2011-11-01

    Full Text Available Detailed investigations of landslides are essential to understand fundamental landslide mechanisms. The seismic refraction method has been proven a useful geophysical tool for investigating shallow landslides. The objective of this study is to introduce a new workflow using a neural network in analyzing seismic refraction data and to compare the result with two established methods: the general reciprocal method (GRM) and refraction tomography. The GRM is effective when the velocity structure is relatively simple and refractors are gently dipping. Refraction tomography is capable of modeling the complex velocity structures of landslides. Neural networks are particularly attractive in applications where numerical methods are time-consuming and complicated. Neural networks appear able to establish a relationship between an input and output space for mapping seismic velocity. Therefore, we made a preliminary attempt to evaluate the applicability of a neural network to determine the velocity and elevation of subsurface synthetic models corresponding to arrival times. The training and testing of the neural network were successfully accomplished using the synthetic data. Furthermore, we evaluated the neural network using observed data. The result of the evaluation indicates that the neural network can compute velocity and elevation corresponding to arrival times. The similarity of those models shows the success of the neural network as a new alternative in seismic refraction data interpretation.
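    A minimal sketch of the neural-network workflow, assuming synthetic training pairs generated by a simplified two-layer refraction travel-time formula (direct arrivals omitted); the study's network, data, and parameterization may differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n_models, v0 = 500, 400.0                       # training models; top-layer velocity (m/s)
velocity = rng.uniform(500, 3000, n_models)     # refractor velocity (m/s)
depth = rng.uniform(2, 30, n_models)            # refractor depth (m)
offsets = np.linspace(5, 120, 24)               # geophone offsets (m)

# Simplified head-wave travel times: t = x/v1 + 2h*sqrt(1/v0^2 - 1/v1^2)
times = (offsets / velocity[:, None]
         + 2 * depth[:, None] * np.sqrt(1 / v0**2 - 1 / velocity[:, None]**2))

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(times, np.column_stack([velocity, depth]))   # arrival times -> (velocity, depth)
print(net.predict(times[:2]).round(1))               # predicted vs. true below
print(np.column_stack([velocity, depth])[:2].round(1))
```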

  8. COMPARATIVE MODELLING AND LIGAND BINDING SITE PREDICTION OF A FAMILY 43 GLYCOSIDE HYDROLASE FROM Clostridium thermocellum

    Directory of Open Access Journals (Sweden)

    Shadab Ahmed

    2012-06-01

    Full Text Available The phylogenetic analysis of Clostridium thermocellum family 43 glycoside hydrolase (CtGH43) showed a close evolutionary relationship with carbohydrate binding family 6 proteins from C. cellulolyticum, C. papyrosolvens, C. cellulyticum, and A. cellulyticum. Comparative modeling of CtGH43 was performed based on crystal structures with PDB IDs 3C7F, 1YIF, 1YRZ, 2EXH and 1WL7. The structure having the lowest MODELLER objective function was selected. The three-dimensional structure revealed a typical 5-fold beta-propeller architecture. Energy minimization and validation of the predicted model with VERIFY 3D indicated acceptability of the proposed atomic structure. The Ramachandran plot analysis by RAMPAGE confirmed that family 43 glycoside hydrolase (CtGH43) contains little or negligible segments of helices. It also showed that out of 301 residues, 267 (89.3%) were in the most favoured region, 23 (7.7%) were in the allowed region and 9 (3.0%) were in the outlier region. IUPred analysis of CtGH43 showed no disordered region. Active site analysis showed the presence of two Asp and one Glu, assumed to form a catalytic triad. This study gives us information about the three-dimensional structure and reaffirms that it has the same core 5-fold beta-propeller architecture and so probably the same inverting mechanism of action, with the formation of the above-mentioned catalytic triad for the catalysis of polysaccharides.

  9. Comparative study of two models of combined pulmonary fibrosis and emphysema in mice.

    Science.gov (United States)

    Zhang, Wan-Guang; Wu, Si-Si; He, Li; Yang, Qun; Feng, Yi-Kuan; Chen, Yue-Tao; Zhen, Guo-Hua; Xu, Yong-Jian; Zhang, Zhen-Xiang; Zhao, Jian-Ping; Zhang, Hui-Lan

    2017-04-01

    Combined pulmonary fibrosis and emphysema (CPFE) is an "umbrella term" encompassing emphysema and pulmonary fibrosis, but its pathogenesis is not known. We established two models of CPFE in mice using tracheal instillation with bleomycin (BLM) or murine gammaherpesvirus 68 (MHV-68). Experimental mice were divided randomly into four groups: A (normal control, n=6), B (emphysema, n=6), C (emphysema+MHV-68, n=24), D (emphysema+BLM, n=6). Group C was subdivided into four groups: C1 (sacrificed on day 367, 7 days after tracheal instillation of MHV-68); C2 (day 374; 14 days); C3 (day 381; 21 days); C4 (day 388; 28 days). Conspicuous emphysema and interstitial fibrosis were observed in both the BLM and MHV-68 CPFE mouse models. However, BLM induced diffuse pulmonary interstitial fibrosis with severe, diffuse pulmonary inflammation; MHV-68 induced relatively modest inflammation and fibrosis, and the inflammation and fibrosis were not diffuse but instead located around bronchioles. Inflammation and fibrosis were detectable in the day-7 subgroup and reached a peak in the day-28 subgroup in the emphysema+MHV-68 group. Levels of macrophage chemoattractant protein-1, macrophage inflammatory protein-1α, interleukin-13, and transforming growth factor-β1 in bronchoalveolar lavage fluid were increased significantly in both models. The percentage of apoptotic type-2 lung epithelial cells was significantly higher, whereas all four types of cytokine and the number of macrophages were significantly lower, in the emphysema+MHV-68 group compared with the emphysema+BLM group. The different pathological changes between the BLM and MHV-68 mouse models demonstrated different pathology subtypes of CPFE: macrophage infiltration and apoptosis of type-II lung epithelial cells increased with increasing pathology score for pulmonary fibrosis. Copyright © 2017 Elsevier GmbH. All rights reserved.

  10. Comparing cycling world hour records, 1967-1996: modeling with empirical data.

    Science.gov (United States)

    Bassett, D R; Kyle, C R; Passfield, L; Broker, J P; Burke, E R

    1999-11-01

    The world hour record in cycling has increased dramatically in recent years. The present study was designed to compare the performances of former/current record holders, after adjusting for differences in aerodynamic equipment and altitude. Additionally, we sought to determine the ideal elevation for future hour record attempts. The first step was constructing a mathematical model to predict power requirements of track cycling. The model was based on empirical data from wind-tunnel tests, the relationship of body size to frontal surface area, and field power measurements using a crank dynamometer (SRM). The model agreed reasonably well with actual measurements of power output on elite cyclists. Subsequently, the effects of altitude on maximal aerobic power were estimated from published research studies of elite athletes. This information was combined with the power requirement equation to predict what each cyclist's power output would have been at sea level. This allowed us to estimate the distance that each rider could have covered using state-of-the-art equipment at sea level. According to these calculations, when racing under equivalent conditions, Rominger would be first, Boardman second, Merckx third, and Indurain fourth. In addition, about 60% of the increase in hour record distances since Bracke's record (1967) have come from advances in technology and 40% from physiological improvements. To break the current world hour record, field measurements and the model indicate that a cyclist would have to deliver over 440 W for 1 h at sea level, or correspondingly less at altitude. The optimal elevation for future hour record attempts is predicted to be about 2500 m for acclimatized riders and 2000 m for unacclimatized riders.
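    The power-requirement part of such a model reduces to an aerodynamic drag term plus rolling resistance. The sketch below assumes generic values for drag area and rolling resistance rather than the paper's wind-tunnel and SRM-calibrated ones; with sea-level air density it lands near the ~440 W figure quoted above.

```python
def track_power(v, rho=1.2, cda=0.19, crr=0.0025, mass=80.0, g=9.81):
    """Power (W) to hold speed v (m/s) on a track: aerodynamic drag + rolling resistance."""
    return 0.5 * rho * cda * v**3 + crr * mass * g * v

v = 55.0 / 3.6                      # 55 km/h in m/s
print(f"{track_power(v):.0f} W")    # ~437 W at sea-level air density
```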

  11. Comparative immunological evaluation of recombinant Salmonella Typhimurium strains expressing model antigens as live oral vaccines.

    Science.gov (United States)

    Zheng, Song-yue; Yu, Bin; Zhang, Ke; Chen, Min; Hua, Yan-Hong; Yuan, Shuofeng; Watt, Rory M; Zheng, Bo-Jian; Yuen, Kwok-Yung; Huang, Jian-Dong

    2012-09-26

    Despite the development of various systems to generate live recombinant Salmonella Typhimurium vaccine strains, little work has been performed to systematically evaluate and compare their relative immunogenicity. Such information would provide invaluable guidance for the future rational design of live recombinant Salmonella oral vaccines. To compare vaccine strains encoded with different antigen delivery and expression strategies, a series of recombinant Salmonella Typhimurium strains were constructed that expressed either the enhanced green fluorescent protein (EGFP) or a fragment of the hemagglutinin (HA) protein from the H5N1 influenza virus, as model antigens. The antigens were expressed from the chromosome, from high- or low-copy plasmids, or encoded on a eukaryotic expression plasmid. Antigens were targeted for expression in either the cytoplasm or the outer membrane. Combinations of strategies were employed to evaluate the efficacy of combined delivery/expression approaches. After investigating in vitro and in vivo antigen expression, growth and infection abilities, the immunogenicity of the constructed recombinant Salmonella strains was evaluated in mice. Using the soluble model antigen EGFP, our results indicated that vaccine strains with high and stable antigen expression exhibited high B cell responses, whilst eukaryotic expression or colonization with good construct stability was critical for T cell responses. For the insoluble model antigen HA, an outer membrane expression strategy induced better B cell and T cell responses than a cytoplasmic strategy. Most notably, the combination of two different expression strategies did not increase the immune response elicited. Through systematically evaluating and comparing the immunogenicity of the constructed recombinant Salmonella strains in mice, we identified their respective advantages and deleterious or synergistic effects. Different construction strategies were optimally required for soluble versus insoluble antigens.

  12. Comparing offshore wind farm wake observed from satellite SAR and wake model results

    Science.gov (United States)

    Bay Hasager, Charlotte

    2014-05-01

    Offshore winds can be observed from satellite synthetic aperture radar (SAR). In the FP7 EERA DTOC project, the European Energy Research Alliance project on Design Tools for Offshore Wind Farm Clusters, the focus is on mid- to far-field wind farm wakes. The more wind farms are constructed near other wind farms, the greater the potential loss in annual energy production in all neighbouring wind farms due to wind farm cluster effects. This loss depends, of course, on the prevailing wind directions and wind speed levels, the distance between the wind farms, and the wind turbine sizes and spacing. Some knowledge is available within wind farm arrays and in the near-field from various investigations. There are 58 grid-connected offshore wind farms in operation in the Northern European seas. Several of those are spaced near each other. There are several twin wind farms in operation, including Nysted-1 and Rødsand-2 in the Baltic Sea, and Horns Rev 1 and Horns Rev 2, Egmond aan Zee and Prinses Amalia, and Thornton 1 and Thornton 2, all in the North Sea. There are ambitious plans for constructing numerous wind farms - great clusters of offshore wind farms. The current investigation of offshore wind farms includes mapping from high-resolution satellite SAR of several of the offshore wind farms in operation in the North Sea. Around 20 images with wind farm wake cases have been retrieved and processed. The data are from the Canadian RADARSAT-1/-2 satellites. These observe in microwave C-band and have been used for ocean surface wind retrieval for several years. The satellite wind maps are valid at 10 m above sea level. The wakes are identified in the raw images as darker areas downwind of the wind farms. In the SAR-based wind maps the wake deficit is found as areas of lower winds downwind of the wind farms compared with parallel undisturbed flow in the flow direction. The wind direction is clearly visible from lee effects and wind streaks in the images. The wind farm wake cases

  13. Comparative Study of Three Data Assimilation Methods for Ice Sheet Model Initialisation

    Science.gov (United States)

    Mosbeux, Cyrille; Gillet-Chaulet, Fabien; Gagliardini, Olivier

    2015-04-01

    The current global warming has direct consequences for ice-sheet mass loss, contributing to sea level rise. This loss is generally driven by an acceleration of some coastal outlet glaciers, and reproducing these mechanisms is one of the major issues in ice-sheet and ice flow modelling. The construction of an initial state, as close as possible to current observations, is required as a prerequisite before producing any reliable projection of the evolution of ice sheets. For this step, inverse methods are often used to infer badly known or unknown parameters. For instance, the adjoint inverse method has been implemented and applied with success by different authors in different ice flow models in order to infer the basal drag [Schafer et al., 2012; Gillet-Chaulet et al., 2012; Morlighem et al., 2010]. Other data fields, such as ice surface and bedrock topography, are measurable with more or less uncertainty, but only locally along tracks, and must be interpolated onto the finer model grid. All these approximations lead to errors in the data elevation model and give rise to an ill-posed problem inducing non-physical anomalies in the flux divergence [Seroussi et al., 2011]. A solution to dissipate these flux divergence anomalies is to conduct a surface relaxation step at the expense of the accuracy of the modelled surface [Gillet-Chaulet et al., 2012]. Other solutions, based on the inversion of ice thickness and basal drag, were proposed [Perego et al., 2014; Pralong & Gudmundsson, 2011]. In this study, we create a twin experiment to compare three different assimilation algorithms based on inverse methods and nudging to constrain the bedrock friction and the bedrock elevation: (i) cyclic inversion of the friction parameter and bedrock topography using the adjoint method, (ii) cycles coupling inversion of the friction parameter using the adjoint method with nudging of the bedrock topography, (iii) one-step inversion of both parameters with the adjoint method. The three methods show a clear improvement in parameters

  14. Comparing the Performance of Commonly Available Digital Elevation Models in GIS-based Flood Simulation

    Science.gov (United States)

    Ybanez, R. L.; Lagmay, A. M. A.; David, C. P.

    2016-12-01

    With climatological hazards increasing globally, the Philippines is listed as one of the most vulnerable countries in the world due to its location in the Western Pacific. Flood hazard mapping and modelling is one of the responses by local government and research institutions to help prepare for and mitigate the effects of the flood hazards that constantly threaten towns and cities in floodplains during the 6-month rainy season. Available digital elevation models, which serve as the most important dataset used in 2D flood modelling, are limited in the Philippines, and testing is needed to determine which of the few would work best for flood hazard mapping and modelling. Two-dimensional GIS-based flood modelling with the flood-routing software FLO-2D was conducted using three available DEMs: the ASTER GDEM, the SRTM GDEM, and the locally available IfSAR DTM. With all other parameters (resolution, soil parameters, rainfall amount, and surface roughness) kept uniform, the three models were run over a 129-square-kilometre watershed with only the base DEM varying. The output flood hazard maps were compared on the basis of their flood distribution, extent, and depth. The ASTER and SRTM GDEMs contained too much error and noise, which manifested as dissipated and dissolved hazard areas in the lower watershed where clearly delineated flood hazards should be present. Noise in the two datasets is clearly visible as erratic mounds in the floodplain. The only dataset that produced a feasible flood hazard map is the IfSAR DTM, which delineates flood hazard areas clearly and properly. Despite the published resolution and accuracy of ASTER and SRTM, their use in GIS-based flood modelling would be unreliable. Although not as accessible, only IfSAR or better datasets should be used for creating secondary products from these base DEM datasets. For developing countries which are most prone to hazards, but with limited choices for basemaps used in hazards

  15. Comparative analysis of national and regional models of the silver economy in the European Union

    Directory of Open Access Journals (Sweden)

    Andrzej Klimczuk

    2016-08-01

    The approach to analysing population ageing and its impacts on the economy has evolved in recent years. There is increasing interest in the development and use of products and services related to gerontechnology as well as other social innovations that may be considered as central parts of the "silver economy." However, the concept of silver economy is still being formed and requires detailed research. This article proposes a typology of models of the silver economy in the European Union (EU at the national and regional levels. This typology was created by comparing the Active Ageing Index to the typology of varieties and cultures of capitalism and typology of the welfare states. Practical recommendations for institutions of the EU and directions for further research are discussed.

  16. Canadian and United States regulatory models compared: doses from atmospheric pathways

    International Nuclear Information System (INIS)

    Peterson, S-R.

    1997-01-01

    CANDU reactors sold offshore are licensed primarily to satisfy Canadian regulations. For radioactive emissions during normal operation, the Canadian Standards Association's CAN/CSA-N288.1-M87 is used. This standard provides guidelines and methodologies for calculating the rate of radionuclide release that exposes a member of the public to the annual dose limit. To calculate doses from air concentrations, either CSA-N288.1 or Regulatory Guide 1.109 of the United States Nuclear Regulatory Commission, which has already been used to license light-water reactors in these countries, may be used. When dose predictions from CSA-N288.1 are compared with those from the U.S. Regulatory Guides, the differences in projected doses raise questions about the predictions. This report explains the differences between the two models for ingestion, inhalation, external and immersion doses.

  17. Molecular modeling studies of structural properties of polyvinyl alcohol: a comparative study using INTERFACE force field.

    Science.gov (United States)

    Radosinski, Lukasz; Labus, Karolina

    2017-10-05

    Polyvinyl alcohol (PVA) is a material with a variety of applications in separation, biotechnology, and biomedicine. Using combined Monte Carlo and molecular dynamics techniques, we present an extensive comparative study of the second- and third-generation force fields Universal, COMPASS, COMPASS II, PCFF, and the newly developed INTERFACE, as applied to this system. In particular, we show that the INTERFACE force field makes it possible to compose a reliable atomistic model that reproduces the density change of the PVA matrix over a narrow temperature range (298-348 K) and to calculate the thermal expansion coefficient with reasonable accuracy. Thus, the INTERFACE force field may be used to predict mechanical properties of the PVA system, a scaffold for hydrogels, with much greater accuracy than the earlier approaches. Graphical abstract: Molecular dynamics and Monte Carlo studies indicate that it is possible to predict properties of PVA over a narrow temperature range by using the INTERFACE force field.

  18. COMPARATIVE INTERNATIONAL PERSPECTIVES ON MARKET-ORIENTED MODELS OF CORPORATE GOVERNANCE

    Directory of Open Access Journals (Sweden)

    Balaciu Diana

    2010-07-01

    The study of corporate governance requires not only knowledge of economic, financial, managerial and sociological mechanisms and norms, but must also incorporate an ethical dimension, while remaining aware of the demands of various stakeholders. Interest in good governance practice is very present in the company laws of many countries. National differences may lead to specific attributes derived from the meaning that is given to the role of competition and the market dispersion of capital. Based on research consisting of a critical and comparative perspective, the present contribution is dominated by qualitative and mixed methods. In conclusion, it can be said that a market-oriented corporate governance model, though not part of the European Union’s convergence process, may very well respond to the increasing importance of investors’ rights and to the gradual evolution of corporate responsibilities beyond the national context, with the aim of ensuring market liberalization.

  19. Comparing in Cylinder Pressure Modelling of a DI Diesel Engine Fuelled on Alternative Fuel Using Two Tabulated Chemistry Approaches.

    Science.gov (United States)

    Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi

    2014-01-01

    The present work presents a comparative simulation of a diesel engine fuelled on diesel fuel and biodiesel fuel. Two models based on tabulated chemistry were implemented for the simulation, and results were compared with experimental data obtained from a single-cylinder diesel engine. The first model is a single-zone model based on the Krieger and Borman combustion model, while the second is a two-zone model based on the Olikara and Borman combustion model. It was shown that both models can predict the engine's in-cylinder pressure well, as well as its overall performance. The second model showed better accuracy than the first, while the first model was easier to implement and faster to compute. It was found that the first method is better suited for real-time engine control and monitoring, while the second is better suited for engine design and emission prediction.
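
    To give a flavour of what a single-zone model involves (this is not the paper's tabulated-chemistry implementation), the sketch below integrates the first-law pressure equation dp/dθ = -γ(p/V)dV/dθ + ((γ-1)/V)dQ/dθ with a Wiebe heat-release curve; all geometry and combustion parameters are invented for illustration.

    ```python
    import math

    def cylinder_volume(theta, Vd=0.5e-3, rc=17.0, R=3.5):
        """Cylinder volume (m^3) vs. crank angle theta (rad from TDC),
        standard slider-crank kinematics; Vd displaced volume, rc compression
        ratio, R conrod-to-crank-radius ratio (illustrative values)."""
        Vc = Vd / (rc - 1.0)
        return Vc + 0.5 * Vd * (R + 1 - math.cos(theta)
                                - math.sqrt(R ** 2 - math.sin(theta) ** 2))

    def wiebe_burn_fraction(theta, soc, dur, a=5.0, m=2.0):
        """Cumulative mass fraction burned (Wiebe function)."""
        if theta < soc:
            return 0.0
        x = min((theta - soc) / dur, 1.0)
        return 1.0 - math.exp(-a * x ** m)

    def pressure_trace(p0=1.0e5, gamma=1.35, Qtot=1200.0,
                       soc=math.radians(-5), dur=math.radians(50), steps=720):
        """Forward-Euler integration of the single-zone first-law equation."""
        thetas = [math.radians(-180 + i * 360 / steps) for i in range(steps + 1)]
        p, trace = p0, []
        for th0, th1 in zip(thetas, thetas[1:]):
            V0, V1 = cylinder_volume(th0), cylinder_volume(th1)
            dQ = Qtot * (wiebe_burn_fraction(th1, soc, dur)
                         - wiebe_burn_fraction(th0, soc, dur))
            p += -gamma * p / V0 * (V1 - V0) + (gamma - 1.0) / V0 * dQ
            trace.append((math.degrees(th1), p))
        return trace

    peak = max(pressure_trace(), key=lambda tp: tp[1])
    print(f"peak pressure ~{peak[1]/1e5:.0f} bar at {peak[0]:.0f} deg crank angle")
    ```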

  20. Models for comparing lung-cancer risks in radon- and plutonium-exposed experimental animals

    International Nuclear Information System (INIS)

    Gilbert, E.S.; Cross, F.T.; Sanders, C.L.; Dagle, G.E.

    1990-10-01

    Epidemiologic studies of radon-exposed underground miners have provided the primary basis for estimating human lung-cancer risks resulting from radon exposure. These studies are sometimes used to estimate lung-cancer risks resulting from exposure to other alpha-emitters as well. The latter use, often referred to as the dosimetric approach, is based on the assumption that a specified dose to the lung produces the same lung-tumor risk regardless of the substance producing the dose. At Pacific Northwest Laboratory, experiments have been conducted in which laboratory rodents have been given inhalation exposures to radon and to plutonium (²³⁹PuO₂). These experiments offer a unique opportunity to compare risks, and thus to investigate the validity of the dosimetric approach. This comparison is made most effectively by modeling the age-specific risk as a function of dose in a way that is comparable to analyses of human data. Such modeling requires assumptions about whether tumors are the cause of death or whether they are found incidental to death from other causes. Results based on the assumption that tumors are fatal indicate that the radon and plutonium dose-response curves differ, with a linear function providing a good description of the radon data, and a pure quadratic function providing a good description of the plutonium data. However, results based on the assumption that tumors are incidental to death indicate that the dose-response curves for the two exposures are very similar, and thus support the dosimetric approach. 14 refs., 2 figs., 6 tabs.

  1. Does PEEK/HA Enhance Bone Formation Compared With PEEK in a Sheep Cervical Fusion Model?

    Science.gov (United States)

    Walsh, William R; Pelletier, Matthew H; Bertollo, Nicky; Christou, Chris; Tan, Chris

    2016-11-01

    Polyetheretherketone (PEEK) has a wide range of clinical applications but does not directly bond to bone. Bulk incorporation of osteoconductive materials including hydroxyapatite (HA) into the PEEK matrix is a potential solution to address the formation of a fibrous tissue layer between PEEK and bone and has not been tested. Using in vivo ovine animal models, we asked: (1) Does PEEK-HA improve cortical and cancellous bone ongrowth compared with PEEK? (2) Does PEEK-HA improve bone ongrowth and fusion outcome in a more challenging functional ovine cervical fusion model? The in vivo responses of PEEK-HA Enhanced and PEEK-OPTIMA® Natural were evaluated for bone ongrowth in the form of dowels implanted in the cancellous and cortical bone of adult sheep and examined at 4 and 12 weeks, as well as interbody cervical fusion at 6, 12, and 26 weeks. The bone-implant interface was evaluated with radiographic and histologic endpoints for a qualitative assessment of direct bone contact or an intervening fibrous tissue layer. Gamma-irradiated cortical allograft cages were evaluated as well. Incorporating HA into the PEEK matrix resulted in more direct bone apposition, as opposed to the fibrous tissue interface with PEEK alone, in the bone ongrowth as well as the interbody cervical fusions. No adverse reactions were found at the implant-bone interface for either material. Radiography and histology revealed resorption and fracture of the allograft devices in vivo. Incorporating HA into PEEK provides a more favorable environment than PEEK alone for bone ongrowth. Cervical fusion was improved with PEEK-HA compared with PEEK alone as well as with allograft bone interbody devices. Improving the bone-implant interface with a PEEK device by incorporating HA may improve interbody fusion results and requires further clinical studies.

  2. Comparing the radiosensitivity of cervical and thoracic spinal cord using the relative seriality model

    International Nuclear Information System (INIS)

    Adamus-Gorka, M.; Lind, B.K.; Brahme, A.

    2003-01-01

    Spinal cord is one of the most important normal tissues to be spared during radiation therapy of cancer. This organ is known for its strongly serial character and its high sensitivity to radiation. In order to compare the sensitivity of different parts of the spinal cord, the early data (1970s) for radiation myelopathy available in the literature can be used. In the present study the relative seriality model (Kallman et al. 1992) has been fitted to two different sets of clinical data for spinal cord irradiation: radiation myelitis of the cervical spinal cord after treating 248 patients for malignant disease of the head and neck (Abbatucci et al. 1978), and radiation myelitis of the thoracic spinal cord after radiation treatment of 43 patients with lung carcinoma (Reinhold et al. 1976). The maximum likelihood method was applied for the fitting, and the corresponding parameters, together with their 68% confidence intervals, were calculated for each of the datasets. The alpha-beta ratio for the thoracic data was also obtained. On the basis of the present study the following conclusions can be drawn: 1. radiation myelopathy is a strongly serial endpoint; 2. there appear to be differences in radiosensitivity between the cervical and thoracic regions of the spinal cord; 3. the thoracic spinal cord revealed a very serial dose-response characteristic, while cervical myelopathy seems to be a somewhat less serial endpoint; 4. the dose-response curve is much steeper for myelopathy of the cervical spinal cord, due to the much higher gamma value for this region. This work compares the fitting of an NTCP model to the cervical and thoracic regions of the spinal cord and shows quite different responses. In the future, more data should be tested for a better understanding of the mechanism of spinal cord sensitivity to radiation.
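
    For reference, the relative seriality model combines a Poisson dose-response for uniform irradiation with a seriality parameter s (s near 1 for serial organs, near 0 for parallel ones). A minimal sketch of the standard formulation follows; the parameter values and dose-volume histogram are illustrative, not the values fitted in this study.

    ```python
    import math

    def poisson_response(D, D50, gamma):
        """Poisson dose-response for uniform irradiation of the whole organ:
        P(D) = 2^(-exp(e*gamma*(1 - D/D50)))."""
        return 2.0 ** (-math.exp(math.e * gamma * (1.0 - D / D50)))

    def relative_seriality_ntcp(dose_bins, D50, gamma, s):
        """NTCP for an inhomogeneous dose distribution (Kallman-type model).

        dose_bins: list of (dose_Gy, fractional_volume) pairs from a DVH.
        s: relative seriality parameter.
        """
        prod = 1.0
        for D, dv in dose_bins:
            P = poisson_response(D, D50, gamma)
            prod *= (1.0 - P ** s) ** dv
        return (1.0 - prod) ** (1.0 / s)

    # Illustrative example: a cord with 10% of its length at 45 Gy, rest at 20 Gy.
    dvh = [(45.0, 0.10), (20.0, 0.90)]
    print(f"NTCP = {relative_seriality_ntcp(dvh, D50=68.6, gamma=1.9, s=3.7):.4f}")
    ```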

  3. Culture-related service expectations: a comparative study using the Kano model.

    Science.gov (United States)

    Hejaili, Fayez F; Assad, Lina; Shaheen, Faissal A; Moussa, Dujana H; Karkar, Ayman; AlRukhaimi, Mona; Barhamein, Majdah; Al Suwida, Abdulkareem; Al Alhejaili, Faris F; Al Harbi, Ali S; Al Homrany, Mohamed; Attar, Bisher; Al-Sayyari, Abdulla A

    2009-01-01

    To compare service expectations between Arab and Austrian patients. We used a Kano model-based questionnaire with 20 service attributes of relevance to the dialysis patient. We analyzed 530, 172, 60, and 68 responses from Saudi, Austrian, Syrian, and UAE patients, respectively. We compared the customer satisfaction coefficient and the frequencies of response categories ("must-be," "attractive," "one-dimensional," and "indifferent") for each of the 20 service attributes and in each of the 3 national groups of patients. We also investigated whether any differences seen were related to sex, age, literacy rate, or duration on dialysis. We observed higher satisfaction coefficients and more "one-dimensional" responses among Arab patients, and higher dissatisfaction coefficients and more "must-be" and "attractive" responses among Austrian patients. These differences were not related to age or duration on dialysis but were related to literacy rate. We speculate that these discrepancies between Austrian and Arab patients might be related to underdeveloped sophistication in market competitive forces and to cultural influences.
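
    The satisfaction and dissatisfaction coefficients referred to here are conventionally computed from the Kano category counts per attribute (Berger-style formulas); the sketch below shows that mechanics on hypothetical responses, assuming reverse/questionable answers have already been filtered out.

    ```python
    from collections import Counter

    def kano_coefficients(responses):
        """Satisfaction/dissatisfaction coefficients for one service attribute.

        responses: iterable of Kano categories per respondent:
          'A' attractive, 'O' one-dimensional, 'M' must-be, 'I' indifferent.
        """
        c = Counter(responses)
        total = c['A'] + c['O'] + c['M'] + c['I']
        cs = (c['A'] + c['O']) / total       # customer satisfaction coefficient
        ds = -(c['O'] + c['M']) / total      # dissatisfaction coefficient
        return cs, ds

    # Hypothetical classification of one attribute by 20 patients.
    sample = ['O'] * 9 + ['A'] * 4 + ['M'] * 5 + ['I'] * 2
    cs, ds = kano_coefficients(sample)
    print(f"CS = {cs:.2f}, DS = {ds:.2f}")
    ```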

  4. Hepatic differentiation of human iPSCs in different 3D models: A comparative study.

    Science.gov (United States)

    Meier, Florian; Freyer, Nora; Brzeszczynska, Joanna; Knöspel, Fanny; Armstrong, Lyle; Lako, Majlinda; Greuel, Selina; Damm, Georg; Ludwig-Schwellinger, Eva; Deschl, Ulrich; Ross, James A; Beilmann, Mario; Zeilinger, Katrin

    2017-12-01

    Human induced pluripotent stem cells (hiPSCs) are a promising source from which to derive distinct somatic cell types for in vitro or clinical use. Existing protocols for hepatic differentiation of hiPSCs are primarily based on 2D cultivation of the cells. In the present study, the authors investigated the generation of hiPSC-derived hepatocyte-like cells using two different 3D culture systems: a 3D scaffold-free microspheroid culture system and a 3D hollow-fiber perfusion bioreactor. The differentiation outcome in these 3D systems was compared with that in conventional 2D cultures, using primary human hepatocytes as a control. The evaluation was based on specific mRNA expression, protein secretion, antigen expression and metabolic activity. The expression of α-fetoprotein was lower, while cytochrome P450 1A2 or 3A4 activities were higher, in the 3D culture systems as compared with the 2D differentiation system. Cells differentiated in the 3D bioreactor showed an increased expression of albumin and hepatocyte nuclear factor 4α, as well as secretion of α-1-antitrypsin, as compared with the 2D differentiation system, suggesting a higher degree of maturation. In contrast, the 3D scaffold-free microspheroid culture provides an easy and robust method to generate spheroids of a defined size for screening applications, while the bioreactor culture model provides an instrument for complex investigations under physiological-like conditions. In conclusion, the present study introduces two 3D culture systems for stem cell-derived hepatic differentiation, each demonstrating advantages for individual applications as well as benefits in comparison with 2D cultures.

  5. Forecasting production of fossil fuel sources in Turkey using a comparative regression and ARIMA model

    International Nuclear Information System (INIS)

    Ediger, Volkan S.; Akar, Sertac; Ugurlu, Berkin

    2006-01-01

    This study aims at forecasting the most plausible curve for domestic fossil fuel production of Turkey, to help policy makers develop policy implications for the rapidly growing dependency on imported fossil fuels. The fossil fuel dependency problem is international in scope and context, and Turkey is a typical example of the emerging energy markets of the developing world. We developed a decision support system for forecasting fossil fuel production by applying regression, ARIMA and SARIMA methods to the historical data from 1950 to 2003 in a comparative manner. The method integrates each model by using decision parameters related to goodness-of-fit and confidence interval, behavior of the curve, and reserves. Different forecasting models are proposed for different fossil fuel types. The best result is obtained for oil, since the reserve classifications used for it are much better defined than for the others. Our findings show that the fossil fuel production peak has already been reached, indicating that the total fossil fuel production of the country will diminish and theoretically end in 2038. However, production is expected to end in 2019 for hard coal, in 2024 for natural gas, in 2029 for oil and in 2031 for asphaltite. The gap between fossil fuel consumption and production is growing enormously, and by 2030 it will reach approximately twice its 2000 level.
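
    A minimal sketch of the ARIMA part of such a decision support system is shown below: fit an annual production series and project it forward with confidence intervals (two of the decision parameters mentioned). The series is synthetic, not the study's data, and the model order (1,1,1) is an assumption for illustration.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic bell-shaped production history, 1950-2003 (e.g., Mtoe per year).
    years = pd.date_range("1950", "2003", freq="YS")
    t = np.arange(len(years))
    rng = np.random.default_rng(0)
    production = 10 * np.exp(-((t - 35) / 20.0) ** 2) + rng.normal(0, 0.3, len(t))
    series = pd.Series(production, index=years)

    # Fit and project ~35 years ahead (to around 2038).
    model = ARIMA(series, order=(1, 1, 1)).fit()
    forecast = model.get_forecast(steps=35)
    print(forecast.predicted_mean.tail())
    print(forecast.conf_int().tail())  # confidence intervals for the decision step
    ```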

  6. Comparing models of borderline personality disorder: Mothers' experience, self-protective strategies, and dispositional representations.

    Science.gov (United States)

    Crittenden, Patricia M; Newman, Louise

    2010-07-01

    This study compared aspects of the functioning of mothers with borderline personality disorder (BPD) to those of mothers without psychiatric disorder using two different conceptualizations of attachment theory. The Adult Attachment Interviews (AAIs) of 32 mothers were classified using both the Main and Goldwyn method (M&G) and the Dynamic-Maturational Model method (DMM). We found that mothers with BPD recalled more danger, reported more negative effects of danger, and gave evidence of more unresolved psychological trauma tied to danger than other mothers. We also found that the DMM classifications discriminated between the two groups of mothers better than the M&G classifications. Using the DMM method, the AAIs of BPD mothers were more complex, extreme, and had more indicators of rapid shifts in arousal than those of other mothers. Representations drawn from the AAI, using either classificatory method, did not match the representations of the mother's child drawn from the Working Model of the Child Interview; mothers with very anxious DMM classifications were paired with secure-balanced child representations. We propose that the DMM offers greater clinical utility, conceptual coherence, empirical validity, and coder reliability than the M&G.

  7. Comparative Study of Lectin Domains in Model Species: New Insights into Evolutionary Dynamics

    Directory of Open Access Journals (Sweden)

    Sofie Van Holle

    2017-05-01

    Lectins are present throughout the plant kingdom and are reported to be involved in diverse biological processes. In this study, we provide a comparative analysis of the lectin families from model species in a phylogenetic framework. The analysis focuses on the different plant lectin domains identified in five representative core angiosperm genomes (Arabidopsis thaliana, Glycine max, Cucumis sativus, Oryza sativa ssp. japonica and Oryza sativa ssp. indica). The genomes were screened for genes encoding lectin domains using a combination of Basic Local Alignment Search Tool (BLAST) searches, hidden Markov models, and InterProScan analysis. Additionally, phylogenetic relationships were investigated by constructing maximum likelihood phylogenetic trees. The results demonstrate that the majority of the lectin families are present in each of the species under study. Domain organization analysis showed that most identified proteins are multi-domain proteins, owing to the modular rearrangement of protein domains during evolution. Most of these multi-domain proteins are widespread, while others display a lineage-specific distribution. Furthermore, the phylogenetic analyses reveal that some lectin families evolved to be similar to the phylogeny of the plant species, while others share a closer evolutionary history based on the corresponding protein domain architecture. Our results yield insights into the evolutionary relationships and functional divergence of plant lectins.

  8. Comparative modeling of biological nutrient removal from landfill leachate using a circulating fluidized bed bioreactor (CFBBR).

    Science.gov (United States)

    Eldyasti, Ahmed; Andalib, Mehran; Hafez, Hisham; Nakhla, George; Zhu, Jesse

    2011-03-15

    Steady state operational data from a pilot-scale circulating fluidized bed bioreactor (CFBBR) during biological treatment of landfill leachate, at empty bed contact times (EBCTs) of 0.49 and 0.41 d and volumetric nutrient loading rates of 2.2-2.6 kg COD/(m³·d), 0.7-0.8 kg N/(m³·d), and 0.014-0.016 kg P/(m³·d), was used to calibrate and compare process models developed in BioWin® and AQUIFAS®. BioWin® and AQUIFAS® were both capable of predicting most of the performance parameters, such as effluent TKN, NH₄-N, NO₃-N, TP, PO₄-P, TSS, and VSS, with an average percentage error (APE) of 0-20%. BioWin® underpredicted the effluent BOD and SBOD values for various runs by 80%, while AQUIFAS® predicted effluent BOD and SBOD with an APE of 50%. Although both calibrated models confirmed the advantages of the CFBBR technology in treating leachate at high volumetric loading and low biomass yields, due to the long solids retention time (SRT), both BioWin® and AQUIFAS® predicted the total biomass and SRT of the CFBBR based on active biomass only, whereas in the CFBBR runs both active and inactive biomass accumulated. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Comparative evaluation of two models of UPQC for suitable interface to enhance power quality

    Energy Technology Data Exchange (ETDEWEB)

    Basu, Malabika [Department of Electrical Engineering, Dublin Institute of Technology, Kevin Street, Dublin 8 (Ireland); Das, Shyama P.; Dubey, Gopal K. [Department of Electrical Engineering, Indian Institute of Technology, Kanpur (India)

    2007-05-15

    The majority of dispersed generation from renewable energy sources is connected to the grid through power electronic interfaces, which introduce additional harmonics into the distribution system. Research is being carried out to integrate active filtering with the specific interface such that a common power quality (PQ) platform can be achieved. As a generalized solution, a unified power quality conditioner (UPQC) could be the most comprehensive PQ protecting device for sensitive non-linear loads, which require a quality input supply. Also, load current harmonic isolation needs to be ensured to maintain the quality of the supply current. The present paper describes two control-scheme models for a UPQC for enhancing the PQ of sensitive non-linear loads. Based on two different voltage compensation strategies, two control schemes have been designed, termed UPQC-Q and UPQC-P. A comparative loading analysis has developed useful insight into the typical applications of the two control schemes. The effectiveness of the two control schemes is verified through extensive simulation using the software SABER. As the power circuit configuration of the UPQC remains the same for both models, with only the control scheme modified, the utility of the UPQC can be optimized depending upon the application requirements. (author)

  10. Creating, generating and comparing random network models with NetworkRandomizer.

    Science.gov (United States)

    Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni

    2016-01-01

    Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest, and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app for the Cytoscape platform which aims at creating randomised networks and randomising existing, real networks. Since there is a lack of tools that allow such operations to be performed, our app aims at enabling researchers to exploit different, well-known random network models that can be used as a benchmark for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardised methodology for the validation of the results in the context of the Cytoscape platform.
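
    The validation idea described here, comparing a real network's attributes against randomly generated counterparts, can be sketched outside Cytoscape as well. The example below (using networkx rather than the app itself, so it is only an analogue of the app's workflow) scores a clustering coefficient against degree-preserving randomizations.

    ```python
    import networkx as nx
    import numpy as np

    def metric_zscore(G, metric=nx.average_clustering, n_random=100, seed=0):
        """Z-score of a network metric against degree-preserving null models."""
        rng = np.random.default_rng(seed)
        observed = metric(G)
        randomized = []
        for _ in range(n_random):
            R = G.copy()
            # Rewire while preserving the degree sequence.
            nx.double_edge_swap(R, nswap=2 * R.number_of_edges(),
                                max_tries=20 * R.number_of_edges(),
                                seed=int(rng.integers(1 << 31)))
            randomized.append(metric(R))
        mu, sigma = np.mean(randomized), np.std(randomized)
        return (observed - mu) / sigma

    G = nx.karate_club_graph()  # stand-in for a real biological network
    print(f"clustering z-score vs. degree-preserving nulls: {metric_zscore(G):.2f}")
    ```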

  11. Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Charalambos Koukouvaos

    2014-01-01

    Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are quite complex tasks and thus difficult to carry out. In order to cope with such problems, appropriate software tools have been developed, either as standalone products or as parts of general-purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of such software tools can be extremely helpful for the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely Simulink and LabVIEW, with regard to their application to photovoltaic systems.
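
    At the core of most such PV simulations is the single-diode cell model, I = Iph - I0·(exp((V + I·Rs)/(n·Vt)) - 1) - (V + I·Rs)/Rsh. A standalone sketch with illustrative parameters (not tied to Simulink or LabVIEW, and not the paper's configuration) is shown below, solving for the current by damped fixed-point iteration.

    ```python
    import math

    def pv_current(V, Iph=5.0, I0=1e-9, n=1.3, Rs=0.01, Rsh=200.0, T=298.15):
        """Terminal current (A) at voltage V for a single-diode PV cell."""
        k, q = 1.380649e-23, 1.602176634e-19
        Vt = k * T / q  # thermal voltage
        I = Iph         # initial guess
        for _ in range(200):  # damped fixed-point iteration
            f = (Iph - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1)
                 - (V + I * Rs) / Rsh)
            I = 0.5 * I + 0.5 * f
        return I

    # Sweep the I-V curve and report the maximum power point.
    points = [(v / 100.0, pv_current(v / 100.0)) for v in range(0, 75)]
    vmp, imp = max(points, key=lambda p: p[0] * p[1])
    print(f"MPP ~ {vmp:.2f} V, {imp:.2f} A, {vmp * imp:.2f} W")
    ```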

  12. Analytic model comparing the cost utility of TVT versus duloxetine in women with urinary stress incontinence.

    Science.gov (United States)

    Jacklin, Paul; Duckett, Jonathan; Renganathan, Arasee

    2010-08-01

    The purpose of this study was to assess the cost utility of duloxetine versus tension-free vaginal tape (TVT) as a second-line treatment for urinary stress incontinence. A Markov model was used to compare the cost utility over a 2-year follow-up period. Quality-adjusted life year (QALY) estimation was performed by assuming a disutility rate of 0.05. Under base-case assumptions, although duloxetine was the cheaper option, TVT gave a considerably higher QALY gain. When a longer follow-up period was considered, TVT had an incremental cost-effectiveness ratio (ICER) of £7,710 ($12,651) at 10 years. If the QALY gain from cure were 0.09, the ICERs for duloxetine and TVT would both fall within the indicative National Institute for Health and Clinical Excellence willingness-to-pay threshold at 2 years, but TVT would be the cost-effective option, having extended dominance over duloxetine. This model suggests that TVT is a cost-effective treatment for stress incontinence.
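
    The mechanics of such a Markov cost-utility comparison can be sketched with a two-state cohort model. All transition probabilities, costs, and utilities below are invented placeholders, not the paper's inputs; they only show how costs and QALYs accumulate per cycle and how the ICER is formed.

    ```python
    def run_markov(p_cure, cost_initial, cost_cycle_incont,
                   cycles=2, disutility=0.05):
        """Two states ('cured', 'incontinent'), 1-year cycles, no discounting."""
        cured, incont = 0.0, 1.0   # whole cohort starts incontinent
        cost, qaly = cost_initial, 0.0
        for _ in range(cycles):
            cured, incont = cured + incont * p_cure, incont * (1 - p_cure)
            cost += incont * cost_cycle_incont
            qaly += cured * 1.0 + incont * (1.0 - disutility)
        return cost, qaly

    # Hypothetical inputs: surgery is dearer up front but cures more often.
    cost_tvt, qaly_tvt = run_markov(p_cure=0.8, cost_initial=2500.0,
                                    cost_cycle_incont=100.0)
    cost_dul, qaly_dul = run_markov(p_cure=0.4, cost_initial=400.0,
                                    cost_cycle_incont=450.0)
    icer = (cost_tvt - cost_dul) / (qaly_tvt - qaly_dul)
    print(f"ICER of TVT vs duloxetine: {icer:.0f} per QALY gained")
    ```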

  13. A Comparative analysis for control rod drop accident in RETRAN DNB and CETOP DNB Model

    International Nuclear Information System (INIS)

    Yang, Chang Keun; Kim, Yo Han; Ha, Sang Jun

    2009-01-01

    In Korea, the nuclear industries, such as the fuel manufacturer, the architect-engineer and the utility, have been using the methodologies and codes of vendors such as Westinghouse (WH) and Combustion Engineering for the safety analyses of nuclear power plants. Consequently, the industries have had to maintain many organizations to operate the methodologies and maintain the codes for each vendor. This makes it difficult to improve the efficiency of safety analyses and the related technology. Hence the need arose for alternative methodologies and code systems applicable to non-LOCA, beyond-design-basis accident and performance analyses for all types of pressurized water reactor (PWR). For this reason, the Korea Electric Power Research Institute (KEPRI) decided to develop a new safety analysis code system for Korea Standard Nuclear Power Plants. As the first requirement, best-estimate codes were required for a wider application area and realistic prediction of plant behavior, with various and sophisticated functions. After investigating a few candidates, RETRAN-3D was chosen as the system analysis code. As part of the feasibility assessment of the methodology and code system, the CRD (Control Rod Drop) accident, one of the non-LOCA events, was selected for Uljin units 3 and 4 and Yonggwang units 1 and 2 to verify the feasibility of the methodology using RETRAN-3D. In this paper, the RETRAN DNB model and the CETOP DNB model are analyzed using a comparative method.

  14. The highs and lows of cloud radiative feedback: Comparing observational data and CMIP5 models

    Science.gov (United States)

    Jenney, A.; Randall, D. A.

    2014-12-01

    Clouds play a complex role in the climate system and remain one of the more difficult aspects of the future climate to predict. Over subtropical eastern ocean basins, particularly next to California, Peru, and Southwest Africa, low marine stratocumulus clouds (MSC) help to reduce the amount of solar radiation that reaches the surface by reflecting incident sunlight. The climate feedback associated with these clouds is thought to be positive. This project looks at CMIP5 models and compares them to observational data from CERES and ERA-Interim to find observational evidence and model agreement for low marine stratocumulus cloud feedback. Although current evidence suggests that the low cloud feedback is positive (IPCC, 2013), an analysis of the simulated relationship between July lower tropospheric stability (LTS) and shortwave cloud forcing in MSC regions suggests that this feedback is not due to changes in LTS. Reference: IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.

  15. A Comparative Study on Satellite- and Model-Based Crop Phenology in West Africa

    Directory of Open Access Journals (Sweden)

    Elodie Vintrou

    2014-02-01

    Crop phenology is essential for evaluating crop production in the food-insecure regions of West Africa. The aim of the paper is to study whether satellite observations of plant phenology are consistent with ground knowledge of crop cycles as expressed in agro-simulations. We used phenological variables from a MODIS Land Cover Dynamics (MCD12Q2) product and examined whether they reproduced the spatio-temporal variability of crop phenological stages in Southern Mali. Furthermore, a validated cereal crop growth model for this region, SARRA-H (System for Regional Analysis of Agro-Climatic Risks), provided precise agronomic information. Remotely sensed green-up, maturity, senescence and dormancy MODIS dates were extracted for areas previously identified as crops and were compared with simulated leaf area index (LAI) temporal profiles generated using the SARRA-H crop model, which considered the main cropping practices. We studied both spatial (eight sites throughout South Mali during 2007) and temporal (two sites from 2002 to 2008) differences between simulated crop cycles and determined how the differences were reflected in the satellite-derived phenometrics. The spatial comparison of the phenological indicator observations and simulations showed mainly that (i) the satellite-derived start-of-season (SOS) was detected approximately 30 days before the model-derived SOS; and (ii) the satellite-derived end-of-season (EOS) was typically detected 40 days after the model-derived EOS. Studying the inter-annual difference, we verified that the mean bias was globally consistent for different climatic conditions. Therefore, the land cover dynamics derived from the MODIS time series can reproduce the spatial and temporal variability of different start-of-season and end-of-season crop species. In particular, we recommend simultaneously using start-of-season phenometrics with crop models for yield forecasting to complement commonly used climate data and provide a better
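
    A simplified way to derive start- and end-of-season phenometrics from a seasonal profile (satellite NDVI or simulated LAI) is threshold crossing at a fixed fraction of the seasonal amplitude. Note that MCD12Q2 itself uses the curvature of fitted logistic curves, so the sketch below, on a synthetic LAI curve, is only a stand-in for the real retrieval.

    ```python
    import numpy as np

    def season_start_end(values, doy, frac=0.5):
        """Return (SOS, EOS) as days of year where the profile crosses
        `frac` of the seasonal amplitude (first/last crossing)."""
        vmin, vmax = values.min(), values.max()
        thresh = vmin + frac * (vmax - vmin)
        idx = np.where(values >= thresh)[0]
        return doy[idx[0]], doy[idx[-1]]

    doy = np.arange(1, 366, 8)                      # 8-day composites
    lai = np.exp(-0.5 * ((doy - 230) / 40.0) ** 2)  # synthetic crop LAI curve
    sos, eos = season_start_end(lai, doy)
    print(f"SOS = day {sos}, EOS = day {eos}")
    ```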

  16. Comparative investigation of ELM control based on toroidal modelling of plasma response to RMP fields

    Science.gov (United States)

    Liu, Yueqiang

    2016-10-01

    The type-I edge localized mode (ELM), bursting at low frequency and with large amplitude, can channel a substantial amount of the plasma thermal energy into the surrounding plasma-facing components in tokamak devices operating at the high-confinement mode, potentially causing severe material damages. Learning effective ways of controlling this instability is thus an urgent issue in fusion research, in particular in view of the next generation large devices such as ITER and DEMO. Among other means, externally applied, three-dimensional resonant magnetic perturbation (RMP) fields have been experimentally demonstrated to be successful in mitigating or suppressing the type-I ELM, in multiple existing devices. In this work, we shall report results of a comparative study of ELM control using RMPs. Comparison is made between the modelled plasma response to the 3D external fields and the observed change of the ELM behaviour on multiple devices, including MAST, ASDEX Upgrade, EAST, DIII-D, JET, and KSTAR. We show that toroidal modelling of the plasma response, based on linear and quasi-linear magnetohydrodynamic (MHD) models, provides essential insights that are useful in interpreting and guiding the ELM control experiments. In particular, linear toroidal modelling results, using the MARS-F code, reveal the crucial role of the edge localized peeling-tearing mode response during ELM mitigation/suppression on all these devices. Such response often leads to strong peaking of the plasma surface displacement near the region of weak equilibrium poloidal field (e.g. the X-point), and this provides an alternative practical criterion for ELM control, as opposed to the vacuum field based Chirikov criteria. Quasi-linear modelling using MARS-Q provides quantitative interpretation of the side effects due to the ELM control coils, on the plasma toroidal momentum and particle confinements. The particular role of the momentum and particle fluxes, associated with the neoclassical toroidal

  17. Five Blind Men and an Elephant: Comparing Aura Ozone Datasets and Sonde with Model Simulations

    Science.gov (United States)

    Tang, Q.; Prather, M. J.

    2011-12-01

    The four Earth Observing System (EOS) Aura satellite ozone measurements (HIRDLS, MLS, OMI, and TES) as well as the coincident WOUDC sondes are the five "blind men" touching the "elephant" (ozone). They all measure ozone (O3) in the upper troposphere and lower stratosphere (UT/LS) region, providing a great opportunity to study how tropospheric ozone is influenced by the stratospheric source, an important tropospheric ozone budget term with large uncertainties and discrepancies across different models and methods. Based upon the 2-D autocorrelation of the tropospheric column ozone anomalies of the OMI swaths, we show that the stratosphere-troposphere exchange (STE) processes occur on the scale of a few hundred kilometers. Applying the high-resolution (1°×1°×40-layer, 0.5 hr) atmospheric chemistry transport model (CTM) as a transfer standard, we compare the noncoincident Aura level 2 swath datasets with exactly matching simulations of each measurement to investigate the consistency of the different instruments as well as to evaluate the accuracy of the modeled ozone. Different signs of the CTM biases against HIRDLS, MLS, and TES are found from the tropics to northern hemisphere (NH) mid-latitudes in July 2005 at 215 hPa, and over the tropics at 147 hPa for July 2005 and January 2006, suggesting inconsistency across these Aura datasets. On the other hand, the CTM has large positive biases against satellite observations in the lower stratosphere of wintertime southern hemisphere (SH) mid-latitudes, which is probably attributable to problems in the stratospheric circulation of the driving met-fields. The model's ability to reproduce STE-related processes, such as tropospheric folds (TFs), is confirmed by the comparisons with WOUDC sondes. We found eight cases in year 2005 with all four Aura measurements available and folding structures in the coincident sonde profile. The case studies indicate that all four Aura instruments demonstrate some skill in catching the

  18. THE FLIPPED WRITING CLASSROOM IN TURKISH EFL CONTEXT: A COMPARATIVE STUDY ON A NEW MODEL

    Directory of Open Access Journals (Sweden)

    Emrah EKMEKCI

    2017-04-01

    Flipped learning, one of the most popular and conspicuous instructional models of recent times, can be considered a pedagogical approach in which the typical lecture and homework elements of a course are reversed. Flipped learning transforms classrooms into interactive and dynamic places where the teacher guides the students and facilitates their learning. The current study explores the impact of flipped instruction on students’ foreign language writing skill, which is often perceived as boring, complex and difficult by English as a Foreign Language (EFL) learners. The study compares flipped and traditional face-to-face writing classes on the basis of writing performance. Employing a pre- and post-test true experimental design with a control group, the study is based on mixed-method research. The experimental group, consisting of 23 English Language Teaching (ELT) students attending a preparatory class, was instructed for fifteen weeks through the Flipped Writing Class Model, while the control group, comprising 20 ELT preparatory class students, followed a traditional face-to-face lecture-based writing class. Independent and paired samples t-tests were carried out for the analyses of the data gathered through the pre- and post-tests. The results indicated that there was a statistically significant difference between the experimental and control groups in terms of their writing performance based on the employed rubric. It was found that the students in the experimental group outperformed the students in the control group after the treatment process. The results of the study also revealed that the great majority of the students in the experimental group held positive attitudes towards the Flipped Writing Class Model.

  19. Soil surface roughness: comparing old and new measuring methods and application in a soil erosion model

    Science.gov (United States)

    Thomsen, L. M.; Baartman, J. E. M.; Barneveld, R. J.; Starkloff, T.; Stolte, J.

    2015-04-01

    Quantification of soil roughness, i.e. the irregularities of the soil surface due to soil texture, aggregates, rock fragments and land management, is important as it affects surface storage, infiltration, overland flow, and ultimately sediment detachment and erosion. Roughness has been measured in the field using both contact methods (such as roller chain and pinboard) and sensor methods (such as stereophotogrammetry and terrestrial laser scanning (TLS)). A novel depth-sensing technique, originating in the gaming industry, has recently become available for earth sciences: the Xtion Pro method. Roughness data obtained using various methods are assumed to be similar; this assumption is tested in this study by comparing five different methods to measure roughness in the field on 1 m² agricultural plots with different management (ploughing, harrowing, forest and direct seeding on stubble) in southern Norway. Subsequently, the values were used as input for the LISEM soil erosion model to test their effect on the simulated hydrograph at catchment scale. Results show that statistically significant differences between the methods were obtained only for the fields with direct seeding on stubble; for the other land management types the methods were in agreement. The spatial resolution of the contact methods was much lower than for the sensor methods (10 000 versus at least 57 000 points per square metre). In terms of costs and ease of use in the field, the Xtion Pro method is promising. Results from the LISEM model indicate that especially the roller chain overestimated the random roughness (RR) values and the model subsequently calculated less surface runoff than measured. In conclusion, the choice of measurement method for roughness data matters and depends on the required accuracy, resolution, mobility in the field and available budget. It is recommended to use only one method within one study.
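
    Random roughness is commonly computed as the standard deviation of point elevations after detrending. The sketch below uses a best-fit plane for the detrending (one of several conventions in the literature, and an assumption here) on synthetic points standing in for a scanned 1 m² plot.

    ```python
    import numpy as np

    def random_roughness(x, y, z):
        """RR of a surface sampled at points (x, y, z), plane-detrended."""
        A = np.column_stack([x, y, np.ones_like(x)])
        coeff, *_ = np.linalg.lstsq(A, z, rcond=None)  # best-fit plane
        residuals = z - A @ coeff
        return residuals.std()

    rng = np.random.default_rng(1)
    x, y = rng.uniform(0, 1, 10000), rng.uniform(0, 1, 10000)  # 1 m2 plot
    z = 0.02 * x + 0.01 * y + rng.normal(0, 0.005, 10000)      # ~5 mm roughness
    print(f"RR = {random_roughness(x, y, z) * 1000:.1f} mm")
    ```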

  20. Choosing algorithms for TB screening: a modelling study to compare yield, predictive value and diagnostic burden.

    Science.gov (United States)

    Van't Hoog, Anna H; Onozaki, Ikushi; Lonnroth, Knut

    2014-10-19

    To inform the choice of an appropriate screening and diagnostic algorithm for tuberculosis (TB) screening initiatives in different epidemiological settings, we compare algorithms composed of currently available methods. For twelve algorithms composed of screening for symptoms (prolonged cough or any TB symptom) and/or chest radiography abnormalities, with either sputum-smear microscopy (SSM) or Xpert MTB/RIF (XP) as the confirmatory test, we model algorithm outcomes and summarize the yield, number needed to screen (NNS) and positive predictive value (PPV) for different levels of TB prevalence. Screening for prolonged cough has low yield, 22% if confirmatory testing is by SSM and 32% if XP, and a high NNS, exceeding 1000 if TB prevalence is ≤0.5%. Due to low specificity, the PPV of screening for any TB symptom followed by SSM is less than 50%, even if TB prevalence is 2%. CXR screening for TB abnormalities followed by XP has the highest case detection (87%) and lowest NNS, but is resource-intensive. CXR as a second screen for symptom screen positives improves efficiency. The ideal algorithm does not exist. The choice will be setting-specific, for which this study provides guidance. Generally, an algorithm composed of CXR screening followed by confirmatory testing with XP can achieve the lowest NNS and highest PPV, and is the least amenable to setting-specific variation. However, resource requirements for tests and equipment may be prohibitive in some settings and a reason to opt for symptom screening and SSM. To better inform disease control programs we need empirical data to confirm the modeled yield, cost-effectiveness studies, transmission models and a better screening test.
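
    The outcome measures can be reproduced with simple arithmetic for a two-step algorithm (screen, then confirmatory test), assuming the two tests are independent given disease status. The sensitivities and specificities below are placeholders, not the paper's inputs.

    ```python
    def algorithm_outcomes(prev, se_screen, sp_screen, se_conf, sp_conf,
                           n=100_000):
        """Yield, NNS, PPV and confirmatory-test burden for a screened cohort."""
        tb, no_tb = n * prev, n * (1 - prev)
        true_pos = tb * se_screen * se_conf
        false_pos = no_tb * (1 - sp_screen) * (1 - sp_conf)
        yield_frac = true_pos / tb               # fraction of cases detected
        nns = n / true_pos                        # number needed to screen
        ppv = true_pos / (true_pos + false_pos)   # positive predictive value
        conf_tests = tb * se_screen + no_tb * (1 - sp_screen)  # diagnostic burden
        return yield_frac, nns, ppv, conf_tests

    # e.g. CXR screening followed by XP at 1% TB prevalence (placeholder values)
    y, nns, ppv, burden = algorithm_outcomes(
        prev=0.01, se_screen=0.90, sp_screen=0.75, se_conf=0.95, sp_conf=0.99)
    print(f"yield={y:.0%}, NNS={nns:.0f}, PPV={ppv:.0%}, "
          f"confirmatory tests per 100k={burden:.0f}")
    ```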

  1. Quantitative rainfall metrics for comparing volumetric rainfall retrievals to fine scale models

    Science.gov (United States)

    Collis, Scott; Tao, Wei-Kuo; Giangrande, Scott; Fridlind, Ann; Theisen, Adam; Jensen, Michael

    2013-04-01

    Precipitation processes play a significant role in the energy balance of convective systems, for example through latent heating and evaporative cooling. Heavy precipitation "cores" can also be a proxy for vigorous convection and vertical motions. However, comparisons between rainfall rate retrievals from volumetric remote sensors and forecast rain fields from high-resolution numerical weather prediction simulations are complicated by differences in the location and timing of storm morphological features. This presentation will outline a series of metrics for diagnosing the spatial variability and statistical properties of precipitation maps produced both from models and from retrievals. We include existing metrics such as Contoured Frequency by Altitude Diagrams (Yuter and Houze 1995) and Statistical Coverage Products (May and Lane 2009) and propose new metrics based on morphology and cell- and feature-based statistics. The work presented focuses on observations from the ARM Southern Great Plains radar network, consisting of three agile X-band radar systems with a very dense coverage pattern and a C-band system providing site-wide coverage. By combining multiple sensors, resolutions of 250 m² can be achieved, allowing improved characterization of fine-scale features. Analyses compare data collected during the Midlatitude Continental Convective Clouds Experiment (MC3E) with simulations of the observed systems using the NASA Unified Weather Research and Forecasting model. References: May, P. T., and T. P. Lane, 2009: A method for using weather radar data to test cloud resolving models. Meteorological Applications, 16, 425-425, doi:10.1002/met.150. Yuter, S. E., and R. A. Houze, 1995: Three-Dimensional Kinematic and Microphysical Evolution of Florida Cumulonimbus. Part II: Frequency Distributions of Vertical Velocity, Reflectivity, and Differential Reflectivity. Mon. Wea. Rev., 123, 1941-1963, doi:10.1175/1520-0493(1995)123<1941:TDKAME>2.0.CO;2.
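
    Of the metrics listed, a CFAD is simply a per-altitude frequency histogram of a radar field, normalized at each level. A sketch on synthetic reflectivity data follows; the altitudes, bin edges and distributions are invented for illustration.

    ```python
    import numpy as np

    def cfad(field, bins):
        """field: 2D array (n_altitudes, n_samples). Returns an array of shape
        (n_altitudes, n_bins) of per-level relative frequencies."""
        out = np.empty((field.shape[0], len(bins) - 1))
        for k, level in enumerate(field):
            hist, _ = np.histogram(level[np.isfinite(level)], bins=bins)
            out[k] = hist / max(hist.sum(), 1)
        return out

    rng = np.random.default_rng(3)
    altitudes = np.arange(1, 16)  # km
    # Synthetic reflectivity samples whose mean decreases with height.
    refl = rng.normal(30 - altitudes[:, None], 8, (15, 5000))  # dBZ
    freq = cfad(refl, bins=np.arange(-10, 61, 5))
    print(freq.shape, freq.sum(axis=1)[:3])  # each row sums to ~1
    ```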

  2. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

    Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community-acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses against the previously sequenced genomes of NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), a CRISPR system, and an allantoin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50 < 100 CFU) while MGH 78578 is relatively avirulent.

  3. Atmospheric Dispersion Models for the Calculation of Environmental Impact: A Comparative Study

    International Nuclear Information System (INIS)

    Caputo, Marcelo; Gimenez, Marcelo; Felicelli, Sergio; Schlamp, Miguel

    2000-01-01

    In this paper some new comparisons are presented between the codes AERMOD, HPDM and HYSPLIT. The first two are Gaussian stationary plume codes, developed to calculate the environmental impact produced by chemical contaminants. HYSPLIT is a hybrid code because it uses a Lagrangian reference system to describe the transport of a puff's center of mass and an Eulerian system to describe the dispersion within the puff. The meteorological and topographic data used in the present work were obtained from runs of the prognostic code RAMS, provided by NOAA. The emission was fixed at 0.3 g/s, 284 K and 0 m/s. The surface roughness was fixed at 0.1 m and flat terrain was considered. In order to analyze separate effects and to go deeper into the comparison, the meteorological data were split in two depending on the atmospheric stability class (F to B), and the wind direction was fixed to neglect its contribution to the contaminant dispersion. The main contribution of this work is to provide recommendations about the validity range of each code depending on the model used. In the case of the Gaussian models, the validity range is fixed by the distance over which the atmospheric conditions can be considered homogeneous. On the other hand, the validity range of HYSPLIT's model is determined by the spatial extension of the meteorological data. The results obtained with the three codes are comparable if the emission is in equilibrium with the environment; this means that the gases are emitted at the same temperature as the medium, with zero velocity. There was an important difference between the dispersion parameters used by the Gaussian codes

  4. Comparing two models for post-wildfire debris flow susceptibility mapping

    Science.gov (United States)

    Cramer, J.; Bursik, M. I.; Legorreta Paulin, G.

    2017-12-01

    Traditionally, probabilistic post-fire debris flow susceptibility mapping has been performed based on the typical method of failure for debris flows/landslides, where slip occurs along a basal shear zone as a result of rainfall infiltration. Recent studies have argued that post-fire debris flows are fundamentally different in their method of initiation, which is not infiltration-driven, but surface runoff-driven. We test these competing models by comparing the accuracy of the susceptibility maps produced by each initiation method. Debris flow susceptibility maps are generated according to each initiation method for a mountainous region of Southern California that recently experienced wildfire and subsequent debris flows. A multiple logistic regression (MLR), which uses the occurrence of past debris flows and the values of environmental parameters, was used to determine the probability of future debris flow occurrence. The independent variables used in the MLR are dependent on the initiation method; for example, depth to slip plane, and shear strength of soil are relevant to the infiltration initiation, but not surface runoff. A post-fire debris flow inventory serves as the standard to compare the two susceptibility maps, and was generated by LiDAR analysis and field based ground-truthing. The amount of overlap between the true locations where debris flow erosion can be documented, and where the MLR predicts high probability of debris flow initiation was statistically quantified. The Figure of Merit in Space (FMS) was used to compare the two models, and the results of the FMS comparison suggest that surface runoff-driven initiation better explains debris flow occurrence. Wildfire can breed conditions that induce debris flows in areas that normally would not be prone to them. Because of this, nearby communities at risk may not be equipped to protect themselves against debris flows. In California, there are just a few months between wildland fire season and the wet
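
As a rough illustration of the workflow described above, the sketch below fits a multiple logistic regression to hypothetical per-cell predictors and scores the resulting susceptibility map against a synthetic inventory using the Figure of Merit in Space; the predictors, threshold and data are assumptions for demonstration, not the study's inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# hypothetical per-cell predictors, e.g. slope, burn severity, drainage area
X = rng.normal(size=(5000, 3))
# synthetic debris flow inventory (1 = erosion documented in that cell)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000)) > 1.2

model = LogisticRegression().fit(X, y)
predicted = model.predict_proba(X)[:, 1] > 0.5   # "high susceptibility" cells

# Figure of Merit in Space: overlap / (overlap + misses + false alarms)
tp = np.sum(predicted & y)
fp = np.sum(predicted & ~y)
fn = np.sum(~predicted & y)
print(f"FMS = {tp / (tp + fp + fn):.2f}")
```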

  5. A comparative study of deep learning models for medical image classification

    Science.gov (United States)

    Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.

    2017-11-01

Deep Learning (DL) techniques are overtaking the prevailing traditional neural network approaches when it comes to huge datasets and applications requiring complex functions, demanding increased accuracy at lower time complexity. Neuroscience has already exploited DL techniques and thus serves as an inspirational source for researchers exploring the domain of machine learning. DL enthusiasts cover the areas of vision, speech recognition, motion planning and NLP as well, moving back and forth among fields. The concern is with building models that can successfully solve a variety of tasks requiring intelligence and distributed representation. Access to faster CPUs, the introduction of GPUs performing complex vector and matrix computations, and agile network connectivity, together with enhanced software infrastructures for distributed computing, strengthened the case for DL methodologies. This paper compares DL procedures with traditional approaches, which are performed manually, for classifying medical images. The medical images used for the study are Diabetic Retinopathy (DR) and computed tomography (CT) emphysema data; diagnosis of both is a difficult task for standard image classification methods. The initial work was carried out with basic image processing along with K-means clustering for identification of image severity levels. After determining image severity levels, an ANN was applied to the data to obtain a baseline classification result, which was then compared with the results of DNNs (Deep Neural Networks). DNNs performed efficiently because their multiple hidden layers increase accuracy, but the vanishing-gradient problem in DNNs motivated the consideration of Convolutional Neural Networks (CNNs) as well. The CNNs were found to provide better outcomes when compared to the other learning models aimed at classification of images. CNNs are
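
For readers unfamiliar with the model family the study settles on, here is a minimal CNN classifier sketched in tf.keras; the layer sizes, the 128x128 grayscale input and the five output classes (e.g. DR severity levels) are illustrative assumptions, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal CNN: stacked convolution + pooling blocks learn local image
# features, a dense head maps them to class probabilities.
model = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),  # e.g. 5 severity levels (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```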

  6. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation in predicted P loss between the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
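
Goodness-of-fit measures of the kind used in such model-versus-measurement comparisons can be computed in a few lines. The Nash-Sutcliffe efficiency and percent bias below are common choices for hydrologic model evaluation; the P-loss numbers are made up for illustration and are not the paper's data.

```python
import numpy as np

def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 1 - np.sum((observed - predicted) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def percent_bias(observed, predicted):
    """Positive values indicate the model over-predicts on average."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 100 * (predicted - observed).sum() / observed.sum()

# hypothetical annual P-loss values (kg/ha) for a handful of site-years
measured = [0.8, 1.4, 0.3, 2.1, 0.9]
model_a  = [1.1, 1.3, 0.5, 1.8, 1.2]
print(nash_sutcliffe(measured, model_a), percent_bias(measured, model_a))
```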

  7. A Comparative Test of Work-Family Conflict Models and Critical Examination of Work-Family Linkages

    Science.gov (United States)

    Michel, Jesse S.; Mitchelson, Jacqueline K.; Kotrba, Lindsey M.; LeBreton, James M.; Baltes, Boris B.

    2009-01-01

    This paper is a comprehensive meta-analysis of over 20 years of work-family conflict research. A series of path analyses were conducted to compare and contrast existing work-family conflict models, as well as a new model we developed which integrates and synthesizes current work-family theory and research. This new model accounted for 40% of the…

  8. A comparative study of the working memory multicomponent model in psychosis and healthy controls.

    Science.gov (United States)

    Sánchez-Torres, Ana M; Elosúa, M Rosa; Lorente-Omeñaca, Ruth; Moreno-Izco, Lucía; Cuesta, Manuel J

    2015-08-01

Working memory deficits are considered nuclear deficits in psychotic disorders. However, research has not found a generalized impairment in all of the components of working memory. We aimed to assess the components of the Baddeley and Hitch working memory model: the temporary systems, namely the phonological loop, the visuospatial sketchpad and the episodic buffer (introduced later by Baddeley), and the central executive system, which includes four executive functions: divided attention, updating, shifting and inhibition. We assessed working memory performance in a sample of 21 patients with a psychotic disorder and 21 healthy controls. Patients also underwent a clinical assessment. Both univariate and repeated-measures ANOVAs were applied to analyze performance on the working memory components between groups. Patients with a psychotic disorder underperformed compared to the controls in all of the working memory tasks, but after controlling for age and premorbid IQ, we only found a difference in performance in the N-Back task. Repeated-measures ANCOVAs showed that patients also underperformed compared to the controls on the Digit span test and the TMT task. Not all of the components of working memory were impaired in the patients. Specifically, patients' performance was impaired in the tasks selected to assess the phonological loop and the shifting executive function. Patients also showed worse performance than controls in the N-Back task, representative of the updating executive function. However, we did not find greater impairment in the patients' performance relative to controls when the load of the task was increased. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. BIRDS AS A MODEL TO STUDY ADULT NEUROGENESIS: BRIDGING EVOLUTIONARY, COMPARATIVE AND NEUROETHOLOGICAL APPROACHES

    Science.gov (United States)

    BARNEA, ANAT; PRAVOSUDOV, VLADIMIR

    2011-01-01

    During the last few decades evidence has demonstrated that adult neurogenesis is a well-preserved feature throughout the animal kingdom. In birds, ongoing neuronal addition occurs rather broadly, to a number of brain regions. This review describes adult avian neurogenesis and neuronal recruitment, discusses factors that regulate these processes, and touches upon the question of their genetic control. Several attributes make birds an extremely advantageous model to study neurogenesis. First, song learning exhibits seasonal variation that is associated with seasonal variation in neuronal turnover in some song control brain nuclei, which seems to be regulated via adult neurogenesis. Second, food-caching birds naturally use memory-dependent behavior in learning locations of thousands of food caches scattered over their home ranges. In comparison with other birds, food-caching species have relatively enlarged hippocampi with more neurons and intense neurogenesis, which appears to be related to spatial learning. Finally, migratory behavior and naturally occurring social systems in birds also provide opportunities to investigate neurogenesis. Such diversity of naturally-occurring memory-based behaviors, combined with the fact that birds can be studied both in the wild and in the laboratory, make them ideal for investigation of neural processes underlying learning. This can be done by using various approaches, from evolutionary and comparative to neuroethological and molecular. Finally, we connect the avian arena to a broader view by providing a brief comparative and evolutionary overview of adult neurogenesis and by discussing the possible functional role of the new neurons. We conclude by indicating future directions and possible medical applications. PMID:21929623

  10. Microwave Ablation Compared with Radiofrequency Ablation for Breast Tissue in an Ex Vivo Bovine Udder Model

    International Nuclear Information System (INIS)

    Tanaka, Toshihiro; Westphal, Saskia; Isfort, Peter; Braunschweig, Till; Penzkofer, Tobias; Bruners, Philipp; Kichikawa, Kimihiko; Schmitz-Rode, Thomas; Mahnken, Andreas H.

    2012-01-01

Purpose: To compare the effectiveness of microwave (MW) ablation with radiofrequency (RF) ablation for treating breast tissue in a nonperfused ex vivo model of healthy bovine udder tissue. Materials and Methods: MW ablations were performed at power outputs of 25W, 35W, and 45W using a 915-MHz frequency generator and a 2-cm active tip antenna. RF ablations were performed with a bipolar RF system with 2- and 3-cm active tip electrodes. Tissue temperatures were continuously monitored during ablation. Results: The mean short-axis diameters of the coagulation zones were 1.34 ± 0.14, 1.45 ± 0.13, and 1.74 ± 0.11 cm for MW ablation at outputs of 25W, 35W, and 45W. For RF ablation, the corresponding values were 1.16 ± 0.09 and 1.26 ± 0.14 cm with electrodes having 2- and 3-cm active tips, respectively. The mean coagulation volumes were 2.27 ± 0.65, 2.85 ± 0.72, and 4.45 ± 0.47 cm³ for MW ablation at outputs of 25W, 35W, and 45W and 1.18 ± 0.30 and 2.29 ± 0.55 cm³ for RF ablation with 2- and 3-cm electrodes, respectively. MW ablations at 35W and 45W achieved significantly longer short-axis diameters than RF ablations (P < 0.05). The highest tissue temperature was achieved with MW ablation at 45W (P < 0.05). On histological examination, the extent of the ablation zone in MW ablations was less affected by tissue heterogeneity than that in RF ablations. Conclusion: MW ablation appears to be advantageous with respect to the volume of ablation and the shape of the margin of necrosis compared with RF ablation in an ex vivo bovine udder.

  11. Rosetta comparative modeling for library design: Engineering alternative inducer specificity in a transcription factor.

    Science.gov (United States)

    Jha, Ramesh K; Chakraborti, Subhendu; Kern, Theresa L; Fox, David T; Strauss, Charlie E M

    2015-07-01

Structure-based rational mutagenesis for engineering protein functionality has been limited by the scarcity and difficulty of obtaining crystal structures of desired proteins. On the other hand, when high-throughput selection is possible, directed evolution-based approaches for gaining protein functionalities have been random and fortuitous, with limited rationalization. We combine comparative modeling of dimer structures, ab initio loop reconstruction, and ligand docking to select positions for mutagenesis to create a library focused on the ligand-contacting residues. The rationally reduced library requirement enabled conservative control of the substitutions by oligonucleotide synthesis and bounded its size within practical transformation efficiencies (∼10⁷ variants). This rational approach was successfully applied to an inducer-binding domain of an Acinetobacter transcription factor (TF), pobR, which shows high specificity for the natural effector molecule, 4-hydroxy benzoate (4HB), but no native response to 3,4-dihydroxy benzoate (34DHB). Selection for mutants with high transcriptional induction by 34DHB was carried out at the single-cell level under flow cytometry (via green fluorescent protein expression under the control of the pobR promoter). Critically, this selection protocol allows both selection for induction and rejection of constitutively active mutants. In addition to gain-of-function for 34DHB induction, the selected mutants also showed enhanced sensitivity and response for 4HB (native inducer), while no sensitivity was observed for a non-targeted but chemically similar molecule, 2-hydroxy benzoate (2HB). This is a unique application of the Rosetta modeling protocols for library design to engineer a TF. Our approach extends the applicability of the Rosetta redesign protocol into regimes without a priori precise structural information. © 2015 Wiley Periodicals, Inc.

  12. Comparative exploration of multidimensional flow cytometry software: a model approach evaluating T cell polyfunctional behavior.

    Science.gov (United States)

    Spear, Timothy T; Nishimura, Michael I; Simms, Patricia E

    2017-08-01

    Advancement in flow cytometry reagents and instrumentation has allowed for simultaneous analysis of large numbers of lineage/functional immune cell markers. Highly complex datasets generated by polychromatic flow cytometry require proper analytical software to answer investigators' questions. A problem among many investigators and flow cytometry Shared Resource Laboratories (SRLs), including our own, is a lack of access to a flow cytometry-knowledgeable bioinformatics team, making it difficult to learn and choose appropriate analysis tool(s). Here, we comparatively assess various multidimensional flow cytometry software packages for their ability to answer a specific biologic question and provide graphical representation output suitable for publication, as well as their ease of use and cost. We assessed polyfunctional potential of TCR-transduced T cells, serving as a model evaluation, using multidimensional flow cytometry to analyze 6 intracellular cytokines and degranulation on a per-cell basis. Analysis of 7 parameters resulted in 128 possible combinations of positivity/negativity, far too complex for basic flow cytometry software to analyze fully. Various software packages were used, analysis methods used in each described, and representative output displayed. Of the tools investigated, automated classification of cellular expression by nonlinear stochastic embedding (ACCENSE) and coupled analysis in Pestle/simplified presentation of incredibly complex evaluations (SPICE) provided the most user-friendly manipulations and readable output, evaluating effects of altered antigen-specific stimulation on T cell polyfunctionality. This detailed approach may serve as a model for other investigators/SRLs in selecting the most appropriate software to analyze complex flow cytometry datasets. Further development and awareness of available tools will help guide proper data analysis to answer difficult biologic questions arising from incredibly complex datasets. © Society
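
To make the combinatorics concrete: seven binary readouts (six cytokines plus degranulation) yield 2^7 = 128 possible phenotypes, which is what makes manual gating impractical. The snippet below enumerates them; the marker names are generic stand-ins, not necessarily the panel used in the study.

```python
from itertools import product

# seven hypothetical per-cell readouts: six cytokines plus degranulation
markers = ["IFNg", "TNFa", "IL2", "IL4", "IL17", "IL10", "CD107a"]

# every +/- combination of 7 markers gives 2**7 = 128 Boolean phenotypes
phenotypes = list(product([True, False], repeat=len(markers)))
print(len(phenotypes))  # 128

# readable label for one phenotype, e.g. the all-positive cell
label = "".join(f"{m}{'+' if v else '-'}" for m, v in zip(markers, phenotypes[0]))
print(label)
```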

  13. Candidatus Sodalis melophagi sp. nov.: phylogenetically independent comparative model to the tsetse fly symbiont Sodalis glossinidius.

    Directory of Open Access Journals (Sweden)

    Tomáš Chrudimský

Full Text Available Bacteria of the genus Sodalis live in symbiosis with various groups of insects. The best known member of this group, a secondary symbiont of tsetse flies, Sodalis glossinidius, has become one of the most important models in investigating the establishment and evolution of insect-bacteria symbiosis. It represents a bacterium in the early/intermediate state of the transition towards symbiosis, which allows for exploring such interesting topics as: usage of secretory systems for entering the host cell, tempo of the genome modification, and metabolic interaction with a coexisting primary symbiont. In this study, we describe a new Sodalis species which could provide a useful comparative model to the tsetse symbiont. It lives in association with Melophagus ovinus, an insect related to tsetse flies, and resembles S. glossinidius in several important traits. Similar to S. glossinidius, it cohabits the host with another symbiotic bacterium, the bacteriome-harbored primary symbiont of the genus Arsenophonus. As a typical secondary symbiont, Candidatus Sodalis melophagi infects various host tissues, including the bacteriome. We provide basic morphological and molecular characteristics of the symbiont and show that these traits also correspond to the early/intermediate state of the evolution towards symbiosis. Particularly, we demonstrate the ability of the bacterium to live in insect cell culture as well as in cell-free medium. We also provide basic characteristics of the type three secretion system, and using three reference sequences (16S rDNA, groEL and the spaPQR region) we show that the bacterium branched within the genus Sodalis, but originated independently of the two previously described symbionts of hippoboscoids. We propose the name Candidatus Sodalis melophagi for this new bacterium.

  14. Candidatus Sodalis melophagi sp. nov.: phylogenetically independent comparative model to the tsetse fly symbiont Sodalis glossinidius.

    Science.gov (United States)

    Chrudimský, Tomáš; Husník, Filip; Nováková, Eva; Hypša, Václav

    2012-01-01

    Bacteria of the genus Sodalis live in symbiosis with various groups of insects. The best known member of this group, a secondary symbiont of tsetse flies Sodalis glossinidius, has become one of the most important models in investigating establishment and evolution of insect-bacteria symbiosis. It represents a bacterium in the early/intermediate state of the transition towards symbiosis, which allows for exploring such interesting topics as: usage of secretory systems for entering the host cell, tempo of the genome modification, and metabolic interaction with a coexisting primary symbiont. In this study, we describe a new Sodalis species which could provide a useful comparative model to the tsetse symbiont. It lives in association with Melophagus ovinus, an insect related to tsetse flies, and resembles S. glossinidius in several important traits. Similar to S. glossinidius, it cohabits the host with another symbiotic bacterium, the bacteriome-harbored primary symbiont of the genus Arsenophonus. As a typical secondary symbiont, Candidatus Sodalis melophagi infects various host tissues, including bacteriome. We provide basic morphological and molecular characteristics of the symbiont and show that these traits also correspond to the early/intermediate state of the evolution towards symbiosis. Particularly, we demonstrate the ability of the bacterium to live in insect cell culture as well as in cell-free medium. We also provide basic characteristics of type three secretion system and using three reference sequences (16 S rDNA, groEL and spaPQR region) we show that the bacterium branched within the genus Sodalis, but originated independently of the two previously described symbionts of hippoboscoids. We propose the name Candidatus Sodalis melophagi for this new bacterium.

  15. Comparative in vitro and in vivo models of cytotoxicity and genotoxicity

    International Nuclear Information System (INIS)

    Brooks, A.L.; Mitchell, C.E.; Seiler, S.A.

    1986-01-01

To understand the development of disease from inhalation of complex chemical mixtures, it is necessary to use both in vitro and whole animal systems. This project is designed to provide links between these two types of research. The project has three major goals. The first goal is to evaluate the mutagenic activity of complex mixtures and the interactions between different fractions in these mixtures. The second is to develop model cellular systems that help define the mechanisms of genotoxic damage and repair in the lung. The third goal is to understand the mechanisms involved in the induction of mutations and chromosome aberrations in mammalian cells. Research on the measurement and interactions of mutagens in complex mixtures is illustrated by reporting a study using diesel exhaust particle extracts. The extracts were fractionated into ten different chemical classes. Each of the fractions was tested for mutagenic activity in the Ames Salmonella mutation assay. Individual fractions were combined using different permutations. The total mixture was reconstituted and the mutagenic activity compared to the predicted level of activity. Mutagenic activity was additive, indicating that the chemical fractionation did not alter the extracts and that there was little evidence of synergistic or antagonistic interaction. To help define the mechanisms involved in the induction of mutations, they have exposed CHO cells to radiation and mutagenic chemicals alone and in combination. In these studies, they have demonstrated that when cells were exposed to 500 rad of x-rays followed by either direct- or indirect-acting mutagens, the mutation frequency was less than would be predicted by an additive model

  16. Population modelling to compare chronic external radiotoxicity between individual and population endpoints in four taxonomic groups.

    Science.gov (United States)

    Alonzo, Frédéric; Hertel-Aas, Turid; Real, Almudena; Lance, Emilie; Garcia-Sanchez, Laurent; Bradshaw, Clare; Vives I Batlle, Jordi; Oughton, Deborah H; Garnier-Laplace, Jacqueline

    2016-02-01

In this study, we modelled population responses to chronic external gamma radiation in 12 laboratory species (including aquatic and soil invertebrates, fish and terrestrial mammals). Our aim was to compare radiosensitivity between individual and population endpoints and to examine how internationally proposed benchmarks for environmental radioprotection protected species against various risks at the population level. To do so, we used population matrix models, combining life history and chronic radiotoxicity data (derived from laboratory experiments and described in the literature and the FREDERICA database) to simulate changes in population endpoints (net reproductive rate R0, asymptotic population growth rate λ, equilibrium population size Neq) for a range of dose rates. Elasticity analyses of models showed that population responses differed depending on the affected individual endpoint (juvenile or adult survival, delay in maturity or reduction in fecundity), the considered population endpoint (R0, λ or Neq) and the life history of the studied species. Among population endpoints, net reproductive rate R0 showed the lowest EDR10 (effective dose rate inducing 10% effect) in all species, with values ranging from 26 μGy h⁻¹ in the mouse Mus musculus to 38,000 μGy h⁻¹ in the fish Oryzias latipes. For several species, EDR10 for population endpoints were lower than the lowest EDR10 for individual endpoints. Various population level risks, differing in severity for the population, were investigated. Population extinction (predicted when radiation effects caused population growth rate λ to decrease below 1, indicating no population growth in the long term) was predicted for dose rates ranging from 2700 μGy h⁻¹ in fish to 12,000 μGy h⁻¹ in soil invertebrates. A milder risk, that population growth rate λ will be reduced by 10% of the reduction causing extinction, was predicted for dose rates ranging from 24 μGy h⁻¹ in mammals to 1800 μGy h⁻¹ in
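
The population matrix models referred to above boil down to linear algebra: the asymptotic growth rate λ is the dominant eigenvalue of the projection matrix. Below is a minimal sketch with a hypothetical three-stage Leslie matrix, not any of the study species' actual life histories; radiation effects would enter by scaling the survival and fecundity entries.

```python
import numpy as np

# Hypothetical 3-stage Leslie matrix: fecundities on the first row,
# stage-to-stage survival probabilities on the sub-diagonal.
A = np.array([[0.0, 2.0, 5.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0]])

# asymptotic population growth rate lambda: dominant eigenvalue of A
lam = max(np.linalg.eigvals(A).real)

# net reproductive rate R0: expected lifetime offspring per newborn,
# summing reproduction weighted by the probability of surviving to each stage
s1, s2 = A[1, 0], A[2, 1]
R0 = A[0, 1] * s1 + A[0, 2] * s1 * s2
print(f"lambda = {lam:.3f}, R0 = {R0:.3f}  (population grows if lambda > 1)")
```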

  17. Water vapor measurements at ALOMAR over a solar cycle compared with model calculations by LIMA

    Science.gov (United States)

    Hartogh, P.; Sonnemann, G. R.; Grygalashvyly, M.; Song, Li; Berger, U.; Lübken, F.-J.

    2010-01-01

Microwave water vapor measurements between 40 and 80 km altitude over a solar cycle (1996-2006) were carried out at high latitudes at the Arctic Lidar Observatory for Middle Atmosphere Research (ALOMAR) (69.29°N, 16.03°E), Norway. Some smaller gaps and three interruptions of monitoring in the winters 1996/1997 and 2005/2006 and from spring 2001 to spring 2002 occurred during this period. The observations show a distinct year-to-year variability not directly related to solar Lyman-α radiation. In winter the water vapor mixing ratios in the upper domain were anticorrelated with solar activity, whereas in summer, minima occurred in the years after the solar maximum in 2000/2001. In winter, sudden stratospheric warmings (SSWs) modulated the water vapor mixing ratios. Within the stratopause region a middle atmospheric water vapor maximum was observed, which results from methane oxidation and is a regular feature there. The altitude of the maximum increased by approximately 5 km as summer approached. The largest mixing ratios were monitored in autumn. During the summer season a secondary water vapor maximum also occurred above 65 km, most pronounced in late summer. The solar Lyman-α radiation impacts the water vapor mixing ratio particularly in winter above 65 km. In summer the correlation is positive below 70 km. The correlation is also positive in the lower mesosphere/stratopause region in winter due to the action of sudden stratospheric warmings, which occur more frequently under conditions of high solar activity and enhance the humidity. A strong day-to-day variability connected with planetary wave activity was found throughout the entire year. Model calculations by means of the Leibniz-Institute Middle Atmosphere model (LIMA) reflect the essential patterns of the water vapor variation, but the results also show differences from the observations, indicating that exchange processes between the troposphere and stratosphere not modeled by LIMA could have

  18. Model-based analyses to compare health and economic outcomes of cancer control: inclusion of disparities.

    Science.gov (United States)

    Goldie, Sue J; Daniels, Norman

    2011-09-21

    Disease simulation models of the health and economic consequences of different prevention and treatment strategies can guide policy decisions about cancer control. However, models that also consider health disparities can identify strategies that improve both population health and its equitable distribution. We devised a typology of cancer disparities that considers types of inequalities among black, white, and Hispanic populations across different cancers and characteristics important for near-term policy discussions. We illustrated the typology in the specific example of cervical cancer using an existing disease simulation model calibrated to clinical, epidemiological, and cost data for the United States. We calculated average reduction in cancer incidence overall and for black, white, and Hispanic women under five different prevention strategies (Strategies A1, A2, A3, B, and C) and estimated average costs and life expectancy per woman, and the cost-effectiveness ratio for each strategy. Strategies that may provide greater aggregate health benefit than existing options may also exacerbate disparities. Combining human papillomavirus vaccination (Strategy A2) with current cervical cancer screening patterns (Strategy A1) resulted in an average reduction of 69% in cancer incidence overall but a 71.6% reduction for white women, 68.3% for black women, and 63.9% for Hispanic women. Other strategies targeting risk-based screening to racial and ethnic minorities reduced disparities among racial subgroups and resulted in more equitable distribution of benefits among subgroups (reduction in cervical cancer incidence, white vs. Hispanic women, 69.7% vs. 70.1%). Strategies that employ targeted risk-based screening and new screening algorithms, with or without vaccination (Strategies B and C), provide excellent value. The most effective strategy (Strategy C) had a cost-effectiveness ratio of $28,200 per year of life saved when compared with the same strategy without
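
The cost-effectiveness ratios reported above follow the standard incremental definition: extra cost divided by extra health benefit when one strategy replaces another. A minimal sketch with made-up per-woman costs and life expectancies (not the study's figures) is shown below.

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: additional cost per
    life-year (or QALY) gained by the new strategy over the old one."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# hypothetical per-woman averages for two screening/vaccination strategies
ratio = icer(cost_new=1900.0, effect_new=28.92,
             cost_old=1650.0, effect_old=28.91)
print(f"${ratio:,.0f} per life-year saved")  # $25,000 with these made-up numbers
```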

  19. Assessing intrinsic and specific vulnerability models ability to indicate groundwater vulnerability to groups of similar pesticides: A comparative study

    Science.gov (United States)

    Douglas, Steven; Dixon, Barnali; Griffin, Dale W.

    2018-01-01

    With continued population growth and increasing use of fresh groundwater resources, protection of this valuable resource is critical. A cost effective means to assess risk of groundwater contamination potential will provide a useful tool to protect these resources. Integrating geospatial methods offers a means to quantify the risk of contaminant potential in cost effective and spatially explicit ways. This research was designed to compare the ability of intrinsic (DRASTIC) and specific (Attenuation Factor; AF) vulnerability models to indicate groundwater vulnerability areas by comparing model results to the presence of pesticides from groundwater sample datasets. A logistic regression was used to assess the relationship between the environmental variables and the presence or absence of pesticides within regions of varying vulnerability. According to the DRASTIC model, more than 20% of the study area is very highly vulnerable. Approximately 30% is very highly vulnerable according to the AF model. When groundwater concentrations of individual pesticides were compared to model predictions, the results were mixed. Model predictability improved when concentrations of the group of similar pesticides were compared to model results. Compared to the DRASTIC model, the AF model more accurately predicts the distribution of the number of contaminated wells within each vulnerability class.
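
For context, the intrinsic DRASTIC index is simply a weighted sum of seven hydrogeologic factor ratings. The sketch below uses the standard published weights for generic DRASTIC; the per-cell ratings are invented for illustration.

```python
# Standard generic DRASTIC weights (Aller et al.); per-cell ratings (1-10)
# come from lookup tables for each hydrogeologic factor.
WEIGHTS = {"Depth_to_water": 5, "net_Recharge": 4, "Aquifer_media": 3,
           "Soil_media": 2, "Topography": 1, "Impact_of_vadose_zone": 5,
           "hydraulic_Conductivity": 3}

def drastic_index(ratings):
    """Weighted sum of the seven factor ratings; higher = more vulnerable."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# hypothetical ratings for one grid cell
cell = {"Depth_to_water": 9, "net_Recharge": 6, "Aquifer_media": 8,
        "Soil_media": 5, "Topography": 10, "Impact_of_vadose_zone": 8,
        "hydraulic_Conductivity": 4}
print(drastic_index(cell))  # the index can range from 23 to 230
```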

  20. Prediction Model of Cutting Parameters for Turning High Strength Steel Grade-H: Comparative Study of Regression Model versus ANFIS

    Directory of Open Access Journals (Sweden)

    Adel T. Abbas

    2017-01-01

Full Text Available The Grade-H high strength steel is used in the manufacturing of many civilian and military products. The procedures for manufacturing these parts include several turning operations. The key factors in the manufacturing of these parts are the accuracy, the surface roughness (Ra), and the material removal rate (MRR). The production line for these parts contains many CNC turning machines to achieve good accuracy and repeatability. The manufacturing engineer should meet the surface roughness value required by the design drawing on the first trial (otherwise these parts will be rejected), while also keeping an eye on maximizing the metal removal rate. The rejection of these parts at any processing stage represents a huge problem for any factory, because the processing and raw material of these parts are very expensive. In this paper an artificial neural network was used for predicting the surface roughness for different cutting parameters in CNC turning operations. These parameters were investigated to obtain the minimum surface roughness. In addition, a mathematical model for surface roughness was obtained from the experimental data using a regression analysis method. The experimental data are then compared with both the regression analysis results and the ANFIS (Adaptive Network-based Fuzzy Inference System) estimations.
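
The regression side of such a comparison is often a power-law model linearized by logarithms. Below is a minimal sketch fitting Ra = C·v^a·f^b·d^c to made-up turning data by least squares; the functional form and all numbers are assumptions, not the paper's model or measurements.

```python
import numpy as np

# hypothetical turning experiments: cutting speed v (m/min),
# feed f (mm/rev), depth of cut d (mm), measured roughness Ra (um)
v  = np.array([100, 100, 150, 150, 200, 200, 250, 250], float)
f  = np.array([0.10, 0.20, 0.10, 0.20, 0.15, 0.25, 0.15, 0.25])
d  = np.array([0.5, 1.0, 1.0, 0.5, 0.5, 1.0, 1.0, 0.5])
Ra = np.array([1.1, 2.9, 0.9, 2.4, 1.3, 3.1, 1.0, 2.6])

# power-law model Ra = C * v^a * f^b * d^c, linearised by taking logs
X = np.column_stack([np.ones_like(v), np.log(v), np.log(f), np.log(d)])
coef, *_ = np.linalg.lstsq(X, np.log(Ra), rcond=None)
C, a, b, c = np.exp(coef[0]), coef[1], coef[2], coef[3]
print(f"Ra = {C:.2f} * v^{a:.2f} * f^{b:.2f} * d^{c:.2f}")
```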

  1. Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies

    Science.gov (United States)

    Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.

    2009-04-01

Land Evaluation (LE) comprises the evaluation procedures used to assess the aptitude of land for a generic or specific use (e.g. biomass production). From the local to the regional and national scale, the approach to land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. According to the classical approaches, the assessment of aptitude is the result of a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models with a basic knowledge structure built for a specific landscape and for the specific object of the evaluation (e.g. crop). The outcome is great difficulty in extrapolating LE results spatially, and rigidity of the system. Modern techniques, instead, rely on the application of mechanistic and quantitative simulation modelling that allows a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, the insertion of physically based rules in the LE procedure may make it easier both to extend the results spatially and to change the object of the evaluation (e.g. crop species, nitrate dynamics, etc.). On the other hand, these modern approaches require input data of high quality and quantity, which causes a significant increase in costs. In this scenario, the LE expert is asked to choose the best LE methodology considering costs, the complexity of the procedure, and the benefits in handling a specific land evaluation. In this work we performed a forage maize land suitability study by comparing 9 different methods of increasing complexity and cost. The study area, of about 2000 ha, is located in northern Italy in the Lodi plain (Po valley). The 9 employed methods ranged from standard LE approaches to

  2. When machine vision meets histology: A comparative evaluation of model architecture for classification of histology sections.

    Science.gov (United States)

    Zhong, Cheng; Han, Ju; Borowsky, Alexander; Parvin, Bahram; Wang, Yunfu; Chang, Hang

    2017-01-01

Classification of histology sections in large cohorts, in terms of distinct regions of microanatomy (e.g., stromal) and histopathology (e.g., tumor, necrosis), enables the quantification of tumor composition and the construction of predictive models of genomics and clinical outcome. To tackle the large technical variations and biological heterogeneities intrinsic in large cohorts, emerging systems utilize either prior knowledge from pathologists or unsupervised feature learning for invariant representation of the underlying properties in the data. However, to a large degree, the architecture for tissue histology classification remains unexplored and requires urgent systematic investigation. This paper is the first attempt to provide insights into three fundamental questions in tissue histology classification: I. Is unsupervised feature learning preferable to human-engineered features? II. Does cellular saliency help? III. Does the sparse feature encoder contribute to recognition? We show that (a) in I, both the Cellular Morphometric Feature and features from unsupervised feature learning lead to superior performance when compared to SIFT and [Color, Texture]; (b) in II, incorporating cellular saliency impairs the performance of systems built upon pixel-/patch-level features; and (c) in III, the effect of the sparse feature encoder is correlated with the robustness of features, and the performance can be consistently improved by the multi-stage extension of systems built upon both the Cellular Morphometric Feature and features from unsupervised feature learning. These insights are validated with two cohorts of Glioblastoma Multiforme (GBM) and Kidney Clear Cell Carcinoma (KIRC). Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Local participation in biodiversity conservation initiatives: a comparative analysis of different models in South East Mexico.

    Science.gov (United States)

    Méndez-López, María Elena; García-Frapolli, Eduardo; Pritchard, Diana J; Sánchez González, María Consuelo; Ruiz-Mallén, Isabel; Porter-Bolland, Luciana; Reyes-Garcia, Victoria

    2014-12-01

    In Mexico, biodiversity conservation is primarily implemented through three schemes: 1) protected areas, 2) payment-based schemes for environmental services, and 3) community-based conservation, officially recognized in some cases as Indigenous and Community Conserved Areas. In this paper we compare levels of local participation across conservation schemes. Through a survey applied to 670 households across six communities in Southeast Mexico, we document local participation during the creation, design, and implementation of the management plan of different conservation schemes. To analyze the data, we first calculated the frequency of participation at the three different stages mentioned, then created a participation index that characterizes the presence and relative intensity of local participation for each conservation scheme. Results showed that there is a low level of local participation across all the conservation schemes explored in this study. Nonetheless, the payment for environmental services had the highest local participation while the protected areas had the least. Our findings suggest that local participation in biodiversity conservation schemes is not a predictable outcome of a specific (community-based) model, thus implying that other factors might be important in determining local participation. This has implications on future strategies that seek to encourage local involvement in conservation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Comparing bivalent and quadrivalent human papillomavirus vaccines: economic evaluation based on transmission model.

    Science.gov (United States)

    Jit, Mark; Chapman, Ruth; Hughes, Owain; Choi, Yoon Hong

    2011-09-27

To compare the effect and cost effectiveness of bivalent and quadrivalent human papillomavirus (HPV) vaccination, taking into account differences in licensure indications, protection against non-vaccine type disease, protection against disease related to HPV types 6 and 11, and reported long term immunogenicity. A model of HPV transmission and disease previously used to inform UK vaccination policy, updated with recent evidence and expanded to include scenarios where the two vaccines differ in duration of protection, cross protection, and end points prevented. Setting: United Kingdom. Population: Males and females aged 12-75 years. Main outcome measures: Incremental cost effectiveness ratios for both vaccines and additional cost per dose for the quadrivalent vaccine to be equally cost effective as the bivalent vaccine. The bivalent vaccine needs to be cheaper than the quadrivalent vaccine to be equally cost effective, mainly because of its lack of protection against anogenital warts. The price difference per dose ranges from a median of £19 (interquartile range £12-£27) to £35 (£27-£44) across scenarios about vaccine duration, cross protection, and end points prevented (assuming one quality adjusted life year (QALY) is valued at £30,000 and both vaccines can prevent all types of HPV related cancers). The quadrivalent vaccine may have an advantage over the bivalent vaccine in reducing healthcare costs and QALYs lost. The bivalent vaccine may have an advantage in preventing death due to cancer. However, considerable uncertainty remains about the differential benefit of the two vaccines.

  5. Comparing clustering models in bank customers: Based on Fuzzy relational clustering approach

    Directory of Open Access Journals (Sweden)

    Ayad Hendalianpour

    2016-11-01

Full Text Available Clustering is a highly useful way to explore data structures and has been employed in many fields. It organizes a set of objects into similar groups called clusters, such that the objects within one cluster are highly similar to each other and dissimilar to the objects in other clusters. The K-means, C-means, Fuzzy C-means and Kernel K-means algorithms are the most popular clustering algorithms owing to their easy implementation and fast operation, but in some cases these algorithms cannot be used. Regarding this, in this paper, a hybrid model for customer clustering is presented and applied to five banks in Fars Province, Shiraz, Iran. In this approach, the fuzzy relation among customers is defined by using their features, described in linguistic and quantitative variables. The customers of the banks are then grouped according to the K-means, C-means, Fuzzy C-means and Kernel K-means algorithms and the proposed Fuzzy Relational Clustering (FRC) algorithm. The aim of this paper is to show how to choose the best clustering algorithm based on density-based clustering and to present a new clustering algorithm for both crisp and fuzzy variables. Finally, we apply the proposed approach to five datasets of customer segmentation in banks. The results show the accuracy and high performance of FRC compared with other clustering methods.
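
For readers who want to see what the fuzzy side of such a comparison looks like, below is a compact fuzzy c-means implementation in plain NumPy applied to synthetic two-feature "customers"; it is a generic FCM sketch under assumed parameters, not the paper's FRC algorithm.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and a membership
    matrix U (n_samples x c) whose rows sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # random initial memberships
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# hypothetical customers with two standardised features,
# e.g. account balance and transaction frequency
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, (50, 2))
               for loc in ([0, 0], [2, 2], [0, 2])])
centres, U = fuzzy_c_means(X, c=3)
print(centres.round(2))
```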

  6. A COMPARATIVE STUDY OF SIMULATION AND TIME SERIES MODEL IN QUANTIFYING BULLWHIP EFFECT IN SUPPLY CHAIN

    Directory of Open Access Journals (Sweden)

    T. V. O. Fabson

    2011-11-01

Full Text Available The bullwhip (or whiplash) effect is an observed phenomenon in forecast-driven distribution channels, and careful management of its effects is of great importance to managers of supply chains. The bullwhip effect refers to situations where orders to the suppliers tend to have larger variance than sales to the buyer (demand distortion), and the distortion increases as we move up the supply chain. Because customer demand for a product is unstable, business managers must forecast in order to properly position inventory and other resources. Forecasts are statistically based and, in most cases, are not very accurate. The existence of forecast errors makes it necessary for organizations to carry an inventory buffer called "safety stock". Moving up the supply chain from the end-user customers to the raw materials supplier, a lot of variation in demand can be observed, which calls for a greater need for safety stock. This study compares the efficacy of simulation and time series models in quantifying the bullwhip effect in supply chain management.
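
A common way to quantify the bullwhip effect, whichever of the compared approaches generates the series, is the ratio of order variance to demand variance. The sketch below illustrates this on a synthetic weekly series with a deliberately over-reactive ordering rule; the data and the reaction factor are assumptions for demonstration.

```python
import numpy as np

def bullwhip_ratio(orders, demand):
    """Variance amplification from demand to orders; > 1 indicates bullwhip."""
    return np.var(orders, ddof=1) / np.var(demand, ddof=1)

rng = np.random.default_rng(42)
demand = rng.normal(100, 5, 52)                       # weekly customer demand
# a naive policy that over-reacts to each week's demand swing
orders = demand + 2.0 * np.diff(demand, prepend=demand[0])
print(bullwhip_ratio(orders, demand))                 # noticeably > 1
```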

  7. Comparative efficacies of candidate antibiotics against Yersinia pestis in an in vitro pharmacodynamic model.

    Science.gov (United States)

    Louie, Arnold; Vanscoy, Brian; Liu, Weiguo; Kulawy, Robert; Brown, David; Heine, Henry S; Drusano, George L

    2011-06-01

Yersinia pestis, the bacterium that causes plague, is a potential agent of bioterrorism. Streptomycin is the "gold standard" for the treatment of plague infections in humans, but the drug is not available in many countries, and resistance to this antibiotic occurs naturally and has been generated in the laboratory. Other antibiotics have been shown to be active against Y. pestis in vitro and in vivo. However, the relative efficacies of clinically prescribed regimens of these antibiotics with streptomycin and with each other for the killing of Yersinia pestis are unknown. The efficacies of simulated pharmacokinetic profiles for human 10-day clinical regimens of ampicillin, meropenem, moxifloxacin, ciprofloxacin, and gentamicin were compared with the gold standard, streptomycin, for killing of Yersinia pestis in an in vitro pharmacodynamic model. Resistance amplification with therapy was also assessed. Streptomycin killed the microbe in one trial but failed due to resistance amplification in the second trial. In two trials, the other antibiotics consistently reduced the bacterial densities within the pharmacodynamic systems from 10⁸ CFU/ml to undetectable levels, indicating that these antibiotics are active against Y. pestis and deserve further evaluation.

  8. COMPARING 3D FOOT SHAPE MODELS BETWEEN TAIWANESE AND JAPANESE FEMALES.

    Science.gov (United States)

    Lee, Yu-Chi; Kouchi, Makiko; Mochimaru, Masaaki; Wang, Mao-Jiun

    2015-06-01

This study compares foot shape and foot dimensions between Taiwanese and Japanese females. 3D foot scanning data from 100 Taiwanese and 100 Japanese females were used for the comparison. To avoid the allometry effect, data from 23 Taiwanese and 19 Japanese subjects with foot lengths between 233 and 237 mm were used for the shape comparison. Homologous models created for the right feet of the 42 subjects were analyzed by Multidimensional Scaling. The results showed significant differences in forefoot shape between the two groups, with Taiwanese females having slightly wider feet and a straighter big toe than Japanese females. The comparison of body and foot dimensions indicated that Taiwanese females were taller, heavier and had larger feet than Japanese females, while Japanese females had a significantly larger toe 1 angle. Since some Taiwanese shoemakers adopt the Japanese shoe sizing system for making shoes, the appropriateness of that shoe sizing system was also discussed. The present results provide very useful information for improving shoe last design and footwear fit for Taiwanese females.
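
Multidimensional Scaling of the kind used here embeds pairwise shape differences into a low-dimensional space where group separation can be inspected. A minimal scikit-learn sketch on random stand-in data (real inputs would be the homologous 3D foot models) is shown below.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# hypothetical stand-in for 42 homologous foot models: each foot reduced
# to a fixed-length vector of landmark coordinates (assumed, not scan data)
feet = rng.normal(size=(42, 30))

# embed pairwise (Euclidean) shape differences into 2D for visual comparison
mds = MDS(n_components=2, random_state=0)
coords = mds.fit_transform(feet)
print(coords.shape)  # (42, 2), one point per subject
```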

  9. Comparative analysis of European bat lyssavirus 1 pathogenicity in the mouse model.

    Directory of Open Access Journals (Sweden)

    Elisa Eggerbauer

    2017-06-01

Full Text Available European bat lyssavirus 1 is responsible for most bat rabies cases in Europe. Although EBLV-1 isolates display a high degree of sequence identity, different sublineages exist. In individual isolates various insertions and deletions have been identified, with unknown impact on viral replication and pathogenicity. In order to assess whether different genetic features of EBLV-1 isolates correlate with phenotypic changes, different EBLV-1 variants were compared for pathogenicity in the mouse model. Groups of three mice were infected intracranially (i.c.) with 10² TCID50/ml and groups of six mice were infected intramuscularly (i.m.) with 10⁵ TCID50/ml and 10² TCID50/ml as well as intranasally (i.n.) with 10² TCID50/ml. Significant differences in survival following i.m. inoculation with low doses as well as i.n. inoculation were observed. Also, striking variations in incubation periods following i.c. inoculation and i.m. inoculation with high doses were seen. Hereby, the clinical picture differed between general symptoms, spasms and aggressiveness depending on the inoculation route. Immunohistochemistry of mouse brains showed that the virus distribution in the brain depended on the inoculation route. In conclusion, different EBLV-1 isolates differ in pathogenicity indicating variation which is not reflected in studies of single isolates.

  10. Comparative analysis of European bat lyssavirus 1 pathogenicity in the mouse model.

    Science.gov (United States)

    Eggerbauer, Elisa; Pfaff, Florian; Finke, Stefan; Höper, Dirk; Beer, Martin; Mettenleiter, Thomas C; Nolden, Tobias; Teifke, Jens-Peter; Müller, Thomas; Freuling, Conrad M

    2017-06-01

European bat lyssavirus 1 is responsible for most bat rabies cases in Europe. Although EBLV-1 isolates display a high degree of sequence identity, different sublineages exist. In individual isolates various insertions and deletions have been identified, with unknown impact on viral replication and pathogenicity. In order to assess whether different genetic features of EBLV-1 isolates correlate with phenotypic changes, different EBLV-1 variants were compared for pathogenicity in the mouse model. Groups of three mice were infected intracranially (i.c.) with 10² TCID50/ml and groups of six mice were infected intramuscularly (i.m.) with 10⁵ TCID50/ml and 10² TCID50/ml as well as intranasally (i.n.) with 10² TCID50/ml. Significant differences in survival following i.m. inoculation with low doses as well as i.n. inoculation were observed. Also, striking variations in incubation periods following i.c. inoculation and i.m. inoculation with high doses were seen. Hereby, the clinical picture differed between general symptoms, spasms and aggressiveness depending on the inoculation route. Immunohistochemistry of mouse brains showed that the virus distribution in the brain depended on the inoculation route. In conclusion, different EBLV-1 isolates differ in pathogenicity indicating variation which is not reflected in studies of single isolates.

  11. A comparative study on entrepreneurial attitudes modeled with logistic regression and Bayes nets.

    Science.gov (United States)

    López Puga, Jorge; García García, Juan

    2012-11-01

    Entrepreneurship research is receiving increasing attention in our context, as entrepreneurs are key social agents involved in economic development. We compare the success of the dichotomic logistic regression model and the Bayes simple classifier to predict entrepreneurship, after manipulating the percentage of missing data and the level of categorization in predictors. A sample of undergraduate university students (N = 1230) completed five scales (motivation, attitude towards business creation, obstacles, deficiencies, and training needs) and we found that each of them predicted different aspects of the tendency to business creation. Additionally, our results show that the receiver operating characteristic (ROC) curve is affected by the rate of missing data in both techniques, but logistic regression seems to be more vulnerable when faced with missing data, whereas Bayes nets underperform slightly when categorization has been manipulated. Our study sheds light on the potential entrepreneur profile and we propose to use Bayesian networks as an additional alternative to overcome the weaknesses of logistic regression when missing data are present in applied research.
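
A minimal version of the comparison described above, using scikit-learn's logistic regression and Gaussian naive Bayes (a simple stand-in for a Bayes net classifier) scored by ROC AUC on synthetic scale data, is sketched below; the data-generating weights and sample construction are pure assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# hypothetical stand-ins for the five scale scores per student
X = rng.normal(size=(1230, 5))
# synthetic "tendency to business creation" outcome (assumed weights)
y = (X @ np.array([0.8, 0.5, -0.4, 0.3, 0.2]) + rng.normal(size=1230)) > 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, clf in [("logistic regression", LogisticRegression()),
                  ("naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```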

  12. Comprehensive School Reform Models: A Study Guide for Comparing CSR Models (and How Well They Meet Minnesota's Learning Standards).

    Science.gov (United States)

    St. John, Edward P.; Loescher, Siri; Jacob, Stacy; Cekic, Osman; Kupersmith, Leigh; Musoba, Glenda Droogsma

    A growing number of schools are exploring the prospect of applying for funding to implement a Comprehensive School Reform (CSR) model. But the process of selecting a CSR model can be complicated because it frequently involves self-study and a review of models to determine which models best meet the needs of the school. This study guide is intended…

  13. Nonword Reading: Comparing Dual-Route Cascaded and Connectionist Dual-Process Models with Human Data

    Science.gov (United States)

    Pritchard, Stephen C.; Coltheart, Max; Palethorpe, Sallyanne; Castles, Anne

    2012-01-01

    Two prominent dual-route computational models of reading aloud are the dual-route cascaded (DRC) model, and the connectionist dual-process plus (CDP+) model. While sharing similarly designed lexical routes, the two models differ greatly in their respective nonlexical route architecture, such that they often differ on nonword pronunciation. Neither…

  14. Population modelling to compare chronic external radiotoxicity between individual and population endpoints in four taxonomic groups

    International Nuclear Information System (INIS)

    Alonzo, Frédéric; Hertel-Aas, Turid; Real, Almudena; Lance, Emilie; Garcia-Sanchez, Laurent; Bradshaw, Clare; Vives i Batlle, Jordi; Oughton, Deborah H.; Garnier-Laplace, Jacqueline

    2016-01-01

In this study, we modelled population responses to chronic external gamma radiation in 12 laboratory species (including aquatic and soil invertebrates, fish and terrestrial mammals). Our aim was to compare radiosensitivity between individual and population endpoints and to examine how internationally proposed benchmarks for environmental radioprotection protected species against various risks at the population level. To do so, we used population matrix models, combining life history and chronic radiotoxicity data (derived from laboratory experiments and described in the literature and the FREDERICA database) to simulate changes in population endpoints (net reproductive rate R0, asymptotic population growth rate λ, equilibrium population size Neq) for a range of dose rates. Elasticity analyses of models showed that population responses differed depending on the affected individual endpoint (juvenile or adult survival, delay in maturity or reduction in fecundity), the considered population endpoint (R0, λ or Neq) and the life history of the studied species. Among population endpoints, net reproductive rate R0 showed the lowest EDR10 (effective dose rate inducing 10% effect) in all species, with values ranging from 26 μGy h⁻¹ in the mouse Mus musculus to 38,000 μGy h⁻¹ in the fish Oryzias latipes. For several species, EDR10 for population endpoints were lower than the lowest EDR10 for individual endpoints. Various population level risks, differing in severity for the population, were investigated. Population extinction (predicted when radiation effects caused population growth rate λ to decrease below 1, indicating no population growth in the long term) was predicted for dose rates ranging from 2700 μGy h⁻¹ in fish to 12,000 μGy h⁻¹ in soil invertebrates. A milder risk, that population growth rate λ will be reduced by 10% of the reduction causing extinction, was predicted for dose rates ranging from 24 μGy h⁻¹

  15. Comparative study of measured and modelled number concentrations of nanoparticles in an urban street canyon

    DEFF Research Database (Denmark)

    Kumar, Prashant; Garmory, Andrew; Ketzel, Matthias

    2009-01-01

This study presents a comparison between measured and modelled particle number concentrations (PNCs) in the 10-300 nm size range at different heights in a canyon. The PNCs were modelled using a simple modelling approach (modified Box model, including vertical variation), an Operational Street Pollution Model (OSPM) and the Computational Fluid Dynamics (CFD) code FLUENT. All models disregarded any particle dynamics. CFD simulations have been carried out in a simplified geometry of the selected street canyon. Four different sizes of emission sources have been used in the CFD simulations to assess the effect of source size on mean PNC distributions in the street canyon. The measured PNCs were between a factor of two and three of those from the three models, suggesting that if the model inputs are chosen carefully, even a simplified approach can predict the PNCs as well as more complex models. CFD

  16. Recent results on the spatiotemporal modelling and comparative analysis of Black Death and bubonic plague epidemics

    Science.gov (United States)

    Christakos, G.; Olea, R.A.; Yu, H.-L.

    2007-01-01

    Background: This work demonstrates the importance of spatiotemporal stochastic modelling in constructing maps of major epidemics from fragmentary information, assessing population impacts, searching for possible etiologies, and performing comparative analysis of epidemics. Methods: Based on the theory previously published by the authors and incorporating new knowledge bases, informative maps of the composite space-time distributions were generated for important characteristics of two major epidemics: Black Death (14th century Western Europe) and bubonic plague (19th-20th century Indian subcontinent). Results: The comparative spatiotemporal analysis of the epidemics led to a number of interesting findings: (1) the two epidemics exhibited certain differences in their spatiotemporal characteristics (correlation structures, trends, occurrence patterns and propagation speeds) that need to be explained by means of an interdisciplinary effort; (2) geographical epidemic indicators confirmed in a rigorous quantitative manner the partial findings of isolated reports and time series that Black Death mortality was two orders of magnitude higher than that of bubonic plague; (3) modern bubonic plague is a rural disease hitting harder the small villages in the countryside whereas Black Death was a devastating epidemic that indiscriminately attacked large urban centres and the countryside, and while the epidemic in India lasted uninterruptedly for five decades, in Western Europe it lasted three and a half years; (4) the epidemics had reverse areal extension features in response to annual seasonal variations. Temperature increase at the end of winter led to an expansion of infected geographical area for Black Death and a reduction for bubonic plague, reaching a climax at the end of spring when the infected area in Western Europe was always larger than in India. Conversely, without exception, the infected area during winter was larger for the Indian bubonic plague; (5) during the

  17. Predictability and interpretability of hybrid link-level crash frequency models for urban arterials compared to cluster-based and general negative binomial regression models.

    Science.gov (United States)

    Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S

    2018-03-01

    Machine learning (ML) techniques have higher prediction accuracy compared to conventional statistical methods for crash frequency modelling. However, their black-box nature limits the interpretability. The objective of this research is to combine both ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, M5' model trees method (M5') is introduced and applied to classify the crash data and then calibrate a model for each homogenous class. The data for 1134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina was used to develop and validate models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable to interpret the role of different attributes on crash frequency compared to other developed models.
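
    As a point of reference for the NBR baselines mentioned above, a general negative binomial crash-frequency model can be sketched with statsmodels. The data below are synthetic stand-ins (the Charlotte link data are not reproduced here), and the two predictors are hypothetical.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 500
      aadt = rng.uniform(1, 40, n)        # hypothetical traffic volume (scaled)
      length = rng.uniform(0.1, 2.0, n)   # hypothetical link length (miles)

      # Synthetic overdispersed crash counts (gamma-mixed Poisson)
      mu = np.exp(-1.0 + 0.05 * aadt + 0.4 * length)
      crashes = rng.poisson(mu * rng.gamma(2.0, 0.5, n))

      X = sm.add_constant(np.column_stack([aadt, length]))
      nbr = sm.GLM(crashes, X, family=sm.families.NegativeBinomial()).fit()
      print(nbr.params)   # interpretable coefficients, unlike a black-box ML fit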

  18. Comparative Evaluation of Five Fire Emissions Datasets Using the GEOS-5 Model

    Science.gov (United States)

    Ichoku, C. M.; Pan, X.; Chin, M.; Bian, H.; Darmenov, A.; Ellison, L.; Kucsera, T. L.; da Silva, A. M., Jr.; Petrenko, M. M.; Wang, J.; Ge, C.; Wiedinmyer, C.

    2017-12-01

    Wildfires and other types of biomass burning affect most vegetated parts of the globe, contributing 40% of the annual global atmospheric loading of carbonaceous aerosols, as well as significant amounts of numerous trace gases, such as carbon dioxide, carbon monoxide, and methane. Many of these smoke constituents affect the air quality and/or the climate system directly or through their interactions with solar radiation and cloud properties. However, fire emissions are poorly constrained in global and regional models, resulting in high levels of uncertainty in understanding their real impacts. With the advent of satellite remote sensing of fires and burned areas in the last couple of decades, a number of fire emissions products have become available for use in relevant research and applications. In this study, we evaluated five global biomass burning emissions datasets, namely: (1) GFEDv3.1 (Global Fire Emissions Database version 3.1); (2) GFEDv4s (Global Fire Emissions Database version 4 with small fires); (3) FEERv1 (Fire Energetics and Emissions Research version 1.0); (4) QFEDv2.4 (Quick Fire Emissions Dataset version 2.4); and (5) Fire INventory from NCAR (FINN) version 1.5. Overall, the spatial patterns of biomass burning emissions from these inventories are similar, although the magnitudes of the emissions can be noticeably different. The inventories derived using top-down approaches (QFEDv2.4 and FEERv1) are larger than those based on bottom-up approaches. For example, global organic carbon (OC) emissions in 2008 are: QFEDv2.4 (51.93 Tg), FEERv1 (28.48 Tg), FINN v1.5 (19.48 Tg), GFEDv3.1 (15.65 Tg) and GFEDv4s (13.76 Tg); representing a factor of 3.7 difference between the largest and the smallest. We also used all five biomass-burning emissions datasets to conduct aerosol simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5), and compared the resulting aerosol optical depth (AOD) output to the corresponding retrievals from MODIS

  19. Methodology and results of the impacts of modeling electric utilities: a comparative evaluation of MEMM and REM

    International Nuclear Information System (INIS)

    1981-09-01

    This study compares two models of the US electric utility industry: the EIA's electric utility submodel in the Midterm Energy Market Model (MEMM) and the Baughman-Joskow Regionalized Electricity Model (REM). The method of comparison emphasizes reconciliation of differences in data common to both models, and the performance of simulation experiments to evaluate the empirical significance of certain structural differences in the models. The major research goal was to contrast and compare the effects of alternative modeling structures and data assumptions on model results and, particularly, to consider each model's approach to the impacts of generation technology and fuel use choices on electric utilities. The methodology used was to run the REM model first without and then with a representation of the Powerplant and Industrial Fuel Use Act of 1978, assuming medium supply and demand curves and varying fuel prices. The models and data structures of the two models are described. The original 1978 data used in MEMM and REM are analyzed and compared. The computations and effects of different assumptions on fuel use decisions are discussed. The adjusted REM data required for the experiments are presented. Simulation results of the two models are compared. These results represent projections for 1985, 1990, and 1995 of: US power generation by plant type; amounts of each type of fuel used for power generation; average electricity prices; and the effects of additional or fewer nuclear and coal-fired plants. A significant result is that the REM model exhibits about 7 times as much gas and oil consumption in 1995 as the MEMM model. Continuing simulation experiments on MEMM are recommended to determine whether the input data to MEMM are reasonable and properly adjusted.

  20. A Comparative Study of McDonald’s Wedding Narratives with the Model of Anchoring

    Directory of Open Access Journals (Sweden)

    Mimi Huang

    2016-12-01

    Full Text Available Fast-food giant McDonald’s announced in 2010 that they would start hosting wedding ceremonies and receptions for couples who would like to get married in their restaurants in Hong Kong. This paper conducts a study comparing the differing representations of McDonald’s wedding services through a narrative analytical approach. Specifically, this paper examines relevant discourses surrounding the launch of the corporation’s wedding services in the British media (e.g. the Daily Mail and the Independent) as well as public discourses in Hong Kong (e.g. the McDonald’s Hong Kong website and CNN’s Hong Kong news). It is found that these narratives show a significant degree of discrepancy in depicting McDonald’s wedding stories. These differences further raise the question of how differing narrative strategies are employed to conceptualise the brand’s emergent wedding narratives in a unique social-cultural environment. In the discussion of McDonald’s wedding stories, the focus is placed on the cognitive and linguistic aspects of the discourse. An analytical model of “anchoring” will be proposed and applied to investigate the corporation’s marketing strategies as well as the media’s reaction towards such promotions. It is argued that a narrative can promote or demote a brand’s identity and position through the process of anchoring. It is further argued that anchoring is an important cognitive-psychological strategy in conceptualization and meaning construction. Keywords: narrative inquiry, cognitive narratology, anchors, anchoring, meaning construction

  1. Navigating the complexities of qualitative comparative analysis: case numbers, necessity relations, and model ambiguities.

    Science.gov (United States)

    Thiem, Alrik

    2014-12-01

    In recent years, the method of Qualitative Comparative Analysis (QCA) has been enjoying increasing levels of popularity in evaluation and directly neighboring fields. Its holistic approach to causal data analysis resonates with researchers whose theories posit complex conjunctions of conditions and events. However, due to QCA's relative immaturity, some of its technicalities and objectives have not yet been well understood. In this article, I seek to raise awareness of six pitfalls of employing QCA with regard to the following three central aspects: case numbers, necessity relations, and model ambiguities. Most importantly, I argue that case numbers are irrelevant to the methodological choice of QCA or any of its variants, that necessity is not as simple a concept as it has been suggested by many methodologists, and that doubt must be cast on the determinacy of virtually all results presented in past QCA research. By means of empirical examples from published articles, I explain the background of these pitfalls and introduce appropriate procedures, partly with reference to current software, that help avoid them. QCA carries great potential for scholars in evaluation and directly neighboring areas interested in the analysis of complex dependencies in configurational data. If users beware of the pitfalls introduced in this article, and if they avoid mechanistic adherence to doubtful "standards of good practice" at this stage of development, then research with QCA will gain in quality, as a result of which a more solid foundation for cumulative knowledge generation and well-informed policy decisions will also be created. © The Author(s) 2014.

  2. Genome Sequencing and Comparative Transcriptomics of the Model Entomopathogenic Fungi Metarhizium anisopliae and M. acridum

    Science.gov (United States)

    Shang, Yanfang; Duan, Zhibing; Hu, Xiao; Xie, Xue-Qin; Zhou, Gang; Peng, Guoxiong; Luo, Zhibing; Huang, Wei; Wang, Bing; Fang, Weiguo; Wang, Sibao; Zhong, Yi; Ma, Li-Jun; St. Leger, Raymond J.; Zhao, Guo-Ping; Pei, Yan; Feng, Ming-Guang; Xia, Yuxian; Wang, Chengshu

    2011-01-01

    Metarhizium spp. are being used as environmentally friendly alternatives to chemical insecticides, as model systems for studying insect-fungus interactions, and as a resource of genes for biotechnology. We present a comparative analysis of the genome sequences of the broad-spectrum insect pathogen Metarhizium anisopliae and the acridid-specific M. acridum. Whole-genome analyses indicate that the genome structures of these two species are highly syntenic and suggest that the genus Metarhizium evolved from plant endophytes or pathogens. Both M. anisopliae and M. acridum have a strikingly larger proportion of genes encoding secreted proteins than other fungi, while ∼30% of these have no functionally characterized homologs, suggesting hitherto unsuspected interactions between fungal pathogens and insects. The analysis of transposase genes provided evidence of repeat-induced point mutations occurring in M. acridum but not in M. anisopliae. With the help of a pathogen-host interaction gene database, ∼16% of Metarhizium genes were identified as similar to experimentally verified genes involved in pathogenicity in other fungi, particularly plant pathogens. However, relative to M. acridum, M. anisopliae has evolved with many expanded gene families of proteases, chitinases, cytochrome P450s, polyketide synthases, and nonribosomal peptide synthetases for cuticle degradation, detoxification, and toxin biosynthesis that may facilitate its ability to adapt to heterogeneous environments. Transcriptional analysis of both fungi during early infection processes provided further insights into the genes and pathways involved in infectivity and specificity. Of particular note, M. acridum transcribed distinct G-protein coupled receptors on cuticles from locusts (the natural hosts) and cockroaches, whereas M. anisopliae transcribed the same receptor on both hosts. This study will facilitate the identification of virulence genes and the development of improved biocontrol strains

  3. A comparative study of covariance selection models for the inference of gene regulatory networks.

    Science.gov (United States)

    Stifanelli, Patrizia F; Creanza, Teresa M; Anglani, Roberto; Liuzzi, Vania C; Mukherjee, Sayan; Schena, Francesco P; Ancona, Nicola

    2013-10-01

    The inference, or 'reverse-engineering', of gene regulatory networks from expression data and the description of the complex dependency structures among genes are open issues in modern molecular biology. In this paper we compared three regularized methods of covariance selection for the inference of gene regulatory networks, developed to circumvent the problems arising when the number of observations n is smaller than the number of genes p. The examined approaches provided three alternative estimates of the inverse covariance matrix: (a) the 'PINV' method is based on the Moore-Penrose pseudoinverse, (b) the 'RCM' method performs correlation between regression residuals and (c) the 'ℓ(2C)' method maximizes a properly regularized log-likelihood function. Our extensive simulation studies showed that ℓ(2C) outperformed the other two methods, having the most predictive partial correlation estimates and the highest values of sensitivity to infer conditional dependencies between genes, even when only a small number of observations was available. The application of this method to inferring gene networks of the isoprenoid biosynthesis pathways in Arabidopsis thaliana revealed a negative partial correlation coefficient between the two hubs in the two isoprenoid pathways and, more importantly, provided evidence of cross-talk between genes in the plastidial and the cytosolic pathways. When applied to gene expression data relative to a signature of the HRAS oncogene in human cell cultures, the method revealed 9 genes (p-value<0.0005) directly interacting with HRAS, sharing the same Ras-responsive binding site for the transcription factor RREB1. This result suggests that the transcriptional activation of these genes is mediated by a common transcription factor downstream of Ras signaling. Software implementing the methods in the form of Matlab scripts is available at: http://users.ba.cnr.it/issia/iesina18/CovSelModelsCodes.zip. Copyright © 2013 The Authors. Published by
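
    The 'PINV' estimate follows a standard recipe: take the Moore-Penrose pseudoinverse Ω of the sample covariance (usable even when n < p leaves the covariance singular) and rescale it to partial correlations via ρ_ij = −Ω_ij / √(Ω_ii Ω_jj). A minimal sketch on synthetic data:

      import numpy as np

      def partial_correlations_pinv(X):
          """'PINV'-style estimate: pseudoinverse of the sample covariance,
          rescaled to partial correlations."""
          S = np.cov(X, rowvar=False)
          omega = np.linalg.pinv(S)            # Moore-Penrose pseudoinverse
          d = np.sqrt(np.diag(omega))
          rho = -omega / np.outer(d, d)
          np.fill_diagonal(rho, 1.0)
          return rho

      # n = 20 observations of p = 50 "genes": n < p, so the sample
      # covariance is singular and an ordinary inverse would fail
      X = np.random.default_rng(1).normal(size=(20, 50))
      print(partial_correlations_pinv(X).shape)   # (50, 50)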

  4. A comparative study of approaches to direct methanol fuel cells modelling

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, V.B.; Falcao, D.S.; Pinto, A.M.F.R. [Centro de Estudos de Fenomenos de Transporte, Departamento de Eng. Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal); Rangel, C.M. [Instituto Nacional de Engenharia, Tecnologia e Inovacao, Paco do Lumiar, 22,1649-038 (Portugal)

    2007-03-15

    Fuel cell modelling has received much attention over the past decade in an attempt to better understand the phenomena occurring within the cell. Mathematical models and simulation are needed as tools for design optimization of fuel cells, stacks and fuel cell power systems. Analytical, semi-empirical and mechanistic models for direct methanol fuel cells (DMFC) are reviewed. Effective models have so far been developed describing the fundamental electrochemical and transport phenomena taking place in the cell. More research is required to develop models that can account for the two-phase flows occurring in the anode and cathode of the DMFC. The merits and demerits of the models are presented. Selected models of different categories are implemented and discussed. Finally, one of the selected simplified models is proposed as a computer-aided tool for real-time system-level DMFC calculations. (author)

  5. Mechanistic models of bone cancer induction by radium and plutonium in animals compared to humans

    International Nuclear Information System (INIS)

    Bijwaard, H.

    2006-01-01

    Two-mutation carcinogenesis models of mice and rats injected with ²³⁹Pu and ²²⁶Ra have been derived, extending previous modellings of beagle dogs injected with ²³⁹Pu and ²²⁶Ra and of radium dial painters. In all cases statistically significant parameters could be derived by fitting data from several research groups jointly. This also led to similarly parametrized models for ²³⁹Pu and ²²⁶Ra for all species. For each data set no more than five free model parameters were needed to fit the data adequately. From the toxicity ratios of the animal models for ²³⁹Pu and ²²⁶Ra, together with the human model for ²²⁶Ra, an approximate model for the exposure of humans to ²³⁹Pu has been derived. Relative risk calculations with this approximate model are in good agreement with epidemiological findings for the plutonium-exposed Mayak workers. This promising result may indicate new possibilities for estimating risks for humans from animal experiments. (authors)

  6. A comparative approach to computer aided design model of a dog femur.

    Science.gov (United States)

    Turamanlar, O; Verim, O; Karabulut, A

    2016-01-01

    Computer-assisted technologies offer new opportunities in medical imaging and rapid prototyping in biomechanical engineering. Three-dimensional (3D) modelling of soft tissues and bones is becoming more important. The accuracy of the analysis in modelling processes depends on the outline of the tissues derived from medical images. The aim of this study is to evaluate the accuracy of 3D models of a dog femur derived from computed tomography data by using the point cloud method and the boundary line method in several modelling software packages. Solidworks, Rapidform and 3DSMax software were used to create the 3D models, and the outcomes were evaluated statistically. The most accurate 3D prototype of the dog femur was created with the stereolithography method using a rapid prototyping device. Furthermore, the linearity between the volumes of the software models and the constructed models was investigated. The difference between the software and real models reflects the sensitivity of the software and the devices used.

  7. Examples of EOS Variables as compared to the UMM-Var Data Model

    Science.gov (United States)

    Cantrell, Simon; Lynnes, Chris

    2016-01-01

    In an effort to provide EOSDIS clients a way to discover and use variable data from different providers, a Unified Metadata Model for Variables is being created. This presentation gives an overview of the model and the use cases we are handling.

  8. Interannual sedimentary effluxes of alkalinity in the southern North Sea: Model results compared with summer observations

    OpenAIRE

    Pätsch, Johannes; Kühn, Wilfried; Six, Katharina D.

    2018-01-01

    For the sediments of the central and southern North Sea, different sources of alkalinity generation are quantified by a regional modelling system for the period 2000–2014. For this purpose, a formerly global ocean sediment model coupled with a pelagic ecosystem model is adapted to shelf sea dynamics, where much larger turnover rates occur than in the open and deep ocean. To track alkalinity changes due to different nitrogen-related processes, the open ocean sediment model was extended by t...

  9. TDHF-motivated macroscopic model for heavy ion collisions: a comparative study

    International Nuclear Information System (INIS)

    Biedermann, M.; Reif, R.; Maedler, P.

    1984-01-01

    A detailed investigation of Bertsch's classical TDHF-motivated model for the description of heavy ion collisions is performed. The model agrees well with TDHF and with phenomenological models which include deformation degrees of freedom, as well as with experimental data. Some quantitative deviations from experiment and/or TDHF can be removed to a large extent if the standard model parameters are treated as adjustable parameters within physically reasonable regions of variation.

  10. Determination of the Hamiltonian matrix for IBM4 and comparison of its eigenvalues with the shell model

    International Nuclear Information System (INIS)

    Slyman, S.; Hadad, S.; Souman, H.

    2004-01-01

    The Hamiltonian is determined using the OAI procedure and the mapping of IBM4 states onto the shell model, which is based on the seniority classification scheme. A boson sub-matrix of the shell model Hamiltonian for the (sd)⁴ configuration is constructed, and is proved to produce the same eigenvalues as the shell model Hamiltonian for the corresponding fermion states. (authors)

  11. The Application of FIA-based Data to Wildlife Habitat Modeling: A Comparative Study

    Science.gov (United States)

    Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Randall J. Schultz

    2005-01-01

    We evaluated the capability of two types of models, one based on spatially explicit variables derived from FIA data and one using so-called traditional habitat evaluation methods, for predicting the presence of cavity-nesting bird habitat in Fishlake National Forest, Utah. Both models performed equally well, in measures of predictive accuracy, with the FIA-based model...

  12. Aluminium in an ocean general circulation model compared with the West Atlantic Geotraces cruises

    CSIR Research Space (South Africa)

    Van Hulten, M

    2013-10-01

    Full Text Available A model of aluminium has been developed and implemented in an Ocean General Circulation Model (NEMO-PISCES). In the model, aluminium enters the ocean by means of dust deposition. The internal oceanic processes are described by advection, mixing...

  13. Comparing Three Patterns of Strengths and Weaknesses Models for the Identification of Specific Learning Disabilities

    Science.gov (United States)

    Miller, Daniel C.; Maricle, Denise E.; Jones, Alicia M.

    2016-01-01

    Processing Strengths and Weaknesses (PSW) models have been proposed as a method for identifying specific learning disabilities. Three PSW models were examined for their ability to predict expert identified specific learning disabilities cases. The Dual Discrepancy/Consistency Model (DD/C; Flanagan, Ortiz, & Alfonso, 2013) as operationalized by…

  14. A COMPARATIVE STUDY OF FORECASTING MODELS FOR TREND AND SEASONAL TIME SERIES: DOES A COMPLEX MODEL ALWAYS YIELD A BETTER FORECAST THAN SIMPLE MODELS?

    Directory of Open Access Journals (Sweden)

    Suhartono Suhartono

    2005-01-01

    Full Text Available Many business and economic time series are non-stationary time series that contain trend and seasonal variations. Seasonality is a periodic and recurrent pattern caused by factors such as weather, holidays, or repeating promotions. A stochastic trend is often accompanied by seasonal variations and can have a significant impact on various forecasting methods. In this paper, we investigate and compare some forecasting methods for modeling time series with both trend and seasonal patterns. These methods are the Winter's, Decomposition, Time Series Regression, ARIMA and Neural Networks models. In this empirical research, we study the effectiveness of the forecasting performance, particularly to answer whether a complex method always gives a better forecast than a simpler method. We use real data, namely the airline passenger data. The results show that the more complex model does not always yield a better result than a simpler one. Additionally, we also find scope for further research, especially the use of hybrid models that combine several forecasting methods to obtain better forecasts, for example a combination of decomposition (as data preprocessing) and a neural network model.
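
    The paper's simple-versus-complex question can be mimicked on synthetic data. The sketch below (Python; the series is a generated stand-in for the airline passenger data, which is not reproduced here) scores a seasonal-naive forecast against a trend-plus-seasonal-dummies regression using MAPE on a 12-month holdout:

      import numpy as np

      def mape(actual, forecast):
          return 100.0 * np.mean(np.abs((actual - forecast) / actual))

      # Generated stand-in series: linear trend + monthly seasonality + noise
      rng = np.random.default_rng(2)
      t = np.arange(144)
      y = 100 + 2.0 * t + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 144)
      train, test = y[:-12], y[-12:]

      # Simple method: seasonal naive (repeat the last observed year)
      naive_fc = train[-12:]

      # "More complex" method: OLS on a linear trend plus monthly dummies
      tt = np.arange(len(train))
      X = np.column_stack([np.ones(len(train)), tt, np.eye(12)[tt % 12][:, 1:]])
      beta, *_ = np.linalg.lstsq(X, train, rcond=None)
      tf = np.arange(len(train), len(train) + 12)
      Xf = np.column_stack([np.ones(12), tf, np.eye(12)[tf % 12][:, 1:]])
      ols_fc = Xf @ beta

      print(f"seasonal naive MAPE:     {mape(test, naive_fc):.2f}%")
      print(f"trend+seasonal OLS MAPE: {mape(test, ols_fc):.2f}%")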

  15. Contextualizing Teacher Autonomy in Time and Space: A Model for Comparing Various Forms of Governing the Teaching Profession

    Science.gov (United States)

    Wermke, Wieland; Höstfält, Gabriella

    2014-01-01

    This study aims to develop a model for comparing different forms of teacher autonomy in various national contexts and at different times. Understanding and explaining local differences and global similarities in the teaching profession in a globalized world require conceptions that contribute to further theorization of comparative and…

  16. Projecting future expansion of invasive species: comparing and improving methodologies for species distribution modeling.

    Science.gov (United States)

    Mainali, Kumar P; Warren, Dan L; Dhileepan, Kunjithapatham; McConnachie, Andrew; Strathie, Lorraine; Hassan, Gul; Karki, Debendra; Shrestha, Bharat B; Parmesan, Camille

    2015-12-01

    Modeling the distributions of species, especially of invasive species in non-native ranges, involves multiple challenges. Here, we developed some novel approaches to species distribution modeling aimed at reducing the influences of such challenges and improving the realism of projections. We estimated species-environment relationships for Parthenium hysterophorus L. (Asteraceae) with four modeling methods run with multiple scenarios of (i) sources of occurrences and geographically isolated background ranges for absences, (ii) approaches to drawing background (absence) points, and (iii) alternate sets of predictor variables. We further tested various quantitative metrics of model evaluation against biological insight. Model projections were very sensitive to the choice of training dataset. Model accuracy was much improved using a global dataset for model training, rather than restricting data input to the species' native range. AUC score was a poor metric for model evaluation and, if used alone, was not a useful criterion for assessing model performance. Projections away from the sampled space (i.e., into areas of potential future invasion) were very different depending on the modeling methods used, raising questions about the reliability of ensemble projections. Generalized linear models gave very unrealistic projections far away from the training region. Models that efficiently fit the dominant pattern, but exclude highly local patterns in the dataset and capture interactions as they appear in data (e.g., boosted regression trees), improved generalization of the models. Biological knowledge of the species and its distribution was important in refining choices about the best set of projections. A post hoc test conducted on a new Parthenium dataset from Nepal validated excellent predictive performance of our 'best' model. We showed that vast stretches of currently uninvaded geographic areas on multiple continents harbor highly suitable habitats for parthenium

  17. Noise model for serrated trailing edges compared to wind tunnel measurements

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Shen, Wen Zhong

    2016-01-01

    A new CFD RANS based method to predict the far field sound pressure emitted from an aerofoil with serrated trailing edge has been developed. The model was validated by comparison to measurements conducted in the Virginia Tech Stability Wind Tunnel. The model predicted 3 dB lower sound pressure levels, but the tendencies for the different configurations were predicted correctly. Therefore the model can be used to optimise the serration geometry. A disadvantage of the new model is that the computational costs are significantly higher than for the Amiet model for a straight trailing edge. However...

  18. A comparative study of various inflow boundary conditions and turbulence models for wind turbine wake predictions

    Science.gov (United States)

    Tian, Lin-Lin; Zhao, Ning; Song, Yi-Lei; Zhu, Chun-Ling

    2018-05-01

    This work is devoted to perform systematic sensitivity analysis of different turbulence models and various inflow boundary conditions in predicting the wake flow behind a horizontal axis wind turbine represented by an actuator disc (AD). The tested turbulence models are the standard k-𝜀 model and the Reynolds Stress Model (RSM). A single wind turbine immersed in both uniform flows and in modeled atmospheric boundary layer (ABL) flows is studied. Simulation results are validated against the field experimental data in terms of wake velocity and turbulence intensity.

  19. Comparing effects of fire modeling methods on simulated fire patterns and succession: a case study in the Missouri Ozarks

    Science.gov (United States)

    Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson

    2008-01-01

    We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...

  20. Comparing Cognitive Models of Domain Mastery and Task Performance in Algebra: Validity Evidence for a State Assessment

    Science.gov (United States)

    Warner, Zachary B.

    2013-01-01

    This study compared an expert-based cognitive model of domain mastery with student-based cognitive models of task performance for Integrated Algebra. Interpretations of student test results are limited by experts' hypotheses of how students interact with the items. In reality, the cognitive processes that students use to solve each item may be…

  1. MAX-DOAS tropospheric nitrogen dioxide column measurements compared with the Lotos-Euros air quality model

    NARCIS (Netherlands)

    Vlemmix, T.; Eskes, H.J.; Piters, A.J.M.; Schaap, M.; Sauter, F.J.; Kelder, H.; Levelt, P.F.

    2015-01-01

    A 14-month data set of MAX-DOAS (Multi-Axis Differential Optical Absorption Spectroscopy) tropospheric NO₂ column observations in De Bilt, the Netherlands, has been compared with the regional air quality model Lotos-Euros. The model was run on a 7×7 km² grid, the same resolution as the emission

  2. A comparative study on GM (1,1) and FRMGM (1,1) model in forecasting FBM KLCI

    Science.gov (United States)

    Ying, Sah Pei; Zakaria, Syerrina; Mutalib, Sharifah Sakinah Syed Abd

    2017-11-01

    The FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBM KLCI) is a group of indexes combined in a standardized way and is used to measure the overall Malaysian market across time. Although a composite index can give investors ideas about the stock market, it is hard to predict accurately because it is volatile, so it is necessary to identify the best model to forecast FBM KLCI. The objective of this study is to determine the more accurate forecasting model between the GM (1,1) model and the Fourier Residual Modification GM (1,1) (FRMGM (1,1)) model for forecasting FBM KLCI. In this study, the actual daily closing data of FBM KLCI were collected from January 1, 2016 to March 15, 2016. The GM (1,1) model and the FRMGM (1,1) model were used to build the grey model and to test the forecasting power of both models. Mean Absolute Percentage Error (MAPE) was used as the measure to determine the best model. Values forecast by the FRMGM (1,1) model differ less from the actual values than those of the GM (1,1) model for both in-sample and out-of-sample data. Results from MAPE also show that the error of the FRMGM (1,1) model is lower than that of the GM (1,1) model for in-sample and out-of-sample data. These results show that the FRMGM (1,1) model is better than the GM (1,1) model for forecasting FBM KLCI.
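
    The GM (1,1) recursion itself is compact enough to sketch. Below is a minimal implementation with MAPE scoring (Python); the input series is illustrative, not actual FBM KLCI closings, and the Fourier correction of residuals that defines FRMGM (1,1) is omitted here.

      import numpy as np

      def gm11_forecast(x0, horizon):
          """GM (1,1): fit a first-order grey model on the accumulated (AGO)
          series and return fitted values plus `horizon` forecasts."""
          n = len(x0)
          x1 = np.cumsum(x0)                        # accumulated series (AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) sequence
          B = np.column_stack([-z1, np.ones(n - 1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(n + horizon)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          return np.r_[x1_hat[0], np.diff(x1_hat)]  # inverse AGO

      def mape(actual, pred):
          return 100 * np.mean(np.abs((actual - pred) / actual))

      # Illustrative index-like closings (NOT actual FBM KLCI data)
      x0 = np.array([1680.0, 1672.5, 1665.4, 1671.8, 1660.2, 1654.9,
                     1662.3, 1668.1])
      fit = gm11_forecast(x0[:6], horizon=2)
      print("in-sample MAPE :", round(mape(x0[:6], fit[:6]), 3), "%")
      print("out-sample MAPE:", round(mape(x0[6:], fit[6:]), 3), "%")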

  3. Comparing and Contrasting Traditional Membrane Bioreactor Models with Novel Ones Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Parneet Paul

    2013-02-01

    Full Text Available The computer modelling and simulation of wastewater treatment plants and their specific technologies, such as membrane bioreactors (MBRs), are becoming increasingly useful to consultant engineers when designing, upgrading, retrofitting, operating and controlling these plants. This research uses traditional phenomenological mechanistic models based on MBR filtration and biochemical processes to measure the effectiveness of alternative and novel time series models based upon input–output system identification methods. Both model types are calibrated and validated using similar plant layouts and data sets derived for this purpose. Results prove that although both approaches have their advantages, they also have specific disadvantages. In conclusion, the MBR plant designer and/or operator who wishes to use good-quality, calibrated models to gain a better understanding of their process should carefully consider which model type is selected based upon what their initial modelling objectives are. Each situation usually proves unique.

  4. A comparative study of behaviors of ventilated supercavities between experimental models with different mounting configurations

    International Nuclear Information System (INIS)

    Lee, Seung-Jae; Karn, Ashish; Arndt, Roger E A; Kawakami, Ellison

    2016-01-01

    Small-scale water tunnel experiments of the phenomenon of supercavitation can be carried out broadly using two different kinds of experimental models: in the first model (forward facing model, or FFM), the incoming flow first interacts with the cavitator at the front, which is connected to the strut through a ventilation pipe. The second model has the strut and the ventilation pipe preceding the cavitator (backward facing model, or BFM). This is the continuation of a water tunnel study of the effects of unsteady flows on axisymmetric supercavities. In this study, the unwanted effect of test model configuration on supercavity shape in periodic flows was explored through a comparison of the FFM and BFM models. In our experiments, it was found that periodic gust flows have only a minimal effect on the maximum diameter and that the cavity length can be shortened above a certain vertical velocity of the periodic flow. These findings appear to be robust regardless of the model configuration. (paper)

  5. Comparing statistical and machine learning classifiers: alternatives for predictive modeling in human factors research.

    Science.gov (United States)

    Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann

    2003-01-01

    Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.
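
    A compressed sketch of such a statistical-versus-machine-learning comparison, using scikit-learn on synthetic stand-in data of the same shape (37 students, curriculum scores, pass/fail). Logistic regression and a depth-limited decision tree stand in for the compared families; the paper's genetic programming variant is not reproduced here.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(3)

      # Synthetic stand-in: 37 students, four curriculum scores each,
      # binary pass/fail outcome (NOT the study's CDL data)
      X = rng.normal(70, 10, size=(37, 4))
      y = (X.mean(axis=1) + rng.normal(0, 5, 37) > 70).astype(int)

      for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                        ("decision tree", DecisionTreeClassifier(max_depth=3))]:
          acc = cross_val_score(clf, X, y, cv=5).mean()
          print(f"{name}: mean CV accuracy = {acc:.2f}")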

  7. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    Science.gov (United States)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    Survival time of patients with a disease and the incidence of that particular disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that diseases that occur rarely could have shorter survival times, or vice versa. Due to this fact, jointly modelling these two variables will provide interesting and certainly improved results compared with modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model, to jointly model survival and count. As Artificial Neural Networks (ANN) have become a most powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
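
    The three fit measures named above are one-liners; the sketch below (Python, toy values) spells them out, with "absolute mean error (AME)" interpreted, as an assumption, as the mean of absolute errors.

      import numpy as np

      def rmse(y, yhat):
          return np.sqrt(np.mean((y - yhat) ** 2))

      def ame(y, yhat):
          return np.mean(np.abs(y - yhat))   # assumed reading of "AME"

      def corr(y, yhat):
          return np.corrcoef(y, yhat)[0, 1]

      y    = np.array([12.0, 30.5,  8.2, 22.1, 17.6])   # toy observations
      yhat = np.array([13.1, 28.0,  9.0, 24.0, 16.2])   # toy predictions
      print(f"RMSE={rmse(y, yhat):.3f}  AME={ame(y, yhat):.3f}  "
            f"R={corr(y, yhat):.3f}")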

  8. A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part I: model planning

    Directory of Open Access Journals (Sweden)

    Biagioli Bonizella

    2007-11-01

    Full Text Available Background: Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, in consideration of clinical applications. Methods: Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of mathematical relationships is not necessary if the reader is only interested in perceiving the practical meaning of model assumptions, weaknesses and strengths from a user point of view. Results: Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. k-nearest neighbour may be a valid non-parametric technique, though computational cost and the need for large data storage are major weaknesses of this approach. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematical. Conclusion: Knowledge of model

  9. A comparative assessment of GIS-based data mining models and a novel ensemble model in groundwater well potential mapping

    Science.gov (United States)

    Naghibi, Seyed Amir; Moghaddam, Davood Davoodi; Kalantar, Bahareh; Pradhan, Biswajeet; Kisi, Ozgur

    2017-05-01

    In recent years, the application of ensemble models has increased tremendously in various types of natural hazard assessment, such as for landslides and floods. However, the application of this kind of robust model in groundwater potential mapping is relatively new. This study applied four data mining algorithms, including AdaBoost, Bagging, generalized additive model (GAM), and Naive Bayes (NB) models, to map groundwater potential. Then, a novel frequency ratio data mining ensemble model (FREM) was introduced and evaluated. For this purpose, eleven groundwater conditioning factors (GCFs), including altitude, slope aspect, slope angle, plan curvature, stream power index (SPI), river density, distance from rivers, topographic wetness index (TWI), land use, normalized difference vegetation index (NDVI), and lithology, were mapped. About 281 well locations with high potential were selected. Wells were randomly partitioned into two classes for training the models (70% or 197) and validating them (30% or 84). AdaBoost, Bagging, GAM, and NB algorithms were employed to produce groundwater potential maps (GPMs). The GPMs were categorized into potential classes using the natural break classification scheme. In the next stage, frequency ratio (FR) values were calculated for the outputs of the four aforementioned models and summed, and finally a GPM was produced using FREM. For validating the models, the area under the receiver operating characteristic (ROC) curve was calculated. The area under the ROC curve for the prediction dataset was 94.8, 93.5, 92.6, 92.0, and 84.4% for the FREM, Bagging, AdaBoost, GAM, and NB models, respectively. The results indicated that FREM had the best performance among all the models. The better performance of the FREM model could be related to reduction of overfitting and possible errors. Other models, such as AdaBoost, Bagging, GAM, and NB, also produced acceptable performance in groundwater modelling. The GPMs produced in the current study may facilitate groundwater exploitation
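
    The FREM construction described above reduces to: bin each model's output, replace each bin by its frequency ratio (share of wells in the bin over share of cells in the bin), and sum across models. A compact sketch on fully synthetic labels (all data invented):

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def frequency_ratio(scores, is_well, bins=5):
          """Map each cell's model score to the frequency ratio of its
          score bin: (share of wells in bin) / (share of cells in bin)."""
          edges = np.quantile(scores, np.linspace(0, 1, bins + 1))
          idx = np.clip(np.digitize(scores, edges[1:-1]), 0, bins - 1)
          fr = np.zeros(bins)
          for b in range(bins):
              in_bin = idx == b
              well_share = is_well[in_bin].sum() / is_well.sum()
              fr[b] = well_share / max(in_bin.mean(), 1e-9)
          return fr[idx]

      rng = np.random.default_rng(4)
      truth = rng.random(1000) < 0.3          # synthetic "well present" labels
      models = [truth + rng.normal(0, s, 1000) for s in (0.6, 0.8, 1.0, 1.2)]

      frem = sum(frequency_ratio(m, truth) for m in models)  # summed FR maps
      print("FREM AUC:", round(roc_auc_score(truth, frem), 3))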

  10. Detailed characterizations of a Comparative Reactivity Method (CRM) instrument: experiments vs. modelling

    Science.gov (United States)

    Michoud, V.; Hansen, R. F.; Locoge, N.; Stevens, P. S.; Dusanter, S.

    2015-04-01

    The hydroxyl radical (OH) is an important oxidant in the daytime troposphere that controls the lifetime of most trace gases, whose oxidation leads to the formation of harmful secondary pollutants such as ozone (O3) and Secondary Organic Aerosols (SOA). In spite of the importance of OH, uncertainties remain concerning its atmospheric budget, and integrated measurements of the total sink of OH can help reduce these uncertainties. In this context, several methods have been developed to measure the first-order loss rate of ambient OH, called total OH reactivity. Among these techniques, the Comparative Reactivity Method (CRM) is promising and has already been widely used in the field and in atmospheric simulation chambers. This technique relies on monitoring competitive OH reactions between a reference molecule (pyrrole) and compounds present in ambient air inside a sampling reactor. However, artefacts and interferences exist for this method, and a thorough characterization of the CRM technique is needed. In this study, we present a detailed characterization of a CRM instrument, assessing the corrections that need to be applied to ambient measurements. The main corrections are, in the order of their integration in the data processing: (1) a correction for a change in relative humidity between zero air and ambient air, (2) a correction for the formation of spurious OH when artificially produced HO2 reacts with NO in the sampling reactor, and (3) a correction for a deviation from pseudo first-order kinetics. The dependences of these artefacts on various measurable parameters, such as the pyrrole-to-OH ratio or the bimolecular reaction rate constants of ambient trace gases with OH, are also studied. From these dependences, parameterizations are proposed to correct the OH reactivity measurements for the abovementioned artefacts. A comparison of experimental and simulation results is then discussed. The simulations were performed using a 0-D box model including either (1) a
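
    For orientation, the reactivity expression generally used in CRM work (quoted from the broader CRM literature; it is not restated in this excerpt) combines the pyrrole levels of the three operating stages, where C1 is the pyrrole concentration without OH, C2 with OH in zero air, C3 with OH in ambient air, and k_Pyr+OH the pyrrole + OH rate constant:

      \[
        R_{\mathrm{air}} \;=\; \frac{C_3 - C_2}{C_1 - C_3}\; k_{\mathrm{Pyr+OH}}\; C_1
      \]

    The corrections characterized in the study adjust C2 and C3 before this expression is evaluated.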

  11. Comparative efficacy of curcumin and paromomycin against Cryptosporidium parvum infection in a BALB/c model.

    Science.gov (United States)

    Asadpour, Mohammad; Namazi, Fatemeh; Razavi, Seyed Mostafa; Nazifi, Saeed

    2018-01-30

    Cryptosporidium is a ubiquitous protozoan parasite causing gastrointestinal disorders in various hosts worldwide. The disease is self-limiting in immunocompetent individuals but life-threatening in immunodeficient ones. Investigations to find an effective drug for the complete elimination of Cryptosporidium infection are ongoing and urgently needed. The current study was undertaken to examine the anti-cryptosporidial efficacy of curcumin in experimentally infected mice compared with that of paromomycin. Oocysts were isolated from a pre-weaned dairy calf and identified as Cryptosporidium parvum using a nested polymerase chain reaction (PCR) on the small subunit ribosomal RNA (SSU rRNA) gene and sequencing analysis. One hundred and ten female BALB/c mice were divided into five groups. Group 1 was infected and treated with curcumin; Group 2 was infected and treated with paromomycin; Group 3 was infected without treatment; Group 4 included uninfected mice treated with curcumin; and Group 5 included uninfected mice treated with distilled water for 11 successive days, starting on the first day of oocyst shedding. Oocyst shedding was recorded daily. At days 0, 3, 7, and 11 post treatment, five mice from each group were humanely killed; jejunum and ileum tissue samples were processed for histopathological evaluation and, simultaneously, counting of oocysts on villi. Furthermore, total antioxidant capacity (TAC) and malondialdehyde (MDA) concentrations in affected tissues were also measured in the different groups. With treatment, tissue lesions and the number of oocysts on villi of both jejunum and ileum decreased in a time-dependent manner. In comparison with Group 3, oocyst shedding stopped at the end of the treatment period in both Groups 1 and 2, without recurrence at 10 days after drug withdrawal. Also, TAC was increased and MDA concentrations were decreased in Group 1. Moreover, paromomycin showed acceptable treatment outcomes during the experiment and its

  12. Who influenced inflation persistence in China? A comparative analysis of the standard CIA model and CIA model with endogenous money

    Directory of Open Access Journals (Sweden)

    Liao Ying

    2013-12-01

    Full Text Available In this paper, we examine the influencing factors of inflation persistence in China’s economy using the DSGE approach. Two monetary DSGE models are estimated, namely, a standard CIA model and a CIA model with a Taylor rule. This article uses the Bayesian method to estimate the models, and the estimates and inferences are credible because the Markov chains reached convergence. The results show that the augmented model outperforms the standard CIA model in terms of capturing inflation persistence. Further studies show that inflation persistence mainly comes from the persistence of the money supply, while money supply uncertainty, the reaction coefficient of monetary growth to productivity, productivity persistence and productivity uncertainty have a smaller impact on inflation persistence. Changes in monetary policy have little effect on inflation persistence.

  13. Comparing the performance of 11 crop simulation models in predicting yield response to nitrogen fertilization

    DEFF Research Database (Denmark)

    Salo, T J; Palosuo, T; Kersebaum, K C

    2016-01-01

    Eleven widely used crop simulation models (APSIM, CERES, CROPSYST, COUP, DAISY, EPIC, FASSET, HERMES, MONICA, STICS and WOFOST) were tested using a spring barley (Hordeum vulgare L.) data set under varying nitrogen (N) fertilizer rates from three experimental years in the boreal climate of Jokioinen, Finland. This is the largest standardized crop model inter-comparison under different levels of N supply to date. The models were calibrated using data from 2002 and 2008, of which 2008 included six N rates ranging from 0 to 150 kg N/ha. Calibration data consisted of weather, soil, phenology, leaf area … ranged from 170 to 870 kg/ha. During the test year 2009, most models failed to accurately reproduce the observed low yield without N fertilizer as well as the steep yield response to N applications. The multi-model predictions were closer to observations than most single-model predictions, but multi

  14. Methods of comparing associative models and an application to retrospective revaluation.

    Science.gov (United States)

    Witnauer, James E; Hutchings, Ryan; Miller, Ralph R

    2017-11-01

    Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
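
    One concrete way to run the kind of comparison advocated above: optimize each candidate model's free parameters for the specific data being fit (a crude stand-in for hill climbing), then penalize by parameter count via BIC. Everything below is a toy illustration, not one of the associative models discussed in the paper.

      import numpy as np
      from scipy.optimize import minimize

      def bic(k, n, sse):
          """Gaussian-error BIC up to an additive constant."""
          return n * np.log(sse / n) + k * np.log(n)

      # Toy "conditioning" data: responding rises over trials
      trials = np.arange(1, 21)
      resp = (1 - np.exp(-0.3 * trials)
              + np.random.default_rng(5).normal(0, 0.05, 20))

      def sse_exp(p):   # 1-parameter exponential learning curve
          return np.sum((resp - (1 - np.exp(-p[0] * trials))) ** 2)

      def sse_hyp(p):   # 2-parameter hyperbolic learning curve
          return np.sum((resp - p[1] * trials / (p[0] + trials)) ** 2)

      fit1 = minimize(sse_exp, x0=[0.1], method="Nelder-Mead")
      fit2 = minimize(sse_hyp, x0=[1.0, 1.0], method="Nelder-Mead")
      print("exponential BIC:", round(bic(1, 20, fit1.fun), 2))
      print("hyperbolic  BIC:", round(bic(2, 20, fit2.fun), 2))

    The lower BIC wins: both models get their best shot at the data, and the extra parameter only pays off if it buys enough fit.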

  15. A comparative analysis of reactor lower head debris cooling models employed in the existing severe accident analysis codes

    International Nuclear Information System (INIS)

    Ahn, K.I.; Kim, D.H.; Kim, S.B.; Kim, H.D.

    1998-08-01

    MELCOR and MAAP4 are representative severe accident analysis codes which have been developed for the integral analysis of phenomenological reactor lower head corium cooling behavior. The main objective of the present study is to identify the merits and disadvantages of each relevant model through a comparative analysis of the lower plenum corium cooling models employed in these two codes. The final results will be utilized for the development of the LILAC phenomenological models and for the continuous improvement of the existing MELCOR reactor lower head models, work that is currently being performed at KAERI. For these purposes, nine reference models featuring lower head corium behavior were first selected based on the existing experimental evidence and related models. The main features of the selected models were then critically analyzed, and finally the merits and disadvantages of each corresponding model were summarized from the viewpoint of realistic corium behavior and reasonable modeling. Based on this evidence, potential improvements for developing more advanced models are summarized and presented. The present study has focused on the qualitative comparison of the models, so a more detailed quantitative analysis is strongly required to reach final conclusions on their merits and disadvantages. In addition, in order to compensate for the limitations of the current models, further studies are required that closely relate detailed mechanistic models of molten material movement and phase-change heat transfer in porous media to the existing simple models. (author). 36 refs

  17. [Categories and characteristics of BPH drug evaluation models: a comparative study].

    Science.gov (United States)

    Huang, Dong-Yan; Wu, Jian-Hui; Sun, Zu-Yue

    2014-02-01

    Benign prostatic hyperplasia (BPH) is a common disease worldwide in men over 50 years old, and its exact cause remains largely unknown. In order to elucidate its pathogenesis and screen effective drugs for the treatment of BPH, many BPH models have been developed at home and abroad. This article presents a comprehensive analysis of the categories and characteristics of BPH drug evaluation models, highlighting the application value of each model, to provide a theoretical basis for the development of BPH drugs.

  18. Two-Fluid Mathematical Models for Blood Flow in Stenosed Arteries: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Sankar DS

    2009-01-01

    Full Text Available The pulsatile flow of blood through stenosed arteries is analyzed by assuming the blood to be a two-fluid model with the suspension of all the erythrocytes in the core region as a non-Newtonian fluid and the plasma in the peripheral layer as a Newtonian fluid. The non-Newtonian fluid in the core region of the artery is assumed to be (i) a Herschel-Bulkley fluid or (ii) a Casson fluid. The perturbation method is used to solve the resulting system of non-linear partial differential equations. Expressions for various flow quantities are obtained for the two-fluid Casson model. Expressions for the flow quantities obtained by Sankar and Lee (2006) for the two-fluid Herschel-Bulkley model are used to get the data for comparison. It is found that the plug flow velocity and velocity distribution of the two-fluid Casson model are considerably higher than those of the two-fluid Herschel-Bulkley model. It is also observed that the pressure drop, plug core radius, wall shear stress and the resistance to flow are significantly lower for the two-fluid Casson model than for the two-fluid Herschel-Bulkley model. Hence, the two-fluid Casson model would be more useful than the two-fluid Herschel-Bulkley model for analyzing blood flow through stenosed arteries.
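
    For reference, the two constitutive laws named for the core fluid take their standard textbook forms below (τ_y is the yield stress, γ̇ the shear rate); these are the general relations, not the paper's specific perturbation solutions.

      \begin{align*}
        \text{Herschel--Bulkley:} \quad & \tau = \tau_y + k\,\dot{\gamma}^{\,n},
            && \tau \ge \tau_y \\
        \text{Casson:} \quad & \sqrt{\tau} = \sqrt{\tau_y}
            + \sqrt{\mu\,\dot{\gamma}}, && \tau \ge \tau_y
      \end{align*}

    Both laws include a yield stress, which is what produces the plug-flow core discussed in the abstract; they differ in how stress grows with shear rate once the fluid yields.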

  19. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based

  20. A Quantitative Comparative Study of Blended and Traditional Models in the Secondary Advanced Placement Statistics Classroom

    Science.gov (United States)

    Owens, Susan T.

    2017-01-01

    Technology is becoming an integral tool in the classroom and can make a positive impact on how the students learn. This quantitative comparative research study examined gender-based differences among secondary Advanced Placement (AP) Statistic students comparing Educational Testing Service (ETS) College Board AP Statistic examination scores…