WorldWideScience

Sample records for metrazol models comparing

  1. ANTICONVULSANT AND ANTIEPILEPTIC ACTIONS OF 2-DEOXY-D-GLUCOSE IN EPILEPSY MODELS

    Science.gov (United States)

    Stafstrom, Carl E.; Ockuly, Jeffrey C.; Murphree, Lauren; Valley, Matthew T.; Roopra, Avtar; Sutula, Thomas P.

    2009-01-01

    Objective: Conventional anticonvulsants reduce neuronal excitability through effects on ion channels and synaptic function. Anticonvulsant mechanisms of the ketogenic diet remain incompletely understood. Since carbohydrates are restricted in patients on the ketogenic diet, we evaluated the effects of limiting carbohydrate availability by reducing glycolysis using the glycolytic inhibitor 2-deoxy-D-glucose (2DG) in experimental models of seizures and epilepsy. Methods: Acute anticonvulsant actions of 2DG were assessed in vitro in rat hippocampal slices perfused with 7.5 mM [K+]o, 4-aminopyridine (4-AP), or bicuculline, and in vivo against seizures evoked by 6 Hz stimulation in mice, audiogenic stimulation in Frings mice, and maximal electroshock and subcutaneous Metrazol in rats. Chronic antiepileptic effects of 2DG were evaluated in rats kindled from the olfactory bulb or perforant path. Results: 2DG (10 mM) reduced interictal epileptiform bursts induced by high [K+]o, 4-AP and bicuculline, and electrographic seizures induced by high [K+]o in CA3 of hippocampus. 2DG reduced seizures evoked by 6 Hz stimulation in mice (ED50 = 79.7 mg/kg) and audiogenic stimulation in Frings mice (ED50 = 206.4 mg/kg). 2DG exerted chronic antiepileptic action by increasing afterdischarge thresholds in perforant path (but not olfactory bulb) kindling and caused a 2-fold slowing in progression of kindled seizures at both stimulation sites. 2DG did not protect against maximal electroshock or Metrazol seizures. Interpretation: The glycolytic inhibitor 2DG exerts acute anticonvulsant and chronic antiepileptic actions and has a novel pattern of effectiveness in preclinical screening models. These results identify metabolic regulation as a potential therapeutic target for seizure suppression and modification of epileptogenesis. PMID:19399874

  2. Beta-Blockers: An Abstracted Bibliography.

    Science.gov (United States)

    1989-04-04

    B. S. TITLE: Ocular Effects of Acebutolol and Propranolol. REFERENCE: Metabolic, Pediatric and Systemic Ophthalmology, Vol. 4, pp. 87-88. DRUGS... PROCEDURES: Tested for drug influence on barbiturate hypnosis, analgesia, anticonvulsant effect, metrazol convulsions, strychnine convulsions, audiogenic

  3. A single episode of neonatal seizures alters the cerebellum of immature rats

    Czech Academy of Sciences Publication Activity Database

    Lomoio, S.; Necchi, D.; Mareš, Vladislav; Scherini, E.

    2011-01-01

    Vol. 93, No. 1 (2011), pp. 17-24, ISSN 0920-1211. Institutional research plan: CEZ:AV0Z50110509. Keywords: metrazol seizures * cerebellum * Purkinje cells * GluR2/3 * GLT1. Subject RIV: FH - Neurology. Impact factor: 2.290, year: 2011

  4. Effect of epileptogenic agents on the incorporation of ³H-glycine into proteins in the cat's cerebral cortex

    Energy Technology Data Exchange (ETDEWEB)

    Rojik, I.; Feher, O.

    1982-06-01

    Filter paper strips soaked in ³H-glycine solution were applied to the acoustic cortex of cats anaesthetized with Nembutal and pretreated with epileptogenic agents (Metrazol, G-penicillin, and 3-aminopyridine) and cycloheximide. The untreated contralateral hemisphere served as control. After 1 h of incubation, both cortical samples were excised simultaneously and fixed in Bouin solution for autoradiography. Incorporation was blocked by cycloheximide. There was no glycine incorporation on the penicillin-treated side, while pyramidal cells were intensively labelled in layers II-V of the mirror focus. 3-Aminopyridine produced the same result. Metrazol proved to be a far weaker convulsant than the other two agents. Incorporation was significantly more intense in the mirror focus than in the primary one. Penicillin and 3-aminopyridine, while provoking cortical seizures, seem to inhibit glycine incorporation into a neuron-specific, function-dependent protein contained by the labelled cells in the autoradiogram.

  5. A comparative study of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modelling techniques. This article investigates the differences among them: for each technique, the definition and structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how each technique works when implemented in Somerleyton Animal Park. Each technique is summarized with its advantages and disadvantages. The conclusion recommends business process modelling techniques that are easy to use and serves as a basis for evaluating further modelling techniques.

  6. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. In order to detect this potential, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to the comparative study of models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP) and a hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  7. COMPARATIVE ANALYSIS OF SOFTWARE DEVELOPMENT MODELS

    OpenAIRE

    Sandeep Kaur

    2017-01-01

    No geek is unfamiliar with the concept of the software development life cycle (SDLC). This research deals with various SDLC models, covering the waterfall, spiral, iterative, agile, V-shaped and prototype models. In the modern era, all software systems are fallible, as none can stand with certainty. This paper therefore compares all aspects of the various models, including their pros and cons, so that it is easier to choose a particular model when needed.

  8. Comparing coefficients of nested nonlinear probability models

    DEFF Research Database (Denmark)

    Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders

    2011-01-01

    In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability model.

  9. Comparative study of void fraction models

    International Nuclear Information System (INIS)

    Borges, R.C.; Freitas, R.L.

    1985-01-01

    Some models for the calculation of void fraction in water in sub-cooled boiling and saturated vertical upward flow with forced convection have been selected and compared with experimental results in the pressure range of 1 to 150 bar. In order to know the void fraction axial distribution it is necessary to determine the net generation of vapour and the fluid temperature distribution in the slightly sub-cooled boiling region. It was verified that the net generation of vapour was well represented by the Saha-Zuber model. The selected models for the void fraction calculation present adequate results but with a tendency to overestimate the experimental results, in particular the homogeneous models. The drift flux model is recommended, followed by the Armand and Smith models. (F.E.) [pt

  10. Comparing the Discrete and Continuous Logistic Models

    Science.gov (United States)

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
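The contrast described in this abstract can be sketched in a few lines of Python; the parameter values (growth rate r, capacity K, initial population P0) are illustrative assumptions, not taken from the article:

```python
import math

def discrete_logistic(p0, r, k, steps):
    """Iterate the logistic difference equation
    P_{n+1} = P_n + r * P_n * (1 - P_n / K)."""
    values = [p0]
    p = p0
    for _ in range(steps):
        p = p + r * p * (1 - p / k)
        values.append(p)
    return values

def continuous_logistic(p0, r, k, t):
    """Closed-form solution of the logistic ODE dP/dt = r*P*(1 - P/K)."""
    return k / (1 + ((k - p0) / p0) * math.exp(-r * t))

# With a small growth rate the two models track each other closely;
# with larger r the discrete model can oscillate or become chaotic,
# while the continuous solution always rises smoothly toward K.
disc = discrete_logistic(10.0, 0.1, 100.0, 50)
cont = [continuous_logistic(10.0, 0.1, 100.0, t) for t in range(51)]
```

A spreadsheet version of the same comparison simply tabulates these two sequences side by side, which is essentially the dynamic interactive spreadsheet investigation the abstract describes.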

  11. Wellness Model of Supervision: A Comparative Analysis

    Science.gov (United States)

    Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.

    2012-01-01

    This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…

  12. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon a set of assumptions incompletely formulated or on value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probable diminishing returns for large generic comparisons [fr

  13. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modelling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modelling, procedural-grammar-based modelling, close-range-photogrammetry-based modelling, and modelling based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages take different approaches and methods suitable for image-based 3D city modelling. A literature study shows that, to date, no comparative study of this type is available for creating a complete 3D city model from images. This paper gives a comparative assessment of the four image-based 3D modelling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and comments on what can and cannot be done with each software package. The study concludes that every package has advantages and limitations, and that the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  14. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of this technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the sequence identity between the target protein and the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, model accuracy was measured by the root mean square deviation of the Cα atoms of the target-template structures. Surprisingly, the results show that the sequence identity of the target protein to the template is not a good descriptor for predicting the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower target-template sequence identity led to a more accurate 3-D structure model. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646

  15. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  16. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
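The Dk statistic described in this abstract (the average self-relationship minus the average self- and across-relationship) is straightforward to compute from any relationship matrix. The 3x3 matrix below is a made-up pedigree-style example, not data from the paper:

```python
def d_k(relationship_matrix):
    """Dk = average self-relationship (diagonal) minus the
    average over all (self- and across-) relationships."""
    n = len(relationship_matrix)
    avg_self = sum(relationship_matrix[i][i] for i in range(n)) / n
    avg_all = sum(sum(row) for row in relationship_matrix) / (n * n)
    return avg_self - avg_all

# Illustrative relationship matrix for three individuals
# (1.0 on the diagonal, assumed kinship coefficients elsewhere).
K = [
    [1.0, 0.5, 0.25],
    [0.5, 1.0, 0.25],
    [0.25, 0.25, 1.0],
]

# Per the abstract, the expected genetic variance of the reference
# population is the estimated variance component times this factor.
print(d_k(K))
```

In a large population of mostly unrelated individuals the off-diagonal entries are near zero, which is why Dk is close to 1 for most typical relationship models.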

  17. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Directory of Open Access Journals (Sweden)

    Villemereuil Pierre de

    2012-06-01

    Full Text Available Abstract Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible

  18. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Science.gov (United States)

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for

  19. A Comparative Study Of Stock Price Forecasting Using Nonlinear Models

    Directory of Open Access Journals (Sweden)

    Diteboho Xaba

    2017-03-01

    Full Text Available This study compared the in-sample forecasting accuracy of three nonlinear forecasting models, namely the Smooth Transition Regression (STR) model, the Threshold Autoregressive (TAR) model and the Markov-switching Autoregressive (MS-AR) model. Nonlinearity tests were used to confirm the validity of the assumptions of the study. The study used the SBC model selection criterion to select the optimal lag order and the appropriate models. The Mean Square Error (MSE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) served as the error measures for evaluating the forecasting ability of the models. The MS-AR models proved to perform well, with lower error measures compared to the LSTR and TAR models in most cases.
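The three error measures used to rank the models are simple to compute; the series below are invented for illustration:

```python
import math

def mse(actual, predicted):
    """Mean Square Error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Square Error: the square root of the MSE."""
    return math.sqrt(mse(actual, predicted))

actual = [1.2, 0.8, 1.5, 1.1]
predicted = [1.0, 0.9, 1.4, 1.3]
print(mse(actual, predicted), mae(actual, predicted), rmse(actual, predicted))
```

Lower values on all three measures indicate better in-sample forecasting accuracy, which is how the MS-AR model was judged best here.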

  20. Comparative Analysis of River Flow Modelling by Using Supervised Learning Technique

    Science.gov (United States)

    Ismail, Shuhaida; Mohamad Pandiahi, Siraj; Shabri, Ani; Mustapha, Aida

    2018-04-01

    The goal of this research is to investigate the efficiency of three supervised learning algorithms for forecasting the monthly river flow of the Indus River in Pakistan, spread over 550 square miles or 1800 square kilometres. The algorithms include the Least Square Support Vector Machine (LSSVM), Artificial Neural Network (ANN) and Wavelet Regression (WR). The monthly river flow was forecast with each of the three models individually, and the accuracy of all models was then compared. The obtained results were statistically analysed. This analytical comparison showed that the LSSVM model is the most precise for monthly river flow forecasting: LSSVM achieved the highest r, with a value of 0.934, compared to the other models. This indicates that LSSVM is more accurate and efficient than the ANN and WR models.
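The accuracy measure quoted above, the correlation r between observed and forecast flows, is the Pearson correlation coefficient; the flow values below are invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up observed and forecast monthly flows (arbitrary units).
observed = [3.1, 4.0, 5.2, 4.4, 3.6]
forecast = [3.0, 4.1, 5.0, 4.6, 3.5]
print(round(pearson_r(observed, forecast), 3))
```

An r close to 1 means the forecasts rise and fall in step with the observations, which is the sense in which LSSVM's r = 0.934 marks it as the best of the three models.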

  1. Canis familiaris As a Model for Non-Invasive Comparative Neuroscience.

    Science.gov (United States)

    Bunford, Nóra; Andics, Attila; Kis, Anna; Miklósi, Ádám; Gácsi, Márta

    2017-07-01

    There is an ongoing need to improve animal models for investigating human behavior and its biological underpinnings. The domestic dog (Canis familiaris) is a promising model in cognitive neuroscience. However, before it can contribute to advances in this field in a comparative, reliable, and valid manner, several methodological issues warrant attention. We review recent non-invasive canine neuroscience studies, primarily focusing on (i) variability among dogs and between dogs and humans in cranial characteristics, and (ii) generalizability across dog and dog-human studies. We argue not for methodological uniformity but for functional comparability between methods, experimental designs, and neural responses. We conclude that the dog may become an innovative and unique model in comparative neuroscience, complementing more traditional models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Comparing the line broadened quasilinear model to Vlasov code

    International Nuclear Information System (INIS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-01-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of a Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both in regard to a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  3. Comparing the line broadened quasilinear model to Vlasov code

    Energy Technology Data Exchange (ETDEWEB)

    Ghantous, K. [Laboratoire de Physique des Plasmas, Ecole Polytechnique, 91128 Palaiseau Cedex (France); Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States); Berk, H. L. [Institute for Fusion Studies, University of Texas, 2100 San Jacinto Blvd, Austin, Texas 78712-1047 (United States); Gorelenkov, N. N. [Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States)

    2014-03-15

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of a Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both in regard to a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  4. Comparing the line broadened quasilinear model to Vlasov code

    Science.gov (United States)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of a Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both in regard to a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  5. Comparative evaluation of life cycle assessment models for solid waste management

    International Nuclear Information System (INIS)

    Winkler, Joerg; Bilitewski, Bernd

    2007-01-01

    This publication compares a selection of six different models developed in Europe and America by research organisations, industry associations and governmental institutions. The comparison of the models reveals the variations in the results and the differences in the conclusions of an LCA study done with these models. The models are compared by modelling a specific case - the waste management system of Dresden, Germany - with each model and an in-detail comparison of the life cycle inventory results. Moreover, a life cycle impact assessment shows whether the LCA results of each model allow for comparable and consistent conclusions that do not contradict the conclusions derived from the other models' results. Furthermore, the influence of different levels of detail in the life cycle inventory of the life cycle assessment is demonstrated. The model comparison revealed that the variations in the LCA results calculated by the models for this case are high and not negligible. In some cases the high variations in results lead to contradictory conclusions concerning the environmental performance of the waste management processes. The static, linear modelling approach chosen by all models analysed is inappropriate for reflecting actual conditions. Moreover, it was found that although the models' approach to LCA is comparable on a general level, the level of detail implemented in the software tools is very different.

  6. Multi-criteria comparative evaluation of spallation reaction models

    Science.gov (United States)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE) and the results of such a comparison for 17 spallation reaction models in the presence of the interaction of high-energy protons with natPb.

  7. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Full Text Available Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry lab simulation practice. After a warm-up period of 2 h, tasks were performed on high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches, with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and global rating score (GRS). The participants rated a face validity questionnaire at the end of each session. Results: The GRS improved significantly at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch trained on the VR-based model showed more improvement in pass ratings on the second rotation, but this did not achieve statistical significance. Most of the realism domains were rated higher for the VR model than for the non-VR model, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings, irrespective of VR status.

  8. Is it Worth Comparing Different Bankruptcy Models?

    Directory of Open Access Journals (Sweden)

    Miroslava Dolejšová

    2015-01-01

    Full Text Available The aim of this paper is to compare the performance of small enterprises in the Zlín and Olomouc Regions. These enterprises were assessed using the Altman Z-Score model, the IN05 model, the Zmijewski model and the Springate model. The batch selected for this analysis included 16 enterprises from the Zlín Region and 16 enterprises from the Olomouc Region. The financial statements subjected to the analysis are from 2006 and 2010. The statistical data analysis was performed using the one-sample z-test for proportions and the paired t-test. The outcomes of the evaluation run using the Altman Z-Score model, the IN05 model and the Springate model revealed the enterprises to be financially sound, but the Zmijewski model identified them as insolvent. The one-sample z-test for proportions confirmed that at least 80% of these enterprises show a sound financial condition. A comparison of all models emphasized the substantial difference produced by the Zmijewski model. The paired t-test showed that the financial performance of the small enterprises had remained the same during the years involved. It is recommended that small enterprises assess their financial performance using two different bankruptcy models. They may wish to combine the Zmijewski model with any other bankruptcy model (the Altman Z-Score model, the IN05 model or the Springate model) to ensure a proper method of analysis.
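As a sketch of one of the four compared models, here is the classic (1968) Altman Z-Score for publicly traded manufacturing firms. The ratio values below are made up; the coefficients and cut-offs follow Altman's original formulation and may differ from the exact variant applied in the paper:

```python
def altman_z(x1, x2, x3, x4, x5):
    """Original Altman Z-Score. The five ratios are:
    x1 = working capital / total assets
    x2 = retained earnings / total assets
    x3 = EBIT / total assets
    x4 = market value of equity / total liabilities
    x5 = sales / total assets"""
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def classify(z):
    """Classic cut-offs: above 2.99 safe, below 1.81 distress."""
    if z > 2.99:
        return "safe zone"
    if z < 1.81:
        return "distress zone"
    return "grey zone"

# Illustrative ratios for a hypothetical enterprise.
z = altman_z(0.25, 0.30, 0.15, 1.10, 1.50)
print(round(z, 3), classify(z))
```

The other compared models (IN05, Zmijewski, Springate) follow the same pattern: a weighted sum of financial ratios compared against published cut-off values, which is why they can disagree on the same firm.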

  9. Jump Model / Comparability Ratio Model — Joinpoint Help System 4.4.0.0

    Science.gov (United States)

    The Jump Model / Comparability Ratio Model in the Joinpoint software provides a direct estimation of trend data (e.g. cancer rates) where there is a systematic scale change, which causes a “jump” in the rates, but is assumed not to affect the underlying trend.
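The underlying idea of estimating the trend and the scale change jointly, so the jump does not bias the slope, can be illustrated with an ordinary least-squares sketch. This is a simplified stand-in for Joinpoint's actual estimator; the series, the changepoint index and the 15% jump are invented for illustration:

```python
import numpy as np

# Hypothetical series: a constant log-linear trend with a known scale
# change (e.g. a coding-system switch) at year index T0.
t = np.arange(20)
T0 = 10
true_slope, true_jump = 0.03, np.log(1.15)   # a 15% upward jump
log_rate = 1.0 + true_slope * t + true_jump * (t >= T0)

# Design matrix: intercept, common trend, jump indicator.
X = np.column_stack([np.ones_like(t, dtype=float), t, (t >= T0).astype(float)])
beta, *_ = np.linalg.lstsq(X, log_rate, rcond=None)
jump_ratio = float(np.exp(beta[2]))   # estimated comparability ratio
```

Because the trend and the jump are fitted simultaneously, the slope estimate is unaffected by the scale change, which is the behavior the Joinpoint option is designed to provide.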

  10. Comparing flood loss models of different complexity

    Science.gov (United States)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explaining variables, are learned from a set of damage records that were obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches which are derived using the data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explaining variables.
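As a toy illustration of the abstract's core exercise, comparing loss models of different complexity on held-out data, the sketch below fits a depth-only stage-damage relation against a model with one extra explanatory variable. The numbers are synthetic, not the Elbe/Danube records, and a plain linear fit stands in for the more elaborate model families:

```python
import numpy as np

# Synthetic damage records: relative loss driven by water depth,
# with an additional dependence on building quality.
rng = np.random.default_rng(42)
depth = rng.uniform(0.0, 3.0, 200)            # water depth [m]
quality = rng.integers(0, 2, 200)             # 0 = low, 1 = high quality
loss = np.clip(0.25 * depth + 0.2 * quality + rng.normal(0, 0.05, 200), 0, 1)

train, test = slice(0, 150), slice(150, 200)

# Model 1: stage-damage curve -- loss as a linear function of depth only.
A1 = np.column_stack([np.ones(150), depth[train]])
b1, *_ = np.linalg.lstsq(A1, loss[train], rcond=None)
pred1 = b1[0] + b1[1] * depth[test]

# Model 2: one more explanatory variable (building quality).
A2 = np.column_stack([np.ones(150), depth[train], quality[train]])
b2, *_ = np.linalg.lstsq(A2, loss[train], rcond=None)
pred2 = b2[0] + b2[1] * depth[test] + b2[2] * quality[test]

rmse = lambda p: float(np.sqrt(np.mean((p - loss[test]) ** 2)))
```

On held-out records the richer model wins here because the extra variable genuinely drives losses; the paper's cross-regional validation asks whether such gains survive transfer to other events and basins.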

  11. Comparative Assessment of Nonlocal Continuum Solvent Models Exhibiting Overscreening

    Directory of Open Access Journals (Sweden)

    Ren Baihua

    2017-01-01

Nonlocal continua have been proposed to offer a more realistic model for the electrostatic response of solutions such as the electrolyte solvents prominent in biology and electrochemistry. In this work, we review three nonlocal models based on the Landau-Ginzburg framework which have been proposed but not directly compared previously, due to different expressions of the nonlocal constitutive relationship. To understand the relationships between these models and the underlying physical insights from which they are derived, we situate them in a single, unified Landau-Ginzburg framework. One of the models offers the capacity to interpret how temperature changes affect dielectric response, and we note that the variations with temperature are qualitatively reasonable even though predictions at ambient temperatures are not quantitatively in agreement with experiment. Two of these models correctly reproduce overscreening (oscillations between positive and negative polarization charge densities), and we observe small differences between them when we simulate the potential between parallel plates held at constant potential. These computations require reformulating the two models as coupled systems of local partial differential equations (PDEs), and we use spectral methods to discretize both problems. We propose further assessments to discriminate between the models, particularly with regard to establishing boundary conditions and comparing to explicit-solvent molecular dynamics simulations.

  12. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models thus have to be applied to produce that short-time-resolution information from coarser rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated using different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as that of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. In increasing order of complexity, the compared models are: constant model, linear model (Ben Haha, 2001), Ormsbee deterministic model (Ormsbee, 1989), artificial neural network based model (Burian et al., 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), multiplicative cascade based model (Olsson and Berndtsson, 1998), and Ormsbee stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5mm) were extracted from the 21-year chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland.
The models were also evaluated when applied to different rainfall classes depending first on the season and then on the
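The baseline against which everything is compared, the constant disaggregation model, can be sketched in a few lines; the "linear" variant below is our own illustrative guess at a ramp between neighbouring hours, not the Ben Haha (2001) formulation:

```python
def constant_disaggregation(hourly_mm):
    """Split an hourly rainfall amount into six equal 10-minute amounts."""
    return [hourly_mm / 6.0] * 6

def linear_disaggregation(prev_hour, this_hour, next_hour):
    """Distribute this_hour over six 10-min steps with weights ramping
    linearly from the previous-hour level toward the next-hour level.
    (Illustrative assumption, not the published linear model.)"""
    w = [prev_hour + (i + 0.5) / 6.0 * (next_hour - prev_hour) for i in range(6)]
    s = sum(w)
    if s <= 0:                       # fall back to the constant model
        return constant_disaggregation(this_hour)
    return [this_hour * wi / s for wi in w]
```

Both models conserve the hourly total by construction; the statistical criteria in the abstract (variance, extremes) are what separate them.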

  13. Comparative analysis of Goodwin's business cycle models

    Science.gov (United States)

    Antonova, A. O.; Reznik, S.; Todorov, M. D.

    2016-10-01

We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) were proposed by N. Dharmaraj and K. Vela Velupillai [6] for investigation of the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not the sawtooth modes. If the order of the Taylor series expansion exceeds 5, then the approximate ODE becomes unstable independently of the time lag θ.

  14. Comparing Structural Brain Connectivity by the Infinite Relational Model

    DEFF Research Database (Denmark)

    Ambrosen, Karen Marie Sandø; Herlau, Tue; Dyrby, Tim

    2013-01-01

    The growing focus in neuroimaging on analyzing brain connectivity calls for powerful and reliable statistical modeling tools. We examine the Infinite Relational Model (IRM) as a tool to identify and compare structure in brain connectivity graphs by contrasting its performance on graphs from...

  15. Sobre a leucotomia pré-frontal de Egas Moniz

    Directory of Open Access Journals (Sweden)

    Mario Yahn

    1946-09-01

and Watts's prefrontal lobotomy, Egas Moniz's leucotomy had failed. One patient in this group achieved social recovery after lobotomy and two others greatly improved, which represents 10 per cent of good results over the total number of operated patients. The results in the 15 schizophrenics on whom only the prefrontal lobotomy was performed are as follows: 1 death, 2 complete or social recoveries (14 per cent) and 12 failures. We feel it is of value to compare the incidence of 10 per cent of patients influenced by Egas Moniz's leucotomy plus Freeman and Watts's operation with the 14 per cent influenced by Freeman and Watts's prefrontal lobotomy alone, and with the 18 per cent influenced by Egas Moniz's leucotomy alone. The differences in these results may be explained by the unequal distribution of the material among the various groups, which warrants only a relative value for conclusions based on the comparison of those data. One of the deaths occurred at the 22nd hour after the operation, and necropsy revealed cerebral hemorrhage. The other death was due to cystic purulent meningitis, as proved by necropsy. This paper affords us an opportunity to present a table comparing the results obtained in the treatment of schizophrenia by several methods, such as metrazol, electroshock, insulin and cerebral leucotomy. This table (quadro 1) was presented at the Congress of Neurology and Psychiatry held in Buenos Aires in November 1944. The table discloses the nature of our material, composed mainly of chronic patients, and also points out the possibilities of psychosurgery. As may be noted, 155 of the 835 schizophrenic patients had been ill for less than 13 months, an incidence of 18.5 per cent of acute and subacute cases.
It is equally important to note that the illness duration of 190 patients (or 23 per cent) is unknown, which is explained by the fact that a great number of our inmates are abandoned indigents, found in the streets and sent to this State Hospital by the police, thus causing us a great lack of

  16. Lithium-ion battery models: a comparative study and a model-based powerline communication

    Directory of Open Access Journals (Sweden)

    F. Saidani

    2017-09-01

In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters, offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with a detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
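A minimal example of the equivalent-circuit idea is the first-order Thevenin (1-RC) topology: an open-circuit voltage in series with an ohmic resistance and one RC pair. The parameter values below are invented for illustration, not fitted to the 20 Ah cell in the paper:

```python
import math

def simulate_1rc(i_load, dt, ocv, r0, r1, c1):
    """First-order Thevenin (1-RC) equivalent circuit:
    v_cell = OCV - R0*i - v_rc,  dv_rc/dt = i/C1 - v_rc/(R1*C1).
    i_load: currents [A] (positive = discharge); dt: time step [s]."""
    v_rc, out = 0.0, []
    tau = r1 * c1
    for i in i_load:
        # exact update of the RC state over one step at constant current
        v_inf = r1 * i
        v_rc = v_inf + (v_rc - v_inf) * math.exp(-dt / tau)
        out.append(ocv - r0 * i - v_rc)
    return out

# 1 A discharge pulse: instantaneous R0 drop, then slow RC relaxation
v = simulate_1rc([1.0] * 100, dt=1.0, ocv=3.7, r0=0.05, r1=0.02, c1=2000.0)
```

The instantaneous `R0` drop and the exponential RC tail are the two features such grey-box models trade off against the heavier electrochemical formulations.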

  17. Comparing Realistic Subthalamic Nucleus Neuron Models

    Science.gov (United States)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of different firing patterns: silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above a threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that the Mahalanobis distance, the Victor-Purpura metric, and the interspike-interval distribution are sensitive to the different firing regimes, whereas mutual information seems insensitive to these functional changes.
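Of the four measures, the Victor-Purpura metric has a compact dynamic-programming form: an edit distance over spike times with unit cost for inserting or deleting a spike and cost q·|Δt| for shifting one. A self-contained sketch:

```python
def victor_purpura(t1, t2, q):
    """Victor-Purpura spike-train distance.
    Cost 1 to insert/delete a spike, q*|dt| to shift one (q in 1/s)."""
    n, m = len(t1), len(t2)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i                  # delete all spikes of t1
    for j in range(1, m + 1):
        d[0][j] = j                  # insert all spikes of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,                     # delete
                          d[i][j - 1] + 1,                     # insert
                          d[i - 1][j - 1]
                          + q * abs(t1[i - 1] - t2[j - 1]))    # shift
    return d[n][m]
```

For q → 0 the metric counts only the difference in spike counts; large q makes it sensitive to precise spike timing, which is why it discriminates between the firing regimes described above.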

  18. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    Science.gov (United States)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

Numerous numerical models have been developed to predict long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model predicts the measured puff evolution fairly well with respect to shape and time of arrival, up to 60 h after the start of the release. The modeled puff is still too narrow in the advection direction.

  19. Comparing several boson mappings with the shell model

    International Nuclear Information System (INIS)

    Menezes, D.P.; Yoshinaga, Naotaka; Bonatsos, D.

    1990-01-01

Boson mappings are an essential step in establishing a connection between the successful phenomenological interacting boson model and the shell model. The boson mapping developed by Bonatsos, Klein and Li is applied to a single j-shell, and the resulting energy levels and E2 transitions are shown for a pairing plus quadrupole-quadrupole Hamiltonian. The results are compared to the exact shell model calculation, as well as to those obtained through use of the Otsuka-Arima-Iachello mapping and the Zirnbauer-Brink mapping. In all cases good results are obtained for the spherical and near-vibrational cases

  20. Comparative study between a QCD inspired model and a multiple diffraction model

    International Nuclear Information System (INIS)

    Luna, E.G.S.; Martini, A.F.; Menon, M.J.

    2003-01-01

    A comparative study between a QCD Inspired Model (QCDIM) and a Multiple Diffraction Model (MDM) is presented, with focus on the results for pp differential cross section at √s = 52.8 GeV. It is shown that the MDM predictions are in agreement with experimental data, except for the dip region and that the QCDIM describes only the diffraction peak region. Interpretations in terms of the corresponding eikonals are also discussed. (author)

  1. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...
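The distinction among the frameworks can be made concrete in a few lines: a model-assisted (GREG-type) estimator applies model predictions to the whole population but keeps a design-based correction from the sample residuals, while a model-based estimator trusts the model alone. A minimal sketch (function names are ours, and real estimators also carry survey weights and variance terms):

```python
# Model-assisted (generalized-regression-style) estimation of a population
# mean: model predictions everywhere, corrected by the mean sample residual.
def model_assisted_mean(pred_all, y_sample, pred_sample):
    n = len(y_sample)
    correction = sum(y - p for y, p in zip(y_sample, pred_sample)) / n
    return sum(pred_all) / len(pred_all) + correction

# Model-based estimation relies on the model alone: no residual correction.
def model_based_mean(pred_all):
    return sum(pred_all) / len(pred_all)
```

If the remote-sensing model is biased (here, predictions all 0.5 too high), the residual correction removes the bias from the model-assisted estimate, which is exactly the robustness property that motivates the model-assisted framework.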

  2. A comparative study on effective dynamic modeling methods for flexible pipe

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Ho; Hong, Sup; Kim, Hyung Woo [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of); Kim, Sung Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-07-15

In this paper, in order to select a suitable method applicable to the large-deflection, small-strain problem of pipe systems in the deep seabed mining system, the finite difference method with lumped mass from the field of cable dynamics and the substructure method from the field of flexible multibody dynamics were compared. Due to the difficulty of obtaining experimental results from an actual pipe system in the deep seabed mining system, a thin cantilever beam model with experimental results was employed for the comparative study. Accuracy of the methods was investigated by comparing the experimental results and simulation results from the cantilever beam model with different numbers of elements. Efficiency of the methods was also examined by comparing the operation counts required for solving the equations of motion. Finally, this cantilever beam model, together with the comparative study results, can be promoted as a benchmark problem for flexible multibody dynamics.

  3. Comparative analysis of used car price evaluation models

    Science.gov (United States)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles. However, little has been studied on comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for a thorough empirical comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.

  4. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Directory of Open Access Journals (Sweden)

    Erin E Poor

Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.

  5. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Science.gov (United States)

    Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.
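Least-cost modeling of the kind compared here reduces, on a raster resistance surface, to a shortest-path computation. A self-contained Dijkstra sketch on a toy grid (the resistance values are invented; real corridor tools add diagonal neighbors, corridor widths and cumulative-cost surfaces):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2D grid of per-cell traversal costs (4-neighbour).
    cost: list of rows of positive resistances; returns total path cost."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                              # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# Habitat suitability inverted into resistance: the low-cost cells (1s)
# form the corridor between the start and goal cells.
grid = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
```

Circuit theory, the alternative compared in the paper, instead spreads flow over all possible paths, which is why the two methods can rank corridors differently.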

  6. A microbial model of economic trading and comparative advantage.

    Science.gov (United States)

    Enyeart, Peter J; Simpson, Zachary B; Ellington, Andrew D

    2015-01-07

The economic theory of comparative advantage postulates that beneficial trading relationships can be arrived at by two self-interested entities producing the same goods as long as they have opposing relative efficiencies in producing those goods. The theory predicts that upon entering trade, in order to maximize consumption both entities will specialize in producing the good they can produce at higher efficiency, that the weaker entity will specialize more completely than the stronger entity, and that both will be able to consume more goods as a result of trade than either would be able to alone. We extend this theory to the realm of unicellular organisms by developing mathematical models of genetic circuits that allow trading of a common good (specifically, signaling molecules) required for growth in bacteria in order to demonstrate comparative advantage interactions. In Conception 1, the experimenter controls production rates via exogenous inducers, allowing exploration of the parameter space of specialization. In Conception 2, the circuits self-regulate via feedback mechanisms. Our models indicate that these genetic circuits can demonstrate comparative advantage, and that cooperation in such a manner is particularly favored under stringent external conditions and when the cost of production is not overly high. Further work could involve implementing the models in living bacteria and searching for naturally occurring cooperative relationships between bacteria that conform to the principles of comparative advantage. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
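The core arithmetic of the Ricardian prediction restated above is easy to reproduce: with opposing relative efficiencies, full specialization by the weaker entity and partial specialization by the stronger one leaves more of both goods than autarky. A numeric sketch (the efficiencies and the 0.7 labor split are illustrative, not taken from the genetic-circuit models):

```python
def production(labor, eff_a, eff_b, split):
    """Units of goods A and B produced by one entity that devotes
    `split` of its labor to A and the rest to B."""
    return labor * split * eff_a, labor * (1 - split) * eff_b

# Entity 1 is absolutely better at both goods, but entity 2 is
# *relatively* better at good B (the classic Ricardian setup).
# Autarky: each entity splits its labor evenly.
a1, b1 = production(1.0, eff_a=4.0, eff_b=2.0, split=0.5)
a2, b2 = production(1.0, eff_a=1.0, eff_b=1.5, split=0.5)
autarky_total = (a1 + a2, b1 + b2)

# Trade: the weaker entity specializes completely in B, the stronger
# entity specializes mostly (not fully) in A.
a1s, b1s = production(1.0, eff_a=4.0, eff_b=2.0, split=0.7)
a2s, b2s = production(1.0, eff_a=1.0, eff_b=1.5, split=0.0)
trade_total = (a1s + a2s, b1s + b2s)
```

Note that the stronger entity must keep some production of good B here; forcing both to specialize completely would actually reduce the joint output of B, which is the asymmetry the theory (and the abstract) predicts.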

  7. Comparing live and remote models in eating conformity research.

    Science.gov (United States)

    Feeney, Justin R; Polivy, Janet; Pliner, Patricia; Sullivan, Margot D

    2011-01-01

    Research demonstrates that people conform to how much other people eat. This conformity occurs in the presence of other people (live model) and when people view information about how much food prior participants ate (remote models). The assumption in the literature has been that remote models produce a similar effect to live models, but this has never been tested. To investigate this issue, we randomly paired participants with a live or remote model and compared their eating to those who ate alone. We found that participants exposed to both types of model differed significantly from those in the control group, but there was no significant difference between the two modeling procedures. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  8. A Comprehensive Method for Comparing Mental Models of Dynamic Systems

    OpenAIRE

    Schaffernicht, Martin; Grösser, Stefan N.

    2011-01-01

    Mental models are the basis on which managers make decisions even though external decision support systems may provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to be at the foundation for improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure ...

  9. GEOQUIMICO : an interactive tool for comparing sorption conceptual models (surface complexation modeling versus K[D])

    International Nuclear Information System (INIS)

    Hammond, Glenn E.; Cygan, Randall Timothy

    2007-01-01

Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K[D] approach has been the method of choice due to its ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use; that is, when the material variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The model currently supports the K[D] and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given.
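The K[D] conceptual model itself is a one-liner: a linear isotherm S = K_D·C, which in transport terms yields the retardation factor R = 1 + (ρ_b/θ)·K_D that slows the solute front relative to the water. A sketch with the usual unit conventions (this is generic textbook material, not GEOQUIMICO code):

```python
def retardation_factor(kd_ml_per_g, bulk_density_g_cm3, porosity):
    """R = 1 + (rho_b / theta) * K_D for the linear (K_D) sorption model.
    kd in mL/g, bulk density in g/cm3, porosity dimensionless;
    the sorbing solute front then travels at v_water / R."""
    return 1.0 + (bulk_density_g_cm3 / porosity) * kd_ml_per_g

def sorbed_concentration(kd_ml_per_g, c_mg_per_ml):
    """Linear isotherm S = K_D * C (mg sorbed per g of solid)."""
    return kd_ml_per_g * c_mg_per_ml
```

The single constant K_D is exactly what an SCM replaces with a chemistry-dependent surface reaction network, which is the trade-off GEOQUIMICO is built to explore.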

  10. Comparative calculations and validation studies with atmospheric dispersion models

    International Nuclear Information System (INIS)

    Paesler-Sauer, J.

    1986-11-01

This report presents the results of an intercomparison of different mesoscale dispersion models and measured data from tracer experiments. The types of models taking part in the intercomparison are Gaussian-type, numerical Eulerian, and Lagrangian dispersion models. They are suited for calculating the atmospheric transport of radionuclides released from a nuclear installation. For the model intercomparison, artificial meteorological situations were defined and corresponding computational problems were formulated. For the purpose of model validation, real dispersion situations from tracer experiments were used as input data for model calculations; in these cases calculated and measured time-integrated concentrations close to the ground are compared. Finally, a valuation of the models concerning their efficiency in solving the problems is carried out with the aid of objective methods. (orig./HP)

  11. Comparing numerically exact and modelled static friction

    Directory of Open Access Journals (Sweden)

    Krengel Dominik

    2017-01-01

Currently there exists no mechanically consistent “numerically exact” implementation of static and dynamic Coulomb friction for general soft particle simulations with arbitrary contact situations in two or three dimensions; such implementations exist only along one dimension. We outline a differential-algebraic equation approach for a “numerically exact” computation of friction in two dimensions and compare its application to the Cundall-Strack model in some test cases.
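For contrast with the DAE approach, the Cundall-Strack model mentioned above regularizes friction by tracking a tangential spring at each contact and capping its force at the Coulomb limit. A single-contact, single-step sketch (our own minimal form, not the authors' implementation):

```python
def tangential_force(spring, rel_vel_t, dt, k_t, mu, f_n):
    """One Cundall-Strack update of a single contact: integrate the
    tangential spring elongation, then cap the force at the Coulomb
    limit mu * f_n (sliding). Returns (force, updated elongation)."""
    spring += rel_vel_t * dt          # accumulate tangential displacement
    f_t = -k_t * spring
    f_max = mu * f_n
    if abs(f_t) > f_max:              # sliding regime: truncate the spring
        f_t = f_max if f_t > 0 else -f_max
        spring = -f_t / k_t
    return f_t, spring
```

Below the Coulomb cone the spring supplies a static restoring force; once the cap engages, the contact slides at exactly mu·f_n. The residual spring compliance in the static regime is precisely the approximation a "numerically exact" DAE formulation removes.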

  12. Comparative study of boron transport models in NRC Thermal-Hydraulic Code Trace

    Energy Technology Data Exchange (ETDEWEB)

    Olmo-Juan, Nicolás; Barrachina, Teresa; Miró, Rafael; Verdú, Gumersindo; Pereira, Claubia, E-mail: nioljua@iqn.upv.es, E-mail: tbarrachina@iqn.upv.es, E-mail: rmiro@iqn.upv.es, E-mail: gverdu@iqn.upv.es, E-mail: claubia@nuclear.ufmg.br [Institute for Industrial, Radiophysical and Environmental Safety (ISIRYM). Universitat Politècnica de València (Spain); Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2017-07-01

Recently, interest in the study of various types of transients involving changes in the boron concentration inside the reactor has led to increased interest in developing and studying new models and tools that allow a correct study of boron transport. Accordingly, a significant variety of boron transport models and spatial difference schemes are available in thermal-hydraulic codes such as TRACE. In this work, we compare the results obtained using the different boron transport models implemented in the NRC thermal-hydraulic code TRACE. To do this, a set of models has been created using the different options and configurations that could have an influence on boron transport. These models reproduce a simple event of filling or emptying the boron concentration in a long pipe. Moreover, with the aim of comparing the differences obtained when one-dimensional or three-dimensional components are chosen, many different cases were modeled using only pipe components or a mix of pipe and vessel components. In addition, the influence of the void fraction on boron transport has been studied and compared under conditions close to a commercial BWR model. Finally, the different cases and boron transport models are compared among themselves and against the analytical solution provided by the Burgers equation. From this comparison, important conclusions are drawn that will be the basis of modeling boron transport in TRACE adequately. (author)
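A filling transient of the kind described can be reproduced with a first-order upwind scheme for the advection equation, the simplest of the spatial difference schemes such codes offer. This is a generic sketch, not TRACE's boron transport model; the grid, velocity and Courant number are illustrative:

```python
def upwind_advect(c, u, dx, dt, steps, c_in):
    """First-order upwind scheme for dc/dt + u*dc/dx = 0 with u > 0.
    c_in is the boron concentration entering at the left boundary."""
    c = list(c)
    r = u * dt / dx                   # Courant number, must be <= 1
    assert r <= 1.0
    for _ in range(steps):
        new = [c[0] + r * (c_in - c[0])]          # inflow boundary cell
        for i in range(1, len(c)):
            new.append(c[i] + r * (c[i - 1] - c[i]))
        c = new
    return c

# Filling a pipe of clean water with borated water from the left:
# after 100 steps the sharp front has advected to x = u*t = 50,
# smeared by the scheme's numerical diffusion.
n, dx, u, dt = 100, 1.0, 1.0, 0.5
c = upwind_advect([0.0] * n, u, dx, dt, steps=100, c_in=1.0)
```

The smearing of the front around x = 50 is exactly the numerical-diffusion artifact that makes the choice of boron transport model matter, and the analytic (Burgers/advection) solution, a sharp front at x = 50, is the reference such comparisons use.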

  13. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  14. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fitting, assessing and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 % to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
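
    The a priori strategy comparison described above can be mimicked in a few lines: fit two candidate logistic-regression strategies (plain maximum likelihood vs. an L2-shrinkage variant) on one half of a synthetic data set and compare their out-of-sample Brier scores. The data, the two strategies and all hyperparameters below are illustrative only, not those of the paper.

```python
import numpy as np

# Illustrative comparison of two logistic-regression modelling strategies
# by out-of-sample Brier score (mean squared error of predicted probabilities).
rng = np.random.default_rng(0)

def fit_logistic(X, y, l2=0.0, lr=0.1, iters=2000):
    """Plain gradient descent on the (optionally L2-penalized) logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
    return w

def brier(w, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return float(np.mean((p - y) ** 2))

# synthetic development/validation split
X = rng.normal(size=(400, 5))
true_w = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
X_dev, y_dev, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

scores = {name: brier(fit_logistic(X_dev, y_dev, l2=l2), X_val, y_val)
          for name, l2 in [("plain", 0.0), ("shrinkage", 0.1)]}
print(scores)  # lower Brier score = better out-of-sample calibration/accuracy
```

    Which of the two strategies wins depends on the data generated, which is the point made in the abstract: strategy performance is data-dependent.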

  15. Towards consensus in comparative chemical characterization modeling for LCIA

    DEFF Research Database (Denmark)

    Hauschild, Michael Zwicky; Bachmann, Till; Huijbregts, Mark

    2006-01-01

    work within, for instance, the OECD, and guidance from a series of expert workshops held between 2002 and 2005, preliminary guidelines focusing on chemical fate, and human and ecotoxic effects were established. For further elaboration of the fate-, exposure- and effect-sides of the modeling, six models...... by the Task Force and the model providers. While the compared models and their differences are important tools to further advance LCA science, the consensus model is intended to provide a generally agreed and scientifically sound method to calculate consistent characterization factors for use in LCA practice...... and to be the basis of the “recommended practice” for calculation of characterization factors for chemicals under authority of the UNEP/SETAC Life Cycle Initiative....

  16. Comparative modeling of InP solar cell structures

    Science.gov (United States)

    Jain, R. K.; Weinberg, I.; Flood, D. J.

    1991-01-01

    The comparative modeling of p(+)n and n(+)p indium phosphide solar cell structures is studied using the numerical program PC-1D. The optimal design study predicts that the p(+)n structure offers improved cell efficiencies compared to the n(+)p structure, due to a higher open-circuit voltage. The cell material and process parameters required to achieve the maximum cell efficiencies are reported. The effect of some of the cell parameters on InP cell I-V characteristics was studied. The available radiation resistance data on n(+)p and p(+)n InP solar cells are also critically discussed.

  17. comparative analysis of some existing kinetic models with proposed

    African Journals Online (AJOL)

    IGNATIUS NWIDI

    two statistical parameters namely, linear regression coefficient of correlation (R2) and ... Keywords: Heavy metals, Biosorption, Kinetics Models, Comparative analysis, Average Relative Error. 1. ... If the flow rate is low, a simple manual batch.

  18. Comparative analysis of existing models for power-grid synchronization

    International Nuclear Information System (INIS)

    Nishikawa, Takashi; Motter, Adilson E

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations. (paper)
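
    As a minimal illustration of the oscillator view described above (a generic sketch with arbitrary parameters, not one of the three compared models or the accompanying MATLAB toolbox), two coupled second-order phase oscillators with forcing and damping settle into frequency synchronization when a generator/load pair is balanced.

```python
import numpy as np

# Generic second-order phase-oscillator ("swing equation") sketch:
#   M * theta_i'' + D * theta_i' = P_i - sum_j K_ij * sin(theta_i - theta_j)
# integrated with explicit Euler steps.
def simulate(P, K, M=1.0, D=0.5, dt=0.01, steps=5000):
    n = len(P)
    theta = np.zeros(n)
    omega = np.zeros(n)
    for _ in range(steps):
        coupling = np.array([sum(K[i][j] * np.sin(theta[i] - theta[j])
                                 for j in range(n)) for i in range(n)])
        theta = theta + dt * omega
        omega = omega + dt * (P - D * omega - coupling) / M
    return theta, omega

P = np.array([0.5, -0.5])        # one generator, one load (balanced injections)
K = [[0.0, 1.0], [1.0, 0.0]]     # a single transmission line
theta, omega = simulate(P, K)
print(abs(omega[0] - omega[1]))  # frequency mismatch decays towards zero
```

    At the synchronized steady state the common frequency deviation vanishes (the injections sum to zero) and the phase difference satisfies sin(theta_0 - theta_1) = P_0 / K_01.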

  19. Comparative assessment of condensation models for horizontal tubes

    International Nuclear Information System (INIS)

    Schaffrath, A.; Kruessenberg, A.K.; Lischke, W.; Gocht, U.; Fjodorow, A.

    1999-01-01

    The condensation in horizontal tubes plays an important role e.g. for the determination of the operation mode of horizontal steam generators of VVER reactors or passive safety systems for the next generation of nuclear power plants. Two different approaches (HOTKON and KONWAR) for modeling this process have been undertaken by Forschungszentrum Juelich (FZJ) and the University of Applied Sciences Zittau/Goerlitz (HTWS) and implemented into the 1D thermohydraulic code ATHLET, which is developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH for the analysis of anticipated and abnormal transients in light water reactors. Although the improvements of the condensation models were developed for different applications (VVER steam generators - emergency condenser of the SWR1000) with strongly different operating conditions (e.g. the temperature difference over the tube wall in HORUS is up to 30 K and in NOKO up to 250 K; the heat flux density in HORUS is up to 40 kW/m² and in NOKO up to 1 GW/m²), both models are now compared and assessed by Forschungszentrum Rossendorf FZR e.V. Therefore, post-test calculations of selected HORUS experiments were performed with ATHLET/KONWAR and compared to existing ATHLET and ATHLET/HOTKON calculations of HTWS. It can be seen that the calculations with the extension KONWAR as well as HOTKON significantly improve the agreement between computational and experimental data. (orig.) [de

  20. Comparative Modelling of the Spectra of Cool Giants

    Science.gov (United States)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.; et al.

    2012-01-01

    Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, who were asked to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.

  1. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with the problems of the takeover of employees in outsourcing. Its principal purpose is to compare the staffing models of outsourcing in selected companies. To compare the selected companies I chose multi-criteria analysis. The thesis is divided into six chapters. The first chapter is devoted to the theoretical part; it describes basic concepts such as outsourcing, personnel aspects, phases of outsourcing projects, communication and culture. The rest of the thesis is devote...

  2. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    Science.gov (United States)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. Evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level result in different population level outcomes under different conditions. Extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes to find that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and sense-making of observed trends are of a different character. Builders notice rules through available blocks-based primitives, often bypassing their enactment while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre and posttests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both

  3. Comparing the Applicability of Commonly Used Hydrological Ecosystem Services Models for Integrated Decision-Support

    Directory of Open Access Journals (Sweden)

    Anna Lüke

    2018-01-01

    Different simulation models are used in science and practice in order to incorporate hydrological ecosystem services in decision-making processes. This contribution compares three simulation models: the Soil and Water Assessment Tool, a traditional hydrological model, and two ecosystem services models, the Integrated Valuation of Ecosystem Services and Trade-offs model and the Resource Investment Optimization System model. The three models are compared on a theoretical and conceptual basis as well as in a comparative case study application. The application of the models to a study area in Nicaragua reveals that a practical benefit of applying these models to different questions in decision-making generally exists. However, modelling of hydrological ecosystem services is associated with a high application effort and requires input data that may not always be available. The degree of detail in temporal and spatial variability in ecosystem service provision is higher when using the Soil and Water Assessment Tool compared to the two ecosystem service models. In contrast, the ecosystem service models have lower requirements on input data and process knowledge. A relationship between service provision and beneficiaries is readily produced and can be visualized as a model output. The visualization is especially useful in a practical decision-making context.

  4. Elastic models: a comparative study applied to retinal images.

    Science.gov (United States)

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work various methods of parametric elastic models are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the method of the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme and minimum distance as the optimization criterion, which is more suitable for weak-edge detection. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disc. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time, respectively. As a result, the method of the self-affine mapping system presents adequate segmentation time and segmentation accuracy, and significant independence from initialization.

  5. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago.

    Science.gov (United States)

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E; Pellissier, Loïc

    2018-03-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns.

  6. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago

    Science.gov (United States)

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E.

    2018-01-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns. PMID:29657753

  7. A framework for testing and comparing binaural models.

    Science.gov (United States)

    Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M

    2018-03-01

    Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. This can best be resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies

    Science.gov (United States)

    Klopf, M.; Pietsch, S. A.; Hasenauer, H.

    2009-04-01

    The Limestone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities have been successively reduced; standing volume and coarse woody debris (CWD) increased, and degraded soils began to recover. One option to study the rehabilitation process towards a natural virgin-forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Limestone National Park. We compare the standing tree volume simulated by (i) the individual tree growth model MOSES and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Limestone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead-wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis 40 plots of this inventory were selected which comprise the required dead-wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment, and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest.
Next we considered the management history of the past centuries (heavy clear cuts

  9. Comparing of four IRT models when analyzing two tests for inductive reasoning

    NARCIS (Netherlands)

    de Koning, E.; Sijtsma, K.; Hamers, J.H.M.

    2002-01-01

    This article discusses the use of the nonparametric IRT Mokken models of monotone homogeneity and double monotonicity and the parametric Rasch and Verhelst models for the analysis of binary test data. First, the four IRT models are discussed and compared at the theoretical level, and for each model,

  10. Comparing Transformation Possibilities of Topological Functioning Model and BPMN in the Context of Model Driven Architecture

    Directory of Open Access Journals (Sweden)

    Solomencevs Artūrs

    2016-05-01

    The approach called “Topological Functioning Model for Software Engineering” (TFM4SE) applies the Topological Functioning Model (TFM) for modelling the business system in the context of Model Driven Architecture. TFM is a mathematically formal computation independent model (CIM). TFM4SE is compared to an approach that uses BPMN as a CIM. The comparison focuses on CIM modelling and on the transformation to a UML Sequence diagram on the platform independent model (PIM) level. The results show the advantages and drawbacks that the formalism of TFM brings into the development.

  11. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    Science.gov (United States)

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best-suited freely available software for modelling of proteins, by taking a few sample proteins. The proteins used ranged from small to big in size, with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-v2 and Modweb were used for the comparison and model generation. The benchmarking process was done for four proteins, Icl, InhA and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to find the most suitable software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model make good models of both small and large proteins compared with the other screened software. The other software was also good but often less efficient in providing a full-length and properly folded structure.

  12. Comparative analysis of coupled creep-damage model implementations and application

    International Nuclear Information System (INIS)

    Bhandari, S.; Feral, X.; Bergheau, J.M.; Mottet, G.; Dupas, P.; Nicolas, L.

    1998-01-01

    Creep rupture of a reactor pressure vessel in a severe accident occurs after complex load and temperature histories leading to interactions between creep deformation, stress relaxation, material damage and plastic instability. The concepts of continuum damage introduced by Kachanov and Rabotnov allow models to be formulated that couple elasto-visco-plasticity and damage. However, the integration of such models in a finite element code creates some difficulties related to the strong non-linearity of the constitutive equations. It was feared that different methods of implementing such a model might lead to different results which, consequently, might limit the application and usefulness of such a model. The Commissariat a l'Energie Atomique (CEA), Electricite de France (EDF) and Framasoft (FRA) have worked out numerical solutions to implement such a model in the CASTEM 2000, ASTER and SYSTUS codes, respectively. A ''benchmark'' was set up, chosen on the basis of a cylinder studied in the ''RUPTHER'' programme. The aim of this paper is not to enter into the numerical details of the implementation of the model, but to present the results of the comparative study made using the three codes mentioned above on a case of engineering interest. The results of the coupled model are also compared to an uncoupled model to evaluate the differences one can obtain between a simple uncoupled model and a more sophisticated coupled model. The main conclusion drawn from this study is that the different numerical implementations used for the coupled damage-visco-plasticity model give quite consistent results. The numerical difficulties inherent in the integration of the strongly non-linear constitutive equations have been resolved using Runge-Kutta methods or the mid-point rule. The usefulness of the coupled model comes from the fact that the uncoupled model leads to overly conservative results, at least in the example treated and in particular for the uncoupled analysis under the hypothesis of the small

  13. Atterberg Limits Prediction Comparing SVM with ANFIS Model

    Directory of Open Access Journals (Sweden)

    Mohammad Murtaza Sherzoy

    2017-03-01

    Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) analytical methods are both used to predict the values of the Atterberg limits: the liquid limit, plastic limit and plasticity index. The main objective of this study is to compare the forecasts of the two methods (SVM and ANFIS). Data from 54 soil samples taken from the area of Peninsular Malaysia were used; the samples were tested for liquid limit, plastic limit, plasticity index and grain size distribution. The input parameters used in this case are the fractions of the grain size distribution, i.e. the percentages of silt, clay and sand. The actual values of the Atterberg limits and those predicted by the SVM and ANFIS models are compared using the correlation coefficient R2 and the root mean squared error (RMSE). The outcome of the study shows that the ANFIS model achieves higher accuracy than the SVM model for the liquid limit (R2 = 0.987), plastic limit (R2 = 0.949) and plasticity index (R2 = 0.966). The RMSE values obtained for both methods also show that the ANFIS model outperforms the SVM model in predicting the Atterberg limits as a whole.
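
    The two goodness-of-fit measures used above are standard and easy to state; the sketch below computes R2 and RMSE for a made-up set of measured vs. predicted liquid-limit values (the numbers are hypothetical, not from the paper's 54 samples).

```python
import numpy as np

# R^2 (coefficient of determination) and RMSE between measured values
# and model predictions, the two metrics used to rank the SVM and ANFIS models.
def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)   # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

measured = np.array([45.0, 52.0, 38.0, 60.0, 48.0])   # hypothetical liquid limits (%)
predicted = np.array([44.0, 53.5, 37.0, 58.5, 49.0])  # hypothetical model output
print(round(r_squared(measured, predicted), 3))  # 0.972
print(round(rmse(measured, predicted), 3))       # 1.225
```

    A model with higher R2 and lower RMSE on the same data, as reported for ANFIS vs. SVM above, fits the measurements more closely.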

  14. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing...... the probability of generating a given string, or computing the most likely path generating a given string. In this paper we consider the problem of computing the most likely string, or consensus string, generated by a given model, and its implications on the complexity of comparing hidden Markov models. We show...... that computing the consensus string, and approximating its probability within any constant factor, is NP-hard, and that the same holds for the closely related labeling problem for class hidden Markov models. Furthermore, we establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1...
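
    To make the consensus string problem concrete, the toy sketch below brute-forces it for a two-state HMM with made-up parameters: it scores every string of a fixed length with the forward algorithm and keeps the most probable one. The exponential enumeration is exactly what the NP-hardness result above says cannot, in general, be avoided.

```python
import itertools
import numpy as np

# Tiny two-state HMM over the alphabet {a, b}; all parameters are invented.
A = np.array([[0.7, 0.3], [0.4, 0.6]])                      # state transitions
E = {"a": np.array([0.9, 0.2]), "b": np.array([0.1, 0.8])}  # emission probs per state
pi = np.array([0.5, 0.5])                                   # initial distribution

def string_prob(s):
    """Forward algorithm: probability of generating s, summed over all paths."""
    f = pi * E[s[0]]
    for ch in s[1:]:
        f = (f @ A) * E[ch]
    return float(f.sum())

L = 3
consensus = max(("".join(t) for t in itertools.product("ab", repeat=L)),
                key=string_prob)
print(consensus)  # "aaa" for these parameters
```

    Note that the consensus string maximizes the total generation probability (all paths summed), which in general need not coincide with the emission sequence of the single most likely (Viterbi) path.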

  15. Comparing the engineering program feeders from SiF and convention models

    Science.gov (United States)

    Roongruangsri, Warawaran; Moonpa, Niwat; Vuthijumnonk, Janyawat; Sangsuwan, Kampanart

    2018-01-01

    This research compares two types of engineering program feeder models within the technical education system of Rajamangala University of Technology Lanna (RMUTL), Chiangmai, Thailand. To illustrate, the paper refers to two typologies of feeder models: the conventional model and the school-in-factory (SiF) model. The new SiF model is developed through a collaborative educational process between the sectors of industry, government and academia, using work-integrated learning. The research methodology compares features of the SiF model with the conventional model in terms of learning outcomes, funding budget for the study, and the advantages and disadvantages from the point of view of students, professors, the university, government and industrial partners. The results of this research indicate that the developed SiF feeder model is the most pertinent one, as it meets the requirements of the university, the government and the industry. The SiF feeder model showed the ability to yield positive learning outcomes with low expenditure per student for both the family and the university. In parallel, the sharing of knowledge between university and industry became increasingly important in the process, which resulted in the improvement of industrial skills for professors and an increase in industry-based research for the university. The SiF feeder model meets the demands of public policy in supporting a skilled workforce for the industry, and could be an effective tool for the triple-helix educational model of Thailand.

  16. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Full Text Available Abstract Background Computational models of protein structures have proved useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results For our dataset of 615 search models, using the real local accuracy of a model increases the MR success ratio by 101% compared to corresponding polyalanine templates. On the contrary, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy, MetaMQAPclust, a “clustering MQAP”, was used. Conclusions Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict if an MR search with a template-based model for a given template is likely to find the correct solution.

  17. Comparative dynamic analysis of the full Grossman model.

    Science.gov (United States)

    Ried, W

    1998-08-01

    The paper applies the method of comparative dynamic analysis to the full Grossman model. For a particular class of solutions, it derives the equations implicitly defining the complete trajectories of the endogenous variables. Relying on the concept of Frisch decision functions, the impact of any parametric change on an endogenous variable can be decomposed into a direct and an indirect effect. The focus of the paper is on marginal changes in the rate of health capital depreciation. It also analyses the impact of either initial financial wealth or the initial stock of health capital. While the direction of most effects remains ambiguous in the full model, the assumption of a zero consumption benefit of health is sufficient to obtain a definite sign for any direct or indirect effect.

  18. Comparing and Validating Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.

    Science.gov (United States)

    Lane, Thomas; Russo, Daniel P; Zorn, Kimberley M; Clark, Alex M; Korotcov, Alexandru; Tkachenko, Valery; Reynolds, Robert C; Perryman, Alexander L; Freundlich, Joel S; Ekins, Sean

    2018-04-26

    Tuberculosis is a global health dilemma. In 2016, the WHO reported 10.4 million incident cases and 1.7 million deaths. The need to develop new treatments for those infected with Mycobacterium tuberculosis (Mtb) has led to many large-scale phenotypic screens and many thousands of new active compounds identified in vitro. However, with limited funding, efforts to discover new active molecules against Mtb need to be more efficient. Several computational machine learning approaches have been shown to have good enrichment and hit rates. We have curated small-molecule Mtb data and developed new models with a total of 18,886 molecules at activity cutoffs of 10 μM, 1 μM, and 100 nM. These data sets were used to evaluate different machine learning methods (including deep learning) and metrics and to generate predictions for additional molecules published in 2017. One Mtb model, a combined in vitro and in vivo data Bayesian model at the 100 nM activity cutoff, yielded the following metrics for 5-fold cross-validation: accuracy = 0.88, precision = 0.22, recall = 0.91, specificity = 0.88, kappa = 0.31, and MCC = 0.41. We have also curated an evaluation set (n = 153 compounds) published in 2017, and when used to test our model, it showed comparable statistics (accuracy = 0.83, precision = 0.27, recall = 1.00, specificity = 0.81, kappa = 0.36, and MCC = 0.47). We have also compared these models with additional machine learning algorithms, showing that Bayesian machine learning models constructed with literature Mtb data generated by different laboratories were generally equivalent to or outperformed deep neural networks with external test sets. Finally, we have also compared our training and test sets to show they were suitably diverse and different in order to represent useful evaluation sets. Such Mtb machine learning models could help prioritize compounds for testing in vitro and in vivo.
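    For reference, the metrics quoted in this record (accuracy, precision, recall, specificity, Cohen's kappa, and the Matthews correlation coefficient) all derive from the four confusion-matrix counts. A self-contained sketch with made-up counts, not the paper's data:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    n = tp + fp + tn + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity
    specificity = tn / (tn + fp)
    # Cohen's kappa: observed agreement vs chance agreement.
    p_chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (accuracy - p_chance) / (1 - p_chance)
    # Matthews correlation coefficient.
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                specificity=specificity, kappa=kappa, mcc=mcc)

# Illustrative counts for a rare-active screening set (invented numbers).
m = classification_metrics(tp=20, fp=70, tn=850, fn=2)
print({k: round(v, 2) for k, v in m.items()})
```

    Note how a high recall can coexist with low precision on imbalanced data, which is why kappa and MCC are reported alongside accuracy.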

  19. The chemical induction of seizures in psychiatric therapy: were flurothyl (indoklon) and pentylenetetrazol (metrazol) abandoned prematurely?

    Science.gov (United States)

    Cooper, Kathryn; Fink, Max

    2014-10-01

    Camphor-induced and pentylenetetrazol-induced brain seizures were first used to relieve psychiatric illnesses in 1934. Electrical inductions (electroconvulsive therapy, ECT) followed in 1938. These were easier and less expensive to administer and quickly became the main treatment method. In 1957, seizure induction with the inhalant anesthetic flurothyl was tested and found to be clinically effective. For many decades, complaints of memory loss have stigmatized and inhibited ECT use. Many variations of electricity in form, electrode placement, dosing, and stimulation method offered some relief, but complaints still limit its use. The experience with chemical inductions of seizures was reviewed based on searches for reports of each agent in Medline and in the archival files of original studies by the early investigators. Camphor injections were inefficient and were rapidly replaced by pentylenetetrazol. These were effective but difficult to administer. Flurothyl inhalation-induced seizures were as clinically effective as electrical inductions, with lesser effects on memory functions. Flurothyl inductions were discarded because of the persistence of the ethereal aroma and the fears induced in the professional staff that they might themselves seize. Persistent complaints of memory loss plague electricity-induced seizures. Flurothyl-induced seizures are clinically as effective, without the memory effects associated with electricity. Reexamination of seizure inductions using flurothyl in modern anesthesia facilities is encouraged to relieve medication-resistant patients with mood disorders and catatonia.

  20. New tips for structure prediction by comparative modeling

    OpenAIRE

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of such a technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the score of sequence identity of the target protein to the template. To assess the relationship between sequence iden...

  1. Development of multivariate NTCP models for radiation-induced hypothyroidism: a comparative analysis

    International Nuclear Information System (INIS)

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2012-01-01

    Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with already existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin’s lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters were collected, and their correlation to RHT was analyzed by Spearman’s rank correlation coefficient (Rs). A multivariate logistic regression method using resampling (bootstrapping) was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. If the thyroid volume exceeding X Gy is expressed as a percentage (Vx(%)), a two-variable NTCP model including V30(%) and gender proved to be the optimal predictive model for RHT (Rs = 0.615, p < 0.001; AUC = 0.87). Conversely, if the absolute thyroid volume exceeding X Gy (Vx(cc)) was analyzed, an NTCP model based on three variables including V30(cc), thyroid gland volume and gender was selected as the most predictive model (Rs = 0.630, p < 0.001; AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volumes (AUC = 0.914, 95% CI 0.760–0.984). A comparable performance was found between our model and that proposed in the literature based on thyroid gland mean dose and volume (p = 0.264). The absolute volume of thyroid gland exceeding 30 Gy in combination with thyroid gland volume and gender provide an NTCP model for RHT with improved prediction capability not only within our patient population but also in an
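    The multivariate NTCP models described here are logistic regressions on dose-volume and clinical variables. A sketch of the three-variable form with hypothetical coefficients (the abstract does not report the fitted values, so the numbers below are placeholders):

```python
import math

def ntcp_logistic(v30_cc, thyroid_volume_cc, is_female,
                  b0, b_v30, b_vol, b_sex):
    """Three-variable logistic NTCP model:
    P = 1 / (1 + exp(-S)), with
    S = b0 + b_v30*V30(cc) + b_vol*thyroid volume + b_sex*gender."""
    s = b0 + b_v30 * v30_cc + b_vol * thyroid_volume_cc + b_sex * is_female
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical coefficients, for illustration only (not the paper's fit).
p = ntcp_logistic(v30_cc=12.0, thyroid_volume_cc=15.0, is_female=1,
                  b0=-1.94, b_v30=0.26, b_vol=-0.19, b_sex=1.1)
print(round(p, 3))
```

    The signs chosen here follow the abstract's qualitative findings: complication probability rises with irradiated volume and female gender, and falls with larger gland volume.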

  2. THE STELLAR MASS COMPONENTS OF GALAXIES: COMPARING SEMI-ANALYTICAL MODELS WITH OBSERVATION

    International Nuclear Information System (INIS)

    Liu Lei; Yang Xiaohu; Mo, H. J.; Van den Bosch, Frank C.; Springel, Volker

    2010-01-01

    We compare the stellar masses of central and satellite galaxies predicted by three independent semi-analytical models (SAMs) with observational results obtained from a large galaxy group catalog constructed from the Sloan Digital Sky Survey. In particular, we compare the stellar mass functions of centrals and satellites, the relation between total stellar mass and halo mass, and the conditional stellar mass functions, Φ(M*|Mh), which specify the average number of galaxies of stellar mass M* that reside in a halo of mass Mh. The SAMs only predict the correct stellar masses of central galaxies within a limited mass range and all models fail to reproduce the sharp decline of stellar mass with decreasing halo mass observed at the low-mass end. In addition, all models over-predict the number of satellite galaxies by roughly a factor of 2. The predicted stellar mass in satellite galaxies can be made to match the data by assuming that a significant fraction of satellite galaxies are tidally stripped and disrupted, giving rise to a population of intra-cluster stars (ICS) in their host halos. However, the amount of ICS thus predicted is too large compared to observation. This suggests that current galaxy formation models still have serious problems in modeling star formation in low-mass halos.

  3. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, the Watts–Strogatz small world model, the Albert–Barabási preferential attachment model, the Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
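    The entropy comparison the authors propose can be sketched with standard-library code: compute the Shannon entropy of a graph's degree distribution and compare it against realizations of a theoretical model, here an Erdős–Rényi sample (toy graphs, illustrative only):

```python
import math
import random

def degree_entropy(adj):
    """Shannon entropy of the degree distribution of a graph
    given as {node: set(neighbours)}."""
    degrees = [len(nbrs) for nbrs in adj.values()]
    counts = {}
    for d in degrees:
        counts[d] = counts.get(d, 0) + 1
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def erdos_renyi(n, p, seed=0):
    """Sample a G(n, p) random graph as an adjacency-set dict."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

# A small 'empirical' star graph vs an ER sample of the same size.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(degree_entropy(star))              # two degree values: 1 and 4
print(degree_entropy(erdos_renyi(5, 0.5)))
```

    The paper's method repeats this over many model realizations and several centrality measures; the sketch shows only the degree-entropy building block.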

  4. Comparing droplet activation parameterisations against adiabatic parcel models using a novel inverse modelling framework

    Science.gov (United States)

    Partridge, Daniel; Morales, Ricardo; Stier, Philip

    2015-04-01

    Many previous studies have compared droplet activation parameterisations against adiabatic parcel models (e.g. Ghan et al., 2001). However, these have often involved comparisons for a limited number of parameter combinations based upon certain aerosol regimes. Recent studies (Morales et al., 2014) have used wider ranges when evaluating their parameterisations; however, no study has explored the full multi-dimensional parameter space that would be experienced by droplet activation within a global climate model (GCM). It is important to be able to efficiently highlight regions of the entire multi-dimensional parameter space in which we can expect the largest discrepancy between parameterisation and cloud parcel models, in order to ascertain which regions simulated by a GCM can be expected to be a less accurate representation of the process of cloud droplet activation. This study provides a new, efficient, inverse modelling framework for comparing droplet activation parameterisations to more complex cloud parcel models. To achieve this we couple a Markov Chain Monte Carlo algorithm (Partridge et al., 2012) to two independent adiabatic cloud parcel models and four droplet activation parameterisations. This framework is computationally faster than employing a brute-force Monte Carlo simulation, and allows us to transparently highlight which parameterisation provides the closest representation across all aerosol physiochemical and meteorological environments. The parameterisations are demonstrated to perform well for a large proportion of possible parameter combinations; however, for certain key parameters, most notably the vertical velocity and accumulation-mode aerosol concentration, large discrepancies are highlighted. These discrepancies correspond to parameter combinations that result in very high/low simulated values of maximum supersaturation. By identifying parameter interactions or regimes within the multi-dimensional parameter space we hope to guide
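    The inverse-modelling machinery referred to above rests on Markov Chain Monte Carlo. A generic random-walk Metropolis sketch, with a toy one-dimensional 'discrepancy' standing in for the parameterisation-versus-parcel-model mismatch (the function and all numbers are hypothetical):

```python
import math
import random

def metropolis(log_target, x0, step, n_samples, seed=0):
    """Random-walk Metropolis sampler for an unnormalised log-density."""
    rng = random.Random(seed)
    x, logp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        logp_prop = log_target(prop)
        if math.log(rng.random()) < logp_prop - logp:  # accept/reject
            x, logp = prop, logp_prop
        samples.append(x)
    return samples

# Toy 'discrepancy' between a parameterisation and a reference parcel
# model as a function of (scaled) updraft velocity w -- purely illustrative.
def discrepancy(w):
    return (w - 0.5) ** 2

# Sampling exp(-discrepancy/T) concentrates draws where the mismatch is
# small; flipping the sign would instead seek the worst-performing regime.
draws = metropolis(lambda w: -discrepancy(w) / 0.1, x0=0.0, step=0.3,
                   n_samples=5000)
print(round(sum(draws) / len(draws), 2))
```

    The cited framework does this in many dimensions at once, with the parcel model in the loop; this sketch shows only the sampling core.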

  5. Comparative study of computational model for pipe whip analysis

    International Nuclear Information System (INIS)

    Koh, Sugoong; Lee, Young-Shin

    1993-01-01

    Many types of pipe whip restraints are installed to protect the structural components from the anticipated pipe whip phenomena of high energy lines in nuclear power plants. It is necessary to investigate these phenomena accurately in order to evaluate the acceptability of the pipe whip restraint design. Various research programs have been conducted in many countries to develop analytical methods and to verify the validity of the methods. In this study, various calculational models in ANSYS code and in ADLPIPE code, the general purpose finite element computer programs, were used to simulate the postulated pipe whips to obtain impact loads and the calculated results were compared with the specific experimental results from the sample pipe whip test for the U-shaped pipe whip restraints. Some calculational models, having the spring element between the pipe whip restraint and the pipe line, give reasonably good transient responses of the restraint forces compared with the experimental results, and could be useful in evaluating the acceptability of the pipe whip restraint design. (author)

  6. A comparative review of radiation-induced cancer risk models

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Hee; Kim, Ju Youl [FNC Technology Co., Ltd., Yongin (Korea, Republic of); Han, Seok Jung [Risk and Environmental Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-06-15

    With the need for a domestic level 3 probabilistic safety assessment (PSA), it is essential to develop a Korea-specific code. Health effect assessments study radiation-induced impacts; in particular, long-term health effects are evaluated in terms of cancer risk. The objective of this study was to analyze the latest cancer risk models developed by foreign organizations and to compare the methodology of how they were developed. This paper also provides suggestions regarding the development of Korean cancer risk models. A review of cancer risk models was carried out targeting the latest models: the NUREG model (1993), the BEIR VII model (2006), the UNSCEAR model (2006), the ICRP 103 model (2007), and the U.S. EPA model (2011). The methodology of how each model was developed is explained, and the cancer sites, dose and dose rate effectiveness factor (DDREF) and mathematical models are also described in the sections presenting differences among the models. The NUREG model was developed by assuming that the risk was proportional to the risk coefficient and dose, while the BEIR VII, UNSCEAR, ICRP, and U.S. EPA models were derived from epidemiological data, principally from Japanese atomic bomb survivors. The risk coefficient does not consider individual characteristics, as the values were calculated in terms of population-averaged cancer risk per unit dose. However, the models derived by epidemiological data are a function of sex, exposure age, and attained age of the exposed individual. Moreover, the methodologies can be used to apply the latest epidemiological data. Therefore, methodologies using epidemiological data should be considered first for developing a Korean cancer risk model, and the cancer sites and DDREF should also be determined based on Korea-specific studies. This review can be used as a basis for developing a Korean cancer risk model in the future.
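    The two model families contrasted in this review can be caricatured in a few lines: a population-averaged linear coefficient model, versus an epidemiology-derived risk modified by sex and age factors and divided by a DDREF at low doses. All numbers below are illustrative placeholders, not values from any reviewed model:

```python
def nureg_style_risk(dose_sv, risk_coefficient):
    """Population-averaged linear model: lifetime risk = coefficient x dose."""
    return risk_coefficient * dose_sv

def epidemiology_style_risk(dose_sv, beta, sex_factor, age_factor, ddref):
    """Stylised epidemiology-based excess risk: a baseline coefficient
    modified by sex and attained-age factors, reduced by the DDREF at
    low doses and dose rates."""
    return beta * dose_sv * sex_factor * age_factor / ddref

# Illustrative placeholder values only.
print(round(nureg_style_risk(dose_sv=0.1, risk_coefficient=0.05), 4))
print(round(epidemiology_style_risk(0.1, beta=0.5, sex_factor=1.3,
                                    age_factor=0.8, ddref=2.0), 4))
```

    The review's point is that only the second form can absorb individual characteristics and new epidemiological data, which is why it is recommended as the basis for a Korean model.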

  7. Comparing fire spread algorithms using equivalence testing and neutral landscape models

    Science.gov (United States)

    Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson

    2009-01-01

    We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...

  8. DIDEM - An integrated model for comparative health damage costs calculation of air pollution

    Science.gov (United States)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara

    2018-01-01

    Air pollution represents a continuous hazard to human health. Administration, companies and population need efficient indicators of the possible effects given by a change in decision, strategy or habit. The monetary quantification of health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decision and information at all levels. The development of modelling tools for the calculation of external costs can provide support to analysts in the development of consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta-external costs of air pollution comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, represented by a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper. A comparison with other existing models worldwide is reported.
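    The impact-pathway method underlying DIDEM chains dispersion results to health costs: a concentration change per grid cell, multiplied by the exposed population and a concentration-response slope, then monetised. A minimal sketch with hypothetical numbers (not DIDEM's actual data or functions):

```python
def delta_external_cost(delta_conc, population, crf_slope, unit_cost):
    """Impact-pathway chain for one pollutant and one health endpoint:
    extra cases = population x concentration change x concentration-
    response slope per cell; the total is monetised by a cost per case."""
    cases = [pop * dc * crf_slope for pop, dc in zip(population, delta_conc)]
    return sum(cases) * unit_cost

# Hypothetical grid: PM2.5 concentration change (ug/m3) in three cells
# between two emission scenarios.
delta_conc = [0.8, 0.3, 0.1]
population = [50_000, 120_000, 30_000]
crf = 1.2e-5              # cases per person per (ug/m3), illustrative
cost_per_case = 60_000.0  # EUR per case, illustrative

print(round(delta_external_cost(delta_conc, population, crf, cost_per_case)))
```

    DIDEM evaluates such sums over CALPUFF concentration fields and several endpoints; the sketch shows the delta-cost arithmetic for a single endpoint.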

  9. A comparative analysis of several vehicle emission models for road freight transportation

    NARCIS (Netherlands)

    Demir, E.; Bektas, T.; Laporte, G.

    2011-01-01

    Reducing greenhouse gas emissions in freight transportation requires using appropriate emission models in the planning process. This paper reviews and numerically compares several available freight transportation vehicle emission models and also considers their outputs in relation to field studies.

  10. What can be learned from computer modeling? Comparing expository and modeling approaches to teaching dynamic systems behavior

    NARCIS (Netherlands)

    van Borkulo, S.P.; van Joolingen, W.R.; Savelsbergh, E.R.; de Jong, T.

    2012-01-01

    Computer modeling has been widely promoted as a means to attain higher order learning outcomes. Substantiating these benefits, however, has been problematic due to a lack of proper assessment tools. In this study, we compared computer modeling with expository instruction, using a tailored assessment

  11. Modelling and Comparative Performance Analysis of a Time-Reversed UWB System

    Directory of Open Access Journals (Sweden)

    Popovski K

    2007-01-01

    Full Text Available The effects of multipath propagation lead to a significant decrease in system performance in most of the proposed ultra-wideband communication systems. A time-reversed system utilises the multipath channel impulse response to decrease receiver complexity, through a prefiltering at the transmitter. This paper discusses the modelling and comparative performance of a UWB system utilising time-reversed communications. System equations are presented, together with a semianalytical formulation on the level of intersymbol interference and multiuser interference. The standardised IEEE 802.15.3a channel model is applied, and the estimated error performance is compared through simulation with the performance of both time-hopped time-reversed and RAKE-based UWB systems.

  12. Psychobiological model of temperament and character: Validation and cross-cultural comparations

    Directory of Open Access Journals (Sweden)

    Džamonja-Ignjatović Tamara

    2005-01-01

    Full Text Available The paper presents research results regarding the psychobiological model of personality by Robert Cloninger. The primary research goal was to test the new TCI-5 inventory and compare our results with US normative data. We also analyzed the factor structure of the model and the reliability of the basic TCI-5 scales and sub-scales. The sample consisted of 473 subjects from the normal population, age range 18-50 years. Results showed significant differences between the Serbian and American samples. Compared to the American sample, Novelty seeking was higher in the Serbian sample, while Persistence, Self-directedness and Cooperativeness were lower. For the most part, the results of the present study confirmed a seven-factor structure of the model, although some sub-scales did not coincide with the basic dimensions as predicted by the theoretical model. Therefore certain theoretical revisions of the model are required in order to fit the empirical findings. Similarly, a discrepancy between the theoretical and the empirical was also noticed regarding the reliability of the TCI-5 scales, which also needs to be re-examined. The results of the study showed satisfactory reliability of Persistence (.90), Self-directedness (.89) and Harm avoidance (.87), but lower reliability of Novelty seeking (.78), Reward dependence (.79) and Self-transcendence (.78).
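    The scale reliabilities reported here (.78 to .90) are internal-consistency coefficients of the Cronbach's alpha type. A standard-library sketch with made-up item responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical responses: 4 items, 5 respondents (not the study's data).
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 5],
    [3, 4, 2, 5, 3],
]
print(round(cronbach_alpha(items), 2))
```

    Values above roughly .80 are conventionally read as satisfactory, which is the threshold implicit in the abstract's contrast between the .87-.90 and .78-.79 scales.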

  13. Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models

    Science.gov (United States)

    Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.

    2014-12-01

    We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.
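    The Pearson and Spearman statistics used to compare the two models can be computed with the standard library alone; Spearman's coefficient is simply the Pearson correlation of the ranks (with ties averaged). The subbasin water-yield numbers below are invented for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ranks(xs):
    """Ranks starting at 1, with ties given their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return pearson(ranks(xs), ranks(ys))

# Hypothetical per-subbasin annual water-yield estimates from two models.
swat   = [310, 250, 420, 180, 390]
invest = [280, 260, 400, 150, 340]
print(round(pearson(swat, invest), 2), round(spearman(swat, invest), 2))
```

    As in the study, the two statistics answer different questions: Pearson tracks quantitative agreement, Spearman only whether the models rank the subbasins the same way.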

  14. Assessment and Challenges of Ligand Docking into Comparative Models of G-Protein Coupled Receptors

    DEFF Research Database (Denmark)

    Nguyen, E.D.; Meiler, J.; Norn, C.

    2013-01-01

    screening and to design and optimize drug candidates. However, low sequence identity between receptors, conformational flexibility, and chemical diversity of ligands present an enormous challenge to molecular modeling approaches. It is our hypothesis that rapid Monte-Carlo sampling of protein backbone...... extracellular loop. Furthermore, these models are consistently correlated with low Rosetta energy score. To predict their binding modes, ligand conformers of the 14 ligands co-crystalized with the GPCRs were docked against the top ranked comparative models. In contrast to the comparative models themselves...

  15. A Comparative Study of Theoretical Graph Models for Characterizing Structural Networks of Human Brain

    Directory of Open Access Journals (Sweden)

    Xiaojin Li

    2013-01-01

    Full Text Available Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs) are localized by a recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL), to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools, and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties of two graph models, namely, the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, the STICKY and SF-GD models perform better in characterizing the structural human brain network.
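    Among the local graph properties compared in studies of this kind is the clustering coefficient. A small standard-library sketch on a toy adjacency-set graph (illustrative only):

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v: the fraction of pairs of
    v's neighbours that are themselves connected."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

def average_clustering(adj):
    """Mean of the local clustering coefficients over all nodes."""
    return sum(clustering_coefficient(adj, v) for v in adj) / len(adj)

# Tiny undirected graph as {node: set(neighbours)} -- illustrative only.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(average_clustering(adj))
```

    Model comparisons of the kind described above measure such properties on the empirical network and on each candidate generative model, then ask which model reproduces them best.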

  16. A comparative study of two fast nonlinear free-surface water wave models

    DEFF Research Database (Denmark)

    Ducrozet, Guillaume; Bingham, Harry B.; Engsig-Karup, Allan Peter

    2012-01-01

    simply directly solves the three-dimensional problem. Both models have been well validated on standard test cases and shown to exhibit attractive convergence properties and an optimal scaling of the computational effort with increasing problem size. These two models are compared for solution of a typical...... used in OceanWave3D, the closer the results come to the HOS model....

  17. Dinucleotide controlled null models for comparative RNA gene prediction.

    Science.gov (United States)

    Gesell, Tanja; Washietl, Stefan

    2008-05-27

    Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. 
SISSIz

  18. Dinucleotide controlled null models for comparative RNA gene prediction

    Directory of Open Access Journals (Sweden)

    Gesell Tanja

    2008-05-01

    Full Text Available Abstract Background Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is a need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. Results We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model and used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic-structure-based RNA gene finding program that is not biased by the dinucleotide content. Conclusion SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to obtain more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine-learning-based programs, or as a standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered.

  19. Modeling Mixed Bicycle Traffic Flow: A Comparative Study on the Cellular Automata Approach

    Directory of Open Access Journals (Sweden)

    Dan Zhou

    2015-01-01

    Full Text Available Simulation, as a powerful tool for evaluating transportation systems, has been widely used in transportation planning, management, and operations. Most simulation models focus on motorized vehicles, while the modeling of nonmotorized vehicles is ignored. The cellular automata (CA) model is a very important simulation approach and is widely used for motorized vehicle traffic. The Nagel-Schreckenberg (NS) CA model and the multivalue CA (M-CA) model are two categories of CA model that have been used in previous studies of bicycle traffic flow. This paper improves on these two CA models and also compares their characteristics. It introduces a two-lane NS CA model and M-CA model for both regular bicycles (RBs) and electric bicycles (EBs). In the research for this paper, many cases, featuring different values for the slowing-down probability, lane-changing probability, and proportion of EBs, were simulated, while the fundamental diagrams and capacities of the proposed models were analyzed and compared between the two models. Field data were collected for the evaluation of the two models. The results show that the M-CA model exhibits more stable performance than the two-lane NS model and provides results that are closer to real bicycle traffic.
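
    The Nagel-Schreckenberg update rule that both bicycle models build on is compact enough to sketch. The following is a generic single-lane textbook NS implementation, not the authors' two-lane RB/EB code; `vmax = 2` and `p_slow = 0.3` are illustrative choices:

```python
import random

def ns_step(cells, vmax=2, p_slow=0.3, rng=random):
    """One parallel Nagel-Schreckenberg update on a circular road.

    cells[i] holds a vehicle's velocity (0..vmax) or -1 for an empty cell.
    """
    L = len(cells)
    new_cells = [-1] * L
    for i, v in enumerate(cells):
        if v < 0:
            continue
        v = min(v + 1, vmax)                 # 1. accelerate toward vmax
        gap = 1                              # 2. brake: distance to vehicle ahead
        while cells[(i + gap) % L] < 0:
            gap += 1
        v = min(v, gap - 1)
        if v > 0 and rng.random() < p_slow:  # 3. random slowdown
            v -= 1
        new_cells[(i + v) % L] = v           # 4. move forward v cells
    return new_cells

# Evolve a 30-cell road with 10 bicycles and report the mean speed (a flow proxy).
rng = random.Random(42)
road = [0 if i % 3 == 0 else -1 for i in range(30)]
for _ in range(100):
    road = ns_step(road, rng=rng)
speeds = [v for v in road if v >= 0]
print(len(speeds), sum(speeds) / len(speeds))
```

    Extending this toward the paper's setting would mean adding a second lane with a lane-changing probability and mixing vehicle classes with different `vmax` values (RBs versus faster EBs).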

  20. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    Full Text Available The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using 4 years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remaining produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79% of the highway segments. In addition, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87% of the highway segments. Thus, constraining the parameters to be fixed across all highway segments would lead to an inaccurate conclusion. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were observed to be significantly overestimated compared with those from the random-parameters negative binomial model.
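
    Both models rest on the negative binomial (NB2) likelihood. A pure-Python sketch of that likelihood follows, with made-up segment data and coefficients; the random-parameters variant would additionally draw each coefficient from an estimated normal distribution, which is not shown here:

```python
from math import lgamma, log, exp

def nb2_loglik(y, X, beta, alpha):
    """Log-likelihood of the NB2 model: E[y_i] = exp(x_i . beta),
    Var[y_i] = mu_i + alpha * mu_i**2, with alpha > 0 the overdispersion."""
    inv_a = 1.0 / alpha
    ll = 0.0
    for yi, xi in zip(y, X):
        mu = exp(sum(b * x for b, x in zip(beta, xi)))
        ll += (lgamma(yi + inv_a) - lgamma(inv_a) - lgamma(yi + 1)
               + inv_a * log(inv_a / (inv_a + mu))
               + yi * log(mu / (inv_a + mu)))
    return ll

# Hypothetical segments: [intercept, vertical curves per mile, noise barrier 0/1]
X = [[1.0, 0.5, 0.0], [1.0, 2.0, 1.0], [1.0, 1.0, 1.0], [1.0, 3.0, 0.0]]
y = [2, 9, 5, 7]  # observed annual crash counts (invented)
print(nb2_loglik(y, X, beta=[0.8, 0.4, 0.3], alpha=0.5))
```

    Maximizing this function over `beta` and `alpha` gives the fixed-parameters fit; the study's point is that treating elements of `beta` as segment-specific random draws changes the estimated marginal effects substantially.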

  1. A Comparative Study of Neural Networks and Fuzzy Systems in Modeling of a Nonlinear Dynamic System

    Directory of Open Access Journals (Sweden)

    Metin Demirtas

    2011-07-01

    Full Text Available The aim of this paper is to compare the neural networks and fuzzy modeling approaches on a nonlinear system. We have taken Permanent Magnet Brushless Direct Current (PMBDC) motor data and have generated models using both approaches. The predictive performance of both methods was compared on the data set across model configurations. The paper describes the results of these tests and discusses the effects of changing model parameters on predictive and practical performance. Modeling sensitivity was used to compare the two methods.
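
    A predictive-performance comparison of this kind reduces to computing an error metric for each model's predictions on the same data; a minimal sketch with invented motor data (the values below are not from the paper):

```python
from math import sqrt

def rmse(actual, predicted):
    """Root-mean-square error between measurements and model predictions."""
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical motor-speed measurements and the two models' predictions.
measured   = [100.0, 120.0, 140.0, 160.0]
neural_net = [101.0, 118.5, 141.0, 158.0]
fuzzy      = [ 98.0, 123.0, 136.0, 163.0]
print(rmse(measured, neural_net), rmse(measured, fuzzy))
```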

  2. Comparing i-Tree modeled ozone deposition with field measurements in a periurban Mediterranean forest

    Science.gov (United States)

    A. Morani; D. Nowak; S. Hirabayashi; G. Guidolotti; M. Medori; V. Muzzini; S. Fares; G. Scarascia Mugnozza; C. Calfapietra

    2014-01-01

    Ozone flux estimates from the i-Tree model were compared with ozone flux measurements using the Eddy Covariance technique in a periurban Mediterranean forest near Rome (Castelporziano). For the first time i-Tree model outputs were compared with field measurements in relation to dry deposition estimates. Results showed generally a...

  3. Comparative analysis of Bouc–Wen and Jiles–Atherton models under symmetric excitations

    Energy Technology Data Exchange (ETDEWEB)

    Laudani, Antonino, E-mail: alaudani@uniroma3.it; Fulginei, Francesco Riganti; Salvini, Alessandro

    2014-02-15

    The aim of the present paper is to validate the Bouc–Wen (BW) hysteresis model when it is applied to predict dynamic ferromagnetic loops. Indeed, although the Bouc–Wen model has attracted increasing interest in the last few years, it is usually adopted for mechanical and structural systems and very rarely for magnetic applications. To address this goal, the Bouc–Wen model is compared with the dynamic Jiles–Atherton model which, instead, was conceived specifically for simulating magnetic hysteresis. The comparative analysis involved saturated and symmetric hysteresis loops in ferromagnetic materials. In addition, in order to identify the Bouc–Wen parameters, a very effective recent heuristic, called Metric-Topological and Evolutionary Optimization (MeTEO), has been utilized. It is based on a hybridization of three meta-heuristics: the Flock-of-Starlings Optimization, the Particle Swarm Optimization and the Bacterial Chemotaxis Algorithm. Thanks to the specific properties of these heuristics, MeTEO allows effective identification of such models. Several hysteresis loops have been utilized for final validation tests, with the aim of investigating whether the BW model can follow the different hysteresis behaviors of both static (quasi-static) and dynamic cases.

  4. Jackson System Development, Entity-relationship Analysis and Data Flow Models: a comparative study

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1994-01-01

    This report compares JSD with ER modeling and data flow modeling. It is shown that JSD can be combined with ER modeling and that the result is a richer method than either of the two. The resulting method can serve as a basis for a practical object-oriented modeling method and has some resemblance to

  5. Comparing supply-side specifications in models of global agriculture and the food system

    NARCIS (Netherlands)

    Robinson, S.; Meijl, van J.C.M.; Willenbockel, D.; Valin, H.; Fujimori, S.; Masui, T.; Sands, R.; Wise, M.; Calvin, K.V.; Mason d'Croz, D.; Tabeau, A.A.; Kavallari, A.; Schmitz, C.; Dietrich, J.P.; Lampe, von M.

    2014-01-01

    This article compares the theoretical and functional specification of production in partial equilibrium (PE) and computable general equilibrium (CGE) models of the global agricultural and food system included in the AgMIP model comparison study. The two model families differ in their scope—partial

  6. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation in predicted P loss between the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  7. Comparative Proteomic Analysis of Two Uveitis Models in Lewis Rats.

    Science.gov (United States)

    Pepple, Kathryn L; Rotkis, Lauren; Wilson, Leslie; Sandt, Angela; Van Gelder, Russell N

    2015-12-01

    Inflammation generates changes in the protein constituents of the aqueous humor. Proteins that change in multiple models of uveitis may be good biomarkers of disease or targets for therapeutic intervention. The present study was conducted to identify differentially-expressed proteins in the inflamed aqueous humor. Two models of uveitis were induced in Lewis rats: experimental autoimmune uveitis (EAU) and primed mycobacterial uveitis (PMU). Differential gel electrophoresis was used to compare naïve and inflamed aqueous humor. Differentially-expressed proteins were separated by using 2-D gel electrophoresis and excised for identification with matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF). Expression of select proteins was verified by Western blot analysis in both the aqueous and vitreous. The inflamed aqueous from both models demonstrated an increase in total protein concentration when compared to naïve aqueous. Calprotectin, a heterodimer of S100A8 and S100A9, was increased in the aqueous in both PMU and EAU. In the vitreous, S100A8 and S100A9 were preferentially elevated in PMU. Apolipoprotein E was elevated in the aqueous of both uveitis models but was preferentially elevated in EAU. Beta-B2-crystallin levels decreased in the aqueous and vitreous of EAU but not PMU. The proinflammatory molecules S100A8 and S100A9 were elevated in both models of uveitis but may play a more significant role in PMU than EAU. The neuroprotective protein β-B2-crystallin was found to decline in EAU. Therapies to modulate these proteins in vivo may be good targets in the treatment of ocular inflammation.

  8. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  9. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used, the Virkler data (aluminium alloy) and data...... established at the Laboratory of Structural Engineering at Aalborg University, the AUC-data, (mild steel). The model, which is based on the assumption, that the crack propagation process can be described by a discrete Space Markov theory, is applicable to constant as well as random loading. It is shown...

  10. A comparative study of the constitutive models for silicon carbide

    Science.gov (United States)

    Ding, Jow-Lian; Dwivedi, Sunil; Gupta, Yogendra

    2001-06-01

    Most of the constitutive models for polycrystalline silicon carbide were developed and evaluated using data from either normal plate impact or Hopkinson bar experiments. At ISP, extensive efforts have been made to gain detailed insight into the shocked state of silicon carbide (SiC) using innovative experimental methods, viz., lateral stress measurements, in-material unloading measurements, and combined compression-shear experiments. The data obtained from these experiments provide some unique information for both developing and evaluating material models. In this study, these data for SiC were first used to evaluate some of the existing models to identify their strengths and possible deficiencies. Motivated by both the results of this comparative study and the experimental observations, an improved phenomenological model was developed. The model incorporates pressure dependence of strength, rate sensitivity, damage evolution under both tension and compression, the effect of pressure confinement on damage evolution, stiffness degradation due to damage, and pressure dependence of stiffness. The developed model captures most of the material features observed experimentally, but more work is needed to match the experimental data quantitatively.

  11. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
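
    The contrast between the two analyses can be made concrete: in discrete time, the labeled tissue signal is the convolution of the arterial input function with a residue function, and "model-free" analysis inverts that convolution numerically. Below is a toy, unregularized sketch (illustrative only; practical QUASAR pipelines regularize the inversion, e.g. via singular value decomposition, because raw deconvolution amplifies noise):

```python
def convolve(aif, r, dt):
    """Discrete causal convolution: c[i] = dt * sum_j aif[i-j] * r[j]."""
    n = len(r)
    return [dt * sum(aif[i - j] * r[j] for j in range(i + 1)) for i in range(n)]

def deconvolve(c, aif, dt):
    """Invert the convolution by forward substitution (requires aif[0] != 0).
    The convolution matrix is lower triangular, so this solve is exact for
    noise-free data; real pipelines add regularization."""
    n = len(c)
    r = [0.0] * n
    for i in range(n):
        tail = dt * sum(aif[i - j] * r[j] for j in range(i))
        r[i] = (c[i] - tail) / (dt * aif[0])
    return r

# Simulated arterial input and an exponential residue function R(t) = exp(-t/2).
dt = 0.5
aif = [1.0, 0.8, 0.5, 0.3, 0.1, 0.0, 0.0, 0.0]
r_true = [2.718281828 ** (-i * dt / 2.0) for i in range(8)]
c = convolve(aif, r_true, dt)
r_est = deconvolve(c, aif, dt)
print(max(abs(a - b) for a, b in zip(r_true, r_est)))  # ~0 in the noise-free case
```

    A model-based analysis instead fits a parametric kinetic model (here, e.g., an exponential residue with a dispersed bolus) directly to `c`, which is the trade-off the paper examines.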

  12. Does the Model Matter? Comparing Video Self-Modeling and Video Adult Modeling for Task Acquisition and Maintenance by Adolescents with Autism Spectrum Disorders

    Science.gov (United States)

    Cihak, David F.; Schrader, Linda

    2009-01-01

    The purpose of this study was to compare the effectiveness and efficiency of learning and maintaining vocational chain tasks using video self-modeling and video adult modeling instruction. Four adolescents with autism spectrum disorders were taught vocational and prevocational skills. Although both video modeling conditions were effective for…

  13. The separatrix response of diverted TCV plasmas compared to the CREATE-L model

    International Nuclear Information System (INIS)

    Vyas, P.; Lister, J.B.; Villone, F.; Albanese, R.

    1997-11-01

    The response of Ohmic, single-null diverted, non-centred plasmas in TCV to poloidal field coil stimulation has been compared to the linear CREATE-L MHD equilibrium response model. The closed loop responses of directly measured quantities, reconstructed parameters, and the reconstructed plasma contour were all examined. Provided that the plasma position and shape perturbation were small enough for the linearity assumption to hold, the model-experiment agreement was good. For some stimulations the open loop vertical position instability growth rate changed significantly, illustrating the limitations of a linear model. A different model was developed with the assumption that the flux at the plasma boundary is frozen and was also compared with experimental results. It proved not to be as reliable as the CREATE-L model for some simulation parameters showing that the experiments were able to discriminate between different plasma response models. The closed loop response was also found to be sensitive to changes in the modelled plasma shape. It was not possible to invalidate the CREATE-L model despite the extensive range of responses excited by the experiments. (author) figs., tabs., 5 refs

  14. Comparing pharmacophore models derived from crystallography and NMR ensembles

    Science.gov (United States)

    Ghanakota, Phani; Carlson, Heather A.

    2017-11-01

    NMR and X-ray crystallography are the two most widely used methods for determining protein structures. Our previous study examining NMR versus X-Ray sources of protein conformations showed improved performance with NMR structures when used in our Multiple Protein Structures (MPS) method for receptor-based pharmacophores (Damm, Carlson, J Am Chem Soc 129:8225-8235, 2007). However, that work was based on a single test case, HIV-1 protease, because of the rich data available for that system. New data for more systems are available now, which calls for further examination of the effect of different sources of protein conformations. The MPS technique was applied to Growth factor receptor bound protein 2 (Grb2), Src SH2 homology domain (Src-SH2), FK506-binding protein 1A (FKBP12), and Peroxisome proliferator-activated receptor-γ (PPAR-γ). Pharmacophore models from both crystal and NMR ensembles were able to discriminate between high-affinity, low-affinity, and decoy molecules. As we found in our original study, NMR models showed optimal performance when all elements were used. The crystal models had more pharmacophore elements compared to their NMR counterparts. The crystal-based models exhibited optimum performance only when pharmacophore elements were dropped. This supports our assertion that the higher flexibility in NMR ensembles helps focus the models on the most essential interactions with the protein. Our studies suggest that the "extra" pharmacophore elements seen at the periphery in X-ray models arise as a result of decreased protein flexibility and make very little contribution to model performance.

  15. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.
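
    The gist of outcome-based strategy classification with a penalized fit criterion can be illustrated with BIC as a simple stand-in for the minimum description length measure the authors use (the choices, predictions, and probabilities below are invented, and the actual method relies on order-constrained multinomial processing trees rather than this shortcut):

```python
from math import log

def loglik_deterministic(choices, predicted, eps):
    """Deterministic strategy: the predicted option is chosen with prob 1 - eps."""
    return sum(log(1 - eps) if c == p else log(eps)
               for c, p in zip(choices, predicted))

def loglik_probabilistic(choices, probs_a):
    """Probabilistic model: trial-specific probability of choosing option 'A'."""
    return sum(log(p) if c == "A" else log(1 - p)
               for c, p in zip(choices, probs_a))

def bic(loglik, k, n):
    """Bayesian information criterion: fit penalized by k parameters, n trials."""
    return -2.0 * loglik + k * log(n)

choices   = ["A", "A", "B", "A", "B", "A", "A", "B", "A", "A"]
predicted = ["A"] * 10                      # deterministic strategy: always A
probs_a   = [0.9, 0.8, 0.3, 0.7, 0.4, 0.9, 0.8, 0.2, 0.7, 0.9]

eps = sum(c != p for c, p in zip(choices, predicted)) / len(choices)  # fitted error rate
n = len(choices)
print(bic(loglik_deterministic(choices, predicted, eps), 1, n))
print(bic(loglik_probabilistic(choices, probs_a), 0, n))  # lower score wins
```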

  16. @TOME-2: a new pipeline for comparative modeling of protein–ligand complexes

    Science.gov (United States)

    Pons, Jean-Luc; Labesse, Gilles

    2009-01-01

    @TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands that is performed using protein–protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein–ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/ PMID:19443448

  17. @TOME-2: a new pipeline for comparative modeling of protein-ligand complexes.

    Science.gov (United States)

    Pons, Jean-Luc; Labesse, Gilles

    2009-07-01

    @TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands that is performed using protein-protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein-ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/

  18. State regulation of nuclear sector: comparative study of Argentina and Brazil models

    International Nuclear Information System (INIS)

    Monteiro Filho, Joselio Silveira

    2004-08-01

    This research presents a comparative assessment of the regulation models of the nuclear sector in Argentina - under the responsibility of the Autoridad Regulatoria Nuclear (ARN) - and Brazil - under the responsibility of the Comissao Nacional de Energia Nuclear (CNEN) - seeking to identify which model is more adequate for the safe use of nuclear energy. Owing to the methodology adopted, the theoretical framework resulted in criteria of analysis corresponding to the characteristics of the Brazilian regulatory agencies created for other economic sectors during the State reform starting in the mid-nineties. These criteria of analysis were then used as patterns of comparison between the regulation models of the nuclear sectors of Argentina and Brazil. The comparative assessment showed that the regulatory structure of the nuclear sector in Argentina appears more adequate for the safe use of nuclear energy than the model adopted in Brazil by CNEN, because it incorporates the criteria of functional, institutional and financial independence, definition of competences, technical excellence and transparency, which are indispensable for carrying out its functions with autonomy, ethics, impartiality and agility. (author)

  19. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    Science.gov (United States)

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.
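
    The two recommended adaptation methods - an absolute shift of the heat threshold and a reduction in the exposure-response slope - can be sketched with a minimal linear exposure-response function. All numbers below are invented for illustration and are not taken from the study:

```python
def heat_deaths(daily_tmax, baseline_daily_deaths, threshold, slope_per_degc):
    """Heat-attributable deaths under a linear exposure-response above a threshold."""
    return sum(baseline_daily_deaths * slope_per_degc * (t - threshold)
               for t in daily_tmax if t > threshold)

summer = [24.0, 27.5, 31.0, 33.5, 29.0, 36.0, 30.5]  # hypothetical daily Tmax, degC
base, thr, slope = 50.0, 28.0, 0.02                  # deaths/day, degC, per degC

no_adapt  = heat_deaths(summer, base, thr, slope)
shifted   = heat_deaths(summer, base, thr + 2.0, slope)  # absolute threshold shift
flattened = heat_deaths(summer, base, thr, slope * 0.7)  # reduction in slope
print(no_adapt, shifted, flattened)
```

    The gap between `no_adapt` and the two adapted estimates is the "adaptation uncertainty" the study compares against climate-model and emissions uncertainty.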

  20. Experience gained with the application of the MODIS diffusion model compared with the ATMOS Gauss-function-based model

    International Nuclear Information System (INIS)

    Mueller, A.

    1985-01-01

    The advantage of the Gauss-function-based models doubtlessly consists in their proven sets of propagation parameters, their empirical formulas for stack plume rise, and their easy adaptability and handling. However, grid models based on the trace-substance transport equation are more convincing in their fundamental principle. Grid models of the MODIS type can acquire a practical applicability comparable to that of Gauss models by developing techniques that allow the vertical self-movement of the plumes to be considered in grid models and that secure an improved determination of the diffusion coefficients. (orig./PW) [de]

  1. Extra-Tropical Cyclones at Climate Scales: Comparing Models to Observations

    Science.gov (United States)

    Tselioudis, G.; Bauer, M.; Rossow, W.

    2009-04-01

    Climate is often defined as the accumulation of weather, and weather is not the concern of climate models. Justification for this latter sentiment has long been hidden behind coarse model resolutions and blunt validation tools based on climatological maps. The spatial-temporal resolutions of today's climate models and observations are converging onto meteorological scales, however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough that its accumulation results in a robust climate simulation. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from observations and climate model output. These include the usual cyclone characteristics (centers, tracks), but also adaptive cyclone-centric composites. We have created a novel dataset, the MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6 hourly assessment of the areas under the influence of mid-latitude cyclones, using a search algorithm that delimits the boundaries of each system from the outer-most closed SLP contour. Using this we then extract composites of cloud, radiation, and precipitation properties from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. A demonstration of the potential usefulness of these tools in process-based climate model evaluation studies will be shown.

  2. Groundwater development stress: Global-scale indices compared to regional modeling

    Science.gov (United States)

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.

  3. Comparative modeling of coevolution in communities of unicellular organisms: adaptability and biodiversity.

    Science.gov (United States)

    Lashin, Sergey A; Suslov, Valentin V; Matushkin, Yuri G

    2010-06-01

    We propose an original program, "Evolutionary constructor", capable of computationally efficient modeling of both population-genetic and ecological problems, combining these directions in a single model at the required level of detail. We also present results of comparative modeling of stability, adaptability and biodiversity dynamics in populations of unicellular haploid organisms which form symbiotic ecosystems. The advantages and disadvantages of two evolutionary strategies of biota formation, one based on a few generalist taxa and the other based on biodiversity, are discussed.

  4. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-06-01

    The demand for automobiles in India has increased drastically over the last two and a half decades. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. The United Nations Industrial Development Organization (UNIDO) has taken a proactive approach, in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, to assist Indian SMEs in various clusters since 1999 to make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the fields of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework offers deeper insight into the implementation process at a more micro level than the UNIDO-ACMA Model.

  5. Numerical modeling of carrier gas flow in atomic layer deposition vacuum reactor: A comparative study of lattice Boltzmann models

    International Nuclear Information System (INIS)

    Pan, Dongqing; Chien Jen, Tien; Li, Tao; Yuan, Chris

    2014-01-01

    This paper characterizes the carrier gas flow in an atomic layer deposition (ALD) vacuum reactor by introducing the Lattice Boltzmann Method (LBM) to ALD simulation through a comparative study of two LBM models. Numerical models of the gas flow are constructed and implemented in two-dimensional geometry based on the lattice Bhatnagar–Gross–Krook (LBGK)-D2Q9 model and the two-relaxation-time (TRT) model. Both incompressible and compressible scenarios are simulated, and the two models are compared with respect to flow features, stability, and efficiency. Our simulation outcomes reveal that, for our specific ALD vacuum reactor, the TRT model generates better steady laminar flow features over the whole domain, with better stability and reliability than the LBGK-D2Q9 model, especially when the compressible effects of the gas flow are considered. The LBM-TRT model is verified indirectly by comparing its numerical results with those of conventional continuum-based computational fluid dynamics solvers, and it shows very good agreement with these conventional methods. Finally, the velocity field of the carrier gas flow through the ALD vacuum reactor was characterized with the LBM-TRT model. The flow in ALD is in a laminar steady state, with velocity concentrated at the corners and around the wafer. The effects of the flow field on precursor distribution, surface adsorption, and surface reactions are discussed in detail. A steady and evenly distributed velocity field contributes to a higher precursor concentration near the wafer, and relatively low particle velocities help to achieve better surface adsorption and deposition. The ALD reactor geometry needs to be considered carefully if a steady, laminar flow field around the wafer and better surface deposition are desired.

  6. Numerical modeling of carrier gas flow in atomic layer deposition vacuum reactor: A comparative study of lattice Boltzmann models

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Dongqing; Chien Jen, Tien [Department of Mechanical Engineering, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin 53201 (United States); Li, Tao [School of Mechanical Engineering, Dalian University of Technology, Dalian 116024 (China); Yuan, Chris, E-mail: cyuan@uwm.edu [Department of Mechanical Engineering, University of Wisconsin-Milwaukee, 3200 North Cramer Street, Milwaukee, Wisconsin 53211 (United States)

    2014-01-15

    This paper characterizes the carrier gas flow in an atomic layer deposition (ALD) vacuum reactor by introducing the Lattice Boltzmann Method (LBM) to ALD simulation through a comparative study of two LBM models. Numerical models of the gas flow are constructed and implemented in two-dimensional geometry based on the lattice Bhatnagar–Gross–Krook (LBGK)-D2Q9 model and the two-relaxation-time (TRT) model. Both incompressible and compressible scenarios are simulated, and the two models are compared with respect to flow features, stability, and efficiency. Our simulation outcomes reveal that, for our specific ALD vacuum reactor, the TRT model generates better steady laminar flow features over the whole domain, with better stability and reliability than the LBGK-D2Q9 model, especially when the compressible effects of the gas flow are considered. The LBM-TRT model is verified indirectly by comparing its numerical results with those of conventional continuum-based computational fluid dynamics solvers, and it shows very good agreement with these conventional methods. Finally, the velocity field of the carrier gas flow through the ALD vacuum reactor was characterized with the LBM-TRT model. The flow in ALD is in a laminar steady state, with velocity concentrated at the corners and around the wafer. The effects of the flow field on precursor distribution, surface adsorption, and surface reactions are discussed in detail. A steady and evenly distributed velocity field contributes to a higher precursor concentration near the wafer, and relatively low particle velocities help to achieve better surface adsorption and deposition. The ALD reactor geometry needs to be considered carefully if a steady, laminar flow field around the wafer and better surface deposition are desired.
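    The single-relaxation-time collision step that defines the LBGK-D2Q9 scheme compared above can be sketched in a few lines. This is a generic textbook illustration, not the authors' reactor model; the relaxation time tau and the test distribution are arbitrary choices.

```python
# Minimal LBGK (BGK single-relaxation-time) collision on the standard
# D2Q9 lattice: relax distributions f toward the local equilibrium.
# A TRT scheme differs by relaxing the even and odd parts of f with
# two separate rates instead of the single 1/tau used here.

# D2Q9 lattice velocities and weights
c = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
w = [4/9] + [1/9]*4 + [1/36]*4

def equilibrium(rho, ux, uy):
    """Second-order truncated Maxwell-Boltzmann equilibrium."""
    usq = ux*ux + uy*uy
    feq = []
    for (cx, cy), wi in zip(c, w):
        cu = cx*ux + cy*uy
        feq.append(wi * rho * (1 + 3*cu + 4.5*cu*cu - 1.5*usq))
    return feq

def collide(f, tau=0.8):
    """BGK relaxation; conserves mass and momentum exactly."""
    rho = sum(f)
    ux = sum(fi*cx for fi, (cx, cy) in zip(f, c)) / rho
    uy = sum(fi*cy for fi, (cx, cy) in zip(f, c)) / rho
    feq = equilibrium(rho, ux, uy)
    return [fi - (fi - fe)/tau for fi, fe in zip(f, feq)]

f0 = [0.1, 0.15, 0.1, 0.05, 0.1, 0.12, 0.08, 0.1, 0.1]
f1 = collide(f0)
print(round(sum(f0), 12) == round(sum(f1), 12))  # True: mass conserved
```

    A full simulation alternates this collision with a streaming step that shifts each fi along its lattice velocity.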

  7. Comparing ESC and iPSC—Based Models for Human Genetic Disorders

    Directory of Open Access Journals (Sweden)

    Tomer Halevy

    2014-10-01

    Full Text Available Traditionally, human disorders were studied using animal models or somatic cells taken from patients. Such studies enabled the analysis of the molecular mechanisms of numerous disorders, and led to the discovery of new treatments. Yet, these systems are limited or even irrelevant in modeling multiple genetic diseases. The isolation of human embryonic stem cells (ESCs) from diseased blastocysts, the derivation of induced pluripotent stem cells (iPSCs) from patients’ somatic cells, and the new technologies for genome editing of pluripotent stem cells have opened a new window of opportunities in the field of disease modeling, and enabled studying diseases that couldn’t be modeled in the past. Importantly, despite the high similarity between ESCs and iPSCs, there are several fundamental differences between these cells, which have important implications regarding disease modeling. In this review we compare ESC-based models to iPSC-based models, and highlight the advantages and disadvantages of each system. We further suggest a roadmap for how to choose the optimal strategy to model each specific disorder.

  8. Comparing ESC and iPSC-Based Models for Human Genetic Disorders.

    Science.gov (United States)

    Halevy, Tomer; Urbach, Achia

    2014-10-24

    Traditionally, human disorders were studied using animal models or somatic cells taken from patients. Such studies enabled the analysis of the molecular mechanisms of numerous disorders, and led to the discovery of new treatments. Yet, these systems are limited or even irrelevant in modeling multiple genetic diseases. The isolation of human embryonic stem cells (ESCs) from diseased blastocysts, the derivation of induced pluripotent stem cells (iPSCs) from patients' somatic cells, and the new technologies for genome editing of pluripotent stem cells have opened a new window of opportunities in the field of disease modeling, and enabled studying diseases that couldn't be modeled in the past. Importantly, despite the high similarity between ESCs and iPSCs, there are several fundamental differences between these cells, which have important implications regarding disease modeling. In this review we compare ESC-based models to iPSC-based models, and highlight the advantages and disadvantages of each system. We further suggest a roadmap for how to choose the optimal strategy to model each specific disorder.

  9. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.

  10. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

    Full Text Available Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement about which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address what effect the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. 
Features found to result in maximally predictive models are
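    The variance partition behind such a factorial ANOVA can be written out directly. The sketch below uses synthetic scores and a hypothetical 2x2 design rather than the study's 3x5 data; the sums-of-squares algebra is the standard balanced two-way decomposition.

```python
# Two-way ANOVA sums of squares for a balanced factorial design:
# factor A = learning technique, factor B = feature mapping (toy data).

def two_way_ss(data):
    """data[a][b] is a list of replicate scores for level a of factor A
    and level b of factor B. Returns (SS_A, SS_B, SS_AB, SS_error)."""
    A, B, n = len(data), len(data[0]), len(data[0][0])
    grand = sum(x for row in data for cell in row for x in cell) / (A*B*n)
    mean_a = [sum(x for cell in row for x in cell) / (B*n) for row in data]
    mean_b = [sum(x for a in range(A) for x in data[a][b]) / (A*n)
              for b in range(B)]
    cell_mean = [[sum(cell)/n for cell in row] for row in data]
    ss_a = B*n * sum((m - grand)**2 for m in mean_a)
    ss_b = A*n * sum((m - grand)**2 for m in mean_b)
    ss_ab = n * sum((cell_mean[a][b] - mean_a[a] - mean_b[b] + grand)**2
                    for a in range(A) for b in range(B))
    ss_err = sum((x - cell_mean[a][b])**2
                 for a in range(A) for b in range(B) for x in data[a][b])
    return ss_a, ss_b, ss_ab, ss_err

# 2 techniques x 2 feature sets, 3 cross-validation replicates each:
scores = [[[0.70, 0.72, 0.71], [0.80, 0.82, 0.81]],
          [[0.60, 0.62, 0.61], [0.75, 0.74, 0.76]]]
ssa, ssb, ssab, sse = two_way_ss(scores)
# The four components partition the total sum of squares exactly.
```

    Dividing each component by its degrees of freedom and forming F-ratios against the error mean square gives the significance tests the abstract refers to.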

  11. Alfven waves in the auroral ionosphere: A numerical model compared with measurements

    International Nuclear Information System (INIS)

    Knudsen, D.J.; Kelley, M.C.; Vickrey, J.F.

    1992-01-01

    The authors solve a linear numerical model of Alfven waves reflecting from the high-latitude ionosphere, both to better understand the role of the ionosphere in the magnetosphere/ionosphere coupling process and to compare model results with in situ measurements. They use the model to compute the frequency-dependent amplitude and phase relations between the meridional electric and the zonal magnetic fields due to Alfven waves. These relations are compared with measurements taken by an auroral sounding rocket flown in the morningside oval and by the HILAT satellite traversing the oval at local noon. The sounding rocket's trajectory was mostly parallel to the auroral oval, and it measured enhanced fluctuating field energy in regions of electron precipitation. The rocket-measured phase data are in excellent agreement with the Alfven wave model, and the modeled fields and those measured by HILAT are related by the height-integrated Pedersen conductivity Σp, indicating that the measured field fluctuations were due mainly to structured field-aligned current systems. A reason for the relative lack of Alfven wave energy in the HILAT measurements could be the fact that the satellite traveled mostly perpendicular to the oval and therefore quickly traversed narrow regions of electron precipitation and associated wave activity

  12. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum-information-entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. The treatment of these variances is therefore critical for robustness in updating the structural model, especially in the presence of modeling errors. To date, three ways of handling prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies for dealing with the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are represented by three FE models: a true model, a model with additional complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  13. Mobile Agent-Based Software Systems Modeling Approaches: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Aissam Belghiat

    2016-06-01

    Full Text Available Mobile agent-based applications are a special type of software system which takes advantage of mobile agents in order to provide a new, beneficial paradigm for solving multiple complex problems in several fields and areas such as network management, e-commerce, e-learning, etc. However, we note a lack of real applications based on this paradigm and a lack of serious evaluations of their modeling approaches. Hence, this paper provides a comparative study of modeling approaches for mobile agent-based software systems. The objective is to give the reader an overview and a thorough understanding of the work that has been done and of where the gaps in the research are.

  14. A comparative study of two phenomenological models of dephasing in series and parallel resistors

    International Nuclear Information System (INIS)

    Bandopadhyay, Swarnali; Chaudhuri, Debasish; Jayannavar, Arun M.

    2010-01-01

    We compare two recent phenomenological models of dephasing using a double barrier and a quantum ring geometry. While the stochastic absorption model generates controlled dephasing leading to Ohm's law for large dephasing strengths, a Gaussian random phase based statistical model shows many inconsistencies.

  15. One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.

    Science.gov (United States)

    Lhomme, J; Bouvier, C; Mignot, E; Paquier, A

    2006-01-01

    A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France) for mapping flow depths and velocities in the street network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipe junctions and has been modified here to fit X-shaped crossroads. The results were compared with those of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreement can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given and some research possibilities are proposed.
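    The kinematic wave approximation used in such 1D street models reduces to a continuity update per element plus Manning's equation for discharge. The sketch below routes a step hydrograph down a single idealized street; the geometry, roughness and time step are invented for illustration, and the paper's crossroads flow-partition scheme is not reproduced.

```python
# Explicit kinematic-wave routing down one wide-rectangular street
# reach, with Manning's equation Q = (1/n) A (A/width)^(2/3) sqrt(S).
# dt and dx are chosen so the kinematic wave speed satisfies CFL < 1.

def route_kinematic(inflow, dx=50.0, dt=5.0, n_cells=10,
                    slope=0.01, width=8.0, manning=0.015):
    """Route an inflow series (m^3/s) through n_cells elements of
    length dx; returns the outflow series at the downstream end."""
    def q_of_a(a):  # discharge from wetted area A
        if a <= 0.0:
            return 0.0
        return (1.0/manning) * a * (a/width)**(2.0/3.0) * slope**0.5

    area = [0.0] * n_cells
    out = []
    for q_in in inflow:
        # q[i] is the inflow to cell i; q[i+1] its current outflow
        q = [q_in] + [q_of_a(a) for a in area]
        # Continuity: dA/dt = (Q_in - Q_out) / dx
        area = [a + dt*(q[i] - q[i+1])/dx for i, a in enumerate(area)]
        out.append(q_of_a(area[-1]))
    return out

hydrograph = [0.0]*5 + [2.0]*115        # step inflow of 2 m^3/s
outflow = route_kinematic(hydrograph)
# Outflow rises toward the 2 m^3/s inflow after a translation delay.
```

    A street-network model chains many such reaches and distributes the flow among downstream branches at each crossroads.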

  16. Comparing Neural Networks and ARMA Models in Artificial Stock Market

    Czech Academy of Sciences Publication Activity Database

    Krtek, Jiří; Vošvrda, Miloslav

    2011-01-01

    Roč. 18, č. 28 (2011), s. 53-65 ISSN 1212-074X R&D Projects: GA ČR GD402/09/H045 Institutional research plan: CEZ:AV0Z10750506 Keywords : neural networks * vector ARMA * artificial market Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2011/E/krtek-comparing neural networks and arma models in artificial stock market.pdf

  17. Impact of rotavirus vaccination on hospitalisations in Belgium: comparing model predictions with observed data.

    Directory of Open Access Journals (Sweden)

    Baudouin Standaert

    Full Text Available BACKGROUND: Published economic assessments of rotavirus vaccination typically use modelling, mainly static Markov cohort models with birth cohorts followed up to the age of 5 years. Rotavirus vaccination has now been available for several years in some countries, and data have been collected to evaluate the real-world impact of vaccination on rotavirus hospitalisations. This study compared the economic impact of vaccination between model estimates and observed data on disease-specific hospitalisation reductions in a country for which both modelled and observed datasets exist (Belgium. METHODS: A previously published Markov cohort model estimated the impact of rotavirus vaccination on the number of rotavirus hospitalisations in children aged <5 years in Belgium using vaccine efficacy data from clinical development trials. Data on the number of rotavirus-positive gastroenteritis hospitalisations in children aged <5 years between 1 June 2004 and 31 May 2006 (pre-vaccination study period or 1 June 2007 to 31 May 2010 (post-vaccination study period were analysed from nine hospitals in Belgium and compared with the modelled estimates. RESULTS: The model predicted a smaller decrease in hospitalisations over time, mainly explained by two factors. First, the observed data indicated indirect vaccine protection in children too old or too young for vaccination. This herd effect is difficult to capture in static Markov cohort models and therefore was not included in the model. Second, the model included a 'waning' effect, i.e. reduced vaccine effectiveness over time. The observed data suggested this waning effect did not occur during that period, and so the model systematically underestimated vaccine effectiveness during the first 4 years after vaccine implementation. CONCLUSIONS: Model predictions underestimated the direct medical economic value of rotavirus vaccination during the first 4 years of vaccination by approximately 10% when assessing

  18. THE FLAT TAX - A COMPARATIVE STUDY OF THE EXISTING MODELS

    Directory of Open Access Journals (Sweden)

    Schiau (Macavei Laura - Liana

    2011-07-01

    Full Text Available In the last two decades flat tax systems have spread around the globe, from Eastern and Central Europe to Asia and Central America. Many specialists consider this phenomenon a real fiscal revolution, but others see it as a mistake, as long as the new systems are just a feint of the true flat tax designed by the famous Stanford University professors Robert Hall and Alvin Rabushka. In this context this paper tries to determine which of the existing flat tax systems resemble the true flat tax model by comparing and contrasting their main characteristics with the features of the model proposed by Hall and Rabushka. The research also underlines the common features and the differences between the existing models. The idea of this kind of study is not really new; others have done it, but the comparison was limited to one country. For example, Emil Kalchev from New Bulgarian University has assessed the Bulgarian income system by comparing it with the flat tax, concluding that taxation in Bulgaria is not simple, neutral and non-distortive. Our research is based on several case studies and on compare-and-contrast qualitative and quantitative methods. The study starts from the fiscal design drawn by the two American professors in the book The Flat Tax. Four main characteristics of the flat tax system were chosen in order to build the comparison: fiscal design, simplicity, avoidance of double taxation and uniformity of the tax rates. The jurisdictions chosen for the case study are countries around the globe with fiscal systems which are considered flat tax systems. The results obtained show that the fiscal design of Hong Kong is the only flat tax model built on economic logic rather than legal convention, while also being a simple and transparent system. Other countries, such as Slovakia, Albania and Macedonia in Central and Eastern Europe, fulfil the requirement regarding the uniformity of taxation. Other jurisdictions avoid the double

  19. Comparing Numerical Spall Simulations with a Nonlinear Spall Formation Model

    Science.gov (United States)

    Ong, L.; Melosh, H. J.

    2012-12-01

    Spallation accelerates lightly shocked ejecta fragments to speeds that can exceed the escape velocity of the parent body. We present high-resolution simulations of nonlinear shock interactions in the near surface. Initial results show the acceleration of near-surface material to velocities up to 1.8 times greater than the peak particle velocity in the detached shock, while experiencing little to no shock pressure. These simulations suggest a possible nonlinear spallation mechanism to produce the high-velocity, low-shock-pressure meteorites from other planets. Here we present the numerical simulations that test the production of spall through nonlinear shock interactions in the near surface, and compare the results with a model proposed by Kamegai (1986 Lawrence Livermore National Laboratory Report). We simulate near-surface shock interactions using the SALES_2 hydrocode and the Murnaghan equation of state. We model the shock interactions in two geometries: rectangular and spherical. In the rectangular case, we model a planar shock approaching the surface at a constant angle phi. In the spherical case, the shock originates at a point below the surface of the domain and radiates spherically from that point. The angle of the shock front with the surface depends on the radial distance of the surface point from the shock origin. We model the target as a solid with a nonlinear Murnaghan equation of state. This idealized equation of state supports nonlinear shocks but is temperature independent. We track the maximum pressure and maximum velocity attained in every cell in our simulations and compare them to the Hugoniot equations that describe the material conditions in front of and behind the shock. Our simulations demonstrate that nonlinear shock interactions in the near surface produce lightly shocked high-velocity material for both planar and cylindrical shocks. The spall is the result of the free-surface boundary condition, which forces a pressure gradient

  20. Effects of stimulus order on discrimination processes in comparative and equality judgements: data and models.

    Science.gov (United States)

    Dyjas, Oliver; Ulrich, Rolf

    2014-01-01

    In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.
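    The Internal Reference Model's account of the Type B effect can be reproduced with a small Monte Carlo simulation. The sketch below is a toy version of that model, not the authors' implementation: the stimulus magnitudes, noise level and update weight g are arbitrary choices made for illustration.

```python
import random

# Toy Internal Reference Model: the internal reference I is updated
# from the first stimulus of each trial, I <- g*I + (1-g)*m1, and the
# judgement compares the second stimulus measurement m2 against I.

def simulate(order, trials=20000, g=0.8, sigma=10.0, seed=1):
    """Proportion of correct comparative judgements.
    order='SC': standard first; order='CS': comparison first."""
    rng = random.Random(seed)
    standard = 100.0
    comparisons = [80.0, 90.0, 110.0, 120.0]
    I = standard
    correct = 0
    for _ in range(trials):
        comp = rng.choice(comparisons)
        first, second = (standard, comp) if order == 'SC' else (comp, standard)
        m1 = first + rng.gauss(0.0, sigma)
        m2 = second + rng.gauss(0.0, sigma)
        I = g*I + (1 - g)*m1
        says_second_larger = (m2 - I) > 0
        truly_second_larger = second > first
        if says_second_larger == truly_second_larger:
            correct += 1
    return correct / trials

p_sc = simulate('SC')   # standard precedes comparison
p_cs = simulate('CS')   # comparison precedes standard
```

    With these settings the standard-first order yields clearly higher accuracy than the comparison-first order, which is the Type B effect described above: when the comparison comes first, the reference mixes measurements of different comparison magnitudes and the decision variable carries only a fraction of the signal.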

  1. Thermodynamic Molecular Switch in Sequence-Specific Hydrophobic Interaction: Two Computational Models Compared

    Directory of Open Access Journals (Sweden)

    Paul Chun

    2003-01-01

    Full Text Available We have shown in our published work the existence of a thermodynamic switch in biological systems wherein a change of sign in ΔCp°(T) of reaction leads to a true negative minimum in the Gibbs free energy change of reaction and, hence, a maximum in the related Keq. We have examined 35 pair-wise, sequence-specific hydrophobic interactions over the temperature range of 273–333 K, based on data reported by Nemethy and Scheraga in 1962. A closer look at a single example, the pair-wise hydrophobic interaction of leucine-isoleucine, demonstrates the significant differences when the data are analyzed using the Nemethy-Scheraga model or treated by the Planck-Benzinger methodology which we have developed. The change in inherent chemical bond energy at 0 K, ΔH°(T0), is 7.53 kcal mol-1 compared with 2.4 kcal mol-1, while ⟨Ts⟩ is 365 K as compared with 355 K, for the Nemethy-Scheraga and Planck-Benzinger models, respectively. At ⟨Tm⟩, the thermal agitation energy is about five times greater than ΔH°(T0) in the Planck-Benzinger model; ⟨Tm⟩ is 465 K compared to 497 K in the Nemethy-Scheraga model. The results imply that the negative Gibbs free energy minimum at a well-defined ⟨Ts⟩, where TΔS° = 0 at about 355 K, has its origin in the sequence-specific hydrophobic interactions, which are highly dependent on details of molecular structure. The Nemethy-Scheraga model shows no evidence of the thermodynamic molecular switch that we have found to be a universal feature of biological interactions. The Planck-Benzinger method is the best known for evaluating the innate temperature-invariant enthalpy, ΔH°(T0), and provides a better understanding of the heat of reaction for biological molecules.
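    The connection asserted above between a sign change in ΔCp°(T) and a true minimum in ΔG°(T) follows from standard thermodynamic identities; a brief sketch (generic thermodynamics, not specific to either model):

```latex
% Stationarity of the Gibbs energy change:
\frac{\mathrm{d}\,\Delta G^{\circ}(T)}{\mathrm{d}T} = -\Delta S^{\circ}(T)
\;\;\Longrightarrow\;\;
\Delta G^{\circ}(T)\ \text{is stationary at}\ T=\langle T_s\rangle
\ \text{where}\ \Delta S^{\circ}(\langle T_s\rangle)=0 .

% Curvature at the stationary point:
\frac{\mathrm{d}^{2}\,\Delta G^{\circ}(T)}{\mathrm{d}T^{2}}
  = -\frac{\mathrm{d}\,\Delta S^{\circ}}{\mathrm{d}T}
  = -\frac{\Delta C_p^{\circ}(T)}{T},
```

    so the stationary point at ⟨Ts⟩ is a true minimum of ΔG°(T) when ΔCp°(⟨Ts⟩) < 0, and via ΔG° = −RT ln Keq the minimum in ΔG° corresponds to the maximum in Keq that the abstract describes.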

  2. Comparative studies on constitutive models for cohesive interface cracks of quasi-brittle materials

    International Nuclear Information System (INIS)

    Shen Xinpu; Shen Guoxiao; Zhou Lin

    2005-01-01

    Concerning the modelling of the quasi-brittle fracture process zone at interface cracks in quasi-brittle materials and structures, this paper compares typical constitutive models for interface cracks. Numerical calculations of the constitutive behaviour of the selected models were carried out at the local level. With a view to simulating quasi-brittle fracture of concrete-like materials and structures, the qualitative comparison of the selected cohesive models focuses on: (1) the fundamental mode I and mode II behaviour of the selected models; and (2) the dilatancy properties of the selected models under mixed-mode fracture loading conditions. (authors)

  3. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
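The variance-based idea behind Sobol's method can be sketched in a few lines. The example below estimates first-order Sobol' indices with the standard two-matrix Monte Carlo estimator, applied to the Ishigami test function rather than the SAC-SMA model (which is far too heavy for a sketch); the sample size and function choice are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    """Standard Ishigami test function with known Sobol' indices."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))   # total output variance

S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # swap column i with samples from B
    fABi = ishigami(ABi)
    # Saltelli-style estimator of the first-order index S_i
    S1.append(np.mean(fB * (fABi - fA)) / var)

print([round(s, 2) for s in S1])         # analytic values are ~0.31, 0.44, 0.00
```

The same two-matrix scheme extends to total-order indices, which is why the method's cost grows as N(d + 2) model runs, the main practical burden noted for distributed-parameter hydrologic models.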

  4. A comparative study of turbulence models for dissolved air flotation flow analysis

    International Nuclear Information System (INIS)

    Park, Min A; Lee, Kyun Ho; Chung, Jae Dong; Seo, Seung Ho

    2015-01-01

    The dissolved air flotation (DAF) system is a water treatment process that removes contaminants by attaching micro-bubbles to them, causing them to float to the water surface. In the present study, the two-phase flow of an air-water mixture is simulated to investigate how the internal flow analysis of DAF systems changes when different turbulence models are used. Internal micro-bubble distribution, velocity, and computation time are compared between several turbulence models for a given DAF geometry and condition. As a result, it is observed that the standard κ-ε model, which has been frequently used in previous research, predicts somewhat different behavior than the other turbulence models.

  5. Comparing different CFD wind turbine modelling approaches with wind tunnel measurements

    International Nuclear Information System (INIS)

    Kalvig, Siri; Hjertager, Bjørn; Manger, Eirik

    2014-01-01

    The performance of a model wind turbine is simulated with three different CFD methods: actuator disk, actuator line and a fully resolved rotor. The simulations are compared with each other and with measurements from a wind tunnel experiment. The actuator disk is the least accurate and most cost-efficient, and the fully resolved rotor is the most accurate and least cost-efficient. The actuator line method is believed to lie in between the two ends of the scale. The fully resolved rotor produces superior wake velocity results compared to the actuator models. On average it also produces better results for the force predictions, although the actuator line method had a slightly better match for the design tip speed. The open source CFD tool box, OpenFOAM, was used for the actuator disk and actuator line calculations, whereas the market leading commercial CFD code, ANSYS/FLUENT, was used for the fully resolved rotor approach

  6. Predictability and interpretability of hybrid link-level crash frequency models for urban arterials compared to cluster-based and general negative binomial regression models.

    Science.gov (United States)

    Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S

    2018-03-01

    Machine learning (ML) techniques have higher prediction accuracy compared to conventional statistical methods for crash frequency modelling. However, their black-box nature limits the interpretability. The objective of this research is to combine both ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, the M5' model trees method (M5') is introduced and applied to classify the crash data and then calibrate a model for each homogenous class. The data for 1134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina were used to develop and validate the models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable for interpreting the role of different attributes on crash frequency compared to the other developed models.

  7. Comparative Performance and Model Agreement of Three Common Photovoltaic Array Configurations.

    Science.gov (United States)

    Boyd, Matthew T

    2018-02-01

    Three grid-connected monocrystalline silicon arrays on the National Institute of Standards and Technology (NIST) campus in Gaithersburg, MD have been instrumented and monitored for 1 yr, with only minimal gaps in the data sets. These arrays range from 73 kW to 271 kW, and all use the same module, but have different tilts, orientations, and configurations. One array is installed facing east and west over a parking lot, one in an open field, and one on a flat roof. Various measured relationships and calculated standard metrics have been used to compare the relative performance of these arrays in their different configurations. Comprehensive performance models have also been created in the modeling software PVsyst for each array, and their predictions using measured on-site weather data are compared to the arrays' measured outputs. The comparisons show that all three arrays typically have monthly performance ratios (PRs) above 0.75, but differ significantly in their relative output, strongly correlating to their operating temperature and to a lesser extent their orientation. The model predictions are within 5% of the monthly delivered energy values except during the winter months, when there was intermittent snow on the arrays, and during maintenance and other outages.
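The monthly performance ratio used above is a simple normalization: delivered AC energy per installed kW, divided by plane-of-array insolation expressed in equivalent full-irradiance hours (the IEC 61724-style definition). A minimal sketch with hypothetical monthly numbers, not NIST data:

```python
def performance_ratio(e_ac_kwh, p_stc_kw, h_poa_kwh_m2, g_stc_kw_m2=1.0):
    """PR = final yield / reference yield."""
    final_yield = e_ac_kwh / p_stc_kw              # kWh delivered per kW installed
    reference_yield = h_poa_kwh_m2 / g_stc_kw_m2   # equivalent full-sun hours
    return final_yield / reference_yield

# e.g. a 271 kW array delivering 30 MWh in a month with 140 kWh/m^2 in-plane insolation
pr = performance_ratio(e_ac_kwh=30_000.0, p_stc_kw=271.0, h_poa_kwh_m2=140.0)
print(f"PR = {pr:.2f}")   # PR = 0.79
```

Because the reference yield is computed from in-plane irradiance, PR mostly isolates system losses (temperature, soiling, inverter) from resource differences, which is why it is used to compare arrays with different tilts and orientations.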

  8. A Comparative Assessment of Aerodynamic Models for Buffeting and Flutter of Long-Span Bridges

    Directory of Open Access Journals (Sweden)

    Igor Kavrakov

    2017-12-01

    Wind-induced vibrations commonly represent the leading criterion in the design of long-span bridges. The aerodynamic forces in bridge aerodynamics are mainly based on the quasi-steady and linear unsteady theory. This paper aims to investigate different formulations of self-excited and buffeting forces in the time domain by comparing the dynamic response of a multi-span cable-stayed bridge during the critical erection condition. The bridge is selected to represent a typical reference object with a bluff concrete box girder for large river crossings. The models are viewed from a perspective of model complexity, comparing the influence of the aerodynamic properties implied in the aerodynamic models, such as aerodynamic damping and stiffness, fluid memory in the buffeting and self-excited forces, aerodynamic nonlinearity, and aerodynamic coupling on the bridge response. The selected models are studied for a wind-speed range that is typical for the construction stage for two levels of turbulence intensity. Furthermore, a simplified method for the computation of buffeting forces including the aerodynamic admittance is presented, in which rational approximation is avoided. The critical flutter velocities are also compared for the selected models under laminar flow. Keywords: Buffeting, Flutter, Long-span bridges, Bridge aerodynamics, Bridge aeroelasticity, Erection stage

  9. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    Science.gov (United States)

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  10. A simplified MHD model of capillary Z-Pinch compared with experiments

    Energy Technology Data Exchange (ETDEWEB)

    Shapolov, A.A.; Kiss, M.; Kukhlevsky, S.V. [Institute of Physics, University of Pecs (Hungary)

    2016-11-15

    The most accurate models of the capillary Z-pinches used for excitation of soft X-ray lasers and photolithography XUV sources are currently based on magnetohydrodynamics (MHD) theory. The output of MHD-based models greatly depends on details of the mathematical description, such as initial and boundary conditions, approximations of plasma parameters, etc. Small experimental groups who develop soft X-ray/XUV sources often use the simplest Z-pinch models for analysis of their experimental results, even though these models are inconsistent with the MHD equations. In the present study, keeping only the essential terms in the MHD equations, we obtained a simplified MHD model of a cylindrically symmetric capillary Z-pinch. The model gives accurate results compared to experiments with argon plasmas, and provides simple analysis of the temporal evolution of the main plasma parameters. The results clarify the influence of viscosity, heat flux and approximations of plasma conductivity on the dynamics of capillary Z-pinch plasmas. The model can be useful for researchers, especially experimentalists, who develop soft X-ray/XUV sources. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  11. Comparing models of rapidly rotating relativistic stars constructed by two numerical methods

    Science.gov (United States)

    Stergioulas, Nikolaos; Friedman, John L.

    1995-05-01

    We present the first direct comparison of codes based on two different numerical methods for constructing rapidly rotating relativistic stars. A code based on the Komatsu-Eriguchi-Hachisu (KEH) method (Komatsu et al. 1989), written by Stergioulas, is compared to the Butterworth-Ipser code (BI), as modified by Friedman, Ipser, & Parker. We compare models obtained by each method and evaluate the accuracy and efficiency of the two codes. The agreement is surprisingly good, and error bars in the published numbers for maximum frequencies based on BI are dominated not by the code inaccuracy but by the number of models used to approximate a continuous sequence of stars. The BI code is faster per iteration, and it converges more rapidly at low density, while KEH converges more rapidly at high density; KEH also converges in regions where BI does not, allowing one to compute some models unstable against collapse that are inaccessible to the BI code. A relatively large discrepancy recently reported (Eriguchi et al. 1994) for models based on Friedman-Pandharipande equation of state is found to arise from the use of two different versions of the equation of state. For two representative equations of state, the two-dimensional space of equilibrium configurations is displayed as a surface in a three-dimensional space of angular momentum, mass, and central density. We find, for a given equation of state, that equilibrium models with maximum values of mass, baryon mass, and angular momentum are (generically) either all unstable to collapse or are all stable. In the first case, the stable model with maximum angular velocity is also the model with maximum mass, baryon mass, and angular momentum. In the second case, the stable models with maximum values of these quantities are all distinct. Our implementation of the KEH method will be available as a public domain program for interested users.

  12. Comparative study: TQ and Lean Production ownership models in health services.

    Science.gov (United States)

    Eiro, Natalia Yuri; Torres-Junior, Alvair Silveira

    2015-01-01

    The objective was to compare the application of Total Quality (TQ) models used in the processes of a health service, cases of lean healthcare, and literature from another institution that has also applied this model. This qualitative research was conducted through a descriptive case study. Critical analysis of the institutions studied made it possible to compare the traditional quality approach verified in one case with the theoretical and practical lean production approach used in the other. The research identified that the lean model was better suited to people who work systemically and generate flow. It also pointed to some potential challenges in the introduction and implementation of lean methods in health.

  13. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
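The conversion idea can be illustrated with a toy regression. The sketch below fabricates paired ΣPCB values in which the 119-congener sum captures roughly 93% of the 209-congener total (as in the example data set) and fits a log-log linear conversion model; all numbers are synthetic assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic paired samples: lognormal "true" Σ209PCB totals, and a partial
# 119-congener sum capturing ~93% of the total with ~5% measurement scatter.
sigma209 = rng.lognormal(mean=3.0, sigma=1.0, size=50)
sigma119 = 0.93 * sigma209 * rng.normal(1.0, 0.05, size=50)

# Fit the conversion model on log-transformed concentrations, as is common
# for lognormally distributed contaminant data.
slope, intercept = np.polyfit(np.log(sigma119), np.log(sigma209), 1)
predicted209 = np.exp(intercept + slope * np.log(sigma119))

# Relative percent difference between model-converted and "measured" values
rpd = 100 * np.abs(predicted209 - sigma209) / ((predicted209 + sigma209) / 2)
print(f"slope = {slope:.2f}, mean RPD = {rpd.mean():.1f}%")
```

Once calibrated on samples analyzed by both methods, the fitted model converts legacy ΣPCB values onto the scale of the newer analysis, which is the mechanism that enables the long-term trend comparisons described above.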

  14. Comparative analysis of diffused solar radiation models for optimum tilt angle determination for Indian locations

    International Nuclear Information System (INIS)

    Yadav, P.; Chandel, S.S.

    2014-01-01

    Tilt angle and orientation greatly influence the performance of solar photovoltaic panels, and the tilt angle is one of the important parameters for the optimum sizing of solar photovoltaic systems. This paper analyses six different isotropic and anisotropic diffuse solar radiation models for optimum tilt angle determination. The predicted optimum tilt angles are compared with experimentally measured values for the summer season under outdoor conditions. The Liu and Jordan model is found to exhibit the lowest error compared with the other models for the location. (author)
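To illustrate how an isotropic diffuse model feeds into tilt-angle optimization, the sketch below applies the Liu-Jordan isotropic-sky transposition at solar noon for a south-facing panel and scans candidate tilts. The latitude, summer declination, and irradiance components are hypothetical values, not the paper's measurements.

```python
import numpy as np

lat, decl = np.radians(28.0), np.radians(15.0)    # site latitude, summer declination
dni, dhi, ghi, albedo = 700.0, 150.0, 650.0, 0.2  # irradiances in W/m^2, ground reflectance

beta = np.radians(np.arange(0, 91))               # candidate tilt angles (south-facing)
incidence = lat - decl - beta                     # sun-panel incidence angle at solar noon
beam = dni * np.clip(np.cos(incidence), 0.0, None)
sky_diffuse = dhi * (1 + np.cos(beta)) / 2        # Liu-Jordan isotropic sky-diffuse term
ground = ghi * albedo * (1 - np.cos(beta)) / 2    # ground-reflected component
total = beam + sky_diffuse + ground

best_tilt = float(np.degrees(beta[np.argmax(total)]))
print(f"optimum tilt at noon: {best_tilt:.0f} deg")
```

Anisotropic models (e.g. Hay-Davies or Perez) replace only the sky-diffuse term with circumsolar and horizon-brightening components, so the same tilt scan applies with a different `sky_diffuse` expression.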

  15. THE PROPAGATION OF UNCERTAINTIES IN STELLAR POPULATION SYNTHESIS MODELING. II. THE CHALLENGE OF COMPARING GALAXY EVOLUTION MODELS TO OBSERVATIONS

    International Nuclear Information System (INIS)

    Conroy, Charlie; Gunn, James E.; White, Martin

    2010-01-01

    Models for the formation and evolution of galaxies readily predict physical properties such as star formation rates, metal-enrichment histories, and, increasingly, gas and dust content of synthetic galaxies. Such predictions are frequently compared to the spectral energy distributions of observed galaxies via the stellar population synthesis (SPS) technique. Substantial uncertainties in SPS exist, and yet their relevance to the task of comparing galaxy evolution models to observations has received little attention. In the present work, we begin to address this issue by investigating the importance of uncertainties in stellar evolution, the initial stellar mass function (IMF), and dust and interstellar medium (ISM) properties on the translation from models to observations. We demonstrate that these uncertainties translate into substantial uncertainties in the ultraviolet, optical, and near-infrared colors of synthetic galaxies. Aspects that carry significant uncertainties include the logarithmic slope of the IMF above 1 M⊙, dust attenuation law, molecular cloud disruption timescale, clumpiness of the ISM, fraction of unobscured starlight, and treatment of advanced stages of stellar evolution including blue stragglers, the horizontal branch, and the thermally pulsating asymptotic giant branch. The interpretation of the resulting uncertainties in the derived colors is highly non-trivial because many of the uncertainties are likely systematic, and possibly correlated with the physical properties of galaxies. We therefore urge caution when comparing models to observations.

  16. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in field study. These prediction models were....../s, the expectancy factors for the extended PMV model and the extended SET model were from 0.770 to 0.974 and from 1.330 to 1.363, and the adaptive coefficients for the adaptive PMV model and the adaptive SET model were from 0.029 to 0.167 and from-0.213 to-0.195. In addition, the difference in thermal sensation...... between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...

  17. EFEM vs. XFEM: a comparative study for modeling strong discontinuity in geomechanics

    OpenAIRE

    Das, Kamal C.; Ausas, Roberto Federico; Segura Segarra, José María; Narang, Ankur; Rodrigues, Eduardo; Carol, Ignacio; Lakshmikantha, Ramasesha Mookanahallipatna; Mello, U.

    2015-01-01

    Modelling big faults or weak planes as strong or weak discontinuities is of major importance for assessing the geomechanical behaviour of mining/civil tunnels, reservoirs, etc. For modelling fractures in geomechanics, prior art has been limited to interface elements, which suffer from numerical instability and require faults to be aligned with element edges. In this paper, we present a comparative study of finite elements for capturing strong discontinuities by means of elemental (EFEM)...

  18. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific...... problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  19. Comparing soil moisture memory in satellite observations and models

    Science.gov (United States)

    Stacke, Tobias; Hagemann, Stefan; Loew, Alexander

    2013-04-01

    A major obstacle to a correct parametrization of soil processes in large scale global land surface models is the lack of long term soil moisture observations for large parts of the globe. Currently, a compilation of soil moisture data derived from a range of satellites is released by the ESA Climate Change Initiative (ECV_SM). Comprising the period from 1978 until 2010, it provides the opportunity to compute climatologically relevant statistics on a quasi-global scale and to compare these to the output of climate models. Our study is focused on the investigation of soil moisture memory in satellite observations and models. As a proxy for memory we compute the autocorrelation length (ACL) of the available satellite data and the uppermost soil layer of the models. In addition to the ECV_SM data, AMSR-E soil moisture is used as an observational estimate. Simulated soil moisture fields are taken from ERA-Interim reanalysis and generated with the land surface model JSBACH, which was driven with quasi-observational meteorological forcing data. The satellite data show ACLs between one week and one month for the greater part of the land surface, while the models simulate a longer memory of up to two months. Some patterns are similar in models and observations, e.g. a longer memory in the Sahel Zone and the Arabian Peninsula, but the models are not able to reproduce regions with a very short ACL of just a few days. If the long-term seasonality is subtracted from the data, the memory is strongly shortened, indicating the importance of seasonal variations for the memory in most regions. Furthermore, we analyze the change of soil moisture memory in the different soil layers of the models to investigate to which extent the surface soil moisture includes information about the whole soil column. A first analysis reveals that the ACL is increasing for deeper layers. However, its increase is stronger in the soil moisture anomaly than in its absolute values and the first even exceeds the
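The autocorrelation-length proxy for soil moisture memory, and the effect of removing the seasonal cycle, can be sketched on synthetic data. Here the ACL is taken as the lag at which the autocorrelation first drops below 1/e; the seasonal amplitude and the ~10-day AR(1) weather memory are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def acl_days(series):
    """Lag (in days) at which the autocorrelation first drops below 1/e."""
    x = series - series.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    below = np.nonzero(acf < 1 / np.e)[0]
    return int(below[0]) if below.size else len(x)

# Synthetic daily soil moisture: seasonal cycle plus AR(1) "weather" noise
# with a ~10-day decorrelation time. Purely illustrative.
days = np.arange(3 * 365)
seasonal = 0.3 * np.sin(2 * np.pi * days / 365.25)
noise = np.zeros(days.size)
phi = np.exp(-1 / 10)                       # AR(1) coefficient for ~10-day memory
for t in range(1, days.size):
    noise[t] = phi * noise[t - 1] + rng.normal(0, 0.02)
sm = 0.25 + seasonal + noise

print(acl_days(sm), acl_days(sm - seasonal))  # seasonality inflates the apparent memory
```

The second value is much shorter than the first, which mirrors the point made above: a large part of the raw ACL reflects the deterministic seasonal cycle rather than genuine soil moisture persistence.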

  20. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing......-norms. We discuss the applicability of the technique used for proving the hardness of comparing two hidden Markov models under the L1-norm to other measures of distance between probability distributions. In particular, we show that it cannot be used for proving NP-hardness of determining the Kullback...
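The difficulty of comparing HMMs under the L1-norm can be seen directly from the brute-force computation: the L1 distance between the distributions two HMMs induce on length-L sequences requires summing over |Σ|^L sequences, each scored with the forward algorithm. A small sketch with two hypothetical two-state HMMs (all parameter values invented for illustration):

```python
import itertools

import numpy as np

def seq_prob(pi, A, B, obs):
    """Forward algorithm: probability that the HMM (pi, A, B) emits `obs`."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
    return alpha.sum()

# Two hypothetical 2-state, 2-symbol HMMs: one persistent, one near-iid.
pi1 = np.array([1.0, 0.0]); A1 = np.array([[0.9, 0.1], [0.2, 0.8]])
B1 = np.array([[0.7, 0.3], [0.4, 0.6]])
pi2 = np.array([0.5, 0.5]); A2 = np.array([[0.5, 0.5], [0.5, 0.5]])
B2 = np.array([[0.6, 0.4], [0.5, 0.5]])

L = 8   # enumerating all 2**L sequences -- exponential in L
l1 = sum(abs(seq_prob(pi1, A1, B1, obs) - seq_prob(pi2, A2, B2, obs))
         for obs in itertools.product([0, 1], repeat=L))
print(f"L1 distance over length-{L} sequences: {l1:.3f}")
```

Each forward pass is cheap (polynomial in L and the state count), but the enumeration over sequences is exponential, which is the computational crux behind the hardness results the paper discusses.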

  1. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    Science.gov (United States)

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  2. Comparative assessment of PV plant performance models considering climate effects

    DEFF Research Database (Denmark)

    Tina, Giuseppe; Ventura, Cristina; Sera, Dezso

    2017-01-01

    The methodological approach is based on comparative tests of the analyzed models applied to two PV plants, installed respectively in the north of Denmark (Aalborg) and in the south of Italy (Agrigento). The different ambient, operating and installation conditions make it possible to understand how these factors impact the precision...... the performance of the studied PV plants with others, the efficiency of the systems has been estimated by both the conventional Performance Ratio and the Corrected Performance Ratio

  3. Feedforward Object-Vision Models Only Tolerate Small Image Variations Compared to Human

    Directory of Open Access Journals (Sweden)

    Masoud Ghodrati

    2014-07-01

    Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modelling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well when images with more complex variations of the same object are applied to them. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only for low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modelling. We show that this approach is not of significant help in solving the computational crux of object recognition, that is, invariant object recognition when the identity-preserving image variations become more complex.

  4. Model predictions of metal speciation in freshwaters compared to measurements by in situ techniques.

    NARCIS (Netherlands)

    Unsworth, Emily R; Warnken, Kent W; Zhang, Hao; Davison, William; Black, Frank; Buffle, Jacques; Cao, Jun; Cleven, Rob; Galceran, Josep; Gunkel, Peggy; Kalis, Erwin; Kistler, David; Leeuwen, Herman P van; Martin, Michel; Noël, Stéphane; Nur, Yusuf; Odzak, Niksa; Puy, Jaume; Riemsdijk, Willem van; Sigg, Laura; Temminghoff, Erwin; Tercier-Waeber, Mary-Lou; Toepperwien, Stefanie; Town, Raewyn M; Weng, Liping; Xue, Hanbin

    2006-01-01

    Measurements of trace metal species in situ in a softwater river, a hardwater lake, and a hardwater stream were compared to the equilibrium distribution of species calculated using two models, WHAM 6, incorporating humic ion binding model VI and visual MINTEQ incorporating NICA-Donnan. Diffusive

  5. Comparative nonlinear modeling of renal autoregulation in rats: Volterra approach versus artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Holstein-Rathlou, N H; Marsh, D J

    1998-01-01

    kernel estimation method based on Laguerre expansions. The results for the two types of artificial neural networks and the Volterra models are comparable in terms of normalized mean square error (NMSE) of the respective output prediction for independent testing data. However, the Volterra models obtained...

  6. COMPARING THE UTILITY OF MULTIMEDIA MODELS FOR HUMAN AND ECOLOGICAL EXPOSURE ANALYSIS: TWO CASES

    Science.gov (United States)

    A number of models are available for exposure assessment; however, few are used as tools for both human and ecosystem risks. This discussion will consider two modeling frameworks that have recently been used to support human and ecological decision making. The study will compare ...

  7. Comparative systems biology between human and animal models based on next-generation sequencing methods.

    Science.gov (United States)

    Zhao, Yu-Qi; Li, Gong-Hua; Huang, Jing-Fei

    2013-04-01

    Animal models provide myriad benefits to both experimental and clinical research. Unfortunately, in many situations, they fall short of expected results or provide contradictory results. In part, this can be the result of traditional molecular biological approaches that are relatively inefficient in elucidating underlying molecular mechanisms. To improve the efficacy of animal models, a technological breakthrough is required. The growing availability and application of high-throughput methods make systematic comparisons between human and animal models easier to perform. In the present study, we introduce the concept of comparative systems biology, which we define as "comparisons of biological systems in different states or species used to achieve an integrated understanding of life forms with all their characteristic complexity of interactions at multiple levels". Furthermore, we discuss the applications of RNA-seq and ChIP-seq technologies to comparative systems biology between human and animal models and assess the potential applications of this approach in future studies.

  8. A Field Guide to Extra-Tropical Cyclones: Comparing Models to Observations

    Science.gov (United States)

    Bauer, M.

    2008-12-01

    Climate, it is said, is the accumulation of weather. And weather is not the concern of climate models. Justification for this latter sentiment has long hidden behind coarse model resolutions and blunt validation tools based on climatological maps and the like. The spatial-temporal resolutions of today's models and observations are converging onto meteorological scales, however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough, or at least lacks perverting biases, such that its accumulation does in fact result in a robust climate prediction. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from climate model output. These include the usual cyclone distribution statistics (maps, histograms), but also adaptive cyclone-centric composites. We have also created a complementary dataset, The MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6 hourly assessment of the areas under the influence of mid-latitude cyclones based on Reanalysis products. Using this we then extract complementary composites from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. A demonstration of the potential usefulness of these tools will be shown. dime.giss.nasa.gov/mcms/mcms.html

  9. Bayesian meta-analysis models for microarray data: a comparative study

    Directory of Open Access Journals (Sweden)

    Song Joon J

    2007-03-01

    Full Text Available Abstract Background With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches for meta-analysis of microarrays include either combining gene expression measures across studies or combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods. Results Two Bayesian meta-analysis models for microarray data have recently been introduced. The first model combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Since the standardized expression integration model includes inter-study variability, it may improve accuracy of results versus the probability integration model. However, due to the small number of studies typical in microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model variability between studies, and thus its implementation is more straightforward. We found in simulations of two and five studies that combining probabilities outperformed combining standardized gene expression measures for three comparison values: the percent of true discovered genes in meta-analysis versus individual studies; the percent of true genes omitted in meta-analysis versus separate studies; and the number of true discovered genes for fixed levels of Bayesian false discovery. We found similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was produced from the same microarray platform with only two conditions: a treatment and control, and that the data sets
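    The "fixed levels of Bayesian false discovery" used as a comparison value can be computed directly from the gene-specific posterior probabilities that both models produce. A minimal sketch of that selection rule (the function name and probabilities below are illustrative assumptions, not taken from the paper):

```python
def bayesian_fdr_selection(posterior_probs, alpha=0.05):
    """Select genes so that the estimated Bayesian FDR stays below alpha.

    posterior_probs: gene-specific posterior probabilities of differential
    expression (the basis for inference in both meta-analysis models).
    The Bayesian FDR of a selected set is the average of (1 - p) over it.
    """
    # Rank genes from most to least likely differentially expressed.
    ranked = sorted(enumerate(posterior_probs), key=lambda kv: kv[1], reverse=True)
    selected, cum_error = [], 0.0
    for idx, p in ranked:
        cum_error += 1.0 - p
        # Stop once adding the next gene would push the mean error above alpha.
        if cum_error / (len(selected) + 1) > alpha:
            break
        selected.append(idx)
    return selected
```

    With this rule, raising alpha admits more genes at the cost of a higher expected proportion of false discoveries, which is exactly the trade-off the simulation comparison measures.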

  10. Static response of deformable microchannels: a comparative modelling study

    Science.gov (United States)

    Shidhore, Tanmay C.; Christov, Ivan C.

    2018-02-01

    We present a comparative modelling study of fluid-structure interactions in microchannels. Through a mathematical analysis based on plate theory and the lubrication approximation for low-Reynolds-number flow, we derive models for the flow rate-pressure drop relation for long shallow microchannels with both thin and thick deformable top walls. These relations are tested against full three-dimensional two-way-coupled fluid-structure interaction simulations. Three types of microchannels, representing different elasticity regimes and having been experimentally characterized previously, are chosen as benchmarks for our theory and simulations. Good agreement is found in most cases for the predicted, simulated and measured flow rate-pressure drop relationships. The numerical simulations performed allow us to also carefully examine the deformation profile of the top wall of the microchannel in any cross section, showing good agreement with the theory. Specifically, the prediction that span-wise displacement in a long shallow microchannel decouples from the flow-wise deformation is confirmed, and the predicted scaling of the maximum displacement with the hydrodynamic pressure and the various material and geometric parameters is validated.
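    The nonlinearity of such flow rate-pressure drop relations comes from the local channel height growing with the local pressure. A toy sketch of this coupling under an assumed linear compliance law h(p) = h0(1 + βp) (this law and all numerical values are illustrative assumptions, not the paper's plate-theory model):

```python
def flow_rate(delta_p, h0, w, L, mu, beta):
    """Flow rate q through a long shallow channel whose height grows
    linearly with local pressure, h(p) = h0 * (1 + beta * p).

    Integrating the lubrication relation q = -w h^3 / (12 mu) dp/dz along
    the channel gives a closed form; beta -> 0 recovers the rigid result
    q = w h0^3 delta_p / (12 mu L).
    """
    if beta == 0.0:
        return w * h0**3 * delta_p / (12.0 * mu * L)
    return w * h0**3 * ((1.0 + beta * delta_p)**4 - 1.0) / (48.0 * mu * L * beta)

# A deformable top wall carries more flow at the same pressure drop
# (SI units; all parameter values are made up for illustration):
rigid = flow_rate(1000.0, 1e-4, 1e-3, 1e-2, 1e-3, 0.0)
soft = flow_rate(1000.0, 1e-4, 1e-3, 1e-2, 1e-3, 1e-4)
```

    The quartic dependence on (1 + βΔp) is the signature of two-way coupling: deformation opens the channel most where the pressure is highest, near the inlet.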

  11. Comparative Effectiveness of Echoic and Modeling Procedures in Language Instruction With Culturally Disadvantaged Children.

    Science.gov (United States)

    Stern, Carolyn; Keislar, Evan

    In an attempt to explore a systematic approach to language expansion and improved sentence structure, echoic and modeling procedures for language instruction were compared. Four hypotheses were formulated: (1) children who use modeling procedures will produce better structured sentences than children who use echoic prompting, (2) both echoic and…

  12. Methodology and results of the impacts of modeling electric utilities: a comparative evaluation of MEMM and REM

    International Nuclear Information System (INIS)

    1981-09-01

    This study compares two models of the US electric utility industry: the EIA's electric utility submodel in the Midterm Energy Market Model (MEMM) and the Baughman-Joskow Regionalized Electricity Model (REM). The method of comparison emphasizes reconciliation of differences in data common to both models, and the performance of simulation experiments to evaluate the empirical significance of certain structural differences between the models. The major research goal was to contrast and compare the effects of alternative modeling structures and data assumptions on model results, and particularly to consider each model's approach to the impacts of generation-technology and fuel-use choices on electric utilities. The methodology was to run the REM model first without, and then with, a representation of the Power Plant and Industrial Fuel Use Act of 1978, assuming medium supply and demand curves and varying fuel prices. The models and data structures of the two models are described. The original 1978 data used in MEMM and REM are analyzed and compared. The computations and the effects of different assumptions on fuel-use decisions are discussed. The adjusted REM data required for the experiments are presented. Simulation results of the two models are compared. These results represent projections for 1985, 1990, and 1995 of: US power generation by plant type; amounts of each type of fuel used for power generation; average electricity prices; and the effects of additional or fewer nuclear and coal-fired plants. A significant result is that the REM model exhibits about 7 times as much gas and oil consumption in 1995 as the MEMM model. Continuing simulation experiments on MEMM are recommended to determine whether the input data to MEMM are reasonable and properly adjusted

  13. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction

    International Nuclear Information System (INIS)

    Wennberg, Berit M.; Baumann, Pia; Gagliardi, Giovanna

    2011-01-01

    Background. In SBRT of lung tumours no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see whether this improves knowledge of this relationship. Material and methods. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters were: α/β = 3 Gy, D0 = 1.0 Gy, n = 10, α = 0.206 Gy⁻¹ and dT = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This may give insight into the question of whether 'high doses to small volumes' or 'low doses to large volumes' are most important for lung toxicity. Results and Discussion. NTCP analysis with the LKB model using parameters m = 0.4, D50 = 30 Gy resulted in a volume-dependence parameter (n) of n = 0.87 with LQ correction and n = 0.71 with USC correction. Using parameters m = 0.3, D50 = 20 Gy gave n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when using the USC model. Comparing NTCP modelling of SBRT data with data from breast cancer, lung cancer and whole-lung irradiation implies that the response of the lung is treatment specific. More data are, however, needed for more reliable modelling
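    The two fractionation corrections follow from two cell-survival curves that share the LQ shape at low doses per fraction. A sketch using the parameter values quoted above (the exact USC variant, with a multi-target linear asymptote ln S = ln n − d/D0 beyond dT, is an assumption about the form used):

```python
import math

# Parameters from the abstract: alpha/beta = 3 Gy, D0 = 1.0 Gy, n = 10,
# alpha = 0.206 per Gy, transition dose d_T = 5.8 Gy.
ALPHA, AB_RATIO, D0, N_EXTRAP, D_T = 0.206, 3.0, 1.0, 10.0, 5.8
BETA = ALPHA / AB_RATIO

def ln_surv_lq(d):
    """Linear-quadratic log cell survival, applied at all doses in the LQ model."""
    return -(ALPHA * d + BETA * d * d)

def ln_surv_usc(d):
    """Universal survival curve: LQ below the transition dose d_T, then the
    multi-target linear asymptote ln S = ln(n) - d / D0 (assumed form)."""
    if d <= D_T:
        return ln_surv_lq(d)
    return math.log(N_EXTRAP) - d / D0
```

    With these numbers the two branches nearly coincide at d = dT (ln S ≈ −3.5), while at SBRT-sized fraction doses the USC predicts markedly less cell kill than LQ, which is why high doses weigh differently in the NTCP under the USC correction.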

  14. Comparing Video Modeling and Graduated Guidance Together and Video Modeling Alone for Teaching Role Playing Skills to Children with Autism

    Science.gov (United States)

    Akmanoglu, Nurgul; Yanardag, Mehmet; Batu, E. Sema

    2014-01-01

    Teaching play skills is important for children with autism. The purpose of the present study was to compare effectiveness and efficiency of providing video modeling and graduated guidance together and video modeling alone for teaching role playing skills to children with autism. The study was conducted with four students. The study was conducted…

  15. Comparing regional precipitation and temperature extremes in climate model and reanalysis products

    Directory of Open Access Journals (Sweden)

    Oliver Angélil

    2016-09-01

    Full Text Available A growing field of research aims to characterise the contribution of anthropogenic emissions to the likelihood of extreme weather and climate events. These analyses can be sensitive to the shapes of the tails of simulated distributions. If tails are found to be unrealistically short or long, the anthropogenic signal emerges more or less clearly, respectively, from the noise of possible weather. Here we compare the chance of daily land-surface precipitation and near-surface temperature extremes generated by three Atmospheric Global Climate Models typically used for event attribution, with distributions from six reanalysis products. The likelihoods of extremes are compared for area-averages over grid-cell and regional-sized spatial domains. Results suggest a bias favouring overly strong attribution estimates for hot and cold events over many regions of Africa and Australia, and a bias favouring overly weak attribution estimates over regions of North America and Asia. For rainfall, results are more sensitive to geographic location. Although the three models show similar results over many regions, they do disagree over others. Equally, results highlight the discrepancy among reanalysis products. This emphasises the importance of using multiple reanalysis and/or observation products, as well as multiple models, in event attribution studies.

  16. Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies

    Science.gov (United States)

    Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.

    2017-12-01

    Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative of various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model, and of the baryonic model plus one of the three dark matter models, with observations of the assembled galaxy database. When a dark matter component improves the fit to the spectroscopic rotation curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio (CMLR) relation, which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question of whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. 
According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark
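    The Akaike information criterion used for this ranking trades goodness of fit against the number of free parameters. A generic sketch of such a ranking for least-squares fits (the residuals and parameter counts below are invented for illustration, not the paper's values):

```python
import math

def aic(rss, n_points, k_params):
    """Akaike information criterion for a least-squares fit with Gaussian
    errors: AIC = 2k + n * ln(RSS / n). Lower is better."""
    return 2 * k_params + n_points * math.log(rss / n_points)

def rank_models(fits):
    """fits: {name: (rss, n_points, k_params)} -> model names sorted best-first."""
    return sorted(fits, key=lambda name: aic(*fits[name]))

# Toy example: a model with one extra parameter must improve the fit
# enough to overcome its AIC penalty of 2 per parameter.
fits = {
    "pseudo-isothermal": (10.0, 50, 3),
    "NFW":               (10.5, 50, 3),
    "Einasto":           (9.9, 50, 4),
}
```

    Here the Einasto fit has the smallest residuals but its extra parameter costs it the top rank, illustrating how AIC can prefer a slightly worse but more parsimonious model.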

  17. Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations.

    Science.gov (United States)

    Kooloos, Jan G M; Schepens-Franke, Annelieke N; Bergman, Esther M; Donders, Rogier A R T; Vorstenbosch, Marc A T M

    2014-01-01

    Clay modeling is increasingly used as a teaching method in addition to dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulations during exercises in the dissection room involving tissues and organs. We questioned this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observations (Experiment I) or video observations (Experiment II) of the clay-modeling exercise. The effects of learning were measured with multiple choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to analyse the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises show less gain in anatomical knowledge than students who attentively observe the same exercise being carried out, and (2) performing a clay-modeling exercise yields a greater gain in anatomical knowledge than studying a video of the recorded exercise. The most important learning effect seems to be the engagement in the exercise, which focuses attention and stimulates time on task. © 2014 American Association of Anatomists.

  18. Comparative Validation of Realtime Solar Wind Forecasting Using the UCSD Heliospheric Tomography Model

    Science.gov (United States)

    MacNeice, Peter; Taktakishvili, Alexandra; Jackson, Bernard; Clover, John; Bisi, Mario; Odstrcil, Dusan

    2011-01-01

    The University of California, San Diego 3D Heliospheric Tomography Model reconstructs the evolution of heliospheric structures, and can make forecasts of solar wind density and velocity up to 72 hours in the future. The latest model version, installed and running in realtime at the Community Coordinated Modeling Center (CCMC), analyzes scintillations of meter-wavelength radio point sources recorded by the Solar-Terrestrial Environment Laboratory (STELab) together with realtime measurements of solar wind speed and density recorded by the Advanced Composition Explorer (ACE) Solar Wind Electron Proton Alpha Monitor (SWEPAM). The solution is reconstructed using tomographic techniques and a simple kinematic wind model. Since installation, the CCMC has been recording the model forecasts and comparing them with ACE measurements, and with forecasts made using other heliospheric models hosted by the CCMC. We report the preliminary results of this validation work and comparison with alternative models.

  19. A comparative analysis of diffusion and transport models applying to releases in the marine environment

    International Nuclear Information System (INIS)

    Mejon, M.J.

    1984-05-01

    This study is a contribution to the development of methodologies for assessing the radiological impact of liquid effluent releases from nuclear power plants. It first concerns hydrodynamic models and their applications to the North Sea, which is of great interest to the European Community. Starting from the basic equations of geophysical fluid mechanics, the assumptions made at each step in order to simplify resolution are analysed and commented on. The published results on the application of the Liege University models (NIHOUL, RONDAY et al.) are compared to observations of both tides and storms and of the residual circulation, which is responsible for the long-term transport of pollutants. The results for residual circulation compare satisfactorily, and the expected accuracy of the other models is indicated. A dispersion model by the same authors is then studied, with a numerical integration method using a moving grid. Other models (Laboratoire National d'Hydraulique, EDF) used for the Channel are also presented

  20. A comparative study of the tail ion distribution with reduced Fokker-Planck models

    Science.gov (United States)

    McDevitt, C. J.; Tang, Xian-Zhu; Guo, Zehua; Berk, H. L.

    2014-03-01

    A series of reduced models are used to study the fast ion tail in the vicinity of a transition layer between plasmas at disparate temperatures and densities, which is typical of the gas and pusher interface in inertial confinement fusion targets. Emphasis is placed on utilizing progressively more comprehensive models in order to identify the essential physics for computing the fast ion tail at energies comparable to the Gamow peak. The resulting fast ion tail distribution is subsequently used to compute the fusion reactivity as a function of collisionality and temperature. While a significant reduction of the fusion reactivity in the hot spot compared to the nominal Maxwellian case is present, this reduction is found to be partially recovered by an increase of the fusion reactivity in the neighboring cold region.

  1. Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.

    Science.gov (United States)

    Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J

    2016-04-01

    The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods to three-dimensional printed (3DP) fabrication methods using soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3dMD and three CBCT images were imported to the 3DP, and six testing models were made. Reversible hydrocolloid and dental stone were used to make three facial moulages of the CPM, and the impressions/casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were carried out on all specimens. All measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparison of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed significant differences among the three methods. Models fabricated using 3D-SPG showed statistical differences in comparison to the models fabricated using the traditional method of facial moulage and 3DP models fabricated from CBCT imaging. 3DP models fabricated using 3D-SPG were less accurate than the CPM and models fabricated using facial moulage and CBCT imaging techniques. © 2015 by the American College of Prosthodontists.

  2. Assessing intrinsic and specific vulnerability models ability to indicate groundwater vulnerability to groups of similar pesticides: A comparative study

    Science.gov (United States)

    Douglas, Steven; Dixon, Barnali; Griffin, Dale W.

    2018-01-01

    With continued population growth and increasing use of fresh groundwater resources, protection of this valuable resource is critical. A cost effective means to assess risk of groundwater contamination potential will provide a useful tool to protect these resources. Integrating geospatial methods offers a means to quantify the risk of contaminant potential in cost effective and spatially explicit ways. This research was designed to compare the ability of intrinsic (DRASTIC) and specific (Attenuation Factor; AF) vulnerability models to indicate groundwater vulnerability areas by comparing model results to the presence of pesticides from groundwater sample datasets. A logistic regression was used to assess the relationship between the environmental variables and the presence or absence of pesticides within regions of varying vulnerability. According to the DRASTIC model, more than 20% of the study area is very highly vulnerable. Approximately 30% is very highly vulnerable according to the AF model. When groundwater concentrations of individual pesticides were compared to model predictions, the results were mixed. Model predictability improved when concentrations of the group of similar pesticides were compared to model results. Compared to the DRASTIC model, the AF model more accurately predicts the distribution of the number of contaminated wells within each vulnerability class.
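    With a single binary predictor (well located inside versus outside a high-vulnerability class), the logistic-regression coefficient reduces to a log odds ratio, which makes the vulnerability-versus-detection comparison easy to sketch (the counts below are hypothetical, not the study's data):

```python
import math

def odds_ratio(detections_in, wells_in, detections_out, wells_out):
    """Odds of pesticide detection inside a high-vulnerability zone
    relative to outside it. For one binary predictor, the logistic
    regression coefficient equals ln(odds ratio)."""
    odds_in = detections_in / (wells_in - detections_in)
    odds_out = detections_out / (wells_out - detections_out)
    return odds_in / odds_out

# Hypothetical counts: 30 of 100 sampled wells contaminated inside the
# zone mapped as vulnerable, 10 of 100 outside it.
or_ = odds_ratio(30, 100, 10, 100)
slope = math.log(or_)  # the logistic-regression coefficient for the zone flag
```

    An odds ratio well above 1 (positive coefficient) is the pattern a well-calibrated vulnerability model should produce; an odds ratio near 1 indicates the model's classes carry little information about actual detections.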

  3. Comparing GIS-based habitat models for applications in EIA and SEA

    International Nuclear Information System (INIS)

    Gontier, Mikael; Moertberg, Ulla; Balfors, Berit

    2010-01-01

    Land use changes, urbanisation and infrastructure developments in particular, cause fragmentation of natural habitats and threaten biodiversity. Tools and measures must be adapted to assess and remedy the potential effects on biodiversity caused by human activities and developments. Within physical planning, environmental impact assessment (EIA) and strategic environmental assessment (SEA) play important roles in the prediction and assessment of biodiversity-related impacts from planned developments. However, adapted prediction tools to forecast and quantify potential impacts on biodiversity components are lacking. This study tested and compared four different GIS-based habitat models and assessed their relevance for applications in environmental assessment. The models were implemented in the Stockholm region in central Sweden and applied to data on the crested tit (Parus cristatus), a sedentary bird species of coniferous forest. All four models performed well and allowed the distribution of suitable habitats for the crested tit in the Stockholm region to be predicted. The models were also used to predict and quantify habitat loss for two regional development scenarios. The study highlighted the importance of model selection in impact prediction. Criteria that are relevant for the choice of model for predicting impacts on biodiversity were identified and discussed. Finally, the importance of environmental assessment for the preservation of biodiversity within the general frame of biodiversity conservation is emphasised.

  4. Comparing mixing-length models of the diabatic wind profile over homogeneous terrain

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Gryning, Sven-Erik; Hasager, Charlotte Bay

    2010-01-01

    Models of the diabatic wind profile over homogeneous terrain for the entire atmospheric boundary layer are developed using mixing-length theory and are compared to wind speed observations up to 300 m at the National Test Station for Wind Turbines at Høvsøre, Denmark. The measurements are performed...
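    Near the surface, mixing-length models of the diabatic profile reduce to the classic Monin-Obukhov form u(z) = (u*/κ)[ln(z/z0) − ψm(z/L)]. A sketch using the standard Dyer stability functions (a reference point that such models extend above the surface layer; the constants are the textbook ones, not necessarily those used in the paper):

```python
import math

KAPPA = 0.4  # von Karman constant

def psi_m(zeta):
    """Dyer stability correction for momentum; zeta = z / L (Obukhov length L)."""
    if zeta >= 0:                      # stable: log-linear profile
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25    # unstable
    return (2.0 * math.log((1 + x) / 2) + math.log((1 + x * x) / 2)
            - 2.0 * math.atan(x) + math.pi / 2)

def wind_speed(z, u_star, z0, L):
    """Diabatic surface-layer wind profile u(z) = (u*/kappa)[ln(z/z0) - psi_m(z/L)]."""
    return (u_star / KAPPA) * (math.log(z / z0) - psi_m(z / L))
```

    Under stable stratification (L > 0) the correction steepens the profile relative to the neutral logarithmic law, and under unstable stratification it flattens it; the Høvsøre comparison is about how well different mixing-length formulations capture this shape up to 300 m.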

  5. Comparative Analysis of Investment Decision Models

    Directory of Open Access Journals (Sweden)

    Ieva Kekytė

    2017-06-01

    Full Text Available The rapid development of financial markets has created new challenges for both investors and investment practice. This has increased the demand for innovative, modern investment and portfolio-management decisions adequate to market conditions. Financial markets receive special attention, with new models being created that incorporate financial risk management and investment decision-support systems. Researchers recognize the need to deal with financial problems using models that are consistent with reality and based on sophisticated quantitative analysis techniques. Thus, the role of mathematical modeling in finance becomes important. This article deals with various investment decision-making models, which include forecasting, optimization, stochastic processes, artificial intelligence, etc., and which have become useful tools for investment decisions.

  6. A Comparative Test of Work-Family Conflict Models and Critical Examination of Work-Family Linkages

    Science.gov (United States)

    Michel, Jesse S.; Mitchelson, Jacqueline K.; Kotrba, Lindsey M.; LeBreton, James M.; Baltes, Boris B.

    2009-01-01

    This paper is a comprehensive meta-analysis of over 20 years of work-family conflict research. A series of path analyses were conducted to compare and contrast existing work-family conflict models, as well as a new model we developed which integrates and synthesizes current work-family theory and research. This new model accounted for 40% of the…

  7. Application of a method for comparing one-dimensional and two-dimensional models of a ground-water flow system

    International Nuclear Information System (INIS)

    Naymik, T.G.

    1978-01-01

    To evaluate the inability of a one-dimensional ground-water model to interact continuously with surrounding hydraulic head gradients, simulations using one-dimensional and two-dimensional ground-water flow models were compared. This approach used two types of models: flow-conserving one- and two-dimensional models, and one-dimensional and two-dimensional models designed to yield two-dimensional solutions. The hydraulic conductivities of controlling features were varied, and model comparison was based on the travel times of marker particles. The solutions within each of the two model types compare reasonably well, but a three-dimensional solution is required to quantify the comparison

  8. Towards a systemic functional model for comparing forms of discourse in academic writing

    Directory of Open Access Journals (Sweden)

    Meriel Bloor

    2008-04-01

    Full Text Available This article reports on research into the variation of texts across disciplines and considers the implications of this work for the teaching of writing. The research was motivated by the need to improve students’ academic writing skills in English and the limitations of some current pedagogic advice. The analysis compares Methods sections of research articles across four disciplines, including applied and hard sciences, on a cline, or gradient, termed slow to fast. The analysis considers the characteristics the texts share, but more importantly identifies the variation between sets of linguistic features. Working within a systemic functional framework, the texts are analysed for length, sentence length, lexical density, readability, grammatical metaphor, Thematic choice, as well as various rhetorical functions. Contextually relevant reasons for the differences are considered and the implications of the findings are related to models of text and discourse. Recommendations are made for developing domain models that relate clusters of features to positions on a cline.

  9. COMPARATIVE STUDY ON MAIN SOLVENCY ASSESSMENT MODELS FOR INSURANCE FIELD

    Directory of Open Access Journals (Sweden)

    Daniela Nicoleta SAHLIAN

    2015-07-01

    Full Text Available During the recent financial crisis, new aspects of the insurance domain emerged that have to be taken into account concerning risk management and surveillance activity. Insurance societies may develop internal models in order to determine the minimum capital requirement imposed by the new regulations to be adopted on 1 January 2016. In this respect, the purpose of this research paper is to present and compare the main solvency regulation systems used worldwide, with the accent on their common characteristics and current tendencies. Thereby, we would like to offer a better understanding of the similarities and differences between existing solvency regimes, in order to develop the best solvency regime for Romania within the Solvency II project. The study shows that there are clear differences between the existing Solvency I regime and the new risk-based approaches, and also points out that even though the key principles supporting the new solvency regimes are convergent, there are many approaches to applying these principles. In this context, the questions we try to answer are "how could the global solvency models be useful to the financial surveillance authority of Romania for the implementation of the general model and for the development of internal solvency models according to the requirements of Solvency II" and "what would be the requirements for the implementation of this type of approach?". This makes the analysis of solvency models an interesting exercise.

  10. Comparative Analysis of Photogrammetric Methods for 3D Models for Museums

    DEFF Research Database (Denmark)

    Hafstað Ármannsdottir, Unnur Erla; Antón Castro, Francesc/François; Mioc, Darka

    2014-01-01

    The goal of this paper is to make a comparative analysis and selection of methodologies for making 3D models of historical items, buildings and cultural heritage and how to preserve information such as temporary exhibitions and archaeological findings. Two of the methodologies analyzed correspond...... matrix has been used. Prototypes are made partly or fully and evaluated from the point of view of preservation of information by a museum....

  11. MODELLING OF FINANCIAL EFFECTIVENESS AND COMPARATIVE ANALYSIS OF PUBLIC-PRIVATE PARTNERSHIP PROJECTS AND PUBLIC PROCUREMENT

    Directory of Open Access Journals (Sweden)

    Kuznetsov Aleksey Alekseevich

    2017-10-01

    Full Text Available The article substantiates the necessity of extending and developing tools for the methodological evaluation of the effectiveness of public-private partnership (PPP) projects, both individually and in comparison of the effectiveness of various mechanisms of project realization, using traditional public procurement as the example. The author proposes an original technique for modelling the cash flows of private and public partners when realizing projects based on PPP and on public procurement. The model enables us to reveal, promptly and with sufficient accuracy, the comparative advantages of the PPP and public-procurement project forms, and also to assess the financial effectiveness of PPP projects for each partner. The modelling is relatively straightforward and reliable. The model also enables us to evaluate the public partner's expenses for availability, and to find the terms and thresholds for interest rates of financing attracted by the partners and for risk probabilities that ensure the comparative advantage of a PPP project. The proposed criteria of effectiveness are compared with methodological recommendations provided by the Ministry of Economic Development of the Russian Federation. Subject: public and private organizations, financial institutions, development institutions and their theoretical and practical techniques for effectiveness evaluation of public-private partnership (PPP) projects. The complexity of effectiveness evaluation and the lack of a unified and accepted methodology are among the factors that limit the development of PPP in the Russian Federation nowadays. Research objectives: development of methodological methods for assessing the financial efficiency of PPP projects by creating and justifying the application of new principles and methods of modelling, as well as criteria for the effectiveness of PPP projects both individually and in comparison with public procurement. Materials and methods: open databases of ongoing PPP projects in the Russian Federation and abroad were used. 
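The comparative-advantage test described in this record can be sketched as a discounted cash-flow comparison of the two delivery forms. The payment schedules, the discount rate, and the function names below are illustrative assumptions, not the author's actual model:

```python
import numpy as np

def npv(cash_flows, rate):
    """Discount a series of annual cash flows (year 0 first) to present value."""
    years = np.arange(len(cash_flows))
    return float(np.sum(np.asarray(cash_flows) / (1.0 + rate) ** years))

def comparative_advantage(ppp_payments, procurement_payments, discount_rate):
    """Positive result => the PPP scheme costs the public partner less in PV terms."""
    return npv(procurement_payments, discount_rate) - npv(ppp_payments, discount_rate)

# Illustrative public-partner outflows (currency units per year).
procurement = [100, 10, 10, 10, 10]   # up-front capital cost, then maintenance
ppp = [0, 32, 32, 32, 32]             # availability payments spread over the term

advantage = comparative_advantage(ppp, procurement, 0.08)
```

Varying the discount rate or the payment schedule until `advantage` changes sign gives exactly the kind of threshold value the model is said to identify.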

  12. Comparative study of wall-force models for the simulation of bubbly flows

    Energy Technology Data Exchange (ETDEWEB)

    Rzehak, Roland, E-mail: r.rzehak@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Krepper, Eckhard, E-mail: E.Krepper@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Lifante, Conxita, E-mail: Conxita.Lifante@ansys.com [ANSYS Germany GmbH, Staudenfeldweg 12, 83624 Otterfing (Germany)

    2012-12-15

    Highlights: ► Comparison of common models for the wall force with an experimental database. ► Identification of suitable closure for bubbly flow. ► Enables prediction of location and height of wall peak in void fraction profiles. - Abstract: Accurate numerical prediction of void-fraction profiles in bubbly multiphase-flow relies on suitable closure models for the momentum exchange between liquid and gas phases. We here consider forces acting on the bubbles in the vicinity of a wall. A number of different models for this so-called wall-force have been proposed in the literature and are implemented in widely used CFD-codes. Simulations using a selection of these models are compared with a set of experimental data on bubbly air-water flow in round pipes of different diameter. Based on the results, recommendations on suitable closures are given.
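As a concrete example of the kind of closure being compared, here is a minimal sketch of the wall-lubrication force of Antal et al. (1991), one of the commonly implemented wall-force models. The coefficient defaults and the water/bubble parameters are common textbook choices taken as assumptions here, not values recommended by this study:

```python
def antal_wall_force(alpha_g, rho_l, u_rel_tang, d_b, y_w, c_w1=-0.01, c_w2=0.05):
    """Wall-lubrication force density (N/m^3), directed away from the wall.

    One common form of the Antal et al. (1991) closure:
    F = alpha_g * rho_l * max(0, c_w1/d_b + c_w2/y_w) * |u_rel|^2,
    which cuts off to zero beyond y_w = (c_w2/|c_w1|) * d_b from the wall.
    """
    c_wl = max(0.0, c_w1 / d_b + c_w2 / y_w)
    return alpha_g * rho_l * c_wl * u_rel_tang ** 2

# A 4 mm bubble in water: force is active near the wall, zero beyond 5 diameters.
f_near = antal_wall_force(0.1, 998.0, 0.25, 4e-3, 2e-3)
f_far = antal_wall_force(0.1, 998.0, 0.25, 4e-3, 25e-3)  # beyond the cutoff
```

The competing wall-force models differ mainly in this coefficient function and its decay with wall distance, which is what shifts the location and height of the wall peak in the void-fraction profile.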

  13. Comparing wall modeled LES and prescribed boundary layer approach in infinite wind farm simulations

    DEFF Research Database (Denmark)

    Sarlak, Hamid; Mikkelsen, Robert; Sørensen, Jens Nørkær

    2015-01-01

    This paper aims at presenting a simple and computationally fast method for simulation of the Atmospheric Boundary Layer (ABL) and comparing the results with the commonly used wall-modelled Large Eddy Simulation (WMLES). The simple method, called Prescribed Mean Shear and Turbulence (PMST) hereafter, is based on imposing body forces over the whole domain to maintain a desired unsteady flow, where the ground is modeled as a slip-free boundary, which in turn obviates the need for grid refinement and/or wall modeling close to the solid walls. Another strength of this method, besides being computationally … be imposed to study the wake and dynamics of vortices. The methodology is used for simulation of interactions of an infinitely long wind farm with the neutral ABL. Flow statistics are compared with the WMLES computations in terms of mean velocity as well as higher order statistical moments. The results …

  14. Comparing holographic dark energy models with statefinder

    International Nuclear Information System (INIS)

    Cui, Jing-Lei; Zhang, Jing-Fei

    2014-01-01

    We apply the statefinder diagnostic to the holographic dark energy models, including the original holographic dark energy (HDE) model, the new holographic dark energy model, the new agegraphic dark energy (NADE) model, and the Ricci dark energy model. In the low-redshift region the holographic dark energy models are degenerate with each other and with the ΛCDM model in the H(z) and q(z) evolutions. In particular, the HDE model is highly degenerate with the ΛCDM model, and in the HDE model the cases with different parameter values are also in strong degeneracy. Since the observational data are mainly within the low-redshift region, it is very important to break this low-redshift degeneracy in the H(z) and q(z) diagnostics by using some quantities with higher order derivatives of the scale factor. It is shown that the statefinder diagnostic r(z) is very useful in breaking the low-redshift degeneracies. By employing the statefinder diagnostic the holographic dark energy models can be differentiated efficiently in the low-redshift region. The degeneracy between the holographic dark energy models and the ΛCDM model can also be broken by this method. Especially for the HDE model, all the previous strong degeneracies appearing in the H(z) and q(z) diagnostics are broken effectively. But for the NADE model, the degeneracy between the cases with different parameter values cannot be broken, even though the statefinder diagnostic is used. A direct comparison of the holographic dark energy models in the r-s plane is also made, in which the separations between the models (including the ΛCDM model) can be directly measured in the light of the current values {r₀, s₀} of the models. (orig.)
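Why the statefinder breaks the degeneracy can be seen from the standard closed form for a flat model with constant equation of state w (a simpler stand-in for the holographic models discussed here, used only for illustration): r = 1 + (9/2) w (1 + w) Ω_de and s = (r − 1)/(3(q − 1/2)) = 1 + w, so ΛCDM sits at the fixed point {r, s} = {1, 0} while any w ≠ −1 departs from it:

```python
def statefinder_wcdm(w, omega_de):
    """Statefinder pair {r, s} for a flat model with constant equation of state w.

    r = 1 + (9/2) w (1 + w) Omega_de;  s = (r - 1) / (3 (q - 1/2)) = 1 + w,
    since q = 1/2 + (3/2) w Omega_de for this class of models.
    """
    r = 1.0 + 4.5 * w * (1.0 + w) * omega_de
    s = 1.0 + w
    return r, s

# LambdaCDM (w = -1) is the fixed point {1, 0}; a quintessence-like model departs.
r_lcdm, s_lcdm = statefinder_wcdm(-1.0, 0.7)
r_q, s_q = statefinder_wcdm(-0.9, 0.7)
```

Two models whose H(z) curves are nearly indistinguishable at low redshift can still land at visibly different points of the r-s plane, which is the separation the paper measures via {r₀, s₀}.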

  15. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparison of the velocity increments between the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into functions of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  16. A comparative study of manhole hydraulics using stereoscopic PIV and different RANS models.

    Science.gov (United States)

    Beg, Md Nazmul Azim; Carvalho, Rita F; Tait, Simon; Brevis, Wernher; Rubinato, Matteo; Schellart, Alma; Leandro, Jorge

    2017-04-01

    Flows in manholes are complex and may include swirling and recirculating flow with significant turbulence and vorticity. However, how these complex 3D flow patterns could generate different energy losses and so affect flow quantity in the wider sewer network is unknown. In this work, 2D3C stereo Particle Image Velocimetry measurements are made in a surcharged scaled circular manhole. A computational fluid dynamics (CFD) model in OpenFOAM® with four different Reynolds Averaged Navier Stokes (RANS) turbulence models is constructed using a volume of fluid model, to represent flows in this manhole. Velocity profiles and pressure distributions from the models are compared with the experimental data with a view to finding the best modelling approach. Among the four RANS models, the re-normalization group (RNG) k-ε and the k-ω shear stress transport (SST) models were found to give better approximations for velocity and pressure.

  17. Case management: a randomized controlled study comparing a neighborhood team and a centralized individual model.

    OpenAIRE

    Eggert, G M; Zimmer, J G; Hall, W J; Friedman, B

    1991-01-01

    This randomized controlled study compared two types of case management for skilled nursing level patients living at home: the centralized individual model and the neighborhood team model. The team model differed from the individual model in that team case managers performed client assessments, care planning, some direct services, and reassessments; they also had much smaller caseloads and were assigned a specific catchment area. While patients in both groups incurred very high estimated healt...

  18. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    Science.gov (United States)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a wide-spread tool in understanding and management of natural systems. With the high complexity of many such models and the huge amount of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the
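The snapshot-SVD-projection step at the heart of POD can be sketched in a few lines. The snapshots below come from an analytic diffusion solution standing in for output of a complex linear groundwater model; the grid size, snapshot count, and number of retained modes are arbitrary assumptions:

```python
import numpy as np

# Snapshot matrix: states of a 1-D diffusion problem at several times (columns).
n, n_snap = 200, 40
x = np.linspace(0.0, 1.0, n)
t = np.linspace(0.01, 1.0, n_snap)
# Analytic heat-equation solution as a stand-in for complex-model output.
snapshots = np.array([np.sin(np.pi * x) * np.exp(-np.pi**2 * ti)
                      + 0.3 * np.sin(3 * np.pi * x) * np.exp(-9 * np.pi**2 * ti)
                      for ti in t]).T                      # shape (n, n_snap)

# POD basis: left singular vectors with the largest singular values.
U, sigma, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 2                                                      # retained modes
phi = U[:, :k]                                             # (n, k) projection basis

# Project a full state onto the k-dimensional subspace and lift it back.
full_state = snapshots[:, 5]
reduced = phi.T @ full_state                               # k coefficients
reconstructed = phi @ reduced
err = np.linalg.norm(full_state - reconstructed) / np.linalg.norm(full_state)
```

In a real application the same basis `phi` is used to project the model's system matrices once (e.g. A_r = φᵀAφ), after which time-stepping happens in k dimensions instead of n; the non-linear cases discussed above break exactly this one-time-projection property, which is what POD-DEIM addresses.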

  19. Comparing two non-equilibrium approaches to modelling of a free-burning arc

    International Nuclear Information System (INIS)

    Baeva, M; Uhrlandt, D; Benilov, M S; Cunha, M D

    2013-01-01

    Two models of high-pressure arc discharges are compared with each other and with experimental data for an atmospheric-pressure free-burning arc in argon for arc currents of 20–200 A. The models account for space-charge effects and thermal and ionization non-equilibrium in somewhat different ways. One model considers space-charge effects, thermal and ionization non-equilibrium in the near-cathode region and thermal non-equilibrium in the bulk plasma. The other model considers thermal and ionization non-equilibrium in the entire arc plasma and space-charge effects in the near-cathode region. Both models are capable of predicting the arc voltage in fair agreement with experimental data. Differences are observed in the arc attachment to the cathode, which do not strongly affect the near-cathode voltage drop and the total arc voltage for arc currents exceeding 75 A. For lower arc currents the difference is significant but the arc column structure is quite similar and the predicted bulk plasma characteristics are relatively close to each other. (paper)

  20. Comparing two models for post-wildfire debris flow susceptibility mapping

    Science.gov (United States)

    Cramer, J.; Bursik, M. I.; Legorreta Paulin, G.

    2017-12-01

    Traditionally, probabilistic post-fire debris flow susceptibility mapping has been performed based on the typical method of failure for debris flows/landslides, where slip occurs along a basal shear zone as a result of rainfall infiltration. Recent studies have argued that post-fire debris flows are fundamentally different in their method of initiation, which is not infiltration-driven, but surface runoff-driven. We test these competing models by comparing the accuracy of the susceptibility maps produced by each initiation method. Debris flow susceptibility maps are generated according to each initiation method for a mountainous region of Southern California that recently experienced wildfire and subsequent debris flows. A multiple logistic regression (MLR), which uses the occurrence of past debris flows and the values of environmental parameters, was used to determine the probability of future debris flow occurrence. The independent variables used in the MLR are dependent on the initiation method; for example, depth to slip plane, and shear strength of soil are relevant to the infiltration initiation, but not surface runoff. A post-fire debris flow inventory serves as the standard to compare the two susceptibility maps, and was generated by LiDAR analysis and field based ground-truthing. The amount of overlap between the true locations where debris flow erosion can be documented, and where the MLR predicts high probability of debris flow initiation was statistically quantified. The Figure of Merit in Space (FMS) was used to compare the two models, and the results of the FMS comparison suggest that surface runoff-driven initiation better explains debris flow occurrence. Wildfire can breed conditions that induce debris flows in areas that normally would not be prone to them. Because of this, nearby communities at risk may not be equipped to protect themselves against debris flows. In California, there are just a few months between wildland fire season and the wet
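The Figure of Merit in Space used to compare the two susceptibility maps reduces to an intersection-over-union between the observed and predicted debris-flow areas. A minimal sketch on toy boolean rasters (the arrays are invented for illustration):

```python
import numpy as np

def figure_of_merit_in_space(observed, predicted):
    """FMS = |observed AND predicted| / |observed OR predicted| on boolean rasters."""
    observed = np.asarray(observed, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    union = np.logical_or(observed, predicted).sum()
    if union == 0:
        return 1.0  # both maps empty: trivially perfect agreement
    return np.logical_and(observed, predicted).sum() / union

# Toy 1-D "maps": 3 cells agree, each map has 1 unmatched cell -> FMS = 3/5.
obs = [1, 1, 1, 0, 1, 0]
pred = [1, 1, 1, 1, 0, 0]
fms = figure_of_merit_in_space(obs, pred)
```

The initiation model whose high-probability cells achieve the larger FMS against the LiDAR/field inventory is the one judged to better explain debris-flow occurrence.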

  1. Comparing Fuzzy Sets and Random Sets to Model the Uncertainty of Fuzzy Shorelines

    NARCIS (Netherlands)

    Dewi, Ratna Sari; Bijker, Wietske; Stein, Alfred

    2017-01-01

    This paper addresses uncertainty modelling of shorelines by comparing fuzzy sets and random sets. Both methods quantify extensional uncertainty of shorelines extracted from remote sensing images. Two datasets were tested: pan-sharpened Pleiades with four bands (Pleiades) and pan-sharpened Pleiades

  2. Representing macropore flow at the catchment scale: a comparative modeling study

    Science.gov (United States)

    Liu, D.; Li, H. Y.; Tian, F.; Leung, L. R.

    2017-12-01

    Macropore flow is an important hydrological process that generally enhances the soil infiltration capacity and velocity of subsurface water. To date, macropore flow has mostly been simulated with high-resolution models. One possible drawback of this modeling approach is the difficulty of effectively representing the overall typology and connectivity of the macropore networks. We hypothesize that modeling macropore flow directly at the catchment scale may be complementary to the existing modeling strategy and offer some new insights. The Tsinghua Representative Elementary Watershed model (THREW) is a semi-distributed hydrology model, whose fundamental building blocks are representative elementary watersheds (REW) linked by the river channel network. In THREW, all the hydrological processes are described with constitutive relationships established directly at the REW level, i.e., the catchment scale. In this study, the constitutive relationship for macropore flow drainage is established as part of THREW. The enhanced THREW model is then applied at two catchments with deep soils but distinct climates: the humid Asu catchment in the Amazon River basin, and the arid Wei catchment in the Yellow River basin. The Asu catchment has an area of 12.43 km2 with mean annual precipitation of 2442 mm. The larger Wei catchment has an area of 24,800 km2 but with mean annual precipitation of only 512 mm. The rainfall-runoff processes are simulated at an hourly time step from 2002 to 2005 in the Asu catchment and from 2001 to 2012 in the Wei catchment. The role of macropore flow in catchment hydrology will be analyzed comparatively over the Asu and Wei catchments against the observed streamflow, evapotranspiration and other auxiliary data.

  3. Comparative study of non-premixed and partially-premixed combustion simulations in a realistic Tay model combustor

    OpenAIRE

    Zhang, K.; Ghobadian, A.; Nouri, J. M.

    2017-01-01

    A comparative study of two combustion models, based on non-premixed and partially-premixed assumptions and using the overall models of the Zimont Turbulent Flame Speed Closure Method (ZTFSC) and the Extended Coherent Flamelet Method (ECFM), is conducted through Reynolds stress turbulence modelling of the Tay model gas turbine combustor for the first time. The Tay model combustor retains all essential features of a realistic gas turbine combustor. It is seen that the non-premixed combustion model fa...

  4. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment
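The conflicting metrics described above can be made concrete with the Nash-Sutcliffe coefficient evaluated separately on high and low flows, followed by the weighted-sum aggregation the authors argue gives only one point on the tradeoff. The streamflow values, the median split, and the equal weights are illustrative assumptions:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([5.0, 30.0, 80.0, 20.0, 8.0, 4.0])   # observed streamflow
sim = np.array([6.0, 25.0, 70.0, 24.0, 7.0, 5.0])   # simulated streamflow

# Separate metrics for high and low flows (median split), then the weighted
# aggregation that collapses the two objectives into a single number.
high = obs >= np.median(obs)
ns_high = nash_sutcliffe(obs[high], sim[high])
ns_low = nash_sutcliffe(obs[~high], sim[~high])
aggregated = 0.5 * ns_high + 0.5 * ns_low
```

A bi-objective calibration instead keeps (ns_high, ns_low) as a pair and traces the set of parameterizations for which neither metric can improve without degrading the other, which is the tradeoff curve used here to compare the two MESH models.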

  5. Comparative evaluation of kinetic, equilibrium and semi-equilibrium models for biomass gasification

    Energy Technology Data Exchange (ETDEWEB)

    Buragohain, Buljit [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Chakma, Sankar; Kumar, Peeush [Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Mahanta, Pinakeswar [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Moholkar, Vijayanand S. [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India)

    2013-07-01

    Modeling of biomass gasification has been an active area of research for the past two decades. In the published literature, three approaches have been adopted for the modeling of this process, viz. thermodynamic equilibrium, semi-equilibrium and kinetic. In this paper, we have attempted to present a comparative assessment of these three types of models for predicting the outcome of the gasification process in a circulating fluidized bed gasifier. Two model biomass types, viz. rice husk and wood particles, have been chosen for analysis, with the gasification medium being air. Although the trends in molar composition, net yield and LHV of the producer gas predicted by the three models are in concurrence, significant quantitative difference is seen in the results. Due to the rather slow kinetics of char gasification and tar oxidation, the carbon conversion achieved in a single pass of biomass through the gasifier, calculated using the kinetic model, is quite low, which adversely affects the yield and LHV of the producer gas. Although the equilibrium and semi-equilibrium models reveal relative insensitivity of producer gas characteristics towards temperature, the kinetic model shows a significant effect of temperature on the LHV of the gas at low air ratios. The kinetic model also reveals the volume of the gasifier to be an insignificant parameter, as the net yield and LHV of the gas resulting from 6 m and 10 m risers are the same. On the whole, the analysis presented in this paper indicates that thermodynamic models are useful tools for quantitative assessment of the gasification process, while kinetic models provide a physically more realistic picture.

  6. Comparative analysis of methods and tools for open and closed fuel cycles modeling: MESSAGE and DESAE

    International Nuclear Information System (INIS)

    Andrianov, A.A.; Korovin, Yu.A.; Murogov, V.M.; Fedorova, E.V.; Fesenko, G.A.

    2006-01-01

    Comparative analysis of optimization and simulation methods, using the MESSAGE and DESAE programs as examples, is carried out for modeling of nuclear power prospects and advanced fuel cycles. Test calculations for open and two-component nuclear power systems and a closed fuel cycle are performed. An auxiliary simulation-dynamic model is developed to clarify the differences between the MESSAGE and DESAE modeling approaches. A description of the model is given.

  7. a Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and Modis Data

    Science.gov (United States)

    Hazaymeh, K.; Almagbile, A.

    2018-04-01

    In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair and time-series datasets from the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated from each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. The STI-FM produced more accurate reconstructions of both Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicated that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.
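The statistical measurements used to rank such fusion models typically include per-band correlation and RMSE between the synthetic and actual images. A small sketch on invented arrays (the data and noise level are assumptions, not the study's imagery):

```python
import numpy as np

def fusion_accuracy(actual, synthetic):
    """Correlation coefficient and RMSE between actual and fused reflectance arrays."""
    a = np.asarray(actual, float).ravel()
    s = np.asarray(synthetic, float).ravel()
    r = float(np.corrcoef(a, s)[0, 1])
    rmse = float(np.sqrt(np.mean((a - s) ** 2)))
    return r, rmse

rng = np.random.default_rng(1)
actual = rng.uniform(0.05, 0.4, (64, 64))                  # stand-in Landsat band
synthetic = actual + rng.normal(0.0, 0.01, actual.shape)   # stand-in fused prediction
r, rmse = fusion_accuracy(actual, synthetic)
```

Computing this pair per band (and for NDVI) for each of the three models yields exactly the comparison table behind the conclusion that STI-FM correlates best with the actual Landsat-7 images.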

  8. Metal accumulation in the earthworm Lumbricus rubellus. Model predictions compared to field data

    Science.gov (United States)

    Veltman, K.; Huijbregts, M.A.J.; Vijver, M.G.; Peijnenburg, W.J.G.M.; Hobbelen, P.H.F.; Koolhaas, J.E.; van Gestel, C.A.M.; van Vliet, P.C.J.; Jan, Hendriks A.

    2007-01-01

    The mechanistic bioaccumulation model OMEGA (Optimal Modeling for Ecotoxicological Applications) is used to estimate accumulation of zinc (Zn), copper (Cu), cadmium (Cd) and lead (Pb) in the earthworm Lumbricus rubellus. Our validation against field accumulation data shows that the model accurately predicts internal cadmium concentrations. In addition, our results show that internal metal concentrations in the earthworm are less than linearly (slope < 1) related to the total concentration in soil, while risk assessment procedures often assume the biota-soil accumulation factor (BSAF) to be constant. Although predicted internal concentrations of all metals are generally within a factor of 5 of the field data, incorporation of regulation in the model is necessary to improve predictability of essential metals such as zinc and copper. © 2006 Elsevier Ltd. All rights reserved.
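The "less than linear (slope < 1)" relationship is usually established by fitting a power law C_int = a·C_soil^b in log-log space; when b < 1, the BSAF = C_int/C_soil necessarily falls as soil concentration rises, contradicting the constant-BSAF assumption. A sketch on synthetic data (the coefficients and noise are invented, not the paper's field values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic field-like data: internal Cd ~ a * C_soil^b with b < 1 (illustrative).
c_soil = np.logspace(-1, 2, 60)                  # total soil concentration
b_true, a_true = 0.6, 2.0
c_worm = a_true * c_soil ** b_true * rng.lognormal(0.0, 0.2, c_soil.size)

# Fit log10(C_int) = log10(a) + b * log10(C_soil); constant BSAF would force b = 1.
b_fit, log_a_fit = np.polyfit(np.log10(c_soil), np.log10(c_worm), 1)

# BSAF = C_int / C_soil varies with soil concentration whenever b != 1.
bsaf_low = (a_true * 0.1 ** b_true) / 0.1
bsaf_high = (a_true * 100.0 ** b_true) / 100.0
```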

  9. A comparative study of the use of different risk-assessment models in Danish municipalities

    DEFF Research Database (Denmark)

    Sørensen, Kresta Munkholt

    2018-01-01

    Risk-assessment models are widely used in casework involving vulnerable children and families. Internationally, there are a number of different kinds of models with great variation in regard to the characteristics of factors that harm children. Lists of factors have been made, but most of them give very little advice on how the factors should be weighted. This paper will address the use of risk-assessment models in six different Danish municipalities. The paper presents a comparative analysis and discussion of differences and similarities between three models: the Integrated Children's System (ICS), the Signs of Safety (SoS) model and models developed by the municipalities themselves (MM). The analysis will answer the following two key questions: (i) to which risk and protective factors do the caseworkers give most weight in the risk assessment? and (ii) does each of the different models …

  10. Cold Nuclear Matter effects on J/psi production at RHIC: comparing shadowing models

    Energy Technology Data Exchange (ETDEWEB)

    Ferreiro, E.G.; /Santiago de Compostela U.; Fleuret, F.; /Ecole Polytechnique; Lansberg, J.P.; /SLAC; Rakotozafindrabe, A.; /SPhN, DAPNIA, Saclay

    2009-06-19

    We present a wide study on the comparison of different shadowing models and their influence on J/ψ production. We have taken into account the possibility of different partonic processes for the cc̄-pair production. We notice that the effect of shadowing corrections on J/ψ production clearly depends on the partonic process considered. Our results are compared to the available data on dAu collisions at RHIC energies. We try different break-up cross sections for each of the studied shadowing models.

  11. Comparing Apples to Apples: Paleoclimate Model-Data comparison via Proxy System Modeling

    Science.gov (United States)

    Dee, Sylvia; Emile-Geay, Julien; Evans, Michael; Noone, David

    2014-05-01

    The wealth of paleodata spanning the last millennium (hereinafter LM) provides an invaluable testbed for CMIP5-class GCMs. However, comparing GCM output to paleodata is non-trivial. High-resolution paleoclimate proxies generally contain a multivariate and non-linear response to regional climate forcing. Disentangling the multivariate environmental influences on proxies like corals, speleothems, and trees can be complex due to spatiotemporal climate variability, non-stationarity, and threshold dependence. Given these and other complications, many paleodata-GCM comparisons take a leap of faith, relating climate fields (e.g. precipitation, temperature) to geochemical signals in proxy data (e.g. δ18O in coral aragonite or ice cores) (e.g. Braconnot et al., 2012). Isotope-enabled GCMs are a step in the right direction, with water isotopes providing a connector point between GCMs and paleodata. However, such studies are still rare, and isotope fields are not archived as part of LM PMIP3 simulations. More importantly, much of the complexity in how proxy systems record and transduce environmental signals remains unaccounted for. In this study we use proxy system models (PSMs, Evans et al., 2013) to bridge this conceptual gap. A PSM mathematically encodes the mechanistic understanding of the physical, geochemical and, sometimes biological influences on each proxy. To translate GCM output to proxy space, we have synthesized a comprehensive, consistently formatted package of published PSMs, including δ18O in corals, tree ring cellulose, speleothems, and ice cores. Each PSM is comprised of three sub-models: sensor, archive, and observation. For the first time, these different components are coupled together for four major proxy types, allowing uncertainties due to both dating and signal interpretation to be treated within a self-consistent framework. The output of this process is an ensemble of many (say N = 1,000) realizations of the proxy network, all equally plausible
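The sensor/archive/observation decomposition can be illustrated with a deliberately crude pseudoproxy sketch. The linear coral δ18O sensor, its coefficients, and the smoothing/noise parameters below are invented for illustration and are far simpler than the published PSMs this record synthesizes:

```python
import numpy as np

rng = np.random.default_rng(42)

def coral_d18o_sensor(sst, sss, a=-0.22, b=0.5):
    """Toy sensor sub-model: coral d18O responds linearly to sea-surface
    temperature and salinity; the coefficients are illustrative assumptions."""
    return a * sst + b * sss

def archive_and_observe(signal, smooth=3, noise_sd=0.1):
    """Toy archive/observation sub-models: time-averaging plus measurement noise."""
    kernel = np.ones(smooth) / smooth
    archived = np.convolve(signal, kernel, mode="same")
    return archived + rng.normal(0.0, noise_sd, archived.size)

# Pseudo climate-model output and an N = 1,000 member pseudoproxy ensemble.
years = 100
sst = 26.0 + 1.5 * np.sin(2 * np.pi * np.arange(years) / 11.0)
sss = 35.0 + rng.normal(0.0, 0.2, years)
truth = coral_d18o_sensor(sst, sss)
ensemble = np.array([archive_and_observe(truth) for _ in range(1000)])
```

Running GCM fields through such forward operators, rather than regressing proxy values directly onto climate fields, is what lets dating and signal-interpretation uncertainties be treated within one self-consistent ensemble.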

  12. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.
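The c-statistic reported by most of these CPMs is a pairwise concordance probability: the chance that a randomly chosen patient with the outcome receives a higher predicted risk than one without it. A minimal sketch (the scores and outcomes are invented):

```python
import itertools

def c_statistic(scores, outcomes):
    """Concordance (c) statistic over all event/non-event pairs; ties count 0.5."""
    pairs = concordant = 0.0
    for (s_i, y_i), (s_j, y_j) in itertools.combinations(zip(scores, outcomes), 2):
        if y_i == y_j:
            continue  # only discordant-outcome pairs are informative
        pairs += 1
        hi, lo = (s_i, s_j) if y_i == 1 else (s_j, s_i)
        if hi > lo:
            concordant += 1
        elif hi == lo:
            concordant += 0.5
    return concordant / pairs

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]   # predicted risks
events = [1,   1,   0,   1,   0,    0]     # observed outcomes
c = c_statistic(scores, events)
```

A c-statistic of 0.5 is chance-level discrimination and 1.0 is perfect ranking; note it says nothing about calibration, which is why the database tracks the two properties separately.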

  13. MAX-DOAS tropospheric nitrogen dioxide column measurements compared with the Lotos-Euros air quality model

    NARCIS (Netherlands)

    Vlemmix, T.; Eskes, H.J.; Piters, A.J.M.; Schaap, M.; Sauter, F.J.; Kelder, H.; Levelt, P.F.

    2015-01-01

    A 14-month data set of MAX-DOAS (Multi-Axis Differential Optical Absorption Spectroscopy) tropospheric NO2 column observations in De Bilt, the Netherlands, has been compared with the regional air quality model Lotos-Euros. The model was run on a 7×7 km2 grid, the same resolution as the emission

  14. Comparing flow-through and static ice cave models for Shoshone Ice Cave

    Directory of Open Access Journals (Sweden)

    Kaj E. Williams

    2015-05-01

    In this paper we suggest a new ice cave type: the "flow-through" ice cave. In a flow-through ice cave, external winds blow into the cave and wet cave walls chill the incoming air to the wet-bulb temperature, thereby achieving extra cooling of the cave air. We have investigated an ice cave in Idaho, located in a lava tube that is reported to have airflow through porous wet end-walls and could therefore be a flow-through cave. We have instrumented the site and collected data for one year. In order to determine the actual ice cave type present at Shoshone, we have constructed numerical models for static and flow-through caves (dynamic is not relevant here). The models are driven with exterior measurements of air temperature, relative humidity and wind speed. The model output is interior air temperature and relative humidity. We then compare the output of both models to the measured interior air temperatures and relative humidity. While both the flow-through and static cave models are capable of preserving ice year-round (a net zero or positive ice mass balance), the two models show very different cave air temperature and relative humidity output. We find the empirical data support a hybrid model of the static and flow-through models: permitting a static ice cave to have incoming air chilled to the wet-bulb temperature fits the data best for the Shoshone Ice Cave.
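The extra cooling invoked for the flow-through mechanism is bounded by the wet-bulb temperature of the incoming air, which can be estimated from temperature and relative humidity alone. Stull's (2011) empirical fit is used below as one convenient approximation; it is our illustrative choice, not necessarily the formulation used in the authors' numerical models:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Wet-bulb temperature (deg C) from air temperature (deg C) and relative
    humidity (%), using Stull's (2011) empirical fit (valid roughly for RH > 5%)."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct) - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Warm, fairly dry summer air entering through a wet end-wall is chilled well
# below its dry-bulb temperature: 20 degC at 50% RH gives roughly 13.7 degC.
tw = wet_bulb_stull(20.0, 50.0)
```

The gap between exterior air temperature and this wet-bulb value is the additional cooling a flow-through cave can achieve over a purely static one, which is what makes the two model types distinguishable against the interior measurements.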

  15. Comparative Study of Injury Models for Studying Muscle Regeneration in Mice.

    Directory of Open Access Journals (Sweden)

    David Hardy

    Full Text Available A longstanding goal in regenerative medicine is to reconstitute functional tissues or organs after injury or disease. Attention has focused on the identification and relative contribution of tissue-specific stem cells to the regeneration process. Relatively little is known about how the physiological process is regulated by other tissue constituents. Numerous injury models are used to investigate tissue regeneration; however, these models are often poorly understood. Specifically, for skeletal muscle regeneration several models are reported in the literature, yet their relative impact on muscle physiology and on distinct cell types has not been extensively characterised. We used transgenic Tg:Pax7nGFP and Flk1GFP/+ mouse models to count, respectively, the number of muscle stem (satellite) cells (SC) and the number/shape of vessels by confocal microscopy. We performed histological analyses and immunostainings to assess differences in the key regeneration steps. Infiltration of immune cells and production of chemokines and cytokines were assessed in vivo by Luminex®. We compared the 4 most commonly used injury models, i.e. freeze injury (FI), barium chloride (BaCl2), notexin (NTX) and cardiotoxin (CTX). FI was the most damaging: in this model, up to 96% of the SCs are destroyed together with their surrounding environment (basal lamina and vasculature), leaving a "dead zone" devoid of viable cells. The regeneration process itself is fulfilled in all 4 models with virtually no fibrosis 28 days post-injury, except in the FI model. Inflammatory cells return to basal levels in the CTX and BaCl2 models but remain significantly elevated 1 month post-injury in the FI and NTX models. Interestingly, the number of SCs returned to normal only in the FI model 1 month post-injury, with SCs still cycling up to 3 months after induction of the injury in the other models. Our studies show that the nature of the injury model should be chosen carefully depending on the experimental design and desired

  16. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  17. Creating, generating and comparing random network models with NetworkRandomizer.

    Science.gov (United States)

    Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni

    2016-01-01

    Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app, for the Cytoscape platform, which aims at creating randomised networks and randomising existing, real networks. Since there is a lack of tools that allow performing such operations, our app aims at enabling researchers to exploit different, well known random network models that could be used as a benchmark for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardised methodology for the validation of the results in the context of the Cytoscape platform.
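
The abstract does not detail the app's randomization algorithms beyond naming the multiplication algorithm, but a standard way to randomize an existing network while preserving its degree sequence is the double edge swap, sketched below on a toy edge list (treated as directed for simplicity; NetworkRandomizer's own models may differ).

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Degree-preserving randomization: repeatedly pick two edges (a, b) and
    (c, d) and rewire them to (a, d) and (c, b), rejecting swaps that would
    create self-loops or duplicate edges. Node degrees are left unchanged."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]   # work on a copy
    edge_set = set(edges)
    done = 0
    while done < n_swaps:
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop
        if (a, d) in edge_set or (c, b) in edge_set:
            continue  # would create a duplicate edge
        i, j = edges.index((a, b)), edges.index((c, d))
        edge_set -= {(a, b), (c, d)}
        edge_set |= {(a, d), (c, b)}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

original = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
randomized = double_edge_swap(original, n_swaps=10)
```

A benchmark built this way shares the real network's degree sequence, so any attribute that differs between the two can be attributed to structure beyond the degrees.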

  18. Comparing convective heat fluxes derived from thermodynamics to a radiative-convective model and GCMs

    Science.gov (United States)

    Dhara, Chirag; Renner, Maik; Kleidon, Axel

    2015-04-01

    The convective transport of heat and moisture plays a key role in the climate system, but the transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system that describe how the temperature difference is affected by convective heat transport, yielding a maximum power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey atmosphere radiative-convective (RC) model as well as Global Circulation Model (GCM) simulations at the grid scale. We find that our simple expression as well as the RC model can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches. The RC model, however, shows a lower bias than our simple expression. We identify the use of the prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression. On the other hand, the comparison with the RC model indicates a method for improving the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes compare very well among the approaches, as do their sensitivities to surface warming. Our comparison suggests that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.
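
The maximum-power reasoning can be reproduced numerically with a linearized energy balance. In the sketch below, R_s, k and T_s are illustrative values and the linearization is our simplification, not the paper's exact formulation; the Carnot-limited power curve peaks at half the absorbed solar radiation, consistent with the claim that convective strength depends primarily on surface solar absorption.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper)
R_s = 160.0   # absorbed solar radiation at the surface, W m^-2
k = 4.0       # linearized radiative exchange coefficient, W m^-2 K^-1
T_s = 288.0   # reference surface temperature, K

# Surface energy balance: solar heating not exported by the convective
# flux J leaves as net longwave, k * dT, so dT = (R_s - J) / k.
# The heat engine delivers power P = J * dT / T_s (Carnot limit, small dT).
J = np.linspace(0.0, R_s, 2001)
dT = (R_s - J) / k
P = J * dT / T_s

J_opt = J[np.argmax(P)]   # maximum-power convective flux
# P is quadratic in J, so analytically the peak sits at J = R_s / 2.
```

The trade-off is the essential physics: a stronger flux reduces the very temperature difference that drives it, so power is maximized at an intermediate flux.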

  19. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts above 0.5. Volume Phase Holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  20. A comparative modeling study of a dual tracer experiment in a large lysimeter under atmospheric conditions

    Science.gov (United States)

    Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.

    2009-09-01

    Summary: In this paper, five model approaches with different physical and mathematical concepts, varying in their complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic (stream tube) model, three lumped parameter models (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes can also be described assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variable-saturation flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches gave reliable results.
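
As a minimal illustration of the kind of lumped-parameter description fitted above, the sketch below evaluates a breakthrough curve from the 1-D advection-dispersion equation for a pulse input; the depth, velocity and dispersivity are illustrative values, not parameters from the lysimeter study.

```python
import numpy as np

def breakthrough(t, x, v, D, c0=1.0):
    """Resident concentration from the 1-D advection-dispersion equation for
    an instantaneous pulse input: a Gaussian travelling at mean pore velocity
    v and spreading with dispersion coefficient D = v * dispersivity."""
    t = np.asarray(t, dtype=float)
    return c0 / np.sqrt(4.0 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4.0 * D * t))

# Illustrative values: observation depth 2 m, pore velocity 0.01 m/day,
# dispersivity 0.05 m  ->  D = 5e-4 m^2/day
x, v, alpha = 2.0, 0.01, 0.05
D = v * alpha
t = np.linspace(1.0, 800.0, 800)   # days, matching the 800-day record length
c = breakthrough(t, x, v, D)

t_peak = t[np.argmax(c)]   # arrives slightly before the advective time x / v
```

Fitting such a curve to an observed breakthrough yields the mean water flux and dispersivity, which is essentially what the lumped-parameter approaches in the study estimate.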

  1. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    Science.gov (United States)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches to landscapes where high quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context, and ultimately to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished data set from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate and erosion rate. Predictive models for each site are built in ArcGIS using field condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions from extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in production of hillslope

  2. A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations

    Science.gov (United States)

    Parrott, M. H.; Hinze, W. J.; Braile, L. W.; Vonfrese, R. R. B.

    1985-01-01

    Flat-Earth modeling is a desirable alternative to the complex spherical-Earth modeling process. These methods were compared using 2 1/2-dimensional flat-Earth and spherical modeling to compute gravity and scalar magnetic anomalies along profiles perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Comparison was achieved with percent error computations ((spherical - flat)/spherical) at critical anomaly points. At the peak gravity anomaly value, errors are less than + or - 5% for all prisms. At 1/2 and 1/10 of the peak, errors are generally less than 10% and 40% respectively, increasing to these values with longer and wider prisms at higher altitudes. For magnetics, the errors at critical anomaly points are less than -10% for all prisms, attaining these magnitudes with longer and wider prisms at higher altitudes. In general, in both gravity and magnetic modeling, errors increase greatly for prisms wider than 500 km, although gravity modeling is more sensitive than magnetic modeling to spherical-Earth effects. Preliminary modeling of both satellite gravity and magnetic anomalies using flat-Earth assumptions is justified considering the errors caused by uncertainties in isolating anomalies.

  3. The Comparative Study of Collaborative Learning and SDLC Model to develop IT Group Projects

    OpenAIRE

    Sorapak Pukdesree

    2017-01-01

    The main objectives of this research were to compare the attitudes of learners between applying the SDLC model with collaborative learning and the typical SDLC model, and to develop electronic courseware as group projects. The research was a quasi-experimental research. The population of the research was students who took the Computer Organization and Architecture course in the academic year 2015. There were 38 students who participated in the research. The participants were divided voluntarily into two g...

  4. The Comparative Study of Collaborative Learning and SDLC Model to develop IT Group Projects

    Directory of Open Access Journals (Sweden)

    Sorapak Pukdesree

    2017-11-01

    Full Text Available The main objectives of this research were to compare the attitudes of learners between applying the SDLC model with collaborative learning and the typical SDLC model, and to develop electronic courseware as group projects. The research was a quasi-experimental research. The population of the research was students who took the Computer Organization and Architecture course in the academic year 2015. There were 38 students who participated in the research. The participants were divided voluntarily into two groups: an experimental group of 28 students using the SDLC model with collaborative learning and a control group of 10 students using the typical SDLC model. The research instruments were an attitude questionnaire, a semi-structured interview and a self-assessment questionnaire. The collected data were analysed by arithmetic mean, standard deviation and independent-sample t-test. The results of the questionnaire revealed that the attitudes of the learners differed between the experimental group and the control group, with the difference in mean scores statistically significant at the 0.05 level. The results of the interviews revealed that most learners shared the opinion that collaborative learning was very useful, rating their attitudes at the highest level compared with the previous methodology. Learners also left feedback suggesting that collaborative learning should be applied to other courses.

  5. A comparative study of the proposed models for the components of the national health information system.

    Science.gov (United States)

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using the National Health Information System, one can improve the quality of health data, information and knowledge used to support decision making at all levels and in all areas of the health sector. Since full identification of the components of this system seems necessary for better planning and for managing the factors that influence performance, in this study different perspectives on the components of this system are explored comparatively. This is a descriptive, comparative study. The study material includes printed and electronic documents describing components of the national health information system in three parts: input, process and output. In this context, searches were conducted using library resources and the internet, and the data were analysed using comparative tables and qualitative methods. The findings showed that there are three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's model (2009). All three models require, as input (resources and structure), components of management and leadership, planning and program design, staffing, and software and hardware facilities and equipment. In the process section, all three models point to actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the other two models consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models have had a brief discussion about the

  6. Case management: a randomized controlled study comparing a neighborhood team and a centralized individual model.

    Science.gov (United States)

    Eggert, G M; Zimmer, J G; Hall, W J; Friedman, B

    1991-10-01

    This randomized controlled study compared two types of case management for skilled nursing level patients living at home: the centralized individual model and the neighborhood team model. The team model differed from the individual model in that team case managers performed client assessments, care planning, some direct services, and reassessments; they also had much smaller caseloads and were assigned a specific catchment area. While patients in both groups incurred very high estimated health services costs, the average annual cost during 1983-85 for team cases was 13.6 percent less than that of individual model cases. While the team cases were 18.3 percent less expensive among "old" patients (patients who entered the study from the existing ACCESS caseload), they were only 2.7 percent less costly among "new" cases. The lower costs were due to reductions in hospital days and home care. Team cases averaged 26 percent fewer hospital days per year and 17 percent fewer home health aide hours. Nursing home use was 48 percent higher for the team group than for the individual model group. Mortality was almost exactly the same for both groups during the first year (about 30 percent), but was lower for team patients during the second year (11 percent as compared to 16 percent). Probable mechanisms for the observed results are discussed.

  7. sbml-diff: A Tool for Visually Comparing SBML Models in Synthetic Biology.

    Science.gov (United States)

    Scott-Brown, James; Papachristodoulou, Antonis

    2017-07-21

    We present sbml-diff, a tool that is able to read a model of a biochemical reaction network in SBML format and produce a range of diagrams showing different levels of detail. Each diagram type can be used to visualize a single model or to visually compare two or more models. The default view depicts species as ellipses, reactions as rectangles, rules as parallelograms, and events as diamonds. A cartoon view replaces the symbols used for reactions on the basis of the associated Systems Biology Ontology terms. An abstract view represents species as ellipses and draws edges between them to indicate whether a species increases or decreases the production or degradation of another species. sbml-diff is freely licensed under the three-clause BSD license and can be downloaded from https://github.com/jamesscottbrown/sbml-diff and used as a python package called from other software, as a free-standing command-line application, or online using the form at http://sysos.eng.ox.ac.uk/tebio/upload.

  8. Identification of material parameters for plasticity models: A comparative study on the finite element model updating and the virtual fields method

    Science.gov (United States)

    Martins, J. M. P.; Thuillier, S.; Andrade-Campos, A.

    2018-05-01

    The identification of material parameters, for a given constitutive model, can be seen as the first step before any practical application. In the last years, the field of material parameters identification received an important boost with the development of full-field measurement techniques, such as Digital Image Correlation. These techniques enable the use of heterogeneous displacement/strain fields, which contain more information than the classical homogeneous tests. Consequently, different techniques have been developed to extract material parameters from full-field measurements. In this study, two of these techniques are addressed, the Finite Element Model Updating (FEMU) and the Virtual Fields Method (VFM). The main idea behind FEMU is to update the parameters of a constitutive model implemented in a finite element model until both numerical and experimental results match, whereas VFM makes use of the Principle of Virtual Work and does not require any finite element simulation. Though both techniques proved their feasibility in linear and non-linear constitutive models, it is rather difficult to rank their robustness in plasticity. The purpose of this work is to perform a comparative study in the case of elasto-plastic models. Details concerning the implementation of each strategy are presented. Moreover, a dedicated code for VFM within a large strain framework is developed. The reconstruction of the stress field is performed through a user subroutine. A heterogeneous tensile test is considered to compare FEMU and VFM strategies.
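
The FEMU idea, adjusting constitutive parameters until the model response matches the measurements, can be illustrated without a finite element solver. The sketch below fits a Hollomon hardening law to synthetic data by brute-force least squares; the law, the parameter values and the search strategy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hollomon(strain, K, n):
    """Hollomon hardening law, sigma = K * strain**n (illustrative model)."""
    return K * strain ** n

# Synthetic "experiment": stresses generated with known parameters plus noise.
rng = np.random.default_rng(42)
strain = np.linspace(0.01, 0.2, 50)
K_true, n_true = 500.0, 0.25
sigma_meas = hollomon(strain, K_true, n_true) + rng.normal(0.0, 2.0, strain.size)

# FEMU-style update by brute-force search: keep the (K, n) pair whose
# predicted response minimizes the least-squares gap to the measurements.
# (A real FEMU loop reruns a finite element model at each iterate instead.)
best = None
for K in np.linspace(400.0, 600.0, 81):
    for n in np.linspace(0.1, 0.4, 61):
        cost = np.sum((hollomon(strain, K, n) - sigma_meas) ** 2)
        if best is None or cost < best[0]:
            best = (cost, K, n)

cost_min, K_hat, n_hat = best
```

VFM replaces this simulation-in-the-loop misfit with residuals of the principle of virtual work, which is why it needs no finite element runs; the identified parameters should nonetheless agree.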

  9. A COMPARATIVE STUDY OF FORECASTING MODELS FOR TREND AND SEASONAL TIME SERIES DOES COMPLEX MODEL ALWAYS YIELD BETTER FORECAST THAN SIMPLE MODELS

    Directory of Open Access Journals (Sweden)

    Suhartono Suhartono

    2005-01-01

    Full Text Available Many business and economic time series are non-stationary time series that contain trend and seasonal variations. Seasonality is a periodic and recurrent pattern caused by factors such as weather, holidays, or repeating promotions. A stochastic trend is often accompanied by seasonal variations and can have a significant impact on various forecasting methods. In this paper, we investigate and compare some forecasting methods for modeling time series with both trend and seasonal patterns. These methods are Winter's, Decomposition, Time Series Regression, ARIMA and Neural Network models. In this empirical research, we study the effectiveness of the forecasting performance, particularly to answer whether a complex method always gives a better forecast than a simpler method. We use real data, namely airline passenger data. The result shows that the more complex model does not always yield a better result than a simpler one. Additionally, we see scope for further research, especially the use of hybrid models that combine several forecasting methods to obtain better forecasts, for example a combination of decomposition (as data preprocessing) and a neural network model.
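
The comparison protocol, fitting competing models of differing complexity on a training span and ranking them by out-of-sample error, can be sketched on synthetic airline-like data (a hedged illustration; the paper's models and data differ, and on real series the ranking can flip, which is its point):

```python
import numpy as np

rng = np.random.default_rng(0)
n, season = 120, 12   # ten "years" of monthly data
t = np.arange(n)
# Synthetic airline-like series: linear trend + seasonal cycle + noise
y = 100 + 1.5 * t + 20 * np.sin(2 * np.pi * t / season) + rng.normal(0, 5, n)

train, test = y[:-season], y[-season:]
t_train, t_test = t[:-season], t[-season:]

# Simple model: seasonal naive forecast (repeat the last observed year)
naive_fc = train[-season:]

# More complex model: linear trend + monthly dummies via least squares
def design(tt):
    X = np.zeros((tt.size, 1 + season))
    X[:, 0] = tt                              # trend column
    X[np.arange(tt.size), 1 + (tt % season)] = 1.0   # month indicators
    return X

beta, *_ = np.linalg.lstsq(design(t_train), train, rcond=None)
reg_fc = design(t_test) @ beta

rmse = lambda f: float(np.sqrt(np.mean((f - test) ** 2)))
rmse_naive, rmse_reg = rmse(naive_fc), rmse(reg_fc)
```

Here the structured model wins because the synthetic trend is strong and exactly linear; real series violate such assumptions, which is how simpler methods can come out ahead.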

  10. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models.

    Science.gov (United States)

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements of the precipitation input change. When using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present in the system measurements, the data assimilation scheme might already have updated the model to include the impact of the particular rain cell by the time the rain data are forced upon the model, which will therefore end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input with that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
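
A minimal version of this experiment can be sketched with a time-area model driven by synthetic rain, comparing the runoff error produced by a pure time shift with that produced by a constant volume bias (all numbers below are illustrative, not the paper's setup):

```python
import numpy as np

def time_area_runoff(rain, toc):
    """Simple time-area model: runoff is the rain intensity convolved with a
    uniform unit hydrograph whose length equals the time of concentration."""
    uh = np.ones(toc) / toc
    return np.convolve(rain, uh)[: rain.size]

rng = np.random.default_rng(1)
rain = np.maximum(rng.normal(0.0, 1.0, 300), 0.0)   # synthetic rain series
toc = 30   # time of concentration, in time steps

q_true = time_area_runoff(rain, toc)
# Circular shift as a crude stand-in for rain-cell travel time:
q_displaced = time_area_runoff(np.roll(rain, 5), toc)
q_biased = time_area_runoff(rain * 1.6, toc)        # +60% volume bias

rmse = lambda q: float(np.sqrt(np.mean((q - q_true) ** 2)))
err_displaced, err_biased = rmse(q_displaced), rmse(q_biased)
```

Because the unit hydrograph smooths the input, a small time shift perturbs the runoff far less than a large volume bias; how the two trade off at a given forecast horizon is the paper's question.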

  11. Using the Landlab toolkit to evaluate and compare alternative geomorphic and hydrologic model formulations

    Science.gov (United States)

    Tucker, G. E.; Adams, J. M.; Doty, S. G.; Gasparini, N. M.; Hill, M. C.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.

    2016-12-01

    Developing a better understanding of catchment hydrology and geomorphology ideally involves quantitative hypothesis testing. Often one seeks to identify the simplest mathematical and/or computational model that accounts for the essential dynamics in the system of interest. Development of alternative hypotheses involves testing and comparing alternative formulations, but the process of comparison and evaluation is made challenging by the rigid nature of many computational models, which are often built around a single assumed set of equations. Here we review a software framework for two-dimensional computational modeling that facilitates the creation, testing, and comparison of surface-dynamics models. Landlab is essentially a Python-language software library. Its gridding module allows for easy generation of a structured (raster, hex) or unstructured (Voronoi-Delaunay) mesh, with the capability to attach data arrays to particular types of element. Landlab includes functions that implement common numerical operations, such as gradient calculation and summation of fluxes within grid cells. Landlab also includes a collection of process components, which are encapsulated pieces of software that implement a numerical calculation of a particular process. Examples include downslope flow routing over topography, shallow-water hydrodynamics, stream erosion, and sediment transport on hillslopes. Individual components share a common grid and data arrays, and they can be coupled through the use of a simple Python script. We illustrate Landlab's capabilities with a case study of Holocene landscape development in the northeastern US, in which we seek to identify a collection of model components that can account for the formation of a series of incised canyons that have developed since the Laurentide ice sheet last retreated. We compare sets of model ingredients related to (1) catchment hydrologic response, (2) hillslope evolution, and (3) stream channel and gully incision.
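
The gradient and flux-summation operations mentioned above can be illustrated in one dimension with plain NumPy. This is a hedged sketch of the staggered node/link pattern that Landlab components share, not the actual Landlab API (which operates on 2-D grids); it runs linear hillslope diffusion on a scarp-like profile.

```python
import numpy as np

# Values live at nodes; gradients and fluxes live on the links between them.
dx, dt, D = 10.0, 50.0, 0.01   # node spacing (m), step (yr), diffusivity (m2/yr)
x = np.arange(0.0, 500.0, dx)
z = np.where(x < 250.0, 20.0, 0.0)   # initial scarp-like elevation profile

for _ in range(500):
    grad = np.diff(z) / dx          # gradient at links
    flux = -D * grad                # linear diffusion: q = -D dz/dx
    dzdt = -np.diff(flux) / dx      # flux divergence back at interior nodes
    z[1:-1] += dzdt * dt            # update; end nodes held fixed (boundaries)

# The scarp relaxes diffusively, so the steepest slope decreases over time.
max_slope = float(np.max(np.abs(np.diff(z))) / dx)
```

Swapping the flux law (e.g. for a nonlinear transport rule) while keeping the grid, boundary handling, and update loop fixed is exactly the kind of component-wise hypothesis comparison the framework is built for.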

  12. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

    Science.gov (United States)

    Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

    2015-01-01

    The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses using a multispecies dataset was compared. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum compared to those based on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate quantitatively, rapidly, cheaply and easily the BMP. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable compared to those of linear models. The presentation form (green-dried, silage-dried and silage-wet form) of biomasses to the NIR spectrometer did not influence the performances of the NIR prediction models. The accuracy of the BMP method should be improved to enhance further the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
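
The linear-versus-non-linear comparison can be illustrated with a toy calibration: when the underlying property-spectrum relationship is curved, a non-linear calibration predicts held-out samples better. Everything below is synthetic and illustrative; the study's NIR models operate on full spectra, not a single feature.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic calibration set: one spectral feature x related nonlinearly
# to the biochemical methane potential (BMP); units are arbitrary.
x = rng.uniform(0.0, 1.0, 200)
bmp = 250.0 + 120.0 * x - 180.0 * (x - 0.5) ** 2 + rng.normal(0.0, 5.0, x.size)

x_cal, bmp_cal = x[:150], bmp[:150]     # calibration samples
x_val, bmp_val = x[150:], bmp[150:]     # held-out validation samples

rmsep = lambda pred: float(np.sqrt(np.mean((pred - bmp_val) ** 2)))

# Linear calibration vs. a mildly non-linear (quadratic) calibration
lin = np.polynomial.Polynomial.fit(x_cal, bmp_cal, deg=1)
quad = np.polynomial.Polynomial.fit(x_cal, bmp_cal, deg=2)

rmsep_lin, rmsep_quad = rmsep(lin(x_val)), rmsep(quad(x_val))
```

The linear model's validation error is dominated by the unmodeled curvature, while the non-linear model's error approaches the noise floor; this mirrors the study's finding that non-linear NIR models predict BMP more reliably.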

  13. Comparing the Goodness of Different Statistical Criteria for Evaluating the Soil Water Infiltration Models

    Directory of Open Access Journals (Sweden)

    S. Mirzaee

    2016-02-01

    Full Text Available Introduction: The infiltration process is one of the most important components of the hydrologic cycle. Quantifying the infiltration of water into soil is of great importance in watershed management. Prediction of flooding, erosion and pollutant transport all depend on the rate of runoff, which is directly affected by the rate of infiltration. Quantification of infiltration is also necessary to determine the availability of water for crop growth and to estimate the amount of additional water needed for irrigation. Thus, an accurate model is required to estimate the infiltration of water into soil. The ability of physical and empirical models to simulate soil processes is commonly measured through comparisons of simulated and observed values. For these reasons, a large variety of indices have been proposed and used over the years in comparisons of infiltration models. Among the proposed indices, some are absolute criteria such as the widely used root mean square error (RMSE), while others are relative (i.e. normalized) criteria such as the Nash and Sutcliffe (1970) efficiency criterion (NSE). Selecting and using appropriate statistical criteria to evaluate and interpret the results of infiltration models is essential, because each criterion focuses on specific types of errors. Also, descriptions of various goodness-of-fit indices or indicators, including their advantages and shortcomings, and rigorous discussion of the suitability of each index are very important. The objective of this study is to compare the goodness of different statistical criteria for evaluating models of the infiltration of water into soil. Comparison techniques were considered to define the best models: coefficient of determination (R2), root mean square error (RMSE), efficiency criteria (NSEI) and modified forms (such as NSEjI, NSESQRTI, NSElnI and NSEiI). Comparatively little work has been carried out on the meaning and
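
The basic criteria named above are straightforward to implement. A sketch follows, applied to made-up cumulative infiltration data for illustration (the modified NSE variants are specific to the study and are not reproduced here):

```python
import numpy as np

def r2(obs, sim):
    """Coefficient of determination of the obs-sim scatter (squared Pearson r)."""
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)

def rmse(obs, sim):
    """Root mean square error: an absolute criterion, in the data's own units."""
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model does
    no better than predicting the observed mean (a relative criterion)."""
    return float(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2))

# Cumulative infiltration (mm): observed vs. simulated (illustrative numbers)
obs = np.array([2.0, 5.0, 9.0, 14.0, 18.0, 21.0, 24.0])
sim = np.array([2.5, 5.5, 8.5, 13.0, 18.5, 22.0, 23.5])

scores = {"R2": r2(obs, sim), "RMSE": rmse(obs, sim), "NSE": nse(obs, sim)}
```

The three criteria penalize different things: RMSE carries units and weights large errors, R2 measures correlation but ignores bias, and NSE normalizes the squared error by the observed variance, which is why a model should be judged on several of them together.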

  14. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    Energy Technology Data Exchange (ETDEWEB)

    Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.; Choux, Martin; Hovland, Geir [Department of Engineering Sciences, University of Agder, PO Box 509, N-4898 Grimstad (Norway)

    2016-06-08

    Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology is not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and accuracy of results are evaluated to assess which modeling strategy is the most suitable given its eventual application.
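
The analytical side of such a comparison can be as small as a single-link arm model integrated in time; a multibody tool should reproduce the same trajectory, which is the benchmarking idea described above. All parameters below are illustrative assumptions, not values from the paper.

```python
import math

# Analytical single-link arm: theta measured from the hanging position, so
#   I * theta_dd = tau - m*g*Lc*sin(theta) - b*omega
# A multibody tool would assemble these equations from bodies and joints.
m, L = 200.0, 3.0              # link mass (kg) and length (m)
Lc, I = L / 2, m * L ** 2 / 3  # centre of mass offset and inertia about joint
b, g = 500.0, 9.81             # joint damping (N m s/rad), gravity (m/s^2)
tau = 0.5 * m * g * Lc         # constant torque -> equilibrium at asin(0.5)

theta, omega, dt = 0.0, 0.0, 0.001
for _ in range(5000):          # 5 s of motion, explicit Euler integration
    alpha = (tau - m * g * Lc * math.sin(theta) - b * omega) / I
    omega += alpha * dt
    theta += omega * dt
# theta settles toward asin(0.5) ~= 0.524 rad, the torque/gravity balance.
```

Comparing this closed-form trajectory against the multibody solver's output for the same parameters is one concrete form of the cross-verification the paper advocates.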

  15. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Directory of Open Access Journals (Sweden)

    Daminov Ildar

    2016-01-01

    Full Text Available This paper presents a comparison of smart meter deployment business models to determine the most suitable option for providing smart meter deployment. The authors consider three main business models of companies: the distribution grid company, the energy supplier (energosbyt) and the metering company. The goal of the article is to compare the business models of power companies for a massive smart metering rollout in the power system of the Russian Federation.

  16. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Science.gov (United States)

    Daminov, Ildar; Tarasova, Ekaterina; Andreeva, Tatyana; Avazov, Artur

    2016-02-01

    This paper presents a comparison of smart meter deployment business models to determine the most suitable option for providing smart meter deployment. The authors consider three main business models of companies: the distribution grid company, the energy supplier (energosbyt) and the metering company. The goal of the article is to compare the business models of power companies for a massive smart metering rollout in the power system of the Russian Federation.

  17. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  18. EPA ALPHA Modeling of a Conventional Mid-Size Car with CVT and Comparable Powertrain Technologies (SAE 2016-01-1141)

    Science.gov (United States)

    This paper presents the testing and ALPHA modeling of a CVT-equipped 2013 Nissan Altima 2.5S using comparable powertrain technology inputs in the effort to model the current and future U.S. light-duty vehicle fleet approximated using components with comparable levels of performan...

  19. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
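The matrix algebra the macros implement can be sketched outside SPSS. For an OLS fit, the variance of the difference between two fitted values is (x1 - x2)' Cov(b) (x1 - x2). A minimal NumPy illustration of that identity (all data and covariate values below are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical OLS fit of y on x and x^2 (a polynomial term no single
# coefficient summarizes)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2 + 0.5 * x + 0.1 * x**2 + rng.normal(0, 1, 50)
X = np.column_stack([np.ones_like(x), x, x**2])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
n, p = X.shape
sigma2 = resid @ resid / (n - p)
cov_beta = sigma2 * np.linalg.inv(X.T @ X)    # Cov(beta_hat)

# Compare the fitted values at x = 2 and x = 7
x1 = np.array([1.0, 2.0, 4.0])                # design row for x = 2
x2 = np.array([1.0, 7.0, 49.0])               # design row for x = 7
d = x1 - x2
diff = d @ beta                               # difference of fitted values
se = np.sqrt(d @ cov_beta @ d)                # its standard error
ci = (diff - 1.96 * se, diff + 1.96 * se)     # normal-approximation 95% CI
print(diff, se, ci)
```

The macros additionally produce the exact t-based test; the 1.96 multiplier here is the usual large-sample shortcut.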

  20. Culture-related service expectations: a comparative study using the Kano model.

    Science.gov (United States)

    Hejaili, Fayez F; Assad, Lina; Shaheen, Faissal A; Moussa, Dujana H; Karkar, Ayman; AlRukhaimi, Mona; Barhamein, Majdah; Al Suwida, Abdulkareem; Al Alhejaili, Faris F; Al Harbi, Ali S; Al Homrany, Mohamed; Attar, Bisher; Al-Sayyari, Abdulla A

    2009-01-01

    To compare service expectations between Arab and Austrian patients, we used a Kano model-based questionnaire with 20 service attributes of relevance to the dialysis patient. We analyzed 530, 172, 60, and 68 responses from Saudi, Austrian, Syrian, and UAE patients, respectively. We compared the customer satisfaction coefficient and the frequencies of response categories ("must be," "attractive," "one-dimensional," and "indifferent") for each of the 20 service attributes and in each of the national groups of patients. We also investigated whether any differences seen were related to sex, age, literacy rate, or duration on dialysis. We observed higher satisfaction coefficients and more "one-dimensional" responses among Arab patients, and higher dissatisfaction coefficients and more "must be" and "attractive" responses among Austrian patients. These differences were not related to age or duration on dialysis but were related to literacy rate. We speculate that these discrepancies between Austrian and Arab patients might be related to underdeveloped sophistication in market competitive forces and to cultural influences.

  1. Antibiotic Resistances in Livestock: A Comparative Approach to Identify an Appropriate Regression Model for Count Data

    Directory of Open Access Journals (Sweden)

    Anke Hüls

    2017-05-01

    Full Text Available Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcome. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance and structural zero measurements on the resistance outcome as well as on the exposure side are challenges for the epidemiological model building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model building process by comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence and associated factors for the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistances in livestock. After model comparison based on evaluation of model predictions, Akaike information criterion, and Pearson residuals, here the hurdle model was judged to be the most appropriate.
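As a small illustration of the kind of AIC-based count-model comparison the paper describes, the sketch below fits Poisson and negative binomial distributions to hypothetical per-farm resistance counts (the data are invented, and a method-of-moments negative binomial fit stands in for full maximum likelihood):

```python
import numpy as np
from scipy import stats

# Hypothetical counts of resistant samples per farm (overdispersed, many zeros)
counts = np.array([0, 0, 0, 1, 0, 2, 0, 0, 5, 1, 0, 3, 0, 8, 0, 1, 2, 0, 0, 4])

# Poisson fit: the MLE of the rate is the sample mean
lam = counts.mean()
ll_pois = stats.poisson.logpmf(counts, lam).sum()

# Negative binomial fit via method of moments (requires var > mean)
mean, var = counts.mean(), counts.var(ddof=1)
p = mean / var
r = mean * p / (1 - p)            # scipy's nbinom(n=r, p) has this mean/variance
ll_nb = stats.nbinom.logpmf(counts, r, p).sum()

# AIC = 2k - 2 log L; lower is better (Poisson: 1 parameter, NB: 2)
aic_pois = 2 * 1 - 2 * ll_pois
aic_nb = 2 * 2 - 2 * ll_nb
print(aic_pois, aic_nb)
```

With overdispersed data like this, the negative binomial's extra parameter more than pays for itself in AIC; zero-inflated and hurdle fits would add a separate model for the structural zeros.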

  2. Comparative metabolomics of drought acclimation in model and forage legumes.

    Science.gov (United States)

    Sanchez, Diego H; Schwabe, Franziska; Erban, Alexander; Udvardi, Michael K; Kopka, Joachim

    2012-01-01

    Water limitation has become a major concern for agriculture. Such constraints reinforce the urgent need to understand mechanisms by which plants cope with water deprivation. We used a non-targeted metabolomic approach to explore plastic systems responses to non-lethal drought in model and forage legume species of the Lotus genus. In the model legume Lotus japonicus, increased water stress caused gradual increases of most of the soluble small molecules profiled, reflecting a global and progressive reprogramming of metabolic pathways. The comparative metabolomic approach between Lotus species revealed conserved and unique metabolic responses to drought stress. Importantly, only a few drought-responsive metabolites were conserved among all species. We thus highlight a potential impediment to translational approaches that aim to engineer traits linked to the accumulation of compatible solutes. Finally, a broad comparison of the metabolic changes elicited by drought and salt acclimation revealed partial conservation of these metabolic stress responses within each of the Lotus species, but only a few salt- and drought-responsive metabolites were shared between all. The implications of these results are discussed with regard to current insights into legume water stress physiology. © 2011 Blackwell Publishing Ltd.

  3. Overview, comparative assessment and recommendations of forecasting models for short-term water demand prediction

    CSIR Research Space (South Africa)

    Anele, AO

    2017-11-01

    Full Text Available -term water demand (STWD) forecasts. In view of this, an overview of forecasting methods for STWD prediction is presented. Based on that, a comparative assessment of the performance of alternative forecasting models from the different methods is studied. Times...

  4. Comparative study of chemo-electro-mechanical transport models for an electrically stimulated hydrogel

    International Nuclear Information System (INIS)

    Elshaer, S E; Moussa, W A

    2014-01-01

    The main objective of this work is to introduce a new expression for the hydrogel's hydration for use within the Poisson-Nernst-Planck chemo-electro-mechanical (PNP CEM) transport models. This new contribution to the models supports large deformation by considering the higher-order terms in the Green-Lagrangian strain tensor. A detailed discussion of the CEM transport models using Poisson-Nernst-Planck (PNP) and Poisson logarithmic Nernst-Planck (PLNP) equations for chemically and electrically stimulated hydrogels is presented. The assumptions made to simplify both CEM transport models for an applied electric field on the order of 0.833 kV m−1 and a highly diluted electrolyte solution (97% water) are explained. The PNP CEM model has been verified accurately against experimental and numerical results. In addition, different definitions for normalizing the parameters are used to derive the dimensionless forms of both the PNP and PLNP CEM models. Four models, PNP CEM, PLNP CEM, dimensionless PNP CEM and dimensionless PLNP CEM transport models, were employed on an axially symmetric cylindrical hydrogel problem with an aspect ratio (diameter to thickness) of 175:3. The displacement and osmotic pressure obtained for the four models are compared against the variation of the number of elements for finite element analysis, simulation duration and solution rate when using the direct numerical solver.

  5. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2012-03-01

    Full Text Available [Garbled fragment of the paper's comparison table of offensive cyber operation models; recoverable rows: Damballa, 2008 (crime; case studies; lone actor; no; no), Owens et al., 2009 (warfare; literature; group; yes; yes), Croom, 2010 (crime, APT; case studies; group; no; no), Dreijer, 2011 (warfare; previous models and case studies; group; yes; no), Van...] ...be needed by a geographically or functionally distributed group of attackers. While some of the models describe the installation of a backdoor or an advanced persistent threat (APT), none of them describe the behaviour involved in returning to a...

  6. The Development of Working Memory: Further Note on the Comparability of Two Models of Working Memory.

    Science.gov (United States)

    de Ribaupierre, Anik; Bailleux, Christine

    2000-01-01

    Summarizes similarities and differences between the working memory models of Pascual-Leone and Baddeley. Debates whether each model makes a specific contribution to explanation of Kemps, De Rammelaere, and Desmet's results. Argues for necessity of theoretical task analyses. Compares a study similar to that of Kemps et al. in which different…

  7. The new ICRP respiratory model for radiation protection (ICRP 66) : applications and comparative evaluations

    International Nuclear Information System (INIS)

    Castellani, C.M.; Luciani, A.

    1996-02-01

    The aim of this report is to present the new ICRP Respiratory Tract Model for radiological protection. The model takes anatomical and physiological characteristics into account, giving reference values for children aged 3 months and 1, 5, 10 and 15 years, and for adults; it also takes into account aerosol and gas characteristics. After a general description of the model structure, the deposition, clearance and dosimetric models are presented. To compare the new model with the previous one (ICRP 30), dose coefficients (committed effective dose per unit intake) for inhalation of radionuclides by workers are calculated for aerosol granulometries with activity median aerodynamic diameters of 1 and 5 μm, the reference values of the respective publications. Dose coefficients and annual limits on intake corresponding to the respective dose limits (50 and 20 mSv for ICRP 26 and 60, respectively) for workers and for members of the public in the case of dispersion of fission product aerosols are finally calculated.

  8. Comparing risk of failure models in water supply networks using ROC curves

    International Nuclear Information System (INIS)

    Debon, A.; Carrion, A.; Cabrera, E.; Solano, H.

    2010-01-01

    The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures could have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes. The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study has identified which network characteristics affect the risk of failure and which models better fit data to predict service breakdown. The comparison using the receiver operating characteristics (ROC) graph leads us to the conclusion that the best model is a generalized linear model. Also, we propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.
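The ROC comparison used in the paper can be sketched without any statistics package, because the area under the ROC curve equals the Mann-Whitney rank statistic. A minimal version (assuming no tied risk scores across the failed/intact classes; all data below are hypothetical, not the water company's):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    Assumes no ties between positive and negative scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical pipe records: 1 = failed; scores = predicted risk from two models
failed     = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 0])
glm_risk   = np.array([.1, .2, .8, .3, .7, .9, .2, .6, .4, .1])
naive_risk = np.array([.55, .45, .62, .5, .41, .7, .66, .35, .52, .44])
print(auc(failed, glm_risk), auc(failed, naive_risk))
```

An AUC of 0.5 is chance-level ranking of failures above non-failures; the model with the higher AUC dominates in the ROC graph, which is the comparison the paper formalizes before choosing a decision rule.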

  9. Comparing risk of failure models in water supply networks using ROC curves

    Energy Technology Data Exchange (ETDEWEB)

    Debon, A., E-mail: andeau@eio.upv.e [Centro de Gestion de la Calidad y del Cambio, Dpt. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Carrion, A. [Centro de Gestion de la Calidad y del Cambio, Dpt. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Cabrera, E. [Dpto. De Ingenieria Hidraulica Y Medio Ambiente, Instituto Tecnologico del Agua, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Solano, H. [Universidad Diego Portales, Santiago (Chile)

    2010-01-15

    The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures could have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes. The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study has identified which network characteristics affect the risk of failure and which models better fit data to predict service breakdown. The comparison using the receiver operating characteristics (ROC) graph leads us to the conclusion that the best model is a generalized linear model. Also, we propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.

  10. Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and ROMS

    Science.gov (United States)

    Haas, Kevin A.; Warner, John C.

    2009-01-01

    Predictions of nearshore and surf zone processes are important for determining coastal circulation, impacts of storms, navigation, and recreational safety. Numerical modeling of these systems facilitates advancements in our understanding of coastal changes and can provide predictive capabilities for resource managers. There exist many nearshore coastal circulation models; however, they are mostly limited to, or typically only applied as, depth-integrated models. SHORECIRC is an established surf zone circulation model that is quasi-3D: it allows for the effect of variability in the vertical structure of the currents while maintaining the computational advantage of a 2DH model. Here we compare SHORECIRC to ROMS, a fully 3D ocean circulation model that now includes a three-dimensional formulation for wave-driven flows. We compare the models with three different test applications: (i) spectral waves approaching a plane beach with an oblique angle of incidence; (ii) monochromatic waves driving longshore currents in a laboratory basin; and (iii) monochromatic waves on a barred beach with rip channels in a laboratory basin. Results identify that the models are very similar for the depth-integrated flows and qualitatively consistent for the vertically varying components. The differences are primarily the result of the vertically varying radiation stress utilized by ROMS and the use of long-wave theory in the radiation stress formulation of the vertically varying momentum balance in SHORECIRC. The quasi-3D model is faster; however, the fully 3D model is applicable over a broader range of processes and temporal and spatial scales.

  11. Comparing models of the periodic variations in spin-down and beamwidth for PSR B1828-11

    Science.gov (United States)

    Ashton, G.; Jones, D. I.; Prix, R.

    2016-05-01

    We build a framework using tools from Bayesian data analysis to evaluate models explaining the periodic variations in spin-down and beamwidth of PSR B1828-11. The available data consist of the time-averaged spin-down rate, which displays a distinctive double-peaked modulation, and measurements of the beamwidth. Two concepts exist in the literature that are capable of explaining these variations; we formulate predictive models from these and quantitatively compare them. The first concept is phenomenological and stipulates that the magnetosphere undergoes periodic switching between two metastable states, as first suggested by Lyne et al. The second concept, precession, was first considered as a candidate for the modulation of B1828-11 by Stairs et al. We quantitatively compare models built from these concepts using a Bayesian odds ratio. Because the phenomenological switching model was itself informed by these data in the first place, it is difficult to specify appropriate parameter-space priors that can be trusted for an unbiased model comparison. Therefore, we first perform a parameter estimation using the spin-down data, and then use the resulting posterior distributions as priors for model comparison on the beamwidth data. We find that a precession model with a simple circular Gaussian beam geometry fails to describe the data appropriately, while allowing for a more general beam geometry provides a good fit to the data. The resulting odds between the precession model (with a general beam geometry) and the switching model are estimated as 10^(2.7±0.5) in favour of the precession model.

  12. Comparing in Cylinder Pressure Modelling of a DI Diesel Engine Fuelled on Alternative Fuel Using Two Tabulated Chemistry Approaches.

    Science.gov (United States)

    Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi

    2014-01-01

    The present work presents a comparative simulation of a diesel engine fuelled on diesel and biodiesel fuel. Two models based on tabulated chemistry were implemented for the simulation, and the results were compared with experimental data obtained from a single-cylinder diesel engine. The first is a single-zone model based on the Krieger and Bormann combustion model, while the second is a two-zone model based on the Olikara and Bormann combustion model. It was shown that both models can predict well the engine's in-cylinder pressure as well as its overall performance. The second model showed better accuracy than the first, while the first was easier to implement and faster to compute. It was found that the first method was better suited for real-time engine control and monitoring, while the second was better suited for engine design and emission prediction.

  13. Comparative analysis between Hec-RAS models and IBER in the hydraulic assessment of bridges

    OpenAIRE

    Rincón, Jean; Pérez, María; Delfín, Guillermo; Freitez, Carlos; Martínez, Fabiana

    2017-01-01

    This work aims to perform a comparative analysis between the Hec-RAS and IBER models in the hydraulic evaluation of rivers with structures such as bridges. The case of application was the La Guardia creek, located on the road that connects the cities of Barquisimeto and Quíbor, Venezuela. The first phase of the study consisted of comparing the models from the conceptual point of view and in terms of their handling. The second phase focused on the case study, and the comparison of ...

  14. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Directory of Open Access Journals (Sweden)

    Claudimar Pereira da Veiga

    2012-08-01

    Full Text Available The importance of demand forecasting as a management tool is a well-documented issue. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately captures the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of demand forecasting errors on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, all models tested presented good results (much better than the current forecasting method used), with mean absolute percent error (MAPE) around 10%. The total financial impact for the company was 6.05% of annual sales.
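The accuracy metric reported in this record, MAPE, is straightforward to compute; the sketch below compares two forecast series against actual demand (all numbers hypothetical, not the company's data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percent error; assumes no zero actual values."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical monthly demand (units) vs. two forecasting methods
actual  = [120, 135, 150, 160, 155, 170]
linear  = [112, 140, 146, 171, 149, 161]   # e.g. a fitted linear model
current = [100, 120, 170, 140, 180, 150]   # e.g. the incumbent method
print(mape(actual, linear), mape(actual, current))
```

Because MAPE weights each period by its actual demand, it understates the cost of errors in high-volume months; translating MAPE into a financial impact, as the paper does, requires linking each unit of error to its stockout or inventory cost.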

  15. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    Science.gov (United States)

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  16. Comparative Analysis of Bulge Deformation between 2D and 3D Finite Element Models

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2014-02-01

    Full Text Available Bulge deformation of the slab is one of the main factors affecting slab quality in continuous casting. This paper describes an investigation into bulge deformation using ABAQUS to model the solidification process. A three-dimensional finite element model of the slab solidification process was first established, because bulge deformation is closely related to the slab temperature distribution. Based on the slab temperature distribution, a three-dimensional thermomechanical coupling model including the slab, the rollers, and the dynamic contact between them was also constructed and applied to a case study. The thermomechanical coupling model produces outputs such as the pattern of bulge deformation. Moreover, the three-dimensional model was compared with a two-dimensional model to discuss the differences between the two models in calculating bulge deformation. The results show that a platform zone exists on the wide side of the slab and that the bulge deformation is strongly affected by the width-to-thickness ratio. The difference in bulge deformation between the two modeling approaches is small when the width-to-thickness ratio is larger than six.

  17. Comparing distribution models for small samples of overdispersed counts of freshwater fish

    Science.gov (United States)

    Vaudor, Lise; Lamouroux, Nicolas; Olivier, Jean-Michel

    2011-05-01

    The study of species abundance often relies on repeated abundance counts whose number is limited by logistic or financial constraints. The distribution of abundance counts is generally right-skewed (i.e. with many zeros and few high values) and needs to be modelled for statistical inference. We used an extensive dataset involving about 100,000 fish individuals of 12 freshwater fish species, collected in electrofishing points (7 m²) during 350 field surveys made in 25 stream sites, in order to compare the performance and the generality of four count distribution models (Poisson, negative binomial and their zero-inflated counterparts). The negative binomial distribution was the best model (by Bayesian Information Criterion) for 58% of the samples (species-survey combinations) and was suitable for a variety of life histories, habitats, and sample characteristics. The performance of the models was closely related to sample statistics such as total abundance and variance. Finally, we illustrated the consequences of a distribution assumption by calculating confidence intervals around the mean abundance, based either on the most suitable distribution assumption or on an asymptotic, distribution-free (Student's) method. Student's method generally yielded narrower confidence intervals, especially when there were few (≤3) non-null counts in the samples.

  18. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate the performance of these methods. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and to compare the performance of these methods, we proposed a novel data simulation method, which combined the population Hi-C data and single-cell Hi-C data without ad hoc parameters. Also, we designed a global and a local alignment algorithm to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform the algorithms in the literature.
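The thesis's own alignment algorithms are not reproduced here, but a standard building block for scoring the similarity of two 3D structures is rigid superposition followed by RMSD, as in the Kabsch algorithm sketched below (toy coordinates, illustrative only):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (n, 3) point sets after optimal rigid superposition."""
    P = P - P.mean(axis=0)                    # remove translation
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)         # SVD of the covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation (Kabsch)
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

# Demo: a toy "chromatin" trace, a rotated-and-shifted copy, a noisy variant
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
B = A @ Rz.T + np.array([1.0, -2.0, 0.5])     # same shape, different pose
C = B + rng.normal(scale=0.3, size=A.shape)   # structurally perturbed
print(kabsch_rmsd(A, B), kabsch_rmsd(A, C))
```

A pose-invariant score like this is what makes structures from different modeling methods comparable at all; the thesis's global and local alignments additionally handle differing bead counts and resolutions.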

  19. Comparative modelling and molecular docking of nitrate reductase from Bacillus weihenstephanensis (DS45

    Directory of Open Access Journals (Sweden)

    R. Seenivasagan

    2016-07-01

    Full Text Available Nitrate reductase catalyses the oxidation of NAD(P)H and the reduction of nitrate to nitrite. NR serves as a central point for the integration of metabolic pathways by governing the flux of reduced nitrogen through several regulatory mechanisms in plants, algae and fungi. Bacteria express nitrate reductases that convert nitrate to nitrite, but mammals lack these specific enzymes. The microbial nitrate reductase reduces toxic compounds to nontoxic compounds with the help of NAD(P)H. In the present study, our results revealed that Bacillus weihenstephanensis expresses a nitrate reductase enzyme, for which we generated the 3D structure. Six different modelling servers, namely Phyre2, RaptorX, M4T Server, HHpred, SWISS-MODEL and ModWeb, were used for comparative modelling of the structure. The model was validated with standard parameters (PROCHECK and Verify 3D). This study will be useful in the functional characterization of the nitrate reductase enzyme and in its docking with nitrate molecules, as well as for use with autodocking.

  20. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield-weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall-Butcher Model. Three sets of ...

  1. Comparative study for different statistical models to optimize cutting parameters of CNC end milling machines

    International Nuclear Information System (INIS)

    El-Berry, A.; El-Berry, A.; Al-Bossly, A.

    2010-01-01

    In machining operations, the quality of the surface finish is an important requirement for many workpieces. Thus, it is very important to optimize cutting parameters to control the required manufacturing quality. The surface roughness parameter (Ra) of mechanical parts depends on the turning parameters during the turning process. In the development of predictive models, the cutting parameters feed, cutting speed and depth of cut are considered as model variables. For this purpose, this study focuses on comparing various machining experiments using a CNC vertical machining center; the workpieces were aluminum 6061. Multiple regression models are used to predict the surface roughness in the different experiments.
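A multiple regression of Ra on the cutting parameters, of the kind this record describes, can be sketched with plain least squares (all experimental values below are hypothetical, not the study's measurements):

```python
import numpy as np

# Hypothetical experiments: feed (mm/rev), cutting speed (m/min), depth of cut (mm)
X_raw = np.array([
    [0.10, 120, 0.5], [0.10, 180, 1.0], [0.15, 120, 1.0],
    [0.15, 180, 0.5], [0.20, 120, 0.5], [0.20, 180, 1.0],
    [0.10, 150, 0.75], [0.20, 150, 0.75],
])
Ra = np.array([0.82, 0.74, 1.10, 1.05, 1.42, 1.31, 0.78, 1.35])  # roughness, um

X = np.column_stack([np.ones(len(Ra)), X_raw])    # prepend intercept column
coef, *_ = np.linalg.lstsq(X, Ra, rcond=None)     # [b0, b_feed, b_speed, b_depth]

pred = X @ coef
r2 = 1 - np.sum((Ra - pred) ** 2) / np.sum((Ra - Ra.mean()) ** 2)
print(coef, r2)
```

The fitted coefficients expose the expected physics in the toy data: roughness rises with feed and falls slightly with cutting speed, and R2 summarizes how much of the Ra variation the linear model captures.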

  2. Writ in water, lines in sand: Ancient trade routes, models and comparative evidence

    Directory of Open Access Journals (Sweden)

    Eivind Heldaas Seland

    2015-12-01

    Historians and archaeologists often take connectivity for granted, and fail to address the problems of documenting patterns of movement. This article highlights the methodological challenges of reconstructing trade routes in prehistory and early history. The argument is made that these challenges are best met through the application of modern models of connectivity, in combination with the conscious use of comparative approaches.

  3. COMPARING OF DEPOSIT MODEL AND LIFE INSURANCE MODEL IN MACEDONIA

    Directory of Open Access Journals (Sweden)

    TATJANA ATANASOVA-PACHEMSKA

    2016-02-01

    In conditions of continuously declining interest rates on bank deposits, and at a time when uncertainty about the future is increasing, physical and legal persons have doubts about how to secure their future, and how and where to invest their funds so as to "fertilize" and increase their savings. Individuals usually choose to put their savings in the bank for a certain period, and to receive certain interest for that period, or decide to invest their savings in different types of life insurance and thus "take care" of their life, their future and the future of their families. Many mathematical models have been developed that relate to compounding and insurance. This paper compares the deposit model and the life insurance model.
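
    As a minimal sketch of the deposit model mentioned above, the future value of a fixed-term deposit under annual compounding is FV = P(1 + r)^n. The principal, rate and term below are illustrative assumptions, not Macedonian market figures:

```python
# Deposit model sketch: future value under annual compounding.
# Principal, rate and term are illustrative assumptions.
def deposit_future_value(principal, annual_rate, years):
    return principal * (1 + annual_rate) ** years

fv = deposit_future_value(10_000, 0.02, 10)
print(round(fv, 2))  # deposit value after 10 years at 2% p.a.
```

    A comparable computation for the insurance side would add mortality probabilities and premiums, which is what makes the two models non-trivial to compare.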

  4. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
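
    The surrogate idea can be sketched in a toy form: train a cheap regressor on a few evaluations of an expensive response, then predict new points without re-running the simulation. The sketch below uses kernel ridge regression as a simplified stand-in for the SVR/KELM surrogates, and a synthetic two-parameter response in place of the DNAPL simulation model; all names and values are assumptions:

```python
import numpy as np

# Toy surrogate-model sketch (kernel ridge regression standing in
# for SVR/KELM; the "simulation" is a synthetic response function).
def rbf_kernel(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def simulate(x):  # toy "expensive" simulation response
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, (60, 2))   # 60 expensive training runs
y_train = simulate(X_train)

lam = 1e-6                             # ridge regularisation
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

X_test = rng.uniform(0, 1, (20, 2))    # cheap surrogate predictions
y_pred = rbf_kernel(X_test, X_train) @ alpha
rmse = float(np.sqrt(np.mean((y_pred - simulate(X_test)) ** 2)))
print(round(rmse, 4))                  # approximation error of the surrogate
```

    In a simulation-optimization loop, the optimizer would query the surrogate thousands of times while the true simulator is run only for the training samples, which is the source of the computational savings described above.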

  5. A comparative study of machine learning models for ethnicity classification

    Science.gov (United States)

    Trivedi, Advait; Bessie Amali, D. Geraldine

    2017-11-01

    This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem, with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta-information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression is documented. Experimental results indicate that the logistic classifier provides a more accurate classification than the support vector machine.

  6. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
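
    The decomposition described can be illustrated numerically. The sketch below simulates a single evaluation point with assumed bias, model variance and observation noise, checks that expected squared error matches bias² + variance + noise, and shows that the mean absolute error is not a simple sum of the same terms (all numbers are illustrative):

```python
import numpy as np

# Empirical bias-variance-noise sketch at one evaluation point.
# bias, model_sd and noise_sd are assumed illustrative values.
rng = np.random.default_rng(2)
true_value = 2.0
bias = 0.3        # systematic model error
model_sd = 0.4    # model sensitivity to training data
noise_sd = 0.5    # observation instability

n = 200_000
predictions = true_value + bias + rng.normal(0, model_sd, n)
observations = true_value + rng.normal(0, noise_sd, n)

expected_sq_error = np.mean((predictions - observations) ** 2)
decomposed = bias ** 2 + model_sd ** 2 + noise_sd ** 2
expected_abs_error = np.mean(np.abs(predictions - observations))

# Under SQ error the three components add exactly; under ABS error
# there is no such additive decomposition.
print(round(expected_sq_error, 3), round(decomposed, 3),
      round(expected_abs_error, 3))
```

    The empirical squared error converges to the additive decomposition, while the absolute error sits below the root of that sum, consistent with the claim that the two metrics weight the error distribution differently.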

  7. Comparing the relative cost-effectiveness of diagnostic studies: a new model

    International Nuclear Information System (INIS)

    Patton, D.D.; Woolfenden, J.M.; Wellish, K.L.

    1986-01-01

    We have developed a model to compare the relative cost-effectiveness of two or more diagnostic tests. The model defines a cost-effectiveness ratio (CER) for a diagnostic test as the ratio of effective cost to base cost, with only dollar costs considered. Effective cost includes base cost, the cost of dealing with expected side effects, and wastage due to imperfect test performance. Test performance is measured by diagnostic utility (DU), a measure of test outcomes incorporating the decision-analytic variables sensitivity, specificity, equivocal fraction, disease probability, and outcome utility. Each of these factors affecting DU, and hence CER, is a local, not universal, value; these local values strongly affect CER, which in effect becomes a property of the local medical setting. When DU = +1 and there are no adverse effects, CER = 1 and the patient benefits from the test dollar for dollar. When there are adverse effects, effective cost exceeds base cost, and for an imperfect test DU < 1, so CER > 1. As DU approaches 0 (worthless test), CER approaches infinity (no effectiveness at any cost). If DU is negative, indicating that doing the test at all would be detrimental, CER also becomes negative. We conclude that the CER model is a useful preliminary method for ranking the relative cost-effectiveness of diagnostic tests, and that the comparisons would best be done using local values; different groups might well arrive at different rankings. (Author)
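
    A minimal sketch of the CER arithmetic as described in the abstract. The exact wastage formula is not given there, so scaling effective cost by 1/DU is an assumption; the sketch nonetheless reproduces the stated limiting behaviour (CER = 1 for a perfect clean test, CER → ∞ as DU → 0, negative CER for negative DU):

```python
# Cost-effectiveness ratio sketch: CER = effective cost / base cost.
# The 1/DU wastage scaling is an assumed illustrative form.
def cost_effectiveness_ratio(base_cost, side_effect_cost, du):
    if du == 0:
        return float("inf")  # worthless test: no effectiveness at any cost
    effective_cost = (base_cost + side_effect_cost) / du
    return effective_cost / base_cost

print(cost_effectiveness_ratio(100, 0, 1.0))   # perfect test, no side effects -> 1.0
print(cost_effectiveness_ratio(100, 20, 0.8))  # imperfect test with side effects -> 1.5
print(cost_effectiveness_ratio(100, 0, -0.5))  # detrimental test -> negative CER
```

    Because DU itself depends on local sensitivity, specificity, disease probability and outcome utilities, the same test can yield different CERs in different settings, which is the abstract's central point.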

  8. A Comparative Study of CFD Models of a Real Wind Turbine in Solar Chimney Power Plants

    Directory of Open Access Journals (Sweden)

    Ehsan Gholamalizadeh

    2017-10-01

    A solar chimney power plant consists of four main parts, a solar collector, a chimney, an energy storage layer, and a wind turbine. So far, several investigations on the performance of the solar chimney power plant have been conducted. Among them, different approaches have been applied to model the turbine inside the system. In particular, a real wind turbine coupled to the system was simulated using computational fluid dynamics (CFD) in three investigations. Gholamalizadeh et al. simulated a wind turbine with the same blade profile as the Manzanares SCPP’s turbine (FX W-151-A blade profile), while a CLARK Y blade profile was modelled by Guo et al. and Ming et al. In this study, simulations of the Manzanares prototype were carried out using the CFD model developed by Gholamalizadeh et al. Then, results obtained by modelling different turbine blade profiles at different turbine rotational speeds were compared. The results showed that a turbine with the CLARK Y blade profile significantly overestimates the value of the pressure drop across the Manzanares prototype turbine as compared to the FX W-151-A blade profile. In addition, modelling of both blade profiles led to very similar trends in changes in turbine efficiency and power output with respect to rotational speed.

  9. Kinetic Monte Carlo simulations compared with continuum models and experimental properties of pattern formation during ion beam sputtering

    International Nuclear Information System (INIS)

    Chason, E; Chan, W L

    2009-01-01

    Kinetic Monte Carlo simulations model the evolution of surfaces during low energy ion bombardment using atomic level mechanisms of defect formation, recombination and surface diffusion. Because the individual kinetic processes are completely determined, the resulting morphological evolution can be directly compared with continuum models based on the same mechanisms. We present results of simulations based on a curvature-dependent sputtering mechanism and diffusion of mobile surface defects. The results are compared with a continuum linear instability model based on the same physical processes. The model predictions are found to be in good agreement with the simulations for predicting the early-stage morphological evolution and the dependence on processing parameters such as the flux and temperature. This confirms that the continuum model provides a reasonable approximation of the surface evolution from multiple interacting surface defects using this model of sputtering. However, comparison with experiments indicates that there are many features of the surface evolution that do not agree with the continuum model or simulations, suggesting that additional mechanisms are required to explain the observed behavior.

  10. Comparative analysis of insect succession data from Victoria (Australia) using summary statistics versus preceding mean ambient temperature models.

    Science.gov (United States)

    Archer, Mel

    2014-03-01

    Minimum postmortem interval (mPMI) can be estimated with preceding mean ambient temperature models that predict carrion taxon pre-appearance interval. But accuracy has not been compared with using summary statistics (mean ± SD of taxon arrival/departure day, range, 95% CI). This study collected succession data from ten experimental and five control (infrequently sampled) pig carcasses over two summers (n = 2 experimental, n = 1 control per placement date). Linear and exponential preceding mean ambient temperature models for appearance and departure times were constructed for 17 taxa/developmental stages. There was minimal difference in linear or exponential model success, although arrival models were more often significant: 65% of linear arrival (r2 = 0.09–0.79) and exponential arrival models (r2 = 0.05–0.81) were significant, and 35% of linear departure (r2 = 0.0–0.71) and exponential departure models (r2 = 0.0–0.72) were significant. Performance of models and summary statistics for estimating mPMI was compared in two forensic cases. Only summary statistics produced accurate mPMI estimates.

  11. Multi-indication Pharmacotherapeutic Multicriteria Decision Analytic Model for the Comparative Formulary Inclusion of Proton Pump Inhibitors in Qatar.

    Science.gov (United States)

    Al-Badriyeh, Daoud; Alabbadi, Ibrahim; Fahey, Michael; Al-Khal, Abdullatif; Zaidan, Manal

    2016-05-01

    The formulary inclusion of proton pump inhibitors (PPIs) in the government hospital health services in Qatar is not comparative or restricted. Requests to include a PPI in the formulary are typically accepted if evidence of efficacy and tolerability is presented. There are no literature reports of a PPI scoring model that is based on comparatively weighted multiple indications and no reports of PPI selection in Qatar or the Middle East. This study aims to compare first-line use of the PPIs that exist in Qatar. The economic effect of the study recommendations was also quantified. A comparative, evidence-based multicriteria decision analysis (MCDA) model was constructed to follow the multiple indications and pharmacotherapeutic criteria of PPIs. Literature and an expert panel informed the selection criteria of PPIs. Input from the relevant local clinician population steered the relative weighting of selection criteria. Comparatively scored PPIs, exceeding a defined score threshold, were recommended for selection. Weighted model scores were successfully developed, with 95% CI and 5% margin of error. The model comprised 7 main criteria and 38 subcriteria. Main criteria are indication, dosage frequency, treatment duration, best published evidence, available formulations, drug interactions, and pharmacokinetic and pharmacodynamic properties. Most weight was achieved for the indications selection criteria. Esomeprazole and rabeprazole were suggested as formulary options, followed by lansoprazole for nonformulary use. The estimated effect of the study recommendations was up to a 15.3% reduction in the annual PPI expenditure. Robustness of study conclusions against variabilities in study inputs was confirmed via sensitivity analyses. The implementation of a locally developed PPI-specific comparative MCDA scoring model, which is multiweighted indication and criteria based, into the Qatari formulary selection practices is a successful evidence-based cost-cutting exercise.

  12. Comparability of results from pair and classical model formulations for different sexually transmitted infections.

    Directory of Open Access Journals (Sweden)

    Jimmy Boon Som Ong

    The "classical model" for sexually transmitted infections treats partnerships as instantaneous events summarized by partner change rates, while individual-based and pair models explicitly account for time within partnerships and gaps between partnerships. We compared predictions from the classical and pair models over a range of partnership and gap combinations. While the former predicted similar or marginally higher prevalence at the shortest partnership lengths, the latter predicted self-sustaining transmission for gonorrhoea (GC) and Chlamydia (CT) over much broader partnership and gap combinations. Predictions on the critical level of condom use (Cc) required to prevent transmission also differed substantially when using the same parameters. When calibrated to give the same disease prevalence as the pair model by adjusting the infectious duration for GC and CT, and by adjusting transmission probabilities for HIV, the classical model then predicted much higher Cc values for GC and CT, while Cc predictions for HIV were fairly close. In conclusion, the two approaches give different predictions over potentially important combinations of partnership and gap lengths. Assuming that it is more correct to explicitly model partnerships and gaps, pair or individual-based models may be needed for GC and CT, since model calibration does not resolve the differences.

  13. Is tuberculosis treatment really free in China? A study comparing two areas with different management models.

    Directory of Open Access Journals (Sweden)

    Sangsang Qiu

    China has implemented a free-service policy for tuberculosis. However, patients still have to pay a substantial proportion of their annual income for treatment of this disease. This study describes the economic burden on patients with tuberculosis; identifies related factors by comparing two areas with different management models; and provides policy recommendations for tuberculosis control reform in China. There are three tuberculosis management models in China: the tuberculosis dispensary model, the specialist model and the integrated model. We selected Zhangjiagang (ZJG) and Taixing (TX) as the study sites, which correspond to areas implementing the integrated model and the dispensary model, respectively. Patients diagnosed and treated for tuberculosis since January 2010 were recruited as study subjects. A total of 590 patients (316 patients from ZJG and 274 patients from TX) were interviewed, with a response rate of 81%. The economic burden attributed to tuberculosis, including direct costs and indirect costs, was estimated and compared between the two study sites. The Mann-Whitney U test was used to compare the cost differences between the two groups. Potential factors related to the total out-of-pocket costs were analyzed based on a step-by-step multivariate linear regression model after logarithmic transformation of the costs. The average (median, interquartile range) total cost was 18793.33 (9965, 3200-24400) CNY for patients in ZJG, which was significantly higher than for patients in TX (mean: 6598.33, median: 2263, interquartile range: 983-6688) (Z = 10.42, P < 0.001). After excluding expenses covered by health insurance, the average out-of-pocket costs were 14304.4 CNY in ZJG and 5639.2 CNY in TX. Based on the multivariable linear regression analysis, factors related to the total out-of-pocket costs were study site, age, number of clinical visits, residence, diagnosis delay, hospitalization, intake of liver protective drugs and use of the second

  14. Comparative growth models of big-scale sand smelt (Atherina boyeri Risso, 1810) sampled from Hirfanlı Dam Lake, Kırşehir, Ankara, Turkey

    Directory of Open Access Journals (Sweden)

    S. Benzer

    2017-06-01

    In this publication, the growth characteristics of big-scale sand smelt were compared for population dynamics using artificial neural network and length-weight relationship models. This study aims to determine the optimal growth model of big-scale sand smelt using artificial neural networks and length-weight relationship models at Hirfanlı Dam Lake, Kırşehir, Turkey. A total of 1449 samples were collected from Hirfanlı Dam Lake between May 2015 and May 2016. The results of both models were compared with each other, and the results were also evaluated with MAPE (mean absolute percentage error), MSE (mean squared error) and r2 (coefficient of correlation) as performance criteria. The results of the current study show that artificial neural networks are a superior estimation tool compared to length-weight relationship models for big-scale sand smelt in Hirfanlı Dam Lake.
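
    The length-weight relationship component can be sketched with the classic allometric model W = a·L^b, fitted by least squares on log-transformed data. The sample values below are synthetic, not the Hirfanlı Dam Lake data:

```python
import math

# Length-weight relationship sketch: fit W = a * L^b via a
# straight-line fit of ln W on ln L. Data are synthetic (assumed).
lengths = [6.0, 7.2, 8.1, 9.0, 10.4, 11.3]  # cm
weights = [1.9, 3.3, 4.8, 6.6, 10.3, 13.4]  # g

xs = [math.log(l) for l in lengths]
ys = [math.log(w) for w in weights]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)
print(round(a, 4), round(b, 3))  # b near 3 suggests near-isometric growth
```

    An artificial neural network replaces this fixed power-law form with a flexible learned mapping, which is why it can fit the length-weight data more closely at the cost of interpretability.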

  15. Comparative Analysis of Pain Behaviours in Humanized Mouse Models of Sickle Cell Anemia.

    Directory of Open Access Journals (Sweden)

    Jianxun Lei

    Pain is a hallmark feature of sickle cell anemia (SCA), but management of chronic as well as acute pain remains a major challenge. Mouse models of SCA are essential to examine the mechanisms of pain and develop novel therapeutics. To facilitate this effort, we compared humanized homozygous BERK and Townes sickle mice for the effect of gender and age on pain behaviors. Similar to previously characterized BERK sickle mice, Townes sickle mice show more mechanical, thermal, and deep tissue hyperalgesia with increasing age. Female Townes sickle mice demonstrate more hyperalgesia compared to males, similar to that reported for BERK mice and patients with SCA. Mechanical, thermal and deep tissue hyperalgesia increased further after hypoxia/reoxygenation (H/R) treatment in Townes sickle mice. Together, these data show BERK sickle mice exhibit a significantly greater degree of hyperalgesia for all behavioral measures as compared to gender- and age-matched Townes sickle mice. However, the genetically distinct "knock-in" strategy of human α and β transgene insertion in Townes mice as compared to BERK mice, may provide relative advantage for further genetic manipulations to examine specific mechanisms of pain.

  16. A comparative study of deep learning models for medical image classification

    Science.gov (United States)

    Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.

    2017-11-01

    Deep learning (DL) techniques are overtaking the prevailing traditional neural network approaches when it comes to huge datasets and applications requiring complex functions, demanding increased accuracy with lower time complexity. Neuroscience has already exploited DL techniques, portraying itself as an inspirational source for researchers exploring the domain of machine learning. DL enthusiasts cover the areas of vision, speech recognition, motion planning and NLP as well, moving back and forth among fields. This concerns building models that can successfully solve a variety of tasks requiring intelligence and distributed representation. The accessibility of faster CPUs, the introduction of GPUs performing complex vector and matrix computations, agile network connectivity, and enhanced software infrastructures for distributed computing strengthened the case for researchers to adopt DL methodologies. The paper compares the following DL procedures to traditional approaches, which are performed manually, for classifying medical images. The medical images used for the study are diabetic retinopathy (DR) and computed tomography (CT) emphysema data. Diagnosis from both DR and CT data is a difficult task for normal image classification methods. The initial work was carried out with basic image processing along with K-means clustering for identification of image severity levels. After determining image severity levels, an ANN was applied to the data to get a baseline classification result, which was then compared with the result of DNNs (deep neural networks), which performed efficiently because of their multiple hidden layers, which increase accuracy; however, the problem of vanishing gradients in DNNs led to considering convolutional neural networks (CNNs) as well for better results. The CNNs were found to provide better outcomes when compared to the other learning models aimed at classification of images. CNNs are

  17. Water Management in the Camargue Biosphere Reserve: Insights from Comparative Mental Models Analysis

    Directory of Open Access Journals (Sweden)

    Raphael Mathevet

    2011-03-01

    Mental models are the cognitive representations of the world that frame how people interact with the world. Learning implies changing these mental models. The successful management of complex social-ecological systems requires the coordination of actions to achieve shared goals. The coordination of actions requires a level of shared understanding of the system or situation; a shared or common mental model. We first describe the elicitation and analysis of mental models of different stakeholder groups associated with water management in the Camargue Biosphere Reserve in the Rhône River delta on the French Mediterranean coast. We use cultural consensus analysis to explore the degree to which different groups shared mental models of the whole system, of stakeholders, of resources, of processes, and of interactions among these last three. The analysis of the elicited data from this group structure enabled us to tentatively explore the evidence for learning in the nonstatute Water Board, comprising important stakeholders related to the management of the central Rhône delta. The results indicate that learning does occur and results in richer mental models that are more likely to be shared among group members. However, the results also show lower than expected levels of agreement with these consensual mental models. Based on this result, we argue that a careful process and facilitation design can greatly enhance the functioning of the participatory process in the Water Board. We conclude that this methodology holds promise for eliciting and comparing mental models. It enriches group-model building and participatory approaches with a broader view of social learning and knowledge-sharing issues.

  18. Comparing the reported burn conditions for different severity burns in porcine models: a systematic review.

    Science.gov (United States)

    Andrews, Christine J; Cuttle, Leila

    2017-12-01

    There are many porcine burn models that create burns using different materials (e.g. metal, water) and different burn conditions (e.g. temperature and duration of exposure). This review aims to determine whether a pooled analysis of these studies can provide insight into the burn materials and conditions required to create burns of a specific severity. A systematic review of 42 porcine burn studies describing the depth of burn injury with histological evaluation is presented. Inclusion criteria included thermal burns, burns created with a novel method or material, histological evaluation within 7 days post-burn and method for depth of injury assessment specified. Conditions causing deep dermal scald burns compared to contact burns of equivalent severity were disparate, with lower temperatures and shorter durations reported for scald burns (83°C for 14 seconds) compared to contact burns (111°C for 23 seconds). A valuable archive of the different mechanisms and materials used for porcine burn models is presented to aid design and optimisation of future models. Significantly, this review demonstrates the effect of the mechanism of injury on burn severity and that caution is recommended when burn conditions established by porcine contact burn models are used by regulators to guide scald burn prevention strategies. © 2017 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  19. Comparing cycling world hour records, 1967-1996: modeling with empirical data.

    Science.gov (United States)

    Bassett, D R; Kyle, C R; Passfield, L; Broker, J P; Burke, E R

    1999-11-01

    The world hour record in cycling has increased dramatically in recent years. The present study was designed to compare the performances of former/current record holders, after adjusting for differences in aerodynamic equipment and altitude. Additionally, we sought to determine the ideal elevation for future hour record attempts. The first step was constructing a mathematical model to predict power requirements of track cycling. The model was based on empirical data from wind-tunnel tests, the relationship of body size to frontal surface area, and field power measurements using a crank dynamometer (SRM). The model agreed reasonably well with actual measurements of power output on elite cyclists. Subsequently, the effects of altitude on maximal aerobic power were estimated from published research studies of elite athletes. This information was combined with the power requirement equation to predict what each cyclist's power output would have been at sea level. This allowed us to estimate the distance that each rider could have covered using state-of-the-art equipment at sea level. According to these calculations, when racing under equivalent conditions, Rominger would be first, Boardman second, Merckx third, and Indurain fourth. In addition, about 60% of the increase in hour record distances since Bracke's record (1967) have come from advances in technology and 40% from physiological improvements. To break the current world hour record, field measurements and the model indicate that a cyclist would have to deliver over 440 W for 1 h at sea level, or correspondingly less at altitude. The optimal elevation for future hour record attempts is predicted to be about 2500 m for acclimatized riders and 2000 m for unacclimatized riders.
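
    A rough sketch of this kind of power-requirement model, assuming the standard aerodynamic-drag plus rolling-resistance form P(v) = (0.5·ρ·CdA·v² + Crr·m·g)·v. All parameter values here are illustrative assumptions, not the authors' wind-tunnel or SRM figures:

```python
# Track-cycling power model sketch with assumed parameters:
# air density rho, drag area cda, rolling coefficient crr, system mass.
def power_required(v, rho=1.2, cda=0.22, crr=0.0025, mass=80.0, g=9.81):
    return (0.5 * rho * cda * v ** 2 + crr * mass * g) * v

def speed_for_power(p_target, lo=5.0, hi=25.0):
    # Bisection works because power is monotonic in speed.
    for _ in range(60):
        mid = (lo + hi) / 2
        if power_required(mid) < p_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

v = speed_for_power(440.0)  # steady speed sustainable at 440 W
print(round(v * 3.6, 1))    # roughly the km covered in one hour
```

    With these assumed parameters, 440 W corresponds to a speed in the low 50s of km/h, in line with the hour-record distances discussed above; at altitude the lower air density raises the distance for the same power, up to the point where reduced aerobic power dominates.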

  20. Comparative study of economics of different models of family size biogas plants for state of Punjab, India

    International Nuclear Information System (INIS)

    Singh, K. Jatinder; Sooch, Sarbjit Singh

    2004-01-01

    Biogas, the end product of anaerobic digestion of cattle dung, can successfully supplement the cooking fuels in the countryside areas of India, where the raw material needed for its production is plentifully available. Because of the lack of awareness regarding selection of a suitable model and size of biogas plant, the full potential of the biogas producing material is not harnessed, and the economic viability of biogas technology is rendered doubtful. To facilitate this decision making, the economics of family size biogas plants, i.e. with capacity from 1 to 6 m³, was studied, and three prevalent models, viz. KVIC, Janta and Deenbandhu, were compared. Calculations for installation cost and annual operational cost were made for the state of Punjab, India, where the hydraulic retention time is 40 days, and current market prices were taken into account. Comparison of the economics revealed that the cost of installation and annual operational cost at each capacity were higher for the KVIC model, followed by the Janta and then the Deenbandhu model. Irrespective of the model, as the capacity of the biogas plant increases, the installation cost, as well as the annual operational cost, increases proportionately. With increase in capacity, the payback period decreased exponentially, with the exponential character being highest for the KVIC model, followed by the Janta and then the Deenbandhu model. However, on the basis of comparative economics, the Deenbandhu model was found to be the cheapest and most viable model of biogas plant.

  1. Prospective comparative effectiveness cohort study comparing two models of advance care planning provision for Australian community aged care clients.

    Science.gov (United States)

    Detering, Karen Margaret; Carter, Rachel Zoe; Sellars, Marcus William; Lewis, Virginia; Sutton, Elizabeth Anne

    2017-12-01

    Conduct a prospective comparative effectiveness cohort study comparing two models of advance care planning (ACP) provision in community aged care: ACP conducted by the client's case manager (CM) ('Facilitator') and ACP conducted by an external ACP service ('Referral') over a 6-month period. This Australian study involved CMs and their clients. Eligible CMs were English speaking, ≥18 years, had expected availability for the trial and worked ≥3 days per week. CMs were recruited via their organisations, sequentially allocated to a group and received education based on the group allocation. They were expected to initiate ACP with all clients and to facilitate ACP or refer for ACP. Outcomes were quantity of new ACP conversations and quantity and quality of new advance care directives (ACDs). 30 CMs (16 Facilitator, 14 Referral) completed the study; all 784 clients' files (427 Facilitator, 357 Referral) were audited. ACP was initiated with 508 (65%) clients (293 Facilitator, 215 Referral; p<0.05); 89 (18%) of these (53 Facilitator, 36 Referral) and 41 (46%) (13 Facilitator, 28 Referral; p<0.005) completed ACDs. Most ACDs (71%) were of poor quality/not valid. A further 167 clients (facilitator 124; referral 43; p<0.005) reported ACP was in progress at study completion. While there were some differences, overall, the models achieved similar outcomes. ACP was initiated with 65% of clients. However, fewer clients completed ACP, there were low numbers of ACDs and document quality was generally poor. The findings raise questions for future implementation and research into community ACP provision. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  2. Comparing Free-Free and Shaker Table Model Correlation Methods Using Jim Beam

    Science.gov (United States)

    Ristow, James; Smith, Kenneth Wayne, Jr.; Johnson, Nathaniel; Kinney, Jackson

    2018-01-01

    Finite element model correlation as part of a spacecraft program has always been a challenge. For any NASA mission, the coupled system response of the spacecraft and launch vehicle can be determined analytically through a Coupled Loads Analysis (CLA), as it is not possible to test the spacecraft and launch vehicle coupled system before launch. The value of the CLA is highly dependent on the accuracy of the frequencies and mode shapes extracted from the spacecraft model. NASA standards require the spacecraft model used in the final Verification Loads Cycle to be correlated by either a modal test or by comparison of the model with Frequency Response Functions (FRFs) obtained during the environmental qualification test. Due to budgetary and time constraints, most programs opt to correlate the spacecraft dynamic model during the environmental qualification test, conducted on a large shaker table. For any model correlation effort, the key has always been finding a proper definition of the boundary conditions. This paper is a correlation case study to investigate the difference in responses of a simple structure using a free-free boundary, a fixed boundary on the shaker table, and a base-drive vibration test, all using identical instrumentation. The NAVCON Jim Beam test structure, featured in the IMAC round robin modal test of 2009, was selected as a simple, well recognized and well characterized structure to conduct this investigation. First, a free-free impact modal test of the Jim Beam was done as an experimental control. Second, the Jim Beam was mounted to a large 20,000 lbf shaker, and an impact modal test in this fixed configuration was conducted. Lastly, a vibration test of the Jim Beam was conducted on the shaker table. The free-free impact test, the fixed impact test, and the base-drive test were used to assess the effect of the shaker modes, evaluate the validity of fixed-base modeling assumptions, and compare final model correlation results between these
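A hedged sketch of one standard tool behind such model-correlation efforts: the Modal Assurance Criterion (MAC), which scores the similarity of a test mode shape against an analytical one. The mode-shape vectors below are hypothetical, for illustration only:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between two mode-shape vectors.
    1.0 = identical shapes (up to scale), 0.0 = orthogonal."""
    num = np.abs(phi_a.conj() @ phi_e) ** 2
    den = (phi_a.conj() @ phi_a).real * (phi_e.conj() @ phi_e).real
    return float(num / den)

# Hypothetical mode shapes: analytical (FEM) model vs. test measurement
phi_fem  = np.array([1.0, 0.8, 0.3, -0.2])
phi_test = np.array([0.95, 0.82, 0.28, -0.25])
print(round(mac(phi_fem, phi_test), 3))  # near 1 -> well-correlated mode
```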

  3. Comparing satellite SAR and wind farm wake models

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Vincent, P.; Husson, R.

    2015-01-01

    These extend several tens of kilometres downwind, e.g. 70 km. Other SAR wind maps show near-field fine-scale details of wakes behind rows of turbines. The satellite SAR wind farm wake cases are modelled by different wind farm wake models, including the PARK microscale model, the Weather Research and Forecasting (WRF) model in high resolution, and WRF with a coupled microscale parametrization.
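For context, the PARK microscale model mentioned above builds on the classic Jensen single-wake formula; a minimal sketch (with assumed rotor diameter, thrust coefficient and wake-decay constant, not values from this study) shows how the predicted single-turbine deficit decays downstream:

```python
import math

def jensen_deficit(x, rotor_d=120.0, ct=0.8, k=0.04):
    """Fractional wind-speed deficit at distance x (m) downstream of a
    turbine, per the Jensen (PARK) single-wake model with linear wake
    expansion: deficit = (1 - sqrt(1 - Ct)) * (r0 / (r0 + k*x))**2."""
    r0 = rotor_d / 2.0
    return (1.0 - math.sqrt(1.0 - ct)) * (r0 / (r0 + k * x)) ** 2

for x_km in (1, 5, 10, 70):
    d = jensen_deficit(x_km * 1000.0)
    print(f"{x_km:>3} km downstream: {100 * d:.1f}% deficit")
```

The single-wake deficit becomes negligible at tens of kilometres, which is one reason far-field farm wakes seen in SAR are compared against mesoscale models such as WRF rather than PARK alone.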

  4. Comparative benefit of malaria chemoprophylaxis modelled in United Kingdom travellers.

    Science.gov (United States)

    Toovey, Stephen; Nieforth, Keith; Smith, Patrick; Schlagenhauf, Patricia; Adamcova, Miriam; Tatt, Iain; Tomianovic, Danitza; Schnetzler, Gabriel

    2014-01-01

    .3% decrease in estimated infections. The number of travellers experiencing moderate adverse events (AE), or requiring medical attention or drug withdrawal, per case prevented is as follows: C+P 170, Mq 146, Dx 114, AP 103. The model correctly predicted the number of malaria deaths, providing a robust and reliable estimate of the number of imported malaria cases in the UK and giving a measure of the benefit derived from chemoprophylaxis use against the likely adverse events generated. Overall, the numbers needed to prevent a malaria infection are comparable among the four options and are sensitive to changes in the background infection rates. Only a limited impact on the number of infections can be expected if Mq is substituted by AP.

  5. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    Directory of Open Access Journals (Sweden)

    Christopher W. Walmsley

    2013-11-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results of variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different

  6. DISSECTING GALAXY FORMATION. II. COMPARING SUBSTRUCTURE IN PURE DARK MATTER AND BARYONIC MODELS

    International Nuclear Information System (INIS)

    Romano-Diaz, Emilio; Shlosman, Isaac; Heller, Clayton; Hoffman, Yehuda

    2010-01-01

    We compare the substructure evolution in pure dark matter (DM) halos with those in the presence of baryons, hereafter PDM and BDM models, respectively. The prime halos have been analyzed in the previous work. Models have been evolved from identical initial conditions which have been constructed by means of the constrained realization method. The BDM model includes star formation and feedback from stellar evolution onto the gas. A comprehensive catalog of subhalo populations has been compiled and individual and statistical properties of subhalos analyzed, including their orbital differences. We find that subhalo population mass functions in PDM and BDM are consistent with a single power law, M^α_sbh, for each of the models in the mass range of ∼2 × 10^8 M_sun to 2 × 10^11 M_sun. However, we detect a nonnegligible shift between these functions, with time-averaged α ∼ -0.86 for the PDM and -0.98 for the BDM models. Overall, α appears to be nearly constant in time, with variations of ±15%. Second, we find that the radial mass distribution of subhalo populations can be approximated by a power law, R^γ_sbh, with a steepening that occurs at the radius of maximal circular velocity, R_vmax, in the prime halos. Here we find that γ_sbh ∼ -1.5 for the PDM and -1 for the BDM models, when averaged over time inside R_vmax. The slope is steeper outside this region and approaches -3. We detect little spatial bias (less than 10%) between the subhalo populations and the DM distribution of the main halos. Also, the subhalo population exhibits much less triaxiality in the presence of baryons, in tandem with the shape of the prime halo. Finally, we find that, counter-intuitively, the BDM population is depleted at a faster rate than the PDM one within the central 30 kpc of the prime halo. The reason for this is that although the baryons provide a substantial glue to the subhalos, the main halo exhibits the same trend. This assures a more efficient tidal disruption of the
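The mass-function slope α quoted above is the exponent of a power law fitted over a finite mass range; a small self-contained sketch (synthetic masses drawn from an assumed slope, not the paper's data) shows one common way such a slope is recovered with a log-log binned fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw synthetic "subhalo masses" from dN/dM ∝ M^alpha on [m_lo, m_hi]
alpha_true = -0.9
m_lo, m_hi = 2e8, 2e11
u = rng.random(20000)
a1 = alpha_true + 1.0
# inverse-CDF sampling for a truncated power law (alpha != -1)
m = (m_lo**a1 + u * (m_hi**a1 - m_lo**a1)) ** (1.0 / a1)

# Recover the slope with a least-squares fit in log-log space
bins = np.logspace(np.log10(m_lo), np.log10(m_hi), 25)
counts, edges = np.histogram(m, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
dndm = counts / np.diff(edges)              # differential mass function
good = counts > 0
slope, _ = np.polyfit(np.log10(centers[good]), np.log10(dndm[good]), 1)
print(f"fitted alpha = {slope:.2f}")        # should land near -0.9
```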

  7. In silico models for predicting ready biodegradability under REACH: a comparative study.

    Science.gov (United States)

    Pizzo, Fabiola; Lombardo, Anna; Manganaro, Alberto; Benfenati, Emilio

    2013-10-01

    REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) legislation is a new European law which aims to raise the level of protection of human health and the environment. Under REACH, all chemicals manufactured or imported in quantities of more than one ton per year must be evaluated for their ready biodegradability. Ready biodegradability is also used as a screening test for persistent, bioaccumulative and toxic (PBT) substances. REACH encourages the use of non-testing methods such as QSAR (quantitative structure-activity relationship) models in order to save money and time and to reduce the number of animals used for scientific purposes. Some QSAR models are available for predicting ready biodegradability. We used a dataset of 722 compounds to test four models: VEGA, TOPKAT, BIOWIN (5 and 6) and START, and compared their performance on the basis of the following parameters: accuracy, sensitivity, specificity and the Matthews correlation coefficient (MCC). Performance was analyzed from different points of view. The first calculation was done on the whole dataset, and VEGA and TOPKAT gave the best accuracy (88% and 87%, respectively). Then we considered the compounds inside and outside the training set: BIOWIN 6 and 5 gave the best accuracy (81%) outside the training set. Another analysis examined the applicability domain (AD). VEGA had the highest value for compounds inside the AD for all the parameters taken into account. Finally, compounds outside the training set and in the AD of the models were considered to assess predictive ability. VEGA gave the best accuracy (99%) for this group of chemicals. Generally, the START model gave poor results. Since the BIOWIN, TOPKAT and VEGA models performed well, they may be used to predict ready biodegradability. Copyright © 2013 Elsevier B.V. All rights reserved.
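For reference, the four comparison parameters named in this record are all derived from a 2×2 confusion matrix; the counts below are hypothetical (chosen to total 722 like the dataset), not the paper's results:

```python
import math

def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and Matthews correlation
    coefficient (MCC) from a 2x2 confusion matrix."""
    acc  = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    mcc  = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, sens, spec, mcc

# Hypothetical confusion matrix for a ready-biodegradability classifier
acc, sens, spec, mcc = binary_metrics(tp=250, tn=380, fp=40, fn=52)
print(f"acc={acc:.2f} sens={sens:.2f} spec={spec:.2f} MCC={mcc:.2f}")
```

Unlike accuracy, MCC stays informative when the two classes are imbalanced, which is why it is often reported alongside the other three.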

  8. Comparing Cognitive Models of Domain Mastery and Task Performance in Algebra: Validity Evidence for a State Assessment

    Science.gov (United States)

    Warner, Zachary B.

    2013-01-01

    This study compared an expert-based cognitive model of domain mastery with student-based cognitive models of task performance for Integrated Algebra. Interpretations of student test results are limited by experts' hypotheses of how students interact with the items. In reality, the cognitive processes that students use to solve each item may be…

  9. Comparative analysis of the planar capacitor and IDT piezoelectric thin-film micro-actuator models

    International Nuclear Information System (INIS)

    Myers, Oliver J; Anjanappa, M; Freidhoff, Carl B

    2011-01-01

    A comparison of the analysis of similarly developed microactuators is presented. Accurate modeling and simulation techniques are vital for piezoelectrically actuated microactuators. By coupling analytical and numerical modeling techniques with variational design parameters, accurate performance predictions can be realized. Axi-symmetric two-dimensional and three-dimensional static deflection and harmonic models of a planar capacitor actuator are presented. Planar capacitor samples were modeled as unimorph diaphragms with sandwiched piezoelectric material. The harmonic frequencies were calculated numerically and compared well to predicted values and deformations. The finite element modeling reflects the impact of the d31 piezoelectric constant. Two-dimensional axi-symmetric models of circularly interdigitated, piezoelectrically actuated membranes are also presented. The models include the piezoelectric material and properties, the membrane materials and properties, and incorporate various design considerations of the model. These models also include the electro-mechanical coupling for piezoelectric actuation and highlight a novel approach to take advantage of the higher d33 piezoelectric coupling coefficient. Performance is evaluated for varying parameters such as electrode pitch, electrode width, and piezoelectric material thickness. The models also showed that several of the design parameters were naturally coupled. The static numerical models correlate well with the maximum static deflection of the experimental devices. Finally, this paper deals with the development of numerical harmonic models of piezoelectrically actuated planar capacitor and interdigitated diaphragms. The models were able to closely predict the first two harmonics, conservatively predict the third through sixth harmonics and predict the estimated values of center deflection using plate theory. Harmonic frequency and deflection simulations need further correlation by conducting extensive iterative

  10. From information processing to decisions: Formalizing and comparing psychologically plausible choice models.

    Science.gov (United States)

    Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten

    2017-08-01

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy best explains decision behavior. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can be compared directly using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
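As a hedged illustration of the model-selection machinery mentioned above, a Bayes factor between two candidate strategies can be approximated from BIC values; the log-likelihoods, parameter counts and trial count below are invented for the sketch:

```python
import math

def bic(log_lik, k, n):
    """Bayesian Information Criterion: -2*logL + k*ln(n); lower is better."""
    return -2.0 * log_lik + k * math.log(n)

# Hypothetical fits of two strategies to one participant's 120 choices
bic_ttb  = bic(log_lik=-62.1, k=2, n=120)   # take-the-best variant
bic_wadd = bic(log_lik=-55.4, k=3, n=120)   # weighted-additive
# BIC-approximated Bayes factor in favour of WADD over TTB
bf = math.exp((bic_ttb - bic_wadd) / 2.0)
print(round(bic_ttb, 1), round(bic_wadd, 1), round(bf, 1))
```

In this invented example the Bayes factor strongly favours the weighted-additive strategy, mirroring the classification pattern the abstract reports.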

  11. Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Charalambos Koukouvaos

    2014-01-01

    Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with such problems, appropriate software tools have been developed, either as standalone products or as parts of general purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of this kind of software tool may be extremely helpful for the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely Simulink and LabVIEW, with regard to their application to photovoltaic systems.

  12. Models of Purposive Human Organization: A Comparative Study

    Science.gov (United States)

    1984-02-01

    develop techniques for organizational diagnosis with the D-M model, to be followed by intervention by S-T methodology. … relational and object data for Dinnat-Murphree model construction. 2. Develop techniques for organizational diagnosis with the Dinnat-Murphree model

  13. Comparing the dependability and associations with functioning of the DSM-5 Section III trait model of personality pathology and the DSM-5 Section II personality disorder model.

    Science.gov (United States)

    Chmielewski, Michael; Ruggero, Camilo J; Kotov, Roman; Liu, Keke; Krueger, Robert F

    2017-07-01

    Two competing models of personality psychopathology are included in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013): the traditional personality disorder (PD) model included in Section II and an alternative trait-based model included in Section III. Numerous studies have examined the validity of the alternative trait model and its official assessment instrument, the Personality Inventory for DSM-5 (PID-5; Krueger, Derringer, Markon, Watson, & Skodol, 2012). However, few studies have directly compared the trait-based model to the traditional PD model empirically in the same dataset. Moreover, to our knowledge, only a single study (Suzuki, Griffin, & Samuel, 2015) has examined the dependability of the PID-5, which is an essential component of construct validity for traits (Chmielewski & Watson, 2009; McCrae, Kurtz, Yamagata, & Terracciano, 2011). The current study directly compared the dependability of the DSM-5 traits, as assessed by the PID-5, and the traditional PD model, as assessed by the Personality Diagnostic Questionnaire-4 (PDQ-4+), in a large undergraduate sample. In addition, it evaluated and compared their associations with functioning, another essential component of personality pathology. In general, our findings indicate that most DSM-5 traits demonstrate high levels of dependability that are superior to the traditional PD model; however, some of the constructs assessed by the PID-5 may be more state-like. The models were roughly equivalent in terms of their associations with functioning. The current results provide additional support for the validity of the PID-5 and the DSM-5 Section III personality pathology model. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Comparative analysis of detection methods for congenital cytomegalovirus infection in a Guinea pig model.

    Science.gov (United States)

    Park, Albert H; Mann, David; Error, Marc E; Miller, Matthew; Firpo, Matthew A; Wang, Yong; Alder, Stephen C; Schleiss, Mark R

    2013-01-01

    To assess the validity of the guinea pig as a model for congenital cytomegalovirus (CMV) infection by comparing the effectiveness of detecting the virus by real-time polymerase chain reaction (PCR) in blood, urine, and saliva. Case-control study. Academic research. Eleven pregnant Hartley guinea pigs. Blood, urine, and saliva samples were collected from guinea pig pups delivered from pregnant dams inoculated with guinea pig CMV. These samples were then evaluated for the presence of guinea pig CMV by real-time PCR, assuming 100% transmission. Thirty-one pups delivered from 9 inoculated pregnant dams and 8 uninfected control pups underwent testing for guinea pig CMV and for auditory brainstem response hearing loss. Repeated-measures analysis of variance demonstrated no statistically significant weight difference between the infected pups and the noninfected control pups. Six infected pups demonstrated auditory brainstem response hearing loss. The sensitivity and specificity of the real-time PCR assay on saliva samples were 74.2% and 100.0%, respectively. The sensitivity of the real-time PCR on blood and urine samples was significantly lower than that on saliva samples. Real-time PCR assays of blood, urine, and saliva revealed that saliva samples show high sensitivity and specificity for detecting congenital CMV infection in guinea pigs. This finding is consistent with recent screening studies in human newborns. The guinea pig may be a good animal model in which to compare different diagnostic assays for congenital CMV infection.
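The reported saliva figures follow directly from the confusion-matrix definitions; the counts below are inferred from the percentages in the abstract (23/31 PCR-positive infected pups, 8/8 negative controls) and are shown only to make the arithmetic explicit:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts inferred from the abstract's 74.2% / 100.0% saliva figures
sens, spec = sens_spec(tp=23, fn=8, tn=8, fp=0)
print(f"sensitivity={sens:.1%}  specificity={spec:.1%}")
```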

  15. Model-based meta-analysis for comparing Vitamin D2 and D3 parent-metabolite pharmacokinetics.

    Science.gov (United States)

    Ocampo-Pelland, Alanna S; Gastonguay, Marc R; Riggs, Matthew M

    2017-08-01

    Association of Vitamin D (D3 & D2) and its 25OHD metabolite (25OHD3 & 25OHD2) exposures with various diseases is an active research area. D3 and D2 dose-equivalency and each form's ability to raise 25OHD concentrations are not well-defined. The current work describes a population pharmacokinetic (PK) model for D2 and 25OHD2 and the use of a previously developed D3-25OHD3 PK model [1] for comparing D3 and D2-related exposures. Public-source D2 and 25OHD2 PK data in healthy or osteoporotic populations, including 17 studies representing 278 individuals (15 individual-level and 18 arm-level units), were selected using search criteria in PUBMED. Data included oral, single and multiple D2 doses (400-100,000 IU/d). Nonlinear mixed effects models were developed simultaneously for D2 and 25OHD2 PK (NONMEM v7.2) by considering 1- and 2-compartment models with linear or nonlinear clearance. Unit-level random effects and residual errors were weighted by arm sample size. Model simulations compared 25OHD exposures, following repeated D2 and D3 oral administration across typical dosing and baseline ranges. D2 parent and metabolite were each described by 2-compartment models with numerous parameter estimates shared with the D3-25OHD3 model [1]. Notably, parent D2 was eliminated (converted to 25OHD) through a first-order clearance whereas the previously published D3 model [1] included a saturable non-linear clearance. Similar to 25OHD3 PK model results [1], 25OHD2 was eliminated by a first-order clearance, which was almost twice as fast as the former. Simulations at lower baselines, following lower equivalent doses, indicated that D3 was more effective than D2 at raising 25OHD concentrations. Due to saturation of D3 clearance, however, at higher doses or baselines, the probability of D2 surpassing D3's ability to raise 25OHD concentrations increased substantially. Since 25OHD concentrations generally surpassed 75 nmol/L at these higher baselines by 3 months, there would be no
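The key mechanistic contrast in this record, first-order versus saturable clearance, can be sketched with a crude one-compartment simulation (all units and parameter values hypothetical, not from the paper): with Michaelis-Menten elimination, raising the input more than proportionally raises steady-state exposure once clearance begins to saturate.

```python
def simulate(doses_per_day, clearance=1.0, days=90, dt=0.01,
             saturable=False, vmax=50.0, km=40.0):
    """Crude one-compartment Euler simulation (hypothetical units):
    constant-rate input, linear or Michaelis-Menten elimination."""
    c = 0.0
    for _ in range(int(days / dt)):
        elim = (vmax * c / (km + c)) if saturable else clearance * c
        c += (doses_per_day - elim) * dt
    return c

for dose in (10, 40):
    lin = simulate(dose)
    sat = simulate(dose, saturable=True)
    print(dose, round(lin, 1), round(sat, 1))
```

With the linear model, quadrupling the dose quadruples the steady-state level; with the saturable model, the same dose increase produces a disproportionately larger level, qualitatively like the D3 behaviour described above.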

  16. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    Science.gov (United States)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    Survival time of patients with a disease and the incidence of that particular disease (a count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated, such that diseases that occur rarely could have shorter survival times, or vice versa. Due to this fact, joint modelling of these two variables will provide more informative and certainly improved results compared with modelling them separately. The authors previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model, to jointly model survival and count. As the Artificial Neural Network (ANN) has become a powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, a Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and the correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
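The three fit measures named above (RMSE, AME, R) are straightforward to compute; a minimal sketch with invented observed/predicted values:

```python
import math

def fit_measures(obs, pred):
    """RMSE, absolute mean error (AME) and Pearson correlation (R)."""
    n = len(obs)
    mean_o = sum(obs) / n
    mean_p = sum(pred) / n
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    ame = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    cov = sum((o - mean_o) * (p - mean_p) for o, p in zip(obs, pred))
    var_o = sum((o - mean_o) ** 2 for o in obs)
    var_p = sum((p - mean_p) ** 2 for p in pred)
    r = cov / math.sqrt(var_o * var_p)
    return rmse, ame, r

# Hypothetical observed survival times vs. model predictions
obs  = [12.0, 30.0, 45.0, 60.0, 85.0]
pred = [15.0, 28.0, 48.0, 55.0, 90.0]
rmse, ame, r = fit_measures(obs, pred)
print(f"RMSE={rmse:.2f} AME={ame:.2f} R={r:.3f}")
```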

  17. Comparing personality disorder models: cross-method assessment of the FFM and DSM-IV-TR.

    Science.gov (United States)

    Samuel, Douglas B; Widiger, Thomas W

    2010-12-01

    The current edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR; American Psychiatric Association, 2000) defines personality disorders as categorical entities that are distinct from each other and from normal personality traits. However, many scientists now believe that personality disorders are best conceptualized using a dimensional model of traits that span normal and abnormal personality, such as the Five-Factor Model (FFM). Yet if the FFM, or any dimensional model, is to be considered a credible alternative to the current model, it must first demonstrate an increment in the validity of the assessment offered within a clinical setting. Thus, the current study extended previous research by comparing the convergent and discriminant validity of the current DSM-IV-TR model to the FFM across four assessment methodologies. Eighty-eight individuals receiving ongoing psychotherapy were assessed for the FFM and the DSM-IV-TR personality disorders using self-report, informant report, structured interview, and therapist ratings. The results indicated that the FFM had an appreciable advantage over the DSM-IV-TR in terms of discriminant validity and, at the domain level, convergent validity. Implications of the findings and directions for future research are discussed.

  18. Comparing photo modeling methodologies and techniques: the instance of the Great Temple of Abu Simbel

    Directory of Open Access Journals (Sweden)

    Sergio Di Tondo

    2013-10-01

    Fifty years after the salvage of the Abu Simbel temples, it has been possible to experiment with contemporary photo-modeling tools, beginning from the original data of the photogrammetric survey carried out in the 1950s. This prompted a reflection on “Image Based” methods and modeling techniques, comparing strict 3D digital photogrammetry with the latest Structure from Motion (SFM) systems. The topographic survey data, the original photogrammetric stereo pairs, the point coordinates and their representation in contour lines made it possible to obtain a model of the monument in its configuration before the relocation of the temples. Since a direct survey could not be carried out, touristic shots were used to create SFM models for geometric comparisons.

  19. WORKABILITY OF A MANAGEMENT CONTROL MODEL IN SERVICE ORGANIZATIONS: A COMPARATIVE STUDY OF REACTIVE, PROACTIVE AND COACTIVE PHILOSOPHIES

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    2006-11-01

    The main objective of this study was to compare and contrast three philosophies of management control models in the process of decision-making, namely the reactive, proactive and coactive. The research methodology was based on a literature review and a descriptive/exploratory approach. Additionally, a survey of 20 service organizations was carried out in order to make the analysis wider-reaching. To do so, the following steps were taken: firstly, the fundamentals of the reactive, proactive and coactive models were highlighted; secondly, management behaviors in the three approaches were compared, highlighting concepts and their practical application and thus mapping managerial relationships in the organization. In so doing, we formed the hypothesis that middle and top managers who adopt control models distant from the more coactive one usually spend a greater number of working hours on problem-solving, leaving little or no time for planning purposes. Finally, to consolidate the study, we adopted qualitative data collection, whereby a content analysis was carried out with the assistance of six categories. The results have shown the need for a change in management paradigms, so that firms are not compared only through financial perspectives, without considering the analysis of management control models which, according to this study, directly influence the operational results of the organizations.

  20. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    …would be needed by a Cyber Security Operations Centre in order to perform offensive cyber operations?". The analysis was performed using seven models of cyber-attack as a springboard, and resulted in the development of what is described as a canonical...

  1. The Evolution of the Solar Magnetic Field: A Comparative Analysis of Two Models

    Science.gov (United States)

    McMichael, K. D.; Karak, B. B.; Upton, L.; Miesch, M. S.; Vierkens, O.

    2017-12-01

    Understanding the complexity of the solar magnetic cycle is a task that has plagued scientists for decades. However, with the help of computer simulations, we have begun to gain more insight into possible solutions to the plethora of questions inside the Sun. STABLE (Surface Transport and Babcock Leighton) is a newly developed 3D dynamo model that can reproduce features of the solar cycle. In this model, the tilted bipolar sunspots are formed on the surface (based on the toroidal field at the bottom of the convection zone) and then decay and disperse, producing the poloidal field. Since STABLE is a 3D model, it is able to solve the full induction equation in the entirety of the solar convection zone as well as incorporate many free parameters (such as spot depth and turbulent diffusion) which are difficult to observe. In an attempt to constrain some of these free parameters, we compare STABLE to a surface flux transport model called AFT (Advective Flux Transport) which solves the radial component of the magnetic field on the solar surface. AFT is a state-of-the-art surface flux transport model that has a proven record of being able to reproduce solar observations with great accuracy. In this project, we implement synthetic bipolar sunspots into both models, using identical surface parameters, and run the models for comparison. We demonstrate that the 3D structure of the sunspots in the interior and the vertical diffusion of the sunspot magnetic field play an important role in establishing the surface magnetic field in STABLE. We found that when a sufficient amount of downward magnetic pumping is included in STABLE, the surface magnetic field from this model becomes insensitive to the internal structure of the sunspot and more consistent with that of AFT.

  2. An economic model to compare the profitability of pay-per-use and fixed-fee licensing

    NARCIS (Netherlands)

    Postmus, Douwe; Wijngaard, Jacob; Wortmann, Hans

    This paper develops an economic model to compare the profitability of two strategies for the pricing of packaged software: fixed-fee and pay-per-use licensing. It is assumed that the market consists of a monopoly software vendor who is selling packaged software to customers who are homogeneous in

  3. Comparative Analysis of Soft Computing Models in Prediction of Bending Rigidity of Cotton Woven Fabrics

    Science.gov (United States)

    Guruprasad, R.; Behera, B. K.

    2015-10-01

    Quantitative prediction of fabric mechanical properties is an essential requirement for the design engineering of textile and apparel products. In this work, the possibility of predicting the bending rigidity of cotton woven fabrics has been explored with the application of an Artificial Neural Network (ANN) and two hybrid methodologies, namely neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which a genetic algorithm was first used as the learning algorithm to optimize the number of neurons and the connection weights of the neural network. The genetic-algorithm-optimized network structure was then further trained using the back propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the predictions of the neuro-genetic and ANFIS models were better than those of the back propagation neural network model.

  4. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
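
    As the abstract notes, an interval with a reasonable-looking standard error can still have poor coverage, which only a systematic check reveals. A minimal Monte Carlo sketch of such a coverage check (all values assumed for illustration): a nominal 95% normal-theory z-interval for a mean undercovers at small n, which a t-based critical value would correct.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, z = 10, 5000, 1.96   # small sample; z critical value instead of t
true_mean = 0.5
hits = 0
for _ in range(reps):
    x = rng.normal(loc=true_mean, scale=1.0, size=n)
    half = z * x.std(ddof=1) / np.sqrt(n)   # half-width of the nominal 95% interval
    m = x.mean()
    hits += (m - half <= true_mean <= m + half)
coverage = hits / reps
print(round(coverage, 3))   # noticeably below the nominal 0.95 at n=10
```

    The empirical coverage lands near 0.92 rather than 0.95, illustrating how undercoverage inflates Type-I error rates.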

  5. Fault diagnosis and comparing risk for the steel coil manufacturing process using statistical models for binary data

    International Nuclear Information System (INIS)

    Debón, A.; Carlos Garcia-Díaz, J.

    2012-01-01

Advanced statistical models can help industry to design more economical and rational investment plans. Fault detection and diagnosis is an important problem in continuous hot dip galvanizing. Increasingly stringent quality requirements in the automotive industry also require ongoing efforts in process control to make processes more robust. Robust methods for estimating the quality of galvanized steel coils are an important tool for the comprehensive monitoring of the performance of the manufacturing process. This study applies different statistical regression models: generalized linear models, generalized additive models and classification trees to estimate the quality of galvanized steel coils on the basis of short time histories. The data, consisting of 48 galvanized steel coils, was divided into sets of conforming and nonconforming coils. Five variables were selected for monitoring the process: steel strip velocity and four bath temperatures. The present paper reports a comparative evaluation of statistical models for binary data using Receiver Operating Characteristic (ROC) curves. A ROC curve is a technique for visualizing, organizing and selecting classifiers based on their performance. The purpose of this paper is to examine their use in research to obtain the best model for predicting the probability of a defective steel coil. In relation to the work of other authors, who only propose goodness-of-fit statistics, we should highlight one distinctive feature of the methodology presented here: the possibility of comparing the different models with ROC graphs, which are based on model classification performance. Finally, the results are validated by bootstrap procedures.
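
    The ROC-based comparison described above can be sketched with the rank-sum formulation of the area under the ROC curve (AUC); the labels and classifier scores below are hypothetical stand-ins, not the paper's coil data.

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability that a
    randomly chosen nonconforming item scores higher than a conforming one."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # count pairwise wins; ties count as half a win
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# hypothetical defect probabilities from two classifiers on the same 10 coils
y    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
glm  = [0.1, 0.2, 0.15, 0.3, 0.4, 0.35, 0.6, 0.7, 0.8, 0.9]
tree = [0.2, 0.1, 0.4, 0.3, 0.5, 0.3, 0.5, 0.6, 0.7, 0.8]
print(roc_auc(y, glm), roc_auc(y, tree))   # the larger AUC marks the better classifier
```

    Ranking models by AUC compares their whole classification performance rather than a single goodness-of-fit statistic, which is the point the abstract makes about ROC graphs.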

  6. Comparative analysis of turbulence models for flow simulation around a vertical axis wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Roy, S.; Saha, U.K. [Indian Institute of Technology Guwahati, Dept. of Mechanical Engineering, Guwahati (India)

    2012-07-01

An unsteady computational investigation of the static torque characteristics of a drag-based vertical axis wind turbine (VAWT) has been carried out using the finite volume based computational fluid dynamics (CFD) software package Fluent 6.3. A comparative study among various turbulence models was conducted in order to predict the flow over the turbine at static condition, and the results were validated against the available experimental results. CFD simulations were carried out at different turbine angular positions between 0 deg. and 360 deg. in steps of 15 deg. Results have shown that due to high static pressure on the returning blade of the turbine, the net static torque is negative at angular positions of 105 deg.-150 deg. The realizable k-ε turbulence model has shown a better simulation capability than the other turbulence models for the analysis of the static torque characteristics of the drag-based VAWT. (Author)

  7. Contextualizing Teacher Autonomy in Time and Space: A Model for Comparing Various Forms of Governing the Teaching Profession

    Science.gov (United States)

    Wermke, Wieland; Höstfält, Gabriella

    2014-01-01

    This study aims to develop a model for comparing different forms of teacher autonomy in various national contexts and at different times. Understanding and explaining local differences and global similarities in the teaching profession in a globalized world require conceptions that contribute to further theorization of comparative and…

  8. Modeling of Chromium (III) Removal from Heavy Metals Mixture Solutions in Continuous Flow Systems: A Comparative Study between BDST and Yoon -Nelson Models

    International Nuclear Information System (INIS)

    Ahmed, A.Z.

    2011-01-01

The aim of this work is to model the removal of chromium (III) from aqueous solution using activated carbon as the adsorbent. Studies were conducted in a continuous fixed-bed packed column under different operating conditions such as bed height, flow rate, fluid velocity and adsorbent particle size. The Yoon-Nelson model was applied to the experimental data to predict the breakthrough curves by calculating the rate constant k and the 50% breakthrough time, θ. The Bed Depth Service Time (BDST) model was applied to determine the BDST constant K and the capacity of the adsorbent, N0. Results obtained from both models were compared with the experimental breakthrough curves and a satisfactory agreement was noticed. Therefore, the Yoon-Nelson and BDST models were found suitable for determining the parameters of the column design. The Yoon-Nelson model was found to be more accurate in representing the system than the BDST model, even though it is less complicated than other models.
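
    A minimal sketch of the Yoon-Nelson fit described above, using the standard linearized form ln(C/(C0 − C)) = k·t − k·θ so that k and θ drop out of a straight-line fit; the breakthrough data below are invented for illustration.

```python
import numpy as np

# hypothetical breakthrough data: time (min) and effluent concentration ratio C/C0
t  = np.array([30, 60, 90, 120, 150, 180], dtype=float)
cc = np.array([0.05, 0.15, 0.40, 0.65, 0.85, 0.95])

# Yoon-Nelson: C/C0 = 1 / (1 + exp(k*(theta - t)))
# linearized:  ln(cc / (1 - cc)) = k*t - k*theta
y = np.log(cc / (1.0 - cc))
slope, intercept = np.polyfit(t, y, 1)
k = slope                 # rate constant (1/min)
theta = -intercept / k    # 50% breakthrough time (min)
print(round(k, 4), round(theta, 1))
```

    With the data above, θ comes out near the midpoint of the breakthrough (around 104 min), as the symmetric sigmoid form implies.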

  9. A statin a day keeps the doctor away: comparative proverb assessment modelling study

    Science.gov (United States)

    Mizdrak, Anja; Scarborough, Peter

    2013-01-01

Objective To model the effect on UK vascular mortality of all adults over 50 years old being prescribed either a statin or an apple a day. Design Comparative proverb assessment modelling study. Setting United Kingdom. Population Adults aged over 50 years. Intervention Either a statin a day for people not already taking a statin or an apple a day for everyone, assuming 70% compliance and no change in calorie consumption. The modelling used routinely available UK population datasets; parameters describing the relations between statins, apples, and health were derived from meta-analyses. Main outcome measure Mortality due to vascular disease. Results The estimated annual reduction in deaths from vascular disease of a statin a day, assuming 70% compliance and a reduction in vascular mortality of 12% (95% confidence interval 9% to 16%) per 1.0 mmol/L reduction in low density lipoprotein cholesterol, is 9400 (7000 to 12 500). The equivalent reduction from an apple a day, modelled using the PRIME model (assuming an apple weighs 100 g and that overall calorie consumption remains constant) is 8500 (95% credible interval 6200 to 10 800). Conclusions Both nutritional and pharmaceutical approaches to the prevention of vascular disease may have the potential to reduce UK mortality significantly. With similar reductions in mortality, a 150-year-old health promotion message is able to match modern medicine and is likely to have fewer side effects.

  10. Physician-patient argumentation and communication, comparing Toulmin's model, pragma-dialectics, and American sociolinguistics.

    Science.gov (United States)

    Rivera, Francisco Javier Uribe; Artmann, Elizabeth

    2015-12-01

    This article discusses the application of theories of argumentation and communication to the field of medicine. Based on a literature review, the authors compare Toulmin's model, pragma-dialectics, and the work of Todd and Fisher, derived from American sociolinguistics. These approaches were selected because they belong to the pragmatic field of language. The main results were: pragma-dialectics characterizes medical reasoning more comprehensively, highlighting specific elements of the three disciplines of argumentation: dialectics, rhetoric, and logic; Toulmin's model helps substantiate the declaration of diagnostic and therapeutic hypotheses, and as part of an interpretive medicine, approximates the pragma-dialectical approach by including dialectical elements in the process of formulating arguments; Fisher and Todd's approach allows characterizing, from a pragmatic analysis of speech acts, the degree of symmetry/asymmetry in the doctor-patient relationship, while arguing the possibility of negotiating treatment alternatives.

  11. COMPARATIVE INTERNATIONAL PERSPECTIVES ON MARKET-ORIENTED MODELS OF CORPORATE GOVERNANCE

    Directory of Open Access Journals (Sweden)

    Balaciu Diana

    2010-07-01

    Full Text Available The study of corporate governance requires not only the knowledge of economic, financial, managerial and sociological mechanisms and norms, but it must also incorporate an ethical dimension, while remaining aware of the demands of various stakeholders. The interest towards good governance practice is very present in the company laws of many countries. National differences may lead to specific attributes derived from the meaning that is given to the role of competition and market dispersion of capital. Based on a research consisting of a critical and comparative perspective, the present contribution is dominated by qualitative and mixed methods. In conclusion, it can be said that a market-oriented corporate governance model, though not part of the European Union’s convergence process, may very well respond to the increasing importance of investors’ rights and to the gradual evolution of corporate responsibilities, beyond the national context, with the aim of ensuring market liberalization.

  12. Comparative molecular analysis of early and late cancer cachexia-induced muscle wasting in mouse models.

    Science.gov (United States)

    Sun, Rulin; Zhang, Santao; Lu, Xing; Hu, Wenjun; Lou, Ning; Zhao, Yan; Zhou, Jia; Zhang, Xiaoping; Yang, Hongmei

    2016-12-01

Cancer-induced muscle wasting, which commonly occurs in cancer cachexia, is characterized by impaired quality of life and poor patient survival. To identify an appropriate treatment, research on the mechanism underlying muscle wasting is essential. Thus far, studies on muscle wasting using cancer cachectic models have generally focused on early cancer cachexia (ECC), before severe body weight loss occurs. In the present study, we established models of ECC and late cancer cachexia (LCC) and compared different stages of cancer cachexia using two cancer cachectic mouse models induced by colon-26 (C26) adenocarcinoma or Lewis lung carcinoma (LLC). In each model, tumor-bearing (TB) and control (CN) mice were injected with cancer cells and PBS, respectively. The TB and CN mice, which were euthanized on the 24th day or the 36th day after injection, were defined as the ECC and ECC-CN mice or the LCC and LCC-CN mice. In addition, the tissues were harvested and analyzed. We found that both the ECC and LCC mice developed cancer cachexia. The amount of muscle loss differed between the ECC and LCC mice. Moreover, the expression of some molecules was altered in the muscles from the LCC mice but not in those from the ECC mice compared with their respective CN mice. In conclusion, the molecules with altered expression in the muscles from the ECC and LCC mice were not exactly the same. These findings may provide some clues for therapy which could prevent the muscle wasting in cancer cachexia from progressing to the late stage.

  13. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods, namely the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
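
    The weighting idea can be sketched as follows: least-squares weights (in the spirit of the Granger-Ramanathan variants, here the unconstrained form) are fitted on member simulations and the blend is scored with the Nash-Sutcliffe efficiency. All data below are synthetic placeholders, not the study's 429-watershed setup.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 is no better than the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

rng = np.random.default_rng(1)
q_obs = 10 + 5 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.5, 200)
# three hypothetical member simulations with different biases and noise levels
members = np.column_stack([
    q_obs + rng.normal(1.0, 1.0, 200),
    q_obs * 0.9 + rng.normal(0.0, 1.0, 200),
    q_obs + rng.normal(-0.5, 1.5, 200),
])

# unconstrained least-squares weights on the calibration record
w, *_ = np.linalg.lstsq(members, q_obs, rcond=None)
blend = members @ w
print(round(nse(q_obs, blend), 3), [round(nse(q_obs, m), 3) for m in members.T])
```

    Because each individual member is itself one admissible weight vector, the least-squares blend can never score a worse calibration NSE than the best member, which is the intuition behind the averaging methods compared above.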

  14. Energy modeling and comparative assessment beyond the market

    International Nuclear Information System (INIS)

    Rogner, H.-H.; Langlois, L.; McDonald, A.; Jalal, I.

    2004-01-01

Market participants engage in constant comparative assessment of prices, available supplies, and consumer options. Such implicit comparative assessment is a sine qua non for decision making in, and the smooth functioning of, competitive markets, but it is not always sufficient for policy makers who make decisions based on priorities other than, or in addition to, market prices. Supplementary mechanisms are needed to make explicit, to expose for consideration and to incorporate into their decision making processes, broader factors that are not necessarily reflected directly in the market price of a good or service. These would include, for example, employment, environment, national security or trade considerations. They would include long-term considerations, e.g., global warming or greatly diminished future supplies of oil and gas. This paper explores different applications of comparative assessment beyond the market, reviews different approaches for accomplishing such evaluations, and presents some tools available for conducting various types of extra-market comparative assessment, including those currently in use by Member States of the IAEA. (author)

  15. Comparative Study of Three Data Assimilation Methods for Ice Sheet Model Initialisation

    Science.gov (United States)

    Mosbeux, Cyrille; Gillet-Chaulet, Fabien; Gagliardini, Olivier

    2015-04-01

The current global warming has direct consequences for ice-sheet mass loss, contributing to sea level rise. This loss is generally driven by an acceleration of some coastal outlet glaciers, and reproducing these mechanisms is one of the major issues in ice-sheet and ice flow modelling. The construction of an initial state, as close as possible to current observations, is required as a prerequisite before producing any reliable projection of the evolution of ice-sheets. For this step, inverse methods are often used to infer badly known or unknown parameters. For instance, the adjoint inverse method has been implemented and applied with success by different authors in different ice flow models in order to infer the basal drag [Schafer et al., 2012; Gillet-Chaulet et al., 2012; Morlighem et al., 2010]. Other data fields, such as ice surface and bedrock topography, are measurable with more or less uncertainty, but only locally along tracks, and are interpolated onto a finer model grid. All these approximations lead to errors in the elevation model and give rise to an ill-posed problem inducing non-physical anomalies in flux divergence [Seroussi et al., 2011]. A solution to dissipate these flux divergences is to conduct a surface relaxation step, at the expense of the accuracy of the modelled surface [Gillet-Chaulet et al., 2012]. Other solutions, based on the inversion of ice thickness and basal drag, were proposed [Perego et al., 2014; Pralong & Gudmundsson, 2011]. In this study, we create a twin experiment to compare three different assimilation algorithms based on inverse methods and nudging to constrain the bedrock friction and the bedrock elevation: (i) cyclic inversion of the friction parameter and bedrock topography using the adjoint method, (ii) cycles coupling inversion of the friction parameter using the adjoint method with nudging of the bedrock topography, (iii) one-step inversion of both parameters with the adjoint method. The three methods show a clear improvement in parameters

  16. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We present the flexibility of the hazard modeling distribution, which approaches different distributions.
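
    One concrete instance of the flexibility mentioned above: the Weibull hazard function h(t) = (k/λ)·(t/λ)^(k−1) yields a decreasing, constant (i.e. exponential) or increasing hazard depending on its shape parameter k. A small numerical sketch, with assumed parameter values:

```python
import numpy as np

def weibull_hazard(t, shape, scale=1.0):
    # h(t) = f(t) / (1 - F(t)) = (shape/scale) * (t/scale)**(shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

t = np.linspace(0.1, 3.0, 30)
decreasing = weibull_hazard(t, shape=0.5)   # early-failure (infant mortality) behaviour
constant   = weibull_hazard(t, shape=1.0)   # reduces to the exponential: h(t) = 1/scale
increasing = weibull_hazard(t, shape=2.0)   # wear-out behaviour
print(decreasing[0] > decreasing[-1], constant[0] == constant[-1], increasing[0] < increasing[-1])
```

    This is why a single parametric family can mimic several of the distributions compared in hazard analysis.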

  17. Body fat measurement among Singaporean Chinese, Malays and Indians: a comparative study using a four-compartment model and different two-compartment models

    NARCIS (Netherlands)

    Deurenberg-Yap, M.; Schmidt, G.; Staveren, van W.A.; Hautvast, J.G.A.J.; Deurenberg, P.

    2001-01-01

This cross-sectional study compared body fat percentage (BF%) obtained from a four-compartment (4C) model with BF% from hydrometry (using 2H2O), dual-energy X-ray absorptiometry (DXA) and densitometry among the three main ethnic groups (Chinese, Malays and Indians) in Singapore, and determined the

  18. Comparing Models GRM, Refraction Tomography and Neural Network to Analyze Shallow Landslide

    Directory of Open Access Journals (Sweden)

    Armstrong F. Sompotan

    2011-11-01

Full Text Available Detailed investigations of landslides are essential to understand fundamental landslide mechanisms. The seismic refraction method has proven to be a useful geophysical tool for investigating shallow landslides. The objective of this study is to introduce a new workflow using a neural network in analyzing seismic refraction data and to compare the result with two established methods: the general reciprocal method (GRM) and refraction tomography. The GRM is effective when the velocity structure is relatively simple and refractors are gently dipping. Refraction tomography is capable of modeling the complex velocity structures of landslides. Neural networks are a promising alternative, especially to time-consuming and complicated numerical methods, because they can establish a relationship between an input and output space for mapping seismic velocity. Therefore, we made a preliminary attempt to evaluate the applicability of a neural network to determine the velocity and elevation of subsurface synthetic models corresponding to arrival times. The training and testing process of the neural network was successfully accomplished using the synthetic data. Furthermore, we evaluated the neural network using observed data. The result of the evaluation indicates that the neural network can compute velocity and elevation corresponding to arrival times. The similarity of those models shows the success of the neural network as a new alternative in seismic refraction data interpretation.

  19. Comparing Intrinsic Connectivity Models for the Primary Auditory Cortices

    Science.gov (United States)

    Hamid, Khairiah Abdul; Yusoff, Ahmad Nazlim; Mohamad, Mazlyfarina; Hamid, Aini Ismafairus Abd; Manan, Hanani Abd

    2010-07-01

This fMRI study is about modeling the intrinsic connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in the human primary auditory cortices. Ten healthy male subjects participated and were required to listen to a white noise stimulus during the fMRI scans. Two intrinsic connectivity models comprising bilateral HG and STG were constructed using statistical parametric mapping (SPM) and dynamic causal modeling (DCM). The group Bayes factor (GBF), positive evidence ratio (PER) and Bayesian model selection (BMS) for group studies were used in model comparison. Group results indicated significant bilateral asymmetrical activation (p < 0.001, uncorrected) in HG and STG. Comparison results showed strong evidence for Model 2 as the preferred model (STG as the input center), with a GBF value of 5.77 × 10⁷³. The model is preferred by 6 out of 10 subjects. These results were supported by the BMS results for group studies. A one-sample t-test on connection values obtained from Model 2 indicates unidirectional parallel connections from STG to bilateral HG (p < 0.05). Model 2 was determined to be the most probable intrinsic connectivity model between bilateral HG and STG when listening to white noise.

  20. Comparative Study of Fatigue Damage Models Using Different Number of Classes Combined with the Rainflow Method

    Directory of Open Access Journals (Sweden)

    S. Zengah

    2013-06-01

Full Text Available Fatigue damage increases with applied load cycles in a cumulative manner. Fatigue damage models play a key role in the life prediction of components and structures subjected to random loading. The aim of this paper is to examine the performance of the previously proposed and validated “Damaged Stress Model” against other fatigue models under random loading, before and after reconstruction of the load histories. To achieve this objective, several linear and nonlinear models proposed for fatigue life estimation are considered, and a batch of specimens made of 6082-T6 aluminum alloy is subjected to random loading. The damage was cumulated by Miner’s rule, the Damaged Stress Model (DSM), the Henry model and the Unified Theory (UT), and random cycles were counted with a rain-flow algorithm. Experimental data on high-cycle fatigue under complex loading histories with different mean and amplitude stress values are analyzed for life calculation, and the model predictions are compared.
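
    The baseline cumulative-damage idea, Miner's rule, can be sketched in a few lines; the cycle counts and S-N lives below are hypothetical placeholders for what a rain-flow count and material data would actually supply.

```python
# Palmgren-Miner linear damage rule: D = sum(n_i / N_i); failure is predicted at D >= 1
def miner_damage(cycle_counts, cycles_to_failure):
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

# hypothetical rain-flow output: counted cycles at three stress amplitudes,
# paired with the S-N life at each amplitude
n_counted = [2e4, 5e3, 200]
N_failure = [1e6, 1e5, 2e4]

D = miner_damage(n_counted, N_failure)
print(round(D, 3))   # fraction of life consumed under the linear rule
```

    Nonlinear models such as the DSM modify this accumulation to account for load-sequence effects, which the linear rule ignores.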

  1. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    Science.gov (United States)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
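
    A mixed exponential distribution for rain cell intensity simply draws each cell from one of two exponential components; a sketch with assumed mixing weight and component means (not the fitted Damansara basin parameters):

```python
import numpy as np

def sample_mixed_exponential(n, p, mean1, mean2, rng):
    """Draw n values from the mixture p*Exp(mean1) + (1-p)*Exp(mean2)."""
    use_first = rng.random(n) < p
    return np.where(use_first,
                    rng.exponential(mean1, n),
                    rng.exponential(mean2, n))

rng = np.random.default_rng(42)
x = sample_mixed_exponential(100_000, p=0.7, mean1=1.0, mean2=8.0, rng=rng)
# mixture mean = p*mean1 + (1-p)*mean2 = 0.7*1.0 + 0.3*8.0 = 3.1
print(round(x.mean(), 2))
```

    The heavier tail contributed by the second component is what lets the mixture capture occasional intense tropical rain cells better than a single exponential with the same mean.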

  2. A web portal for accessing, viewing and comparing in situ observations, EO products and model output data

    Science.gov (United States)

    Vines, Aleksander; Hamre, Torill; Lygre, Kjetil

    2014-05-01

    The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance the knowledge and predictive capacities of how marine ecosystems will respond to global change. A main task has been to set up a data delivery and monitoring core service following the open and free data access policy implemented in the Global Monitoring for the Environment and Security (GMES) programme. A key feature of the system is its ability to compare data from different datasets, including an option to upload one's own netCDF files. The user can for example search in an in situ database for different variables (like temperature, salinity, different elements, light, specific plankton types or rate measurements) with different criteria (bounding box, date/time, depth, Longhurst region, cruise/transect) and compare the data with model data. The user can choose model data or Earth observation data from a list, or upload his/her own netCDF files to use in the comparison. The data can be visualized on a map, as graphs and plots (e.g. time series and property-property plots), or downloaded in various formats. The aim is to ensure open and free access to historical plankton data, new data (EO products and in situ measurements), model data (including estimates of simulation error) and biological, environmental and climatic indicators to a range of stakeholders, such as scientists, policy makers and environmental managers. We have implemented a web-based GIS(Geographical Information Systems) system and want to demonstrate the use of this. The tool is designed for a wide range of users: Novice users, who want a simple way to be able to get basic information about the current state of the marine planktonic ecosystem by utilizing predefined queries and comparisons with models. Intermediate level users who want to explore the database on their own and customize the prefedined setups. Advanced users who want to perform complex queries and

  3. A comparative study on GM (1,1) and FRMGM (1,1) model in forecasting FBM KLCI

    Science.gov (United States)

    Ying, Sah Pei; Zakaria, Syerrina; Mutalib, Sharifah Sakinah Syed Abd

    2017-11-01

The FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBM KLCI) is a group of indexes combined in a standardized way and is used to measure the overall Malaysian market across time. Although a composite index can give investors ideas about the stock market, it is hard to predict accurately because it is volatile, so it is necessary to identify the best model to forecast the FBM KLCI. The objective of this study is to determine the more accurate forecasting model between the GM (1,1) model and the Fourier Residual Modification GM (1,1) (FRMGM (1,1)) model for forecasting the FBM KLCI. In this study, the actual daily closing data of the FBM KLCI were collected from January 1, 2016 to March 15, 2016. The GM (1,1) model and the FRMGM (1,1) model were used to build the grey model and to test the forecasting power of both models. The Mean Absolute Percentage Error (MAPE) was used as the measure to determine the best model. Values forecast by the FRMGM (1,1) model differ less from the actual values than those of the GM (1,1) model for both in-sample and out-sample data. The MAPE of the FRMGM (1,1) model is also lower than that of the GM (1,1) model for in-sample and out-sample data. These results show that the FRMGM (1,1) model is better than the GM (1,1) model for forecasting the FBM KLCI.
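
    A minimal sketch of the classic GM (1,1) construction and a MAPE comparison (the Fourier residual modification is omitted); the index values below are invented placeholders, not the FBM KLCI data.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Classic GM(1,1) grey model: fit on series x0, return fitted + forecast values."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                      # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background (mean-generated) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey development/control coefficients
    k = np.arange(1, len(x0) + horizon)
    # restored (inverse-AGO) series: x0_hat(k+1) = (x0(1) - b/a) * (1 - e^a) * e^(-a*k)
    return np.concatenate([[x0[0]], (x0[0] - b / a) * (1 - np.exp(a)) * np.exp(-a * k)])

def mape(actual, pred):
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return 100.0 * np.abs((actual - pred) / actual).mean()

index = [1650.0, 1662.0, 1671.0, 1683.0, 1690.0]   # hypothetical daily closes
fitted = gm11_forecast(index, horizon=1)
print(round(mape(index, fitted[:5]), 3), round(fitted[5], 1))
```

    The FRMGM (1,1) variant compared in the study would additionally fit a Fourier series to the residuals of this fit and add it back as a correction.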

  4. A comparative study in the UNCITRAL model law about the independence of the arbitration clause

    Directory of Open Access Journals (Sweden)

    Atefeh Darami Zadeh

    2018-02-01

Full Text Available The aim of the paper was to investigate the independence of the arbitration clause from the main contract in the International Commercial Arbitration Law of Iran, with a comparative study of the UNCITRAL model law. The effectiveness of this type of procedure, its coordination with specific objectives and the special status of international traders have led to their increasing willingness to use this legal solution. We use a comparative, quasi-experimental method to describe similarities and differences in variables in two or more existing groups in a natural setting; it resembles an experiment in that it uses manipulation but lacks random assignment of individual subjects. This study begins by analyzing international arbitration and the UNCITRAL model rules (Chapters I to VI), then reviews national arbitration (Chapter V); thus, the effects of the principle of independence of the arbitration clause can be seen (Chapter VII) and, later, the problems that arise (Chapters VIII to X). Even so, the main conclusion is that the parties usually agree to resolve their international disputes through arbitration, which is judged privately and is universally accepted.

  5. Participation of mitochondrial diazepam binding inhibitor receptors in the anticonflict, antineophobic and anticonvulsant action of 2-aryl-3-indoleacetamide and imidazopyridine derivatives.

    Science.gov (United States)

    Auta, J; Romeo, E; Kozikowski, A; Ma, D; Costa, E; Guidotti, A

    1993-05-01

The 2-hexyl-indoleacetamide derivative, FGIN-1-27 [N,N-di-n-hexyl-2-(4-fluorophenyl)indole-3-acetamide], and the imidazopyridine derivative, alpidem, both bind with high affinity to glial mitochondrial diazepam binding inhibitor receptors (MDR) and increase mitochondrial steroidogenesis. Although FGIN-1-27 is selective for the MDR, alpidem also binds to the allosteric modulatory site of the gamma-aminobutyric acid-A receptor where the benzodiazepines bind. FGIN-1-27 and alpidem, like the neurosteroid 3 alpha,21-dihydroxy-5 alpha-pregnan-20-one (THDOC), clonazepam and zolpidem (direct allosteric modulators of gamma-aminobutyric acid-A receptors), delay the onset of isoniazid- and metrazol-induced convulsions. The anti-isoniazid convulsant action of FGIN-1-27 and alpidem, but not that of THDOC, is blocked by PK 11195. In contrast, flumazenil completely blocked the anticonvulsant action of clonazepam and zolpidem and partially blocked that of alpidem, but it did not affect the anticonvulsant action of THDOC and FGIN-1-27. Alpidem, like clonazepam, zolpidem and diazepam, but not THDOC or FGIN-1-27, delays the onset of bicuculline-induced convulsions. In two animal models of anxiety, neophobic behavior in the elevated plus maze test and conflict-punishment behavior in the Vogel conflict test, THDOC and FGIN-1-27 elicited anxiolytic-like effects in a flumazenil-insensitive manner, whereas alpidem elicited a similar anxiolytic effect that was partially blocked by flumazenil. Whereas PK 11195 blocked the effect of FGIN-1-27 and partially blocked that of alpidem, it did not affect THDOC in either animal model of anxiety.(ABSTRACT TRUNCATED AT 250 WORDS)

  6. Comparative analysis of different methods in mathematical modelling of the recuperative heat exchangers

    International Nuclear Information System (INIS)

    Debeljkovic, D.Lj.; Stevic, D.Z.; Simeunovic, G.V.; Misic, M.A.

    2015-01-01

    Heat exchangers are frequently used as constructive elements in various plants, and their dynamics are very important. Their operation is usually controlled by manipulating inlet fluid temperatures or mass flow rates. On the basis of accepted and critically clarified assumptions, a linearized mathematical model of the cross-flow heat exchanger has been derived, taking into account the wall dynamics. The model is based on the fundamental law of energy conservation, covers all heat accumulation storages in the process, and leads to a set of partial differential equations (PDE) whose solution is not possible in closed form. To overcome these difficulties, this paper analyzes different methods for modelling the heat exchanger: an approach based on the Laplace transformation, approximation of the partial differential equations by finite differences, the method of physical discretization, and the transport approach. Specifying the input temperatures and output variables under constant initial conditions, the step transient responses have been simulated and presented in graphic form in order to compare the results of the four characteristic methods considered in this paper and to analyze their practical significance. (author)
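As a minimal illustration of the finite-difference approach named in this record, the sketch below discretizes a single-stream transport equation, dT/dt + v dT/dx = k(T_wall - T), with a first-order upwind scheme and simulates a step response in the inlet temperature. All parameters (stream velocity, heat-transfer coefficient, temperatures) are assumed for illustration and are not taken from the paper, which treats the more involved cross-flow case with wall dynamics.

```python
import numpy as np

# Upwind finite-difference sketch of a single fluid stream exchanging
# heat with a wall held at constant temperature:
#   dT/dt + v * dT/dx = k * (T_wall - T)
# Parameters are illustrative (assumed), not taken from the paper.

def simulate_step_response(n=50, L=1.0, v=1.0, k=2.0,
                           T_in=80.0, T_wall=20.0, t_end=3.0):
    dx = L / n
    dt = 0.4 * dx / v                        # CFL-stable explicit time step
    T = np.full(n, T_wall)                   # fluid starts at wall temperature
    for _ in range(int(t_end / dt)):
        T_up = np.concatenate(([T_in], T[:-1]))   # upwind (inlet-side) neighbor
        T = T + dt * (-v * (T - T_up) / dx + k * (T_wall - T))
    return T

T_out = simulate_step_response()             # axial temperature profile
```

After a few residence times the profile settles to the exponential steady state T(x) = T_wall + (T_in - T_wall)·exp(-k·x/v), which is a useful sanity check on the discretization.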

  7. Prediction of paddy drying kinetics: A comparative study between mathematical and artificial neural network modelling

    Directory of Open Access Journals (Sweden)

    Beigi Mohsen

    2017-01-01

    Full Text Available The present study aimed at investigation of deep bed drying of rough rice kernels at various thin layers at different drying air temperatures and flow rates. A comparative study was performed between mathematical thin layer models and artificial neural networks to estimate the drying curves of rough rice. The suitability of nine mathematical models in simulating the drying kinetics was examined and the Midilli model was determined as the best approach for describing drying curves. Different feed forward-back propagation artificial neural networks were examined to predict the moisture content variations of the grains. The ANN with 4-18-18-1 topology, transfer function of hyperbolic tangent sigmoid and a Levenberg-Marquardt back propagation training algorithm provided the best results with the maximum correlation coefficient and the minimum mean square error values. Furthermore, it was revealed that ANN modeling had better performance in prediction of drying curves with lower root mean square error values.
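The Midilli thin-layer model selected in this record has the form MR(t) = a·exp(-k·t^n) + b·t. A hedged sketch of the fitting procedure is shown below; the moisture-ratio data are synthetic, generated from assumed coefficients purely to illustrate the least-squares fit, not the paper's experimental drying curves.

```python
import numpy as np
from scipy.optimize import curve_fit

# Midilli thin-layer drying model: MR(t) = a * exp(-k * t**n) + b * t
def midilli(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t = np.linspace(0.1, 10.0, 40)          # drying time, h (assumed grid)
true = (1.0, 0.35, 1.1, -0.005)         # assumed "true" coefficients
mr_obs = midilli(t, *true)              # noise-free synthetic moisture ratios

# Nonlinear least-squares fit from a rough initial guess
popt, _ = curve_fit(midilli, t, mr_obs, p0=(1.0, 0.2, 1.0, 0.0))
```

On real data one would compare fitted curves via the correlation coefficient and root mean square error, as the study does when ranking the nine mathematical models against the ANN.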

  8. A Comparative study of two RVE modelling methods for chopped carbon fiber SMC

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zhangxing; Li, Yi; Shao, Yimin; Huang, Tianyu; Xu, Hongyi; Li, Yang; Chen, Wei; Zeng, Danielle; Avery, Katherine; Kang, HongTae; Su, Xuming

    2017-04-06

    To achieve vehicle light-weighting, the chopped carbon fiber sheet molding compound (SMC) is identified as a promising material to replace metals. However, there are no effective tools and methods to predict the mechanical property of the chopped carbon fiber SMC due to the high complexity in microstructure features and the anisotropic properties. In this paper, the Representative Volume Element (RVE) approach is used to model the SMC microstructure. Two modeling methods, the Voronoi diagram-based method and the chip packing method, are developed for material RVE property prediction. The two methods are compared in terms of the predicted elastic modulus and the predicted results are validated using the Digital Image Correlation (DIC) tensile test results. Furthermore, the advantages and shortcomings of these two methods are discussed in terms of the required input information and the convenience of use in the integrated processing-microstructure-property analysis.
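A toy 2D sketch of the Voronoi-diagram idea is given below: random seed points partition a square RVE window into cells, each cell standing in for a chopped-fiber chip with its own in-plane fiber orientation. The seed count, window size, and per-cell orientation assignment are all assumptions for illustration; the paper's actual RVE construction is 3D and property prediction follows from homogenization, which is not shown here.

```python
import numpy as np
from scipy.spatial import Voronoi

# Voronoi-based partition of a unit-square RVE window (illustrative only)
rng = np.random.default_rng(42)
seeds = rng.uniform(0.0, 1.0, size=(30, 2))        # assumed chip centers
vor = Voronoi(seeds)                               # one cell per seed point

# Assign each cell (chip) a hypothetical in-plane fiber orientation
orientations = rng.uniform(0.0, np.pi, size=len(seeds))

n_cells = len(vor.point_region)                    # cells correspond to seeds
```

From such a partition, each cell would receive transversely isotropic chip properties rotated by its orientation before volume-averaging the stiffness.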

  9. Comparative Application of Capacity Models for Seismic Vulnerability Evaluation of Existing RC Structures

    International Nuclear Information System (INIS)

    Faella, C.; Lima, C.; Martinelli, E.; Nigro, E.

    2008-01-01

    Seismic vulnerability assessment of existing buildings is one of the most common tasks in which Structural Engineers are currently engaged. Since it is often a preliminary step in deciding how to retrofit structures that were not designed or detailed for seismic loads, it plays a key role in the successful choice of the most suitable strengthening technique. In this framework, the basic information for both seismic assessment and retrofitting is related to the formulation of capacity models for structural members. Plenty of proposals, often contradictory from a quantitative standpoint, are currently available within the technical and scientific literature for defining structural capacity in terms of forces and displacements, possibly with reference to different parameters representing the seismic response. The present paper briefly reviews some of the capacity models for RC members and compares them with reference to two case studies assumed to be representative of a wide class of existing buildings.

  10. Comparing different methods to model scenarios of future glacier change for the entire Swiss Alps

    Science.gov (United States)

    Linsbauer, A.; Paul, F.; Haeberli, W.

    2012-04-01

    There is general agreement that observed climate change already has strong impacts on the cryosphere. The rapid shrinkage of glaciers during the past two decades, as observed in many mountain ranges globally and in particular in the Alps, is an impressive confirmation of a changed climate. With the expected future temperature increase, glacier shrinkage will likely accelerate further, and the glaciers' role as an important water resource will progressively diminish. To determine the future contribution of glaciers to run-off with hydrological models, the change in glacier area and/or volume must be considered. As these models operate at regional scales, simplified approaches are needed to model the future development of all glaciers in a mountain range. In this study we have compared different simplified approaches to model the area and volume evolution of all glaciers in the Swiss Alps over the 21st century according to given climate change scenarios. One approach is based on an upward shift of the ELA (by 150 m per degree of temperature increase) and the assumption that the glacier extent will shrink until the smaller accumulation area again covers 60% of the total glacier area. A second approach is based on observed elevation changes between 1985 and 2000, derived from DEM differencing for all glaciers in Switzerland. With a related elevation-dependent parameterization of glacier thickness change and a modelled glacier thickness distribution, the 15-year trends in observed thickness loss are extrapolated into the future, with glacier area loss taking place where thickness reaches zero. The models show an overall glacier area reduction of 60-80% until 2100, with some ice remaining at the highest elevations. However, compared to the ongoing temperature increase, and considering that several reinforcing feedbacks (albedo lowering, lake formation) are not accounted for, the real area loss might be even stronger. 
Uncertainties in the modelled glacier thickness have only a
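The first (ELA-shift) approach described in this record can be sketched in a few lines: raise the ELA by 150 m per degree of warming, then shrink the glacier from its lowest bands until the accumulation area above the new ELA again makes up 60% of the remaining area. The hypsometry (elevation bands and band areas) below is hypothetical, used only to show the arithmetic.

```python
import numpy as np

def remaining_area(band_elev, band_area, ela0, dT,
                   shift_per_deg=150.0, aar=0.60):
    """Glacier area after warming dT under the ELA-shift/AAR parameterization."""
    ela = ela0 + shift_per_deg * dT
    acc = band_area[band_elev >= ela].sum()   # accumulation area above new ELA
    # the glacier retreats from its lowest bands until acc / total = aar
    return acc / aar if acc > 0 else 0.0

# Hypothetical hypsometry: 0.5 km2 bands every 100 m from 2000 to 3500 m
elev = np.arange(2000.0, 3600.0, 100.0)
area = np.full_like(elev, 0.5)

a_now = remaining_area(elev, area, ela0=2800.0, dT=0.0)    # present climate
a_warm = remaining_area(elev, area, ela0=2800.0, dT=3.0)   # +3 degrees
```

With these assumed numbers a 3-degree warming lifts the ELA by 450 m and removes most of the low-lying area, mirroring the strong area losses the study reports.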

  11. Genome-scale metabolic modeling of Mucor circinelloides and comparative analysis with other oleaginous species.

    Science.gov (United States)

    Vongsangnak, Wanwipa; Klanchui, Amornpan; Tawornsamretkit, Iyarest; Tatiyaborwornchai, Witthawin; Laoteng, Kobkul; Meechai, Asawin

    2016-06-01

    We present a novel genome-scale metabolic model iWV1213 of Mucor circinelloides, which is an oleaginous fungus for industrial applications. The model contains 1213 genes, 1413 metabolites and 1326 metabolic reactions across different compartments. We demonstrate that iWV1213 is able to accurately predict the growth rates of M. circinelloides on various nutrient sources and culture conditions using Flux Balance Analysis and Phenotypic Phase Plane analysis. Comparative analysis of three oleaginous genome-scale models, including M. circinelloides (iWV1213), Mortierella alpina (iCY1106) and Yarrowia lipolytica (iYL619_PCP) revealed that iWV1213 possesses a higher number of genes involved in carbohydrate, amino acid, and lipid metabolisms that might contribute to its versatility in nutrient utilization. Moreover, the identification of unique and common active reactions among the Zygomycetes oleaginous models using Flux Variability Analysis unveiled a set of gene/enzyme candidates as metabolic engineering targets for cellular improvement. Thus, iWV1213 offers a powerful metabolic engineering tool for multi-level omics analysis, enabling strain optimization as a cell factory platform of lipid-based production. Copyright © 2016 Elsevier B.V. All rights reserved.
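Flux Balance Analysis, as used above to predict growth rates, is a linear program: maximize a target flux subject to steady-state mass balance S·v = 0 and flux bounds. The toy three-reaction network below (uptake -> A, A -> B, B -> biomass) is an assumption for illustration and has nothing to do with the actual iWV1213 reconstruction; it only shows the mechanics of the optimization.

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA problem (assumed network, not iWV1213):
#   v1: uptake -> A,  v2: A -> B,  v3: B -> biomass
S = np.array([[1, -1, 0],                 # metabolite A balance
              [0, 1, -1]])                # metabolite B balance
bounds = [(0, 10), (0, 100), (0, 100)]    # uptake flux capped at 10
c = np.array([0.0, 0.0, -1.0])            # linprog minimizes, so -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
growth = res.x[2]                         # optimal "biomass" flux
```

Flux Variability Analysis, also mentioned in the record, repeats this optimization per reaction (minimizing and maximizing each flux at fixed optimal growth) to find the active-reaction ranges.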

  12. Comparing potential recharge estimates from three Land Surface Models across the Western US

    Science.gov (United States)

    NIRAULA, REWATI; MEIXNER, THOMAS; AJAMI, HOORI; RODELL, MATTHEW; GOCHIS, DAVID; CASTRO, CHRISTOPHER L.

    2018-01-01

    Groundwater is a major source of water in the western US. However, there are limited recharge estimates available in this region due to the complexity of recharge processes and the challenge of direct observations. Land Surface Models (LSMs) could be a valuable tool for estimating current recharge and projecting changes due to future climate change. In this study, simulations of three LSMs (Noah, Mosaic and VIC) obtained from the North American Land Data Assimilation System (NLDAS-2) are used to estimate potential recharge in the western US. Modeled recharge was compared with published recharge estimates for several aquifers in the region. Annual recharge-to-precipitation ratios across the study basins varied from 0.01–15% for Mosaic, 3.2–42% for Noah, and 6.7–31.8% for VIC simulations. Mosaic consistently underestimates recharge across all basins. Noah captures recharge reasonably well in wetter basins, but overestimates it in drier basins. VIC slightly overestimates recharge in drier basins and slightly underestimates it in wetter basins. While the average annual recharge values vary among the models, the models were consistent in identifying high and low recharge areas in the region. The models agree that recharge occurs predominantly during spring across the region. Overall, our results highlight that LSMs have the potential to capture the spatial and temporal patterns as well as the seasonality of recharge at large scales. Therefore, LSMs (specifically VIC and Noah) can be used as a tool for estimating future recharge rates in data-limited regions. PMID:29618845

  13. Characterizing Cavities in Model Inclusion Fullerenes: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Francisco Torrens

    2001-06-01

    Full Text Available Abstract: The fullerene-82 cavity is selected as a model system in order to test several methods for characterizing inclusion molecules. The methods are based on different technical foundations such as a square and triangular tessellation of the molecular surface, spherical tessellation of the molecular surface, numerical integration of the atomic volumes and surfaces, triangular tessellation of the molecular surface, and cubic lattice approach to the molecular volume. Accurate measures of the molecular volume and surface area have been performed with the pseudorandom Monte Carlo (MCVS and uniform Monte Carlo (UMCVS methods. These calculations serve as a reference for the rest of the methods. The SURMO2 method does not recognize the cavity and may not be convenient for intercalation compounds. The programs that detect the cavities never exceed 1% deviation relative to the reference value for molecular volume and 5% for surface area. The GEPOL algorithm, alone or combined with TOPO, shows results in good agreement with those of the UMCVS reference. The uniform random number generator provides the fastest convergence for UMCVS and a correct estimate of the standard deviations. The effect of the internal cavity on the solvent-accessible surfaces has been calculated. Fullerene-82 is compared with fullerene-60 and -70.
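The uniform Monte Carlo idea behind the UMCVS reference calculation can be shown on a simpler geometry than fullerene-82: estimate the volume of a hollow sphere (a solid with an internal cavity) by sampling points uniformly in the bounding cube and counting hits. The radii and sample count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Uniform Monte Carlo volume estimate for a spherical shell:
# outer radius R, inner cavity radius r (illustrative stand-in for a
# molecule with an internal cavity).
rng = np.random.default_rng(0)
R, r = 1.0, 0.6
n = 200_000
pts = rng.uniform(-R, R, size=(n, 3))        # sample the bounding cube
d2 = (pts**2).sum(axis=1)
inside = (d2 <= R**2) & (d2 >= r**2)         # in the shell, outside the cavity
vol_est = inside.mean() * (2 * R)**3         # hit fraction * cube volume

vol_true = 4.0 / 3.0 * np.pi * (R**3 - r**3)  # analytic check
```

The standard error of the estimate scales as 1/sqrt(n), which is why such MC calculations serve well as slow-but-reliable references for the faster tessellation methods compared in the paper.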

  14. A comparative evaluation of Cone Beam Computed Tomography (CBCT) and Multi-Slice CT (MSCT). Part II: On 3D model accuracy

    International Nuclear Information System (INIS)

    Liang Xin; Lambrichts, Ivo; Sun Yi; Denis, Kathleen; Hassan, Bassam; Li Limin; Pauwels, Ruben; Jacobs, Reinhilde

    2010-01-01

    Aim: The study aim was to compare the geometric accuracy of three-dimensional (3D) surface model reconstructions between five Cone Beam Computed Tomography (CBCT) scanners and one Multi-Slice CT (MSCT) system. Materials and methods: A dry human mandible was scanned with five CBCT systems (NewTom 3G, Accuitomo 3D, i-CAT, Galileos, Scanora 3D) and one MSCT scanner (Somatom Sensation 16). A 3D surface bone model was created from the six systems. The reference (gold standard) 3D model was obtained with a high resolution laser surface scanner. The 3D models from the five systems were compared with the gold standard using a point-based rigid registration algorithm. Results: The mean deviation from the gold standard for MSCT was 0.137 mm and for CBCT were 0.282, 0.225, 0.165, 0.386 and 0.206 mm for the i-CAT, Accuitomo, NewTom, Scanora and Galileos, respectively. Conclusion: The results show that the accuracy of CBCT 3D surface model reconstructions is somewhat lower than that of MSCT but acceptable when compared with the gold standard.
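A point-based rigid registration of the kind used above can be sketched with the Kabsch/SVD algorithm: find the rotation and translation that best align a scanned point set to the reference, then report the mean residual deviation. The points and the applied transform below are synthetic stand-ins for the mandible surface models.

```python
import numpy as np

def rigid_register(P, Q):
    """Rotation R and translation t minimizing ||R @ P + t - Q|| (P, Q: 3 x N)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(1)
ref = rng.uniform(0, 50, size=(3, 100))      # synthetic "gold standard", mm
theta = 0.3                                  # arbitrary misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
scan = Rz @ ref + np.array([[5.0], [-2.0], [1.0]])   # displaced copy

R, t = rigid_register(scan, ref)
mean_dev = np.linalg.norm(R @ scan + t - ref, axis=0).mean()
```

On noise-free data the residual collapses to machine precision; with real surface models the residual distribution is exactly the per-scanner mean deviation the paper reports.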

  15. Feeding Behavior of Aplysia: A Model System for Comparing Cellular Mechanisms of Classical and Operant Conditioning

    Science.gov (United States)

    Baxter, Douglas A.; Byrne, John H.

    2006-01-01

    Feeding behavior of Aplysia provides an excellent model system for analyzing and comparing mechanisms underlying appetitive classical conditioning and reward operant conditioning. Behavioral protocols have been developed for both forms of associative learning, both of which increase the occurrence of biting following training. Because the neural…

  16. Canadian and United States regulatory models compared: doses from atmospheric pathways

    International Nuclear Information System (INIS)

    Peterson, S-R.

    1997-01-01

    CANDU reactors sold offshore are licensed primarily to satisfy Canadian Regulations. For radioactive emissions during normal operation, the Canadian Standards Association's CAN/CSA-N288.1-M87 is used. This standard provides guidelines and methodologies for calculating a rate of radionuclide release that exposes a member of the public to the annual dose limit. To calculate doses from air concentrations, either CSA-N288.1 or the Regulatory Guide 1.109 of the United States Nuclear Regulatory Commission, which has already been used to license light-water reactors in these countries, may be used. When dose predictions from CSA-N288.1 are compared with those from the U.S. Regulatory Guides, the differences in projected doses raise questions about the predictions. This report explains differences between the two models for ingestion, inhalation, external and immersion doses

  17. Comparing Epileptiform Behavior of Mesoscale Detailed Models and Population Models of Neocortex

    NARCIS (Netherlands)

    Visser, S.; Meijer, Hil Gaétan Ellart; Lee, Hyong C.; van Drongelen, Wim; van Putten, Michel Johannes Antonius Maria; van Gils, Stephanus A.

    2010-01-01

    Two models of the neocortex are developed to study normal and pathologic neuronal activity. One model contains a detailed description of a neocortical microcolumn represented by 656 neurons, including superficial and deep pyramidal cells, four types of inhibitory neurons, and realistic synaptic

  18. Comparative Analysis of Sectoral Innovation System and Diamond Model: The Case of Telecom Sector of Iran

    Directory of Open Access Journals (Sweden)

    Mohammad Hosein Rezazadeh Mehrizi

    2008-08-01

    Full Text Available Porter’s model of the competitive advantage of nations (known as the Diamond Model) has been widely used, and criticized as well, over the past two decades. On the other hand, non-mainstream economists have tried to propose new frameworks for industrial analysis, among which the Sectoral Innovation System (SIS) is one of the most influential. After proposing an assessment framework, we use it to compare the SIS and Porter’s models and apply them to the case of the second mobile operator in Iran. Briefly, the SIS model sheds light on the innovation process and competence building and focuses on system failures that are of special importance in the context of developing countries, while the Diamond Model has the advantage of bringing the production process and the influential role of government into focus. However, each has its own shortcomings for analyzing industrial development in developing countries, and both fail to pay enough attention to foreign relations and international linkages.

  19. Criteria for comparing economic impact models of tourism

    NARCIS (Netherlands)

    Klijs, J.; Heijman, W.J.M.; Korteweg Maris, D.; Bryon, J.

    2012-01-01

    There are substantial differences between models of the economic impacts of tourism. Not only do the nature and precision of results vary, but data demands, complexity and underlying assumptions also differ. Often, it is not clear whether the models chosen are appropriate for the specific situation

  20. Modeling Conformal Growth in Photonic Crystals and Comparing to Experiment

    Science.gov (United States)

    Brzezinski, Andrew; Chen, Ying-Chieh; Wiltzius, Pierre; Braun, Paul

    2008-03-01

    Conformal growth, e.g. atomic layer deposition (ALD), of materials such as silicon and TiO2 on three-dimensional (3D) templates is important for making photonic crystals. However, reliable calculations of optical properties as a function of the conformal growth, such as the optical band structure, are hampered by difficulty in accurately assessing a deposited material's spatial distribution. A widely used approximation ignores "pinch off" of precursor gas and assumes complete template infilling. Another approximation results in non-uniform growth velocity by employing iso-intensity surfaces of the 3D interference pattern used to create the template. We have developed an accurate model of conformal growth in arbitrary 3D periodic structures, allowing for arbitrary surface orientation. Results are compared with the above approximations and with experimentally fabricated photonic crystals. We use an SU8 polymer template created by 4-beam interference lithography, onto which various amounts of TiO2 are grown by ALD. Characterization is performed by analysis of cross-sectional scanning electron micrographs and by solid-angle-resolved optical spectroscopy.

  1. Identifying appropriate reference data models for comparative effectiveness research (CER) studies based on data from clinical information systems.

    Science.gov (United States)

    Ogunyemi, Omolola I; Meeker, Daniella; Kim, Hyeon-Eui; Ashish, Naveen; Farzaneh, Seena; Boxwala, Aziz

    2013-08-01

    The need for a common format for electronic exchange of clinical data prompted federal endorsement of applicable standards. However, despite obvious similarities, a consensus standard has not yet been selected in the comparative effectiveness research (CER) community. Using qualitative metrics for data retrieval and information loss across a variety of CER topic areas, we compare several existing models from a representative sample of organizations associated with clinical research: the Observational Medical Outcomes Partnership (OMOP), Biomedical Research Integrated Domain Group, the Clinical Data Interchange Standards Consortium, and the US Food and Drug Administration. While the models examined captured a majority of the data elements that are useful for CER studies, data elements related to insurance benefit design and plans were most detailed in OMOP's CDM version 4.0. Standardized vocabularies that facilitate semantic interoperability were included in the OMOP and US Food and Drug Administration Mini-Sentinel data models, but are left to the discretion of the end-user in Biomedical Research Integrated Domain Group and Analysis Data Model, limiting reuse opportunities. Among the challenges we encountered was the need to model data specific to a local setting. This was handled by extending the standard data models. We found that the Common Data Model from the OMOP met the broadest complement of CER objectives. Minimal information loss occurred in mapping data from institution-specific data warehouses onto the data models from the standards we assessed. However, to support certain scenarios, we found a need to enhance existing data dictionaries with local, institution-specific information.

  2. Comparative Study on a Solving Model and Algorithm for a Flush Air Data Sensing System

    Directory of Open Access Journals (Sweden)

    Yanbin Liu

    2014-05-01

    Full Text Available With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering with large angles of attack and at high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy stricter control demands. In this paper, comparative studies on the solving models and algorithms for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms for FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure and static pressure. Afterwards, the evaluation criteria for the resulting models and algorithms are discussed with respect to real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time features. The advantages of these models and algorithms under different flight conditions are also analyzed, and some suggestions on their engineering applications are proposed to guide future research.
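A minimal sketch of one FADS solving approach is shown below, using the widely cited simplified pressure model p_i = q_c·cos²(θ_i) + p_s, where the local incidence angle θ_i of each port depends on angle of attack, sideslip, and the port's cone and clock angles. The port layout, flight state, and choice of a nonlinear least-squares solver are all illustrative assumptions, not the specific models compared in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed 5-port layout: one nose port plus four at 30 deg cone angle
lam = np.radians([0.0, 30.0, 30.0, 30.0, 30.0])   # cone angles
phi = np.radians([0.0, 0.0, 90.0, 180.0, 270.0])  # clock angles

def port_pressures(alpha, beta, qc, ps):
    # Incidence angle of each port for angle of attack alpha, sideslip beta
    ct = (np.cos(alpha) * np.cos(beta) * np.cos(lam)
          + np.sin(beta) * np.sin(phi) * np.sin(lam)
          + np.sin(alpha) * np.cos(beta) * np.cos(phi) * np.sin(lam))
    return qc * ct**2 + ps                        # simplified pressure model

# Synthetic "measured" pressures from an assumed flight state
true = np.array([np.radians(5.0), np.radians(2.0), 8000.0, 101325.0])
p_meas = port_pressures(*true)

# Solve the inverse problem: recover (alpha, beta, qc, ps) from pressures
sol = least_squares(lambda x: port_pressures(*x) - p_meas,
                    x0=[0.0, 0.0, 5000.0, 90000.0], method='lm')
```

Faster algebraic "triples" solutions exist for the same model; the least-squares route trades speed for robustness, which is exactly the kind of precision-versus-real-time trade-off the paper evaluates.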

  3. Comparative study between single core model and detail core model of CFD modelling on reactor core cooling behaviour

    Science.gov (United States)

    Darmawan, R.

    2018-01-01

    The nuclear power industry has faced uncertainties since the unfortunate accident at the Fukushima Daiichi Nuclear Power Plant. The issue of nuclear power plant safety has become a major hindrance in the planning of nuclear power programs for newcomer countries. Thus, understanding the behaviour of reactor systems is very important to ensure the continuous development and improvement of reactor safety. Throughout the development of nuclear reactor technology, investigation and analysis of reactor safety have gone through several phases. In the early days, analytical and experimental methods were employed. For the last four decades, 1D system-level codes have been widely used. The continuous development of nuclear reactor technology has brought about more complex systems and processes of nuclear reactor operation. More detailed dimensional simulation codes are needed to assess these new reactors. Recently, 2D and 3D system-level codes such as CFD are being explored. This paper discusses a comparative study of two different approaches to CFD modelling of reactor core cooling behaviour.

  4. Models for comparing lung-cancer risks in radon- and plutonium-exposed experimental animals

    International Nuclear Information System (INIS)

    Gilbert, E.S.; Cross, F.T.; Sanders, C.L.; Dagle, G.E.

    1990-10-01

    Epidemiologic studies of radon-exposed underground miners have provided the primary basis for estimating human lung-cancer risks resulting from radon exposure. These studies are sometimes used to estimate lung-cancer risks resulting from exposure to other alpha-emitters as well. The latter use, often referred to as the dosimetric approach, is based on the assumption that a specified dose to the lung produces the same lung-tumor risk regardless of the substance producing the dose. At Pacific Northwest Laboratory, experiments have been conducted in which laboratory rodents have been given inhalation exposures to radon and to plutonium (²³⁹PuO₂). These experiments offer a unique opportunity to compare risks, and thus to investigate the validity of the dosimetric approach. This comparison is made most effectively by modeling the age-specific risk as a function of dose in a way that is comparable to analyses of human data. Such modeling requires assumptions about whether tumors are the cause of death or whether they are found incidental to death from other causes. Results based on the assumption that tumors are fatal indicate that the radon and plutonium dose-response curves differ, with a linear function providing a good description of the radon data, and a pure quadratic function providing a good description of the plutonium data. However, results based on the assumption that tumors are incidental to death indicate that the dose-response curves for the two exposures are very similar, and thus support the dosimetric approach. 14 refs., 2 figs., 6 tabs.

  5. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km²) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated, with a thickness of 0.5 to 3.0 m, in colluvium on slopes of 17° to 41° (Wieczorek et al., 2000). This paper evaluates and compares the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluates stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which considers only soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model of the two to evaluate slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane, and consequently root strength and tree surcharge had a negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
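All three models compared above build on the infinite-slope factor-of-safety equation, in which a rising water table reduces effective normal stress and hence frictional resistance. The sketch below evaluates FS = [c + (γz - γ_w·h)·cos²β·tanφ] / (γz·sinβ·cosβ) for a thin colluvial layer; the soil parameters are assumed for illustration (consistent in spirit with the low cohesion and slope range reported, but not fitted values from the paper).

```python
import numpy as np

def factor_of_safety(c, phi_deg, slope_deg, z, h,
                     gamma=18.0e3, gamma_w=9.81e3):
    """Infinite-slope FS; c in Pa, z = soil depth and h = water height in m."""
    phi, s = np.radians(phi_deg), np.radians(slope_deg)
    resisting = c + (gamma * z - gamma_w * h) * np.cos(s)**2 * np.tan(phi)
    driving = gamma * z * np.sin(s) * np.cos(s)
    return resisting / driving

# Assumed: 1.5 m colluvium on a 30 deg slope, c = 2 kPa, phi = 35 deg
fs_dry = factor_of_safety(c=2000.0, phi_deg=35.0, slope_deg=30.0, z=1.5, h=0.0)
fs_wet = factor_of_safety(c=2000.0, phi_deg=35.0, slope_deg=30.0, z=1.5, h=1.5)
```

With these numbers the slope is stable when dry (FS > 1) but fails when the water table reaches the surface (FS < 1), matching the paper's finding that near-surface saturation triggered the 1995 failures.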

  6. Bilateral Cavernous Nerve Crush Injury in the Rat Model: A Comparative Review of Pharmacologic Interventions.

    Science.gov (United States)

    Haney, Nora M; Nguyen, Hoang M T; Honda, Matthew; Abdel-Mageed, Asim B; Hellstrom, Wayne J G

    2018-04-01

    It is common for men to develop erectile dysfunction after radical prostatectomy. The anatomy of the rat allows the cavernous nerve (CN) to be identified, dissected, and injured in a controlled fashion. Therefore, bilateral CN injury (BCNI) in the rat model is routinely used to study post-prostatectomy erectile dysfunction. To compare and contrast the available literature on pharmacologic intervention after BCNI in the rat. A literature search was performed on PubMed for cavernous nerve and injury and erectile dysfunction and rat. Only articles with BCNI and pharmacologic intervention that could be grouped into categories of immune modulation, growth factor therapy, receptor kinase inhibition, phosphodiesterase type 5 inhibition, and anti-inflammatory and antifibrotic interventions were included. To assess outcomes of pharmaceutical intervention on erectile function recovery after BCNI in the rat model. The ratio of maximum intracavernous pressure to mean arterial pressure was the main outcome measure chosen for this analysis. All interventions improved erectile function recovery after BCNI based on the ratio of maximum intracavernous pressure to mean arterial pressure results. Additional end-point analysis examined the corpus cavernosa and/or the major pelvic ganglion and CN. There was extreme heterogeneity within the literature, making accurate comparisons between crush injury and therapeutic interventions difficult. BCNI in the rat is the accepted animal model used to study nerve-sparing post-prostatectomy erectile dysfunction. However, an important limitation is extreme variability. Efforts should be made to decrease this variability and increase the translational utility toward clinical trials in humans. Haney NM, Nguyen HMT, Honda M, et al. Bilateral Cavernous Nerve Crush Injury in the Rat Model: A Comparative Review of Pharmacologic Interventions. Sex Med Rev 2018;6:234-241. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier

  7. Comparing chemical reaction networks

    DEFF Research Database (Denmark)

    Cardelli, Luca; Tribastone, Mirco; Tschaikowski, Max

    2017-01-01

    We study chemical reaction networks (CRNs) as a kernel model of concurrency provided with semantics based on ordinary differential equations. We investigate the problem of comparing two CRNs, i.e., to decide whether the solutions of a source and of a target CRN can be matched for an appropriate choice of initial conditions. Using a categorical framework, we extend and unify model-comparison approaches based on dynamical (semantic) and structural (syntactic) properties of CRNs. Then, we provide an algorithm to compare CRNs, running linearly in time with respect to the cardinality of all possible comparisons. Finally, using a prototype implementation, CAGE, we apply our results to biological models from the literature.
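The ODE semantics that the comparison above relies on can be made concrete on a small assumed example network (not one from the paper): reactions A + B -> C and C -> A with mass-action kinetics. Each reaction contributes rate × (products - reactants) to the derivative of the concentration vector.

```python
import numpy as np

# Assumed two-reaction CRN: A + B ->(k1=1.0) C,  C ->(k2=0.5) A
reactants = np.array([[1, 1, 0],      # stoichiometry of A + B -> C
                      [0, 0, 1]])     # stoichiometry of C -> A
products = np.array([[0, 0, 1],
                     [1, 0, 0]])
rates = np.array([1.0, 0.5])

def dxdt(x):
    # Mass-action flux of each reaction: k * product of reactant concentrations
    flux = rates * np.prod(x**reactants, axis=1)
    return (products - reactants).T @ flux

x = np.array([1.0, 1.0, 0.0])         # initial [A], [B], [C]
dt = 1e-3
for _ in range(20_000):               # forward-Euler integration to t = 20
    x = x + dt * dxdt(x)
```

Both reactions preserve the quantity [A] + [C], so it stays at its initial value 1.0 throughout the trajectory; such conserved linear combinations are exactly the kind of structural/dynamical invariant a CRN-comparison framework can exploit.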

  8. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Directory of Open Access Journals (Sweden)

    Catherine L Worth

    Full Text Available BACKGROUND: Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated, with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures, key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and, without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. METHODOLOGY: We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features, a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs, suggesting for each the most appropriate template(s) for building a comparative molecular model. CONCLUSIONS: The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates, suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying

  9. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Science.gov (United States)

    Worth, Catherine L; Kleinau, Gunnar; Krause, Gerd

    2009-09-16

    Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling that will

  10. Creating a Common Data Model for Comparative Effectiveness with the Observational Medical Outcomes Partnership.

    Science.gov (United States)

    FitzHenry, F; Resnic, F S; Robbins, S L; Denton, J; Nookala, L; Meeker, D; Ohno-Machado, L; Matheny, M E

    2015-01-01

    Adoption of a common data model across health systems is a key infrastructure requirement to allow large-scale distributed comparative effectiveness analyses. There are a growing number of common data models (CDM), such as the Mini-Sentinel and the Observational Medical Outcomes Partnership (OMOP) CDMs. In this case study, we describe the challenges and opportunities of a study-specific use of the OMOP CDM by two health systems and describe three comparative effectiveness use cases developed from the CDM. The project transformed two health system databases (using crosswalks provided) into the OMOP CDM. Cohorts were developed from the transformed CDMs for three comparative effectiveness use case examples. Administrative/billing, demographic, order history, medication, and laboratory data were included in the CDM transformation and cohort development rules. Record counts per person-month are presented for the eligible cohorts, highlighting differences between the civilian and federal datasets; e.g., the federal dataset had more outpatient visits per person-month (6.44 vs. 2.05). The count of medications per person-month reflected the fact that one system's medications were extracted from orders while the other system had pharmacy fills and medication administration records. The federal system also had a higher prevalence of the conditions in all three use cases. Both systems required manual coding of some types of data to convert to the CDM. The data transformation to the CDM was time consuming, and the resources required were substantial, beyond the requirements for collecting native source data. The need to manually code subsets of data limited the conversion. However, once the native data were converted to the CDM, both systems were able to use the same queries to identify cohorts. Thus, the CDM minimized the effort to develop cohorts and analyze the results across the sites.
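The per-person-month counts reported above are simple to reproduce once data sit in a common model. As an illustrative sketch only (plain Python tuples standing in for CDM rows, not the OMOP schema, and with the hypothetical helper name `visits_per_person_month`):

```python
from collections import defaultdict
from datetime import date

def visits_per_person_month(encounters):
    """Average number of encounters per observed person-month.

    `encounters` is a list of (person_id, visit_date) pairs. Here a
    person-month is any (person, year, month) with at least one visit;
    a real eligibility-based denominator, as in the study, would also
    count covered months without visits.
    """
    counts = defaultdict(int)
    for person_id, visit_date in encounters:
        counts[(person_id, visit_date.year, visit_date.month)] += 1
    return sum(counts.values()) / len(counts)

# Three visits spread over two person-months
records = [
    (1, date(2014, 1, 5)),
    (1, date(2014, 1, 20)),
    (2, date(2014, 3, 2)),
]
```

Because both systems share the transformed schema, the same query logic runs unchanged against either site's data, which is the point the case study makes.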

  11. Comparing effects of fire modeling methods on simulated fire patterns and succession: a case study in the Missouri Ozarks

    Science.gov (United States)

    Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson

    2008-01-01

    We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...

  12. Comparative BAC-based mapping in the white-throated sparrow, a novel behavioral genomics model, using interspecies overgo hybridization

    Directory of Open Access Journals (Sweden)

    Gonser Rusty A

    2011-06-01

    Full Text Available Abstract Background The genomics era has produced an arsenal of resources from sequenced organisms allowing researchers to target species that do not have comparable mapping and sequence information. These new "non-model" organisms offer unique opportunities to examine environmental effects on genomic patterns and processes. Here we use comparative mapping as a first step in characterizing the genome organization of a novel animal model, the white-throated sparrow (Zonotrichia albicollis), which occurs as white or tan morphs that exhibit alternative behaviors and physiology. Morph is determined by the presence or absence of a complex chromosomal rearrangement. This species is an ideal model for behavioral genomics because the association between genotype and phenotype is absolute, making it possible to identify the genomic bases of phenotypic variation. Findings We initiated a genomic study in this species by characterizing the white-throated sparrow BAC library via filter hybridization with overgo probes designed for the chicken, turkey, and zebra finch. Cross-species hybridization resulted in 640 positive sparrow BACs assigned to 77 chicken loci across almost all macro- and microchromosomes, with a focus on the chromosomes associated with morph. Out of 216 overgos, 36% of the probes hybridized successfully, with an average of 3.0 positive sparrow BACs per overgo. Conclusions These data will be utilized for determining chromosomal architecture and for fine-scale mapping of candidate genes associated with phenotypic differences. Our research confirms the utility of interspecies hybridization for developing comparative maps in other non-model organisms.

  13. Comparing Entrepreneurship Intention: A Multigroup Structural Equation Modeling Approach

    Directory of Open Access Journals (Sweden)

    Sabrina O. Sihombing

    2012-04-01

    Full Text Available Unemployment is one of the main social and economic problems that many countries face nowadays. One strategic way to overcome this problem is by fostering an entrepreneurial spirit, especially among unemployed graduates. Entrepreneurship is becoming an alternative job for students after they graduate, because entrepreneurship offers major benefits, such as setting up one's own business and the possibility of greater financial rewards than working for others. Entrepreneurship courses are therefore offered by many universities. This research applies the theory of planned behavior (TPB), incorporating attitude toward success as an antecedent variable of attitude, to examine students' intention to become entrepreneurs. The objective of this research is to compare entrepreneurship intention between business students and non-business students. A self-administered questionnaire was used to collect data for this study. Questionnaires were distributed to respondents using the drop-off/pick-up method, and 294 questionnaires were used in the analysis. Data were analyzed using structural equation modeling. Two out of four hypotheses were confirmed: the relationship between the attitude toward becoming an entrepreneur and the intention to try becoming an entrepreneur, and the relationship between perceived behavioral control and the intention to try becoming an entrepreneur. This paper also provides a discussion and offers directions for future research.

  15. EVALUATION OF THE HTA CORE MODEL FOR NATIONAL HEALTH TECHNOLOGY ASSESSMENT REPORTS: COMPARATIVE STUDY AND EXPERIENCES FROM EUROPEAN COUNTRIES.

    Science.gov (United States)

    Kõrge, Kristina; Berndt, Nadine; Hohmann, Juergen; Romano, Florence; Hiligsmann, Mickael

    2017-01-01

    The health technology assessment (HTA) Core Model® is a tool for defining and standardizing the elements of HTA analyses within several domains for producing structured reports. This study explored the parallels between the Core Model and a national HTA report. Experiences from various European HTA agencies were also investigated to determine the Core Model's adaptability to national reports. A comparison between a national report on Genetic Counseling, produced by the Cellule d'expertise médicale Luxembourg, and the Core Model was performed to identify parallels in terms of relevant and comparable assessment elements (AEs). Semi-structured interviews with five representatives from European HTA agencies were performed to assess their user experiences with the Core Model. The comparative study revealed that 50 percent of the total number (n = 144) of AEs in the Core Model were relevant for the national report. Of these 144 AEs from the Core Model, 34 (24 percent) were covered in the national report. Some AEs were covered only partly. The interviewees emphasized flexibility in using the Core Model and stated that the most important aspects to be evaluated include characteristics of the disease and technology, clinical effectiveness, economic aspects, and safety. In the present study, the national report covered an acceptable number of AEs of the Core Model. These results need to be interpreted with caution because only one comparison was performed. The Core Model can be used in a flexible manner, applying only those elements that are relevant from the perspective of the technology assessment and specific country context.

  16. A Comparative Analysis of Spatial Visualization Ability and Drafting Models for Industrial and Technology Education Students

    Science.gov (United States)

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2014-01-01

    The main purpose of this study was to determine significant positive effects among the use of three different types of drafting models, and to identify whether any differences exist towards promotion of spatial visualization ability for students in Industrial Technology and Technology Education courses. In particular, the study compared the use of…

  17. Comparative analysis of hourly and dynamic power balancing models for validating future energy scenarios

    DEFF Research Database (Denmark)

    Pillai, Jayakrishnan R.; Heussen, Kai; Østergaard, Poul Alberg

    2011-01-01

    Energy system analyses on the basis of fast and simple tools have proven particularly useful for interdisciplinary planning projects with frequent iterations and re-evaluation of alternative scenarios. As such, the tool “EnergyPLAN” is used for hourly balanced and spatially aggregate annual......, the model is verified on the basis of the existing energy mix on Bornholm as an islanded energy system. Future energy scenarios for the year 2030 are analysed to study a feasible technology mix for a higher share of wind power. Finally, the results of the hourly simulations are compared to dynamic frequency...... simulations incorporating the Vehicle-to-grid technology. The results indicate how the EnergyPLAN model may be improved in terms of intra-hour variability, stability and ancillary services to achieve a better reflection of energy and power capacity requirements....

  18. Mathematical model comparing of the multi-level economics systems

    Science.gov (United States)

    Brykalov, S. M.; Kryanev, A. V.

    2017-12-01

    A mathematical model (scheme) for multi-level comparison of economic systems, each characterized by a system of indices, is developed. In this model, expert assessments and forecasts of the economic systems under consideration can be used as indicators, and uncertainty in the estimated parameter values or expert estimates can be taken into account. The model uses a multi-criteria approach based on Pareto solutions.
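The core of a Pareto-based multi-criteria comparison is a dominance filter over the systems' index vectors. A minimal sketch, with hypothetical system names and index values and all indices taken as to-be-maximized:

```python
def pareto_front(systems):
    """Return names of Pareto-optimal systems.

    `systems` is a list of (name, indices) pairs. A system is dominated
    if another system is at least as good on every index and strictly
    better on at least one.
    """
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    return [
        name
        for name, idx in systems
        if not any(dominates(other, idx) for _, other in systems)
    ]

# Hypothetical economies scored on two indices: B is dominated by A,
# while A and C are incomparable trade-offs.
systems = [("A", (3, 5)), ("B", (2, 4)), ("C", (4, 1))]
```

Expert weights or forecasts would enter by transforming the index vectors before filtering; the Pareto step itself stays unchanged.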

  19. Model-compared RGU-photometric space-densities in the direction to M 5 (l = 40°, b = +47°)

    International Nuclear Information System (INIS)

    Fenkart, R.; Karaali, S.

    1990-01-01

    In the process of rounding off the results homogeneously obtained within the model-comparison phase of the Basle Halo Program, space densities of both photometric populations, I and II, have been derived for late-type giants and for main-sequence stars with +3m, in a field close to the globular cluster M 5, according to the RGU-photometric Basle method. Compared to the density gradients predicted by the standard set of five multi-component models used since the beginning of this phase, they confirm the existence of a Galactic Thick Disk component in this direction, too

  20. Development and assessment of multi-dimensional flow model in MARS compared with the RPI air-water experiment

    International Nuclear Information System (INIS)

    Lee, Seok Min; Lee, Un Chul; Bae, Sung Won; Chung, Bub Dong

    2004-01-01

    Multi-dimensional flow models in system codes have been developed over many years. RELAP5-3D, CATHARE, and TRACE each have their own multi-dimensional flow models and have successfully applied them to system safety analysis. At KAERI, the MARS (Multi-dimensional Analysis of Reactor Safety) code was likewise developed, by integrating the RELAP5/MOD3 and COBRA-TF codes. Although the COBRA-TF module can analyze three-dimensional flow, it is limited when applied to phenomena dominated by 3D shear stress or to cylindrical geometries. Therefore, new multi-dimensional analysis models were developed by implementing three-dimensional momentum flux and diffusion terms. The multi-dimensional model has been assessed against multi-dimensional conceptual problems and CFD code results. Although the assessment results were reasonable, the multi-dimensional model had not been validated against two-phase flow using experimental data. In this paper, the multi-dimensional air-water two-phase flow experiment was simulated and analyzed

  1. Comparing multiple model-derived aerosol optical properties to spatially collocated ground-based and satellite measurements

    Science.gov (United States)

    Ocko, Ilissa B.; Ginoux, Paul A.

    2017-04-01

    Anthropogenic aerosols are a key factor governing Earth's climate and play a central role in human-caused climate change. However, because of aerosols' complex physical, optical, and dynamical properties, aerosols are one of the most uncertain aspects of climate modeling. Fortunately, aerosol measurement networks over the past few decades have led to the establishment of long-term observations for numerous locations worldwide. Further, the availability of datasets from several different measurement techniques (such as ground-based and satellite instruments) can help scientists increasingly improve modeling efforts. This study explores the value of evaluating several model-simulated aerosol properties with data from spatially collocated instruments. We compare aerosol optical depth (AOD; total, scattering, and absorption), single-scattering albedo (SSA), Ångström exponent (α), and extinction vertical profiles in two prominent global climate models (Geophysical Fluid Dynamics Laboratory, GFDL, CM2.1 and CM3) to seasonal observations from collocated instruments (AErosol RObotic NETwork, AERONET, and Cloud-Aerosol Lidar with Orthogonal Polarization, CALIOP) at seven polluted and biomass burning regions worldwide. We find that a multi-parameter evaluation provides key insights on model biases, data from collocated instruments can reveal underlying aerosol-governing physics, column properties wash out important vertical distinctions, and an improved model does not mean all aspects are improved. We conclude that it is important to make use of all available data (parameters and instruments) when evaluating aerosol properties derived by models.
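One of the compared parameters, the Ångström exponent α, follows directly from AOD at two wavelengths via the standard relation α = −ln(τ₁/τ₂)/ln(λ₁/λ₂). A minimal sketch (the function name and sample values are illustrative):

```python
import math

def angstrom_exponent(aod1, lam1, aod2, lam2):
    """Angstrom exponent from AOD at two wavelengths (lam in same units).

    Assumes the power-law spectral dependence aod(lam) ~ lam**(-alpha),
    so alpha = -ln(aod1/aod2) / ln(lam1/lam2).
    """
    return -math.log(aod1 / aod2) / math.log(lam1 / lam2)

# Synthetic check: AOD generated with alpha = 1.3 at 440 and 870 nm
aod_440 = 0.2
aod_870 = aod_440 * (870.0 / 440.0) ** (-1.3)
```

Larger α indicates smaller (fine-mode, e.g. pollution) particles; values near zero indicate coarse particles such as dust, which is why α is a useful discriminator in multi-parameter model evaluation.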

  2. Palaeotemperature reconstructions of the European permafrost zone during Oxygen Isotope Stage 3 compared with climate model results.

    NARCIS (Netherlands)

    van Huissteden, J.; Vandenberghe, J.; Pollard, D.

    2003-01-01

    A palaeotemperature reconstruction based on periglacial phenomena in Europe north of approximately 51 °N, is compared with high-resolution regional climate model simulations of the marine oxygen isotope Stage 3 (Stage 3) palaeoclimate. The experiments represent Stage 3 warm (interstadial), Stage 3

  3. A comparative investigation of 18F kinetics in receptors: a compartment model analysis

    International Nuclear Information System (INIS)

    Tiwari, Anjani K.; Swatantra; Kaushik, A.; Mishra, A.K.

    2010-01-01

    Full text: Some authors have reported that 18F kinetics might be useful for the evaluation of neuroreceptors. We hypothesized that 18F kinetics may carry information about neuronal damage, and that each rate constant might have a statistically significant correlation with washout (WO) function. The purpose of this study was to investigate 99mTc-MIBI kinetics through a compartment model analysis. Each rate constant from the compartment analysis was compared with WO, T1/2, and the (H/M) ratio in the early and delayed phases. Different animal models were studied. After injection, dynamic planar imaging was performed on a dual-headed digital gamma camera system for 30 minutes. An ROI was drawn manually to assess the global kinetics of 18F. Using the time-activity curve (TAC) of the ROI as a response tissue function and the TAC of the aorta as an input function, we analysed 18F pharmacokinetics through a 2-compartment model. We defined k1 as the influx rate constant, k2 as the outflux rate constant, and k3 as the specific uptake rate constant, and we calculated k1/k2 as the distribution volume (Vd), k1k3/k2 as the specific uptake (SU), and k1k3/(k2+k3) as the clearance. For non-competitive affinity studies with PET, two modelling parameters, distribution volume (DV) and Bmax/Kd, were also calculated. Results: Statistically significant correlations were seen between k2 and T1/2. 18F uptake at injection was related to its uptake at 30 minutes and 2 hours after injection. Furthermore, some indexes had statistically significant correlations with DV and Bmax. These compartment model approaches may also be useful in related studies
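The derived quantities defined in the abstract are simple algebraic combinations of the fitted rate constants, so once k1, k2, and k3 are estimated they follow immediately. A minimal sketch using the abstract's own definitions (the numeric values below are illustrative, not from the study):

```python
def derived_parameters(k1, k2, k3):
    """Derived quantities from a 2-tissue-compartment model, per the
    definitions in the abstract: distribution volume Vd = k1/k2,
    specific uptake SU = k1*k3/k2, clearance = k1*k3/(k2 + k3)."""
    return {
        "Vd": k1 / k2,
        "SU": k1 * k3 / k2,
        "clearance": k1 * k3 / (k2 + k3),
    }

# Hypothetical fitted rate constants (1/min)
params = derived_parameters(k1=0.6, k2=0.3, k3=0.1)
```

Estimating k1-k3 themselves requires fitting the compartmental differential equations to the tissue TAC given the arterial input function; the step above only maps the fit results to the reported indices.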

  4. A Model of Comparative Ethics Education for Social Workers

    Science.gov (United States)

    Pugh, Greg L.

    2017-01-01

    Social work ethics education models have not effectively engaged social workers in practice in formal ethical reasoning processes, potentially allowing personal bias to affect ethical decisions. Using two of the primary ethical models from medicine, a new social work ethics model for education and practical application is proposed. The strengths…

  5. Optimal designs for one- and two-color microarrays using mixed models: a comparative evaluation of their efficiencies.

    Science.gov (United States)

    Lima Passos, Valéria; Tan, Frans E S; Winkens, Bjorn; Berger, Martijn P F

    2009-01-01

    Comparative studies between the one- and two-color microarrays provide supportive evidence for similarities of results on differential gene expression. So far, no design comparisons between the two platforms have been undertaken. With the objective of comparing optimal designs of one- and two-color microarrays in their statistical efficiencies, techniques of design optimization were applied within a mixed model framework. A- and D-optimal designs for the one- and two-color platforms were sought for a 3 x 3 factorial experiment. The results suggest that the choice of the platform will not affect the "subjects to groups" allocation, being concordant in the two designs. However, under financial constraints the two-color arrays are expected to have a slight upper hand in terms of efficiency of model parameter estimates, provided that arrays are more expensive than subjects. This statement is especially valid for microarray studies envisaging class comparisons.
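The A- and D-criteria being optimized are simple functions of the information matrix X'X of a candidate design. A generic fixed-effects sketch (not the mixed-model efficiency computation used in the paper, where the covariance structure of arrays and dyes also enters):

```python
import numpy as np

def optimality_criteria(X):
    """A- and D-criteria for a design with model matrix X.

    D-optimal designs maximize det(X'X); A-optimal designs minimize
    trace((X'X)^-1), the sum (hence average) of the variances of the
    parameter estimates, up to the error variance sigma^2.
    """
    info = X.T @ X  # Fisher information matrix, up to sigma^2
    return np.linalg.det(info), np.trace(np.linalg.inv(info))

# Trivial orthogonal design: three parameters, one run each
d_crit, a_crit = optimality_criteria(np.eye(3))
```

Comparing two candidate designs then amounts to comparing these scalars, with the platform-specific cost constraint (arrays vs. subjects) deciding which design is affordable.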

  6. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and a statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75% of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.
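An empirical FDC of the kind both methods are evaluated against is just the observed flows sorted in decreasing order against their exceedance probabilities. A minimal sketch using the common Weibull plotting position m/(n+1):

```python
import numpy as np

def flow_duration_curve(flows):
    """Empirical flow duration curve.

    Returns (p, q): flows q sorted in decreasing order and their
    exceedance probabilities p computed with the Weibull plotting
    position m/(n+1), where m is the rank of each flow.
    """
    q = np.sort(np.asarray(flows, dtype=float))[::-1]
    p = np.arange(1, len(q) + 1) / (len(q) + 1.0)
    return p, q

# Toy record: three daily flows
p, q = flow_duration_curve([1.0, 3.0, 2.0])
```

Both the statistical approach (interpolated parameters of a fitted distribution) and the process-based approach (a streamflow distribution derived from climate and landscape) are ultimately judged by how well they reproduce curves of this form at ungauged sites.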

  7. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise

    Science.gov (United States)

    Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.

    2015-01-01

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351

  8. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.

  9. In vitro radiosensitivity of six human cell lines. A comparative study with different statistical models

    International Nuclear Information System (INIS)

    Fertil, B.; Deschavanne, P.J.; Lachet, B.; Malaise, E.P.

    1980-01-01

    The intrinsic radiosensitivity of human cell lines (five tumor and one nontransformed fibroblastic) was studied in vitro. The survival curves were fitted by the single-hit multitarget, the two-hit multitarget, the single-hit multitarget with initial slope, and the quadratic models. The accuracy of the experimental results permitted evaluation of the various fittings. Both a statistical test (comparison of variances left unexplained by the four models) and a biological consideration (check for independence of the fitted parameters vis-a-vis the portion of the survival curve in question) were carried out. The quadratic model came out best with each of them. It described the low-dose effects satisfactorily, revealing a single-hit lethal component. This finding and the fact that the six survival curves displayed a continuous curvature ruled out the adoption of the target models as well as the widely used linear regression. As calculated by the quadratic model, the parameters of the six cell lines lead to the following conclusions: (a) the intrinsic radiosensitivity varies greatly among the different cell lines; (b) the interpretation of the fibroblast survival curve is not basically different from that of the tumor cell lines; and (c) the radiosensitivity of these human cell lines is comparable to that of other mammalian cell lines
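The quadratic model favored here, S(D) = exp(−(αD + βD²)), is linear in α and β after a log transform, so it can be fitted by ordinary least squares. A minimal sketch on noise-free synthetic data (the parameter values are illustrative, not the study's fits):

```python
import numpy as np

def fit_linear_quadratic(dose, surviving_fraction):
    """Least-squares fit of the quadratic (linear-quadratic) model
    S(D) = exp(-(alpha*D + beta*D**2)), via -ln S = alpha*D + beta*D**2."""
    D = np.asarray(dose, dtype=float)
    y = -np.log(np.asarray(surviving_fraction, dtype=float))
    A = np.column_stack([D, D**2])
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, beta

# Recover known parameters (alpha = 0.2 Gy^-1, beta = 0.05 Gy^-2)
doses = np.arange(1.0, 9.0)
surv = np.exp(-(0.2 * doses + 0.05 * doses**2))
alpha, beta = fit_linear_quadratic(doses, surv)
```

The nonzero α term is the single-hit lethal component the abstract highlights at low dose, while β produces the continuous curvature that the multitarget models cannot reproduce.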

  10. Computations for the 1:5 model of the THTR pressure vessel compared with experimental results

    International Nuclear Information System (INIS)

    Stangenberg, F.

    1972-01-01

    In this report, experimental results measured in 1971 on the 1:5 model of the prestressed concrete pressure vessel of the THTR nuclear power station at Schmehausen are compared with the results of axisymmetric computations. Linear-elastic computations were performed, as well as approximate computations for overload pressures taking into consideration the influence of the load history (prestressing, temperature, creep) and the effects of the steel components. (orig.) [de

  11. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Energy Technology Data Exchange (ETDEWEB)

    Cardil Forradellas, A.; Molina Terrén, D.M.; Oliveres, J.; Castellnou, M.

    2016-07-01

    Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass. (Author)
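The Plackett copula used to couple the Weibull and Logit-logistic marginals has a closed-form CDF: C(u, v; θ) = [S − √(S² − 4θ(θ−1)uv)] / (2(θ−1)) with S = 1 + (θ−1)(u+v), reducing to independence (uv) as θ → 1. A minimal sketch:

```python
import math

def plackett_cdf(u, v, theta):
    """Plackett copula C(u, v; theta) on [0,1]^2.

    theta > 1 gives positive dependence, theta < 1 negative dependence,
    and theta -> 1 recovers the independence copula u*v.
    """
    if abs(theta - 1.0) < 1e-12:
        return u * v
    s = 1.0 + (theta - 1.0) * (u + v)
    disc = s * s - 4.0 * theta * (theta - 1.0) * u * v
    return (s - math.sqrt(disc)) / (2.0 * (theta - 1.0))
```

Joining fitted diameter and height marginals F_d and F_h through C(F_d(d), F_h(h); θ) then yields the bivariate model, with θ estimated by maximum likelihood as in the study.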

  12. The FluxCompensator: Making Radiative Transfer Models of Hydrodynamical Simulations Directly Comparable to Real Observations

    Science.gov (United States)

    Koepferl, Christine M.; Robitaille, Thomas P.

    2017-11-01

    When modeling astronomical objects throughout the universe, it is important to correctly treat the limitations of the data, for instance finite resolution and sensitivity. In order to simulate these effects, and to make radiative transfer models directly comparable to real observations, we have developed an open-source Python package called the FluxCompensator that enables the post-processing of the output of 3D Monte Carlo radiative transfer codes, such as Hyperion. With the FluxCompensator, realistic synthetic observations can be generated by modeling the effects of convolution with arbitrary point-spread functions, transmission curves, finite pixel resolution, noise, and reddening. Pipelines can be applied to compute synthetic observations that simulate observatories, such as the Spitzer Space Telescope or the Herschel Space Observatory. Additionally, this tool can read in existing observations (e.g., FITS format) and use the same settings for the synthetic observations. In this paper, we describe the package as well as present examples of such synthetic observations.
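    The post-processing steps the package performs (PSF convolution, pixel rebinning, noise) can be illustrated generically. This sketch uses a plain Gaussian PSF and made-up noise levels, not the FluxCompensator API:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(42)
    ideal = np.zeros((64, 64))
    ideal[32, 32] = 1.0  # idealized point source from a radiative transfer model

    # 1) Convolve with a point-spread function (here a Gaussian, sigma in pixels)
    obs = gaussian_filter(ideal, sigma=2.0)

    # 2) Rebin to coarser detector pixels (4x4 blocks), conserving total flux
    obs = obs.reshape(16, 4, 16, 4).sum(axis=(1, 3))

    # 3) Add Gaussian noise to mimic finite sensitivity
    obs = obs + rng.normal(0.0, 1e-4, size=obs.shape)
    ```

    A pipeline for a particular observatory would chain the same operations with that instrument's measured PSF, filter transmission curve, pixel scale, and noise model.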

  13. The FluxCompensator: Making Radiative Transfer Models of Hydrodynamical Simulations Directly Comparable to Real Observations

    Energy Technology Data Exchange (ETDEWEB)

    Koepferl, Christine M.; Robitaille, Thomas P., E-mail: koepferl@usm.lmu.de [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany)

    2017-11-01

    When modeling astronomical objects throughout the universe, it is important to correctly treat the limitations of the data, for instance finite resolution and sensitivity. In order to simulate these effects, and to make radiative transfer models directly comparable to real observations, we have developed an open-source Python package called the FluxCompensator that enables the post-processing of the output of 3D Monte Carlo radiative transfer codes, such as Hyperion. With the FluxCompensator, realistic synthetic observations can be generated by modeling the effects of convolution with arbitrary point-spread functions, transmission curves, finite pixel resolution, noise, and reddening. Pipelines can be applied to compute synthetic observations that simulate observatories, such as the Spitzer Space Telescope or the Herschel Space Observatory. Additionally, this tool can read in existing observations (e.g., FITS format) and use the same settings for the synthetic observations. In this paper, we describe the package as well as present examples of such synthetic observations.

  14. A comparative behavioural study of mechanical hypersensitivity in 2 pain models in rats and humans.

    Science.gov (United States)

    Reitz, Marie-Céline; Hrncic, Dragan; Treede, Rolf-Detlef; Caspani, Ombretta

    2016-06-01

    The assessment of pain sensitivity in humans has been standardized using quantitative sensory testing, whereas in animals mostly paw withdrawal thresholds to diverse stimuli are measured. This study directly compares tests used in quantitative sensory testing (pinpricks, pressure algometer) with tests used in animal studies (electronic von Frey test: evF), which we applied to the dorsal hind limbs of humans after high-frequency stimulation and of rats after tibial nerve transection. Both experimental models induce profound mechanical hypersensitivity. At baseline, humans and rats showed a similar sensitivity to evF with 0.2 mm diameter tips, but significant differences for other test stimuli, and both pain models induced significant hypersensitivity. These cross-species results suggest that the evF is suitable for comparing mechanical pain sensitivity, but probe size and shape should be standardized. Hypersensitivity to blunt pressure, the leading positive sensory sign after peripheral nerve injury in humans, is a novel finding in the tibial nerve transection model. By testing outside the primary zone of nerve damage (rat) or activation (humans), our methods likely involve effects of central sensitization in both species.

  15. Comparative Assessment of Two Vegetation Fractional Cover Estimating Methods and Their Impacts on Modeling Urban Latent Heat Flux Using Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2017-05-01

    Quantifying vegetation fractional cover (VFC) and assessing its role in heat flux modeling using medium-resolution remotely sensed data has received less attention than it deserves in heterogeneous urban regions. This study examined two approaches, Normalized Difference Vegetation Index (NDVI)-derived and Multiple Endmember Spectral Mixture Analysis (MESMA)-derived methods, that are commonly used to map VFC based on Landsat imagery, in modeling surface heat fluxes in urban landscapes. For this purpose, two different heat flux models, the Two-Source Energy Balance (TSEB) model and the Pixel Component Arranging and Comparing Algorithm (PCACA) model, were adopted for model evaluation and analysis. A comparative analysis of the NDVI-derived and MESMA-derived VFCs showed that the latter achieved more accurate estimates in complex urban regions. When the two sources of VFCs were used as inputs to both the TSEB and PCACA models, MESMA-derived urban VFC produced more accurate urban heat fluxes (Bowen ratio and latent heat flux) relative to NDVI-derived urban VFC. Moreover, our study demonstrated that Landsat imagery-retrieved VFC exhibited greater uncertainty in obtaining urban heat fluxes for the TSEB model than for the PCACA model.
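    For context, the NDVI-derived approach typically estimates VFC with a two-endmember (dimidiate) linear mixing model. A minimal sketch, with hypothetical soil and full-vegetation NDVI endmembers:

    ```python
    import numpy as np

    def vfc_from_ndvi(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
        """Dimidiate pixel model: each pixel's NDVI is treated as a linear mix
        of a bare-soil endmember and a full-vegetation endmember.
        Endmember values here are illustrative, not from the study."""
        frac = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
        return np.clip(frac, 0.0, 1.0)
    ```

    MESMA generalizes this idea by unmixing the full spectrum against many candidate endmember combinations per pixel, which is why it copes better with heterogeneous urban surfaces.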

  16. Properties of the vacuum in models for QCD. Holography vs. resummed field theory. A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Zayakin, Andrey V.

    2011-01-17

    This Thesis is dedicated to a comparison of the two means of studying the electromagnetic properties of the QCD vacuum - holography and resummed field theory. I compare two classes of distinct models for the dynamics of the condensates. The first class consists of the so-called holographic models of QCD. Based upon the Maldacena conjecture, it tries to establish the properties of QCD correlation functions from the behavior of classical solutions of field equations in a higher-dimensional theory. Yet in many aspects the holographic approach has been found to be in excellent agreement with data. These successes are the prediction of the very small viscosity-to-entropy ratio and the predictions of meson spectra up to 5% accuracy in several models. On the other hand, the resummation methods in field theory have not been discarded so far. Both classes of methods have access to condensates. Thus a comprehensive study of condensates becomes possible, in which I compare my calculations in holography and resummed field theory with each other, as well as with lattice results, field theory and experiment. I prove that the low-energy theorems of QCD keep their validity in holographic models with a gluon condensate in a non-trivial way. I also show that the so-called decoupling relation holds in holography models with chiral and gluon condensates, whereas this relation fails in the Dyson-Schwinger approach. On the contrary, my results on the chiral magnetic effect in holography disagree with the weak-field prediction; the chiral magnetic effect (that is, the electric current generation in a magnetic field) is three times less than the current in weakly-coupled QCD. 
The chiral condensate behavior is found to be quadratic in external field both in the Dyson-Schwinger approach and in holography, yet we know that in the exact limit the condensate must be linear, thus both classes of models are concluded to be deficient for establishing the correct condensate behaviour in the

  17. Properties of the vacuum in models for QCD. Holography vs. resummed field theory. A comparative study

    International Nuclear Information System (INIS)

    Zayakin, Andrey V.

    2011-01-01

    This Thesis is dedicated to a comparison of the two means of studying the electromagnetic properties of the QCD vacuum - holography and resummed field theory. I compare two classes of distinct models for the dynamics of the condensates. The first class consists of the so-called holographic models of QCD. Based upon the Maldacena conjecture, it tries to establish the properties of QCD correlation functions from the behavior of classical solutions of field equations in a higher-dimensional theory. Yet in many aspects the holographic approach has been found to be in excellent agreement with data. These successes are the prediction of the very small viscosity-to-entropy ratio and the predictions of meson spectra up to 5% accuracy in several models. On the other hand, the resummation methods in field theory have not been discarded so far. Both classes of methods have access to condensates. Thus a comprehensive study of condensates becomes possible, in which I compare my calculations in holography and resummed field theory with each other, as well as with lattice results, field theory and experiment. I prove that the low-energy theorems of QCD keep their validity in holographic models with a gluon condensate in a non-trivial way. I also show that the so-called decoupling relation holds in holography models with chiral and gluon condensates, whereas this relation fails in the Dyson-Schwinger approach. On the contrary, my results on the chiral magnetic effect in holography disagree with the weak-field prediction; the chiral magnetic effect (that is, the electric current generation in a magnetic field) is three times less than the current in weakly-coupled QCD. 
The chiral condensate behavior is found to be quadratic in external field both in the Dyson-Schwinger approach and in holography, yet we know that in the exact limit the condensate must be linear, thus both classes of models are concluded to be deficient for establishing the correct condensate behaviour in the

  18. A comparative study of the models dealing with localized and semi-localized transitions in thermally stimulated luminescence

    International Nuclear Information System (INIS)

    Kumar, Munish; Kher, R K; Bhatt, B C; Sunta, C M

    2007-01-01

    Different models dealing with localized and semi-localized transitions, namely the Chen-Halperin model, the Mandowski model and the model based on the Braunlich-Scharmann (BS) approach, are compared. It has been found that for recombination-dominant situations (r > 1) the three models differ. This implies that for localized transitions under recombination-dominant situations, the Chen-Halperin model is the best representative of the thermally stimulated luminescence (TSL) process. It has also been found that for the TSL glow curves arising from delocalized recombination in Mandowski's semi-localized transitions model, the double-peak structure of the TSL glow curve is a function of the radiation dose as well as of the heating rate. Further, the double-peak structure of the TSL glow curves arising from delocalized recombination disappears at low doses as well as at higher heating rates. It has also been found that the TSL glow curves arising from delocalized recombination in the semi-localized transitions model based on the BS approach do not exhibit the double-peak structure observed in the Mandowski semi-localized transitions model

  19. A comparative study on the forming limit diagram prediction between Marciniak-Kuczynski model and modified maximum force criterion by using the evolving non-associated Hill48 plasticity model

    Science.gov (United States)

    Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian

    2018-05-01

    Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In the previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy in terms of flow rule and evolving features on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.

  20. Estimating and comparing the clinical and economic impact of paediatric rotavirus vaccination in Turkey using a simple versus an advanced model

    NARCIS (Netherlands)

    Bakir, Mustafa; Standaert, Baudouin; Turel, Ozden; Bilge, Zeynep Ece; Postma, Maarten

    2013-01-01

    Background: The burden of rotavirus disease is high in Turkey, reflecting the large birth cohort (> 1.2 million) and the risk of disease. Modelling can help to assess the potential economic impact of vaccination. We compared the output of an advanced model with a simple model requiring fewer data

  1. Comparative Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. There are two approaches involved in this procedure: one is the comparative approach and the other is the empirical one. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases. The comparative test cases include: ventilation, shading and geometry.

  2. Using ROC curves to compare neural networks and logistic regression for modeling individual noncatastrophic tree mortality

    Science.gov (United States)

    Susan L. King

    2003-01-01

    The performance of two classifiers, logistic regression and neural networks, are compared for modeling noncatastrophic individual tree mortality for 21 species of trees in West Virginia. The output of the classifier is usually a continuous number between 0 and 1. A threshold is selected between 0 and 1 and all of the trees below the threshold are classified as...
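    The thresholding procedure described above can be sketched directly; an ROC curve is just the set of (false positive rate, true positive rate) pairs traced out as the threshold moves. Scores and labels below are toy values, not the study's data:

    ```python
    import numpy as np

    def roc_points(scores, labels, thresholds):
        """For each threshold, trees with a score at or above it are classified
        as the positive class (e.g. predicted to die)."""
        pts = []
        for t in thresholds:
            pred = scores >= t
            tpr = np.mean(pred[labels == 1])  # sensitivity
            fpr = np.mean(pred[labels == 0])  # 1 - specificity
            pts.append((fpr, tpr))
        return pts

    scores = np.array([0.1, 0.2, 0.8, 0.9])  # classifier outputs in [0, 1]
    labels = np.array([0, 0, 1, 1])          # 1 = died, 0 = survived (toy data)
    curve = roc_points(scores, labels, thresholds=[0.0, 0.5, 1.0])
    ```

    Comparing classifiers by the whole curve, rather than by accuracy at one threshold, avoids privileging any single choice of cutoff.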

  3. Comparative study between 2 methods of mounting models in semiadjustable articulator for orthognathic surgery.

    Science.gov (United States)

    Mayrink, Gabriela; Sawazaki, Renato; Asprino, Luciana; de Moraes, Márcio; Fernandes Moreira, Roger William

    2011-11-01

    To compare the traditional method of mounting dental casts on a semiadjustable articulator with the new method suggested by Wolford and Galiano, analyzing the inclination of the maxillary occlusal plane in relation to the FHP. Two casts of 10 patients were obtained. One was used for mounting the models on a traditional articulator using a face-bow transfer system, and the other was used for mounting the models on an Occlusal Plane Indicator (OPI) platform using the SAM articulator. After that, an analysis of the accuracy of the mounted models was performed: the angle made by the occlusal plane and the FHP on the cephalogram should equal the angle between the occlusal plane and the upper member of the articulator. The measures were tabulated in Microsoft Excel(®) and compared using a 1-way analysis of variance. Statistically, the results did not reveal significant differences among the measures. The OPI and the face bow give similar results, but more studies are needed to verify the accuracy of the maxillary cant in the OPI or to develop new techniques able to overcome the disadvantages of each technique. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Parameters estimation of the single and double diode photovoltaic models using a Gauss–Seidel algorithm and analytical method: A comparative study

    International Nuclear Information System (INIS)

    Et-torabi, K.; Nassar-eddine, I.; Obbadi, A.; Errami, Y.; Rmaily, R.; Sahnoun, S.; El fajri, A.; Agunaou, M.

    2017-01-01

    Highlights: • Comparative study of two methods: a Gauss-Seidel method and an analytical method. • Five models are implemented to estimate the five parameters for the single diode. • Two models are used to estimate the seven parameters for the double diode. • The parameters are estimated under changing environmental conditions. • To choose the method/model combination most adequate for each PV module technology. - Abstract: In the field of photovoltaic (PV) panel modeling, this paper presents a comparative study of two parameter estimation methods: the iterative Gauss-Seidel method, applied to the single diode model, and the analytical method used on the double diode model. These parameter estimation methods are based on the manufacturer's datasheets. They are also tested on three PV modules of different technologies: multicrystalline (Kyocera KC200GT), monocrystalline (Shell SQ80), and thin film (Shell ST40). For the iterative method, five existing mathematical models, classified from 1 to 5, are used to estimate the parameters of these PV modules under varying environmental conditions. Only two of these models are used for the analytical method. Each model is based on the combination of the photocurrent and reverse saturation current expressions in terms of temperature and irradiance. In addition, the results of the models' simulation are compared with the experimental data obtained from the PV modules' datasheets, in order to evaluate the accuracy of the models. The simulation shows that the I-V characteristics obtained match the experimental data. In order to validate the reliability of the two methods, both the Absolute Error (AE) and the Root Mean Square Error (RMSE) were calculated. The results suggest that the analytical method can be very useful for monocrystalline and multicrystalline modules, but for the thin film module, the iterative method is the most suitable.
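    The single-diode model that the iterative method targets is implicit in the current, which is why an iterative scheme is needed at all. A minimal sketch of a fixed-point (Gauss-Seidel-style) update for the output current at a given voltage, with all parameter values as illustrative placeholders rather than datasheet values:

    ```python
    import math

    def diode_current(V, Iph=8.21, I0=1e-9, Rs=0.005, Rsh=400.0, n=1.3,
                      Vt=0.0258, tol=1e-10, max_iter=500):
        """Solve the implicit single-diode equation
            I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
        by fixed-point iteration. Parameter values are illustrative, not from
        any datasheet. Plain fixed-point iteration converges below the I-V
        knee; near open circuit a damped or Newton update is usually needed."""
        I = Iph  # start from the short-circuit photocurrent
        for _ in range(max_iter):
            I_new = (Iph
                     - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1.0)
                     - (V + I * Rs) / Rsh)
            if abs(I_new - I) < tol:
                return I_new
            I = I_new
        return I
    ```

    Repeating this solve over a sweep of voltages reproduces the I-V characteristic that the paper compares against datasheet curves.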

  5. Comparative Advantage

    DEFF Research Database (Denmark)

    Zhang, Jie; Jensen, Camilla

    2007-01-01

    that are typically explained from the supply-side variables, the comparative advantage of the exporting countries. A simple model is proposed and tested. The results render strong support for the relevance of supply-side factors such as natural endowments, technology, and infrastructure in explaining international...

  6. Comparing single- and dual-process models of memory development.

    Science.gov (United States)

    Hayes, Brett K; Dunn, John C; Joubert, Amy; Taylor, Robert

    2017-11-01

    This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds and adults, were presented with a list of pictures which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini () in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high threshold signal detection model and several single-process models (equal variance signal detection, unequal variance signal detection, mixture signal detection) were fit to the developmental data. The unequal variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development. © 2016 John Wiley & Sons Ltd.
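    For reference, the simplest of these accounts, the equal-variance signal detection model, reduces each condition to a single sensitivity index d', computed from hit and false-alarm rates. A stdlib-only sketch (rates are assumed pre-corrected away from 0 and 1):

    ```python
    from statistics import NormalDist

    def dprime(hit_rate, fa_rate):
        """Equal-variance signal detection: d' = z(hit rate) - z(false-alarm
        rate), where z is the inverse standard-normal CDF."""
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)
    ```

    The unequal-variance and mixture variants that best fit the data add, respectively, a free old-item variance parameter and a latent mixing proportion; fitting them requires full confidence-rating ROC data rather than a single hit/false-alarm pair.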

  7. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Background: Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae-specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results: Brachypodium distachyon contains cold-responsive IRIP genes which have evolved through Brachypodium-specific gene family expansions. A large cold-responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST motifs, and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions: We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  8. Comparing different kinds of words and word-word relations to test an habituation model of priming.

    Science.gov (United States)

    Rieth, Cory A; Huber, David E

    2017-06-01

    Huber and O'Reilly (2003) proposed that neural habituation exists to solve a temporal parsing problem, minimizing blending between one word and the next when words are visually presented in rapid succession. They developed a neural dynamics habituation model, explaining the finding that short duration primes produce positive priming whereas long duration primes produce negative repetition priming. The model contains three layers of processing, including a visual input layer, an orthographic layer, and a lexical-semantic layer. The predicted effect of prime duration depends both on this assumed representational hierarchy and the assumption that synaptic depression underlies habituation. The current study tested these assumptions by comparing different kinds of words (e.g., words versus non-words) and different kinds of word-word relations (e.g., associative versus repetition). For each experiment, the predictions of the original model were compared to an alternative model with different representational assumptions. Experiment 1 confirmed the prediction that non-words and inverted words require longer prime durations to eliminate positive repetition priming (i.e., a slower transition from positive to negative priming). Experiment 2 confirmed the prediction that associative priming increases and then decreases with increasing prime duration, but remains positive even with long duration primes. Experiment 3 replicated the effects of repetition and associative priming using a within-subjects design and combined these effects by examining target words that were expected to repeat (e.g., viewing the target word 'BACK' after the prime phrase 'back to'). These results support the originally assumed representational hierarchy and more generally the role of habituation in temporal parsing and priming. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Methods and theory in bone modeling drift: comparing spatial analyses of primary bone distributions in the human humerus.

    Science.gov (United States)

    Maggiano, Corey M; Maggiano, Isabel S; Tiesler, Vera G; Chi-Keb, Julio R; Stout, Sam D

    2016-01-01

    This study compares two novel methods quantifying bone shaft tissue distributions, and relates observations on human humeral growth patterns for applications in anthropological and anatomical research. Microstructural variation in compact bone occurs due to developmental and mechanically adaptive circumstances that are 'recorded' by forming bone and are important for interpretations of growth, health, physical activity, adaptation, and identity in the past and present. Those interpretations hinge on a detailed understanding of the modeling process by which bones achieve their diametric shape, diaphyseal curvature, and general position relative to other elements. Bone modeling is a complex aspect of growth, potentially causing the shaft to drift transversely through formation and resorption on opposing cortices. Unfortunately, the specifics of modeling drift are largely unknown for most skeletal elements. Moreover, bone modeling has seen little quantitative methodological development compared with secondary bone processes, such as intracortical remodeling. The techniques proposed here, starburst point-count and 45° cross-polarization hand-drawn histomorphometry, permit the statistical and populational analysis of human primary tissue distributions and provide similar results despite being suitable for different applications. This analysis of a pooled archaeological and modern skeletal sample confirms the importance of extreme asymmetry in bone modeling as a major determinant of microstructural variation in diaphyses. Specifically, humeral drift is posteromedial in the human humerus, accompanied by a significant rotational trend. In general, results encourage the usage of endocortical primary bone distributions as an indicator and summary of bone modeling drift, enabling quantitative analysis by direction and proportion in other elements and populations. © 2015 Anatomical Society.

  10. A comparative study to identify a suitable model of ownership for Iran football pro league clubs

    Directory of Open Access Journals (Sweden)

    Saeed Amirnejad

    2018-01-01

    Today, government ownership of professional football clubs is widely regarded as untenable; most sports clubs around the world are run by the private sector under various models of ownership. In Iran, access to government funding was the main reason professional sport was first developed by state firms and organizations, so club ownership falls short of professional standards. The present comparative study examined the ownership structures of football clubs in top leagues and the current state of ownership in the Iran football pro league, in order to propose a suitable ownership structure that moves Iranian clubs away from government ownership. Of the initial 120 scientific texts, thirty-two items, including papers, books and reports, were found relevant to this study. We studied the importance of ownership and several club ownership models, focusing on the stock-listing model, the private-investor model, the supporter-trust model and the Japanese partnership model; theoretical concepts, empirical studies, main findings, strengths and weaknesses were covered in the analysis. Given the variety of ownership models across leagues and their productivity in football clubs, each model has strengths and weaknesses that depend on national environmental, economic and social conditions. We therefore cannot prescribe a single definitive ownership model for Iran football pro league clubs, because the micro-environments of Iranian clubs differ. Substantial planning is needed to offer Iranian clubs a mixed supporter-investor model of ownership. Considering the strengths and weaknesses of the various models, as well as the micro- and macro-environment of Iranian football clubs, the German model and the Japanese partnership model are proposed as suitable bases for a prospective new ownership model for Iran pro league clubs. Consequently, more studies are required.

  11. Reproducibility and accuracy of linear measurements on dental models derived from cone-beam computed tomography compared with digital dental casts

    NARCIS (Netherlands)

    Waard, O. de; Rangel, F.A.; Fudalej, P.S.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Breuning, K.H.

    2014-01-01

    INTRODUCTION: The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models.

  12. Comparative analysis of elements and models of implementation in local-level spatial plans in Serbia

    Directory of Open Access Journals (Sweden)

    Stefanović Nebojša

    2017-01-01

    Implementation of local-level spatial plans is of paramount importance to the development of the local community. This paper aims to demonstrate the importance of, and offer further directions for, research into the implementation of spatial plans by presenting the results of a study on models of implementation. The paper describes the basic theoretical postulates of a model for implementing spatial plans. A comparative analysis of the application of elements and models of implementation of plans in practice was conducted based on the spatial plans for the local municipalities of Arilje, Lazarevac and Sremska Mitrovica. The analysis includes four models of implementation: the strategy and policy of spatial development; spatial protection; the implementation of planning solutions of a technical nature; and the implementation of rules of use, arrangement and construction of spaces. The main results of the analysis are presented and used to give recommendations for improving the elements and models of implementation. Final deliberations show that models of implementation are generally used in practice and combined in spatial plans. Based on the analysis of how models of implementation are applied in practice, a general conclusion concerning the complex character of the local level of planning is presented and elaborated. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR 36035: Spatial, Environmental, Energy and Social Aspects of Developing Settlements and Climate Change - Mutual Impacts, and Grant no. III 47014: The Role and Implementation of the National Spatial Plan and Regional Development Documents in Renewal of Strategic Research, Thinking and Governance in Serbia]

  13. Validity of Intraoral Scans Compared with Plaster Models: An In-Vivo Comparison of Dental Measurements and 3D Surface Analysis.

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    Dental measurements have commonly been taken from plaster dental models obtained from alginate impressions. Through the use of an intraoral scanner, digital impressions now acquire the information directly from the mouth. The purpose of this study was to determine the validity of intraoral scans compared to plaster models. Two types of dental models (intraoral scan and plaster model) of 20 subjects were included in this study. The subjects had impressions taken of their teeth, which were poured as plaster models. In addition, their mouths were scanned with the intraoral scanner and the scans were converted into digital models. Eight transverse and 16 anteroposterior measurements, and 24 tooth heights and widths, were recorded on the plaster models with a digital caliper and on the intraoral scans with 3D reverse-engineering software. For 3D surface analysis, the two models were superimposed using a best-fit algorithm. The average differences between the two models at all points on the surfaces were computed. A paired t-test and Bland-Altman plots were used to determine the validity of measurements from the intraoral scan compared to those from the plaster model. There were no significant differences between the plaster models and intraoral scans, except for one measurement of lower intermolar width. The Bland-Altman plots of all measurements showed that differences between the two models were within the limits of agreement. The average surface difference between the two models was within 0.10 mm. The results of the present study indicate that intraoral scans are clinically acceptable for diagnosis and treatment planning in dentistry and can be used in place of plaster models.
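    The Bland-Altman procedure used here has a compact form: the bias is the mean of the paired differences, and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations of those differences. The numbers below are invented toy values, not the study's measurements:

    ```python
    import numpy as np

    def bland_altman_limits(a, b):
        """Return (bias, lower limit, upper limit) for paired measurements,
        with 95% limits of agreement = bias +/- 1.96 * SD of differences."""
        d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bias = d.mean()
        sd = d.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Toy example: intraoral-scan vs plaster-model widths in mm (invented)
    scan = [35.1, 42.0, 28.4, 31.2]
    plaster = [35.0, 42.2, 28.5, 31.1]
    bias, lo, hi = bland_altman_limits(scan, plaster)
    ```

    Agreement is judged by whether the limits are narrow enough to be clinically unimportant, not by a significance test alone.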

  14. Validity of Intraoral Scans Compared with Plaster Models: An In-Vivo Comparison of Dental Measurements and 3D Surface Analysis

    Science.gov (United States)

    2016-01-01

    Purpose Dental measurements have commonly been taken from plaster dental models obtained from alginate impressions. Through the use of an intraoral scanner, digital impressions can now acquire the information directly from the mouth. The purpose of this study was to determine the validity of intraoral scans compared to plaster models. Materials and Methods Two types of dental models (intraoral scan and plaster model) of 20 subjects were included in this study. The subjects had impressions taken of their teeth, from which plaster models were made. In addition, their mouths were scanned with the intraoral scanner and the scans were converted into digital models. Eight transverse and 16 anteroposterior measurements, and 24 tooth heights and widths, were recorded on the plaster models with a digital caliper and on the intraoral scans with 3D reverse engineering software. For 3D surface analysis, the two models were superimposed using a best-fit algorithm. The average differences between the two models at all points on the surfaces were computed. Paired t-tests and Bland-Altman plots were used to determine the validity of measurements from the intraoral scans compared to those from the plaster models. Results There were no significant differences between the plaster models and intraoral scans, except for one measurement of lower intermolar width. The Bland-Altman plots of all measurements showed that differences between the two models were within the limits of agreement. The average surface difference between the two models was within 0.10 mm. Conclusions The results of the present study indicate that intraoral scans are clinically acceptable for diagnosis and treatment planning in dentistry and can be used in place of plaster models. PMID:27304976
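
    The Bland-Altman analysis used in this study computes the bias and 95% limits of agreement from the paired differences. A minimal sketch with synthetic paired measurements (illustrative placeholders, not the study's data):

    ```python
    import numpy as np

    # Synthetic paired measurements (mm): caliper on plaster vs. intraoral scan
    rng = np.random.default_rng(0)
    plaster = rng.normal(35.0, 2.0, size=20)          # caliper on plaster (mm)
    scan = plaster + rng.normal(0.02, 0.08, size=20)  # intraoral scan (mm)

    diff = scan - plaster
    mean_diff = diff.mean()                           # bias between the methods
    sd_diff = diff.std(ddof=1)
    loa_low = mean_diff - 1.96 * sd_diff              # lower limit of agreement
    loa_high = mean_diff + 1.96 * sd_diff             # upper limit of agreement

    # The methods "agree" when nearly all differences fall inside the limits
    within = np.mean((diff >= loa_low) & (diff <= loa_high))
    print(f"bias={mean_diff:.3f} mm, LoA=[{loa_low:.3f}, {loa_high:.3f}] mm")
    ```

    A paired t-test on `diff` would then check whether the bias differs significantly from zero.
    
    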

  15. Comparative studies of atomic independent-particle potentials

    International Nuclear Information System (INIS)

    Talman, J.D.; Ganas, P.S.; Green, A.E.S.

    1979-01-01

    A number of atomic properties are compared in various independent-particle models for atoms. The models studied are the Hartree-Fock method, a variationally optimized potential model, a parametrized analytic form of the same model, parametrized analytic models constructed to fit atomic energy levels, the so-called Hartree-Fock-Slater model, and the Xα model. The physical properties compared are single-particle energy levels, total energies, and dipole polarizabilities. The extent to which the virial theorem is satisfied in the different models is also considered. The atoms Be, Ne, Ar, Kr, and Xe and the ions O V and Al IV have been compared. The results show that the experimental properties can be well represented by several of the independent-particle models. Since it has been shown that the optimized potential models yield wavefunctions that are almost the same as Hartree-Fock wavefunctions, they provide a natural solution to the problem of extending the Hartree-Fock method to excited states

  16. Simultaneous Feedback Models with Macro-Comparative Cross-Sectional Data

    Directory of Open Access Journals (Sweden)

    Nate Breznau

    2018-06-01

    Full Text Available Social scientists often work with theories of reciprocal causality. Sometimes theories suggest that reciprocal causes work simultaneously, or work on a time-scale small enough to make them appear simultaneous. Researchers may employ simultaneous feedback models to investigate such theories, although the practice is rare in cross-sectional survey research. This paper discusses the conditions that make these models possible, if not desirable, with such data. This methodological excursus covers the construction of simultaneous feedback models from a structural equation modeling perspective, which allows the researcher to test whether a simultaneous feedback theory fits survey data, to test competing hypotheses, and to engage in macro-comparisons. The paper presents the methods in a manner and language amenable to the practicing social scientist who is not a statistician or matrix mathematician, demonstrates how to run the models using three popular software programs (MPlus, Stata and R), and gives an empirical example using International Social Survey Program data.
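
    A simultaneous feedback (nonrecursive) two-equation model of the kind discussed can be estimated by two-stage least squares when each equation has an exogenous variable excluded from the other equation. A sketch on synthetic data; the structural coefficients 0.4 and 0.5 are arbitrary illustrative values, not results from the paper:

    ```python
    import numpy as np

    # Structural truth: y1 = 0.4*y2 + 1.0*x1 + e1 ;  y2 = 0.5*y1 + 1.0*x2 + e2
    # Data are generated from the reduced form so they satisfy both equations.
    rng = np.random.default_rng(7)
    n = 5000
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    e1, e2 = rng.normal(size=n), rng.normal(size=n)
    det = 1 - 0.4 * 0.5
    y1 = (1.0 * x1 + e1 + 0.4 * (1.0 * x2 + e2)) / det
    y2 = (1.0 * x2 + e2 + 0.5 * (1.0 * x1 + e1)) / det

    def two_sls(y, endog, exog_own, instrument):
        """2SLS: first project the endogenous regressor on all exogenous vars."""
        z = np.column_stack([np.ones(len(y)), exog_own, instrument])
        endog_hat = z @ np.linalg.lstsq(z, endog, rcond=None)[0]  # first stage
        x = np.column_stack([np.ones(len(y)), endog_hat, exog_own])
        return np.linalg.lstsq(x, y, rcond=None)[0]               # second stage

    _, b_y2, _ = two_sls(y1, endog=y2, exog_own=x1, instrument=x2)
    _, b_y1, _ = two_sls(y2, endog=y1, exog_own=x2, instrument=x1)
    print(f"feedback estimates: {b_y2:.2f} (true 0.4), {b_y1:.2f} (true 0.5)")
    ```

    SEM software such as the MPlus, Stata and R programs named in the abstract estimates the same nonrecursive system jointly, typically by maximum likelihood.
    
    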

  17. Atmospheric Dispersion Models for the Calculation of Environmental Impact: A Comparative Study

    International Nuclear Information System (INIS)

    Caputo, Marcelo; Gimenez, Marcelo; Felicelli, Sergio; Schlamp, Miguel

    2000-01-01

    In this paper some new comparisons are presented between the codes AERMOD, HPDM and HYSPLIT. The first two are Gaussian stationary plume codes developed to calculate the environmental impact produced by chemical contaminants. HYSPLIT is a hybrid code: it uses a Lagrangian reference system to describe the transport of a puff's center of mass and an Eulerian system to describe the dispersion within the puff. The meteorological and topographic data used in the present work were obtained from runs of the prognostic code RAMS, provided by NOAA. The emission was fixed at 0.3 g/s, 284 K and 0 m/s. The surface roughness was fixed at 0.1 m and flat terrain was considered. In order to analyze separate effects and to deepen the comparison, the meteorological data were split in two depending on the atmospheric stability class (F to B), and the wind direction was fixed to neglect its contribution to the contaminant dispersion. The main contribution of this work is to provide recommendations about the validity range of each code depending on the model used. In the case of Gaussian models, the validity range is fixed by the distance over which the atmospheric conditions can be considered homogeneous. On the other hand, the validity range of HYSPLIT's model is determined by the spatial extension of the meteorological data. The results obtained with the three codes are comparable if the emission is in equilibrium with the environment, that is, if the gases are emitted at the same temperature as the medium with zero velocity. There was an important difference between the dispersion parameters used by the Gaussian codes
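
    Gaussian stationary plume codes such as AERMOD and HPDM are built on the standard Gaussian plume equation with ground reflection. A minimal sketch of that equation, using the paper's 0.3 g/s source but otherwise illustrative values (the sigma parameterizations below are placeholders, not those of either code):

    ```python
    import math

    def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
        """Steady-state Gaussian plume concentration (g/m^3).

        q: emission rate (g/s); u: wind speed (m/s); h: effective source
        height (m); sigma_y/sigma_z: dispersion parameters (m) at the downwind
        distance of interest. Ground reflection enters via the image-source term.
        """
        lateral = math.exp(-y**2 / (2 * sigma_y**2))
        vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                    + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
        return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Centerline, ground-level concentration for a 0.3 g/s source
    # (wind speed, source height and sigmas are illustrative assumptions)
    c = gaussian_plume(q=0.3, u=3.0, y=0.0, z=0.0, h=10.0,
                       sigma_y=30.0, sigma_z=15.0)
    print(f"{c:.2e} g/m^3")
    ```

    The paper's point about homogeneity follows directly: the sigmas are valid only where a single stability class describes the whole fetch.
    
    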

  18. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    Full Text Available When developing an information system, it is important to create clear models and to choose suitable modeling languages. The article analyzes the SRML, SBVR, PRR, SWRL and OCL rule-specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. It presents a theoretical comparison of business rule and business process modeling languages, compares the languages according to selected modeling aspects, and selects the best-fitting set of languages for a three-layer framework for business-rule-based software modeling.

  19. Comparative in vitro and in vivo models of cytotoxicity and genotoxicity

    International Nuclear Information System (INIS)

    Brooks, A.L.; Mitchell, C.E.; Seiler, S.A.

    1986-01-01

    To understand the development of disease from inhalation of complex chemical mixtures, it is necessary to use both in vitro and whole animal systems. This project is designed to provide links between these two types of research. The project has three major goals. The first goal is to evaluate the mutagenic activity of complex mixtures and the interactions between different fractions in these mixtures. The second is to develop model cellular systems that help define the mechanisms of genotoxic damage and repair in the lung. The third goal is to understand the mechanisms involved in the induction of mutations and chromosome aberrations in mammalian cells. Research on the measurement and interactions of mutagens in complex mixtures is illustrated by a study using diesel exhaust particle extracts. The extracts were fractionated into ten different chemical classes. Each of the fractions was tested for mutagenic activity in the Ames Salmonella mutation assay. Individual fractions were combined using different permutations. The total mixture was reconstituted and the mutagenic activity compared to the predicted level of activity. Mutagenic activity was additive, indicating that the chemical fractionation did not alter the extracts and that there was little evidence of synergistic or antagonistic interaction. To help define the mechanisms involved in the induction of mutations, the authors exposed CHO cells to radiation and mutagenic chemicals, alone and in combination. In these studies, they demonstrated that when cells were exposed to 500 rad of x-rays followed by either direct- or indirect-acting mutagens, the mutation frequency was less than would be predicted by an additive model

  20. Verification of a computational cardiovascular system model comparing the hemodynamics of a continuous flow to a synchronous valveless pulsatile flow left ventricular assist device.

    Science.gov (United States)

    Gohean, Jeffrey R; George, Mitchell J; Pate, Thomas D; Kurusz, Mark; Longoria, Raul G; Smalling, Richard W

    2013-01-01

    The purpose of this investigation is to use a computational model to compare a synchronized valveless pulsatile left ventricular assist device with continuous flow left ventricular assist devices at the same level of device flow, and to verify the model with in vivo porcine data. A dynamic system model of the human cardiovascular system was developed to simulate the support of a healthy or failing native heart from a continuous flow left ventricular assist device or a synchronous pulsatile valveless dual-piston positive displacement pump. These results were compared with measurements made during in vivo porcine experiments. Results from the simulation model and from the in vivo counterpart show that the pulsatile pump provides higher cardiac output, left ventricular unloading, cardiac pulsatility, and aortic valve flow as compared with the continuous flow model at the same level of support. The dynamic system model developed for this investigation can effectively simulate human cardiovascular support by a synchronous pulsatile or continuous flow ventricular assist device.
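
    Dynamic system models of the cardiovascular loop are typically assembled from lumped ("Windkessel") compartments. The sketch below integrates a minimal two-element Windkessel driven by a half-sine ejection waveform; every parameter value is an illustrative assumption, not a value from the paper's model:

    ```python
    import math

    # Two-element Windkessel: C * dP/dt = Q_in(t) - P/R
    R = 1.0    # peripheral resistance (mmHg*s/mL), illustrative
    C = 1.5    # arterial compliance (mL/mmHg), illustrative
    dt = 1e-3  # integration step (s)

    def inflow(t, hr=75, stroke_volume=70.0):
        """Half-sine systolic ejection waveform (mL/s); zero in diastole."""
        period = 60.0 / hr
        t_sys = 0.3 * period
        phase = t % period
        if phase < t_sys:
            return stroke_volume * math.pi / (2 * t_sys) * math.sin(math.pi * phase / t_sys)
        return 0.0

    # Forward-Euler integration over 5 s of simulated time
    p = 80.0  # initial aortic pressure (mmHg)
    pressures = []
    for step in range(int(5.0 / dt)):
        t = step * dt
        p += dt / C * (inflow(t) - p / R)
        pressures.append(p)

    # Last 800 steps = one full cardiac cycle at 75 bpm
    print(f"min {min(pressures[-800:]):.0f} / max {max(pressures[-800:]):.0f} mmHg")
    ```

    A continuous flow device adds a constant term to `Q_in`, while a synchronous pulsatile pump adds a second timed waveform, which is how the two support modes produce different pulsatility at the same mean flow.
    
    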

  1. Comparing CT perfusion with oxygen partial pressure in a rabbit VX2 soft-tissue tumor model

    International Nuclear Information System (INIS)

    Sun Changjin; Li Chao; Lv Haibo

    2014-01-01

    The aim of this study was to evaluate the oxygen partial pressure of the rabbit model of the VX2 tumor using a 64-slice perfusion CT and to compare the results with that obtained using the oxygen microelectrode method. Perfusion CT was performed for 45 successfully constructed rabbit models of a VX2 brain tumor. The perfusion values of the brain tumor region of interest, the blood volume (BV), the time to peak (TTP) and the peak enhancement intensity (PEI) were measured. The results were compared with the partial pressure of oxygen (PO2) of that region of interest obtained using the oxygen microelectrode method. The perfusion values of the brain tumor region of interest in 45 successfully constructed rabbit models of a VX2 brain tumor ranged from 1.3–127.0 (average, 21.1 ± 26.7 ml/min/ml); BV ranged from 1.2–53.5 ml/100g (average, 22.2 ± 13.7 ml/100g); PEI ranged from 8.7–124.6 HU (average, 43.5 ± 28.7 HU); and TTP ranged from 8.2–62.3 s (average, 38.8 ± 14.8 s). The PO2 in the corresponding region ranged from 0.14–47 mmHg (average, 16 ± 14.8 mmHg). The perfusion CT positively correlated with the tumor PO2, which can be used for evaluating the tumor hypoxia in clinical practice. (author)

  2. Comparing CT perfusion with oxygen partial pressure in a rabbit VX2 soft-tissue tumor model.

    Science.gov (United States)

    Sun, Chang-Jin; Li, Chao; Lv, Hai-Bo; Zhao, Cong; Yu, Jin-Ming; Wang, Guang-Hui; Luo, Yun-Xiu; Li, Yan; Xiao, Mingyong; Yin, Jun; Lang, Jin-Yi

    2014-01-01

    The aim of this study was to evaluate the oxygen partial pressure of the rabbit model of the VX2 tumor using a 64-slice perfusion CT and to compare the results with that obtained using the oxygen microelectrode method. Perfusion CT was performed for 45 successfully constructed rabbit models of a VX2 brain tumor. The perfusion values of the brain tumor region of interest, the blood volume (BV), the time to peak (TTP) and the peak enhancement intensity (PEI) were measured. The results were compared with the partial pressure of oxygen (PO2) of that region of interest obtained using the oxygen microelectrode method. The perfusion values of the brain tumor region of interest in 45 successfully constructed rabbit models of a VX2 brain tumor ranged from 1.3-127.0 (average, 21.1 ± 26.7 ml/min/ml); BV ranged from 1.2-53.5 ml/100g (average, 22.2 ± 13.7 ml/100g); PEI ranged from 8.7-124.6 HU (average, 43.5 ± 28.7 HU); and TTP ranged from 8.2-62.3 s (average, 38.8 ± 14.8 s). The PO2 in the corresponding region ranged from 0.14-47 mmHg (average, 16 ± 14.8 mmHg). The perfusion CT positively correlated with the tumor PO2, which can be used for evaluating the tumor hypoxia in clinical practice.
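
    The study's key quantitative claim, a positive correlation between perfusion CT values and microelectrode PO2, reduces to a Pearson correlation over the 45 paired measurements. A sketch with synthetic stand-in values spanning the reported perfusion range (the linear link and noise level are assumptions for illustration):

    ```python
    import numpy as np

    # Synthetic stand-ins for the 45 paired measurements
    rng = np.random.default_rng(5)
    perfusion = rng.uniform(1.3, 127.0, size=45)        # ml/min/ml
    po2 = 0.3 * perfusion + rng.normal(0, 5, size=45)   # mmHg, noisy linear link

    r = np.corrcoef(perfusion, po2)[0, 1]               # Pearson r
    print(f"Pearson r = {r:.2f}")
    ```

    A positive r of this kind is what justifies using the noninvasive CT parameters as a surrogate for direct oxygen measurement.
    
    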

  3. Comparative immunological evaluation of recombinant Salmonella Typhimurium strains expressing model antigens as live oral vaccines.

    Science.gov (United States)

    Zheng, Song-yue; Yu, Bin; Zhang, Ke; Chen, Min; Hua, Yan-Hong; Yuan, Shuofeng; Watt, Rory M; Zheng, Bo-Jian; Yuen, Kwok-Yung; Huang, Jian-Dong

    2012-09-26

    Despite the development of various systems to generate live recombinant Salmonella Typhimurium vaccine strains, little work has been performed to systematically evaluate and compare their relative immunogenicity. Such information would provide invaluable guidance for the future rational design of live recombinant Salmonella oral vaccines. To compare vaccine strains encoded with different antigen delivery and expression strategies, a series of recombinant Salmonella Typhimurium strains were constructed that expressed either the enhanced green fluorescent protein (EGFP) or a fragment of the hemagglutinin (HA) protein from the H5N1 influenza virus, as model antigens. The antigens were expressed from the chromosome, from high- or low-copy plasmids, or encoded on a eukaryotic expression plasmid. Antigens were targeted for expression in either the cytoplasm or the outer membrane. Combinations of strategies were employed to evaluate the efficacy of combined delivery/expression approaches. After investigating in vitro and in vivo antigen expression, growth and infection abilities, the immunogenicity of the constructed recombinant Salmonella strains was evaluated in mice. Using the soluble model antigen EGFP, our results indicated that vaccine strains with high and stable antigen expression exhibited high B cell responses, whilst eukaryotic expression or colonization with good construct stability was critical for T cell responses. For the insoluble model antigen HA, an outer membrane expression strategy induced better B cell and T cell responses than a cytoplasmic strategy. Most notably, the combination of two different expression strategies did not increase the immune response elicited. Through systematically evaluating and comparing the immunogenicity of the constructed recombinant Salmonella strains in mice, we identified their respective advantages and deleterious or synergistic effects. Different construction strategies were optimally required for soluble versus

  4. Comparing offshore wind farm wake observed from satellite SAR and wake model results

    Science.gov (United States)

    Bay Hasager, Charlotte

    2014-05-01

    Offshore winds can be observed from satellite synthetic aperture radar (SAR). In the FP7 EERA DTOC project, the European Energy Research Alliance project on Design Tools for Offshore Wind Farm Clusters, there is a focus on mid- to far-field wind farm wakes. The more wind farms are constructed near other wind farms, the greater the potential loss in annual energy production in all neighboring wind farms due to wind farm cluster effects. This of course depends upon the prevailing wind directions and wind speed levels, the distance between the wind farms, and the wind turbine sizes and spacing. Some knowledge is available within wind farm arrays and in the near-field from various investigations. There are 58 offshore wind farms in the Northern European seas grid connected and in operation, several of them spaced near each other. Several twin wind farms are in operation, including Nysted-1 and Rødsand-2 in the Baltic Sea, and Horns Rev 1 and Horns Rev 2, Egmond aan Zee and Prinses Amalia, and Thompton 1 and Thompton 2, all in the North Sea. There are ambitious plans to construct numerous wind farms - great clusters of offshore wind farms. The current investigation includes mapping from high-resolution satellite SAR of several of the offshore wind farms in operation in the North Sea. Around 20 images with wind farm wake cases have been retrieved and processed. The data are from the Canadian RADARSAT-1/-2 satellites, which observe in microwave C-band and have been used for ocean surface wind retrieval for several years. The satellite wind maps are valid at 10 m above sea level. The wakes are identified in the raw images as darker areas downwind of the wind farms. In the SAR-based wind maps the wake deficit is found as areas of lower winds downwind of the wind farms compared to parallel undisturbed flow in the flow direction. The wind direction is clearly visible from lee effects and wind streaks in the images. The wind farm wake cases

  5. Comparative Study between Capital Asset Pricing Model and Arbitrage Pricing Theory in Indonesian Capital Market during Period 2008-2012

    Directory of Open Access Journals (Sweden)

    Leo Julianto

    2015-09-01

    Full Text Available For decades, many models have emerged to explain the returns earned, in order to satisfy human curiosity. Since then, various studies and empirical findings in many countries' stock markets have shown that explanations of market returns and asset returns yield different results, both in the clarity of the model and in the identification of significant determinant variables. Therefore, many comparative studies between models have been carried out. In this study, the author attempts a comparative study between two models, APT and CAPM, in the Indonesian capital market during the period 2008 until 2012. The author also attempts to find how well inflation, the interest rate, and the exchange rate describe the returns earned in each sector of the Indonesian capital market. As a result, the author finds that CAPM has greater explanatory power than APT in the Indonesian capital market during the period 2008-2012. The author also finds that only two macroeconomic factors affect certain samples significantly: changes in the BI rate, which affect AALI, ANTM, ASII, TLKM and UNTR, and changes in the exchange rate, which affect INDF and TLKM.
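
    The CAPM side of such a comparison reduces to an OLS time-series regression whose R² measures explanatory power. A hedged sketch on synthetic monthly excess returns (the figures are illustrative, not IDX data; the true alpha and beta below are arbitrary):

    ```python
    import numpy as np

    # Synthetic monthly excess returns for one stock and the market index
    rng = np.random.default_rng(42)
    market = rng.normal(0.01, 0.05, size=60)                  # R_m - R_f
    stock = 0.002 + 1.3 * market + rng.normal(0, 0.02, size=60)  # R_i - R_f

    # CAPM: R_i - R_f = alpha + beta * (R_m - R_f); fit by OLS
    X = np.column_stack([np.ones_like(market), market])
    (alpha, beta), *_ = np.linalg.lstsq(X, stock, rcond=None)

    # R^2 is the "explanatory power" criterion used to compare CAPM with APT
    resid = stock - X @ np.array([alpha, beta])
    r2 = 1 - resid.var() / stock.var()
    print(f"alpha={alpha:.4f}, beta={beta:.2f}, R^2={r2:.2f}")
    ```

    An APT specification simply widens `X` with additional factor columns (e.g. inflation, interest-rate and exchange-rate surprises) and compares the resulting R².
    
    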

  6. A Comparative Study on Satellite- and Model-Based Crop Phenology in West Africa

    Directory of Open Access Journals (Sweden)

    Elodie Vintrou

    2014-02-01

    Full Text Available Crop phenology is essential for evaluating crop production in the food-insecure regions of West Africa. The aim of the paper is to study whether satellite observations of plant phenology are consistent with ground knowledge of crop cycles as expressed in agro-simulations. We used phenological variables from a MODIS Land Cover Dynamics (MCD12Q2) product and examined whether they reproduced the spatio-temporal variability of crop phenological stages in Southern Mali. Furthermore, a validated cereal crop growth model for this region, SARRA-H (System for Regional Analysis of Agro-Climatic Risks), provided precise agronomic information. Remotely-sensed green-up, maturity, senescence and dormancy MODIS dates were extracted for areas previously identified as crops and were compared with simulated leaf area index (LAI) temporal profiles generated using the SARRA-H crop model, which considered the main cropping practices. We studied both spatial (eight sites throughout South Mali during 2007) and temporal (two sites from 2002 to 2008) differences between simulated crop cycles and determined how the differences were indicated in satellite-derived phenometrics. The spatial comparison of the phenological indicator observations and simulations showed mainly that (i) the satellite-derived start-of-season (SOS) was detected approximately 30 days before the model-derived SOS; and (ii) the satellite-derived end-of-season (EOS) was typically detected 40 days after the model-derived EOS. Studying the inter-annual difference, we verified that the mean bias was globally consistent for different climatic conditions. Therefore, the land cover dynamics derived from the MODIS time series can reproduce the spatial and temporal variability of different start-of-season and end-of-season crop species. In particular, we recommend simultaneously using start-of-season phenometrics with crop models for yield forecasting to complement commonly used climate data and provide a better

  7. Underwater floating robot-fish: a comparative analysis of the results of mathematical modelling and full-scale tests of the prototype

    Directory of Open Access Journals (Sweden)

    Jatsun Sergey

    2017-01-01

    Full Text Available The article presents a comparative analysis of the results of computer mathematical modelling of the motion of the underwater robot-fish, implemented using the MATLAB/Simulink package, and full-scale tests of an experimental prototype developed in the laboratory of mechatronics and robotics of Southwest State University.

  8. A comparative modeling and molecular docking study on Mycobacterium tuberculosis targets involved in peptidoglycan biosynthesis.

    Science.gov (United States)

    Fakhar, Zeynab; Naiker, Suhashni; Alves, Claudio N; Govender, Thavendran; Maguire, Glenn E M; Lameira, Jeronimo; Lamichhane, Gyanu; Kruger, Hendrik G; Honarparvar, Bahareh

    2016-11-01

    An alarming rise of multidrug-resistant Mycobacterium tuberculosis strains and the continuous high global morbidity of tuberculosis have reinvigorated the need to identify novel targets to combat the disease. The enzymes that catalyze the biosynthesis of peptidoglycan in M. tuberculosis are essential and noteworthy therapeutic targets. In this study, the biochemical function and homology modeling of MurI, MurG, MraY, DapE, DapA, Alr, and Ddl enzymes of the CDC1551 M. tuberculosis strain involved in the biosynthesis of peptidoglycan cell wall are reported. Generation of the 3D structures was achieved with Modeller 9.13. To assess the structural quality of the obtained homology modeled targets, the models were validated using PROCHECK, PDBsum, QMEAN, and ERRAT scores. Molecular dynamics simulations were performed to calculate root mean square deviation (RMSD) and radius of gyration (Rg) of MurI and MurG target proteins and their corresponding templates. For further model validation, RMSD and Rg for selected targets/templates were investigated to compare the close proximity of their dynamic behavior in terms of protein stability and average distances. To identify the potential binding mode required for molecular docking, binding site information of all modeled targets was obtained using two prediction algorithms. A docking study was performed for MurI to determine the potential mode of interaction between the inhibitor and the active site residues. This study presents the first accounts of the 3D structural information for the selected M. tuberculosis targets involved in peptidoglycan biosynthesis.
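
    Model validation via RMSD, as performed for the MurI and MurG trajectories, presupposes optimal rigid superposition of the two coordinate sets. A minimal Kabsch-alignment RMSD sketch on synthetic coordinates (not tied to the M. tuberculosis structures):

    ```python
    import numpy as np

    def kabsch_rmsd(p, q):
        """RMSD between two Nx3 coordinate sets after optimal rigid superposition."""
        p = p - p.mean(axis=0)                      # remove translation
        q = q - q.mean(axis=0)
        u, s, vt = np.linalg.svd(p.T @ q)           # SVD of the covariance (Kabsch)
        d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
        rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # optimal rotation of p onto q
        return float(np.sqrt(np.mean(np.sum((p @ rot.T - q) ** 2, axis=1))))

    # A rigidly rotated and translated structure has RMSD ~0 to the original
    coords = np.random.default_rng(0).normal(size=(50, 3))
    theta = 0.7
    rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0, 0.0, 1.0]])
    moved = coords @ rot_z.T + np.array([1.0, 2.0, 3.0])
    print(f"RMSD = {kabsch_rmsd(coords, moved):.2e}")
    ```

    In an MD analysis the same computation is applied frame by frame against the starting structure, which is how the RMSD stability curves for targets and templates are produced.
    
    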

  9. Quantitative rainfall metrics for comparing volumetric rainfall retrievals to fine scale models

    Science.gov (United States)

    Collis, Scott; Tao, Wei-Kuo; Giangrande, Scott; Fridlind, Ann; Theisen, Adam; Jensen, Michael

    2013-04-01

    Precipitation processes play a significant role in the energy balance of convective systems, for example through latent heating and evaporative cooling. Heavy precipitation "cores" can also be a proxy for vigorous convection and vertical motions. However, comparisons between rainfall rate retrievals from volumetric remote sensors with forecast rain fields from high-resolution numerical weather prediction simulations are complicated by differences in the location and timing of storm morphological features. This presentation will outline a series of metrics for diagnosing the spatial variability and statistical properties of precipitation maps produced both from models and retrievals. We include existing metrics such as Contoured Frequency by Altitude Diagrams (Yuter and Houze 1995) and Statistical Coverage Products (May and Lane 2009) and propose new metrics based on morphology, cell and feature based statistics. Work presented focuses on observations from the ARM Southern Great Plains radar network, consisting of three agile X-band radar systems with a very dense coverage pattern and a C-band system providing site-wide coverage. By combining multiple sensors, resolutions of 250 m2 can be achieved, allowing improved characterization of fine-scale features. Analyses compare data collected during the Midlatitude Continental Convective Clouds Experiment (MC3E) with simulations of observed systems using the NASA Unified Weather Research and Forecasting model. May, P. T., and T. P. Lane, 2009: A method for using weather radar data to test cloud resolving models. Meteorological Applications, 16, 425-425, doi:10.1002/met.150. Yuter, S. E., and R. A. Houze, 1995: Three-Dimensional Kinematic and Microphysical Evolution of Florida Cumulonimbus. Part II: Frequency Distributions of Vertical Velocity, Reflectivity, and Differential Reflectivity. Mon. Wea. Rev., 123, 1941-1963, doi:10.1175/1520-0493(1995)1232.0.CO;2.
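
    The CFAD metric cited above (Yuter and Houze 1995) is, at its core, a per-altitude normalized frequency distribution of a radar quantity. A sketch on a synthetic reflectivity field (the bin edges and the field itself are illustrative assumptions, not MC3E data):

    ```python
    import numpy as np

    # Synthetic reflectivity profiles: 20 altitude levels x 500 columns
    rng = np.random.default_rng(1)
    n_levels, n_profiles = 20, 500
    altitudes = np.linspace(0.5, 10.0, n_levels)  # km
    # Reflectivity decreasing with height, plus scatter (illustrative field)
    refl = 40 - 2.5 * altitudes[:, None] + rng.normal(0, 6, (n_levels, n_profiles))

    dbz_edges = np.arange(-10, 61, 5)             # dBZ bins
    cfad = np.empty((n_levels, len(dbz_edges) - 1))
    for k in range(n_levels):
        counts, _ = np.histogram(refl[k], bins=dbz_edges)
        cfad[k] = counts / counts.sum()           # normalize each altitude row

    # Each altitude row is now a frequency distribution summing to 1,
    # which makes levels with very different sample counts comparable
    print(cfad.shape)
    ```

    Because the per-level normalization removes the location and timing of individual cells, CFADs from a model run and a retrieval can be compared even when the simulated storms are displaced from the observed ones.
    
    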

  10. Comparative study of two models of combined pulmonary fibrosis and emphysema in mice.

    Science.gov (United States)

    Zhang, Wan-Guang; Wu, Si-Si; He, Li; Yang, Qun; Feng, Yi-Kuan; Chen, Yue-Tao; Zhen, Guo-Hua; Xu, Yong-Jian; Zhang, Zhen-Xiang; Zhao, Jian-Ping; Zhang, Hui-Lan

    2017-04-01

    Combined pulmonary fibrosis and emphysema (CPFE) is an "umbrella term" encompassing emphysema and pulmonary fibrosis, but its pathogenesis is not known. We established two models of CPFE in mice using tracheal instillation with bleomycin (BLM) or murine gammaherpesvirus 68 (MHV-68). Experimental mice were divided randomly into four groups: A (normal control, n=6), B (emphysema, n=6), C (emphysema+MHV-68, n=24), D (emphysema+BLM, n=6). Group C was subdivided into four groups: C1 (sacrificed on day 367, 7 days after tracheal instillation of MHV-68); C2 (day 374; 14 days); C3 (day 381; 21 days); C4 (day 388; 28 days). Conspicuous emphysema and interstitial fibrosis were observed in both the BLM and MHV-68 CPFE mouse models. However, BLM induced diffuse pulmonary interstitial fibrosis with severe diffuse pulmonary inflammation, whereas MHV-68 induced relatively modest inflammation and fibrosis that were not diffuse but localized around bronchioles. Inflammation and fibrosis were detectable in the day-7 subgroup and reached a peak in the day-28 subgroup in the emphysema+MHV-68 group. Levels of macrophage chemoattractant protein-1, macrophage inflammatory protein-1α, interleukin-13, and transforming growth factor-β1 in bronchoalveolar lavage fluid were increased significantly in both models. The percentage of apoptotic type II lung epithelial cells was significantly higher, whereas all four cytokines and the number of macrophages were significantly lower, in the emphysema+MHV-68 group compared with the emphysema+BLM group. The different pathological changes between the BLM and MHV-68 models demonstrate different pathology subtypes of CPFE: macrophage infiltration and apoptosis of type II lung epithelial cells increased with increasing pathology score for pulmonary fibrosis. Copyright © 2017 Elsevier GmbH. All rights reserved.

  11. How do farm models compare when estimating greenhouse gas emissions from dairy cattle production?

    DEFF Research Database (Denmark)

    Hutchings, Nicholas John; Özkan, Şeyda; de Haan, M

    2018-01-01

    The European Union Effort Sharing Regulation (ESR) will require a 30% reduction in greenhouse gas (GHG) emissions by 2030 compared with 2005 from the sectors not included in the European Emissions Trading Scheme, including agriculture. This will require the estimation of current and future...... from four farm-scale models (DairyWise, FarmAC, HolosNor and SFARMMOD) were calculated for eight dairy farming scenarios within a factorial design consisting of two climates (cool/dry and warm/wet)×two soil types (sandy and clayey)×two feeding systems (grass only and grass/maize). The milk yield per...

  12. A comparative Thermal Analysis of conventional parabolic receiver tube and Cavity model tube in a Solar Parabolic Concentrator

    Science.gov (United States)

    Arumugam, S.; Ramakrishna, P.; Sangavi, S.

    2018-02-01

    Improvements in heating technology with solar energy are gaining focus, especially solar parabolic collectors. Solar heating in conventional parabolic collectors is achieved by concentrating radiation on receiver tubes. Conventional receiver tubes are open to the atmosphere and lose heat through ambient air currents. In order to reduce these convection losses and also to improve the aperture area, we designed a tube with a cavity. This study compares the performance behaviour of the conventional tube and the cavity-model tube. The performance formulae for the cavity model were derived based on the conventional model. A reduction in the overall heat loss coefficient was observed for the cavity model, though the collector heat removal factor and collector efficiency were nearly the same for both models. An improvement in efficiency was also observed in the cavity model's performance. The approach of designing a cavity-model tube as the receiver tube in solar parabolic collectors gave improved results and proved to be a sound design consideration.

  13. COMPARING 3D FOOT SHAPE MODELS BETWEEN TAIWANESE AND JAPANESE FEMALES.

    Science.gov (United States)

    Lee, Yu-Chi; Kouchi, Makiko; Mochimaru, Masaaki; Wang, Mao-Jiun

    2015-06-01

    This study compares foot shape and foot dimensions between Taiwanese and Japanese females. Three-dimensional foot scans of 100 Taiwanese and 100 Japanese females were used for comparison. To avoid the allometry effect, data from 23 Taiwanese and 19 Japanese subjects with foot lengths between 233 and 237 mm were used for the shape comparison. Homologous models created for the right feet of these 42 subjects were analyzed by Multidimensional Scaling. The results showed significant differences in forefoot shape between the two groups, with Taiwanese females having slightly wider feet and a straighter big toe than Japanese females. The comparison of body and foot dimensions indicated that Taiwanese females were taller, heavier and had larger feet than Japanese females, while Japanese females had a significantly larger toe-1 angle. Since some Taiwanese shoemakers adopt the Japanese shoe sizing system for making shoes, the appropriateness of that sizing system is also discussed. The present results provide very useful information for improving shoe last design and footwear fit for Taiwanese females.
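
    The shape comparison rests on Multidimensional Scaling of distances between homologous models. A minimal classical (Torgerson) MDS sketch; the six "feet" here are synthetic 2-D points, not the study's 42 scanned models:

    ```python
    import numpy as np

    # Synthetic configuration and its pairwise-distance matrix
    rng = np.random.default_rng(3)
    true_xy = rng.normal(size=(6, 2))
    d = np.linalg.norm(true_xy[:, None] - true_xy[None, :], axis=-1)

    # Classical MDS: double-center the squared distances, then eigendecompose
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # Gram matrix of the configuration
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    coords = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])

    # The 2-D embedding reproduces the original inter-point distances
    d_hat = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    print(np.allclose(d, d_hat, atol=1e-8))
    ```

    With real shape data the input distances come from corresponding vertices of the homologous models, and group differences appear as separation of the two samples in the embedding.
    
    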

  14. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    Directory of Open Access Journals (Sweden)

    Tu Hong-Anh

    2011-07-01

    Full Text Available Abstract Background This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods We identified various models used to estimate the cost-effectiveness of rotavirus vaccination. From these, results using a standardized dataset for four regions in the world could be obtained for three specific applications. Results Despite differences in the approaches and individual constituting elements including costs, QALYs (Quality Adjusted Life Years) and deaths, cost-effectiveness results of the models were quite similar. Differences between the models on the individual components of cost-effectiveness could be related to some specific features of the respective models. Sensitivity analysis revealed that cost-effectiveness of rotavirus vaccination is highly sensitive to vaccine prices, rotavirus-associated mortality and discount rates, in particular that for QALYs. Conclusions The comparative approach followed here is helpful in understanding the various models selected and will thus benefit (low-income) countries in designing their own cost-effectiveness analyses using new or adapted existing models. Potential users of the models in low and middle income countries need to consider results from existing studies and reviews. There will be a need for contextualization including the use of country specific data inputs. However, given that the underlying biological and epidemiological mechanisms do not change between countries, users are likely to be able to adapt existing model designs rather than developing completely new approaches. Also, the communication established between the individual researchers involved in the three models is helpful in the further development of these individual models. Therefore, we recommend that this kind of comparative study

  15. Comparative and Evolutionary Analysis of Grass Pollen Allergens Using Brachypodium distachyon as a Model System.

    Directory of Open Access Journals (Sweden)

    Akanksha Sharma

    Full Text Available Comparative genomics has facilitated the mining of biological information from a genome sequence, through the detection of similarities and differences with genomes of closely or more distantly related species. By using such comparative approaches, knowledge can be transferred from model to non-model organisms and insights can be gained into the structural and evolutionary patterns of specific genes. In the absence of sequenced genomes for allergenic grasses, this study was aimed at understanding the structure, organisation and expression profiles of grass pollen allergens using the genomic data from Brachypodium distachyon, as it is phylogenetically related to the allergenic grasses. Combining genomic data with the anther RNA-Seq dataset revealed 24 pollen allergen genes belonging to eight allergen groups mapping on the five chromosomes in B. distachyon. High levels of anther-specific expression profiles were observed for the 24 identified putative allergen-encoding genes in Brachypodium. The genomic evidence suggests that the gene encoding the group 5 allergen, the most potent trigger of hay fever and allergic asthma, originated as a pollen-specific orphan gene in a common grass ancestor of the Brachypodium and Triticiae clades. Gene structure analysis showed that the putative allergen-encoding genes in Brachypodium either lack introns or contain a reduced number of introns. Promoter analysis of the identified Brachypodium genes revealed the presence of specific cis-regulatory sequences likely responsible for high anther/pollen-specific expression. With the identification of putative allergen-encoding genes in Brachypodium, this study has also described some important plant gene families (e.g. the expansin superfamily, EF-Hand family, profilins, etc.) for the first time in the model plant Brachypodium. Altogether, the present study provides new insights into structural characterization and evolution of pollen allergens and will further serve as a base for their

  16. Comparative Reannotation of 21 Aspergillus Genomes

    Energy Technology Data Exchange (ETDEWEB)

    Salamov, Asaf; Riley, Robert; Kuo, Alan; Grigoriev, Igor

    2013-03-08

    We used comparative gene modeling to reannotate 21 Aspergillus genomes. Initial automatic annotation of individual genomes may contain errors of various kinds, e.g. missing genes, incorrect exon-intron structures, and 'chimeras', which fuse 2 or more real genes or split real genes into 2 or more models. The main premise behind the comparative modeling approach is that for closely related genomes most orthologous families have the same conserved gene structure. The algorithm maps all gene models predicted in each individual Aspergillus genome to the other genomes and, for each locus, selects from the potentially many competing models the one which most closely resembles the orthologous genes from other genomes. This procedure is iterated until no further change in gene models is observed. For the Aspergillus genomes we predicted in total 4503 new gene models (~2% per genome), supported by comparative analysis, additionally correcting ~18% of old gene models. This resulted in a total of 4065 more genes with annotated PFAM domains (~3% increase per genome). Analysis of a few genomes with EST/transcriptomics data shows that the new annotation sets also have a higher number of EST-supported splice sites at exon-intron boundaries.
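
    The iterate-until-stable selection described above can be sketched as follows. The similarity score and the exon-count "models" are stand-ins invented for illustration; the real algorithm compares mapped gene structures across genomes.

```python
# Hedged sketch: at each locus, keep the candidate gene model most similar
# to the currently chosen orthologues in the other genomes; repeat until
# no model changes.

def similarity(model, others):
    # Toy score: negative mean absolute difference in exon count.
    return -sum(abs(model - o) for o in others) / len(others)

def reannotate(loci):
    """loci: {locus: {genome: [candidate models]}} -> chosen model per genome."""
    # Start from the first candidate in each genome.
    chosen = {loc: {g: cands[0] for g, cands in genomes.items()}
              for loc, genomes in loci.items()}
    changed = True
    while changed:
        changed = False
        for loc, genomes in loci.items():
            for g, cands in genomes.items():
                others = [m for g2, m in chosen[loc].items() if g2 != g]
                best = max(cands, key=lambda m: similarity(m, others))
                if best != chosen[loc][g]:
                    chosen[loc][g] = best
                    changed = True
    return chosen

# One locus, three genomes; genome g1 has a 3-exon and a 5-exon candidate.
loci = {"locusA": {"g1": [3, 5], "g2": [5], "g3": [5, 4]}}
result = reannotate(loci)  # g1's 5-exon model wins, matching its orthologues
```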

  17. Comparing Political Communication

    OpenAIRE

    Pfetsch, Barbara; Esser, Frank

    2012-01-01

    This chapter describes the maturation of comparative political communications as a sub-discipline and defines its conceptual core. It then lays out the concept of “political communication system”. At the macro-level, this model captures the patterns of interaction between media and politics as social systems; at the micro-level it captures the interactions between media and political actors as individuals or organizations. Comparative research in this tradition focuses on the structure of pol...

  18. Multistage cancer models of bone cancer induction in beagles and mice by radium and plutonium, compared to humans

    Energy Technology Data Exchange (ETDEWEB)

    Bijwaard, H.; Brugmans, M. [RIVM-National Inst. for Public Health and the Environment, Lab. for Radiation Research, MA Bilthoven (Netherlands)

    2005-07-01

    Two-mutation carcinogenesis models of mice injected with Pu-239 and Ra-226 have been derived as an extension of previous modellings of beagle dogs injected with Pu-239 and Ra-226 and dial painters that ingested radium. In all cases the data could be fitted adequately using no more than five free model parameters. Apart from three parameters for the background, these include two dose-related parameters: a linear mutation coefficient that is equal in both mutational steps and a usually non-zero cell-killing coefficient in the second mutational step. After a simple scaling the animal models compare reasonably well with each other and with the model for the radium dial painters. From the toxicity ratio of beagle models for Pu-239 and Ra-226, together with the human model for Ra-226, an approximate model for the exposure of humans to Pu-239 has been constructed. Relative risk calculations with this approximate model are in good agreement with epidemiological findings for the plutonium-exposed Mayak workers. This promising result may indicate new possibilities for estimating risks for humans from animal experiments. (orig.)
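
    The dose dependence described above (a linear mutation coefficient acting equally in both mutational steps, plus cell killing in the second step) can be sketched as below. The functional form and all parameter values are illustrative assumptions, not the fitted model.

```python
import math

# Hedged sketch: linear-in-dose mutation rates with an exponential
# cell-killing attenuation in the second mutational step.

def mutation_rate(mu0, m, dose):
    """First-step (and, absent killing, second-step) mutation rate."""
    return mu0 + m * dose

def second_step_rate(mu0, m, k, dose):
    """Second step additionally reduced by cell killing at high dose."""
    return (mu0 + m * dose) * math.exp(-k * dose)

# At zero dose the two steps coincide; at high dose the cell-killing term
# suppresses the second step, bending the dose-response downwards.
low  = second_step_rate(1e-6, 1e-7, 0.01, 1.0)
high = second_step_rate(1e-6, 1e-7, 0.01, 500.0)
```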

  19. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    Science.gov (United States)

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
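
    The inverse-probability-of-censoring-weighted Brier score at a prediction horizon t can be sketched as follows: subjects censored before t get weight 0, events before t get weight 1/G(T_i), and subjects still at risk get weight 1/G(t), where G is the censoring survival function (taken as known here purely for illustration; in the paper it is estimated nonparametrically).

```python
# Hedged sketch of an IPCW Brier score at horizon t; toy data, known G.

def ipcw_brier(subjects, t, G):
    """subjects: list of (time, event_indicator, predicted_risk_at_t)."""
    total, n = 0.0, len(subjects)
    for time, event, risk in subjects:
        if time <= t and event:          # event observed before the horizon
            total += (1.0 - risk) ** 2 / G(time)
        elif time > t:                   # still event-free at the horizon
            total += (0.0 - risk) ** 2 / G(t)
        # subjects censored before t contribute 0
    return total / n

G = lambda u: 0.9 if u <= 5 else 0.8     # toy censoring survival function
data = [(3, 1, 0.7), (6, 0, 0.2), (2, 0, 0.5)]  # third subject censored early
bs = ipcw_brier(data, t=5, G=G)
```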

  20. Comparative study of Moore and Mealy machine models adaptation ...

    African Journals Online (AJOL)

    Information and Communications Technology has influenced the need for automated machines that can carry out important production procedures, and automata models are among the computational models used in the design and construction of industrial processes. The production process of the popular African Black Soap ...
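
    The two machine types compared above differ in where output is produced: a Moore machine emits output from states, a Mealy machine from transitions. The toy detectors below are illustrative only and are not tied to the soap-production process in the study.

```python
# Hedged sketch: minimal Moore and Mealy machines recognising the symbol "1".

def run_moore(transitions, outputs, start, inputs):
    """Moore: output depends only on the state reached."""
    state, out = start, [outputs[start]]
    for sym in inputs:
        state = transitions[(state, sym)]
        out.append(outputs[state])
    return out

def run_mealy(transitions, start, inputs):
    """Mealy: output is attached to each transition."""
    state, out = start, []
    for sym in inputs:
        state, o = transitions[(state, sym)]
        out.append(o)
    return out

moore_t = {("A", "0"): "A", ("A", "1"): "B", ("B", "0"): "A", ("B", "1"): "B"}
moore_o = {"A": 0, "B": 1}
mealy_t = {("A", "0"): ("A", 0), ("A", "1"): ("B", 1),
           ("B", "0"): ("A", 0), ("B", "1"): ("B", 1)}

moore_out = run_moore(moore_t, moore_o, "A", "011")  # [0, 0, 1, 1]
mealy_out = run_mealy(mealy_t, "A", "011")           # [0, 1, 1]
```

    Note the classic structural difference: for n input symbols the Moore machine produces n+1 outputs (one per state visited), the Mealy machine exactly n.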

  1. COMPARATIVE ANALYSIS BETWEEN THE TRADITIONAL MODEL OF CORPORATE GOVERNANCE AND ISLAMIC MODEL

    Directory of Open Access Journals (Sweden)

    DAN ROXANA LOREDANA

    2016-08-01

    Full Text Available Corporate governance represents a set of processes and policies by which a company is administered, controlled and directed to achieve the predetermined management objectives set by the shareholders. The most important benefits of corporate governance to organisations are related to business success, investor confidence and minimisation of wastage. For business, the improved controls and decision-making will aid corporate success as well as growth in revenues and profits. For investor confidence, corporate governance will mean that investors are more likely to trust that the company is being well run. This will not only make it easier and cheaper for the company to raise finance, but also has a positive effect on the share price. When we talk about the minimisation of wastage, we refer to strong corporate governance that should help to minimise waste within the organisation, as well as corruption, risks and mismanagement. Thus, in our research, we try to determine the common elements, and also the differences, that have occurred between two well-known models of corporate governance: the traditional Anglo-Saxon model and the Islamic model of corporate governance.

  2. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    Science.gov (United States)

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships Jie Liu1,2, Richard Judson1, Matthew T. Martin1, Huixiao Hong3, Imran Shah1 1National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  3. Comparing stream-specific to generalized temperature models to guide salmonid management in a changing climate

    Science.gov (United States)

    Andrew K. Carlson,; William W. Taylor,; Hartikainen, Kelsey M.; Dana M. Infante,; Beard, Douglas; Lynch, Abigail

    2017-01-01

    Global climate change is predicted to increase air and stream temperatures and alter thermal habitat suitability for growth and survival of coldwater fishes, including brook charr (Salvelinus fontinalis), brown trout (Salmo trutta), and rainbow trout (Oncorhynchus mykiss). In a changing climate, accurate stream temperature modeling is increasingly important for sustainable salmonid management throughout the world. However, finite resource availability (e.g. funding, personnel) drives a tradeoff between thermal model accuracy and efficiency (i.e. cost-effective applicability at management-relevant spatial extents). Using different projected climate change scenarios, we compared the accuracy and efficiency of stream-specific and generalized (i.e. region-specific) temperature models for coldwater salmonids within and outside the State of Michigan, USA, a region with long-term stream temperature data and productive coldwater fisheries. Projected stream temperature warming between 2016 and 2056 ranged from 0.1 to 3.8 °C in groundwater-dominated streams and 0.2–6.8 °C in surface-runoff dominated systems in the State of Michigan. Despite their generally lower accuracy in predicting exact stream temperatures, generalized models accurately projected salmonid thermal habitat suitability in 82% of groundwater-dominated streams, including those with brook charr (80% accuracy), brown trout (89% accuracy), and rainbow trout (75% accuracy). In contrast, generalized models predicted thermal habitat suitability in runoff-dominated streams with much lower accuracy (54%). These results suggest that, amidst climate change and constraints in resource availability, generalized models are appropriate to forecast thermal conditions in groundwater-dominated streams within and outside Michigan and inform regional-level salmonid management strategies that are practical for coldwater fisheries managers, policy makers, and the public. We recommend fisheries professionals reserve resource
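
    The simplest generalized stream-temperature model of the kind compared above is a regression of stream temperature on air temperature pooled across streams; groundwater-dominated streams show a damped slope (well below 1). The data points below are fabricated for illustration and are not from the Michigan dataset.

```python
# Hedged sketch: ordinary least-squares fit of stream temperature (deg C)
# against air temperature, the core of a "generalized" temperature model.

def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope        # intercept, slope

air    = [5.0, 10.0, 15.0, 20.0, 25.0]
stream = [6.0,  9.0, 12.0, 15.0, 18.0]  # groundwater damping: slope < 1
a, b = ols(air, stream)

def predict(air_t):
    """Predicted stream temperature for a given air temperature."""
    return a + b * air_t
```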

  4. Study protocol for a comparative effectiveness trial of two models of perinatal integrated psychosocial assessment: the PIPA project.

    Science.gov (United States)

    Reilly, Nicole; Black, Emma; Chambers, Georgina M; Schmied, Virginia; Matthey, Stephen; Farrell, Josephine; Kingston, Dawn; Bisits, Andrew; Austin, Marie-Paule

    2017-07-20

    Studies examining psychosocial and depression assessment programs in maternity settings have not adequately considered the context in which psychosocial assessment occurs or how broader components of integrated care, including clinician decision-making aids, may optimise program delivery and its cost-effectiveness. There is also limited evidence relating to the diagnostic accuracy of symptom-based screening measures used in this context. The Perinatal Integrated Psychosocial Assessment (PIPA) Project was developed to address these knowledge gaps. The primary aims of the PIPA Project are to examine the clinical- and cost-effectiveness of two alternative models of integrated psychosocial care during pregnancy: 'care as usual' (the SAFE START model) and an alternative model (the PIPA model). The acceptability and perceived benefit of each model of care from the perspective of both pregnant women and their healthcare providers will also be assessed. Our secondary aim is to examine the psychometric properties of a number of symptom-based screening tools for depression and anxiety when used in pregnancy. This is a comparative-effectiveness study comparing 'care as usual' to an alternative model sequentially over two 12-month periods. Data will be collected from women at Time 1 (initial antenatal psychosocial assessment), Time 2 (2-weeks after Time 1) and from clinicians at Time 3 for each condition. Primary aims will be evaluated using a between-groups design, and the secondary aim using a within group design. The PIPA Project will provide evidence relating to the clinical- and cost- effectiveness of psychosocial assessment integrated with electronic clinician decision making prompts, and referral options that are tailored to the woman's psychosocial risk, in the maternity care setting. It will also address research recommendations from the Australian (2011) and NICE (2015) Clinical Practice Guidelines. ACTRN12617000932369.

  5. Comparing tree foliage biomass models fitted to a multispecies, felled-tree biomass dataset for the United States

    Science.gov (United States)

    Brian J. Clough; Matthew B. Russell; Grant M. Domke; Christopher W. Woodall; Philip J. Radtke

    2016-01-01

    Estimation of live tree biomass is an important task for both forest carbon accounting and studies of nutrient dynamics in forest ecosystems. In this study, we took advantage of an extensive felled-tree database (with 2885 foliage biomass observations) to compare different models and grouping schemes based on phylogenetic and geographic variation for predicting foliage...
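
    A common functional form for foliage biomass models of this kind is the allometric ln-ln regression, ln(B) = a + b·ln(D), fit by least squares on log-transformed data. The felled-tree observations below are invented to make the fit exact; the study compares such forms across species and geographic groupings.

```python
import math

# Hedged sketch: fit ln(biomass) = a + b * ln(diameter) by least squares.

def fit_loglog(diams, biomass):
    xs = [math.log(d) for d in diams]
    ys = [math.log(v) for v in biomass]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope        # intercept a, slope b

def predict(a, b, d):
    """Back-transformed foliage biomass prediction (kg) at diameter d (cm)."""
    return math.exp(a + b * math.log(d))

dbh     = [10.0, 20.0, 40.0]
foliage = [4.0, 16.0, 64.0]   # exactly B = 0.04 * D^2, for illustration
a, b = fit_loglog(dbh, foliage)
```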

  6. A Comparative Study of Early Afterdepolarization-Mediated Fibrillation in Two Mathematical Models for Human Ventricular Cells.

    Directory of Open Access Journals (Sweden)

    Soling Zimik

    Full Text Available Early afterdepolarizations (EADs), which are abnormal oscillations of the membrane potential at the plateau phase of an action potential, are implicated in the development of cardiac arrhythmias like Torsade de Pointes. We carry out extensive numerical simulations of the TP06 and ORd mathematical models for human ventricular cells with EADs. We investigate the different regimes in both these models, namely, the parameter regimes where they exhibit (1) a normal action potential (AP) with no EADs, (2) an AP with EADs, and (3) an AP with EADs that does not go back to the resting potential. We also study the dependence of EADs on the rate at which we pace a cell, with the specific goal of elucidating EADs that are induced by slow or fast rate pacing. In our simulations in two- and three-dimensional domains, in the presence of EADs, we find the following wave types: (A) waves driven by the fast sodium current and the L-type calcium current (Na-Ca-mediated waves); (B) waves driven only by the L-type calcium current (Ca-mediated waves); (C) phase waves, which are pseudo-travelling waves. Furthermore, we compare the wave patterns of the various wave-types (Na-Ca-mediated, Ca-mediated, and phase waves) in both these models. We find that the two models produce qualitatively similar results in terms of exhibiting Na-Ca-mediated wave patterns that are more chaotic than those for the Ca-mediated and phase waves. However, there are quantitative differences in the wave patterns of each wave type. The Na-Ca-mediated waves in the ORd model show short-lived spirals but the TP06 model does not. The TP06 model supports more Ca-mediated spirals than those in the ORd model, and the TP06 model exhibits more phase-wave patterns than does the ORd model.

  7. A Comparative Study of Early Afterdepolarization-Mediated Fibrillation in Two Mathematical Models for Human Ventricular Cells

    Science.gov (United States)

    Zimik, Soling; Vandersickel, Nele; Nayak, Alok Ranjan; Panfilov, Alexander V.; Pandit, Rahul

    2015-01-01

    Early afterdepolarizations (EADs), which are abnormal oscillations of the membrane potential at the plateau phase of an action potential, are implicated in the development of cardiac arrhythmias like Torsade de Pointes. We carry out extensive numerical simulations of the TP06 and ORd mathematical models for human ventricular cells with EADs. We investigate the different regimes in both these models, namely, the parameter regimes where they exhibit (1) a normal action potential (AP) with no EADs, (2) an AP with EADs, and (3) an AP with EADs that does not go back to the resting potential. We also study the dependence of EADs on the rate at which we pace a cell, with the specific goal of elucidating EADs that are induced by slow or fast rate pacing. In our simulations in two- and three-dimensional domains, in the presence of EADs, we find the following wave types: (A) waves driven by the fast sodium current and the L-type calcium current (Na-Ca-mediated waves); (B) waves driven only by the L-type calcium current (Ca-mediated waves); (C) phase waves, which are pseudo-travelling waves. Furthermore, we compare the wave patterns of the various wave-types (Na-Ca-mediated, Ca-mediated, and phase waves) in both these models. We find that the two models produce qualitatively similar results in terms of exhibiting Na-Ca-mediated wave patterns that are more chaotic than those for the Ca-mediated and phase waves. However, there are quantitative differences in the wave patterns of each wave type. The Na-Ca-mediated waves in the ORd model show short-lived spirals but the TP06 model does not. The TP06 model supports more Ca-mediated spirals than those in the ORd model, and the TP06 model exhibits more phase-wave patterns than does the ORd model. PMID:26125185

  8. Food hygiene practices and its associated factors among model and non model households in Abobo district, southwestern Ethiopia: Comparative cross-sectional study.

    Science.gov (United States)

    Okugn, Akoma; Woldeyohannes, Demelash

    2018-01-01

    In developing countries, most human infectious diseases are caused by eating contaminated food. An estimated nine out of ten diarrheal diseases are attributable to the environment and associated with poor food hygiene practice. Understanding the risk of eating unsafe food is a major concern in preventing and controlling food-borne diseases. The main goal of this study was to assess food hygiene practices and associated factors among model and non-model households in Abobo district. The study was conducted from 18 October 2013 to 13 June 2014, using a community-based comparative cross-sectional design. A pretested structured questionnaire was used to collect data. A total of 1247 households (417 model and 830 non-model households) from Abobo district were included in the study. Bivariate and multivariate logistic regression analysis was used to identify factors associated with the outcome variable. The study revealed that overall good food hygiene practice was 51%: 79% among model and 36.7% among non-model households. Type of household [AOR: 2.07, 95% CI: (1.32-3.39)], sex of household head [AOR: 1.63, 95% CI: (1.06-2.48)], availability of a liquid waste disposal pit [AOR: 2.23, 95% CI: (1.39-3.63)], knowledge that liquid waste can cause diseases [AOR: 1.95, 95% CI: (1.23-3.08)], and availability of a functional hand washing facility [AOR: 3.61, 95% CI: (1.86-7.02)] were the factors associated with food handling practices. This study revealed that good food handling practice is low among both model and non-model households, with type of household (model versus non-model), sex, knowledge of solid waste causing diseases, availability of a functional hand washing facility, and availability of a liquid waste disposal pit associated with the outcome variable. Health extension workers should play a greater role in educating households regarding food hygiene practices to improve their food hygiene knowledge and practices.
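
    The adjusted odds ratios (AORs) reported above come from multivariate logistic regression; the building block is the crude odds ratio from a 2x2 table, sketched below with a Wald-type 95% CI. The counts are illustrative approximations of the reported proportions, not the study's actual cross-tabulation.

```python
import math

# Hedged sketch: crude odds ratio and 95% CI from a 2x2 table.

def odds_ratio_ci(a, b, c, d):
    """a,b = exposed with/without outcome; c,d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Good food hygiene practice, model vs non-model households
# (counts approximated from 79% of 417 and 36.7% of 830).
or_, lo, hi = odds_ratio_ci(330, 87, 305, 525)
```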

  9. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC) that incorporates the latent variables in the focus of the analysis and the marginalized DIC (mDIC) that integrates them out. Regardless of the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
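
    The DIC computed from MCMC output has the form DIC = mean deviance + p_D, with the effective number of parameters p_D = mean deviance minus the deviance at the posterior mean. A toy normal likelihood stands in below for the marginalized likelihood that mDIC requires; draws and data are invented.

```python
import math

# Hedged sketch: DIC from posterior draws, toy iid Normal(theta, 1) model.

def deviance(theta, data, sigma=1.0):
    """-2 * log-likelihood (normalising constants kept)."""
    ll = sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
             - (x - theta) ** 2 / (2 * sigma ** 2) for x in data)
    return -2.0 * ll

def dic(samples, data):
    dbar = sum(deviance(t, data) for t in samples) / len(samples)
    theta_bar = sum(samples) / len(samples)
    p_d = dbar - deviance(theta_bar, data)   # effective number of parameters
    return dbar + p_d, p_d

samples = [0.9, 1.0, 1.1, 1.0]    # toy posterior draws of the mean
data = [0.8, 1.2, 1.0]
dic_value, p_d = dic(samples, data)
```

    Because the deviance is convex in theta, p_d is non-negative here (Jensen's inequality), which is the usual sanity check on DIC output.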

  10. Comparative economic evaluation of data from the ACRIN National CT Colonography Trial with three cancer intervention and surveillance modeling network microsimulations.

    Science.gov (United States)

    Vanness, David J; Knudsen, Amy B; Lansdorp-Vogelaar, Iris; Rutter, Carolyn M; Gareen, Ilana F; Herman, Benjamin A; Kuntz, Karen M; Zauber, Ann G; van Ballegooijen, Marjolein; Feuer, Eric J; Chen, Mei-Hsiu; Johnson, C Daniel

    2011-11-01

    To estimate the cost-effectiveness of computed tomographic (CT) colonography for colorectal cancer (CRC) screening in average-risk asymptomatic subjects in the United States aged 50 years. Enrollees in the American College of Radiology Imaging Network National CT Colonography Trial provided informed consent, and approval was obtained from the institutional review board at each site. CT colonography performance estimates from the trial were incorporated into three Cancer Intervention and Surveillance Modeling Network CRC microsimulations. Simulated survival and lifetime costs for screening 50-year-old subjects in the United States with CT colonography every 5 or 10 years were compared with those for guideline-concordant screening with colonoscopy, flexible sigmoidoscopy plus either sensitive unrehydrated fecal occult blood testing (FOBT) or fecal immunochemical testing (FIT), and no screening. Perfect and reduced screening adherence scenarios were considered. Incremental cost-effectiveness and net health benefits were estimated from the U.S. health care sector perspective, assuming a 3% discount rate. CT colonography at 5- and 10-year screening intervals was more costly and less effective than FOBT plus flexible sigmoidoscopy in all three models in both 100% and 50% adherence scenarios. Colonoscopy also was more costly and less effective than FOBT plus flexible sigmoidoscopy, except in the CRC-SPIN model assuming 100% adherence (incremental cost-effectiveness ratio: $26,300 per life-year gained). CT colonography at 5- and 10-year screening intervals and colonoscopy were net beneficial compared with no screening in all model scenarios. The 5-year screening interval was net beneficial over the 10-year interval except in the MISCAN model when assuming 100% adherence and willingness to pay $50,000 per life-year gained. All three models predict CT colonography to be more costly and less effective than non-CT colonographic screening but net beneficial compared with no

  11. A New Framework to Compare Mass-Flux Schemes Within the AROME Numerical Weather Prediction Model

    Science.gov (United States)

    Riette, Sébastien; Lac, Christine

    2016-08-01

    In the Application of Research to Operations at Mesoscale (AROME) numerical weather forecast model used in operations at Météo-France, five mass-flux schemes are available to parametrize shallow convection at kilometre resolution. All but one are based on the eddy-diffusivity-mass-flux approach, and differ in entrainment/detrainment, the updraft vertical velocity equation and the closure assumption. The fifth is based on a more classical mass-flux approach. Screen-level scores obtained with these schemes show few discrepancies and are not sufficient to highlight behaviour differences. Here, we describe and use a new experimental framework, able to compare and discriminate among different schemes. For a year, daily forecast experiments were conducted over small domains centred on the five French metropolitan radio-sounding locations. Cloud base, planetary boundary-layer height and normalized vertical profiles of specific humidity, potential temperature, wind speed and cloud condensate were compared with observations, and with each other. The framework allowed the behaviour of the different schemes in and above the boundary layer to be characterized. In particular, the impact of the entrainment/detrainment formulation, closure assumption and cloud scheme were clearly visible. Differences mainly concerned the transport intensity thus allowing schemes to be separated into two groups, with stronger or weaker updrafts. In the AROME model (with all interactions and the possible existence of compensating errors), evaluation diagnostics gave the advantage to the first group.
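
    A kernel shared by the eddy-diffusivity-mass-flux schemes compared above is the vertical evolution of the updraft mass flux M under entrainment and detrainment, dM/dz = (eps - delta) * M. The Euler integration and constant rates below are an illustrative simplification; operational schemes use height-dependent rates.

```python
# Hedged sketch: upward integration of dM/dz = (eps - delta) * M.

def integrate_mass_flux(M0, eps, delta, dz, n_levels):
    """Euler integration of the updraft mass flux over n_levels of depth dz."""
    fluxes = [M0]
    for _ in range(n_levels):
        M = fluxes[-1]
        fluxes.append(M + dz * (eps - delta) * M)
    return fluxes

# Entrainment exceeding detrainment grows the flux with height; the
# reverse makes the updraft decay, i.e. weaker transport intensity.
growing  = integrate_mass_flux(M0=0.1, eps=2e-3, delta=1e-3, dz=100.0, n_levels=5)
decaying = integrate_mass_flux(M0=0.1, eps=1e-3, delta=3e-3, dz=100.0, n_levels=5)
```

    This is the kind of knob (entrainment/detrainment formulation) that separated the AROME schemes into stronger- and weaker-updraft groups in the study.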

  12. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    OpenAIRE

    Kirti AREKAR; Rinku JAIN

    2017-01-01

    Stock market volatility depends on three major features: complete volatility, volatility fluctuations, and volatility attention, which are calculated using statistical techniques. This is a comparative analysis of market volatility for two major indices, the banking and IT sectors on the Bombay Stock Exchange (BSE), using an average decline model. The average decline process in volatility has been measured after very high and low stock returns. The results of this study show a significant decline in...
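
    The idea behind an average-decline analysis can be sketched as measuring how rolling volatility decays in the periods after an extreme return. The return series and window below are toy values; the study applies this kind of measure to BSE banking and IT index series.

```python
import statistics

# Hedged sketch: rolling volatility and its average drop after a shock.

def rolling_vol(returns, window):
    """Population std. dev. over a trailing window of returns."""
    return [statistics.pstdev(returns[i - window:i])
            for i in range(window, len(returns) + 1)]

def decline_after_shock(vols, shock_idx, horizon):
    """Average drop in volatility over `horizon` steps after a shock."""
    base = vols[shock_idx]
    after = vols[shock_idx + 1: shock_idx + 1 + horizon]
    return base - sum(after) / len(after)

# Toy daily returns with one extreme (+8%) observation early on.
returns = [0.01, -0.01, 0.08, 0.02, 0.01, 0.0, 0.01, -0.01]
vols = rolling_vol(returns, window=3)
drop = decline_after_shock(vols, shock_idx=0, horizon=3)  # positive: decay
```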

  13. The Job Demands-Resources model as predictor of work identity and work engagement: A comparative analysis

    OpenAIRE

    Roslyn De Braine; Gert Roodt

    2011-01-01

    Orientation: Research shows that engaged employees experience high levels of energy and strong identification with their work, hence this study’s focus on work identity and dedication. Research purpose: This study explored possible differences in the Job Demands-Resources model (JD-R) as predictor of overall work engagement, dedication only and work-based identity, through comparative predictive analyses. Motivation for the study: This study may shed light on the dedication component o...

  14. A comparative analysis of reactor lower head debris cooling models employed in the existing severe accident analysis codes

    International Nuclear Information System (INIS)

    Ahn, K.I.; Kim, D.H.; Kim, S.B.; Kim, H.D.

    1998-08-01

    MELCOR and MAAP4 are representative severe accident analysis codes developed for integral analysis of phenomenological reactor lower head corium cooling behavior. The main objective of the present study is to identify the merits and disadvantages of each relevant model through a comparative analysis of the lower plenum corium cooling models employed in these two codes. The final results will be utilized for the development of LILAC phenomenological models and for the continuous improvement of the existing MELCOR reactor lower head models, work currently being performed at KAERI. For these purposes, nine reference models featuring lower head corium behavior were first selected based on existing experimental evidence and related models. The main features of the selected models were then critically analyzed, and finally the merits and disadvantages of each model were summarized from the viewpoint of realistic corium behavior and reasonable modeling. On this basis, potential improvements for developing more advanced models are summarized and presented. The present study focused on a qualitative comparison of each model, so more detailed quantitative analysis is strongly required to reach final conclusions on their merits and disadvantages. In addition, to compensate for the limitations of the current models, further studies are required that closely couple detailed mechanistic models of molten material movement and phase-change heat transfer in porous media with the existing simple models. (author). 36 refs

  15. The Prediction of Consumer Buying Intentions: A Comparative Study of the Predictive Efficacy of Two Attitudinal Models. Faculty Working Paper No. 234.

    Science.gov (United States)

    Bhagat, Rabi S.; And Others

    The role of attitudes in the conduct of buyer behavior is examined in the context of two competitive models of attitude structure and attitude-behavior relationship. Specifically, the objectives of the study were to compare the Fishbein and Sheth models on the criteria of predictive as well as cross validities. Data on both the models were…

  16. Comparative Study of Lectin Domains in Model Species: New Insights into Evolutionary Dynamics

    Directory of Open Access Journals (Sweden)

    Sofie Van Holle

    2017-05-01

    Full Text Available Lectins are present throughout the plant kingdom and are reported to be involved in diverse biological processes. In this study, we provide a comparative analysis of the lectin families from model species in a phylogenetic framework. The analysis focuses on the different plant lectin domains identified in five representative core angiosperm genomes (Arabidopsis thaliana, Glycine max, Cucumis sativus, Oryza sativa ssp. japonica and Oryza sativa ssp. indica). The genomes were screened for genes encoding lectin domains using a combination of Basic Local Alignment Search Tool (BLAST), hidden Markov models, and InterProScan analysis. Additionally, phylogenetic relationships were investigated by constructing maximum likelihood phylogenetic trees. The results demonstrate that the majority of the lectin families are present in each of the species under study. Domain organization analysis showed that most identified proteins are multi-domain proteins, owing to the modular rearrangement of protein domains during evolution. Most of these multi-domain proteins are widespread, while others display a lineage-specific distribution. Furthermore, the phylogenetic analyses reveal that some lectin families evolved to be similar to the phylogeny of the plant species, while others share a closer evolutionary history based on the corresponding protein domain architecture. Our results yield insights into the evolutionary relationships and functional divergence of plant lectins.

  17. Cowhage-induced itch as an experimental model for pruritus. A comparative study with histamine-induced itch.

    Directory of Open Access Journals (Sweden)

    Alexandru D P Papoiu

    2011-03-01

    Full Text Available Histamine is the prototypical pruritogen used in experimental itch induction. However, in most chronic pruritic diseases, itch is not predominantly mediated by histamine. Cowhage-induced itch, on the other hand, seems more characteristic of the itch occurring in chronic pruritic diseases. We tested the validity of cowhage as an itch-inducing agent by contrasting it with the classical itch inducer, histamine, in healthy subjects and atopic dermatitis (AD) patients. We also investigated whether there was a cumulative effect when both agents were combined. Fifteen healthy individuals and fifteen AD patients were recruited. Experimental itch induction was performed in eczema-free areas on the volar aspects of the forearm, using different itch inducers: histamine, cowhage, and a combination thereof. Itch intensity was assessed continuously for 5.5 minutes after stimulus application using a computer-assisted visual analogue scale (COVAS). In both healthy and AD subjects, the mean and peak intensity of itch were higher after the application of cowhage compared to histamine, and were higher after the combined application of cowhage and histamine compared to histamine alone (p<0.0001 in all cases). Itch intensity ratings were not significantly different between healthy and AD subjects for the same itch inducer used; however, AD subjects exhibited a prolonged itch response in comparison to healthy subjects (p<0.001). Cowhage induced a more intense itch sensation compared to histamine. Cowhage was the dominant factor in itch perception when both pathways were stimulated at the same time. Cowhage-induced itch is a suitable model for the study of itch in AD and other chronic pruritic diseases, and it can serve as a new model for testing antipruritic drugs in humans.

  18. Application of biosphere models in the Biomosa project: a comparative assessment of five European radioactive waste disposal sites

    International Nuclear Information System (INIS)

    Kowe, R.; Mobbs, S.; Proehl, G.; Bergstrom, U.; Kanyar, B.; Olyslaegers, G.; Zeevaert, T.; Simon, I.

    2004-01-01

    The BIOMOSA (Biosphere Models for Safety Assessment of Radioactive Waste Disposal) project is a part of the EC fifth framework research programme. The main goal of this project is the improvement of the scientific basis for the application of biosphere models in the framework of long-term safety studies of radioactive waste disposal facilities. Furthermore, the outcome of the project will provide operators and regulatory bodies with guidelines for performance assessments of repository systems. The study focuses on the development and application of site-specific models and a generic biosphere tool BIOGEM (Biosphere Generic Model), using the experience from the national programmes and the IAEA BIOMASS reference biosphere methodology. The models were applied to 5 typical locations in the EU, resulting in estimates of the annual individual doses to the critical groups and the ranking of the importance of the pathways for each of the sites. The results of the site-specific and generic models were then compared. In all cases the doses calculated by the generic model were less than the doses obtained from the site-specific models. Uncertainty in the results was estimated by means of stochastic calculations which allow a comparison of the overall model uncertainty with the variability across the different sites considered. (author)

  19. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    Science.gov (United States)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents selected key issues from the preliminary stage of a proposed extended equivalence assessment of measurement results from new portable devices: the comparability of hourly PM10 concentration series with reference station measurements using statistical methods. The article describes technical aspects of the new portable meters. Emphasis is placed on a methodological concept for assessing the comparability of results using stochastic and exploratory methods. The concept is based on the observation that a simple comparison of result series in the time domain is insufficient; regularity should instead be compared in three complementary fields of statistical modeling: time, frequency and space. The proposal is based on modeling results for five annual series of measurements from the new mobile devices and from the WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence of the new devices' measurement results with the reference.

  20. Influence of a modified preservation solution in kidney transplantation: A comparative experimental study in a porcine model

    Directory of Open Access Journals (Sweden)

    Mohammad Golriz

    2017-03-01

    Conclusion: Although the new preservation HTK solution is in several respects a well-thought-out modification of the standard HTK solution, its preservation efficacy, at least for kidney preservation in a pig model for 30 hours, seems to be comparable to that of the currently used solutions. A real advantage, however, could only be confirmed in clinical settings, where marginal organs may influence the clinical outcome.

  1. Differentiated risk models in portfolio optimization: a comparative analysis of the degree of diversification and performance in the São Paulo Stock Exchange (BOVESPA)

    Directory of Open Access Journals (Sweden)

    Ivan Ricardo Gartner

    2012-08-01

    Full Text Available Faced with so many risk modeling alternatives in portfolio optimization, several questions arise regarding their legitimacy, utility and applicability. In particular, a question arises involving the adherence of the alternative models to the basic presupposition of Markowitz's classical model, namely the concept of diversification as a means of controlling the relationship between risk and return within an optimization process. In this context, the aim of this article is to explore the risk-differentiated configurations that entropy can provide, from the point of view of their repercussions on the degree of diversification and on portfolio performance. Reaching this objective requires a comparative analysis between models that include entropy in their formulation and the classic Markowitz model. In order to contribute to this debate, this article proposes adaptations to the models of relative minimum entropy and of maximum entropy, so that these can be applied to investment portfolio optimization. The comparative analysis was based on performance indicators and on a ratio of the degree of portfolio diversification. The portfolios were formed by considering a sample of fourteen assets that compose the IBOVESPA, which were projected during the period from January 2007 to December 2009, taking into account covariance matrices formed from January 1999 onwards. When comparing the Markowitz model with two models constructed to represent new risk configurations based on entropy optimization, the present study concluded that the first model was far superior to the others. Not only did the Markowitz model present better accumulated nominal yields, it also presented far greater predictive efficiency and better effective performance, when considering the trade-off between risk and return. However, with regard to diversification, the Markowitz model concentrated…
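
The entropy-based diversification measure discussed in this record can be illustrated with a minimal sketch: a closed-form two-asset minimum-variance portfolio together with the Shannon entropy of its weights, a common entropy-style diversification metric. The variances and covariance below are hypothetical, and the code is an illustration of the general technique, not the study's own models.

```python
from math import log

def min_variance_weights(var1, var2, cov12):
    """Closed-form minimum-variance weights for a two-asset portfolio."""
    w1 = (var2 - cov12) / (var1 + var2 - 2 * cov12)
    return w1, 1.0 - w1

def weight_entropy(weights):
    """Shannon entropy of portfolio weights; higher means more diversified."""
    return -sum(w * log(w) for w in weights if w > 0)

# Hypothetical annualized variances and covariance for two assets
w = min_variance_weights(0.04, 0.09, 0.01)
print(w)                  # weights summing to 1
print(weight_entropy(w))  # compare against log(2) ~ 0.693, the two-asset maximum
```

An equal-weight portfolio attains the maximum entropy log(2); the gap between that bound and the optimized weights is one way to quantify the concentration effect the abstract describes.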

  2. The Job Demands-Resources model as predictor of work identity and work engagement: A comparative analysis

    Directory of Open Access Journals (Sweden)

    Roslyn De Braine

    2011-05-01

    Research purpose: This study explored possible differences in the Job Demands-Resources (JD-R) model as a predictor of overall work engagement, dedication only and work-based identity, through comparative predictive analyses. Motivation for the study: This study may shed light on the dedication component of work engagement. Currently no literature indicates that the JD-R model has been used to predict work-based identity. Research design: A census-based survey was conducted amongst a target population of 23134 employees that yielded a sample of 2429 (a response rate of about 10.5%). The Job Demands-Resources scale (JDRS) was used to measure job demands and job resources. A work-based identity scale was developed for this study. Work engagement was studied with the Utrecht Work Engagement Scale (UWES). Factor and reliability analyses were conducted on the scales and general multiple regression models were used in the predictive analyses. Main findings: The JD-R model yielded a greater amount of variance in dedication than in work engagement. It, however, yielded the greatest amount of variance in work-based identity, with job resources being its strongest predictor. Practical/managerial implications: Identification and work engagement levels can be improved by managing job resources and demands. Contribution/value-add: This study builds on the literature of the JD-R model by showing that it can be used to predict work-based identity.

  3. A Comparative Study of Three Methodologies for Modeling Dynamic Stall

    Science.gov (United States)

    Sankar, L.; Rhee, M.; Tung, C.; ZibiBailly, J.; LeBalleur, J. C.; Blaise, D.; Rouzaud, O.

    2002-01-01

    During the past two decades, there has been an increased reliance on the use of computational fluid dynamics methods for modeling rotors in high speed forward flight. Computational methods are being developed for modeling the shock induced loads on the advancing side, first-principles based modeling of the trailing wake evolution, and for retreating blade stall. The retreating blade dynamic stall problem has received particular attention, because the large variations in lift and pitching moments encountered in dynamic stall can lead to blade vibrations and pitch link fatigue. Restricting attention to aerodynamics, the numerical prediction of dynamic stall is still a complex and challenging CFD problem that, even in two dimensions at low speed, gathers the major difficulties of aerodynamics (such as the grid resolution requirements for viscous phenomena at leading-edge bubbles or in mixing layers, and the bias of numerical viscosity) and the major difficulties of physical modeling (such as the turbulence and transition models, whose determinant influences, already present in static maximum-lift or stall computations, are amplified by the dynamic nature of the phenomena).

  4. Autologous Stem Cell Transplantation in Patients With Multiple Myeloma: An Activity-based Costing Analysis, Comparing a Total Inpatient Model Versus an Early Discharge Model.

    Science.gov (United States)

    Martino, Massimo; Console, Giuseppe; Russo, Letteria; Meliado', Antonella; Meliambro, Nicola; Moscato, Tiziana; Irrera, Giuseppe; Messina, Giuseppe; Pontari, Antonella; Morabito, Fortunato

    2017-08-01

    Activity-based costing (ABC) was developed and advocated as a means of overcoming the systematic distortions of traditional cost accounting. We calculated the cost of high-dose chemotherapy and autologous stem cell transplantation (ASCT) in patients with multiple myeloma using the ABC method, through 2 different care models: the total inpatient model (TIM) and the early-discharge outpatient model (EDOM), and compared this with the approved diagnosis-related group (DRG) Italian tariffs. The TIM and EDOM models involved a total cost of €28,615.15 and €16,499.43, respectively. In the TIM model, the phase with the greatest economic impact was the posttransplant phase (recovery and hematologic engraftment), with 36.4% of the total cost, whereas in the EDOM model, the phase with the greatest economic impact was the pretransplant phase (chemo-mobilization, apheresis procedure, cryopreservation, and storage), with 60.4% of total expenses. In an analysis of each episode, the TIM model showed higher cost absorption than the EDOM: the posttransplant phase represented 36.4% of the total costs in the TIM and 17.7% in the EDOM model, respectively. The estimated reduction in cost per patient using an EDOM model was over €12,115.72. The repayment of the DRG in the Calabria Region for the ASCT procedure is €59,806. Given the real cost of the transplant, the estimated cost saving per patient is €31,190.85 in the TIM model and €43,306.57 in the EDOM model. In conclusion, the actual repayment of the DRG does not correspond to the real cost of the ASCT procedure in Italy. Moreover, using the EDOM, the cost of ASCT is approximately half that of the TIM model. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Ecosystem structure and fishing impacts in the northwestern Mediterranean Sea using a food web model within a comparative approach

    Science.gov (United States)

    Corrales, Xavier; Coll, Marta; Tecchio, Samuele; Bellido, José María; Fernández, Ángel Mario; Palomera, Isabel

    2015-08-01

    We developed an ecological model to characterize the structure and functioning of the marine continental shelf and slope area of the northwestern Mediterranean Sea, from Toulon to Cape La Nao (NWM model), in the early 2000s. The model included previously modeled areas in the NW Mediterranean (the Gulf of Lions and the Southern Catalan Sea) and expanded their ranges, covering 45,547 km2, with depths from 0 to 1000 m. The study area was chosen to specifically account for the connectivity between the areas and shared fish stocks and fleets. Input data were based on local scientific surveys and fishing statistics, published data on stomach content analyses, and the application of empirical equations to estimate consumption and production rates. The model was composed of 54 functional groups, from primary producers to top predators, and Spanish and French fishing fleets were considered. Results were analyzed using ecological indicators and compared with outputs from ecosystem models developed in the Mediterranean Sea and the Gulf of Cadiz prior to this study. Results showed that the main trophic flows were associated with detritus, phytoplankton, zooplankton and benthic invertebrates. Several high trophic level organisms (such as dolphins, benthopelagic cephalopods, large demersal fishes from the continental shelf, and other large pelagic fishes), and the herbivorous salema fish, were identified as keystone groups within the ecosystem. Results confirmed that fishing impact was high and widespread throughout the food web. The comparative approach highlighted that, despite productivity differences, the ecosystems shared common features in structure and functioning traits such as the important role of detritus, the dominance of the pelagic fraction in terms of flows and the importance of benthic-pelagic coupling.

  6. Comparing the radiosensitivity of cervical and thoracic spinal cord using the relative seriality model

    International Nuclear Information System (INIS)

    Adamus-Gorka, M.; Lind, B.K.; Brahme, A.

    2003-01-01

    Spinal cord is one of the most important normal tissues to be spared during radiation therapy of cancer. This organ is known for its strongly serial character and its high sensitivity to radiation. To compare the sensitivity of different parts of the spinal cord, early data (1970s) for radiation myelopathy available in the literature can be used. In the present study the relative seriality model (Kallman et al. 1992) was fitted to two different sets of clinical data for spinal cord irradiation: radiation myelitis of the cervical spinal cord after treating 248 patients for malignant disease of the head and neck (Abbatucci et al. 1978) and radiation myelitis of the thoracic spinal cord after treating 43 patients with lung carcinoma (Reinhold et al. 1976). The maximum likelihood method was applied for the fitting, and the corresponding parameters together with their 68% confidence intervals were calculated for each of the datasets. The alpha-beta ratio for thoracic survival was also obtained. On the basis of the present study the following conclusions can be drawn: 1. radiation myelopathy is a strongly serial endpoint; 2. there appear to be differences in radiosensitivity between the cervical and thoracic regions of the spinal cord; 3. the thoracic spinal cord revealed a strongly serial dose-response characteristic, while cervical myelopathy seems to be a somewhat less serial endpoint; 4. the dose-response curve is much steeper in the case of myelopathy of the cervical spinal cord, due to the much higher gamma value for this region. This work compares the fitting of an NTCP model to the cervical and thoracic regions of the spinal cord and shows quite different responses. In the future more data should be tested to better understand the mechanism of spinal cord sensitivity to radiation.
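
The relative seriality model referenced in this record combines a Poisson dose-response for uniform irradiation with a seriality parameter s. A minimal sketch of evaluating it for a discretized dose distribution is shown below; the parameter values are hypothetical, not the fitted values from this study.

```python
from math import e, exp

def poisson_response(D, D50, gamma):
    """Poisson dose-response for uniform whole-organ irradiation."""
    return 2.0 ** (-exp(e * gamma * (1.0 - D / D50)))

def relative_seriality_ntcp(dose_bins, D50, gamma, s):
    """NTCP for a dose distribution given as (dose, fractional volume) pairs.
    s -> 1 models a fully serial organ; s -> 0 a fully parallel one."""
    prod = 1.0
    for D, v in dose_bins:
        prod *= (1.0 - poisson_response(D, D50, gamma) ** s) ** v
    return (1.0 - prod) ** (1.0 / s)

# Hypothetical parameters and dose distribution, for illustration only:
# half the organ volume at 50 Gy, half at 20 Gy, fully serial (s = 1)
bins = [(50.0, 0.5), (20.0, 0.5)]
print(relative_seriality_ntcp(bins, D50=55.0, gamma=1.8, s=1.0))
```

A useful sanity check on the formula: uniform irradiation of the whole volume at D50 yields NTCP = 0.5 regardless of gamma or s.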

  7. From neurons to nests: nest-building behaviour as a model in behavioural and comparative neuroscience.

    Science.gov (United States)

    Hall, Zachary J; Meddle, Simone L; Healy, Susan D

    Despite centuries of observing the nest building of most extant bird species, we know surprisingly little about how birds build nests and, specifically, how the avian brain controls nest building. Here, we argue that nest building in birds may be a useful model behaviour in which to study how the brain controls behaviour. Specifically, we argue that nest building as a behavioural model provides a unique opportunity to study not only the mechanisms through which the brain controls behaviour within individuals of a single species but also how evolution may have shaped the brain to produce interspecific variation in nest-building behaviour. In this review, we outline the questions in both behavioural and comparative neuroscience that nest building could be used to address, summarize recent findings regarding the neurobiology of nest building in lab-reared zebra finches and across species building different nest structures, and suggest some future directions for the neurobiology of nest building.

  8. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based…

  9. Investigation of Intravenous Hydroxocobalamin Compared to Hextend for Resuscitation in a Swine Model of Uncontrolled Hemorrhagic Shock: A Preliminary Report

    Science.gov (United States)

    2017-08-27

    in blood loss from the injury (1005 vs 1100 ml). There was a significant difference by time between groups (p < 0.05) post treatment. No significant...effective as IV Hextend® in improving systolic blood pressure (SBP) in a controlled hemorrhagic shock model. We aimed to compare IV hydroxocobalamin (HOC...volume, portable drug that improves blood pressure and survival. Objective To compare systolic blood pressure over time in swine that have

  10. A comparative study of two food model systems to test the survival of Campylobacter jejuni at -18 degrees C

    DEFF Research Database (Denmark)

    Birk, Tina; Rosenquist, Hanne; Brondsted, L.

    2006-01-01

    The survival of Campylobacter jejuni NCTC 11168 was tested at freezing conditions (-18 degrees C) over a period of 32 days in two food models that simulated either (i) the chicken skin surface (skin model) or (ii) the chicken juice in and around a broiler carcass (liquid model). In the skin model...... NCTC 11168 cells was slower when suspended in chicken juice than in BHIB. After freezing for 32 days, the reductions in the cell counts were 1.5 log CFU/ml in chicken juice and 3.5 log CFU/ml in BHIB. After the same time of freezing but when inoculated onto chicken skin, C. jejuni NCTC 11168...... was reduced by 2.2 log units when inoculated in chicken juice and 3.2 log units when inoculated into BHIB. For both models, the major decrease occurred within the first 24 h of freezing. The results obtained in the liquid model with chicken juice were comparable to the reductions of Campylobacter observed...

  11. A comparative evaluation of risk-adjustment models for benchmarking amputation-free survival after lower extremity bypass.

    Science.gov (United States)

    Simons, Jessica P; Goodney, Philip P; Flahive, Julie; Hoel, Andrew W; Hallett, John W; Kraiss, Larry W; Schanzer, Andres

    2016-04-01

    Providing patients and payers with publicly reported risk-adjusted quality metrics for the purpose of benchmarking physicians and institutions has become a national priority. Several prediction models have been developed to estimate outcomes after lower extremity revascularization for critical limb ischemia, but the optimal model to use in contemporary practice has not been defined. We sought to identify the highest-performing risk-adjustment model for amputation-free survival (AFS) at 1 year after lower extremity bypass (LEB). We used the national Society for Vascular Surgery Vascular Quality Initiative (VQI) database (2003-2012) to assess the performance of three previously validated risk-adjustment models for AFS. The Bypass versus Angioplasty in Severe Ischaemia of the Leg (BASIL), Finland National Vascular (FINNVASC) registry, and the modified Project of Ex-vivo vein graft Engineering via Transfection III (PREVENT III [mPIII]) risk scores were applied to the VQI cohort. A novel model for 1-year AFS was also derived using the VQI data set and externally validated using the PIII data set. The relative discrimination (Harrell c-index) and calibration (Hosmer-May goodness-of-fit test) of each model were compared. Among 7754 patients in the VQI who underwent LEB for critical limb ischemia, the AFS was 74% at 1 year. Each of the previously published models for AFS demonstrated similar discriminative performance: c-indices for BASIL, FINNVASC, and mPIII were 0.66, 0.60, and 0.64, respectively. The novel VQI-derived model had improved discriminative ability with a c-index of 0.71 and appropriate generalizability on external validation with a c-index of 0.68. The model was well calibrated in both the VQI and PIII data sets (goodness of fit P = not significant). Currently available prediction models for AFS after LEB perform modestly when applied to national contemporary VQI data. Moreover, the performance of each model was inferior to that of the novel VQI-derived model.
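
Harrell's c-index, the discrimination metric compared in this record, can be computed with a simple pairwise sketch. This is a simplified illustration (it ignores time ties, which full implementations also handle) on toy values, not VQI data.

```python
def harrell_c_index(risk, time, event):
    """Harrell's c-index: the fraction of comparable pairs in which the
    subject with the higher risk score fails earlier. A pair is comparable
    only if the earlier observation is an actual event (not censored)."""
    concordant, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so that subject a fails or is censored first
            a, b = (i, j) if time[i] < time[j] else (j, i)
            if not event[a]:
                continue  # earlier observation censored: pair not comparable
            comparable += 1
            if risk[a] > risk[b]:
                concordant += 1
            elif risk[a] == risk[b]:
                concordant += 0.5
    return concordant / comparable

# Toy data: higher predicted risk should mean shorter amputation-free survival
risk  = [0.9, 0.7, 0.4, 0.2]
time  = [1.0, 2.0, 3.0, 4.0]
event = [1, 1, 0, 1]
print(harrell_c_index(risk, time, event))  # 1.0 for perfectly ranked data
```

A c-index of 0.5 corresponds to random ranking, so the reported values of 0.60 to 0.71 represent modest but real discrimination.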

  12. Cancer cells growing on perfused 3D collagen model produced higher reactive oxygen species level and were more resistant to cisplatin compared to the 2D model.

    Science.gov (United States)

    Liu, Qingxi; Zhang, Zijiang; Liu, Yupeng; Cui, Zhanfeng; Zhang, Tongcun; Li, Zhaohui; Ma, Wenjian

    2018-03-01

    Three-dimensional (3D) collagen scaffold models, due to their ability to mimic tissue and organ structure in vivo, have received increasing interest in drug discovery and toxicity evaluation. In this study, we developed a perfused 3D model and studied the cellular response to cytotoxic drugs in comparison with traditional 2D cell cultures, as evaluated with the cancer drug cisplatin. Cancer cells grown in perfused 3D environments showed increased levels of reactive oxygen species (ROS) production compared to the 2D culture. As determined by growth analysis, cells in the 3D culture, after forming a spheroid, were more resistant to cisplatin than cells in the 2D culture. In addition, 3D-cultured cells showed an elevated level of ROS, indicating a physiological change or the formation of a microenvironment that resembles tumor cells in vivo. These data reveal that cellular responses to drugs in 3D environments differ dramatically from those of 2D-cultured cells. Thus, the perfused 3D collagen scaffold model we report here might be a very useful tool for drug analysis.

  13. A comparative Study between GoldSim and AMBER Based Biosphere Assessment Models for an HLW Repository

    International Nuclear Information System (INIS)

    Lee, Youn-Myoung; Hwang, Yong-Soo

    2007-01-01

    To demonstrate the performance of a repository, the dose rate to human beings due to long-term nuclide releases from a high-level waste (HLW) repository should be evaluated and the results compared to the dose limit set by the regulatory bodies. To evaluate such a dose rate to an individual, biosphere assessment models have been developed and implemented for practical calculation with the aid of the commercial tools AMBER and GoldSim, both of which are capable of probabilistic and deterministic calculation. AMBER is a general-purpose compartment modeling tool with which any kind of compartment scheme can be constructed rather simply by specifying appropriate transition rates between compartments. GoldSim is a multipurpose simulation tool for dynamically modeling complex systems that supports a richer graphical user interface than AMBER and a postprocessing feature; unlike AMBER, it is designed around object-oriented modules that can be assembled to address specialized problems, similar to solving a jigsaw puzzle. During the last couple of years, a compartment modeling approach for the biosphere has been carried out mainly with AMBER at KAERI, in order to provide conservative, rather rough dose conversion factors for obtaining the final exposure rate due to nuclide fluxes into the biosphere over the various geosphere-biosphere interfaces (GBIs) calculated through nuclide transport modules. This created the need for a newly devised, less conservative biosphere model that could be coupled to a nuclide transport model within the framework of developing a total system performance assessment modeling tool, which was successfully done with the aid of GoldSim. Therefore, through the current study, some comparison results of the AMBER and GoldSim approaches for the same biosphere modeling case, without any consideration of geosphere transport, are introduced by extending a previous study.
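
A compartment scheme of the kind both tools build, two compartments exchanging inventory at first-order transition rates, can be sketched as follows. The rates and inventories are hypothetical, and the plain forward-Euler integration is only illustrative of the compartment concept, not either code's actual solver.

```python
def simulate_compartments(n0, k12, k21, dt=0.001, t_end=10.0):
    """Two-compartment first-order exchange:
       dN1/dt = -k12*N1 + k21*N2
       dN2/dt =  k12*N1 - k21*N2
    Integrated with forward Euler; returns the final inventories."""
    n1, n2 = n0, 0.0
    for _ in range(int(t_end / dt)):
        flow12 = k12 * n1 * dt  # transfer from compartment 1 to 2
        flow21 = k21 * n2 * dt  # transfer from compartment 2 to 1
        n1 += flow21 - flow12
        n2 += flow12 - flow21
    return n1, n2

# Hypothetical transition rates; equilibrium ratio n2/n1 approaches k12/k21
n1, n2 = simulate_compartments(n0=1.0, k12=2.0, k21=1.0)
print(n1, n2)  # approaches 1/3, 2/3 while conserving total inventory
```

The symmetric flow updates conserve total inventory exactly, which is the basic bookkeeping property a compartment model must satisfy before radioactive decay and ingestion pathways are layered on top.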

  14. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
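
An empirical FDC of the kind both approaches aim to predict is built by sorting observed flows and assigning each an exceedance probability. The sketch below uses hypothetical daily flows and the Weibull plotting position, one common convention.

```python
def flow_duration_curve(flows):
    """Empirical FDC: (exceedance probability, flow) pairs, using the
    Weibull plotting position p = rank / (n + 1) on descending flows."""
    q = sorted(flows, reverse=True)
    n = len(q)
    return [(rank / (n + 1), q[rank - 1]) for rank in range(1, n + 1)]

def quantile_from_fdc(fdc, p):
    """Flow exceeded with probability p (nearest plotting position)."""
    return min(fdc, key=lambda pt: abs(pt[0] - p))[1]

# Hypothetical daily streamflows (m^3/s)
flows = [12.0, 3.5, 8.1, 22.4, 5.6, 1.2, 15.3, 4.4, 2.8]
fdc = flow_duration_curve(flows)
print(quantile_from_fdc(fdc, 0.5))  # 5.6, the flow exceeded half the time
```

In an ungauged basin no such record exists, which is exactly why the FDC must be either interpolated statistically or generated from a process-based streamflow distribution, the two approaches this study compares.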

  15. Comparing planar image quality of rotating slat and parallel hole collimation: influence of system modeling

    International Nuclear Information System (INIS)

    Holen, Roel van; Vandenberghe, Stefaan; Staelens, Steven; Lemahieu, Ignace

    2008-01-01

    The main remaining challenge for a gamma camera is to overcome the existing trade-off between collimator spatial resolution and system sensitivity. This problem, strongly limiting the performance of parallel hole collimated gamma cameras, can be overcome by applying new collimator designs such as rotating slat (RS) collimators which have a much higher photon collection efficiency. The drawback of a RS collimated gamma camera is that, even for obtaining planar images, image reconstruction is needed, resulting in noise accumulation. However, nowadays iterative reconstruction techniques with accurate system modeling can provide better image quality. Because the impact of this modeling on image quality differs from one system to another, an objective assessment of the image quality obtained with a RS collimator is needed in comparison to classical projection images obtained using a parallel hole (PH) collimator. In this paper, a comparative study of image quality, achieved with system modeling, is presented. RS data are reconstructed to planar images using maximum likelihood expectation maximization (MLEM) with an accurate Monte Carlo derived system matrix while PH projections are deconvolved using a Monte Carlo derived point-spread function. Contrast-to-noise characteristics are used to show image quality for cold and hot spots of varying size. Influence of the object size and contrast is investigated using the optimal contrast-to-noise ratio (CNRo). For a typical phantom setup, results show that cold spot imaging is slightly better for a PH collimator. For hot spot imaging, the CNRo of the RS images is found to increase with increasing lesion diameter and lesion contrast while it decreases when background dimensions become larger. Only for very large background dimensions in combination with low contrast lesions, the use of a PH collimator could be beneficial for hot spot imaging. In all other cases, the RS collimator scores better. Finally, the simulation of a
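
The MLEM reconstruction used for the RS data has a compact multiplicative update. The sketch below applies it to a tiny hypothetical system matrix (rather than the Monte Carlo derived one used in the paper) and noiseless data, where the iterates converge to the true image.

```python
def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation maximization for emission data:
       x <- x / (A^T 1) * A^T (y / (A x)), applied elementwise."""
    n_pix = len(A[0])
    x = [1.0] * n_pix  # uniform non-negative starting image
    sens = [sum(row[j] for row in A) for j in range(n_pix)]  # A^T 1
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(len(A))]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(len(A))]
        back = [sum(A[i][j] * ratio[i] for i in range(len(A))) for j in range(n_pix)]
        x = [x[j] * back[j] / sens[j] for j in range(n_pix)]
    return x

# Tiny 2-pixel "object" seen through a hypothetical 3-measurement system matrix
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
x_true = [2.0, 5.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(3)]  # noiseless data
print(mlem(A, y))  # converges toward [2.0, 5.0]
```

With noisy data the same update accumulates noise as iterations proceed, which is the noise-amplification drawback the abstract mentions for reconstructed RS planar images.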

  16. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    Science.gov (United States)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    Urban public transit systems are typical mixed complex networks with dynamic flow, and their evolution should be a process coupling topological structure with flow dynamics, which has received little attention. This paper uses the R-space representation to carry out a comparative empirical analysis of Beijing's flow-weighted transit route network (TRN), finding that Beijing's TRNs in both 2011 and 2015 exhibit scale-free properties. On this basis, we propose an evolution model driven by flow to simulate the development of TRNs, taking into account the passengers' dynamical behaviors triggered by topological change. The model treats the evolution of a TRN as an iterative process: at each time step, a certain number of new routes are generated, driven by travel demand; this leads to dynamical evolution of the new routes' flow and perturbs nearby routes, which in turn affects the next round of route openings. We present a theoretical analysis based on mean-field theory, as well as a numerical simulation of the model. The results agree well with our empirical findings, indicating that the model can reproduce TRN evolution with scale-free distributions of node strength and degree. The purpose of this paper is to illustrate the global evolutionary mechanism of transit networks, which can inform planning and design strategies for real TRNs.
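
    The R-space projection underlying the analysis can be illustrated with a minimal sketch (toy route data, not the Beijing network): routes become nodes, and two routes are linked when they share at least one station, with the shared-station count standing in for transfer flow:

```python
import itertools
from collections import defaultdict

# Hypothetical toy data (not the Beijing network): route -> stations served.
routes = {
    "R1": {"A", "B", "C"},
    "R2": {"C", "D"},
    "R3": {"D", "E", "F"},
    "R4": {"B", "F"},
}

# R-space projection: each route is a node; two routes are linked if they
# share at least one station. The edge weight (shared-station count) is a
# crude stand-in for the transfer flow a weighted TRN would carry.
edges = defaultdict(int)
for r1, r2 in itertools.combinations(sorted(routes), 2):
    shared = routes[r1] & routes[r2]
    if shared:
        edges[(r1, r2)] = len(shared)

# Node degree (number of linked routes) and strength (total edge weight):
# the two quantities whose distributions the paper finds to be scale-free.
degree = defaultdict(int)
strength = defaultdict(int)
for (r1, r2), w in edges.items():
    for r in (r1, r2):
        degree[r] += 1
        strength[r] += w
```

    On a real network one would then examine the tail of the degree and strength distributions for power-law behavior.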

  17. A Model for Comparing Free Cloud Platforms

    Directory of Open Access Journals (Sweden)

    Radu LIXANDROIU

    2014-01-01

    Full Text Available VMware, VirtualBox, Virtual PC and other popular desktop virtualization applications are used by only a small share of IT users. This article proposes a comparison model for choosing the best cloud platform. Many virtualization applications, such as VMware (VMware Player), Oracle VirtualBox and Microsoft Virtual PC, are free for home users. The main goal of virtualization software is to allow users to run multiple operating systems simultaneously in one virtual environment, using a single desktop computer.

  18. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    Full Text Available In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments on the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were presented on an LCD monitor with a refresh rate of 60 Hz. For operational safety, we set a classification accuracy above 90.0% as a mandatory requirement for telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets achieved accuracies over 90.0% at five repetitions per trial. Therefore, four SSVEP stimuli were used to control four types of robot behavior, while six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITRs) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to make the robot walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded a faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
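
    The reported bit rates are broadly consistent with the standard Wolpaw ITR formula; the abstract does not state its exact convention, so taking the average response time as the trial time here is an assumption:

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_time_s):
    """Information transfer rate in bits/min under the standard Wolpaw
    formula, assuming equiprobable targets and uniformly spread errors."""
    p = accuracy
    bits = math.log2(n_classes)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * 60.0 / trial_time_s

# Reported figures, taking the average response time as the trial time:
ssvep = wolpaw_itr(4, 0.903, 3.65)   # about 23 bits/min
p300 = wolpaw_itr(6, 0.913, 6.6)     # about 18 bits/min
```

    These land close to, but not exactly on, the reported 24.7 and 18.8 bits/min, which suggests the paper uses a slightly different timing convention.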

  19. Comparative performance of diabetes-specific and general population-based cardiovascular risk assessment models in people with diabetes mellitus.

    Science.gov (United States)

    Echouffo-Tcheugui, J-B; Kengne, A P

    2013-10-01

    Multivariable models for estimating cardiovascular disease (CVD) risk in people with diabetes comprise general population-based models and those derived from diabetic cohorts. Whether one set of models should receive preference is unclear. We evaluated the evidence from direct comparisons of the performance of general population vs diabetes-specific CVD risk models in people with diabetes. MEDLINE and EMBASE databases were searched up to March 2013. Two reviewers independently identified studies that compared the performance of general CVD models vs diabetes-specific ones in the same group of people with diabetes. Independent, dual data extraction on study design, risk models, outcomes, and measures of performance was conducted. Eleven articles reporting on 22 pairwise comparisons of a diabetes-specific model (UKPDS, ADVANCE and DCS risk models) to a general population model (three variants of the Framingham model, Prospective Cardiovascular Münster [PROCAM] score, CardioRisk Manager [CRM], Joint British Societies Coronary Risk Chart [JBSRC], Progetto Cuore algorithm and the CHD-Riskard algorithm) were eligible. Absolute differences in the C-statistic of diabetes-specific vs general population-based models varied from -0.13 to 0.09. Comparisons for other performance measures were uncommon. Outcome definitions were congruent with those applied during model development. In 14 comparisons, the UKPDS, ADVANCE or DCS diabetes-specific models were superior to the general population CVD risk models. Authors reported better C-statistics for models they had developed themselves. The limited existing evidence suggests a possible discriminatory advantage of diabetes-specific over general population-based models for CVD risk stratification in diabetes. More robust head-to-head comparisons are needed to confirm this trend and strengthen recommendations. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
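
    The C-statistic that drives these comparisons is the probability that a model ranks a randomly chosen event case above a randomly chosen non-event case. A pure-Python sketch on made-up risk scores (illustrative only, not data from any of the eleven articles):

```python
def c_statistic(risks, events):
    """C-statistic (concordance probability): the fraction of event/non-event
    pairs in which the event case received the higher predicted risk.
    Ties count one half."""
    pos = [r for r, e in zip(risks, events) if e == 1]
    neg = [r for r, e in zip(risks, events) if e == 0]
    concordant = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return concordant / (len(pos) * len(neg))

# Hypothetical predicted CVD risks from two models for the same 8 patients:
events = [1, 1, 1, 0, 0, 0, 0, 0]
model_specific = [0.40, 0.35, 0.20, 0.30, 0.15, 0.10, 0.08, 0.05]
model_general = [0.25, 0.30, 0.10, 0.35, 0.20, 0.12, 0.06, 0.04]
delta = c_statistic(model_specific, events) - c_statistic(model_general, events)
```

    The "absolute difference in C-statistic" reported in the review is exactly this kind of delta, computed on the same patient group for both models.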

  20. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    Science.gov (United States)

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
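
    A minimal ABC rejection sampler conveys the core idea, though MECCA itself uses a far richer ABC-MCMC scheme on real phylogenies. Here we infer a single Brownian-motion rate on a star phylogeny, with all data and priors hypothetical:

```python
import random
import statistics

def simulate_bm_tips(rate, n_tips, t=1.0, rng=random):
    """Tip values under Brownian motion on a star phylogeny of depth t:
    each tip is an independent Normal(0, rate * t) draw."""
    return [rng.gauss(0.0, (rate * t) ** 0.5) for _ in range(n_tips)]

def abc_rejection(observed, n_draws=5000, tol=0.1, seed=1):
    """ABC rejection for the BM rate: draw a rate from a Uniform(0, 5) prior,
    simulate tip data, and keep draws whose tip variance falls within a
    relative tolerance of the observed variance (the summary statistic)."""
    rng = random.Random(seed)
    s_obs = statistics.pvariance(observed)
    kept = []
    for _ in range(n_draws):
        rate = rng.uniform(0.0, 5.0)
        sim = simulate_bm_tips(rate, len(observed), rng=rng)
        if abs(statistics.pvariance(sim) - s_obs) < tol * s_obs:
            kept.append(rate)
    return kept

obs = simulate_bm_tips(2.0, 100, rng=random.Random(0))   # "observed" data, true rate 2.0
posterior = abc_rejection(obs)
estimate = statistics.median(posterior)
```

    The accepted draws approximate the posterior of the rate; the appeal of ABC, as in MECCA, is that only forward simulation is needed, never an explicit likelihood.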

  1. Determining the Best Arch/Garch Model and Comparing JKSE with Stock Index in Developed Countries

    Directory of Open Access Journals (Sweden)

    Kharisya Ayu Effendi

    2015-09-01

    Full Text Available The slow growth of the Indonesian economy in 2014 was due to several factors: internally, the high interest rates in Indonesia; externally, the expected raising of the fed rate in the US during the year. Nevertheless, the JKSE shows a sharply increasing trend from the beginning of 2014 until the second quarter of 2015, with only mild fluctuation. The purpose of this research is to determine the best ARCH/GARCH model for the JKSE and for stock indexes in developed countries (FTSE, Nasdaq and STI), and then to compare the JKSE with those indexes. The best models obtained are GARCH(1,2) for the JKSE, GARCH(2,2) for the FTSE, GARCH(1,1) for the NASDAQ and GARCH(2,1) for the STI. The comparison shows that although the JKSE fluctuates moderately, its trend is upward, whereas the other stock indexes fluctuate strongly and tend to trend downward.
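
    The GARCH(p, q) recursion behind these model selections can be sketched as follows, with illustrative parameters rather than those estimated in the study:

```python
import random

def simulate_garch(omega, alphas, betas, n, seed=42):
    """Simulate a GARCH(p, q) return series:
    sigma2_t = omega + sum_i alphas[i] * r_{t-1-i}^2
                     + sum_j betas[j] * sigma2_{t-1-j},
    with r_t = sigma_t * z_t and z_t ~ N(0, 1)."""
    rng = random.Random(seed)
    p, q = len(alphas), len(betas)
    # Start at the unconditional variance omega / (1 - sum(alphas) - sum(betas)).
    s2_0 = omega / (1.0 - sum(alphas) - sum(betas))
    sigma2 = [s2_0] * max(p, q)
    returns = [0.0] * max(p, q)
    for _ in range(n):
        s2 = (omega
              + sum(a * returns[-1 - i] ** 2 for i, a in enumerate(alphas))
              + sum(b * sigma2[-1 - j] for j, b in enumerate(betas)))
        sigma2.append(s2)
        returns.append(s2 ** 0.5 * rng.gauss(0.0, 1.0))
    return returns[max(p, q):]

# A GARCH(1,1) series, the specification the study selects for the NASDAQ:
r = simulate_garch(omega=0.05, alphas=[0.1], betas=[0.85], n=5000)
```

    Model selection between orders such as (1,1), (1,2), (2,1) and (2,2) is then typically done by fitting each candidate and comparing information criteria.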

  2. Comparing the Performance of Commonly Available Digital Elevation Models in GIS-based Flood Simulation

    Science.gov (United States)

    Ybanez, R. L.; Lagmay, A. M. A.; David, C. P.

    2016-12-01

    With climatological hazards increasing globally, the Philippines is listed as one of the most vulnerable countries in the world due to its location in the Western Pacific. Flood hazard mapping and modelling is one of the responses of local government and research institutions to help prepare for and mitigate the effects of the flood hazards that constantly threaten towns and cities in floodplains during the 6-month rainy season. Digital elevation models, which serve as the most important dataset in 2D flood modelling, are limited in the Philippines, and testing is needed to determine which of the few available would work best for flood hazard mapping and modelling. Two-dimensional GIS-based flood modelling with the flood-routing software FLO-2D was conducted using three available DEMs: the ASTER GDEM, the SRTM GDEM, and the locally available IfSAR DTM. With all other parameters (resolution, soil parameters, rainfall amount, and surface roughness) kept uniform, the three models were run over a 129 km2 watershed with only the base map varying. The output flood hazard maps were compared on the basis of flood distribution, extent, and depth. The ASTER and SRTM GDEMs contained too much error and noise, which manifested as dissipated and dissolved hazard areas in the lower watershed where clearly delineated flood hazards should be present. Noise in the two datasets is clearly visible as erratic mounds in the floodplain. The only dataset that produced a feasible flood hazard map is the IfSAR DTM, which delineates flood hazard areas clearly and properly. Despite their published resolution and accuracy, the ASTER and SRTM datasets are unreliable for GIS-based flood modelling. Although not as accessible, only IfSAR or better datasets should be used for creating secondary products from these base DEM datasets. For developing countries which are most prone to hazards, but with limited choices for basemaps used in hazards

  3. Comparing the cardiovascular therapeutic indices of glycopyrronium and tiotropium in an integrated rat pharmacokinetic, pharmacodynamic and safety model

    International Nuclear Information System (INIS)

    Trifilieff, Alexandre; Ethell, Brian T.; Sykes, David A.; Watson, Kenny J.; Collingwood, Steve; Charlton, Steven J.; Kent, Toby C.

    2015-01-01

    Long acting inhaled muscarinic receptor antagonists, such as tiotropium, are widely used as bronchodilator therapy for chronic obstructive pulmonary disease (COPD). Although this class of compounds is generally considered to be safe and well tolerated in COPD patients the cardiovascular safety of tiotropium has recently been questioned. We describe a rat in vivo model that allows the concurrent assessment of muscarinic antagonist potency, bronchodilator efficacy and a potential for side effects, and we use this model to compare tiotropium with NVA237 (glycopyrronium bromide), a recently approved inhaled muscarinic antagonist for COPD. Anaesthetized Brown Norway rats were dosed intratracheally at 1 or 6 h prior to receiving increasing doses of intravenous methacholine. Changes in airway resistance and cardiovascular function were recorded and therapeutic indices were calculated against the ED50 values for the inhibition of methacholine-induced bronchoconstriction. At both time points studied, greater therapeutic indices for hypotension and bradycardia were observed with glycopyrronium (19.5 and 28.5 fold at 1 h; > 200 fold at 6 h) than with tiotropium (1.5 and 4.2 fold at 1 h; 4.6 and 5.5 fold at 6 h). Pharmacokinetic, protein plasma binding and rat muscarinic receptor binding properties for both compounds were determined and used to generate an integrated model of systemic M2 muscarinic receptor occupancy, which predicted significantly higher M2 receptor blockade at ED50 doses with tiotropium than with glycopyrronium. In our preclinical model there was an improved safety profile for glycopyrronium when compared with tiotropium. - Highlights: • We use an in vivo rat model to study CV safety of inhaled muscarinic antagonists. • We integrate protein and receptor binding and PK of tiotropium and glycopyrrolate. • At ED50 doses for bronchoprotection we model systemic M2 receptor occupancy. • Glycopyrrolate demonstrates lower M2 occupancy at
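
    The systemic M2 occupancy modelling rests on the standard Hill-Langmuir relation. A minimal sketch with purely illustrative concentrations, not the study's pharmacokinetic or binding parameters:

```python
def occupancy(conc, kd):
    """Fractional receptor occupancy from the Hill-Langmuir equation
    with a Hill slope of 1: C / (C + Kd)."""
    return conc / (conc + kd)

# Purely illustrative: free concentration at the bronchoprotective ED50,
# expressed as a multiple of each drug's M2 Kd (hypothetical values).
occ_high = occupancy(conc=5.0, kd=1.0)   # 5x Kd   -> ~83% M2 blockade
occ_low = occupancy(conc=0.2, kd=1.0)    # 0.2x Kd -> ~17% M2 blockade
```

    A drug whose effective dose sits well below its M2 Kd leaves most cardiac M2 receptors unblocked, which is the mechanistic basis of the safety difference the study reports.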

  5. Modeling and experimental study of a corrugated wick type solar still: Comparative study with a simple basin type

    International Nuclear Information System (INIS)

    Matrawy, K.K.; Alosaimy, A.S.; Mahrous, A.-F.

    2015-01-01

    Highlights: • Performance of a corrugated wick type solar still is compared with a simple type. • The corrugated porous surface contributes about 75% of the total productivity. • Productivity of the corrugated solar still was 34% higher than that of the simple type. - Abstract: In the present work, the productivity of a solar still is improved by forming the evaporative surface into a corrugated shape and by decreasing its heat capacity through the use of a porous material. This is achieved by using black cloth, formed into a corrugated shape and immersed in water, which absorbs water and becomes saturated by capillary action. Along with the proposed corrugated wick type solar still, a simple basin-type still was fabricated and tested to quantify the enhancement accomplished by the developed still. Inclined reflectors were used to augment the solar radiation incident on the plane of the developed solar stills. The energy balance in the developed mathematical model takes into consideration the glass covers, the porous material, the portion of water exposed to the transmitted solar radiation, and the portion of water shaded by the corrugated surface. The mathematical model was validated by fabricating and testing prototypes of both the proposed and the simple basin solar stills under the same conditions. Good agreement between the simulated and experimental results was found. An improvement of about 34% in productivity was obtained for the proposed wick type solar still as compared to the simple basin case. The best tilt angle for the inclined reflector was found to be about 30° with respect to the vertical direction of the setup under consideration.

  6. COMPARING FINANCIAL DISTRESS PREDICTION MODELS BEFORE AND DURING RECESSION

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2011-02-01

    Full Text Available The purpose of this paper is to design three separate financial distress prediction models that track changes in the relative importance of financial ratios across three consecutive years. The models were developed by means of logistic regression, based on financial data from 2,000 privately-owned small and medium-sized enterprises in Croatia from 2006 to 2009. Macroeconomic conditions as well as market dynamics changed over this period: financial ratios that were less important in one period became more important in the next, and the composition of the model built on 2006 data changed in the following years. This shows which financial ratios matter most during an economic downturn, and it helps us understand the behavior of small and medium-sized enterprises in the pre-recession and recession periods.
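
    The logistic-regression approach can be sketched on toy ratio data. The firms, ratios and labels below are hypothetical, not the Croatian dataset:

```python
import math

def fit_logistic(X, y, lr=0.5, n_iter=2000):
    """Logistic regression by plain batch gradient descent.
    w[0] is the intercept; one weight per financial ratio follows."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(n_iter):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi   # predicted prob - label
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """Predicted probability of distress for one firm."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical ratios: [current_ratio, debt_ratio] -> distressed (1) or not (0).
X = [[2.1, 0.3], [1.8, 0.4], [2.5, 0.2], [0.6, 0.9], [0.8, 0.8], [0.5, 0.95]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
```

    Refitting the same specification on each year's data, as the paper does, lets the fitted weights reveal which ratios gain or lose importance as the recession unfolds.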

  7. Comparative cost-effectiveness of Option B+ for prevention of mother to child transmission of HIV in Malawi: Mathematical modelling study

    Science.gov (United States)

    Tweya, Hannock; Keiser, Olivia; Haas, Andreas D.; Tenthani, Lyson; Phiri, Sam; Egger, Matthias; Estill, Janne

    2016-01-01

    Objective To estimate the cost-effectiveness of prevention of mother to child transmission (MTCT) of HIV with lifelong antiretroviral therapy (ART) for pregnant and breastfeeding women (‘Option B+’) compared to ART during pregnancy or breastfeeding only unless clinically indicated (‘Option B’). Design Mathematical modelling study of first and second pregnancy, informed by data from the Malawi Option B+ programme. Methods Individual-based simulation model. We simulated cohorts of 10,000 women and their infants during two subsequent pregnancies, including the breastfeeding period, with either Option B+ or B. We parameterised the model with data from the literature and by analysing programmatic data. We compared total costs of ante-natal and post-natal care, and lifetime costs and disability-adjusted life-years (DALYs) of the infected infants between Option B+ and Option B. Results During the first pregnancy, 15% of the infants born to HIV-infected mothers acquired the infection. With Option B+, 39% of the women were on ART at the beginning of the second pregnancy, compared to 18% with Option B. For second pregnancies, the rates of MTCT were 11.3% with Option B+ and 12.3% with Option B. The incremental cost-effectiveness ratio comparing the two options ranged between about US$ 500 and US$ 1300 per DALY averted. Conclusion Option B+ prevents more vertical transmissions of HIV than Option B, mainly because more women are already on ART at the beginning of the next pregnancy. Option B+ is a cost-effective strategy for PMTCT if the total future costs and lost lifetime of the infected infants are taken into account. PMID:26691682
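
    The incremental cost-effectiveness ratio quoted above is simply extra cost divided by DALYs averted. A sketch with illustrative numbers only (chosen to fall inside the reported US$ 500-1300 range, not taken from the study):

```python
def icer(cost_new, cost_old, dalys_averted):
    """Incremental cost-effectiveness ratio: extra cost per DALY averted."""
    return (cost_new - cost_old) / dalys_averted

# Illustrative only (not the study's inputs): per cohort, Option B+ costs
# US$ 900,000 more than Option B and averts 1,000 additional DALYs.
ratio = icer(cost_new=2_400_000, cost_old=1_500_000, dalys_averted=1_000)  # 900 US$/DALY
```

    An intervention is then judged cost-effective by comparing this ratio against a willingness-to-pay threshold, often expressed as a multiple of GDP per capita.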

  8. Comprehensive stroke units: a review of comparative evidence and experience.

    Science.gov (United States)

    Chan, Daniel K Y; Cordato, Dennis; O'Rourke, Fintan; Chan, Daniel L; Pollack, Michael; Middleton, Sandy; Levi, Chris

    2013-06-01

    Stroke unit care offers significant benefits in survival and dependency when compared to care on a general medical ward. Most stroke units are either acute or rehabilitation units; the comprehensive model (a combined acute and rehabilitation stroke unit) is less common. Our aim was to examine the different levels of evidence on comprehensive stroke units compared to other organized inpatient stroke care, and to share local experience with comprehensive stroke units. The Cochrane Library and Medline (1980 to December 2010) were reviewed for English language articles comparing stroke units to alternative forms of stroke care delivery, different types of stroke unit models, and differences in processes of care within different stroke unit models. Different levels of comparative evidence on comprehensive stroke units versus other stroke unit models were collected. There are no randomized controlled trials directly comparing comprehensive stroke units to other stroke unit models (either acute or rehabilitation). Comprehensive stroke units are associated with reduced length of stay and the greatest reduction in combined death and dependency in a meta-analysis when compared to other stroke unit models. Comprehensive stroke units also achieved shorter length of stay and better functional outcome than acute or rehabilitation stroke unit models in a cross-sectional study, and shorter length of stay in a 'before-and-after' comparative study. The components of stroke unit care that improve outcome are multifactorial and most probably include early mobilization. A comprehensive stroke unit model has been successfully implemented in metropolitan and rural hospital settings. Comprehensive stroke units are associated with reductions in length of stay and in combined death and dependency, and with improved functional outcomes, compared to other stroke unit models. A comprehensive stroke unit model is worth considering as the preferred model of stroke unit care in the planning and delivery of metropolitan and rural stroke services.

  9. Comparing models of borderline personality disorder: Mothers' experience, self-protective strategies, and dispositional representations.

    Science.gov (United States)

    Crittenden, Patricia M; Newman, Louise

    2010-07-01

    This study compared aspects of the functioning of mothers with borderline personality disorder (BPD) to those of mothers without psychiatric disorder using two different conceptualizations of attachment theory. The Adult Attachment Interviews (AAIs) of 32 mothers were classified using both the Main and Goldwyn method (M&G) and the Dynamic-Maturational Model method (DMM). We found that mothers with BPD recalled more danger, reported more negative effects of danger, and gave evidence of more unresolved psychological trauma tied to danger than other mothers. We also found that the DMM classifications discriminated between the two groups of mothers better than the M&G classifications. Using the DMM method, the AAIs of BPD mothers were more complex, extreme, and had more indicators of rapid shifts in arousal than those of other mothers. Representations drawn from the AAI, using either classificatory method, did not match the representations of the mother's child drawn from the Working Model of the Child Interview; mothers with very anxious DMM classifications were paired with secure-balanced child representations. We propose that the DMM offers greater clinical utility, conceptual coherence, empirical validity, and coder reliability than the M&G.

  10. Systems thinking, the Swiss Cheese Model and accident analysis: a comparative systemic analysis of the Grayrigg train derailment using the ATSB, AcciMap and STAMP models.

    Science.gov (United States)

    Underwood, Peter; Waterson, Patrick

    2014-07-01

    The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as per the contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. A comparative study of approaches to direct methanol fuel cells modelling

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, V.B.; Falcao, D.S.; Pinto, A.M.F.R. [Centro de Estudos de Fenomenos de Transporte, Departamento de Eng. Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal); Rangel, C.M. [Instituto Nacional de Engenharia, Tecnologia e Inovacao, Paco do Lumiar, 22,1649-038 (Portugal)

    2007-03-15

    Fuel cell modelling has received much attention over the past decade in an attempt to better understand the phenomena occurring within the cell. Mathematical models and simulation are needed as tools for design optimization of fuel cells, stacks and fuel cell power systems. Analytical, semi-empirical and mechanistic models for direct methanol fuel cells (DMFC) are reviewed. Effective models describing the fundamental electrochemical and transport phenomena taking place in the cell have so far been developed. More research is required to develop models that can account for the two-phase flows occurring in the anode and cathode of the DMFC. The merits and demerits of the models are presented. Selected models of different categories are implemented and discussed. Finally, one of the selected simplified models is proposed as a computer-aided tool for real-time system-level DMFC calculations. (author)

  12. A dismantling study of assertive outreach services: comparing activity and outcomes following replacement with the FACT model.

    Science.gov (United States)

    Firn, Mike; Hindhaugh, Keelyjo; Hubbeling, Dieneke; Davies, Gwyn; Jones, Ben; White, Sarah Jane

    2013-06-01

    Financial constraints and some disappointing research evaluations have seen English assertive outreach (AO) teams subject to remodelling, decommissioning and integration into standard care. We tested a specific alternative model of integrating the AO function from two AO teams into six standard community mental health teams (CMHT). The Flexible Assertive Community Treatment model (FACT) was adopted from the Netherlands (Van Veldhuizen, Commun Mental Health J 43(4):421-433, 2007; Bond and Drake, Commun Mental Health J 43(4):435-438, 2007). We aimed to demonstrate non-inferiority in clinical effectiveness and thereby show cost efficiencies associated with FACT. Outcomes were compared in a mirror-image study of the 12-month periods pre- and post-service change, with eligible individuals from the AO teams' caseloads (n = 112) acting as their own controls. We also conducted a cost-consequence analysis of the changes. Outcome data regarding admissions, use of crisis and home treatment, frequency of contact and DNA rate were extracted from the electronic patient record. The results show that AO patients (n = 112) transferred to standard CMHTs with FACT had significantly fewer admissions and a halving of bed use (21 fewer admissions and 2,394 fewer occupied bed days) whilst being in receipt of a less intensive service (2,979 fewer contacts). This was offset by significantly poorer engagement but not by increased use of crisis and home treatment services. Enhancing multi-disciplinary CMHTs with FACT provides a clinically effective alternative to AO teams. FACT offers a cost-effective model compared to AO.

  13. The importance of human cognitive models in the safety analysis report of nuclear power plants - a comparative review

    International Nuclear Information System (INIS)

    Alvarenga, Marco A.B.; Araujo Goes, Alexandre G. de

    1997-01-01

    Chapter 18 of the Brazilian NPPs' Safety Analysis Report (SAR) deals with Human Factors Engineering (HFE). Its evaluation is distributed among ten topics. One of them, Human Reliability Analysis (HRA), is the central subject of the whole analysis, generating information for the other topics, for example, high-risk operational critical sequences. The HRA methods used in the past modeled the human being as a component (hardware), based on a bivalent failure/success logic. In the last ten years, several human cognitive models have been developed for use in the nuclear field as well as in conventional industry, mainly in military aviation. In this paper, we describe their main features and compare some of the models to each other, with the main purpose of determining the minimal characteristics of these cognitive models acceptable for NPP licensing, to be used mainly in the evaluation of the HRAs in NPP SARs. (author). 10 refs

  14. A comparative study of the antacid effect of raw spinach juice and spinach extract in an artificial stomach model.

    Science.gov (United States)

    Panda, Vandana Sanjeev; Shinde, Priyanka Mangesh

    2016-12-01

    Background: Spinacia oleracea, known as spinach, is a green leafy vegetable consumed by people across the globe. It is reported to possess potent medicinal properties by virtue of its numerous antioxidant phytoconstituents, together termed the natural antioxidant mixture (NAO). The present study compares the antacid effect of raw spinach juice with that of an antioxidant-rich methanolic extract of spinach (NAOE) in an artificial stomach model. Methods: The pH of NAOE at various concentrations (50, 100 and 200 mg/mL) and its neutralizing effect on artificial gastric acid were determined and compared with those of raw spinach juice, water, the active control sodium bicarbonate (SB) and a marketed antacid preparation, ENO. A modified model of Vatier's artificial stomach was used to determine the duration of consistent neutralization of artificial gastric acid by the test compounds. The neutralizing capacity of the test compounds was determined in vitro using the classical titration method of Fordtran. Results: NAOE (50, 100 and 200 mg/mL), spinach juice, SB and ENO showed a significantly better acid-neutralizing effect, a consistent duration of neutralization and a higher antacid capacity when compared with water. The highest antacid activity was demonstrated by ENO and SB, followed by spinach juice and NAOE200. Spinach juice exhibited an effect comparable to NAOE (200 mg/mL). Conclusions: Thus, it may be concluded that spinach displays significant antacid activity, be it in the raw juice form or as a methanolic extract.
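Fordtran's titration method ultimately reduces to counting the milliequivalents of acid a sample neutralizes; a back-titration arithmetic sketch (all volumes and normalities are hypothetical illustrations, not the study's actual figures):

```python
def acid_neutralizing_capacity(v_hcl_ml, n_hcl, v_naoh_ml, n_naoh):
    """Acid-neutralizing capacity (mEq) by back-titration: a known
    excess of HCl is added to the sample, the unreacted acid is
    titrated back with NaOH, and the difference is the acid the
    sample itself neutralized."""
    meq_hcl = v_hcl_ml * n_hcl      # total acid added (mEq)
    meq_naoh = v_naoh_ml * n_naoh   # unreacted acid found by back-titration (mEq)
    return meq_hcl - meq_naoh

# Hypothetical run: 50 mL of 0.1 N HCl added, 28 mL of 0.1 N NaOH
# needed to reach the endpoint -> about 2.2 mEq neutralized.
print(acid_neutralizing_capacity(50, 0.1, 28, 0.1))
```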

  15. Competitive versus comparative advantage

    OpenAIRE

    Neary, J. Peter

    2002-01-01

    I explore the interactions between comparative, competitive and absolute advantage in a two-country model of oligopoly in general equilibrium. Comparative advantage always determines the direction of trade, but both competitive and absolute advantage affect resource allocation, trade patterns and trade volumes. Competitive advantage in the sense of more home firms drives foreign firms out of marginal sectors but also makes some marginal home sectors uncompetitive. Absolute advantage in the se...

  16. Comparative analysis of magnetic resonance in the polaron pair recombination and the triplet exciton-polaron quenching models

    Science.gov (United States)

    Mkhitaryan, V. V.; Danilović, D.; Hippola, C.; Raikh, M. E.; Shinar, J.

    2018-01-01

    We present a comparative theoretical study of magnetic resonance within the polaron pair recombination (PPR) and the triplet exciton-polaron quenching (TPQ) models. Both models have been invoked to interpret the photoluminescence detected magnetic resonance (PLDMR) results in π-conjugated materials and devices. We show that resonance line shapes calculated within the two models differ dramatically in several regards. First, in the PPR model, the line shape exhibits unusual behavior upon increasing the microwave power: it evolves from fully positive at weak power to fully negative at strong power. In contrast, in the TPQ model, the PLDMR is completely positive, showing a monotonic saturation. Second, the two models predict different dependencies of the resonance signal on the photoexcitation power, P_L. At low P_L, the resonance amplitude ΔI/I is ∝ P_L within the PPR model, while it is ∝ P_L^2 crossing over to P_L^3 within the TPQ model. On the physical level, the differences stem from different underlying spin dynamics. Most prominently, a negative resonance within the PPR model has its origin in the microwave-induced spin-Dicke effect, leading to the resonant quenching of photoluminescence. The spin-Dicke effect results from the spin-selective recombination, leading to a highly correlated precession of the on-resonance pair partners under the strong microwave power. This effect is not relevant for the TPQ mechanism, where the strong zero-field splitting renders the majority of triplets off resonance. On the technical level, the analytical evaluation of the line shapes for the two models is enabled by the fact that these shapes can be expressed via the eigenvalues of a complex Hamiltonian. This bypasses the necessity of solving the much larger complex linear system of the stochastic Liouville equations. Our findings pave the way towards a reliable discrimination between the two mechanisms via cw PLDMR.
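As a generic illustration of the final technical point, a line shape built from complex eigenvalues lam_k = E_k - i*Gamma_k is a sum of Lorentzians centred at E_k with half-widths Gamma_k; the sketch below is this textbook construction only, not the PPR- or TPQ-specific calculation, and the eigenvalues and weights are invented:

```python
import numpy as np

def line_shape(omega, eigvals, weights):
    """Sum of Lorentzians from complex eigenvalues lam = E - i*Gamma:
    I(omega) = sum_k w_k * Gamma_k / ((omega - E_k)^2 + Gamma_k^2).
    Generic illustration, not the model-specific PLDMR expression."""
    E, Gamma = eigvals.real, -eigvals.imag
    return sum(w * g / ((omega - e) ** 2 + g ** 2)
               for e, g, w in zip(E, Gamma, weights))

# Two hypothetical resonances: centres 0 and 2, widths 0.1 and 0.3.
lam = np.array([0.0 - 0.1j, 2.0 - 0.3j])
w = np.array([1.0, 0.5])
omega = np.linspace(-1, 3, 801)
I = line_shape(omega, lam, w)
print(omega[np.argmax(I)])   # peak sits near the first eigenvalue's real part
```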

  17. Modele bicamerale comparate. Romania: Monocameralism versus bicameralism

    Directory of Open Access Journals (Sweden)

    Cynthia Carmen CURT

    2007-06-01

    Full Text Available The paper attempts to evaluate the Romanian bicameral model and to identify and critically assess the options our country has in choosing between a unicameral and a bicameral system. The analysis observes the characteristics of those Second Chambers that have influenced the configuration of the Romanian bicameral legislature, and asks which constitutional mechanisms can be devised to preserve an efficient bicameral formula. The alternative of giving up the bicameral formula, on arguments of simplification and efficiency of the legislative procedure, is also explored.

  18. BANK RATING. A COMPARATIVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Batrancea Ioan

    2015-07-01

    Full Text Available Banks in Romania offer their customers a wide range of products, all of which involve risk-taking. Researchers therefore seek to build rating models that help bank managers assess the risk of non-recovery of loans and interest. In the following we present ratings of Raiffeisen Bank, BCR-ERSTE Bank and Transilvania Bank based on the CAAMPL and Stickney models, making a comparative analysis of the two rating models.

  19. Models comparative study for heat storage in fixed beds; Estudo comparativo de modelos para armazenamento de calor em leitos fixos

    Energy Technology Data Exchange (ETDEWEB)

    Stuginski, Junior, Rubens

    1991-07-01

    This work presents comparative results of a numerical investigation of four possible models for predicting the thermal performance of fixed bed storage units and for their thermal design. These models include Schumann's model, the radial dispersion model, a model that includes axial heat conduction in the fluid phase and admits a thermal gradient in the solid particles, and finally a two-dimensional single-phase model. For each of these models a computer code was written and tested to evaluate the computing time on the same data and to analyze any other computational problems. The tests of thermal performance covered particle size, porosity, particle material, flow rate, inlet temperature and heat losses from the tank walls and extremities. The dynamic behaviour of the storage units under transient variation of either flow rate or inlet temperature was also investigated. The results presented include temperature gradients, pressure drop and heat storage, and are very useful for the analysis and design of fixed bed storage units. (author)

  20. Early small-bowel ischemia: dual-energy CT improves conspicuity compared with conventional CT in a swine model.

    Science.gov (United States)

    Potretzke, Theodora A; Brace, Christopher L; Lubner, Meghan G; Sampson, Lisa A; Willey, Bridgett J; Lee, Fred T

    2015-04-01

    To compare dual-energy computed tomography (CT) with conventional CT for the detection of small-bowel ischemia in an experimental animal model. The study was approved by the animal care and use committee and was performed in accordance with the Guide for Care and Use of Laboratory Animals issued by the National Research Council. Ischemic bowel segments (n = 8) were created in swine (n = 4) by means of surgical occlusion of distal mesenteric arteries and veins. Contrast material-enhanced dual-energy CT and conventional single-energy CT (120 kVp) sequences were performed during the portal venous phase with a single-source fast-switching dual-energy CT scanner. Attenuation values and contrast-to-noise ratios of ischemic and perfused segments on iodine material-density, monospectral dual-energy CT (51 keV, 65 keV, and 70 keV), and conventional 120-kVp CT images were compared. Linear mixed-effects models were used for comparisons. The attenuation difference between ischemic and perfused segments was significantly greater on dual-energy 51-keV CT images than on conventional 120-kVp CT images (mean difference, 91.7 HU vs 47.6 HU). Dual-energy CT improved the conspicuity of early small-bowel ischemia compared with conventional CT by increasing attenuation differences between ischemic and perfused segments on low-kiloelectron-volt and iodine material-density images. © RSNA, 2014.
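The contrast-to-noise ratio used for this kind of comparison can be sketched as follows; the HU values and the pooled-standard-deviation noise definition are illustrative assumptions, not the study's exact measurement protocol:

```python
import numpy as np

def contrast_to_noise(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions of interest:
    absolute difference of mean attenuation (HU) divided by image
    noise, here taken as the pooled standard deviation of the two
    ROIs (noise definitions vary between studies)."""
    contrast = abs(np.mean(roi_a) - np.mean(roi_b))
    noise = np.sqrt((np.var(roi_a, ddof=1) + np.var(roi_b, ddof=1)) / 2)
    return contrast / noise

# Hypothetical HU samples for ischemic vs perfused bowel wall.
rng = np.random.default_rng(0)
ischemic = rng.normal(30, 10, 500)
perfused = rng.normal(90, 10, 500)
print(contrast_to_noise(ischemic, perfused))   # roughly (90-30)/10 = 6
```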

  1. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    Science.gov (United States)

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
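The DRS procedure described above (estimate each patient's risk from the casemix model, bin patients into risk categories, compute the category-specific event rates for the centre, then take a common weighted sum) can be sketched as follows; the risk values, category cut-points and weights are hypothetical:

```python
from bisect import bisect_right

def direct_risk_standardised_rate(risks, events, cuts, weights):
    """Direct risk standardisation: bin patients by model-estimated
    risk, compute the event rate in each risk category for this
    centre, then combine the category rates with a common set of
    weights (e.g. the risk-category mix of a reference population)."""
    n_cat = len(cuts) + 1
    n = [0] * n_cat            # patients per risk category
    e = [0] * n_cat            # events per risk category
    for r, ev in zip(risks, events):
        k = bisect_right(cuts, r)
        n[k] += 1
        e[k] += ev
    rates = [e[k] / n[k] if n[k] else 0.0 for k in range(n_cat)]
    return sum(w * rate for w, rate in zip(weights, rates))

# Hypothetical centre: model risks, 0/1 outcomes, three risk
# categories (<0.1, 0.1-0.3, >=0.3), reference weights summing to 1.
risks  = [0.05, 0.08, 0.15, 0.22, 0.35, 0.60]
events = [0,    0,    0,    1,    1,    1   ]
print(direct_risk_standardised_rate(risks, events,
                                    cuts=[0.1, 0.3],
                                    weights=[0.5, 0.3, 0.2]))
```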

  2. Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.

    Science.gov (United States)

    Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric

    2016-04-01

    SMAP sensors collect moisture information in top soil at the spatial resolution of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications for human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near real time VIC simulation is intended to provide a source of large scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.

  3. COMPARATIVE EFFICIENCIES STUDY OF SLOT MODEL AND MOUSE MODEL IN PRESSURISED PIPE FLOW

    Directory of Open Access Journals (Sweden)

    Saroj K. Pandit

    2014-01-01

    Full Text Available The flow in sewers is unsteady and varies between free-surface and full-pipe pressurized flow. Sewers are designed on the basis of free-surface (gravity) flow, yet they may carry pressurized flow. The Preissmann slot concept is a widely used numerical approach to unsteady combined free-surface-pressurized flow, as it offers the advantage of treating the flow throughout as a single type of free-surface flow. The slot concept uses the Saint-Venant equations as the basic equations for one-dimensional unsteady free-surface flow. This paper includes two different numerical models using the Saint-Venant equations. In the first model, the Saint-Venant equations of continuity and momentum are solved by the Method of Characteristics and presented in forms for direct substitution into FORTRAN programming for numerical analysis. The MOUSE model carries out computation of unsteady flows founded on an implicit, finite difference numerical solution of the same basic one-dimensional Saint-Venant equations of free-surface flow. The simulation results are compared to analyze the nature and degree of errors for further improvement.
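The key trick of the Preissmann slot is to pick a hypothetical slot width T_s so that the free-surface gravity-wave celerity sqrt(g*A/T) of the slotted section matches the target pressure-wave celerity a of the full pipe, i.e. T_s = g*A/a^2. A minimal sketch with hypothetical pipe values:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def slot_width(area, a):
    """Preissmann slot width T_s = g*A / a^2, chosen so the
    free-surface wave celerity in the slot equals the target
    pressure-wave celerity a of the pressurized pipe."""
    return G * area / a ** 2

def celerity(area, top_width):
    """Gravity-wave celerity c = sqrt(g*A/T) of the Saint-Venant equations."""
    return math.sqrt(G * area / top_width)

# Hypothetical 1 m diameter pipe with a target acoustic celerity of 100 m/s.
A = math.pi * 0.5 ** 2
Ts = slot_width(A, 100.0)
print(Ts)                 # slot width well under a millimetre
print(celerity(A, Ts))    # recovers the target 100 m/s
```

Because the slot is so narrow, it adds negligible storage while letting the same free-surface solver march through surcharged (pressurized) periods.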

  4. Comparative Study of Elastic Network Model and Protein Contact Network for Protein Complexes: The Hemoglobin Case

    Directory of Open Access Journals (Sweden)

    Guang Hu

    2017-01-01

    Full Text Available The overall topology and interfacial interactions play key roles in understanding the structural and functional principles of protein complexes. The Elastic Network Model (ENM) and the Protein Contact Network (PCN) are two widely used methods for high-throughput investigation of structures and interactions within protein complexes. In this work, the comparative analysis of ENM and PCN applied to hemoglobin (Hb) was taken as a case study. We examine four types of structural and dynamical paradigms, namely conformational change between different states of Hbs, modular analysis, allosteric mechanism studies, and interface characterization of Hb. The comparative study shows that ENM has an advantage in studying dynamical properties and protein-protein interfaces, while PCN is better for describing protein structures quantitatively at both the local and the global level. We suggest that the integration of ENM and PCN would provide a powerful tool in structural systems biology.
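A minimal sketch of the ENM idea in its Gaussian Network Model form (a Kirchhoff matrix built from residue coordinates with a distance cutoff, with the slow modes coming from its eigendecomposition); the coordinates below are synthetic, not hemoglobin's:

```python
import numpy as np

def kirchhoff_matrix(coords, cutoff=7.0):
    """GNM Kirchhoff (connectivity) matrix: off-diagonal entries are -1
    for residue pairs within the cutoff distance, and each diagonal
    entry equals the residue's contact count, so rows sum to zero."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    k = -(d < cutoff).astype(float)
    np.fill_diagonal(k, 0.0)
    np.fill_diagonal(k, -k.sum(axis=1))
    return k

# Synthetic 'backbone': 20 pseudo-residues along a gently curved chain.
t = np.linspace(0, 3 * np.pi, 20)
coords = np.c_[4 * np.cos(t), 4 * np.sin(t), 2 * t]
K = kirchhoff_matrix(coords)
evals, evecs = np.linalg.eigh(K)
# The first eigenvalue is ~0 (the rigid-body mode); the next few slow
# modes carry the large-scale fluctuations analysed in ENM studies.
print(evals[:3])
```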

  5. Comparative Analysis Of Three Largest World Models Of Business Excellence

    Directory of Open Access Journals (Sweden)

    Jasminka Samardžija

    2009-07-01

    Full Text Available Business excellence has become the strongest means of achieving competitive advantage for companies, while total quality management has become the road that ensures the support of excellent results, recognized by many world-class companies. Despite many differences, we can conclude that the models share many common elements. With the 2005 audit, the DP and MBNQA moved the focus from excellence of the product or service to excellence of the quality of the entire organizational process. Quality thus acquired a strategic dimension instead of a technical one, and the accent passed from technical quality to the total excellence of all organizational processes. The joint movement is in the direction of good management and an appreciation of systems thinking. The very structure of the EFQM model criteria is already adjusted to the strategic dimension of quality, which is why the model underwent only minor audits within the criteria themselves; essentially, the model remained unchanged. In all models, the accent is on the satisfaction of buyers, employees and the community. National quality awards play an important role in promoting and rewarding excellence in organizational performance; moreover, they raise the quality standards of companies and the profile of the country as a whole. Considering its GDP per capita and the percentage of certified companies, Croatia has all the predispositions for introducing the EFQM model of business excellence, with the basic aims of decreasing the foreign trade deficit and strengthening competitiveness as the necessary groundwork for entering the competitive EU market. Quality management has been introduced in many organizations. The methods used have developed over the years, and what is to be expected is a continuation of the evolution of business excellence models and methods.

  6. Excitant and depressant drugs modulate effects of environment on brain weight and cholinesterases

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, E.L.; Rosenzweig, M.R.; Wu, S.Y.C.

    1973-01-01

    Certain excitant drugs can enhance the effects of enriched experience on weights of brain sections and on the activities of acetylcholinesterase and cholinesterase in the brain, and certain depressants can lessen the brain weight effects. Most experiments were performed with prepubertal male rats. Some rats were exposed in groups of 12 to an enriched environmental condition (EC), usually for 2 h per day and over a 30-day period; others remained in their individual home cages (HC) throughout. Some received a drug injection and others received a saline injection before the daily EC period; HC controls received similar injections. The drug injections had no significant effects on brain values of HC rats, but they altered effects of EC, probably by influencing the animals' reactions to the environment. Methamphetamine and d-amphetamine enhanced the EC effects; metrazol had small positive effects; and strychnine was without effects. Phenobarbital depressed the brain weight effects but increased the enzymatic effects. Use of methamphetamine made it possible to find EC effects with short daily periods (30 min) or with a shortened experimental duration (15 days). In experiments with adult rats, methamphetamine did not modulate the brain weight effects. The results of this study may bear on the use of stimulants to promote recovery from brain damage.

  7. AHR-11797: a novel benzodiazepine antagonist

    International Nuclear Information System (INIS)

    Johnson, D.N.; Kilpatrick, B.F.; Hannaman, P.K.

    1986-01-01

    AHR-11797 (5,6-dihydro-6-methyl-1-phenyl-3H-pyrrolo[3,2,1-ij]quinazolin-3-one) displaced 3H-flunitrazepam (IC50 = 82 nM) and 3H-Ro 15-1877 (IC50 = 104 nM) from rat brain synaptosomes. AHR-11797 did not protect mice from seizures induced by maximal electroshock or subcutaneous Metrazol (scMET), nor did it induce seizures in doses up to the lethal dose. However, at 31.6 mg/kg, IP, it significantly increased the anticonvulsant ED50 of chlordiazepoxide (CDPX) from 1.9 to 31.6 mg/kg, IP. With 56.7 mg/kg, IP, of AHR-11797, CDPX was inactive in doses up to 100 mg/kg, IP. AHR-11797 did not significantly increase punished responding in the Geller and Seifter conflict procedure, but it did attenuate the effects of diazepam. Although the compound is without anticonvulsant or anxiolytic activity, it does have muscle relaxant properties. AHR-11797 blocked morphine-induced Straub tail in mice (ED50 = 31 mg/kg, IP) and it selectively suppressed the polysynaptic linguomandibular reflex in barbiturate-anesthetized cats. The apparent muscle relaxant activity of AHR-11797 suggests that different receptor sites are involved for the muscle relaxant vs. anxiolytic/anticonvulsant activities of the benzodiazepines

  8. Classifying and comparing spatial models of fire dynamics

    Science.gov (United States)

    Geoffrey J. Cary; Robert E. Keane; Mike D. Flannigan

    2007-01-01

    Wildland fire is a significant disturbance in many ecosystems worldwide and the interaction of fire with climate and vegetation over long time spans has major effects on vegetation dynamics, ecosystem carbon budgets, and patterns of biodiversity. Landscape-Fire-Succession Models (LFSMs) that simulate the linked processes of fire and vegetation development in a spatial...

  9. A comparative study of independent particle model based ...

    Indian Academy of Sciences (India)

    We find that among these three independent particle model based methods, the ss-VSCF method provides the most accurate thermal averages, followed by t-SCF; the v-VSCF is the least accurate. However, the ss-VSCF is found to be computationally very expensive for large molecules. The t-SCF gives ...

  10. Comparing Reasons for Quitting Substance Abuse with the Constructs of Behavioral Models: A Qualitative Study

    Directory of Open Access Journals (Sweden)

    Hamid Tavakoli Ghouchani

    2015-03-01

    Full Text Available Background and Objectives: The world population has reached over seven billion people. Of these, 230 million individuals abuse substances. Therefore, substance abuse prevention and treatment programs have received increasing attention during the past two decades. Understanding people’s motivations for quitting drug abuse is essential to the success of treatment. This study hence sought to identify major motivations for quitting and to compare them with the constructs of health education models. Materials and Methods: In the present study, qualitative content analysis was used to determine the main motivations for quitting substance abuse. Overall, 22 patients, physicians, and psychotherapists were selected from several addiction treatment clinics in Bojnord (Iran during 2014. Purposeful sampling method was applied and continued until data saturation was achieved. Data were collected through semi-structured, face-to-face interviews and field notes. All interviews were recorded and transcribed. Results: Content analysis revealed 33 sub-categories and nine categories including economic problems, drug-related concerns, individual problems, family and social problems, family expectations, attention to social status, beliefs about drug addiction, and valuing the quitting behavior. Accordingly, four themes, i.e. perceived threat, perceived barriers, attitude toward the behavior, and subjective norms, were extracted. Conclusion: Reasons for quitting substance abuse match the constructs of different behavioral models (e.g. the health belief model and the theory of planned behavior.

  11. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2,600 gene families) in Buchnera aphidicola to large (around 43,000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.
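A hedged, single-component sketch of the capture-recapture idea: each gene family present in k of G sequenced genomes is observable only when k >= 1, so fitting a zero-truncated binomial to the observed counts lets one estimate the unseen families. The paper fits mixtures of binomials; here only one component is fitted, and all data are simulated:

```python
import math, random

def fit_zero_truncated_binomial(counts, genomes):
    """MLE of the detection probability p for a zero-truncated binomial:
    each observed gene family is seen in k of G genomes, but families
    with k = 0 are unobservable. Simple grid search over p using the
    sufficient statistics n (families) and S (total detections)."""
    n, S = len(counts), sum(counts)
    def loglik(p):
        return (S * math.log(p) + (n * genomes - S) * math.log(1 - p)
                - n * math.log(1 - (1 - p) ** genomes))
    return max((i / 1000 for i in range(1, 1000)), key=loglik)

# Simulate: N_true gene families, each present in each of G genomes
# with probability p_true; families never observed are dropped.
random.seed(1)
G, N_true, p_true = 10, 3000, 0.25
counts = [k for k in (sum(random.random() < p_true for _ in range(G))
                      for _ in range(N_true)) if k > 0]
p_hat = fit_zero_truncated_binomial(counts, G)
# Capture-recapture estimate of pan-genome size (observed + unseen):
N_hat = len(counts) / (1 - (1 - p_hat) ** G)
print(len(counts), p_hat, N_hat)   # N_hat recovers roughly N_true
```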

  12. Inferring spatial memory and spatiotemporal scaling from GPS data: comparing red deer Cervus elaphus movements with simulation models.

    Science.gov (United States)

    Gautestad, Arild O; Loe, Leif E; Mysterud, Atle

    2013-05-01

    1. Increased inference regarding underlying behavioural mechanisms of animal movement can be achieved by comparing GPS data with statistical mechanical movement models such as random walk and Lévy walk with known underlying behaviour and statistical properties. 2. GPS data are typically collected with ≥ 1 h intervals not exactly tracking every mechanistic step along the movement path, so a statistical mechanical model approach rather than a mechanistic approach is appropriate. However, comparisons require a coherent framework involving both scaling and memory aspects of the underlying process. Thus, simulation models have recently been extended to include memory-guided returns to previously visited patches, that is, site fidelity. 3. We define four main classes of movement, differing in incorporation of memory and scaling (based on respective intervals of the statistical fractal dimension D and presence/absence of site fidelity). Using three statistical protocols to estimate D and site fidelity, we compare these main movement classes with patterns observed in GPS data from 52 females of red deer (Cervus elaphus). 4. The results show best compliance with a scale-free and memory-enhanced kind of space use; that is, a power law distribution of step lengths, a fractal distribution of the spatial scatter of fixes and site fidelity. 5. Our study thus demonstrates how inference regarding memory effects and a hierarchical pattern of space use can be derived from analysis of GPS data. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
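The fractal dimension D referred to above can be estimated from a set of fixes by box counting; a minimal sketch on simulated points (a space-filling scatter gives D near 2, while points along a line would give D near 1; real GPS fixes would first be rescaled into the unit square):

```python
import math, random

def box_counting_dimension(points, scales=(4, 8, 16, 32, 64)):
    """Estimate the fractal dimension D of a point set in the unit
    square by box counting: count occupied grid cells N(s) at several
    grid resolutions s and fit the slope of log N against log s."""
    xs, ys = [], []
    for s in scales:
        occupied = {(int(x * s), int(y * s)) for x, y in points}
        xs.append(math.log(s))
        ys.append(math.log(len(occupied)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Least-squares slope of log N(s) vs log s.
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Simulated space-filling 'fixes' in the unit square.
random.seed(2)
fixes = [(random.random(), random.random()) for _ in range(20000)]
print(box_counting_dimension(fixes))   # close to 2 for a uniform scatter
```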

  13. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    Directory of Open Access Journals (Sweden)

    Kirti AREKAR

    2017-12-01

    Full Text Available Stock market volatility depends on three major features: overall volatility, volatility fluctuations, and volatility attention, which are calculated by statistical techniques. We present a comparative analysis of market volatility for two major indices, the banking and IT sectors of the Bombay Stock Exchange (BSE), using the average decline model. The average decline process in volatility is examined after very high and very low stock returns. The results of this study show a significant decline in volatility fluctuations, attention, and level between the periods before and after particularly high stock returns.

  14. Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies

    Science.gov (United States)

    Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.

    2009-04-01

    Land Evaluation (LE) comprises the procedures used to assess the suitability of land for a generic or specific use (e.g. biomass production). From the local to the regional and national scale, land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. In the classical approaches, the suitability assessment is the result of a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models whose knowledge structure is built for a specific landscape and a specific object of evaluation (e.g. a crop). The outcome is great difficulty in spatially extrapolating the LE results, and rigidity of the system. Modern techniques instead rely on mechanistic and quantitative simulation models that allow a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, inserting physically based rules into the LE procedure may ease both the spatial extension of the results and changes of the object of the evaluation (e.g. crop species, nitrate dynamics, etc.). On the other hand, these modern approaches require input data of high quality and quantity, which significantly increases costs. In this scenario the LE expert must choose the best LE methodology by weighing costs, the complexity of the procedure and the benefits for the specific land evaluation at hand. In this work we performed a forage maize land suitability study by comparing 9 different methods of increasing complexity and cost. The study area, of about 2000 ha, is located in northern Italy in the Lodi plain (Po valley). 
The range of the 9 employed methods ranged from standard LE approaches to

  15. Noise model for serrated trailing edges compared to wind tunnel measurements

    DEFF Research Database (Denmark)

    Fischer, Andreas; Bertagnolio, Franck; Shen, Wen Zhong

    2016-01-01

    A new CFD RANS based method to predict the far field sound pressure emitted from an aerofoil with serrated trailing edge has been developed. The model was validated by comparison to measurements conducted in the Virginia Tech Stability Wind Tunnel. The model predicted 3 dB lower sound pressure levels, but the tendencies for the different configurations were predicted correctly. Therefore the model can be used to optimise the serration geometry. A disadvantage of the new model is that the computational costs are significantly higher than for the Amiet model for a straight trailing edge. However...

  16. Comparative evaluation of two models of UPQC for suitable interface to enhance power quality

    Energy Technology Data Exchange (ETDEWEB)

    Basu, Malabika [Department of Electrical Engineering, Dublin Institute of Technology, Kevin Street, Dublin 8 (Ireland); Das, Shyama P.; Dubey, Gopal K. [Department of Electrical Engineering, Indian Institute of Technology, Kanpur (India)

    2007-05-15

    The majority of dispersed generation from renewable energy sources is connected to the grid through power electronic interfaces, which introduce additional harmonics into the distribution system. Research is being carried out to integrate active filtering with the specific interface so that a common power quality (PQ) platform can be achieved. As a generalized solution, a unified power quality conditioner (UPQC) may be the most comprehensive PQ protecting device for sensitive non-linear loads that require a quality input supply. Load current harmonic isolation also needs to be ensured to maintain the quality of the supply current. The present paper describes two control-scheme models for a UPQC for enhancing the PQ of sensitive non-linear loads. Based on two different voltage compensation strategies, two control schemes have been designed, termed UPQC-Q and UPQC-P. A comparative loading analysis gives useful insight into the typical applications of the two control schemes. Their effectiveness is verified through extensive simulation using the software SABER. As the power circuit configuration of the UPQC remains the same for both models, with modification of the control scheme only, the utility of the UPQC can be optimized depending on the application requirement. (author)

  17. TDHF-motivated macroscopic model for heavy ion collisions: a comparative study

    International Nuclear Information System (INIS)

    Biedermann, M.; Reif, R.; Maedler, P.

    1984-01-01

    A detailed investigation of Bertsch's classical TDHF-motivated model for the description of heavy ion collisions is performed. The model agrees well with TDHF and with phenomenological models that include deformation degrees of freedom, as well as with experimental data. Some quantitative deviations from experiment and/or TDHF can be removed to a large extent if the standard model parameters are considered as adjustable parameters within physically reasonable regions of variation.

  18. Hepatic differentiation of human iPSCs in different 3D models: A comparative study.

    Science.gov (United States)

    Meier, Florian; Freyer, Nora; Brzeszczynska, Joanna; Knöspel, Fanny; Armstrong, Lyle; Lako, Majlinda; Greuel, Selina; Damm, Georg; Ludwig-Schwellinger, Eva; Deschl, Ulrich; Ross, James A; Beilmann, Mario; Zeilinger, Katrin

    2017-12-01

    Human induced pluripotent stem cells (hiPSCs) are a promising source from which to derive distinct somatic cell types for in vitro or clinical use. Existing protocols for hepatic differentiation of hiPSCs are primarily based on 2D cultivation of the cells. In the present study, the authors investigated the generation of hiPSC-derived hepatocyte-like cells using two different 3D culture systems: a 3D scaffold-free microspheroid culture system and a 3D hollow-fiber perfusion bioreactor. The differentiation outcome in these 3D systems was compared with that in conventional 2D cultures, using primary human hepatocytes as a control. The evaluation was based on specific mRNA expression, protein secretion, antigen expression and metabolic activity. The expression of α-fetoprotein was lower, while cytochrome P450 1A2 and 3A4 activities were higher, in the 3D culture systems as compared with the 2D differentiation system. Cells differentiated in the 3D bioreactor showed increased expression of albumin and hepatocyte nuclear factor 4α, as well as secretion of α-1-antitrypsin, as compared with the 2D differentiation system, suggesting a higher degree of maturation. The 3D scaffold-free microspheroid culture provides an easy and robust method to generate spheroids of a defined size for screening applications, while the bioreactor culture model provides an instrument for complex investigations under physiological-like conditions. In conclusion, the present study introduces two 3D culture systems for stem cell-derived hepatic differentiation, each demonstrating advantages for individual applications as well as benefits in comparison with 2D cultures.

  19. Model based population PK-PD analysis of furosemide for BP lowering effect: A comparative study in primary and secondary hypertension.

    Science.gov (United States)

    Shukla, Mahendra; Ibrahim, Moustafa M A; Jain, Moon; Jaiswal, Swati; Sharma, Abhisheak; Hanif, Kashif; Lal, Jawahar

    2017-11-15

    Although numerous reports have demonstrated multiple mechanisms by which furosemide can exert an anti-hypertensive response, the lack of studies describing a PK-PD relationship for its anti-hypertensive property has limited its usage as a blood pressure (BP) lowering agent. Serum concentrations and mean arterial BP were monitored following multiple oral doses of 40 and 80 mg/kg furosemide in spontaneously hypertensive rats (SHR) and DOCA-salt induced hypertensive (DOCA-salt) rats. A simultaneous population PK-PD relationship using an Emax model with an effect compartment was developed to compare the anti-hypertensive efficacy of furosemide in these rat models. A two-compartment PK model with Weibull-type absorption and first-order elimination best described the serum concentration-time profile of furosemide. In the present study, post-dose serum concentrations of furosemide were found to be lower than the EC50. The EC50 predicted in DOCA-salt rats was lower (4.5-fold), whereas tolerance development was higher, than in the SHR model. The PK-PD parameter estimates, particularly the lower values of EC50, Ke and Q in DOCA-salt rats as compared to SHR, pinpointed the higher BP lowering efficacy of furosemide in volume overload-induced hypertensive conditions. Insignificantly altered serum creatinine and electrolyte levels indicated a favorable side effect profile of furosemide. In conclusion, the final PK-PD model described the data well and provides detailed insights into the use of furosemide as an anti-hypertensive agent. Copyright © 2017. Published by Elsevier B.V.
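    The Emax concentration-effect relationship underlying such analyses can be sketched as follows. This is a generic illustration of the model family, not the study's fitted model: the baseline BP, maximal effect and EC50 values below are arbitrary placeholders, and the concentration driving the effect would in practice come from the effect compartment of the fitted two-compartment PK model.

    ```python
    def emax_effect(conc, e0, emax, ec50, hill=1.0):
        """Sigmoid Emax model: effect (here, mean arterial BP) as a function
        of drug concentration. At conc == ec50 exactly half of the maximal
        BP reduction is achieved."""
        return e0 - emax * conc**hill / (ec50**hill + conc**hill)

    # Placeholder values: baseline mean arterial BP 150 mmHg, maximal
    # BP reduction 40 mmHg, EC50 of 10 (arbitrary concentration units).
    e0, emax, ec50 = 150.0, 40.0, 10.0
    print(emax_effect(5.0, e0, emax, ec50))    # below EC50: less than half the drop
    print(emax_effect(10.0, e0, emax, ec50))   # at EC50: 150 - 20 = 130.0
    ```

    The abstract's observation that post-dose concentrations stayed below the EC50 corresponds to operating on the lower, sub-half-maximal part of this curve.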

  20. Analytic model comparing the cost utility of TVT versus duloxetine in women with urinary stress incontinence.

    Science.gov (United States)

    Jacklin, Paul; Duckett, Jonathan; Renganathan, Arasee

    2010-08-01

    The purpose of this study was to assess the cost utility of duloxetine versus tension-free vaginal tape (TVT) as a second-line treatment for urinary stress incontinence. A Markov model was used to compare the cost utility based on a 2-year follow-up period. Quality-adjusted life year (QALY) estimation was performed by assuming a disutility rate of 0.05. Under base-case assumptions, although duloxetine was the cheaper option, TVT gave a considerably higher QALY gain. When a longer follow-up period was considered, TVT had an incremental cost-effectiveness ratio (ICER) of £7,710 ($12,651) at 10 years. If the QALY gain from cure was 0.09, then the ICERs for duloxetine and TVT would both fall within the indicative National Institute for Health and Clinical Excellence willingness-to-pay threshold at 2 years, but TVT would be the cost-effective option, having extended dominance over duloxetine. This model suggests that TVT is a cost-effective treatment for stress incontinence.
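    The ICER reported by such analyses is simply the cost difference between strategies divided by their QALY difference. A minimal sketch of the calculation follows; the cost and QALY inputs are illustrative placeholders, not outputs of the study's Markov model:

    ```python
    def icer(cost_new, cost_old, qaly_new, qaly_old):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY
        of the new strategy relative to the comparator."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # Illustrative placeholders: surgery costs £2,000 more than drug therapy
    # over the time horizon but yields 0.25 additional QALYs.
    print(icer(cost_new=3500.0, cost_old=1500.0,
               qaly_new=1.50, qaly_old=1.25))  # 8000.0 (£ per QALY gained)
    ```

    An intervention is then judged cost-effective when this ratio falls below the decision-maker's willingness-to-pay threshold per QALY, which is how the abstract's £7,710 figure for TVT at 10 years would be interpreted.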