WorldWideScience

Sample records for comparative model analysis

  1. Comparative analysis of some existing kinetic models with proposed

    African Journals Online (AJOL)

    IGNATIUS NWIDI

    two statistical parameters, namely: linear regression coefficient of correlation (R²) and ... Keywords: Heavy metals, Biosorption, Kinetics Models, Comparative analysis, Average Relative Error. ... If the flow rate is low, a simple manual batch.

  2. Comparative analysis of used car price evaluation models

    Science.gov (United States)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles. However, little work has compared different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical analysis and a thorough comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
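
    A minimal sketch of the two-algorithm comparison described above (not the authors' code): both models are fit on the same split and scored on held-out data. The CSV file name and DataFrame columns are hypothetical.

```python
# Hedged sketch: linear regression vs. random forest on used-car prices.
# Assumes a CSV with hypothetical columns 'age', 'mileage', 'displacement',
# 'price'; requires scikit-learn >= 0.24 for MAPE.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

cars = pd.read_csv("used_cars.csv")                  # hypothetical file
X = cars[["age", "mileage", "displacement"]]
y = cars["price"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{type(model).__name__}: MAPE = {mape:.3f}")
```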

  3. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
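
    For orientation, the kinetic relation that both analyses ultimately invert can be written in the standard ASL form below (illustrative notation, not quoted from the paper): the measured signal is the arterial input function convolved with a residue function scaled by perfusion, and the model-free route recovers perfusion by numerical deconvolution.

```latex
% Standard ASL kinetic relation (illustrative notation):
\Delta M(t) \;=\; f \int_{0}^{t} \mathrm{AIF}(\tau)\, R(t-\tau)\,\mathrm{d}\tau,
\qquad \hat{f} \;=\; \max_{t} \big[\, f\,R(t) \,\big],
% where f is perfusion (CBF), AIF the local arterial input function, and
% R(t) the residue function estimated by deconvolution.
```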

  4. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
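
    As a concrete illustration of the variance-based method in this list, the sketch below computes first- and total-order Sobol' indices with the SALib package. The two-parameter toy function and parameter names stand in for an actual SAC-SMA run; they are assumptions, not the study's setup.

```python
# Hedged sketch of a Sobol' sensitivity analysis using SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 2,
    "names": ["capacity", "recession"],      # hypothetical parameters
    "bounds": [[10.0, 500.0], [0.01, 0.5]],
}
X = saltelli.sample(problem, 1024)           # Saltelli cross-sampling design
Y = np.sqrt(X[:, 0]) + np.sin(X[:, 1])       # stand-in for a model evaluation
Si = sobol.analyze(problem, Y)
print("first-order:", Si["S1"], "total-order:", Si["ST"])
```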

  5. Comparative dynamic analysis of the full Grossman model.

    Science.gov (United States)

    Ried, W

    1998-08-01

    The paper applies the method of comparative dynamic analysis to the full Grossman model. For a particular class of solutions, it derives the equations implicitly defining the complete trajectories of the endogenous variables. Relying on the concept of Frisch decision functions, the impact of any parametric change on an endogenous variable can be decomposed into a direct and an indirect effect. The focus of the paper is on marginal changes in the rate of health capital depreciation. It also analyses the impact of either initial financial wealth or the initial stock of health capital. While the direction of most effects remains ambiguous in the full model, the assumption of a zero consumption benefit of health is sufficient to obtain a definite sign for any direct or indirect effect.
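
    For readers unfamiliar with the model, the law of motion at the core of the Grossman framework is the standard textbook form below (notation illustrative); the depreciation rate is the parameter whose marginal changes the paper traces.

```latex
% Health capital accumulation in the Grossman model (standard form):
\dot{H}(t) \;=\; I(t) \;-\; \delta(t)\,H(t),
% where H(t) is the stock of health capital, I(t) gross investment in
% health, and \delta(t) the rate of health capital depreciation.
```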

  6. Bayesian meta-analysis models for microarray data: a comparative study

    Directory of Open Access Journals (Sweden)

    Song Joon J

    2007-03-01

    Background: With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches for meta-analysis of microarrays include either combining gene expression measures across studies or combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods. Results: Two Bayesian meta-analysis models for microarray data have recently been introduced. The first model combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Since the standardized expression integration model includes inter-study variability, it may improve accuracy of results versus the probability integration model. However, due to the small number of studies typical in microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model variability between studies, and thus its implementation is more straightforward. We found in simulations of two and five studies that combining probabilities outperformed combining standardized gene expression measures for three comparison values: the percent of true discovered genes in meta-analysis versus individual studies; the percent of true genes omitted in meta-analysis versus separate studies; and the number of true discovered genes for fixed levels of Bayesian false discovery. We identified similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was produced from the same microarray platform with only two conditions, a treatment and control, and that the data sets …

  7. COMPARATIVE ANALYSIS OF SOFTWARE DEVELOPMENT MODELS

    OpenAIRE

    Sandeep Kaur

    2017-01-01

    No geek is unfamiliar with the concept of the software development life cycle (SDLC). This research deals with the various SDLC models, covering the waterfall, spiral, iterative, agile, V-shaped and prototype models. In the modern era, all software systems are fallible, as none can stand with certainty. This paper therefore compares all aspects of the various models, with their pros and cons, so that it becomes easier to choose a particular model at the time of need.

  8. Comparative analysis of existing models for power-grid synchronization

    International Nuclear Information System (INIS)

    Nishikawa, Takashi; Motter, Adilson E

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations. (paper)
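
    The common structure the paper refers to can be summarized by the second-order phase-oscillator form below, written in standard swing-equation notation (a generic rendering of this literature, not an exact quotation of the paper's equations).

```latex
% Network of second-order phase oscillators with forcing and damping:
\frac{2H_i}{\omega_R}\,\ddot{\delta}_i \;+\; \frac{D_i}{\omega_R}\,\dot{\delta}_i
\;=\; A_i \;-\; \sum_{j \neq i} K_{ij}\,\sin\!\big(\delta_i - \delta_j - \gamma_{ij}\big),
% where \delta_i is the phase of oscillator i, H_i its inertia constant,
% D_i its damping, A_i the net power drive, and K_{ij}, \gamma_{ij} the
% coupling strengths and phase shifts.
```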

  9. Comparative Analysis of Investment Decision Models

    Directory of Open Access Journals (Sweden)

    Ieva Kekytė

    2017-06-01

    Rapid development of financial markets has created new challenges for both investors and investment issues. This has increased the demand for innovative, modern investment and portfolio management decisions adequate to market conditions. Financial markets receive special attention, with new models being created that include financial risk management and investment decision support systems. Researchers recognize the need to deal with financial problems using models consistent with reality and based on sophisticated quantitative analysis techniques. Thus, the role of mathematical modeling in finance becomes important. This article deals with various investment decision-making models, which include forecasting, optimization, stochastic processes, artificial intelligence, etc., and which have become useful tools for investment decisions.

  10. Wellness Model of Supervision: A Comparative Analysis

    Science.gov (United States)

    Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.

    2012-01-01

    This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…

  11. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    In this paper we present a comparison among the distributions used in hazard analysis. Simulation techniques have been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We present the flexibility of the hazard modeling distribution, which can approximate different distributions.
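
    As a minimal illustration of what such a comparison involves, the sketch below evaluates the hazard function h(t) = f(t)/S(t) for three candidate lifetime distributions with SciPy; the distributions and parameters are illustrative assumptions, not the paper's simulation modules.

```python
# Hedged sketch: hazard functions of candidate lifetime distributions.
import numpy as np
from scipy import stats

t = np.linspace(0.1, 10.0, 100)
for dist in (stats.expon(), stats.weibull_min(c=1.5), stats.lognorm(s=0.8)):
    hazard = dist.pdf(t) / dist.sf(t)      # h(t) = f(t) / S(t)
    print(f"{dist.dist.name}: h(1) = {hazard[9]:.3f}")   # t[9] = 1.0
```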

  12. Comparative Analysis of Photogrammetric Methods for 3D Models for Museums

    DEFF Research Database (Denmark)

    Hafstað Ármannsdottir, Unnur Erla; Antón Castro, Francesc/François; Mioc, Darka

    2014-01-01

    The goal of this paper is to make a comparative analysis and selection of methodologies for making 3D models of historical items, buildings and cultural heritage and how to preserve information such as temporary exhibitions and archaeological findings. Two of the methodologies analyzed correspond...... matrix has been used. Prototypes are made partly or fully and evaluated from the point of view of preservation of information by a museum....

  13. Comparative analysis of coupled creep-damage model implementations and application

    International Nuclear Information System (INIS)

    Bhandari, S.; Feral, X.; Bergheau, J.M.; Mottet, G.; Dupas, P.; Nicolas, L.

    1998-01-01

    Creep rupture of a reactor pressure vessel in a severe accident occurs after complex load and temperature histories leading to interactions between creep deformations, stress relaxation, material damage and plastic instability. The concepts of continuous damage introduced by Kachanov and Rabotnov make it possible to formulate models coupling elasto-visco-plasticity and damage. However, the integration of such models in a finite element code creates some difficulties related to the strong non-linearity of the constitutive equations. It was feared that different methods of implementation of such a model might lead to different results which, consequently, might limit the application and usefulness of such a model. The Commissariat a l'Energie Atomique (CEA), Electricite de France (EDF) and Framasoft (FRA) have worked out numerical solutions to implement such a model in the CASTEM 2000, ASTER and SYSTUS codes, respectively. A "benchmark" was set up, chosen on the basis of a cylinder studied in the "RUPTHER" programme. The aim of this paper is not to enter into the numerical details of the implementation of the model, but to present the results of the comparative study made using the three codes mentioned above on a case of engineering interest. The results of the coupled model are also compared to an uncoupled model to evaluate the differences one can obtain between a simple uncoupled model and a more sophisticated coupled model. The main conclusion drawn from this study is that the different numerical implementations used for the coupled damage-visco-plasticity model give quite consistent results. The numerical difficulties inherent in the integration of the strongly non-linear constitutive equations have been resolved using Runge-Kutta or mid-point rules. The usefulness of the coupled model comes from the fact that the uncoupled model leads to overly conservative results, at least in the example treated and in particular for the uncoupled analysis under the hypothesis of small …
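
    The Kachanov-Rabotnov constitutive equations referred to above are commonly written in the coupled form below (a standard rendering; the exponents and constants are material parameters, and the precise variant used in the benchmark may differ).

```latex
% Coupled creep-damage constitutive equations (Kachanov-Rabotnov form):
\dot{\varepsilon}^{c} \;=\; A \left( \frac{\sigma}{1-D} \right)^{n},
\qquad
\dot{D} \;=\; B\, \frac{\sigma^{\chi}}{(1-D)^{\phi}}, \qquad 0 \le D < 1,
% where D is the damage variable and rupture corresponds to D -> 1. The
% strong non-linearity of these rate equations is what motivates the
% Runge-Kutta or mid-point integration mentioned above.
```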

  14. A comparative study of turbulence models for dissolved air flotation flow analysis

    International Nuclear Information System (INIS)

    Park, Min A; Lee, Kyun Ho; Chung, Jae Dong; Seo, Seung Ho

    2015-01-01

    The dissolved air flotation (DAF) system is a water treatment process that removes contaminants by attaching microbubbles to them, causing them to float to the water surface. In the present study, the two-phase flow of an air-water mixture is simulated to investigate how the choice of turbulence model changes the internal flow analysis of DAF systems. Internal microbubble distribution, velocity, and computation time are compared between several turbulence models for a given DAF geometry and condition. As a result, it is observed that the standard κ-ε model, which has been frequently used in previous research, predicts somewhat different behavior than the other turbulence models.

  15. Comparative analysis of Bouc–Wen and Jiles–Atherton models under symmetric excitations

    Energy Technology Data Exchange (ETDEWEB)

    Laudani, Antonino, E-mail: alaudani@uniroma3.it; Fulginei, Francesco Riganti; Salvini, Alessandro

    2014-02-15

    The aim of the present paper is to validate the Bouc–Wen (BW) hysteresis model when it is applied to predict dynamic ferromagnetic loops. Indeed, although the Bouc–Wen model has attracted increasing interest in the last few years, it is usually adopted in mechanical and structural systems and very rarely for magnetic applications. To address this goal, the Bouc–Wen model is compared with the dynamic Jiles–Atherton model, which, by contrast, was conceived specifically for simulating magnetic hysteresis. The comparative analysis involves saturated and symmetric hysteresis loops in ferromagnetic materials. In addition, in order to identify the Bouc–Wen parameters, a very effective recent heuristic, called Metric-Topological and Evolutionary Optimization (MeTEO), has been utilized. It is based on a hybridization of three meta-heuristics: the Flock-of-Starlings Optimization, the Particle Swarm Optimization and the Bacterial Chemotaxis Algorithm. Thanks to the specific properties of these heuristics, MeTEO allows us to achieve effective identification of such kinds of models. Several hysteresis loops have been utilized for final validation tests, with the aim of investigating whether the BW model can follow the different hysteresis behaviors of both static (quasi-static) and dynamic cases.
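
    For reference, the Bouc–Wen hysteretic variable obeys the first-order differential equation below in its standard form (illustrative; A, β, γ and n are the shape parameters that the MeTEO identification has to recover).

```latex
% Standard Bouc-Wen evolution equation for the hysteretic variable z:
\dot{z}(t) \;=\; A\,\dot{x}(t)
\;-\; \beta\,\lvert \dot{x}(t) \rvert\, \lvert z(t) \rvert^{\,n-1} z(t)
\;-\; \gamma\,\dot{x}(t)\, \lvert z(t) \rvert^{\,n},
% where x(t) is the input (e.g. the applied field in a magnetic reading
% of the model) and A, beta, gamma, n control the loop shape.
```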

  16. A comparative analysis of reactor lower head debris cooling models employed in the existing severe accident analysis codes

    International Nuclear Information System (INIS)

    Ahn, K.I.; Kim, D.H.; Kim, S.B.; Kim, H.D.

    1998-08-01

    MELCOR and MAAP4 are the representative severe accident analysis codes which have been developed for the integral analysis of the phenomenological reactor lower head corium cooling behavior. The main objective of the present study is to identify the merits and disadvantages of each relevant model through a comparative analysis of the lower plenum corium cooling models employed in these two codes. The final results will be utilized for the development of the LILAC phenomenological models and for the continuous improvement of the existing MELCOR reactor lower head models, which are currently being performed at KAERI. For these purposes, nine reference models featuring the lower head corium behavior were first selected based on the existing experimental evidence and related models. The main features of the selected models were then critically analyzed, and finally the merits and disadvantages of each corresponding model were summarized from the viewpoint of realistic corium behavior and reasonable modeling. Based on this evidence, potential improvements for developing more advanced models are summarized and presented. The present study has focused on the qualitative comparison of the models, so a more detailed quantitative analysis is strongly required to reach final conclusions on their merits and disadvantages. In addition, in order to compensate for the limitations of the current models, further studies are required that closely relate detailed mechanistic models of molten material movement and phase-change heat transfer in porous media to the existing simple models. (author). 36 refs

  17. Comparative Proteomic Analysis of Two Uveitis Models in Lewis Rats.

    Science.gov (United States)

    Pepple, Kathryn L; Rotkis, Lauren; Wilson, Leslie; Sandt, Angela; Van Gelder, Russell N

    2015-12-01

    Inflammation generates changes in the protein constituents of the aqueous humor. Proteins that change in multiple models of uveitis may be good biomarkers of disease or targets for therapeutic intervention. The present study was conducted to identify differentially-expressed proteins in the inflamed aqueous humor. Two models of uveitis were induced in Lewis rats: experimental autoimmune uveitis (EAU) and primed mycobacterial uveitis (PMU). Differential gel electrophoresis was used to compare naïve and inflamed aqueous humor. Differentially-expressed proteins were separated by using 2-D gel electrophoresis and excised for identification with matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF). Expression of select proteins was verified by Western blot analysis in both the aqueous and vitreous. The inflamed aqueous from both models demonstrated an increase in total protein concentration when compared to naïve aqueous. Calprotectin, a heterodimer of S100A8 and S100A9, was increased in the aqueous in both PMU and EAU. In the vitreous, S100A8 and S100A9 were preferentially elevated in PMU. Apolipoprotein E was elevated in the aqueous of both uveitis models but was preferentially elevated in EAU. Beta-B2-crystallin levels decreased in the aqueous and vitreous of EAU but not PMU. The proinflammatory molecules S100A8 and S100A9 were elevated in both models of uveitis but may play a more significant role in PMU than EAU. The neuroprotective protein β-B2-crystallin was found to decline in EAU. Therapies to modulate these proteins in vivo may be good targets in the treatment of ocular inflammation.

  18. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Directory of Open Access Journals (Sweden)

    Villemereuil Pierre de

    2012-06-01

    Background: Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods: We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results: We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions: Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for …

  19. Genome-scale metabolic modeling of Mucor circinelloides and comparative analysis with other oleaginous species.

    Science.gov (United States)

    Vongsangnak, Wanwipa; Klanchui, Amornpan; Tawornsamretkit, Iyarest; Tatiyaborwornchai, Witthawin; Laoteng, Kobkul; Meechai, Asawin

    2016-06-01

    We present a novel genome-scale metabolic model iWV1213 of Mucor circinelloides, which is an oleaginous fungus for industrial applications. The model contains 1213 genes, 1413 metabolites and 1326 metabolic reactions across different compartments. We demonstrate that iWV1213 is able to accurately predict the growth rates of M. circinelloides on various nutrient sources and culture conditions using Flux Balance Analysis and Phenotypic Phase Plane analysis. Comparative analysis of three oleaginous genome-scale models, including M. circinelloides (iWV1213), Mortierella alpina (iCY1106) and Yarrowia lipolytica (iYL619_PCP) revealed that iWV1213 possesses a higher number of genes involved in carbohydrate, amino acid, and lipid metabolisms that might contribute to its versatility in nutrient utilization. Moreover, the identification of unique and common active reactions among the Zygomycetes oleaginous models using Flux Variability Analysis unveiled a set of gene/enzyme candidates as metabolic engineering targets for cellular improvement. Thus, iWV1213 offers a powerful metabolic engineering tool for multi-level omics analysis, enabling strain optimization as a cell factory platform of lipid-based production. Copyright © 2016 Elsevier B.V. All rights reserved.
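
    A sketch of how the flux balance and flux variability computations described above look with the COBRApy toolkit (an assumption about tooling, since the paper does not name its software; the SBML file name is hypothetical):

```python
# Hedged sketch: flux balance analysis (FBA) and flux variability
# analysis (FVA) with COBRApy; "iWV1213.xml" is a hypothetical file name.
import cobra
from cobra.flux_analysis import flux_variability_analysis

model = cobra.io.read_sbml_model("iWV1213.xml")
solution = model.optimize()                 # FBA: maximize the growth objective
print("predicted growth rate:", solution.objective_value)

# FVA gives per-reaction flux ranges, which can be used to separate
# always-active (common) reactions from model-specific ones.
fva = flux_variability_analysis(model, fraction_of_optimum=0.95)
print(fva.head())
```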

  1. Systems thinking, the Swiss Cheese Model and accident analysis: a comparative systemic analysis of the Grayrigg train derailment using the ATSB, AcciMap and STAMP models.

    Science.gov (United States)

    Underwood, Peter; Waterson, Patrick

    2014-07-01

    The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as per the contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using 4 years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remaining produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79% of the highway segments. In addition, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87% of the highway segments. Thus, constraining the parameters to be fixed across all highway segments would lead to inaccurate conclusions. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were observed to be significantly overestimated compared with those from the random-parameters negative binomial model.
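
    The fixed-parameters baseline in this comparison can be fit in a few lines, as sketched below with statsmodels on synthetic stand-in data (the covariates are hypothetical; the random-parameters variant needs simulated maximum likelihood, which this sketch does not attempt).

```python
# Hedged sketch: fixed-parameters negative binomial crash-frequency model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 158                                    # segments, as in the study
X = sm.add_constant(np.column_stack([
    rng.integers(0, 2, n),                 # hypothetical: noise barrier present
    rng.poisson(3, n),                     # hypothetical: vertical curves per mile
]))
y = rng.negative_binomial(5, 0.4, n)       # stand-in crash counts

nb = sm.NegativeBinomial(y, X).fit(disp=0)
print(nb.params)                           # coefficient estimates
```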

  3. MODELLING OF FINANCIAL EFFECTIVENESS AND COMPARATIVE ANALYSIS OF PUBLIC-PRIVATE PARTNERSHIP PROJECTS AND PUBLIC PROCUREMENT

    Directory of Open Access Journals (Sweden)

    Kuznetsov Aleksey Alekseevich

    2017-10-01

    The following methods were applied in the research: general scientific theoretical methods, such as idealization and formalization, and general scientific experimental methods, such as modelling of the objects under study. Results: the main result of this research is the development of a model for the comparative analysis of PPP projects and traditional public procurement. This enables us to estimate the “price-quality” ratio and, based on it, analyze the critical parameters of a project. Conclusions: despite the objective complexity of analysing the effectiveness of PPP projects both individually and especially when compared with traditional forms of public procurement, there is a relatively simple way to perform such an analysis by means of modelling and comparing the cash flows of all parties to the PPP process. The proposed model makes the criteria for PPP project effectiveness more intuitive to interpret while preserving all key parameters. The relative simplicity of the analysis, the calculation of limiting values, and the interpretation of the results make the model a useful practical tool both for public sector bodies and for potential private investors.
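
    The cash-flow comparison at the heart of such a model reduces to discounting each procurement route's flows to a common present value, as in the toy sketch below (all figures are hypothetical).

```python
# Hedged sketch: compare discounted public cash flows of a PPP project
# against traditional procurement (illustrative numbers only).
def npv(rate, flows):
    """Net present value of a list of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

discount = 0.06
ppp_flows = [-10.0] + [-2.5] * 20     # small upfront cost, availability payments
public_flows = [-40.0] + [-1.0] * 20  # full construction cost upfront

print("PPP NPV:        ", round(npv(discount, ppp_flows), 2))
print("Procurement NPV:", round(npv(discount, public_flows), 2))
```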

  4. Comparative analysis of Goodwin's business cycle models

    Science.gov (United States)

    Antonova, A. O.; Reznik, S.; Todorov, M. D.

    2016-10-01

    We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) are proposed by N. Dharmaraj and K. Vela Velupillai [6] for investigation of the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not of the sawtooth modes. If the order of the Taylor series expansion exceeds 5, then the approximate ODE becomes unstable independently of the time lag θ.
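
    The ODE models mentioned above arise from truncating the Taylor expansion of the delayed argument, i.e. replacing y(t + θ) by the series below and keeping terms up to 3rd, 4th or 5th order.

```latex
% Taylor expansion of the delayed argument used to derive the ODE models:
y(t+\theta) \;=\; y(t) + \theta\,\dot{y}(t) + \frac{\theta^{2}}{2!}\,\ddot{y}(t)
+ \frac{\theta^{3}}{3!}\,\dddot{y}(t) + \cdots
```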

  5. Comparative analysis of calculation models of railway subgrade

    Directory of Open Access Journals (Sweden)

    I.O. Sviatko

    2013-08-01

    Purpose. In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the nuances of its behavior under loads. It is very important to determine the parameters of shear resistance and the parameters determining the development of deep deformations in foundation soils when calculating the interaction between the soil subgrade and the upper track structure, and to search for generalized numerical modeling methods for embankment foundation soil behavior that include not only the analysis of the foundation stress state but also of its deformed state. Methodology. An analysis of existing modern and classical methods of numerical simulation of soil samples under static load was made. Findings. According to traditional methods of analysing the behavior of soil masses, limiting and qualitatively estimating subgrade deformations is possible only indirectly, through the estimation of stresses and comparison of the obtained values with boundary ones. Originality. A new computational model is proposed which applies not only the classical analysis of the soil subgrade stress state but also takes its deformed state into account. Practical value. The analysis showed that for an accurate analysis of the behavior of soil masses it is necessary to develop a generalized methodology for analyzing the rolling stock - railway subgrade interaction, which will use not only the classical approach of analyzing the soil subgrade stress state, but also take into account its deformed state.

  6. Comparative risk analysis

    International Nuclear Information System (INIS)

    Niehaus, F.

    1988-01-01

    In this paper, the risks of various energy systems are discussed, considering severe accident analysis, particularly probabilistic safety analysis and probabilistic safety criteria, and the applications of these criteria and analyses. Comparative risk analysis has demonstrated that the largest source of risk in every society is from daily small accidents. Nevertheless, we have to be more concerned about severe accidents. The comparative risk analysis of five different energy systems (coal, oil, gas, LWR and STEC (solar)) for the public has shown that the main sources of risk are coal and oil. The latest comparative risk study of various energy systems, conducted in the USA, revealed that the number of victims from coal is 42 times as many as the victims from nuclear. A study of severe accidents from hydro-dams in the United States has estimated the probability of dam failure at 1 in 10,000 years and the number of victims between 11,000 and 260,000. The average occupational risk from coal is one fatal accident per 1,000 workers/year. Probabilistic safety analysis is a method that can be used to assess nuclear energy risks, to analyze severe accidents, and to model all possible accident sequences and consequences. Fault tree analysis is used to determine the probability of failure of the different systems at each point of the accident sequences and to calculate the probability of risks. After calculating the probability of failure, criteria for judging the numerical results have to be developed, that is, quantitative and qualitative goals. To achieve these goals, several systems have been devised by various countries that are members of the AIEA. The probabilistic safety analysis method has been developed by establishing a computer program permitting different categories of safety-related information to be obtained. 19 tabs. (author)
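
    The fault tree arithmetic mentioned above is simple for independent basic events: AND gates multiply failure probabilities, OR gates multiply survival probabilities. A toy sketch with hypothetical event probabilities:

```python
# Hedged sketch: evaluating a tiny fault tree with independent basic events.
def and_gate(*p):               # all inputs must fail
    prob = 1.0
    for q in p:
        prob *= q
    return prob

def or_gate(*p):                # any single failure suffices
    survive = 1.0
    for q in p:
        survive *= 1.0 - q
    return 1.0 - survive

pump_a, pump_b, valve = 1e-3, 1e-3, 5e-4    # hypothetical probabilities
top_event = or_gate(and_gate(pump_a, pump_b), valve)
print(f"P(top event) = {top_event:.2e}")
```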

  7. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    Energy Technology Data Exchange (ETDEWEB)

    Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.; Choux, Martin; Hovland, Geir [Department of Engineering Sciences, University of Agder, PO Box 509, N-4898 Grimstad (Norway)

    2016-06-08

    Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology becomes not obvious and – in some cases – troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of the offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Such criteria as modeling effort and results accuracy are evaluated to assess which modeling strategy is the most suitable given its eventual application.

  8. Comparative analysis of methods and tools for open and closed fuel cycles modeling: MESSAGE and DESAE

    International Nuclear Information System (INIS)

    Andrianov, A.A.; Korovin, Yu.A.; Murogov, V.M.; Fedorova, E.V.; Fesenko, G.A.

    2006-01-01

    A comparative analysis of optimization and simulation methods, taking the MESSAGE and DESAE programs as examples, is carried out for modeling nuclear power prospects and advanced fuel cycles. Test calculations for open and two-component nuclear power and a closed fuel cycle are performed. An auxiliary simulation-dynamic model is developed to pinpoint the differences between the MESSAGE and DESAE modeling approaches. A description of the model is given.

  9. Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Charalambos Koukouvaos

    2014-01-01

    Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with this kind of problem, appropriate software tools have been developed, either as standalone products or as parts of general purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of this kind of software tool may be extremely helpful for the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely Simulink and LabVIEW, with regard to their application to photovoltaic systems.
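
    Tools of this kind typically build on an equivalent-circuit description of the PV cell; the widely used single-diode equation is shown below for orientation (the paper does not state which cell model its configurations use, so this is an assumption).

```latex
% Single-diode PV cell model (implicit in the cell current I):
I \;=\; I_{ph} \;-\; I_{0}\!\left[\exp\!\left(\frac{V + I R_{s}}{n V_{t}}\right) - 1\right]
\;-\; \frac{V + I R_{s}}{R_{sh}},
% where I_{ph} is the photocurrent, I_0 the diode saturation current,
% R_s and R_sh the series and shunt resistances, n the ideality factor,
% and V_t the thermal voltage.
```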

  10. Water Management in the Camargue Biosphere Reserve: Insights from Comparative Mental Models Analysis

    Directory of Open Access Journals (Sweden)

    Raphael Mathevet

    2011-03-01

    Mental models are the cognitive representations of the world that frame how people interact with the world. Learning implies changing these mental models. The successful management of complex social-ecological systems requires the coordination of actions to achieve shared goals. The coordination of actions requires a level of shared understanding of the system or situation; a shared or common mental model. We first describe the elicitation and analysis of mental models of different stakeholder groups associated with water management in the Camargue Biosphere Reserve in the Rhône River delta on the French Mediterranean coast. We use cultural consensus analysis to explore the degree to which different groups shared mental models of the whole system, of stakeholders, of resources, of processes, and of interactions among these last three. The analysis of the elicited data from this group structure enabled us to tentatively explore the evidence for learning in the non-statutory Water Board, which comprises important stakeholders related to the management of the central Rhône delta. The results indicate that learning does occur and results in richer mental models that are more likely to be shared among group members. However, the results also show lower than expected levels of agreement with these consensual mental models. Based on this result, we argue that a careful process and facilitation design can greatly enhance the functioning of the participatory process in the Water Board. We conclude that this methodology holds promise for eliciting and comparing mental models. It enriches group-model building and participatory approaches with a broader view of social learning and knowledge-sharing issues.

  11. BANK RATING. A COMPARATIVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Batrancea Ioan

    2015-07-01

    Banks in Romania offer their customers a wide range of products, all of which involve risk taking. Therefore, researchers seek to build rating models to help bank managers manage the risk of non-recovery of loans and interest. In the following we present the ratings of Raiffeisen Bank, BCR-ERSTE Bank and Transilvania Bank based on the CAAMPL and Stickney models, making a comparative analysis of the two rating models.

  12. Comparative Analysis of River Flow Modelling by Using Supervised Learning Technique

    Science.gov (United States)

    Ismail, Shuhaida; Mohamad Pandiahi, Siraj; Shabri, Ani; Mustapha, Aida

    2018-04-01

    The goal of this research is to investigate the efficiency of three supervised learning algorithms for forecasting the monthly river flow of the Indus River in Pakistan, spread over 550 square miles or 1800 square kilometres. The algorithms include the Least Square Support Vector Machine (LSSVM), Artificial Neural Network (ANN) and Wavelet Regression (WR). Monthly river flow forecasts were obtained from the three models individually, and the accuracy of all the models was then compared. The results were statistically analysed, and this analytical comparison showed that the LSSVM model is more precise in monthly river flow forecasting. LSSVM had the highest r, with a value of 0.934, compared to the other models. This indicates that LSSVM is more accurate and efficient than the ANN and WR models.
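
    For readers unfamiliar with LSSVM, its training reduces to one linear system rather than a quadratic program; the sketch below implements that standard dual formulation on synthetic lagged-flow features (a generic illustration, not the study's implementation; the γ and σ values are assumptions).

```python
# Hedged sketch: least-squares SVM regression via its dual KKT system
#   [ 0      1^T        ] [b    ]   [0]
#   [ 1   K + I/gamma   ] [alpha] = [y]
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.vstack([
        np.concatenate([[0.0], np.ones(n)]),
        np.hstack([np.ones((n, 1)), K + np.eye(n) / gamma]),
    ])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                  # bias b, dual weights alpha

rng = np.random.default_rng(0)
X = rng.random((60, 3))                     # hypothetical lagged-flow features
y = X @ np.array([0.5, 0.3, 0.2]) + 0.05
b, alpha = lssvm_fit(X, y)
pred = rbf_kernel(X, X) @ alpha + b
print("training r:", np.corrcoef(pred, y)[0, 1])
```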

  13. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon sets of assumptions that are incompletely formulated, or upon value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probable diminishing returns for large generic comparisons.

  14. Comparative analysis between Hec-RAS models and IBER in the hydraulic assessment of bridges

    OpenAIRE

    Rincón, Jean; Pérez, María; Delfín, Guillermo; Freitez, Carlos; Martínez, Fabiana

    2017-01-01

    This work aims to perform a comparative analysis between the Hec-RAS and IBER models in the hydraulic evaluation of rivers with structures such as bridges. The case of application was the La Guardia creek, located along the road that connects the cities of Barquisimeto and Quíbor, Venezuela. The first phase of the study consisted of comparing the models from the conceptual point of view and in terms of how each is operated. The second phase focused on the case study, and the comparison of ...

  15. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in a field study. These prediction models were ... /s, the expectancy factors for the extended PMV model and the extended SET model were from 0.770 to 0.974 and from 1.330 to 1.363, and the adaptive coefficients for the adaptive PMV model and the adaptive SET model were from 0.029 to 0.167 and from -0.213 to -0.195. In addition, the difference in thermal sensation between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using the modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human ...
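
    The two modifications quoted above enter the PMV in simple closed forms: the extended model rescales PMV by an expectancy factor e (in the Fanger-Toftum style), while the adaptive model applies an adaptive coefficient λ, as sketched below (standard renderings, not copied from the paper).

```latex
% Extended and adaptive PMV corrections (standard forms):
PMV_{e} \;=\; e \cdot PMV, \qquad 0 < e \le 1,
\qquad\quad
aPMV \;=\; \frac{PMV}{1 + \lambda \cdot PMV},
% where e is the expectancy factor and \lambda the adaptive coefficient,
% in the ranges quoted above.
```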

  16. Comparative analysis of elements and models of implementation in local-level spatial plans in Serbia

    Directory of Open Access Journals (Sweden)

    Stefanović Nebojša

    2017-01-01

    Implementation of local-level spatial plans is of paramount importance to the development of the local community. This paper aims to demonstrate the importance of and offer further directions for research into the implementation of spatial plans by presenting the results of a study on models of implementation. The paper describes the basic theoretical postulates of a model for implementing spatial plans. A comparative analysis of the application of elements and models of implementation of plans in practice was conducted based on the spatial plans for the local municipalities of Arilje, Lazarevac and Sremska Mitrovica. The analysis includes four models of implementation: the strategy and policy of spatial development; spatial protection; the implementation of planning solutions of a technical nature; and the implementation of rules of use, arrangement and construction of spaces. The main results of the analysis are presented and used to give recommendations for improving the elements and models of implementation. Final deliberations show that models of implementation are generally used in practice and combined in spatial plans. Based on the analysis of how models of implementation are applied in practice, a general conclusion concerning the complex character of the local level of planning is presented and elaborated. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR 36035: Spatial, Environmental, Energy and Social Aspects of Developing Settlements and Climate Change - Mutual Impacts and Grant no. III 47014: The Role and Implementation of the National Spatial Plan and Regional Development Documents in Renewal of Strategic Research, Thinking and Governance in Serbia]

  17. A Comparative Analysis of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modeling techniques. This article presents research on the differences between business process modeling techniques. For each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented for Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion gives a recommendation of business process modeling techniques that are easy to use and serve as the basis for evaluating further modelling techniques.

  18. Comparative analysis of Carnaval II Library

    International Nuclear Information System (INIS)

    Santos Bastos, W. dos

    1981-01-01

    The Carnaval II cross-section library from the French fast reactor calculation system is evaluated in two ways: 1) a comparative analysis of the calculation system for fast reactors at IEN (Instituto de Engenharia Nuclear), using a 'benchmark' model, is done; 2) a comparative analysis in relation to the French system itself is also done, using calculations performed with two versions of the French library: SETR-II and CARNAVAL IV, the first predating and the second postdating the Carnaval II version, the one used by IEN. (Author)

  19. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  20. Comparative study on DuPont analysis and DEA models for measuring stock performance using financial ratio

    Science.gov (United States)

    Arsad, Roslah; Shaari, Siti Nabilah Mohd; Isa, Zaidi

    2017-11-01

    Determining stock performance using financial ratios is challenging for many investors and researchers. Financial ratios can indicate the strengths and weaknesses of a company's stock performance. There are five categories of financial ratios, namely liquidity, efficiency, leverage, profitability and market ratios. It is important to interpret the ratios correctly for proper financial decision making. The purpose of this study is to compare the performance of listed companies in Bursa Malaysia using Data Envelopment Analysis (DEA) and DuPont analysis models. The study was conducted in 2015 and involves 116 consumer products companies listed in Bursa Malaysia. The estimation method of Data Envelopment Analysis computes the efficiency scores and ranks the companies accordingly. Alirezaee and Afsharian's method of analysis, based on the Charnes, Cooper and Rhodes (CCR) model with Constant Returns to Scale (CRS), is employed. DuPont analysis is a traditional tool for measuring the operating performance of companies. In this study, DuPont analysis is used to evaluate three different aspects: profitability, efficiency of asset utilization and financial leverage. Return on Equity (ROE) is also calculated in the DuPont analysis. This study finds that the two analysis models provide different rankings of the selected samples. Hypothesis testing based on Pearson's correlation indicates that there is no correlation between the rankings produced by DEA and DuPont analysis. The DEA ranking model proposed by Alirezaee and Afsharian is unstable; the method cannot provide a complete ranking because the values of the Balance Index are equal and zero.
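
    The three aspects the DuPont decomposition evaluates are exactly the factors of the standard ROE identity below.

```latex
% DuPont identity:
ROE \;=\;
\underbrace{\frac{\text{Net income}}{\text{Sales}}}_{\text{profitability}}
\times
\underbrace{\frac{\text{Sales}}{\text{Total assets}}}_{\text{asset utilization}}
\times
\underbrace{\frac{\text{Total assets}}{\text{Equity}}}_{\text{financial leverage}}
```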

  1. Comparative Analysis of Bulge Deformation between 2D and 3D Finite Element Models

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2014-02-01

    Bulge deformation of the slab is one of the main factors that affect slab quality in continuous casting. This paper describes an investigation into bulge deformation using ABAQUS to model the solidification process. A three-dimensional finite element analysis model of the slab solidification process was first established, because bulge deformation is closely related to slab temperature distributions. Based on the slab temperature distributions, a three-dimensional thermomechanical coupling model including the slab, the rollers, and the dynamic contact between them was also constructed and applied to a case study. The thermomechanical coupling model produces outputs such as the patterns of bulge deformation. Moreover, the three-dimensional model was compared with a two-dimensional model to discuss the differences between the two models in calculating bulge deformation. The results show that a platform zone exists on the wide side of the slab, and the bulge deformation is affected strongly by the width-to-thickness ratio. The indications are also that the difference in bulge deformation between the two modeling approaches is small when the width-to-thickness ratio is larger than six.

  2. Is it Worth Comparing Different Bankruptcy Models?

    Directory of Open Access Journals (Sweden)

    Miroslava Dolejšová

    2015-01-01

    The aim of this paper is to compare the performance of small enterprises in the Zlín and Olomouc Regions. These enterprises were assessed using the Altman Z-Score model, the IN05 model, the Zmijewski model and the Springate model. The batch selected for this analysis included 16 enterprises from the Zlín Region and 16 enterprises from the Olomouc Region. The financial statements subjected to the analysis are from 2006 and 2010. The statistical data analysis was performed using the one-sample z-test for proportions and the paired t-test. The outcomes of the evaluation run using the Altman Z-Score model, the IN05 model and the Springate model revealed the enterprises to be financially sound, but the Zmijewski model identified them as being insolvent. The one-sample z-test for proportions confirmed that at least 80% of these enterprises show a sound financial condition. A comparison of all models emphasized the substantial difference produced by the Zmijewski model. The paired t-test showed that the financial performance of small enterprises had remained the same during the years involved. It is recommended that small enterprises assess their financial performance using two different bankruptcy models. They may wish to combine the Zmijewski model with any other bankruptcy model (the Altman Z-Score model, the IN05 model or the Springate model) to ensure a proper method of analysis.
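
    For concreteness, the sketch below evaluates the original (1968, public-company) Altman Z-Score on hypothetical ratios; the paper does not say which Z-Score variant it applies to these small enterprises, so the weights and cut-offs here are the classic ones, stated as an assumption.

```python
# Hedged sketch: original Altman Z-Score with the classic cut-offs.
def altman_z(x1, x2, x3, x4, x5):
    """x1..x5: WC/TA, RE/TA, EBIT/TA, market equity/TL, sales/TA."""
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(0.12, 0.35, 0.08, 1.10, 1.60)     # hypothetical ratios
zone = "safe" if z > 2.99 else ("grey" if z > 1.81 else "distress")
print(f"Z = {z:.2f} -> {zone} zone")
```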

  3. Comparative study of computational model for pipe whip analysis

    International Nuclear Information System (INIS)

    Koh, Sugoong; Lee, Young-Shin

    1993-01-01

    Many types of pipe whip restraints are installed to protect structural components from the anticipated pipe whip phenomena of high energy lines in nuclear power plants. It is necessary to investigate these phenomena accurately in order to evaluate the acceptability of the pipe whip restraint design. Various research programs have been conducted in many countries to develop analytical methods and to verify their validity. In this study, various calculational models in the ANSYS code and in the ADLPIPE code, both general purpose finite element computer programs, were used to simulate postulated pipe whips to obtain impact loads, and the calculated results were compared with the experimental results from the sample pipe whip test for U-shaped pipe whip restraints. Some calculational models, having a spring element between the pipe whip restraint and the pipe line, give reasonably good transient responses of the restraint forces compared with the experimental results, and could be useful in evaluating the acceptability of the pipe whip restraint design. (author)

  4. A comparative investigation of 18F kinetics in receptors: a compartment model analysis

    International Nuclear Information System (INIS)

    Tiwari, Anjani K.; Swatantra; Kaushik, A.; Mishra, A.K.

    2010-01-01

    Full text: Some authors have reported that 18F kinetics might be useful for the evaluation of neuroreceptors. We hypothesized that 18F kinetics may convey information about neuronal damage, and that each rate constant might have a statistically significant correlation with WO function. The purpose of this study was to investigate 18F kinetics through a compartment model analysis. Each rate constant from the compartment analysis was compared with WO, T1/2, and the (H/M) ratio in the early and delayed phases. Different animal models were studied. After injection, dynamic planar imaging was performed on a dual-headed digital gamma camera system for 30 minutes. An ROI was drawn manually to assess the global kinetics of 18F. Using the time-activity curve (TAC) of the ROI as the response tissue function and the TAC of the aorta as the input function, we analysed 18F pharmacokinetics through a 2-compartment model. We defined k1 as the influx rate constant, k2 as the outflux rate constant and k3 as the specific uptake rate constant, and we calculated k1/k2 as the distribution volume (Vd), k1k3/k2 as the specific uptake (SU), and k1k3/(k2+k3) as the clearance. For non-competitive affinity studies with PET, two modelling parameters, distribution volume (DV) and Bmax/Kd, were also calculated. Results: Statistically significant correlations were seen between k2 and T1/2, and the influx of 18F at injection was related to its uptake at 30 minutes and 2 hours after injection. Furthermore, some indexes had statistically significant correlations with DV and Bmax. These compartment model approaches may be useful for related studies
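
    The derived quantities defined in this abstract follow directly from the three rate constants. A minimal sketch, with hypothetical rate constants rather than values from the study:

```python
# Sketch of the derived quantities defined in the abstract; the rate
# constants below are hypothetical placeholders, not values from the study.

def derived_parameters(k1, k2, k3):
    """Compute distribution volume, specific uptake and clearance
    from 2-compartment rate constants, as defined in the abstract."""
    vd = k1 / k2                      # distribution volume, Vd = k1/k2
    su = k1 * k3 / k2                 # specific uptake, SU = k1*k3/k2
    clearance = k1 * k3 / (k2 + k3)   # clearance = k1*k3/(k2+k3)
    return vd, su, clearance

vd, su, cl = derived_parameters(k1=0.12, k2=0.05, k3=0.03)
print(f"Vd = {vd:.2f}, SU = {su:.3f}, clearance = {cl:.3f}")
```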

  5. Comparative analysis of turbulence models for flow simulation around a vertical axis wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Roy, S.; Saha, U.K. [Indian Institute of Technology Guwahati, Dept. of Mechanical Engineering, Guwahati (India)

    2012-07-01

    An unsteady computational investigation of the static torque characteristics of a drag-based vertical axis wind turbine (VAWT) has been carried out using the finite volume based computational fluid dynamics (CFD) software package Fluent 6.3. A comparative study among various turbulence models was conducted in order to predict the flow over the turbine at static conditions, and the results were validated against the available experimental results. CFD simulations were carried out at turbine angular positions between 0° and 360° in steps of 15°. The results show that, due to high static pressure on the returning blade of the turbine, the net static torque is negative at angular positions of 105°-150°. The realizable k-ε turbulence model showed a better simulation capability than the other turbulence models for the analysis of the static torque characteristics of the drag-based VAWT. (Author)

  6. Comparative empirical analysis of flow-weighted transit route networks in R-space and evolution modeling

    Science.gov (United States)

    Huang, Ailing; Zang, Guangzhi; He, Zhengbing; Guan, Wei

    2017-05-01

    Urban public transit is a typical mixed complex network with dynamic flow, and its evolution should be a process coupling topological structure with flow dynamics, which has received little attention. This paper uses the R-space representation to make a comparative empirical analysis of Beijing’s flow-weighted transit route network (TRN), and we found that Beijing’s TRNs in both 2011 and 2015 exhibit scale-free properties. We therefore propose a flow-driven evolution model to simulate the development of TRNs, taking into account the passengers’ dynamic behaviors triggered by topological change. The model treats the evolution of a TRN as an iterative process: at each time step, a certain number of new routes are generated, driven by travel demand, which leads to a dynamic evolution of the new routes’ flow and triggers perturbations in nearby routes that in turn influence the next round of route openings. We present a theoretical analysis based on mean-field theory, as well as a numerical simulation of this model. The results obtained agree well with our empirical analysis, indicating that the model can reproduce TRN evolution with scale-free distributions of node strength and degree. The purpose of this paper is to illustrate the global evolutionary mechanism of transit networks, which can be used to develop planning and design strategies for real TRNs.

  7. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km²) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluates and compares the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluates stability conditions as a function of time and depth, and it would be the preferred method of the three for evaluating landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and the spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is preferred over SINMAP for evaluating slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane, and consequently root strength and tree surcharge had a negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
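
    Although the abstract does not give each model's exact formulation, SINMAP and LISA are built around the standard infinite-slope limit-equilibrium calculation. The sketch below implements that textbook formula; all parameter values are hypothetical illustrations chosen to echo the abstract (cohesion under 2 kPa, water table near the surface), not data from the Madison County study.

```python
import math

# Sketch of the standard infinite-slope factor-of-safety formula that
# underlies models like SINMAP; all values are hypothetical.

def factor_of_safety(c, phi_deg, gamma, z, theta_deg, u):
    """Infinite-slope limit equilibrium.
    c: soil cohesion (Pa), phi_deg: friction angle (deg),
    gamma: unit weight of soil (N/m^3), z: failure depth (m),
    theta_deg: slope angle (deg), u: pore-water pressure (Pa)."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(theta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(theta) * math.cos(theta)
    return resisting / driving

# Colluvium with 2 kPa cohesion on a 30 deg slope, failure depth 1 m,
# water table near the surface (pore pressure from 0.9 m of water):
fs = factor_of_safety(c=2_000, phi_deg=35, gamma=19_000, z=1.0,
                      theta_deg=30, u=9_810 * 0.9)
print(f"FS = {fs:.2f}  (FS < 1 indicates instability)")
```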

  8. The importance of human cognitive models in the safety analysis report of nuclear power plants - a comparative review

    International Nuclear Information System (INIS)

    Alvarenga, Marco A.B.; Araujo Goes, Alexandre G. de

    1997-01-01

    Chapter 18 of the Brazilian NPPs' Safety Analysis Report (SAR) deals with Human Factors Engineering (HFE). The evaluation of this chapter is distributed among ten topics. One of them, Human Reliability Analysis (HRA), is the central subject of the whole analysis, providing information for the other topics, for example on high-risk operational critical sequences. The HRA methods used in the past took the approach of modeling the human being as a component (hardware), based on a bivalent failure-or-success logic. In the last ten years, several human cognitive models have been developed for use in the nuclear field as well as in conventional industry, mainly in military aviation. In this paper, we describe their main features and compare some of the models with each other, with the main purpose of determining the minimal characteristics these cognitive models must have to be acceptable for NPP licensing, mainly for use in the evaluation of the HRAs in NPP SARs. (author). 10 refs

  9. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    Science.gov (United States)

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships. Jie Liu (1,2), Richard Judson (1), Matthew T. Martin (1), Huixiao Hong (3), Imran Shah (1). (1) National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  10. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
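
    A minimal Monte Carlo sketch (not the authors' code) of the identification problem described above: two groups share the same latent effect, but their LPM slopes differ because the predictor's distributional shape, and hence the outcome truncation, differs between them.

```python
import numpy as np

# Two groups with identical true latent effects but different predictor
# spreads yield different linear probability model (LPM) slopes.
# Purely illustrative; all numbers are hypothetical.

rng = np.random.default_rng(0)

def lpm_slope(x, y):
    """OLS slope of a binary outcome on a single predictor."""
    x = x - x.mean()
    return (x * (y - y.mean())).sum() / (x * x).sum()

n, beta = 100_000, 1.0          # same true latent effect in both groups
for label, x_sd in [("group A", 0.5), ("group B", 2.0)]:
    x = rng.normal(0, x_sd, n)              # groups differ only in x spread
    y_latent = beta * x + rng.logistic(0, 1, n)
    y = (y_latent > 0).astype(float)        # binary outcome truncates y*
    print(label, f"LPM slope = {lpm_slope(x, y):.3f}")
```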

  11. Multi-criteria comparative evaluation of spallation reaction models

    Science.gov (United States)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to the comparative evaluation of the predictive ability of spallation reaction models, based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE), and the results of such a comparison for 17 spallation reaction models for the interaction of high-energy protons with natPb.
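
    TOPSIS, one of the MCDA methods named above, illustrates the general recipe: normalize the decision matrix, weight it, and rank alternatives by closeness to an ideal point. The sketch below uses a hypothetical decision matrix (four models scored on three criteria) and hypothetical weights, not the paper's 17-model data.

```python
import numpy as np

# Minimal TOPSIS sketch; decision matrix and weights are illustrative.

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] True if larger is better."""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalisation
    v = m * weights                                  # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
    d_minus = np.linalg.norm(v - anti, axis=1)       # distance to anti-ideal
    return d_minus / (d_plus + d_minus)              # closeness in [0, 1]

scores = topsis(
    matrix=np.array([[0.8, 0.6, 0.9],
                     [0.7, 0.9, 0.6],
                     [0.9, 0.5, 0.7],
                     [0.6, 0.8, 0.8]]),
    weights=np.array([0.5, 0.3, 0.2]),
    benefit=np.array([True, True, True]),
)
print("ranking (best first):", np.argsort(scores)[::-1])
```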

  12. Development of multivariate NTCP models for radiation-induced hypothyroidism: a comparative analysis

    International Nuclear Information System (INIS)

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2012-01-01

    Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with already existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin’s lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters was collected, and their correlation to RHT was analyzed by Spearman’s rank correlation coefficient (Rs). A multivariate logistic regression method using resampling (bootstrapping) was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. If the thyroid volume exceeding X Gy is expressed as a percentage (Vx(%)), a two-variable NTCP model including V30(%) and gender proved to be the optimal predictive model for RHT (Rs = 0.615, p < 0.001; AUC = 0.87). Conversely, if the absolute thyroid volume exceeding X Gy (Vx(cc)) is analyzed, an NTCP model based on 3 variables, including V30(cc), thyroid gland volume and gender, was selected as the most predictive model (Rs = 0.630, p < 0.001; AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volumes (AUC = 0.914, 95% CI 0.760-0.984). A comparable performance was found between our model and that proposed in the literature based on thyroid gland mean dose and volume (p = 0.264). The absolute volume of thyroid gland exceeding 30 Gy, in combination with thyroid gland volume and gender, provides an NTCP model for RHT with improved prediction capability not only within our patient population but also in an
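
    The selected models have the standard multivariate logistic NTCP form. A minimal sketch of the three-variable variant follows; since the fitted regression coefficients are not reproduced here, the values below are hypothetical placeholders.

```python
import math

# Sketch of a multivariate logistic NTCP model of the form selected in
# the paper (V30(cc), thyroid volume, gender); the coefficients are
# hypothetical placeholders, not the fitted values.

def ntcp_logistic(v30_cc, thyroid_volume_cc, female,
                  b0=-1.0, b_v30=0.06, b_vol=-0.15, b_sex=1.2):
    """Three-variable NTCP: absolute thyroid volume receiving >30 Gy,
    thyroid gland volume, and gender (female=1)."""
    s = b0 + b_v30 * v30_cc + b_vol * thyroid_volume_cc + b_sex * female
    return 1.0 / (1.0 + math.exp(-s))

p = ntcp_logistic(v30_cc=12.0, thyroid_volume_cc=18.0, female=1)
print(f"Predicted probability of radiation-induced hypothyroidism: {p:.2f}")
```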

  13. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    OpenAIRE

    Kirti AREKAR; Rinku JAIN

    2017-01-01

    The stock market volatility depends on three major features, namely complete volatility, volatility fluctuations, and volatility attention, which are calculated by statistical techniques. This paper presents a comparative analysis of market volatility for two major indices, the banking and IT sectors of the Bombay Stock Exchange (BSE), using the average decline model. The average decline process in volatility is applied after very high and low stock returns. The results of this study show a significant decline in...

  14. Modelling and Comparative Performance Analysis of a Time-Reversed UWB System

    Directory of Open Access Journals (Sweden)

    Popovski K

    2007-01-01

    Full Text Available The effects of multipath propagation lead to a significant decrease in system performance in most of the proposed ultra-wideband communication systems. A time-reversed system utilises the multipath channel impulse response to decrease receiver complexity, through a prefiltering at the transmitter. This paper discusses the modelling and comparative performance of a UWB system utilising time-reversed communications. System equations are presented, together with a semianalytical formulation on the level of intersymbol interference and multiuser interference. The standardised IEEE 802.15.3a channel model is applied, and the estimated error performance is compared through simulation with the performance of both time-hopped time-reversed and RAKE-based UWB systems.

  15. Comparative analysis of detection methods for congenital cytomegalovirus infection in a Guinea pig model.

    Science.gov (United States)

    Park, Albert H; Mann, David; Error, Marc E; Miller, Matthew; Firpo, Matthew A; Wang, Yong; Alder, Stephen C; Schleiss, Mark R

    2013-01-01

    To assess the validity of the guinea pig as a model for congenital cytomegalovirus (CMV) infection by comparing the effectiveness of detecting the virus by real-time polymerase chain reaction (PCR) in blood, urine, and saliva. Case-control study. Academic research. Eleven pregnant Hartley guinea pigs. Blood, urine, and saliva samples were collected from guinea pig pups delivered from pregnant dams inoculated with guinea pig CMV. These samples were then evaluated for the presence of guinea pig CMV by real-time PCR, assuming 100% transmission. Thirty-one pups delivered from 9 inoculated pregnant dams and 8 uninfected control pups underwent testing for guinea pig CMV and for auditory brainstem response hearing loss. Repeated-measures analysis of variance demonstrated no statistically significant reduction in weight for the infected pups compared with the noninfected control pups. Six infected pups demonstrated auditory brainstem response hearing loss. The sensitivity and specificity of the real-time PCR assay on saliva samples were 74.2% and 100.0%, respectively. The sensitivity of the real-time PCR on blood and urine samples was significantly lower than that on saliva samples. Real-time PCR assays of blood, urine, and saliva revealed that saliva samples show high sensitivity and specificity for detecting congenital CMV infection in guinea pigs. This finding is consistent with recent screening studies in human newborns. The guinea pig may be a good animal model in which to compare different diagnostic assays for congenital CMV infection.
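
    The reported diagnostic indices follow from a simple confusion-matrix calculation. The sketch below uses hypothetical counts chosen to be consistent with the abstract (31 infected pups, 8 controls, saliva sensitivity 74.2% and specificity 100%); the study's actual per-sample counts are not given here.

```python
# Sketch of how the reported diagnostic indices are computed from a
# confusion matrix; counts are hypothetical but consistent with the
# abstract (23/31 = 74.2% sensitivity).

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # infected pups correctly detected
    specificity = tn / (tn + fp)   # controls correctly negative
    return sensitivity, specificity

# 31 infected pups and 8 uninfected controls, as in the study design:
sens, spec = sens_spec(tp=23, fn=8, tn=8, fp=0)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```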

  16. Comparative Analysis of Soft Computing Models in Prediction of Bending Rigidity of Cotton Woven Fabrics

    Science.gov (United States)

    Guruprasad, R.; Behera, B. K.

    2015-10-01

    Quantitative prediction of fabric mechanical properties is an essential requirement for the design engineering of textile and apparel products. In this work, the possibility of predicting the bending rigidity of cotton woven fabrics has been explored with the application of an Artificial Neural Network (ANN) and two hybrid methodologies, namely neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back-propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which a genetic algorithm was first used to optimize the number of neurons and the connection weights of the neural network; the genetic-algorithm-optimized network structure was then further trained using the back-propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis is reported. The results show that the predictions of the neuro-genetic and ANFIS models were better than those of the back-propagation neural network model.

  17. Underwater floating robot-fish: a comparative analysis of the results of mathematical modelling and full-scale tests of the prototype

    Directory of Open Access Journals (Sweden)

    Jatsun Sergey

    2017-01-01

    Full Text Available The article presents a comparative analysis of the results of computer mathematical modelling of the motion of an underwater robot-fish, implemented using the MATLAB/Simulink package, and full-scale tests of an experimental model developed in the laboratory of mechatronics and robotics of the Southwest State University.

  18. Comparative analysis of the planar capacitor and IDT piezoelectric thin-film micro-actuator models

    International Nuclear Information System (INIS)

    Myers, Oliver J; Anjanappa, M; Freidhoff, Carl B

    2011-01-01

    A comparison of the analysis of similarly developed microactuators is presented. Accurate modeling and simulation techniques are vital for piezoelectrically actuated microactuators. By coupling analytical and numerical modeling techniques with variational design parameters, accurate performance predictions can be realized. Axisymmetric two-dimensional and three-dimensional static deflection and harmonic models of a planar capacitor actuator are presented. Planar capacitor samples were modeled as unimorph diaphragms with sandwiched piezoelectric material. The harmonic frequencies were calculated numerically and compared well with predicted values and deformations. The finite element modeling reflects the impact of the d31 piezoelectric constant. Two-dimensional axisymmetric models of circularly interdigitated piezoelectrically actuated membranes are also presented. The models include the piezoelectric material and properties, the membrane materials and properties, and various design considerations; they also include the electromechanical coupling for piezoelectric actuation and highlight a novel approach to take advantage of the higher d33 piezoelectric coupling coefficient. Performance is evaluated for varying parameters such as electrode pitch, electrode width, and piezoelectric material thickness. The models also showed that several of the design parameters were naturally coupled. The static numerical models correlate well with the maximum static deflection of the experimental devices. Finally, this paper deals with the development of numerical harmonic models of piezoelectrically actuated planar capacitor and interdigitated diaphragms. The models were able to closely predict the first two harmonics, conservatively predict the third through sixth harmonics, and predict the estimated values of center deflection using plate theory. Harmonic frequency and deflection simulations need further correlation by conducting extensive iterative

  19. Comparative Modelling of the Spectra of Cool Giants

    Science.gov (United States)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.

    2012-01-01

    Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.

  20. Comparative and Evolutionary Analysis of Grass Pollen Allergens Using Brachypodium distachyon as a Model System.

    Directory of Open Access Journals (Sweden)

    Akanksha Sharma

    Full Text Available Comparative genomics has facilitated the mining of biological information from a genome sequence, through the detection of similarities and differences with the genomes of closely or more distantly related species. By using such comparative approaches, knowledge can be transferred from model to non-model organisms, and insights can be gained into the structural and evolutionary patterns of specific genes. In the absence of sequenced genomes for allergenic grasses, this study aimed at understanding the structure, organisation and expression profiles of grass pollen allergens using genomic data from Brachypodium distachyon, as it is phylogenetically related to the allergenic grasses. Combining genomic data with an anther RNA-Seq dataset revealed 24 pollen allergen genes belonging to eight allergen groups, mapping on the five chromosomes of B. distachyon. High levels of anther-specific expression were observed for the 24 identified putative allergen-encoding genes in Brachypodium. The genomic evidence suggests that the gene encoding the group 5 allergen, the most potent trigger of hay fever and allergic asthma, originated as a pollen-specific orphan gene in a common grass ancestor of the Brachypodium and Triticeae clades. Gene structure analysis showed that the putative allergen-encoding genes in Brachypodium either lack introns or contain a reduced number of them. Promoter analysis of the identified Brachypodium genes revealed the presence of specific cis-regulatory sequences likely responsible for high anther/pollen-specific expression. With the identification of putative allergen-encoding genes in Brachypodium, this study has also described some important plant gene families (e.g. the expansin superfamily, the EF-hand family, profilins, etc.) for the first time in the model plant Brachypodium. Altogether, the present study provides new insights into the structural characterization and evolution of pollen allergens and will further serve as a base for their

  1. Comparative Analysis of Sectoral Innovation System and Diamond Model: The Case of Telecom Sector of Iran

    Directory of Open Access Journals (Sweden)

    Mohammad Hosein Rezazadeh Mehrizi

    2008-08-01

    Full Text Available Porter’s model of the competitive advantage of nations (known as the Diamond Model) has been widely used, and widely criticized as well, over the recent two decades. On the other hand, non-mainstream economists have tried to propose new frameworks for industrial analysis, among which the Sectoral Innovation System (SIS) is one of the most influential. After proposing an assessment framework, we use it to compare the SIS and Porter models and apply them to the case of the second mobile operator in Iran. Briefly, the SIS model sheds light on the innovation process and competence building and focuses on system failures, which are of special importance in the context of developing countries, while the Diamond Model has the advantage of bringing the production process and the influential role of government into focus. Each model, however, has its own shortcomings for analyzing industrial development in developing countries, and both fail to pay enough attention to foreign relations and international linkages.

  2. A comparative analysis of several vehicle emission models for road freight transportation

    NARCIS (Netherlands)

    Demir, E.; Bektas, T.; Laporte, G.

    2011-01-01

    Reducing greenhouse gas emissions in freight transportation requires using appropriate emission models in the planning process. This paper reviews and numerically compares several available freight transportation vehicle emission models, and also considers their outputs in relation to field studies.

  3. Jackson System Development, Entity-relationship Analysis and Data Flow Models: a comparative study

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1994-01-01

    This report compares JSD with ER modeling and data flow modeling. It is shown that JSD can be combined with ER modeling and that the result is a richer method than either of the two. The resulting method can serve as a basis for a practical object-oriented modeling method and has some resemblance to

  4. Comparative analysis of stress in a new proposal of dental implants.

    Science.gov (United States)

    Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Macedo, Ana Paula; Shimano, Antonio Carlos; Dos Reis, Andréa Cândido

    2017-08-01

    The purpose of this study was to compare, through photoelastic analysis, the stress distribution around conventional and modified external hexagon (EH) and morse taper (MT) dental implant connections. Four photoelastic models were prepared (n=1): Model 1, a conventional EH cylindrical implant (Ø 4.0mm×11mm - Neodent®); Model 2, a modified EH cylindrical implant; Model 3, a conventional MT conical implant (Ø 4.3mm×10mm - Neodent®); and Model 4, a modified MT conical implant. Axial and oblique (30° tilt) loads of 100 and 150N were applied to the devices coupled to the implants. A plane transmission polariscope was used in the analysis of the fringes, and each position of interest was recorded by a digital camera. The Tardy method was used to quantify the fringe order (n), from which the maximum shear stress (τ) value at each selected point was calculated. The results showed a lower stress concentration in the modified cylindrical implant (EH) compared with the conventional model under the 150N axial and 100N oblique loads. Lower stress was also observed for the modified conical (MT) implant under the 100 and 150N oblique loads, which was not observed for the conventional implant model. The comparative analysis of the models showed that the new design proposal generates good stress distribution, especially in the cervical third, suggesting the preservation of bone tissue in the bone crest region. Copyright © 2017 Elsevier B.V. All rights reserved.
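
    For context, the fringe order obtained by Tardy compensation is typically converted to stress through the standard stress-optic relation. A minimal sketch, with a hypothetical material fringe value and model thickness rather than values from this study:

```python
# Sketch of the standard stress-optic relation used to turn a
# photoelastic fringe order into a maximum shear stress; the material
# fringe value and thickness below are hypothetical.

def max_shear_stress(fringe_order, fringe_value, thickness):
    """tau_max = N * f_sigma / (2 t), where N is the (possibly
    fractional, Tardy-compensated) fringe order, f_sigma the material
    fringe value (N/m per fringe) and t the model thickness (m)."""
    return fringe_order * fringe_value / (2.0 * thickness)

tau = max_shear_stress(fringe_order=1.6, fringe_value=7_000, thickness=0.01)
print(f"tau_max = {tau / 1e6:.2f} MPa")
```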

  5. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    Directory of Open Access Journals (Sweden)

    Kirti AREKAR

    2017-12-01

    Full Text Available The stock market volatility depends on three major features, namely complete volatility, volatility fluctuations, and volatility attention, which are calculated by statistical techniques. This paper presents a comparative analysis of market volatility for two major indices, the banking and IT sectors of the Bombay Stock Exchange (BSE), using the average decline model. The average decline process in volatility is applied after very high and low stock returns. The results of this study show a significant decline in volatility fluctuations, attention, and level between the periods before and after particularly high stock returns.

  6. Transitional processes: Territorial organization of authorities and the future constitution of Serbia comparative analysis of five constitutional models

    Directory of Open Access Journals (Sweden)

    Despotović Ljubiša M.

    2004-01-01

    Full Text Available In this paper the authors give a comparative analysis of the territorial organization of authorities in five constitutional models for Serbia. The paper consists of the following chapters: Introduction; Outline of the Constitution of the Kingdom of Serbia; Basic Principles of the New Constitution of Serbia - DSS; Outline of the Constitution of the Republic of Serbia - DS; Constitutional Solutions for Serbia - BCLJP; Project of the Constitution of the Republic of Serbia - Forum iuris; Conclusion. The analysis of the territorial organization of authorities is placed in the context of the processes of transition and of achieving the important principles of civil society and civil autonomies.

  7. Comparative analysis of Klafki and Heimann's didactic models

    Directory of Open Access Journals (Sweden)

    Bojović Žana P.

    2016-01-01

    Full Text Available A comparative analysis of Klafki's didactic thinking, which is based on an analysis of different kinds of theories on the nature of education, and Heimann's didactics, which is based on the theory of teaching and learning, shows that both deal with teaching in its entirety. Both authors emphasize the role of contents, methods, procedures and resources for material and formal education, and both use anthropological and social reality as their starting point. According to Klafki, resources, procedures, and methods stand in a relation of dependency, where it is important to know what should be learnt and why, whereas Heimann sees the same elements as interdependent. Each of the didactic conceptions, from its own point of view, defines the position of goals and tasks in education as well as how to achieve them. The determination and formulation of objectives is a complex, responsible, and very difficult task, and a goal must be clearly defined, because from it emanate the guidelines for the preparation and planning of didactic-methodological educational programs. The selection of content in didactic-methodological scenarios of education and learning is only possible if the knowledge, skills and abilities that a student needs to develop are explicitly indicated. The question of educational goals is the main problem of didactics, for only a clearly defined objective implies the selection of appropriate methods and means for its achievement, and it should be a permanent task of current didactic conceptions, now and in the future.

  8. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of such a technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the degree of sequence identity of the target protein to the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, model accuracy was measured by the root mean square deviation of the Cα atoms of the target-template structures. Surprisingly, the results show that the sequence identity of the target protein to the template is not a good descriptor for predicting the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower sequence identity between target and template proteins led to more accurate 3-D structure models. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646
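
    The quality measure used in this study, the root mean square deviation of Cα atoms, is only meaningful after the target and template structures are optimally superposed. A minimal sketch using the Kabsch algorithm, on random placeholder coordinates:

```python
import numpy as np

# Sketch of the Calpha RMSD used as the model-quality measure above,
# with a Kabsch superposition so the two structures are optimally
# aligned first. Coordinates here are random placeholders.

def kabsch_rmsd(p, q):
    """RMSD between two (n, 3) coordinate sets after optimal
    rotation/translation of p onto q (Kabsch algorithm)."""
    p = p - p.mean(axis=0)
    q = q - q.mean(axis=0)
    u, _, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflection
    r = u @ np.diag([1.0, 1.0, d]) @ vt       # transpose of the rotation
    diff = p @ r - q
    return np.sqrt((diff ** 2).sum() / len(p))

rng = np.random.default_rng(1)
target_ca = rng.normal(size=(100, 3))               # hypothetical Calpha trace
model_ca = target_ca + rng.normal(scale=0.5, size=(100, 3))
print(f"Calpha RMSD = {kabsch_rmsd(model_ca, target_ca):.2f}")
```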

  9. Eliciting mixed emotions: A meta-analysis comparing models, types and measures.

    Directory of Open Access Journals (Sweden)

    Raul Berrios

    2015-04-01

    Full Text Available The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model – dimensional or discrete – as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = .77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.

  10. Case Problems for Problem-Based Pedagogical Approaches: A Comparative Analysis

    Science.gov (United States)

    Dabbagh, Nada; Dass, Susan

    2013-01-01

    A comparative analysis of 51 case problems used in five problem-based pedagogical models was conducted to examine whether there are differences in their characteristics and the implications of such differences on the selection and generation of ill-structured case problems. The five pedagogical models were: situated learning, goal-based scenario,…

  11. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    Science.gov (United States)

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  12. An experimental-numerical method for comparative analysis of joint prosthesis

    International Nuclear Information System (INIS)

    Claramunt, R.; Rincon, E.; Zubizarreta, V.; Ros, A.

    2001-01-01

    Analyzing mechanical stresses in bone is highly difficult because of its complex mechanical and morphological characteristics. This complexity makes generalized modelling, and conclusions derived from prototype tests, very questionable. In this article a relatively simple systematic method of comparative analysis is presented that allows us to establish behavioural differences between different kinds of prostheses. The method, applicable in principle to any joint problem, is based on analysing the perturbations produced in the natural stress state of a bone after the insertion of a joint prosthesis, and combines numerical analysis using a 3-D finite element model with experimental studies based on photoelastic coating and electric extensometry. The experimental method is applied to compare two cement-free femoral stems for total hip prostheses of different design philosophies: one anatomic, of the new generation, seated obliquely on cancellous bone, and the other madreporic, with trochantero-diaphyseal support on cortical bone. (Author) 4 refs

  13. Differentiated risk models in portfolio optimization: a comparative analysis of the degree of diversification and performance in the São Paulo Stock Exchange (BOVESPA

    Directory of Open Access Journals (Sweden)

    Ivan Ricardo Gartner

    2012-08-01

    Full Text Available Faced with so many risk modeling alternatives in portfolio optimization, several questions arise regarding their legitimacy, utility and applicability. In particular, a question arises concerning the adherence of the alternative models to the basic presupposition of Markowitz's classical model, namely the concept of diversification as a means of controlling the relationship between risk and return within an optimization process. In this context, the aim of this article is to explore the risk-differentiated configurations that entropy can provide, from the point of view of their repercussions on the degree of diversification and on portfolio performance. Reaching this objective requires a comparative analysis between models that include entropy in their formulation and the classic Markowitz model. In order to contribute to this debate, this article proposes adaptations to the models of relative minimum entropy and of maximum entropy, so that these can be applied to investment portfolio optimization. The comparative analysis was based on performance indicators and on a ratio of the degree of portfolio diversification. The portfolios were formed from a sample of fourteen assets that compose the IBOVESPA, projected over the period from January 2007 to December 2009, taking into account covariance matrices formed from January 1999 onward. When comparing the Markowitz model with two models constructed to represent new risk configurations based on entropy optimization, the present study concluded that the first was far superior to the others. Not only did the Markowitz model present better accumulated nominal yields, it also presented far greater predictive efficiency and better effective performance, considering the trade-off between risk and return. However, with regard to diversification, the Markowitz model concentrated
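
    One simple way to see what entropy adds to the diversification debate is to score a weight vector by its normalised Shannon entropy. The sketch below is a generic illustration of that measure, not the specific relative-minimum-entropy or maximum-entropy formulations adapted in the paper; the two weight vectors are hypothetical.

```python
import numpy as np

# Normalised Shannon entropy of portfolio weights as a diversification
# measure: 1 = equally weighted, 0 = fully concentrated. Illustrative only.

def diversification_entropy(weights):
    """Normalised Shannon entropy of a weight vector summing to 1."""
    w = np.asarray(weights, dtype=float)
    w = w[w > 0]                      # 0 * log(0) treated as 0
    return -(w * np.log(w)).sum() / np.log(len(weights))

concentrated = [0.70, 0.10, 0.10, 0.05, 0.05]
balanced = [0.25, 0.20, 0.20, 0.20, 0.15]
print(f"concentrated: {diversification_entropy(concentrated):.2f}")
print(f"balanced:     {diversification_entropy(balanced):.2f}")
```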

  14. Comparing of four IRT models when analyzing two tests for inductive reasoning

    NARCIS (Netherlands)

    de Koning, E.; Sijtsma, K.; Hamers, J.H.M.

    2002-01-01

    This article discusses the use of the nonparametric IRT Mokken models of monotone homogeneity and double monotonicity and the parametric Rasch and Verhelst models for the analysis of binary test data. First, the four IRT models are discussed and compared at the theoretical level, and for each model,
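
    Of the four models discussed, the Rasch model has the simplest closed form: the probability of a correct binary response depends only on the difference between person ability and item difficulty. A minimal sketch:

```python
import math

# Minimal sketch of the parametric Rasch model mentioned above.

def rasch_probability(theta, b):
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

for theta in (-1.0, 0.0, 1.0):
    p = rasch_probability(theta, b=0.0)   # item of average difficulty
    print(f"theta={theta:+.1f}: P(correct | b=0) = {p:.2f}")
```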

  15. Comparative analysis through probability distributions of a data set

    Science.gov (United States)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography, etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. A number of statistical methods are available to help us select the best-fitting model. Some of the graphs display both the input data and the fitted distributions at the same time, as probability density and cumulative distribution functions. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea used is to measure the "distance" between the data and the tested distribution and to compare that distance to some threshold values. Calculating the goodness-of-fit statistics also enables us to order the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
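
    The fit-then-test workflow described above can be sketched in a few lines with scipy: fit several candidate distributions by maximum likelihood and rank them by the Kolmogorov-Smirnov distance. The data set and candidate list below are hypothetical; note also that KS p-values are optimistic when the parameters were estimated from the same data, which is one reason for comparing several tests.

```python
import numpy as np
from scipy import stats

# Fit candidate distributions by maximum likelihood, then rank them by
# the Kolmogorov-Smirnov statistic (the "distance" between the data and
# each fitted model). Synthetic data; illustrative only.

rng = np.random.default_rng(42)
data = rng.gamma(shape=2.0, scale=3.0, size=500)   # hypothetical data set

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
              "norm": stats.norm}

results = []
for name, dist in candidates.items():
    params = dist.fit(data)                        # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(data, name, args=params)
    results.append((ks_stat, name, p_value))

for ks_stat, name, p_value in sorted(results):     # smallest distance first
    print(f"{name:8s} KS = {ks_stat:.4f}  p = {p_value:.3f}")
```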

  16. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study

    Science.gov (United States)

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias

    2018-01-01

    Abstract Objective To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) (“living” network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Design Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Data sources Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Eligibility criteria for study selection Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on the side, or node, splitting test. The frequency of, and time to, strong evidence against the null hypothesis was compared between pairwise and network meta-analysis. Results 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided strong evidence against the null hypothesis (P=0.002). The median time to strong evidence against the null hypothesis was 19 years with living network meta-analysis and 23 years with living pairwise meta-analysis (hazard ratio 2.78, 95% confidence interval 1.00 to 7.72, P=0.05). Studies directly comparing

  17. A Comparative analysis for control rod drop accident in RETRAN DNB and CETOP DNB Model

    International Nuclear Information System (INIS)

    Yang, Chang Keun; Kim, Yo Han; Ha, Sang Jun

    2009-01-01

    In Korea, the nuclear industries, such as the fuel manufacturer, the architect engineer and the utility, have been using the methodologies and codes of vendors, such as Westinghouse (WH) and Combustion Engineering, for the safety analyses of nuclear power plants. Consequently, the industries have had to maintain many organizations to operate the methodologies and codes for each vendor, which makes it difficult to improve the efficiency of the safety analyses and the related technology. This raised the need for another methodology and code system applicable to non-LOCA, beyond-design-basis accident and performance analyses for all types of pressurized water reactor (PWR). For this reason, the Korea Electric Power Research Institute (KEPRI) decided to develop a new safety analysis code system for Korea Standard Nuclear Power Plants. As the first requirement, best-estimate codes were required for a wider application area and realistic behavior prediction of power plants, with various and sophisticated functions. After investigating a few candidates, RETRAN-3D was chosen as the system analysis code. As part of the feasibility assessment of the methodology and code system, the control rod drop (CRD) accident, one of the non-LOCA events, was selected for Uljin units 3 and 4 and Yonggwang units 1 and 2 to verify the feasibility of the methodology using RETRAN-3D. In this paper, the RETRAN DNB model and the CETOP DNB model were analyzed using a comparative method

  18. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with the problems of the takeover of employees in outsourcing. Its main purpose is to compare the staffing models of outsourcing in selected companies. Multi-criteria analysis was chosen for the comparison across the selected companies. The thesis is divided into six chapters. The first chapter is devoted to the theoretical part and describes basic concepts such as outsourcing, personnel aspects, the phases of outsourcing projects, communications and culture. The rest of the thesis is devote...

  19. An application of a grey data envelopment analysis model to the risk comparative analysis among power generation technologies

    International Nuclear Information System (INIS)

    Garcia, Pauli A.A.; Melo, P.F. Frutuoso e

    2005-01-01

    Comparative risk analysis is a technique by which one seeks equilibrium among the benefits, costs, and risks associated with common-purpose activities. In light of the ever-growing world demand for a sustainable power supply, we present in this paper a comparison among different power generation technologies, with the data for the comparative analyses taken from the literature. A hybrid approach is proposed for performing the comparisons, in which Grey System Theory and Data Envelopment Analysis (DEA) are combined. The purpose of this combination is to take into account the different features that influence the risk analysis when one aims to compare different power generation technologies. The generation technologies considered here are: solar, biomass, wind, hydroelectric, oil, natural gas, coal, and nuclear. The criteria considered in the analysis are: contribution to life expectancy reduction (in years); contribution to life expectancy growth (in years); area used (in km²); and tons of CO₂ released per GWh generated. The results obtained by using the aforementioned approach are promising and demonstrate the advantages of the Grey-DEA approach for the problem at hand. They show that investments in the nuclear and solar power generation technologies are the options that present the best relative efficiencies, that is, among all considered options, they present the best cost-benefit-risk relationships. (author)

  20. Navigating the complexities of qualitative comparative analysis: case numbers, necessity relations, and model ambiguities.

    Science.gov (United States)

    Thiem, Alrik

    2014-12-01

    In recent years, the method of Qualitative Comparative Analysis (QCA) has been enjoying increasing levels of popularity in evaluation and directly neighboring fields. Its holistic approach to causal data analysis resonates with researchers whose theories posit complex conjunctions of conditions and events. However, due to QCA's relative immaturity, some of its technicalities and objectives have not yet been well understood. In this article, I seek to raise awareness of six pitfalls of employing QCA with regard to the following three central aspects: case numbers, necessity relations, and model ambiguities. Most importantly, I argue that case numbers are irrelevant to the methodological choice of QCA or any of its variants, that necessity is not as simple a concept as it has been suggested by many methodologists, and that doubt must be cast on the determinacy of virtually all results presented in past QCA research. By means of empirical examples from published articles, I explain the background of these pitfalls and introduce appropriate procedures, partly with reference to current software, that help avoid them. QCA carries great potential for scholars in evaluation and directly neighboring areas interested in the analysis of complex dependencies in configurational data. If users beware of the pitfalls introduced in this article, and if they avoid mechanistic adherence to doubtful "standards of good practice" at this stage of development, then research with QCA will gain in quality, as a result of which a more solid foundation for cumulative knowledge generation and well-informed policy decisions will also be created. © The Author(s) 2014.

  1. Comparative analysis of diffused solar radiation models for optimum tilt angle determination for Indian locations

    International Nuclear Information System (INIS)

    Yadav, P.; Chandel, S.S.

    2014-01-01

    Tilt angle and orientation greatly influence the performance of solar photovoltaic panels. The tilt angle of solar photovoltaic panels is one of the important parameters for the optimum sizing of solar photovoltaic systems. This paper analyses six different isotropic and anisotropic diffuse solar radiation models for optimum tilt angle determination. The predicted optimum tilt angles are compared with experimentally measured values for the summer season under outdoor conditions. The Liu and Jordan model is found to exhibit the lowest error compared to the other models for the location. (author)
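
    The isotropic (Liu and Jordan) model transposes diffuse radiation onto a tilted plane with the sky view factor (1 + cos β)/2; anisotropic models add circumsolar and horizon-brightening terms. A minimal sketch of just this diffuse transposition step, with hypothetical irradiation values (finding the true optimum tilt also requires the beam and ground-reflected components):

```python
import math

# Sketch of the isotropic (Liu and Jordan) diffuse transposition step;
# the horizontal diffuse irradiation value is hypothetical.

def diffuse_on_tilt_liu_jordan(d_horizontal, beta_deg):
    """Diffuse irradiation on a surface tilted at beta degrees,
    assuming an isotropic sky: D_t = D_h * (1 + cos(beta)) / 2."""
    beta = math.radians(beta_deg)
    return d_horizontal * (1.0 + math.cos(beta)) / 2.0

# Scan tilt angles to see how the diffuse component alone varies:
for beta in (0, 15, 30, 45, 60):
    d_t = diffuse_on_tilt_liu_jordan(d_horizontal=2.5, beta_deg=beta)
    print(f"beta = {beta:2d} deg -> diffuse on tilt = {d_t:.2f} kWh/m^2/day")
```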

  2. COMPARING THE UTILITY OF MULTIMEDIA MODELS FOR HUMAN AND ECOLOGICAL EXPOSURE ANALYSIS: TWO CASES

    Science.gov (United States)

    A number of models are available for exposure assessment; however, few are used as tools for both human and ecosystem risks. This discussion will consider two modeling frameworks that have recently been used to support human and ecological decision making. The study will compare ...

  3. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures

    Science.gov (United States)

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805

  4. Neo-Deterministic and Probabilistic Seismic Hazard Assessments: a Comparative Analysis

    Science.gov (United States)

    Peresan, Antonella; Magrin, Andrea; Nekrasova, Anastasia; Kossobokov, Vladimir; Panza, Giuliano F.

    2016-04-01

    Objective testing is the key issue towards any reliable seismic hazard assessment (SHA). Different earthquake hazard maps must demonstrate their capability in anticipating ground shaking from future strong earthquakes before an appropriate use for different purposes - such as engineering design, insurance, and emergency management. Quantitative assessment of maps performances is an essential step also in scientific process of their revision and possible improvement. Cross-checking of probabilistic models with available observations and independent physics based models is recognized as major validation procedure. The existing maps from the classical probabilistic seismic hazard analysis (PSHA), as well as those from the neo-deterministic analysis (NDSHA), which have been already developed for several regions worldwide (including Italy, India and North Africa), are considered to exemplify the possibilities of the cross-comparative analysis in spotting out limits and advantages of different methods. Where the data permit, a comparative analysis versus the documented seismic activity observed in reality is carried out, showing how available observations about past earthquakes can contribute to assess performances of the different methods. Neo-deterministic refers to a scenario-based approach, which allows for consideration of a wide range of possible earthquake sources as the starting point for scenarios constructed via full waveforms modeling. The method does not make use of empirical attenuation models (i.e. Ground Motion Prediction Equations, GMPE) and naturally supplies realistic time series of ground shaking (i.e. complete synthetic seismograms), readily applicable to complete engineering analysis and other mitigation actions. The standard NDSHA maps provide reliable envelope estimates of maximum seismic ground motion from a wide set of possible scenario earthquakes, including the largest deterministically or historically defined credible earthquake. In addition

  5. A Comparative Analysis of Software Engineering with Mature Engineering Disciplines Using a Problem-Solving Perspective

    NARCIS (Netherlands)

    Tekinerdogan, B.; Aksit, Mehmet; Dogru, Ali H.; Bicer, Veli

    2011-01-01

    Software engineering is compared with traditional engineering disciplines using a domain specific problem-solving model called Problem-Solving for Engineering Model (PSEM). The comparative analysis is performed both from a historical and contemporary view. The historical view provides lessons on the

  6. Comparative analysis of the CRDA using BNL-TWIGL and RAMONA-3B

    International Nuclear Information System (INIS)

    Neogy, P.; Carew, J.F.

    1983-06-01

    A comparative analysis of the BWR control rod drop accident (CRDA) using BNL-TWIGL and RAMONA-3B has been performed as part of the BNL/NRC evaluation of methods currently used to analyze BWR CRDA events. A principal objective of this analysis was to test the two-dimensional neutronics model used in BNL-TWIGL against the full three-dimensional model in RAMONA-3B. Additionally, the results of analyzing the identical transient with the two codes were expected to help evaluate other approximate models used, such as the coarse-mesh nodal neutronics scheme in RAMONA-3B and the equilibrium bulk boiling model in BNL-TWIGL

  7. Validity of Intraoral Scans Compared with Plaster Models: An In-Vivo Comparison of Dental Measurements and 3D Surface Analysis.

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    Full Text Available Dental measurements have commonly been taken from plaster dental models obtained from alginate impressions. Through the use of an intraoral scanner, digital impressions now acquire the information directly from the mouth. The purpose of this study was to determine the validity of intraoral scans compared to plaster models. Two types of dental models (intraoral scan and plaster model) of 20 subjects were included in this study. The subjects had impressions taken of their teeth, which were made into plaster models. In addition, their mouths were scanned with the intraoral scanner and the scans were converted into digital models. Eight transverse and 16 anteroposterior measurements, and 24 tooth heights and widths, were recorded on the plaster models with a digital caliper and on the intraoral scans with 3D reverse engineering software. For the 3D surface analysis, the two models were superimposed using a best-fit algorithm. The average differences between the two models at all points on the surfaces were computed. Paired t-tests and Bland-Altman plots were used to determine the validity of measurements from the intraoral scans compared to those from the plaster models. There were no significant differences between the plaster models and intraoral scans, except for one measurement of lower intermolar width. The Bland-Altman plots of all measurements showed that differences between the two models were within the limits of agreement. The average surface difference between the two models was within 0.10 mm. The results of the present study indicate that intraoral scans are clinically acceptable for diagnosis and treatment planning in dentistry and can be used in place of plaster models.
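
    A minimal sketch of the agreement analysis this record (and the duplicate record below) describes - a paired t-test plus Bland-Altman bias and 95% limits of agreement - with hypothetical paired measurements standing in for the study's caliper and scan data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (mm): digital caliper on the plaster
# model vs software measurement on the intraoral scan, one per subject.
plaster = np.array([35.2, 41.8, 33.6, 38.9, 40.1, 36.4])
scan    = np.array([35.0, 42.1, 33.5, 39.2, 39.8, 36.6])

t_stat, p_value = stats.ttest_rel(scan, plaster)    # paired t-test

diff = scan - plaster
bias = diff.mean()                                  # mean difference
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)          # 95% limits of agreement

print(f"paired t-test p = {p_value:.3f}")
print(f"bias = {bias:.3f} mm, limits of agreement = "
      f"[{loa[0]:.3f}, {loa[1]:.3f}] mm")
```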

  8. Validity of Intraoral Scans Compared with Plaster Models: An In-Vivo Comparison of Dental Measurements and 3D Surface Analysis

    Science.gov (United States)

    2016-01-01

    Purpose Dental measurements have commonly been taken from plaster dental models obtained from alginate impressions. Through the use of an intraoral scanner, digital impressions now acquire the information directly from the mouth. The purpose of this study was to determine the validity of intraoral scans compared to plaster models. Materials and Methods Two types of dental models (intraoral scan and plaster model) of 20 subjects were included in this study. The subjects had impressions taken of their teeth, which were made into plaster models. In addition, their mouths were scanned with the intraoral scanner and the scans were converted into digital models. Eight transverse and 16 anteroposterior measurements, and 24 tooth heights and widths, were recorded on the plaster models with a digital caliper and on the intraoral scans with 3D reverse engineering software. For the 3D surface analysis, the two models were superimposed using a best-fit algorithm. The average differences between the two models at all points on the surfaces were computed. Paired t-tests and Bland-Altman plots were used to determine the validity of measurements from the intraoral scans compared to those from the plaster models. Results There were no significant differences between the plaster models and intraoral scans, except for one measurement of lower intermolar width. The Bland-Altman plots of all measurements showed that differences between the two models were within the limits of agreement. The average surface difference between the two models was within 0.10 mm. Conclusions The results of the present study indicate that the intraoral scans are clinically acceptable for diagnosis and treatment planning in dentistry and can be used in place of plaster models. PMID:27304976

  9. Comparing flood loss models of different complexity

    Science.gov (United States)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both to the data basis and to the methodological approaches used for the development of flood loss models. Despite that, flood loss models remain an important source of uncertainty, and their temporal and spatial transferability is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from post-event surveys. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, and novel model approaches derived using the data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.
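
    Of the model families compared above, the regression-tree approach is the easiest to sketch. The following Python fragment, using scikit-learn and entirely synthetic damage records (the variable names and the loss relation are invented for illustration), shows the train/validate pattern the study applies across regions and events:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
# Hypothetical damage records: water depth (m), flow velocity class,
# building quality proxy, precaution score -> relative loss in [0, 1].
X = np.column_stack([
    rng.uniform(0, 3, n),        # water depth
    rng.integers(0, 3, n),       # velocity class
    rng.uniform(0.2, 1.0, n),    # building quality proxy
    rng.integers(0, 5, n),       # precautionary measures
])
y = np.clip(0.25 * X[:, 0] - 0.03 * X[:, 3] + rng.normal(0, 0.05, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)

# Out-of-sample validation, analogous in spirit to the split-sample
# cross-regional validation described above.
print("MAE:", mean_absolute_error(y_te, tree.predict(X_te)))
```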

  10. Living network meta-analysis compared with pairwise meta-analysis in comparative effectiveness research: empirical study.

    Science.gov (United States)

    Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias; Salanti, Georgia

    2018-02-28

    To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons were excluded for which direct and indirect evidence disagreed, based on the side, or node, splitting test. Cumulative pairwise and network meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed, and the evidence against the null hypothesis was considered to be strong when the monitoring boundaries were crossed. The significance level was defined as α=5%, the power as 90% (β=10%), and the anticipated treatment effect to detect as equal to the final estimate from the network meta-analysis. The frequency of and time to strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided
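
    The core of the empirical comparison is a cumulative meta-analysis that re-pools the evidence each time a study is added and checks the resulting z statistic against a significance boundary. A bare-bones fixed-effect version with invented trial results is sketched below; the actual study used sequential monitoring boundaries rather than the naive fixed cutoff noted in the comment:

```python
import numpy as np

# Hypothetical stream of trial results for one comparison: log odds
# ratios and variances, in order of publication.
effects = np.array([-0.30, -0.10, -0.45, -0.25, -0.35, -0.20])
variances = np.array([0.20, 0.15, 0.25, 0.10, 0.12, 0.08])

w = 1.0 / variances
for k in range(1, len(effects) + 1):
    pooled = np.sum(w[:k] * effects[:k]) / np.sum(w[:k])
    z = pooled / np.sqrt(1.0 / np.sum(w[:k]))
    print(f"after {k} studies: pooled={pooled:+.3f}, z={z:+.2f}")
# A real living meta-analysis would compare z against sequential
# monitoring boundaries (alpha-spending), not a fixed 1.96 cutoff.
```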

  11. A Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and MODIS Data

    Science.gov (United States)

    Hazaymeh, K.; Almagbile, A.

    2018-04-01

    In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair and time-series datasets from the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. The STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images with the highest correlation to the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration estimation.
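
    Step (ii), the quantitative evaluation of the synthetic images, amounts to computing agreement statistics between the fused prediction and the actual Landsat-7 scene. A small sketch with stand-in reflectance arrays; the metric set here (RMSE, bias, correlation) is a typical choice, not necessarily the paper's exact list:

```python
import numpy as np

def fusion_accuracy(predicted, observed):
    """Simple agreement statistics between a synthetic (fused) image
    and the actual Landsat image, computed over all pixels."""
    p, o = predicted.ravel(), observed.ravel()
    rmse = np.sqrt(np.mean((p - o) ** 2))
    bias = np.mean(p - o)
    r = np.corrcoef(p, o)[0, 1]
    return {"rmse": rmse, "bias": bias, "r": r}

rng = np.random.default_rng(1)
observed = rng.uniform(0.0, 0.4, size=(100, 100))   # stand-in reflectance
predicted = observed + rng.normal(0, 0.02, observed.shape)
print(fusion_accuracy(predicted, observed))
```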

  12. A comparative analysis of diffusion and transport models applying to releases in the marine environment

    International Nuclear Information System (INIS)

    Mejon, M.J.

    1984-05-01

    This study is a contribution to the development of methodologies for assessing the radiological impact of liquid effluent releases from nuclear power plants. It first concerns hydrodynamic models and their application to the North Sea, which is of great interest to the European Community. Starting from the basic equations of geophysical fluid mechanics, the assumptions made at each step to simplify the resolution are analysed and commented on. The published results on the application of the Liege University models (NIHOUL, RONDAY et al.) are compared to observations of tides, storms, and the residual circulation that is responsible for the long-term transport of pollutants. The results for residual circulation compare satisfactorily, and the expected accuracy of the other models is indicated. A dispersion model by the same authors is then studied, with a numerical integration method using a moving grid. Other models (Laboratoire National d'Hydraulique, EDF), used for the Channel, are also presented [fr]

  13. Comparative Analysis of Resonant Converters for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Vuchev Stoyan

    2017-01-01

    Full Text Available This paper presents a comparative analysis of multiphase resonant converters for applications in energy storage systems. Models of the examined converters are developed in the software environments MATLAB and LTspice. Results from the simulation examination of the converters during charging of supercapacitors and rechargeable batteries are presented. These results are compared to results obtained from experimental examination of the converters on a laboratory test stand. For the purposes of the experimental examination, a control system was developed on the basis of a virtual instrument in LabVIEW. The advantages and disadvantages of the different converters are discussed.

  14. Two sustainable energy system analysis models

    DEFF Research Database (Denmark)

    Lund, Henrik; Goran Krajacic, Neven Duic; da Graca Carvalho, Maria

    2005-01-01

    This paper presents a comparative study of two energy system analysis models both designed with the purpose of analysing electricity systems with a substantial share of fluctuating renewable energy....

  15. State regulation of nuclear sector: comparative study of Argentina and Brazil models

    International Nuclear Information System (INIS)

    Monteiro Filho, Joselio Silveira

    2004-08-01

    This research presents a comparative assessment of the regulation models of the nuclear sector in Argentina - under the responsibility of the Autoridad Regulatoria Nuclear (ARN) - and Brazil - under the responsibility of the Comissao Nacional de Energia Nuclear (CNEN) - trying to identify which model is more adequate for the safe use of nuclear energy. Owing to the methodology adopted, the theoretical framework resulted in analysis criteria that correspond to the characteristics of the Brazilian regulatory agencies created for other economic sectors during the State reform starting in the mid-1990s. These analysis criteria were then used as comparison standards between the regulation models of the nuclear sectors of Argentina and Brazil. The comparative assessment showed that the regulatory structure of the nuclear sector in Argentina seems to be more adequate for the safe use of nuclear energy than the model adopted in Brazil by CNEN, because it incorporates the criteria of functional, institutional and financial independence, definition of competences, technical excellence and transparency, which are indispensable for carrying out its functions with autonomy, ethics, impartiality and agility. (author)

  16. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    Science.gov (United States)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

    Numerous numerical models have been developed to predict long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast; the instabilities themselves arise from errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model predicts the shape and arrival time of the measured puff fairly well up to 60 h after the start of the release, although the modeled puff remains too narrow in the advection direction.

  17. Color model comparative analysis for breast cancer diagnosis using H and E stained images

    Science.gov (United States)

    Li, Xingyu; Plataniotis, Konstantinos N.

    2015-03-01

    Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis of magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Although color information is widely used in histopathology work, to date there have been few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodology includes assessment of feature discriminative power and comparison of final diagnosis performance. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. As to the reasons behind the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less mutually dependent, so most feature discriminative power is concentrated in one channel instead of being spread across channels as in the other color spaces.
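
    The color-space comparison can be reproduced in outline with scikit-image, which ships conversions for HSV and CIE L*a*b* as well as the Ruifrok-Johnston stain-separation transform commonly used for H and E decomposition. A sketch, assuming a random patch stands in for a real H and E image:

```python
import numpy as np
from skimage import color

rgb = np.random.rand(64, 64, 3)   # stand-in for an H&E image patch

hsv = color.rgb2hsv(rgb)          # hue / saturation / value
lab = color.rgb2lab(rgb)          # CIE L*a*b*
hed = color.rgb2hed(rgb)          # stain separation (Ruifrok-Johnston)

hematoxylin, eosin = hed[..., 0], hed[..., 1]
# Features (e.g. channel histograms, moments) would then be extracted
# per color space and their discriminative power compared.
print(hematoxylin.mean(), eosin.mean())
```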

  18. A comparative analysis of molten corium-concrete interaction models employed in MELCOR and MAAP codes

    International Nuclear Information System (INIS)

    Park, Soo Yong; Song, Y. M.; Kim, D. H.; Kim, H. D.

    1999-03-01

    The purposes of this report are to identify modelling differences by reviewing the phenomenological models related to MCCI, to investigate modelling uncertainty by performing sensitivity analyses, and finally to identify models to be improved in MELCOR. The results show that the most important uncertain parameters in the MCCI area are debris stratification/mixing and the heat transfer between the molten corium and the overlying water pool. MAAP has a very simple and flexible corium-water heat transfer model, which seems to be needed in MELCOR for the evaluation of real plants as long as large phenomenological uncertainty still exists. During corium-concrete interaction, there is a temperature distribution inside the basemat concrete, which affects the amount and timing of gas generation. While MAAP calculates this temperature distribution through a nodalization methodology, MELCOR calculates the concrete response based on one-dimensional steady-state ablation, with no consideration given to conduction into the concrete or to decomposition in advance of the ablation front. The code may therefore be inaccurate for the analysis of combustible gas generation during MCCI, so the concrete decomposition model in MELCOR needs to be improved. (Author). 12 refs., 5 tabs., 42 figs
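
    The modelling difference at issue can be made concrete with a one-dimensional steady-state ablation estimate of the kind the abstract ascribes to MELCOR: all heat arriving at the surface goes into heating the material to its ablation temperature and decomposing it, and conduction ahead of the front is neglected. The property values below are illustrative, not plant data:

```python
# One-dimensional steady-state ablation estimate. Conduction ahead of
# the ablation front is neglected - the simplification criticized above.
def ablation_velocity(q_surface, rho, cp, dT, h_dec):
    """q_surface [W/m2], rho [kg/m3], cp [J/kg/K], dT [K], h_dec [J/kg]."""
    return q_surface / (rho * (cp * dT + h_dec))   # m/s

# Illustrative (not plant-specific) property values for concrete.
v = ablation_velocity(q_surface=3.0e5, rho=2300.0, cp=1000.0,
                      dT=1200.0, h_dec=2.0e6)
print(f"ablation front velocity ~ {v*3600*100:.2f} cm/h")
```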

  19. Comparative analysis of a hypothetical coolant loss accident in an LMFB reactor with the use of various calculation models for a common reference problem

    International Nuclear Information System (INIS)

    Royl, P.

    1979-01-01

    The results of a comparative analysis of the initial and dismantling stages of a hypothetical loss of flow accident in an LMFB reactor are presented. The analyses were made for a common reference problem with four different calculation models (CARMEN/KADIS, SURDYN, CAPRI/KADIS and FRAX). The reference core is described specifically, as are the differences in its geometrical disposition in the models, the static and transient conditions before and after the start of boiling and during dismantling. The differences in the models used for simulating the boiling and the dismantling are compared. The structure of the core, as well as the calculation conditions and hypotheses, were intentionally designed so that the accident would culminate, in all cases, in an energetic hydrodynamic dismantling stage

  20. MycoCAP - Mycobacterium Comparative Analysis Platform.

    Science.gov (United States)

    Choo, Siew Woh; Ang, Mia Yang; Dutta, Avirup; Tan, Shi Yang; Siow, Cheuk Chuen; Heydari, Hamed; Mutha, Naresh V R; Wee, Wei Yee; Wong, Guat Jah

    2015-12-15

    Mycobacterium spp. are renowned for being the causative agents of diseases like leprosy, Buruli ulcer and tuberculosis in human beings. With more and more mycobacterial genomes being sequenced, any knowledge generated from comparative genomic analysis would provide better insights into the biology, evolution, phylogeny and pathogenicity of this genus, thus helping in better management of diseases caused by Mycobacterium spp. With this motivation, we constructed MycoCAP, a new comparative analysis platform dedicated to the important genus Mycobacterium. This platform currently provides information on 2108 genome sequences of at least 55 Mycobacterium spp. A number of intuitive web-based tools have been integrated in MycoCAP, particularly for comparative analysis, including the PGC tool for comparison between two genomes, PathoProT for comparing the virulence genes among Mycobacterium strains, the SuperClassification tool for the phylogenetic classification of Mycobacterium strains, and a specialized classification system for strains of Mycobacterium abscessus. We hope the broad range of functions and easy-to-use tools provided in MycoCAP makes it an invaluable analysis platform to speed up research discovery on mycobacteria. Database URL: http://mycobacterium.um.edu.my.

  1. Assessing the Goodness of Fit of Phylogenetic Comparative Methods: A Meta-Analysis and Simulation Study.

    Directory of Open Access Journals (Sweden)

    Dwueng-Chwuan Jhwueng

    Full Text Available Phylogenetic comparative methods (PCMs) have been applied widely in analyzing data from related species, but their fit to data is rarely assessed. Can one determine whether any particular comparative method is typically more appropriate than others by examining comparative data sets? I conducted a meta-analysis of 122 phylogenetic data sets, found by searching all papers in JEB, Blackwell Synergy and JSTOR published in 2002-2005, for the purpose of assessing the fit of PCMs. The number of species in these data sets ranged from 9 to 117. I used the Akaike information criterion to compare PCMs, and then fit PCMs to bivariate data sets through REML analysis. Correlation estimates between two traits and bootstrapped confidence intervals of correlations from each model were also compared. For phylogenies of fewer than one hundred taxa, the independent contrasts method and the independent, non-phylogenetic models provide the best fit. For bivariate analysis, correlations from different PCMs are qualitatively similar, so actual correlations from real data seem to be robust to the PCM chosen for the analysis. Therefore, researchers might apply the PCM they believe best describes the evolutionary mechanisms underlying their data.
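
    Model comparison by AIC, as used in this meta-analysis, reduces to evaluating each candidate covariance structure's likelihood on the trait data and penalizing free parameters. A toy sketch follows, with an invented block "phylogenetic" correlation matrix standing in for one derived from a real tree:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n = 20                                    # number of taxa

# Hypothetical phylogenetic correlation matrix (shared branch lengths);
# a crude two-clade block structure stands in for a real tree.
C = np.full((n, n), 0.3); np.fill_diagonal(C, 1.0)
C[:10, 10:] = C[10:, :10] = 0.05

trait = rng.multivariate_normal(np.zeros(n), C)

def aic(cov):
    # Profile out the scale parameter sigma^2 for a zero-mean model.
    sigma2 = trait @ np.linalg.solve(cov, trait) / n
    ll = multivariate_normal(np.zeros(n), sigma2 * cov).logpdf(trait)
    return 2 * 1 - 2 * ll                 # one free parameter (sigma^2)

print("AIC, phylogenetic model :", aic(C))
print("AIC, independent model  :", aic(np.eye(n)))
```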

  2. Mathematical field models of brushless DC motors with permanent magnets and their comparative analysis

    Directory of Open Access Journals (Sweden)

    A.V. Matyuschenko

    2015-03-01

    Full Text Available By means of JMAG-Designer 12, the author performed a comparative analysis of the calculation of the EMF, cogging torque and electromagnetic torque of a brushless motor with permanent magnets in two-dimensional and three-dimensional formulations of the problem.

  3. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    Science.gov (United States)

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best-suited freely available software for modelling proteins, using a few sample proteins. The proteins used ranged from small to large and had available crystal structures for benchmarking. Key servers such as Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-v2 and ModWeb were used for the comparison and model generation. The benchmarking was done for four proteins - Icl, InhA and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus - to identify the most suitable software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model build good models of both small and large proteins compared with the other software screened. The other tools were also adequate but often failed to provide full-length, properly folded structures.

  4. Comparative Analysis and Modeling of the Severity of Steatohepatitis in DDC-Treated Mouse Strains

    Science.gov (United States)

    Pandey, Vikash; Sultan, Marc; Kashofer, Karl; Ralser, Meryem; Amstislavskiy, Vyacheslav; Starmann, Julia; Osprian, Ingrid; Grimm, Christina; Hache, Hendrik; Yaspo, Marie-Laure; Sültmann, Holger; Trauner, Michael; Denk, Helmut; Zatloukal, Kurt; Lehrach, Hans; Wierling, Christoph

    2014-01-01

    Background Non-alcoholic fatty liver disease (NAFLD) has a broad spectrum of disease states, ranging from mild steatosis, characterized by an abnormal retention of lipids within liver cells, to steatohepatitis (NASH), showing fat accumulation, inflammation, ballooning and degradation of hepatocytes, and fibrosis. Ultimately, steatohepatitis can result in liver cirrhosis and hepatocellular carcinoma. Methodology and Results In this study we have analyzed three different mouse strains, A/J, C57BL/6J, and PWD/PhJ, that show different degrees of steatohepatitis when administered a 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC) containing diet. RNA-Seq gene expression analysis, protein analysis and metabolic profiling were applied to identify differentially expressed genes/proteins and perturbed metabolite levels in mouse liver samples upon DDC treatment. Pathway analysis revealed alteration of arachidonic acid (AA) and S-adenosylmethionine (SAMe) metabolism, among other pathways. To understand the metabolic changes of arachidonic acid metabolism in the light of the disease expression profiles, a kinetic model of this pathway was developed and optimized according to the metabolite levels. Subsequently, the model was used to study in silico effects of potential drug targets for steatohepatitis. Conclusions We identified AA/eicosanoid metabolism as highly perturbed in DDC-induced mice using a combination of an experimental and in silico approach. Our analysis of the AA/eicosanoid metabolic pathway suggests that 5-hydroxyeicosatetraenoic acid (5-HETE), 15-hydroxyeicosatetraenoic acid (15-HETE) and prostaglandin D2 (PGD2) are perturbed in DDC mice. We further demonstrate that a dynamic model can be used for qualitative prediction of metabolic changes based on transcriptomics data in a disease-related context. Furthermore, SAMe metabolism was identified as being perturbed due to DDC treatment. Several genes as well as some metabolites of this module show differences between A/J and C57BL/6J
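
    The kinetic-model step described here - fit a small ODE system to metabolite levels, then perturb a parameter to mimic a drug - can be sketched with a toy mass-action chain. The pathway, rate constants and the 70% inhibition below are all hypothetical stand-ins, not values from the study:

```python
import numpy as np
from scipy.integrate import odeint

# Toy mass-action chain A -> B -> C standing in for a segment of the
# AA/eicosanoid pathway; k1, k2 are hypothetical rate constants that a
# real model would optimize against measured metabolite levels.
def pathway(y, t, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

t = np.linspace(0, 10, 101)
baseline = odeint(pathway, [1.0, 0.0, 0.0], t, args=(0.8, 0.5))
# An in-silico 'drug target' test: inhibit the second enzyme by 70%.
inhibited = odeint(pathway, [1.0, 0.0, 0.0], t, args=(0.8, 0.5 * 0.3))

print("final levels, baseline :", baseline[-1].round(3))
print("final levels, inhibited:", inhibited[-1].round(3))
```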

  5. Comparative analysis and modeling of the severity of steatohepatitis in DDC-treated mouse strains.

    Science.gov (United States)

    Pandey, Vikash; Sultan, Marc; Kashofer, Karl; Ralser, Meryem; Amstislavskiy, Vyacheslav; Starmann, Julia; Osprian, Ingrid; Grimm, Christina; Hache, Hendrik; Yaspo, Marie-Laure; Sültmann, Holger; Trauner, Michael; Denk, Helmut; Zatloukal, Kurt; Lehrach, Hans; Wierling, Christoph

    2014-01-01

    Non-alcoholic fatty liver disease (NAFLD) has a broad spectrum of disease states, ranging from mild steatosis, characterized by an abnormal retention of lipids within liver cells, to steatohepatitis (NASH), showing fat accumulation, inflammation, ballooning and degradation of hepatocytes, and fibrosis. Ultimately, steatohepatitis can result in liver cirrhosis and hepatocellular carcinoma. In this study we have analyzed three different mouse strains, A/J, C57BL/6J, and PWD/PhJ, that show different degrees of steatohepatitis when administered a 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC) containing diet. RNA-Seq gene expression analysis, protein analysis and metabolic profiling were applied to identify differentially expressed genes/proteins and perturbed metabolite levels in mouse liver samples upon DDC treatment. Pathway analysis revealed alteration of arachidonic acid (AA) and S-adenosylmethionine (SAMe) metabolism, among other pathways. To understand the metabolic changes of arachidonic acid metabolism in the light of the disease expression profiles, a kinetic model of this pathway was developed and optimized according to the metabolite levels. Subsequently, the model was used to study in silico effects of potential drug targets for steatohepatitis. We identified AA/eicosanoid metabolism as highly perturbed in DDC-induced mice using a combination of an experimental and in silico approach. Our analysis of the AA/eicosanoid metabolic pathway suggests that 5-hydroxyeicosatetraenoic acid (5-HETE), 15-hydroxyeicosatetraenoic acid (15-HETE) and prostaglandin D2 (PGD2) are perturbed in DDC mice. We further demonstrate that a dynamic model can be used for qualitative prediction of metabolic changes based on transcriptomics data in a disease-related context. Furthermore, SAMe metabolism was identified as being perturbed due to DDC treatment. Several genes as well as some metabolites of this module show differences between A/J and C57BL/6J on the one hand and PWD/PhJ on the other.

  6. Comparative analysis and modeling of the severity of steatohepatitis in DDC-treated mouse strains.

    Directory of Open Access Journals (Sweden)

    Vikash Pandey

    Full Text Available BACKGROUND: Non-alcoholic fatty liver disease (NAFLD) has a broad spectrum of disease states, ranging from mild steatosis, characterized by an abnormal retention of lipids within liver cells, to steatohepatitis (NASH), showing fat accumulation, inflammation, ballooning and degradation of hepatocytes, and fibrosis. Ultimately, steatohepatitis can result in liver cirrhosis and hepatocellular carcinoma. METHODOLOGY AND RESULTS: In this study we have analyzed three different mouse strains, A/J, C57BL/6J, and PWD/PhJ, that show different degrees of steatohepatitis when administered a 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC) containing diet. RNA-Seq gene expression analysis, protein analysis and metabolic profiling were applied to identify differentially expressed genes/proteins and perturbed metabolite levels in mouse liver samples upon DDC treatment. Pathway analysis revealed alteration of arachidonic acid (AA) and S-adenosylmethionine (SAMe) metabolism, among other pathways. To understand the metabolic changes of arachidonic acid metabolism in the light of the disease expression profiles, a kinetic model of this pathway was developed and optimized according to the metabolite levels. Subsequently, the model was used to study in silico effects of potential drug targets for steatohepatitis. CONCLUSIONS: We identified AA/eicosanoid metabolism as highly perturbed in DDC-induced mice using a combination of an experimental and in silico approach. Our analysis of the AA/eicosanoid metabolic pathway suggests that 5-hydroxyeicosatetraenoic acid (5-HETE), 15-hydroxyeicosatetraenoic acid (15-HETE) and prostaglandin D2 (PGD2) are perturbed in DDC mice. We further demonstrate that a dynamic model can be used for qualitative prediction of metabolic changes based on transcriptomics data in a disease-related context. Furthermore, SAMe metabolism was identified as being perturbed due to DDC treatment. Several genes as well as some metabolites of this module show differences between A

  7. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing the probability of generating a given string, or computing the most likely path generating a given string. In this paper we consider the problem of computing the most likely string, or consensus string, generated by a given model, and its implications on the complexity of comparing hidden Markov models. We show that computing the consensus string, and approximating its probability within any constant factor, is NP-hard, and that the same holds for the closely related labeling problem for class hidden Markov models. Furthermore, we establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1-norms.
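
    The quantity the paper builds on - the probability that a hidden Markov model generates a given string - is computed by the forward algorithm, sketched below for a hypothetical two-state model over a binary alphabet:

```python
import numpy as np

# Forward algorithm: probability that an HMM generates a given string.
# A, B, pi define a hypothetical two-state model over a binary alphabet.
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])      # state transition probabilities
B  = np.array([[0.9, 0.1],
               [0.2, 0.8]])      # emission probabilities per state
pi = np.array([0.5, 0.5])        # initial state distribution

def forward_prob(obs):
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward_prob([0, 1, 1, 0]))
# The consensus-string problem asks for the string maximizing this
# quantity over all strings of a given length -- NP-hard per the paper,
# so brute force over the 2^L candidates is the only exact route here.
```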

  8. Using structural equation modeling for network meta-analysis.

    Science.gov (United States)

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pairwise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or within frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, analysts can flexibly specify complex random-effect structures and impose linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed- and random-effect network meta-analysis models can be easily implemented in SEM. It contains the results of 26 studies that directly compared three treatment groups, A, B and C, for the prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of unrestricted weighted least squares (UWLS) can be undertaken using SEM. For both the fixed- and random-effect network meta-analyses, SEM yielded coefficients and confidence intervals similar to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed-effect model, but the confidence intervals were greater. This is consistent with results from traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with unique variance adjustment factors has greater confidence intervals when the heterogeneity is larger in the pairwise comparison. The UWLS model with unique variance adjustment factors reflects the difference in heterogeneity within each comparison
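
    The UWLS flavour of network meta-analysis is, at bottom, a weighted least-squares regression of observed study contrasts on basic treatment-effect parameters. A minimal sketch for a hypothetical three-treatment (A, B, C) network follows; the variance-adjustment factors discussed above are omitted:

```python
import numpy as np

# Observed study-level effects (e.g. log odds ratios) and variances for
# a three-treatment network; all values are hypothetical.
#        comparison  effect  variance
data = [("AB", 0.20, 0.04),
        ("AB", 0.35, 0.06),
        ("AC", 0.50, 0.05),
        ("BC", 0.25, 0.08),
        ("BC", 0.10, 0.07)]

# Basic parameters: d_AB and d_AC; consistency gives d_BC = d_AC - d_AB.
rows = {"AB": [1, 0], "AC": [0, 1], "BC": [-1, 1]}
X = np.array([rows[c] for c, _, _ in data], dtype=float)
y = np.array([e for _, e, _ in data])
W = np.diag([1.0 / v for _, _, v in data])

beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted least squares
cov = np.linalg.inv(X.T @ W @ X)
print("d_AB =", beta[0], "d_AC =", beta[1])
print("d_BC (direct + indirect) =", beta[1] - beta[0])
print("standard errors:", np.sqrt(np.diag(cov)))
```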

  9. A Comparative Analysis of Polish and Czech International New Ventures

    Directory of Open Access Journals (Sweden)

    Lidia Danik

    2016-06-01

    Full Text Available The goal of this paper is to compare the characteristics of Polish and Czech companies that follow the Born Global internationalization model. More concretely, the analysis aims to discover differences or similarities in the internationalization paths of Polish and Czech SMEs, in the characteristics of their managers in terms of the so-called "international vision", and in their level of innovativeness. The introductory part of the article provides a description of this internationalization model and the traits of International New Ventures (INV), and summarizes recent studies on this topic conducted in Poland and the Czech Republic. In the empirical part, International New Ventures from the two countries are compared. The Polish sample includes 105 companies, surveyed using computer-assisted telephone interviews in autumn 2014. For the Czech Republic, the sample consists of 54 small and medium-sized companies, surveyed using computer-assisted web interviews from November 2013 to January 2014. The surveyed companies in both countries fulfilled the definition of Born Globals. Descriptive statistics, cross-tabulation analysis and non-parametric tests are applied to accomplish the goals of the paper.

  10. Comparative uncertainty analysis of copper loads in stormwater systems using GLUE and grey-box modeling

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Madsen, Henrik; Mikkelsen, Peter Steen

    2007-01-01

    With the proposed model and input data, the GLUE analysis shows that the total sampled copper mass can be predicted within a range of +/- 50% of the median value (385 g), whereas the grey-box analysis showed a prediction uncertainty of less than +/- 30%. Future work will clarify the pros and cons of the two methods...
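
    A bare-bones GLUE sketch conveys the flavour of the first method: sample parameters from a prior range, score each set with an informal likelihood, keep the "behavioural" sets, and report prediction quantiles over the retained ensemble. The wash-off model, threshold and data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(k, rain):
    """Toy copper wash-off model: event load proportional to rainfall."""
    return k * rain

rain = rng.uniform(1, 10, 30)                     # hypothetical events
obs = model(0.8, rain) + rng.normal(0, 0.5, 30)   # synthetic 'observations'

# GLUE: Monte Carlo sampling from the prior parameter range, informal
# likelihood scoring, retention of behavioural parameter sets.
k_samples = rng.uniform(0.1, 2.0, 5000)
sse = np.array([np.sum((model(k, rain) - obs) ** 2) for k in k_samples])
likelihood = np.exp(-sse / sse.min())             # informal likelihood
behavioural = k_samples[likelihood > 0.05]

preds = np.outer(behavioural, rain).sum(axis=1)   # total load per k
lo, med, hi = np.percentile(preds, [5, 50, 95])
print(f"total load: median={med:.1f}, 90% band=[{lo:.1f}, {hi:.1f}]")
```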

  11. Comparative analysis of different methods of modelling of most loaded fuel pin in transients

    International Nuclear Information System (INIS)

    Ovdiyenko, Y.; Khalimonchuk, V.; Ieremenko, M.

    2007-01-01

    Different methods of modelling the most loaded fuel pin are presented in this work. Calculation studies are performed on the example of an accident involving control rod (cluster) ejection in a WWER-1000, using the spatial kinetics code DYN3D, which applies a nodal method to calculate the distribution of the neutron flux in the core. Three methods of modelling the most loaded fuel pin are considered: flux reconstruction in the fuel macrocell, pin-by-pin calculation using the DYN3D/DERAB package, and introduction of an additional 'hot channel'. The results of the performed studies could be used in the development of kinetic calculation models for the preparation of safety analysis reports (Authors)

  12. Structural modelling and comparative analysis of homologous, analogous and specific proteins from Trypanosoma cruzi versus Homo sapiens: putative drug targets for chagas' disease treatment.

    Science.gov (United States)

    Capriles, Priscila V S Z; Guimarães, Ana C R; Otto, Thomas D; Miranda, Antonio B; Dardenne, Laurent E; Degrave, Wim M

    2010-10-29

    Trypanosoma cruzi is the etiological agent of Chagas' disease, an endemic infection that causes thousands of deaths every year in Latin America. Therapeutic options remain inefficient, demanding the search for new drugs and/or new molecular targets. Such efforts can focus on proteins that are specific to the parasite, but analogous enzymes and enzymes with a three-dimensional (3D) structure sufficiently different from the corresponding host proteins may represent equally interesting targets. In order to find these targets, we used the workflows MHOLline and AnEnΠ to obtain 3D models of homologous, analogous and specific proteins of Trypanosoma cruzi versus Homo sapiens. We applied genome-wide comparative modelling techniques to obtain 3D models for 3,286 predicted proteins of T. cruzi. In combination with comparative genome analysis against Homo sapiens, we were able to identify a subset of 397 enzyme sequences, of which 356 are homologous, 3 analogous and 38 specific to the parasite. In this work, we present a set of 397 enzyme models of T. cruzi that can constitute potential structure-based drug targets to be investigated for the development of new strategies to fight Chagas' disease. The strategies presented here support the concept of structural analysis in conjunction with protein functional analysis as an interesting computational methodology to detect potential targets for structure-based rational drug design. For example, 2,4-dienoyl-CoA reductase (EC 1.3.1.34) and triacylglycerol lipase (EC 3.1.1.3), classified as analogous proteins in relation to H. sapiens enzymes, were identified as new potential molecular targets.

  13. Modeling and Analysis of Space Based Transceivers

    Science.gov (United States)

    Moore, Michael S.; Price, Jeremy C.; Abbott, Ben; Liebetreu, John; Reinhart, Richard C.; Kacpura, Thomas J.

    2007-01-01

    This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.

  14. Comparative analysis of old, recycled and new PV modules

    Directory of Open Access Journals (Sweden)

    Haroon Ashfaq

    2017-01-01

    Full Text Available This paper presents a comparative analysis of old, recycled and new PV modules. It is possible to recycle even very old products by modern standard processes in a value-conserving manner. About 90% of the materials recovered from solar panels can be recycled into useful products. Carbon emissions and energy costs are low in manufacturing recycled solar PV modules. Modules can be manufactured with recycled materials and reinstalled in systems as full-quality products that, with today's technology, are good for another 25–30 years. All the PV module models are analysed with the help of MATLAB, which enables the comparison and demonstrates the effectiveness of systems based on recycled PV modules.

  15. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis. Subtitle: Belgian (L1) and Danish (L1) learners' use of Chinese (L2) comparative sentences in written production. Xiaoli Wu, Chun Zhang. Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only a handful of studies have been made either to define the 'error' in a pedagogically insightful way or to empirically investigate the occurrence of errors in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners' use of Chinese (L2) comparative sentences in written production...

  16. Discrete Discriminant analysis based on tree-structured graphical models

    DEFF Research Database (Denmark)

    Perez de la Cruz, Gonzalo; Eslava, Guillermina

    The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance, using estimated error rates, for real and simulated data. The results show that discriminant analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods like those assuming mutual independence between variables and linear logistic regression.

  17. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

    Science.gov (United States)

    Putten, Jim Vander; Nolen, Amanda L.

    2010-01-01

    This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation of issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

  18. Comparative analysis of franchising in international markets

    Directory of Open Access Journals (Sweden)

    Kovačević Maja

    2016-01-01

    Full Text Available The growing role of franchising at the global level calls for its further improvement. This business model has great business potential, especially in the Serbian market, given the current underdevelopment of that market and the inaccessibility of information. At the core of our research, we outline the characteristics of this business model and its impact on business development, and at the same time we try to draw the attention of domestic business entities to the benefits of franchising as a modern way of doing business. We start our research with a focus on a comparative analysis of Serbia, as a very poorly developed market. We then discuss the concept of franchising in Europe, with a special focus on Poland as a country that is ready to export franchising systems, and we continue by providing comparisons with the world's largest markets, namely the USA and Canada. In this paper, we elaborate on the economic viability of this business model, as well as the expansion and growing importance franchising has experienced in the last few years. Emphasis is placed on the use of franchising in many areas of business where there is the possibility of implementing this business model.

  19. Comparative assessment of condensation models for horizontal tubes

    International Nuclear Information System (INIS)

    Schaffrath, A.; Kruessenberg, A.K.; Lischke, W.; Gocht, U.; Fjodorow, A.

    1999-01-01

    The condensation in horizontal tubes plays an important role, e.g. for the determination of the operation mode of the horizontal steam generators of VVER reactors or of passive safety systems for the next generation of nuclear power plants. Two different approaches (HOTKON and KONWAR) for modeling this process have been undertaken by Forschungszentrum Juelich (FZJ) and the University of Applied Sciences Zittau/Goerlitz (HTWS) and implemented into the 1D thermohydraulic code ATHLET, which is developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH for the analysis of anticipated and abnormal transients in light water reactors. Although the condensation model improvements were developed for different applications (VVER steam generators versus the emergency condenser of the SWR1000) with strongly different operating conditions (e.g. the temperature difference over the tube wall is up to 30 K in HORUS and up to 250 K in NOKO; the heat flux density is up to 40 kW/m2 in HORUS and up to 1 GW/m2 in NOKO), both models are now compared and assessed by Forschungszentrum Rossendorf (FZR) e.V. To this end, post-test calculations of selected HORUS experiments were performed with ATHLET/KONWAR and compared to existing ATHLET and ATHLET/HOTKON calculations of HTWS. The calculations with the extensions KONWAR and HOTKON significantly improve the agreement between computational and experimental data. (orig.) [de]

  20. Comparative secretome analysis of rat stomach under different nutritional status

    Directory of Open Access Journals (Sweden)

    Lucia L. Senin

    2015-06-01

    Full Text Available The fact that gastric surgery is at the moment the most effective treatment for obesity highlights the relevance of gastric-derived proteins as potential targets to treat this pathology. Taking advantage of a previously established gastric explant model for endocrine studies, a proteomic analysis of the gastric secretome was performed. To validate this gastric explant system for proteomic analysis, the identification of ghrelin, a classical gastric-derived peptide, was performed by MS. In addition, a differential analysis of gastric secretomes under different nutritional states (control feeding vs fasting vs re-feeding) was performed. The MS-identified proteins are shown in the present manuscript. The data supplied in this article are related to the research article entitled "Comparative secretome analysis of rat stomach under different nutritional status" [1].

  1. A simplified MHD model of capillary Z-Pinch compared with experiments

    Energy Technology Data Exchange (ETDEWEB)

    Shapolov, A.A.; Kiss, M.; Kukhlevsky, S.V. [Institute of Physics, University of Pecs (Hungary)

    2016-11-15

    The most accurate models of the capillary Z-pinches used for the excitation of soft X-ray lasers and photolithography XUV sources are currently based on magnetohydrodynamic (MHD) theory. The output of MHD-based models greatly depends on details of the mathematical description, such as initial and boundary conditions, approximations of plasma parameters, etc. Small experimental groups who develop soft X-ray/XUV sources often use the simplest Z-pinch models for the analysis of their experimental results, even though these models are inconsistent with the MHD equations. In the present study, keeping only the essential terms in the MHD equations, we obtained a simplified MHD model of a cylindrically symmetric capillary Z-pinch. The model gives accurate results compared to experiments with argon plasmas and provides a simple analysis of the temporal evolution of the main plasma parameters. The results clarify the influence of viscosity, heat flux and approximations of plasma conductivity on the dynamics of capillary Z-pinch plasmas. The model can be useful for researchers, especially experimentalists, who develop soft X-ray/XUV sources. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
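
    For contrast with full MHD, the simplest textbook reduction of an imploding pinch is the "snowplow" model, in which the magnetic piston of the discharge current sweeps up the fill gas. The sketch below is that generic model with illustrative numbers, not the authors' simplified MHD model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic snowplow sketch of an imploding capillary discharge.
# Illustrative SI values: capillary radius, fill gas density, current.
MU0 = 4e-7 * np.pi
R0, RHO0, I = 1.5e-3, 1.0e-3, 20e3

def rhs(t, y):
    r, v = y
    m = RHO0 * np.pi * (R0**2 - r**2) + 1e-12   # swept-up mass per length
    force = -MU0 * I**2 / (4 * np.pi * r)       # inward magnetic force
    dm_dt = -2 * np.pi * RHO0 * r * v           # mass pick-up rate
    return [v, (force - v * dm_dt) / m]         # from d(m v)/dt = F

def near_axis(t, y):                            # stop before r hits zero
    return y[0] - 0.05 * R0
near_axis.terminal = True

sol = solve_ivp(rhs, (0.0, 100e-9), [0.999 * R0, 0.0],
                events=near_axis, max_step=1e-10)
print(f"pinch time ~ {sol.t[-1]*1e9:.1f} ns, "
      f"final radius {sol.y[0, -1]*1e3:.3f} mm")
```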

  2. Four generations versus left-right symmetry. A comparative numerical analysis

    International Nuclear Information System (INIS)

    Heidsieck, Tillmann J.

    2012-01-01

    In this work, we present a comparative numerical analysis of the Standard Model (SM) with a sequential fourth generation (SM4) and the left-right symmetric Standard Model (LRM). We focus on the constraints induced by flavour violating ΔF=2 processes in the K and B systems, while the results of studies of collider bounds and electroweak precision tests are taken into account as external inputs. In contrast to many previous studies of both models considered in this work, we do not make any ad-hoc assumptions on the structure of the relevant mixing matrices. Therefore, we employ powerful Monte Carlo methods in order to approximate the viable parameter space of the models. In preparation for our numerical analysis, we present all relevant formulae and review the different numerical methods used in this work. In order to better understand the patterns of new effects in ΔF=2 processes, we perform a fit including all relevant ΔF=2 constraints in the context of the Standard Model. The result of this fit is then used in a general discussion of new effects in ΔF=2 processes in the context of generic extensions of the Standard Model. Our numerical analysis of the SM4 and the LRM demonstrates that in both models the existing anomalies in ΔF=2 processes can easily be resolved. We transparently show how the different observables are connected to each other by their dependence on combinations of mixing parameters. In our analysis of rare decays in the SM4, we establish patterns of flavour violation that could in principle be used to disprove this model on the basis of ΔF=1 processes alone. In the LRM, we discuss the importance of the contributions originating from the exchange of heavy, flavour-changing, neutral Higgs bosons as well as the inability of the LRM to entirely solve the Vub problem.

  3. Four generations versus left-right symmetry. A comparative numerical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Heidsieck, Tillmann J.

    2012-06-18

    In this work, we present a comparative numerical analysis of the Standard Model (SM) with a sequential fourth generation (SM4) and the left-right symmetric Standard Model (LRM). We focus on the constraints induced by flavour violating ΔF=2 processes in the K and B systems, while the results of studies of collider bounds and electroweak precision tests are taken into account as external inputs. In contrast to many previous studies of both models considered in this work, we do not make any ad-hoc assumptions on the structure of the relevant mixing matrices. Therefore, we employ powerful Monte Carlo methods in order to approximate the viable parameter space of the models. In preparation for our numerical analysis, we present all relevant formulae and review the different numerical methods used in this work. In order to better understand the patterns of new effects in ΔF=2 processes, we perform a fit including all relevant ΔF=2 constraints in the context of the Standard Model. The result of this fit is then used in a general discussion of new effects in ΔF=2 processes in the context of generic extensions of the Standard Model. Our numerical analysis of the SM4 and the LRM demonstrates that in both models the existing anomalies in ΔF=2 processes can easily be resolved. We transparently show how the different observables are connected to each other by their dependence on combinations of mixing parameters. In our analysis of rare decays in the SM4, we establish patterns of flavour violation that could in principle be used to disprove this model on the basis of ΔF=1 processes alone. In the LRM, we discuss the importance of the contributions originating from the exchange of heavy, flavour-changing, neutral Higgs bosons as well as the inability of the LRM to entirely solve the Vub problem.

  4. Comparative Analysis of Pain Behaviours in Humanized Mouse Models of Sickle Cell Anemia.

    Directory of Open Access Journals (Sweden)

    Jianxun Lei

    Full Text Available Pain is a hallmark feature of sickle cell anemia (SCA), but management of chronic as well as acute pain remains a major challenge. Mouse models of SCA are essential to examine the mechanisms of pain and develop novel therapeutics. To facilitate this effort, we compared humanized homozygous BERK and Townes sickle mice for the effect of gender and age on pain behaviors. Similar to previously characterized BERK sickle mice, Townes sickle mice show more mechanical, thermal, and deep tissue hyperalgesia with increasing age. Female Townes sickle mice demonstrate more hyperalgesia compared to males, similar to that reported for BERK mice and patients with SCA. Mechanical, thermal and deep tissue hyperalgesia increased further after hypoxia/reoxygenation (H/R) treatment in Townes sickle mice. Together, these data show that BERK sickle mice exhibit a significantly greater degree of hyperalgesia for all behavioral measures as compared to gender- and age-matched Townes sickle mice. However, the genetically distinct "knock-in" strategy of human α and β transgene insertion in Townes mice, as compared to BERK mice, may provide a relative advantage for further genetic manipulations to examine specific mechanisms of pain.

  5. Towards a systemic functional model for comparing forms of discourse in academic writing

    Directory of Open Access Journals (Sweden)

    Meriel Bloor

    2008-04-01

    Full Text Available This article reports on research into the variation of texts across disciplines and considers the implications of this work for the teaching of writing. The research was motivated by the need to improve students’ academic writing skills in English and the limitations of some current pedagogic advice. The analysis compares Methods sections of research articles across four disciplines, including applied and hard sciences, on a cline, or gradient, termed slow to fast. The analysis considers the characteristics the texts share, but more importantly identifies the variation between sets of linguistic features. Working within a systemic functional framework, the texts are analysed for length, sentence length, lexical density, readability, grammatical metaphor, Thematic choice, as well as various rhetorical functions. Contextually relevant reasons for the differences are considered and the implications of the findings are related to models of text and discourse. Recommendations are made for developing domain models that relate clusters of features to positions on a cline.

  6. International Space Station Model Correlation Analysis

    Science.gov (United States)

    Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael

    2018-01-01

    This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and compare the 2015 DTF with previous tests. During the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems: the Internal Wireless Instrumentation System (IWIS), the External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping, and mode shape information. Correlation and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results of the first fundamental mode will be discussed, nonlinear results will be shown, and accelerometer placement will be assessed.

  7. Renewable Energy and Efficiency Modeling Analysis Partnership (REMAP): An Analysis of How Different Energy Models Addressed a Common High Renewable Energy Penetration Scenario in 2025

    Energy Technology Data Exchange (ETDEWEB)

    Blair, N.; Jenkin, T.; Milford, J.; Short, W.; Sullivan, P.; Evans, D.; Lieberman, E.; Goldstein, G.; Wright, E.; Jayaraman, K. R.; Venkatesh, B.; Kleiman, G.; Namovicz, C.; Smith, B.; Palmer, K.; Wiser, R.; Wood, F.

    2009-09-01

    Energy system modeling can be intentionally or unintentionally misused by decision-makers. This report describes how both can be minimized through careful use of models and thorough understanding of their underlying approaches and assumptions. The analysis summarized here assesses the impact that model and data choices have on forecasting energy systems by comparing seven different electric-sector models. This analysis was coordinated by the Renewable Energy and Efficiency Modeling Analysis Partnership (REMAP), a collaboration among governmental, academic, and nongovernmental participants.

  8. Neutron activation analysis-comparative (NAAC)

    International Nuclear Information System (INIS)

    Zimmer, W.H.

    1979-01-01

    A software system for the reduction of comparative neutron activation analysis data is presented. Libraries are constructed to contain the elemental composition and isotopic nuclear data of an unlimited number of standards. Ratios to unknown sample data are performed by standard calibrations. Interfering peak corrections, second-order activation-product corrections, and deconvolution of multiplets are applied automatically. Passive gamma-energy analysis can be performed with the same software. 3 figures
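
    The comparative ratio step described above can be sketched in a few lines. The following Python fragment is a hedged illustration of the comparative NAA principle only; the function name and all numerical values are illustrative assumptions, not part of the NAAC software.

```python
# Hedged sketch of the comparative NAA principle: the element mass in an
# unknown is obtained by ratioing its peak counts to those of a co-irradiated
# standard, after decay-correcting both back to a common reference time.
# The function and all numerical values are illustrative, not the NAAC code.
import math

def comparative_naa(counts_sample, counts_std, mass_std_ug,
                    half_life_s, t_decay_sample_s, t_decay_std_s):
    lam = math.log(2.0) / half_life_s
    a_sample = counts_sample * math.exp(lam * t_decay_sample_s)
    a_std = counts_std * math.exp(lam * t_decay_std_s)
    return mass_std_ug * a_sample / a_std  # mass of element in the unknown

print(comparative_naa(15_000, 12_000, 10.0, 3600.0, 600.0, 300.0))
```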

  9. Comparative Study of Two Daylighting Analysis Methods with Regard to Window Orientation and Interior Wall Reflectance

    Directory of Open Access Journals (Sweden)

    Yeo Beom Yoon

    2014-09-01

    Full Text Available The accuracy and speed of the daylighting analysis developed for use in EnergyPlus are better than those of its predecessors. In EnergyPlus, the detailed method uses the Split-flux algorithm, whereas the DElight method uses the Radiosity algorithm. Many existing studies have addressed the two methods, either individually or in comparison with other daylight analysis methods like ray tracing, but a detailed comparative study of the two is still lacking. Our previous studies show that the Split-flux method overestimates the illuminance, especially for areas away from the window. The Radiosity method has the advantage of accurately predicting this illuminance because of how it deals with diffuse light. For this study, the EnergyPlus model, which had been calibrated using data measured in a real building in previous studies, was used. The calibrated model has a south-oriented window only. This model is then used to analyze the interior illuminance for north, west and east orientations of the window by rotating the model, and by changing the wall reflectance of the model with the south-oriented window. The direct and diffuse components of the illuminance, as well as the algorithms, have been compared for a detailed analysis.

  10. Comparative cost-benefit analysis of tele-homecare for community-dwelling elderly in Japan: Non-Government versus Government Supported Funding Models.

    Science.gov (United States)

    Akiyama, Miki; Abraham, Chon

    2017-08-01

    Tele-homecare is gaining prominence as a viable care alternative, as evidenced by the increase in financial support from international governments to fund initiatives in their respective countries. The primary reason for the funding is to support efforts to reduce lags and increase capacity in access to care as well as to promote preventive measures that can avert costly emergent issues from arising. These efforts are especially important to super-aged and aging societies such as in Japan, many European countries, and the United States (US). However, to date and to our knowledge, a direct comparison of non-government vs. government-supported funding models for tele-homecare is particularly lacking in Japan. The aim of this study is to compare these operational models (i.e., non-government vs. government-supported funding) from a cost-benefit perspective. This simulation study applies to a Japanese hypothetical cohort with implications for other super-aged and aging societies abroad. We performed a cost-benefit analysis (CBA) on two operational models for enabling tele-homecare for elderly community-dwelling cohorts based on a decision tree model, which we created with parameters from published literature. The two models examined are (a) Model 1-non-government-supported funding that includes monthly fixed charges paid by users for a portion of the operating costs, and (b) Model 2-government-supported funding that includes startup and installation costs only (i.e., no operating costs) and no monthly user charges. We performed base case cost-benefit analysis and probabilistic cost-benefit analysis with a Monte Carlo simulation. We calculated net benefit and benefit-to-cost ratios (BCRs) from the societal perspective with a five-year time horizon applying a 3% discount rate for both cost and benefit values. The cost of tele-homecare included (a) the startup system expense, averaged over a five-year depreciation period, and (b) operation expenses (i.e., labor and non
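
    As a rough illustration of the CBA mechanics described above (discounting both costs and benefits over a five-year horizon at 3%, forming net benefit and BCR, and adding a Monte Carlo pass over uncertain inputs), the following Python sketch may help; all monetary values and distributions are placeholders, not figures from the study.

```python
# Hypothetical sketch of the paper's cost-benefit logic: discounted costs and
# benefits over a 5-year horizon at a 3% rate, with a simple Monte Carlo pass.
# All numbers below are illustrative placeholders, not values from the study.
import random

def npv(flows, rate=0.03):
    """Discount a list of yearly flows (years 1..n) to present value."""
    return sum(f / (1.0 + rate) ** t for t, f in enumerate(flows, start=1))

def simulate_once(startup_cost, yearly_op_cost, yearly_benefit, horizon=5):
    costs = startup_cost + npv([yearly_op_cost] * horizon)
    benefits = npv([yearly_benefit] * horizon)
    return benefits - costs, benefits / costs  # net benefit, BCR

# Monte Carlo: draw the uncertain yearly benefit from a uniform range.
results = [simulate_once(5000.0, 1200.0, random.uniform(1500.0, 3500.0))
           for _ in range(10_000)]
net_benefits, bcrs = zip(*results)
print(sum(net_benefits) / len(net_benefits), sum(bcrs) / len(bcrs))
```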

  11. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Directory of Open Access Journals (Sweden)

    Daminov Ildar

    2016-01-01

    Full Text Available This paper presents a comparison of smart meter deployment business models to determine the most suitable option for providing smart meter deployment. The authors consider three main business models for companies: the distribution grid company, the energy supplier (energosbyt) and the metering company. The goal of the article is to compare the business models of power companies for a massive smart metering rollout in the power system of the Russian Federation.

  12. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Science.gov (United States)

    Daminov, Ildar; Tarasova, Ekaterina; Andreeva, Tatyana; Avazov, Artur

    2016-02-01

    This paper presents a comparison of smart meter deployment business models to determine the most suitable option for providing smart meter deployment. The authors consider three main business models for companies: the distribution grid company, the energy supplier (energosbyt) and the metering company. The goal of the article is to compare the business models of power companies for a massive smart metering rollout in the power system of the Russian Federation.

  13. Business models in urban farming: A comparative analysis of case studies from Spain, Italy and Germany

    Directory of Open Access Journals (Sweden)

    Pölling Bernd

    2017-09-01

    Full Text Available The “Urban Agriculture Europe” EU COST-Action (2012–2016) has shown that the complexity of urban agriculture (UA) is hardly compressible into classic business management models and has proposed new management models, such as the Business Model Canvas (BMC). Business models of UA have to be different from rural ones. In particular, factors such as differentiation and diversification, but also low-cost-oriented specialisation, are characteristic of and necessary for business models of UA to stay profitable in the long term under challenging city conditions. This paper aims to highlight how farm enterprises have to adjust to urban conditions by stepping into appropriate business models in order to stay competitive and profitable, and how the BMC is useful for analysing their organisation and performance, both economically and socially. The paper offers an inter-regional analysis of UA enterprises located in Spain, Italy, and Germany, which are further subdivided into: local food, leisure, educational, social, therapeutic, agri-environmental, cultural heritage and experimental farms. The analysis demonstrates that UA is differentially adjusted to specific urban conditions and that the BMC is useful for analysing urban farming. Heterogeneous local food farms and the integration of local and organic food production in social farming business models are most frequent in our case studies.

  14. A Comparative Analysis of Ability of Mimicking Portfolios in Representing the Background Factors

    OpenAIRE

    Asgharian, Hossein

    2004-01-01

    Our aim is to give a comparative analysis of the ability of different factor-mimicking portfolios to represent the background factors. Our analysis contains a cross-sectional regression approach, a time-series regression approach and a portfolio approach for constructing factor-mimicking portfolios. The focus of the analysis is the power of mimicking portfolios in asset pricing models. We conclude that the time-series regression approach, with the book-to-market sorted portfolios as the ba...

  15. Autologous Stem Cell Transplantation in Patients With Multiple Myeloma: An Activity-based Costing Analysis, Comparing a Total Inpatient Model Versus an Early Discharge Model.

    Science.gov (United States)

    Martino, Massimo; Console, Giuseppe; Russo, Letteria; Meliado', Antonella; Meliambro, Nicola; Moscato, Tiziana; Irrera, Giuseppe; Messina, Giuseppe; Pontari, Antonella; Morabito, Fortunato

    2017-08-01

    Activity-based costing (ABC) was developed and advocated as a means of overcoming the systematic distortions of traditional cost accounting. We calculated the cost of high-dose chemotherapy and autologous stem cell transplantation (ASCT) in patients with multiple myeloma using the ABC method, through 2 different care models: the total inpatient model (TIM) and the early-discharge outpatient model (EDOM), and compared this with the approved diagnosis-related group (DRG) Italian tariffs. The TIM and EDOM models involved a total cost of €28,615.15 and €16,499.43, respectively. In the TIM model, the phase with the greatest economic impact was the posttransplant phase (recovery and hematologic engraftment), with 36.4% of the total cost, whereas in the EDOM model it was the pretransplant phase (chemo-mobilization, apheresis procedure, cryopreservation, and storage), with 60.4% of total expenses. In an analysis of each episode, the TIM model showed higher cost absorption than the EDOM model. In particular, the posttransplant phase represented 36.4% of the total costs in the TIM model and 17.7% in the EDOM model, respectively. The estimated reduction in cost per patient using an EDOM model was over €12,115.72. The repayment of the DRG in the Calabrian Region for the ASCT procedure is €59,806. Given the real cost of the transplant, the estimated cost saving per patient is €31,190.85 in the TIM model and €43,306.57 in the EDOM model. In conclusion, the actual repayment of the DRG does not correspond to the real cost of the ASCT procedure in Italy. Moreover, using the EDOM, the cost of ASCT is approximately half that of the TIM model. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Comparative dynamics analysis on xonotlite spherical particles synthesized via hydrothermal synthesis

    Science.gov (United States)

    Liu, F.; Chen, S.; Lin, Q.; Wang, X. D.; Cao, J. X.

    2018-01-01

    The xonotlite crystals were synthesized hydrothermally from CaO and SiO2 raw materials at a Si/Ca molar ratio of 1.0. A comparative dynamics analysis of the xonotlite spherical particles synthesized via this hydrothermal process was explored in this paper. The accuracy of the dynamic equation of the xonotlite spherical particles was verified by two methods: one was comparing the production rate of the xonotlite products calculated by the dynamic equation with the experimental values, and the other was comparing the apparent activation energies calculated by the dynamic equation with those calculated by the Kondo model. The results indicated that the production rates of the xonotlite spherical particles calculated by the dynamic equation were in good agreement with the experimental values, and the apparent activation energy calculated by the dynamic equation (84 kJ·mol⁻¹) was close to that calculated by the Kondo model (77 kJ·mol⁻¹), verifying the high accuracy of the dynamic equation.
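
    For readers unfamiliar with how an apparent activation energy is extracted, the following hedged Python sketch fits the Arrhenius relation ln k = ln A - Ea/(R*T) to rate constants measured at several temperatures; the k and T values are illustrative placeholders, not the paper's data.

```python
# Minimal sketch: apparent activation energy from an Arrhenius fit.
# All rate constants and temperatures below are placeholders.
import math

R = 8.314                              # gas constant, J/(mol*K)
T = [433.15, 453.15, 473.15]           # temperatures in K (placeholders)
k = [2.1e-4, 6.8e-4, 1.9e-3]           # rate constants (placeholders)

# Least-squares slope of ln k against 1/T equals -Ea/R.
x = [1.0 / t for t in T]
y = [math.log(v) for v in k]
xbar = sum(x) / len(x)
ybar = sum(y) / len(y)
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
Ea = -slope * R
print(f"apparent activation energy ~ {Ea / 1000:.0f} kJ/mol")
```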

  17. p-adic analysis compared with real

    CERN Document Server

    Katok, Svetlana

    2007-01-01

    The book gives an introduction to p-adic numbers from the point of view of number theory, topology, and analysis. Compared to other books on the subject, its novelty is both a particularly balanced approach to these three points of view and an emphasis on topics accessible to undergraduates. In addition, several topics from real analysis and elementary topology which are not usually covered in undergraduate courses (totally disconnected spaces and Cantor sets, points of discontinuity of maps and the Baire Category Theorem, surjectivity of isometries of compact metric spaces) are also included in the book. They will enhance the reader's understanding of real analysis and intertwine the real and p-adic contexts of the book. The book is based on an advanced undergraduate course given by the author. The choice of the topic was motivated by the internal beauty of the subject of p-adic analysis, an unusual one in the undergraduate curriculum, and abundant opportunities to compare it with its much more familiar real...

  18. Comparative analysis of traditional and alternative energy sources

    Directory of Open Access Journals (Sweden)

    Adriana Csikósová

    2008-11-01

    Full Text Available The presented thesis, entitled Comparative analysis of traditional and alternative energy resources, includes, on the basis of theoretical information sources, research in the firm, internal data, trends in company and market development, a description of the problem and its application. The theoretical part is dedicated to traditional and alternative energy resources, their reserves, trends in their use and development, and their balance in the world, the EU and Slovakia. The analytical part reflects the profile of the company and an evaluation of the thermal pump market using the General Electric method. While the company is implementing, among other products, thermal pumps based on geothermal energy and ambient energy (air), the mission of the comparative analysis is to compare traditional energy resources with the thermal pump from the ecological, utility and economic points of view. The results of the comparative analysis are summarized in a SWOT analysis. The thesis also includes a questionnaire designed to improve effectiveness and analyse customer satisfaction, as well as the expected forms of support for alternative energy resources (benefits from the government and EU funds).

  19. The Evolution of the Solar Magnetic Field: A Comparative Analysis of Two Models

    Science.gov (United States)

    McMichael, K. D.; Karak, B. B.; Upton, L.; Miesch, M. S.; Vierkens, O.

    2017-12-01

    Understanding the complexity of the solar magnetic cycle is a task that has plagued scientists for decades. However, with the help of computer simulations, we have begun to gain more insight into possible solutions to the plethora of questions inside the Sun. STABLE (Surface Transport and Babcock Leighton) is a newly developed 3D dynamo model that can reproduce features of the solar cycle. In this model, the tilted bipolar sunspots are formed on the surface (based on the toroidal field at the bottom of the convection zone) and then decay and disperse, producing the poloidal field. Since STABLE is a 3D model, it is able to solve the full induction equation in the entirety of the solar convection zone as well as incorporate many free parameters (such as spot depth and turbulent diffusion) which are difficult to observe. In an attempt to constrain some of these free parameters, we compare STABLE to a surface flux transport model called AFT (Advective Flux Transport) which solves the radial component of the magnetic field on the solar surface. AFT is a state-of-the-art surface flux transport model that has a proven record of being able to reproduce solar observations with great accuracy. In this project, we implement synthetic bipolar sunspots into both models, using identical surface parameters, and run the models for comparison. We demonstrate that the 3D structure of the sunspots in the interior and the vertical diffusion of the sunspot magnetic field play an important role in establishing the surface magnetic field in STABLE. We found that when a sufficient amount of downward magnetic pumping is included in STABLE, the surface magnetic field from this model becomes insensitive to the internal structure of the sunspot and more consistent with that of AFT.

  20. With or without a conductor: Comparative analysis of leadership models in the musical ensemble

    Directory of Open Access Journals (Sweden)

    Kovačević Mia

    2016-01-01

    Full Text Available In search of innovative models of work organization, and therefore of the artistic process of a musical ensemble, musical ensembles have in the last ten years developed examples of non-traditional artistic-performing decisions and organizational practice. The paper is conceived as research into, and analysis of, the dominant models of leadership (i.e. of organizing and conducting business) applicable to musical ensembles, and of the experiences of the musicians. The aim is to recognize and define leadership styles that encourage increased motivation and productivity of musicians within the musical ensemble. The paper specifically investigates the relationship and differences between the two dominant models of leadership: conductor-led leadership and collaborative leadership. At the same time, the paper describes and analyses an experiment conducted by the Ensemble Metamorphosis, which applied the two dominant leadership models in its work. In an effort to increase the motivation and productivity of its musicians, Ensemble Metamorphosis also searched for a new management model of work organization and a new model of leadership. The aim of this paper was therefore to investigate the effects of leadership models on artistic quality, the motivation of the musicians, the psychological climate and the overall productivity of a musical organization.

  1. Comparative Neutronics Analysis of DIMPLE S06 Criticality Benchmark with Contemporary Reactor Core Analysis Computer Code Systems

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2015-01-01

    Full Text Available A high-leakage core has been known to be a challenging problem not only for a two-step homogenization approach but also for a direct heterogeneous approach. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes with a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is explicitly presented in this paper. The comprehensive comparative analysis is performed in terms of neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from each lattice physics code shows that the generated transport cross section differs significantly according to the transport approximation used to treat the anisotropic scattering effect. The necessity of the ADF to correct the discontinuity at the assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches show consistent results for all codes, while the comparison with the reference generated by MCNP shows significant error except for another Monte Carlo code, SERPENT2.

  2. Comparative expression pathway analysis of human and canine mammary tumors

    Directory of Open Access Journals (Sweden)

    Marconato Laura

    2009-03-01

    Full Text Available Abstract Background Spontaneous tumors in dogs have been demonstrated to share many features with their human counterparts, including relevant molecular targets, histological appearance, genetics, biological behavior and response to conventional treatments. Mammary tumors in dogs therefore provide an attractive alternative to more classical mouse models, such as transgenics or xenografts, where the tumour is artificially induced. To assess the extent to which dog tumors represent clinically significant human phenotypes, we performed the first genome-wide comparative analysis of transcriptional changes occurring in mammary tumors of the two species, with particular focus on the molecular pathways involved. Results We analyzed human and dog gene expression data derived from both tumor and normal mammary samples. By analyzing the expression levels of about ten thousand dog/human orthologous genes we observed a significant overlap of genes deregulated in the mammary tumor samples, as compared to their normal counterparts. Pathway analysis of gene expression data revealed a great degree of similarity in the perturbation of many cancer-related pathways, including the 'PI3K/AKT', 'KRAS', 'PTEN', 'WNT-beta catenin' and 'MAPK cascade' pathways. Moreover, we show that the transcriptional relationships between different gene signatures observed in human breast cancer are largely maintained in the canine model, suggesting a close interspecies similarity in the network of cancer signalling circuitries. Conclusion Our data confirm and further strengthen the value of the canine mammary cancer model and open up new perspectives for the evaluation of novel cancer therapeutics and the development of prognostic and diagnostic biomarkers to be used in clinical studies.
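
    One step implied by the abstract, testing whether the overlap between the human and dog deregulated gene lists exceeds chance, is commonly assessed with a hypergeometric test. The sketch below uses placeholder counts (N, K, n and k are not the study's numbers).

```python
# Hedged sketch: is the overlap between two deregulated gene lists larger
# than expected by chance? All counts below are placeholders.
from scipy.stats import hypergeom

N = 10_000   # orthologous genes analysed (the "universe")
K = 1_200    # genes deregulated in human tumors (placeholder)
n = 1_000    # genes deregulated in dog tumors (placeholder)
k = 350      # genes deregulated in both species (placeholder)

# P(overlap >= k) under random sampling without replacement.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"hypergeometric enrichment p-value: {p_value:.3g}")
```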

  3. A comparative study of the use of different risk-assessment models in Danish municipalities

    DEFF Research Database (Denmark)

    Sørensen, Kresta Munkholt

    2018-01-01

    Risk-assessment models are widely used in casework involving vulnerable children and families. Internationally, there are a number of different kinds of models with great variation in regard to the characteristics of factors that harm children. Lists of factors have been made, but most of them give very little advice on how the factors should be weighted. This paper will address the use of risk-assessment models in six different Danish municipalities. The paper presents a comparative analysis and discussion of differences and similarities between three models: the Integrated Children’s System (ICS), the Signs of Safety (SoS) model and models developed by the municipalities themselves (MM). The analysis will answer the following two key questions: (i) to which risk and protective factors do the caseworkers give most weight in the risk assessment? and (ii) does each of the different models...

  4. A Comparative Test of Work-Family Conflict Models and Critical Examination of Work-Family Linkages

    Science.gov (United States)

    Michel, Jesse S.; Mitchelson, Jacqueline K.; Kotrba, Lindsey M.; LeBreton, James M.; Baltes, Boris B.

    2009-01-01

    This paper is a comprehensive meta-analysis of over 20 years of work-family conflict research. A series of path analyses were conducted to compare and contrast existing work-family conflict models, as well as a new model we developed which integrates and synthesizes current work-family theory and research. This new model accounted for 40% of the…

  5. Embedded Hyperchaotic Generators: A Comparative Analysis

    Science.gov (United States)

    Sadoudi, Said; Tanougast, Camel; Azzaz, Mohamad Salah; Dandache, Abbas

    In this paper, we present a comparative analysis of FPGA implementation performance, in terms of throughput and resource cost, of five well-known autonomous continuous hyperchaotic systems. The goal of this analysis is to identify the embedded hyperchaotic generator which leads to designs with small logic area cost, satisfactory throughput rates, low power consumption and the low latency required for embedded applications such as secure digital communications between embedded systems. To implement the four-dimensional (4D) chaotic systems, we use a new structural hardware architecture based on a direct VHDL description of the fourth-order Runge-Kutta method (RK-4). The comparative analysis shows that the hyperchaotic Lorenz generator provides attractive performance compared to that of the others. In fact, its hardware implementation requires only 2067 CLB-slices, 36 multipliers and no block RAMs, and achieves a throughput rate of 101.6 Mbps at the output of the FPGA circuit, at a clock frequency of 25.315 MHz, with a low latency time of 316 ns. Consequently, these good implementation performances make the embedded hyperchaotic Lorenz generator the best candidate for embedded communications applications.
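
    As a software reference for the RK-4 scheme the paper maps to VHDL, the following Python sketch integrates a 4D hyperchaotic Lorenz-type system; the exact equations and coefficients of the paper's generator are not given here, so this variant and its parameters are assumptions drawn from the literature.

```python
# Minimal software sketch of fourth-order Runge-Kutta (RK-4) applied to a
# 4D hyperchaotic Lorenz-type system. The equations and parameters below are
# one hyperchaotic extension from the literature, not the paper's exact design.
def f(s, a=10.0, b=8.0 / 3.0, c=28.0, r=-1.0):
    x, y, z, w = s
    return (a * (y - x) + w,
            c * x - y - x * z,
            x * y - b * z,
            -y * z + r * w)

def rk4_step(s, h):
    k1 = f(s)
    k2 = f([si + 0.5 * h * ki for si, ki in zip(s, k1)])
    k3 = f([si + 0.5 * h * ki for si, ki in zip(s, k2)])
    k4 = f([si + h * ki for si, ki in zip(s, k3)])
    return [si + (h / 6.0) * (a1 + 2 * a2 + 2 * a3 + a4)
            for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4)]

state = [1.0, 1.0, 1.0, 1.0]
for _ in range(10_000):          # iterate the generator
    state = rk4_step(state, 0.001)
print(state)
```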

  6. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    Full Text Available When developing an information system, it is important to create clear models and choose suitable modeling languages. The article analyzes the SRML, SBVR, PRR, SWRL and OCL rule-specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. The article presents a theoretical comparison of business rules and business process modeling languages. According to selected modeling aspects, comparisons are made between the different business process modeling languages and business rules representation languages. Finally, the best-fitting set of languages is selected for a three-layer framework for business-rule-based software modeling.

  7. Comparative Analysis of Methodologies for Landscape Ecological Aesthetics in Urban Planning

    Directory of Open Access Journals (Sweden)

    Maija Jankevica

    2012-05-01

    Full Text Available Areas with a high level of urbanisation provoke frequent conflicts between nature and people. There is a lack of cooperation between planners and natural scientists in urban studies and the planning process. Landscapes are usually studied using ecological and aesthetic approaches separately; however, the future of urban planning depends on the integration of these two approaches. This research study looks into different methods of landscape ecological aesthetics and presents a combined method for urban areas. The methods of landscape visual aesthetic assessment, biotope structure analysis, landscape ecology evaluation and multi-disciplinary expert evaluation are compared in the article. The comparison of the obtained values is summarized in a comparative matrix. As a result, a multi-stage model for the evaluation of landscape ecological aesthetics in urban territories is presented. This ecological aesthetics model can be successfully used for the development of urban territories.

  8. Thermal buckling comparative analysis using Different FE (Finite Element) tools

    Energy Technology Data Exchange (ETDEWEB)

    Banasiak, Waldemar; Labouriau, Pedro [INTECSEA do Brasil, Rio de Janeiro, RJ (Brazil); Burnett, Christopher [INTECSEA UK, Surrey (United Kingdom); Falepin, Hendrik [Fugro Engineers SA/NV, Brussels (Belgium)

    2009-12-19

    High operational temperature and pressure in offshore pipelines may lead to unexpected lateral movements, sometimes called lateral buckling, which can have serious consequences for the integrity of the pipeline. The phenomenon of lateral buckling in offshore pipelines needs to be analysed in the design phase using FEM. The analysis should take into account many parameters, including operational temperature and pressure, fluid characteristics, seabed profile, soil parameters, coatings of the pipe, free spans etc. The buckling initiation force is sensitive to small changes of any initial geometric out-of-straightness, thus the modeling of the as-laid state of the pipeline is an important part of the design process. Recently some dedicated finite element programs have been created, making modeling of the offshore environment more convenient than has been the case with the use of general-purpose finite element software. The present paper aims to compare thermal buckling analyses of a subsea pipeline performed using different finite element tools, i.e. general-purpose programs (ANSYS, ABAQUS) and dedicated software (SAGE Profile 3D), for a single pipeline resting on the seabed. The analyses considered the pipeline resting on a flat seabed with small levels of out-of-straightness initiating the lateral buckling. The results show quite good agreement of the buckling results in the elastic range, and in the conclusions further comparative analyses with sensitivity cases are recommended. (author)

  9. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing ... -norms. We discuss the applicability of the technique used for proving the hardness of comparing two hidden Markov models under the L1-norm to other measures of distance between probability distributions. In particular, we show that it cannot be used for proving NP-hardness of determining the Kullback...
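
    To make the comparison problem concrete, the following hedged Python sketch computes the L1-distance between two toy HMMs over all strings of a fixed length by brute-force enumeration with the forward algorithm; the exponential enumeration is consistent with the hardness result the paper proves, and the model parameters are arbitrary illustrations, not the paper's constructions.

```python
# Hedged illustration (not the paper's algorithm): L1-distance between two
# HMMs over all length-n strings, by brute-force enumeration with the forward
# algorithm. The enumeration is exponential in n, consistent with the paper's
# point that comparing HMMs under the L1-norm is hard in general.
from itertools import product

def forward_prob(obs, init, trans, emit):
    """P(obs) for an HMM given as plain nested lists."""
    alpha = [init[s] * emit[s][obs[0]] for s in range(len(init))]
    for o in obs[1:]:
        alpha = [sum(alpha[t] * trans[t][s] for t in range(len(init)))
                 * emit[s][o] for s in range(len(init))]
    return sum(alpha)

# Two small 2-state HMMs over a binary alphabet (toy parameters).
h1 = ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.9, 0.1], [0.2, 0.8]])
h2 = ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.8, 0.2], [0.3, 0.7]])

n = 6
l1 = sum(abs(forward_prob(w, *h1) - forward_prob(w, *h2))
         for w in product((0, 1), repeat=n))
print(f"L1 distance over length-{n} strings: {l1:.4f}")
```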

  10. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    The surrogate-based simulation-optimization technique is an effective approach for optimizing the surfactant-enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model in order to reduce the computational burden, is key to such studies. However, previous studies have generally been based on a stand-alone surrogate model, and have rarely sought to improve the approximation accuracy of the surrogate model to the simulation model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conduct a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy.
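
    A minimal sketch of the ensemble-surrogate idea follows, with scikit-learn models standing in for the RBFANN/SVR/Kriging surrogates and simple inverse-error weights standing in for the set pair analysis weights; the synthetic "simulator" and all settings are assumptions for illustration only.

```python
# Hedged sketch of an ensemble surrogate: fit several surrogates to samples of
# a stand-in simulator, then combine predictions with performance-based weights.
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(120, 3))                    # stand-in design samples
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * X[:, 2]  # stand-in simulator
X_tr, y_tr, X_te, y_te = X[:90], y[:90], X[90:], y[90:]

models = [SVR().fit(X_tr, y_tr),
          GaussianProcessRegressor().fit(X_tr, y_tr),   # "Kriging"
          MLPRegressor(max_iter=3000, random_state=0).fit(X_tr, y_tr)]

preds = [m.predict(X_te) for m in models]
errs = [np.mean((p - y_te) ** 2) for p in preds]
w = np.array([1.0 / e for e in errs])
w /= w.sum()                                            # inverse-error weights
ensemble = sum(wi * p for wi, p in zip(w, preds))
print("weights:", np.round(w, 3),
      "ensemble MSE:", np.mean((ensemble - y_te) ** 2))
```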

  11. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of potential bankruptcy, it can take preventive action. In order to detect potential bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to the comparative study of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
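
    The comparison protocol itself is straightforward to reproduce in outline. The sketch below cross-validates a few of the named classifiers on a synthetic stand-in dataset (the paper's financial data are not reproduced here), so the accuracies it prints are not comparable to the reported 77.5%.

```python
# Hedged sketch of the comparison protocol: cross-validated accuracy of
# several classifiers on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
candidates = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}
for name, clf in candidates.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```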

  12. Disability in Mexico: a comparative analysis between descriptive models and historical periods using a timeline

    Directory of Open Access Journals (Sweden)

    Hugo Sandoval

    2017-07-01

    Full Text Available Some interpretations frequently argue that the three Disability Models (DM: Charity, Medical/Rehabilitation, and Social) correspond to historical periods in terms of chronological succession. These views permeate a priori the major official documents on the subject in Mexico. This paper tests whether this association is plausible by applying a timeline method. A document search was made with inclusion and exclusion criteria in databases to select representative studies with which to depict milestones in the timelines for each period. The following is demonstrated: (1) the models should be considered as categories of analysis and not as historical periods, since elements of all three models remain prevalent to date, and (2) the association between disability models and historical periods results in teleological interpretations of the history of disability in Mexico.

  13. Comparing methods of classifying life courses: Sequence analysis and latent class analysis

    NARCIS (Netherlands)

    Elzinga, C.H.; Liefbroer, Aart C.; Han, Sapphire

    2017-01-01

    We compare life course typology solutions generated by sequence analysis (SA) and latent class analysis (LCA). First, we construct an analytic protocol to arrive at typology solutions for both methodologies and present methods to compare the empirical quality of alternative typologies. We apply this

  14. Comparing methods of classifying life courses: sequence analysis and latent class analysis

    NARCIS (Netherlands)

    Han, Y.; Liefbroer, A.C.; Elzinga, C.

    2017-01-01

    We compare life course typology solutions generated by sequence analysis (SA) and latent class analysis (LCA). First, we construct an analytic protocol to arrive at typology solutions for both methodologies and present methods to compare the empirical quality of alternative typologies. We apply this

  15. Comparative analysis of different methods in mathematical modelling of the recuperative heat exchangers

    International Nuclear Information System (INIS)

    Debeljkovic, D.Lj.; Stevic, D.Z.; Simeunovic, G.V.; Misic, M.A.

    2015-01-01

    Heat exchangers are frequently used as constructive elements in various plants and their dynamics is very important. Their operation is usually controlled by manipulating inlet fluid temperatures or mass flow rates. On the basis of the accepted and critically clarified assumptions, a linearized mathematical model of the cross-flow heat exchanger has been derived, taking into account the wall dynamics. The model is based on the fundamental law of energy conservation, covers all heat accumulation storages in the process, and leads to a set of partial differential equations (PDEs) whose solution is not possible in closed form. In order to overcome these difficulties, this paper analyzes different methods for modeling the heat exchanger: an approach based on the Laplace transformation, approximation of the partial differential equations by finite differences, the method of physical discretization, and the transport approach. Specifying the input temperatures and output variables, under constant initial conditions, the step transient responses have been simulated and presented in graphic form in order to compare the results of the four characteristic methods considered in this paper and analyze their practical significance. (author)
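
    Of the four methods, the finite-difference approximation is the easiest to illustrate. The sketch below applies an explicit upwind scheme to a deliberately simplified single-fluid energy balance dT/dt + v*dT/dx = k*(Tw - T) with a fixed wall temperature; it is not the authors' linearized cross-flow model, and all parameter values are placeholders.

```python
# Minimal sketch (not the authors' full model): explicit upwind
# finite-difference solution of dT/dt + v*dT/dx = k*(Tw - T).
import numpy as np

nx, L = 50, 1.0                  # grid points, exchanger length [m]
dx = L / (nx - 1)
v, k, Tw = 0.5, 2.0, 80.0        # velocity, heat-transfer coeff., wall temp.
dt = 0.5 * dx / v                # CFL-limited explicit time step

T = np.full(nx, 20.0)            # initial fluid temperature
T_in = 20.0                      # inlet boundary condition
for _ in range(2000):            # march to (near) steady state
    T[1:] = T[1:] - v * dt / dx * (T[1:] - T[:-1]) + dt * k * (Tw - T[1:])
    T[0] = T_in
print(f"outlet temperature ~ {T[-1]:.1f} degC")
```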

  16. DIDEM - An integrated model for comparative health damage costs calculation of air pollution

    Science.gov (United States)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara

    2018-01-01

    Air pollution represents a continuous hazard to human health. Administrations, companies and the population need efficient indicators of the possible effects of a change in decision, strategy or habit. The monetary quantification of the health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decisions and information at all levels. The development of modelling tools for the calculation of external costs can support analysts in producing consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta external costs of air pollution by comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, represented by a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper, together with a comparison with other existing models worldwide.
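
    The impact pathway chain that DIDEM implements can be caricatured in a few lines: a concentration delta is converted into health impacts through a concentration-response function and then monetised. The coefficients below are placeholders, not DIDEM parameters.

```python
# Hedged sketch of the impact pathway chain: concentration delta ->
# attributable health cases -> monetised external cost. All values are
# illustrative placeholders, not DIDEM inputs or outputs.
delta_conc = 1.5          # ug/m3 change between the two emission scenarios
population = 850_000      # exposed population
baseline_rate = 0.008     # baseline yearly incidence of the health endpoint
crf = 0.006               # relative risk increase per ug/m3 (placeholder)
unit_cost = 60_000.0      # monetary value per case, EUR (placeholder)

cases = delta_conc * crf * baseline_rate * population
print(f"delta external cost ~ EUR {cases * unit_cost:,.0f}/year")
```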

  17. Transitions in state public health law: comparative analysis of state public health law reform following the Turning Point Model State Public Health Act.

    Science.gov (United States)

    Meier, Benjamin Mason; Hodge, James G; Gebbie, Kristine M

    2009-03-01

    Given the public health importance of law modernization, we undertook a comparative analysis of policy efforts in 4 states (Alaska, South Carolina, Wisconsin, and Nebraska) that have considered public health law reform based on the Turning Point Model State Public Health Act. Through national legislative tracking and state case studies, we investigated how the Turning Point Act's model legal language has been considered for incorporation into state law and analyzed key facilitating and inhibiting factors for public health law reform. Our findings provide the practice community with a research base to facilitate further law reform and inform future scholarship on the role of law as a determinant of the public's health.

  18. A Comparative Meta-Analysis of 5E and Traditional Approaches in Turkey

    Science.gov (United States)

    Anil, Özgür; Batdi, Veli

    2015-01-01

    The aim of this study is to compare the 5E learning model with traditional learning methods in terms of their effect on students' academic achievement, retention and attitude scores. In this context, the meta-analytic method known as the "analysis of analyses" was used and a review undertaken of the studies and theses (N = 14) executed…

  19. Nuclear power ecology: comparative analysis

    International Nuclear Information System (INIS)

    Trofimenko, A.P.; Lips'ka, A.Yi.; Pisanko, Zh.Yi.

    2005-01-01

    The ecological effects of different energy sources are compared. The main actions required for further nuclear power development, namely increased safety and waste management, are noted. The reasons for the public's restrained attitude toward nuclear power, and the role of social and political factors in it, are analyzed. An attempt is undertaken to separate the real difficulties of nuclear power from the imaginary ones that appear in some mass media. International actions for environmental protection are noted. Risk factors in the use of different energy sources are compared. The results of the analysis indicate that the ecological impact and risk of nuclear power are minimal

  20. Comparative Admittance-based Analysis for Different Droop Control Approaches in DC Microgrids

    DEFF Research Database (Denmark)

    Jin, Zheming; Meng, Lexuan; Guerrero, Josep M.

    2017-01-01

    In DC microgrids, virtual resistance based droop control is broadly used as the fundamental coordination method. As the virtual resistance guarantees the load sharing effect in steady states, the output admittance determines the dynamic response of converters in transient states, which is critical ... difference in control architecture. In this paper, a comparative admittance-based analysis is carried out between these two approaches. State-space models and more general analytical models are established to derive the output admittance of droop-controlled converters in DC microgrids. Simulations ...

  1. Model-based meta-analysis for comparing Vitamin D2 and D3 parent-metabolite pharmacokinetics.

    Science.gov (United States)

    Ocampo-Pelland, Alanna S; Gastonguay, Marc R; Riggs, Matthew M

    2017-08-01

    Association of Vitamin D (D3 & D2) and its 25OHD metabolite (25OHD3 & 25OHD2) exposures with various diseases is an active research area. D3 and D2 dose-equivalency and each form's ability to raise 25OHD concentrations are not well-defined. The current work describes a population pharmacokinetic (PK) model for D2 and 25OHD2 and the use of a previously developed D3-25OHD3 PK model [1] for comparing D3 and D2-related exposures. Public-source D2 and 25OHD2 PK data in healthy or osteoporotic populations, including 17 studies representing 278 individuals (15 individual-level and 18 arm-level units), were selected using search criteria in PUBMED. Data included oral, single and multiple D2 doses (400-100,000 IU/d). Nonlinear mixed effects models were developed simultaneously for D2 and 25OHD2 PK (NONMEM v7.2) by considering 1- and 2-compartment models with linear or nonlinear clearance. Unit-level random effects and residual errors were weighted by arm sample size. Model simulations compared 25OHD exposures, following repeated D2 and D3 oral administration across typical dosing and baseline ranges. D2 parent and metabolite were each described by 2-compartment models with numerous parameter estimates shared with the D3-25OHD3 model [1]. Notably, parent D2 was eliminated (converted to 25OHD) through a first-order clearance whereas the previously published D3 model [1] included a saturable non-linear clearance. Similar to 25OHD3 PK model results [1], 25OHD2 was eliminated by a first-order clearance, which was almost twice as fast as the former. Simulations at lower baselines, following lower equivalent doses, indicated that D3 was more effective than D2 at raising 25OHD concentrations. Due to saturation of D3 clearance, however, at higher doses or baselines, the probability of D2 surpassing D3's ability to raise 25OHD concentrations increased substantially. Since 25OHD concentrations generally surpassed 75 nmol/L at these higher baselines by 3 months, there would be no
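
    A deliberately simplified sketch of the parent-to-metabolite kinetics described above (first-order conversion of D2 and first-order elimination of 25OHD2) is given below; the real model is a two-compartment population model fit in NONMEM, so the one-compartment ODEs, rate constants and dose here are illustrative assumptions only.

```python
# Simplified parent -> metabolite kinetics: first-order conversion of the
# parent and first-order elimination of the metabolite. Rate constants and
# the dose are placeholders; the published model is more elaborate.
import numpy as np
from scipy.integrate import solve_ivp

k_conv, k_el = 0.8, 0.05   # 1/day, placeholders

def rhs(t, y):
    parent, metab = y
    return [-k_conv * parent,                 # D2 converted to 25OHD2
            k_conv * parent - k_el * metab]   # 25OHD2 formed, then eliminated

sol = solve_ivp(rhs, (0.0, 60.0), [100.0, 0.0], dense_output=True)
t = np.linspace(0.0, 60.0, 7)
print(np.round(sol.sol(t), 2))                # parent and metabolite over time
```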

  2. A comparative analysis of Chikungunya and Zika transmission

    Directory of Open Access Journals (Sweden)

    Julien Riou

    2017-06-01

    Full Text Available The recent global dissemination of Chikungunya and Zika has fostered public health concern worldwide. To better understand the drivers of transmission of these two arboviral diseases, we propose a joint analysis of Chikungunya and Zika epidemics in the same territories, taking into account the common epidemiological features of the epidemics: transmitted by the same vector, in the same environments, and observed by the same surveillance systems. We analyse eighteen outbreaks in French Polynesia and the French West Indies using a hierarchical time-dependent SIR model accounting for the effect of virus, location and weather on transmission, and based on a disease-specific serial interval. We show that Chikungunya and Zika have similar transmission potential in the same territories (transmissibility ratio between Zika and Chikungunya of 1.04 [95% credible interval: 0.97; 1.13]), but that detection and reporting rates were different (around 19% for Zika and 40% for Chikungunya). Temperature variations between 22 °C and 29 °C did not alter transmission, but increased precipitation showed a dual effect, first reducing transmission after a two-week delay, then increasing it around five weeks later. The present study provides valuable information for risk assessment and introduces a modelling framework for the comparative analysis of arboviral infections that can be extended to other viruses and territories.
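
    For orientation, the following compact sketch shows the constant-parameter SIR machinery that underlies the paper's hierarchical time-dependent model; the population size, R0 and recovery rate are placeholders, not estimates from the outbreaks analysed.

```python
# Constant-parameter SIR sketch; the paper's model is hierarchical and
# time-dependent, so this is only a starting point. Values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

N = 270_000              # population size (placeholder)
R0, gamma = 1.6, 1 / 14  # reproduction number, recovery rate (placeholders)
beta = R0 * gamma

def sir(t, y):
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 365), [N - 10, 10, 0], t_eval=np.linspace(0, 365, 8))
print(np.round(sol.y[1], 0))   # infectious counts over the year
```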

  3. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods, which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging with these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
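
    One of the better-performing schemes, the Granger-Ramanathan average, reduces to a least-squares problem. The sketch below computes variant-A-style weights (unconstrained least squares of observations on member simulations; variants B and C add an intercept or constraints) and the Nash-Sutcliffe Efficiency of the combination, on synthetic stand-in hydrographs.

```python
# Hedged sketch: Granger-Ramanathan variant-A weights by unconstrained least
# squares, then the Nash-Sutcliffe Efficiency of the combined hydrograph.
# All data are synthetic stand-ins for the study's 12 member hydrographs.
import numpy as np

rng = np.random.default_rng(1)
obs = 10 + 3 * np.sin(np.linspace(0, 12, 400)) + rng.normal(0, 0.5, 400)
members = np.column_stack([obs * rng.uniform(0.7, 1.3) +
                           rng.normal(0, 1.0, obs.size) for _ in range(12)])

w, *_ = np.linalg.lstsq(members, obs, rcond=None)   # GR variant A weights
combined = members @ w

def nse(sim, obs):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("weights:", np.round(w, 2))
print("NSE of combination:", round(nse(combined, obs), 3))
```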

  4. A comparative analysis of simulated and observed photosynthetic CO2 uptake in two coniferous forest canopies

    DEFF Research Database (Denmark)

    Ibrom, A.; Jarvis, P.G.; Clement, R.

    2006-01-01

    ... photosynthetically-active-radiation-induced biophysical variability in the simulated Pg. Analysis of residuals identified only small systematic differences between the modeled flux estimates and turbulent flux measurements at high vapor pressure saturation deficits. The merits and limitations of comparative analysis for quality evaluation of both...

  5. Modelling pesticides volatilisation in greenhouses: Sensitivity analysis of a modified PEARL model.

    Science.gov (United States)

    Houbraken, Michael; Doan Ngoc, Kim; van den Berg, Frederik; Spanoghe, Pieter

    2017-12-01

    The application of the existing PEARL model was extended to include estimations of the concentration of crop protection products in greenhouse (indoor) air due to volatilisation from the plant surface. The model was modified to include the processes of ventilation of the greenhouse air to the outside atmosphere and transformation in the air. A sensitivity analysis of the model was performed by varying selected input parameters on a one-by-one basis and comparing the model outputs with the outputs of the reference scenarios. The sensitivity analysis indicates that, in addition to vapour pressure, the model had the highest ratio of variation for the ventilation rate and the thickness of the boundary layer on the day of application. On the days after application, the competing processes of degradation and uptake in the plant become more important. Copyright © 2017 Elsevier B.V. All rights reserved.
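
    The one-by-one protocol is easy to sketch. Below, a toy stand-in for the volatilisation model is perturbed one input at a time around a reference scenario and the output ratios are compared; the toy response function and parameter ranges are assumptions, not the modified PEARL equations.

```python
# One-at-a-time (OAT) sensitivity sketch on a toy stand-in model.
# The response function and all ranges are assumptions, not PEARL.
def indoor_air_conc(vapour_pressure, ventilation_rate, boundary_layer):
    # Toy response: emission rises with vapour pressure and falls with a
    # thicker boundary layer; ventilation dilutes the indoor concentration.
    emission = vapour_pressure / boundary_layer
    return emission / ventilation_rate

reference = dict(vapour_pressure=1e-3, ventilation_rate=0.5,
                 boundary_layer=0.01)
base = indoor_air_conc(**reference)

for name in reference:
    for factor in (0.5, 2.0):              # halve and double each input
        scenario = dict(reference, **{name: reference[name] * factor})
        out = indoor_air_conc(**scenario)
        print(f"{name} x{factor}: output ratio = {out / base:.2f}")
```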

  6. Survey sequencing and comparative analysis of the elephant shark (Callorhinchus milii) genome.

    Directory of Open Access Journals (Sweden)

    Byrappa Venkatesh

    2007-04-01

    Full Text Available Owing to their phylogenetic position, cartilaginous fishes (sharks, rays, skates, and chimaeras) provide a critical reference for our understanding of vertebrate genome evolution. The relatively small genome of the elephant shark, Callorhinchus milii, a chimaera, makes it an attractive model cartilaginous fish genome for whole-genome sequencing and comparative analysis. Here, the authors describe survey sequencing (1.4x coverage) and comparative analysis of the elephant shark genome, one of the first cartilaginous fish genomes to be sequenced to this depth. Repetitive sequences, represented mainly by a novel family of short interspersed element-like and long interspersed element-like sequences, account for about 28% of the elephant shark genome. Fragments of approximately 15,000 elephant shark genes reveal specific examples of genes that have been lost differentially during the evolution of tetrapod and teleost fish lineages. Interestingly, the degree of conserved synteny and conserved sequences between the human and elephant shark genomes is higher than that between human and teleost fish genomes. The elephant shark contains four putative Hox clusters, indicating that, unlike teleost fish genomes, the elephant shark genome has not experienced an additional whole-genome duplication. These findings underscore the importance of the elephant shark as a critical reference vertebrate genome for comparative analysis of the human and other vertebrate genomes. This study also demonstrates that a survey-sequencing approach can be applied productively for comparative analysis of distantly related vertebrate genomes.

  7. Comparative study: TQ and Lean Production ownership models in health services.

    Science.gov (United States)

    Eiro, Natalia Yuri; Torres-Junior, Alvair Silveira

    2015-01-01

    To compare the application of Total Quality (TQ) models used in the processes of a health service with cases of lean healthcare, drawing on literature from another institution that has also applied this model. This is qualitative research conducted through a descriptive case study. Through critical analysis of the institutions studied, it was possible to compare the traditional quality approach observed in one case with the theoretical and practical lean production approach used in the other; the specifics are described below. The research found that the lean model was better suited to people who work systemically and generate flow. It also pointed towards some potential challenges in the introduction and implementation of lean methods in health services.

  8. CloVR-Comparative: automated, cloud-enabled comparative microbial genome sequence analysis pipeline

    OpenAIRE

    Agrawal, Sonia; Arze, Cesar; Adkins, Ricky S.; Crabtree, Jonathan; Riley, David; Vangala, Mahesh; Galens, Kevin; Fraser, Claire M.; Tettelin, Hervé; White, Owen; Angiuoli, Samuel V.; Mahurkar, Anup; Fricke, W. Florian

    2017-01-01

    Background The benefit of increasing genomic sequence data to the scientific community depends on easy-to-use, scalable bioinformatics support. CloVR-Comparative combines commonly used bioinformatics tools into an intuitive, automated, and cloud-enabled analysis pipeline for comparative microbial genomics. Results CloVR-Comparative runs on annotated complete or draft genome sequences that are uploaded by the user or selected via a taxonomic tree-based user interface and downloaded from NCBI. ...

  9. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  10. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows the sensitivity indices of each scalar model input to be estimated, while the 'dispersion model' allows the total sensitivity index of the functional model inputs to be derived. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
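
    For a concrete sense of variance-based indices, the sketch below estimates first-order Sobol indices by pick-freeze Monte Carlo (a Saltelli-style estimator) on the Ishigami test function, which stands in for the computer code; the paper itself additionally passes through a GLM/GAM metamodel, which is not reproduced here.

```python
# First-order Sobol indices by pick-freeze Monte Carlo on the Ishigami
# function (a common sensitivity-analysis test case, not the paper's code).
import numpy as np

rng = np.random.default_rng(2)

def ishigami(x):
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # replace column i of A with B's column
    S_i = np.mean(yB * (ishigami(ABi) - yA)) / var_y   # Saltelli (2010)
    print(f"first-order Sobol index S{i + 1} ~ {S_i:.2f}")
```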

  11. Nursing home quality: a comparative analysis using CMS Nursing Home Compare data to examine differences between rural and nonrural facilities.

    Science.gov (United States)

    Lutfiyya, May Nawal; Gessert, Charles E; Lipsky, Martin S

    2013-08-01

    Advances in medicine and an aging US population suggest that there will be an increasing demand for nursing home services. Although nursing homes are highly regulated and scrutinized, their quality remains a concern and may be a greater issue to those living in rural communities. Despite this, few studies have investigated differences in the quality of nursing home care across the rural-urban continuum. The purpose of this study was to compare the quality of rural and nonrural nursing homes by using aggregated rankings on multiple quality measures calculated by the Centers for Medicare and Medicaid Services and reported on their Nursing Home Compare Web site. Independent-sample t tests were performed to compare the mean ratings on the reported quality measures of rural and nonrural nursing homes. A linear mixed binary logistic regression model controlling for state was performed to determine if the covariates of ownership, number of beds, and geographic locale were associated with a higher overall quality rating. Of the 15,177 nursing homes included in the study sample, 69.2% were located in nonrural areas and 30.8% in rural areas. The t test analysis comparing the overall, health inspection, staffing, and quality measure ratings of rural and nonrural nursing homes yielded statistically significant results for 3 measures, 2 of which (overall ratings and health inspections) favored rural nursing homes. Although a higher percentage of nursing homes (44.8%-42.2%) received a 4-star or higher rating, regression analysis using an overall rating of 4 stars or higher as the dependent variable revealed that when controlling for state and adjusting for size and ownership, rural nursing homes were less likely to have a 4-star or higher rating when compared with nonrural nursing homes (OR = .901, 95% CI 0.824-0.986). Mixed model logistic regression analysis suggested that rural nursing home quality was not comparable to that of nonrural nursing homes. When controlling for

  12. Skull Development, Ossification Pattern, and Adult Shape in the Emerging Lizard Model Organism Pogona vitticeps: A Comparative Analysis With Other Squamates

    Directory of Open Access Journals (Sweden)

    Joni Ollonen

    2018-03-01

    Full Text Available The rise of the Evo-Devo field and the development of multidisciplinary research tools at various levels of biological organization have led to a growing interest in researching for new non-model organisms. Squamates (lizards and snakes are particularly important for understanding fundamental questions about the evolution of vertebrates because of their high diversity and evolutionary innovations and adaptations that portrait a striking body plan change that reached its extreme in snakes. Yet, little is known about the intricate connection between phenotype and genotype in squamates, partly due to limited developmental knowledge and incomplete characterization of embryonic development. Surprisingly, squamate models have received limited attention in comparative developmental studies, and only a few species examined so far can be considered as representative and appropriate model organism for mechanistic Evo-Devo studies. Fortunately, the agamid lizard Pogona vitticeps (central bearded dragon is one of the most popular, domesticated reptile species with both a well-established history in captivity and key advantages for research, thus forming an ideal laboratory model system and justifying his recent use in reptile biology research. We first report here the complete post-oviposition embryonic development for P. vitticeps based on standardized staging systems and external morphological characters previously defined for squamates. Whereas the overall morphological development follows the general trends observed in other squamates, our comparisons indicate major differences in the developmental sequence of several tissues, including early craniofacial characters. Detailed analysis of both embryonic skull development and adult skull shape, using a comparative approach integrating CT-scans and gene expression studies in P. vitticeps as well as comparative embryology and 3D geometric morphometrics in a large dataset of lizards and snakes, highlights

  13. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    Science.gov (United States)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. Evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level result in different population level outcomes under different conditions. Extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes to find that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and sense-making of observed trends are of a different character. Builders notice rules through available blocks-based primitives, often bypassing their enactment while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre and posttests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both

  14. Textbooks in transitional countries: Towards a methodology for comparative analysis

    Directory of Open Access Journals (Sweden)

    Miha Kovač

    2004-01-01

    Full Text Available In its first part, the paper analyses the ambiguous nature of the book as a medium: its physical production and its distribution to the end user takes place on a market basis; on the other hand, its content is predominantly consumed in a sector that was at least in the continental Europe traditionally considered as public and non-profit making. This ambiguous nature of the book and with it the impact of the market on the organization of knowledge in book format remains a dark spot in contemporary book research. On the other hand, textbooks are considered as ephemera both in contemporary education and book studies. Therefore, research on textbooks publishing models could be considered as a blind-spot of contemporary social studies. As a consequence, in the majority of European countries, textbook publishing and the organization of the textbook market are considered as self-evident. Throughout a comparative analysis of textbook publishing models in small transitional and developed countries, the paper points out that this self-evident organization of the textbook market is always culturally determined. In its final part, the paper compares different models of textbook publishing and outlines the scenarios for the development of the Slovene textbook market.

  15. Comparative molecular analysis of early and late cancer cachexia-induced muscle wasting in mouse models.

    Science.gov (United States)

    Sun, Rulin; Zhang, Santao; Lu, Xing; Hu, Wenjun; Lou, Ning; Zhao, Yan; Zhou, Jia; Zhang, Xiaoping; Yang, Hongmei

    2016-12-01

    Cancer-induced muscle wasting, which commonly occurs in cancer cachexia, is characterized by impaired quality of life and poor patient survival. To identify an appropriate treatment, research on the mechanism underlying muscle wasting is essential. Thus far, studies on muscle wasting using cancer cachectic models have generally focused on early cancer cachexia (ECC), before severe body weight loss occurs. In the present study, we established models of ECC and late cancer cachexia (LCC) and compared different stages of cancer cachexia using two cancer cachectic mouse models induced by colon-26 (C26) adenocarcinoma or Lewis lung carcinoma (LLC). In each model, tumor-bearing (TB) and control (CN) mice were injected with cancer cells and PBS, respectively. The TB and CN mice, which were euthanized on the 24th day or the 36th day after injection, were defined as the ECC and ECC-CN mice or the LCC and LCC-CN mice. In addition, the tissues were harvested and analyzed. We found that both the ECC and LCC mice developed cancer cachexia. The amounts of muscle loss differed between the ECC and LCC mice. Moreover, the expression of some molecules was altered in the muscles from the LCC mice but not in those from the ECC mice compared with their CN mice. In conclusion, the molecules with altered expression in the muscles from the ECC and LCC mice were not exactly the same. These findings may provide some clues for therapy which could prevent the muscle wasting in cancer cachexia from progression to the late stage.

  16. Comparative analysis of insect succession data from Victoria (Australia) using summary statistics versus preceding mean ambient temperature models.

    Science.gov (United States)

    Archer, Mel

    2014-03-01

    Minimum postmortem interval (mPMI) can be estimated with preceding mean ambient temperature models that predict carrion taxon pre-appearance interval. But accuracy has not been compared with using summary statistics (mean ± SD of taxon arrival/departure day, range, 95% CI). This study collected succession data from ten experimental and five control (infrequently sampled) pig carcasses over two summers (n = 2 experimental, n = 1 control per placement date). Linear and exponential preceding mean ambient temperature models for appearance and departure times were constructed for 17 taxa/developmental stages. There was minimal difference in linear or exponential model success, although arrival models were more often significant: 65% of linear arrival (r2 = 0.09–0.79) and exponential arrival models (r2 = 0.05–81.0) were significant, and 35% of linear departure (r2 = 0.0–0.71) and exponential departure models (r2 = 0.0–0.72) were significant. Performance of models and summary statistics for estimating mPMI was compared in two forensic cases. Only summary statistics produced accurate mPMI estimates.

  17. A Comparative Study of Theoretical Graph Models for Characterizing Structural Networks of Human Brain

    Directory of Open Access Journals (Sweden)

    Xiaojin Li

    2013-01-01

    Full Text Available Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs are localized by recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL to address the limitations in the identification of the brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using the state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties between two graph models, namely, stickiness-index-based model (STICKY and scale-free gene duplication model (SF-GD, that have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, STICKY and SF-GD models have better performances in characterizing the structural human brain network.

  18. Static aeroelastic analysis including geometric nonlinearities based on reduced order model

    Directory of Open Access Journals (Sweden)

    Changchuan Xie

    2017-04-01

    Full Text Available This paper describes a method proposed for modeling large deflection of aircraft in nonlinear aeroelastic analysis by developing reduced order model (ROM. The method is applied for solving the static aeroelastic and static aeroelastic trim problems of flexible aircraft containing geometric nonlinearities; meanwhile, the non-planar effects of aerodynamics and follower force effect have been considered. ROMs are computational inexpensive mathematical representations compared to traditional nonlinear finite element method (FEM especially in aeroelastic solutions. The approach for structure modeling presented here is on the basis of combined modal/finite element (MFE method that characterizes the stiffness nonlinearities and we apply that structure modeling method as ROM to aeroelastic analysis. Moreover, the non-planar aerodynamic force is computed by the non-planar vortex lattice method (VLM. Structure and aerodynamics can be coupled with the surface spline method. The results show that both of the static aeroelastic analysis and trim analysis of aircraft based on structure ROM can achieve a good agreement compared to analysis based on the FEM and experimental result.

  19. Energy-Water Modeling and Analysis | Energy Analysis | NREL

    Science.gov (United States)

    Generation (ReEDS Model Analysis) U.S. Energy Sector Vulnerabilities to Climate Change and Extreme Weather Modeling and Analysis Energy-Water Modeling and Analysis NREL's energy-water modeling and analysis vulnerabilities from various factors, including water. Example Projects Renewable Electricity Futures Study

  20. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Full Text Available Smoothing noisy data is commonly encountered in engineering domain, and currently robust penalized regression spline models are perceived to be the most promising methods for coping with this issue, due to their flexibilities in capturing the nonlinear trends in the data and effectively alleviating the disturbance from the outliers. Against such a background, this paper conducts a thoroughly comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are reelaborated starting from their origins, with their derivation process reformulated and the corresponding algorithms reorganized under a unified framework. Performances of these two estimators are thoroughly evaluated from the aspects of fitting accuracy, robustness, and execution time upon the MATLAB platform. Elaborately comparative experiments demonstrate that robust penalized spline smoothing methods possess the capability of resistance to the noise effect compared with the nonrobust penalized LS spline regression method. Furthermore, the M-estimator exerts stable performance only for the observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, but consuming more execution time. These findings can be served as guidance to the selection of appropriate approach for smoothing the noisy data.

  1. Comparing ESC and iPSC—Based Models for Human Genetic Disorders

    Directory of Open Access Journals (Sweden)

    Tomer Halevy

    2014-10-01

    Full Text Available Traditionally, human disorders were studied using animal models or somatic cells taken from patients. Such studies enabled the analysis of the molecular mechanisms of numerous disorders, and led to the discovery of new treatments. Yet, these systems are limited or even irrelevant in modeling multiple genetic diseases. The isolation of human embryonic stem cells (ESCs from diseased blastocysts, the derivation of induced pluripotent stem cells (iPSCs from patients’ somatic cells, and the new technologies for genome editing of pluripotent stem cells have opened a new window of opportunities in the field of disease modeling, and enabled studying diseases that couldn’t be modeled in the past. Importantly, despite the high similarity between ESCs and iPSCs, there are several fundamental differences between these cells, which have important implications regarding disease modeling. In this review we compare ESC-based models to iPSC-based models, and highlight the advantages and disadvantages of each system. We further suggest a roadmap for how to choose the optimal strategy to model each specific disorder.

  2. Comparing ESC and iPSC-Based Models for Human Genetic Disorders.

    Science.gov (United States)

    Halevy, Tomer; Urbach, Achia

    2014-10-24

    Traditionally, human disorders were studied using animal models or somatic cells taken from patients. Such studies enabled the analysis of the molecular mechanisms of numerous disorders, and led to the discovery of new treatments. Yet, these systems are limited or even irrelevant in modeling multiple genetic diseases. The isolation of human embryonic stem cells (ESCs) from diseased blastocysts, the derivation of induced pluripotent stem cells (iPSCs) from patients' somatic cells, and the new technologies for genome editing of pluripotent stem cells have opened a new window of opportunities in the field of disease modeling, and enabled studying diseases that couldn't be modeled in the past. Importantly, despite the high similarity between ESCs and iPSCs, there are several fundamental differences between these cells, which have important implications regarding disease modeling. In this review we compare ESC-based models to iPSC-based models, and highlight the advantages and disadvantages of each system. We further suggest a roadmap for how to choose the optimal strategy to model each specific disorder.

  3. MetaComp: comprehensive analysis software for comparative meta-omics including comparative metagenomics.

    Science.gov (United States)

    Zhai, Peng; Yang, Longshu; Guo, Xiao; Wang, Zhe; Guo, Jiangtao; Wang, Xiaoqi; Zhu, Huaiqiu

    2017-10-02

    During the past decade, the development of high throughput nucleic sequencing and mass spectrometry analysis techniques have enabled the characterization of microbial communities through metagenomics, metatranscriptomics, metaproteomics and metabolomics data. To reveal the diversity of microbial communities and interactions between living conditions and microbes, it is necessary to introduce comparative analysis based upon integration of all four types of data mentioned above. Comparative meta-omics, especially comparative metageomics, has been established as a routine process to highlight the significant differences in taxon composition and functional gene abundance among microbiota samples. Meanwhile, biologists are increasingly concerning about the correlations between meta-omics features and environmental factors, which may further decipher the adaptation strategy of a microbial community. We developed a graphical comprehensive analysis software named MetaComp comprising a series of statistical analysis approaches with visualized results for metagenomics and other meta-omics data comparison. This software is capable to read files generated by a variety of upstream programs. After data loading, analyses such as multivariate statistics, hypothesis testing of two-sample, multi-sample as well as two-group sample and a novel function-regression analysis of environmental factors are offered. Here, regression analysis regards meta-omic features as independent variable and environmental factors as dependent variables. Moreover, MetaComp is capable to automatically choose an appropriate two-group sample test based upon the traits of input abundance profiles. We further evaluate the performance of its choice, and exhibit applications for metagenomics, metaproteomics and metabolomics samples. MetaComp, an integrative software capable for applying to all meta-omics data, originally distills the influence of living environment on microbial community by regression analysis

  4. Bayesian analysis of CCDM models

    Science.gov (United States)

    Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow to compare models considering goodness of fit and number of free parameters, penalizing excess of complexity. We find that JO model is slightly favoured over LJO/ΛCDM model, however, neither of these, nor Γ = 3αH0 model can be discarded from the current analysis. Three other scenarios are discarded either because poor fitting or because of the excess of free parameters. A method of increasing Bayesian evidence through reparameterization in order to reducing parameter degeneracy is also developed.

  5. Bayesian analysis of CCDM models

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow to compare models considering goodness of fit and number of free parameters, penalizing excess of complexity. We find that JO model is slightly favoured over LJO/ΛCDM model, however, neither of these, nor Γ = 3α H {sub 0} model can be discarded from the current analysis. Three other scenarios are discarded either because poor fitting or because of the excess of free parameters. A method of increasing Bayesian evidence through reparameterization in order to reducing parameter degeneracy is also developed.

  6. Comparative analysis of the value of national brands

    Directory of Open Access Journals (Sweden)

    Jelena Žugić

    2018-01-01

    Full Text Available Nation branding is not the “holy grail” of economic development, but it can provide a distinct advantage when it is aligned with a well-defined economic strategy and supported by public policy. A nation brand is the sum of people’s perceptions of a country across the most important areas of national competence. This paper examines the value of the nation brand on a sample of 108 countries, using the Anholt Nation Brands Index and using the mathematical formula for calculating the surface of Anholt’s hexagon for each country individually. In this paper, parameters are taken from six areas of the nation hexagon, from the World Bank and the UNESCO database. The surface of the nation hexagon was calculated with mathematical tools and comparative analysis was done between nation brands. By using strategic nation branding models designed by other branding experts in combination with a proposed mathematical model that shows the advantages and disadvantages of the nation brand of each country (and within the country, their competitiveness on the global stage is expected to improve.

  7. Topic Modeling in Sentiment Analysis: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Toqir Ahmad Rana

    2016-06-01

    Full Text Available With the expansion and acceptance of Word Wide Web, sentiment analysis has become progressively popular research area in information retrieval and web data analysis. Due to the huge amount of user-generated contents over blogs, forums, social media, etc., sentiment analysis has attracted researchers both in academia and industry, since it deals with the extraction of opinions and sentiments. In this paper, we have presented a review of topic modeling, especially LDA-based techniques, in sentiment analysis. We have presented a detailed analysis of diverse approaches and techniques, and compared the accuracy of different systems among them. The results of different approaches have been summarized, analyzed and presented in a sophisticated fashion. This is the really effort to explore different topic modeling techniques in the capacity of sentiment analysis and imparting a comprehensive comparison among them.

  8. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    Directory of Open Access Journals (Sweden)

    Christopher W. Walmsley

    2013-11-01

    Full Text Available Finite element analysis (FEA is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation.Here we report an extensive sensitivity analysis where high resolution finite element (FE models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous, scaling (standardising volume, surface area, or length, tooth position (front, mid, or back tooth engagement, and linear load case (type of loading for each feeding type.Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different

  9. In silico comparative genomic analysis of GABAA receptor transcriptional regulation

    Directory of Open Access Journals (Sweden)

    Joyce Christopher J

    2007-06-01

    Full Text Available Abstract Background Subtypes of the GABAA receptor subunit exhibit diverse temporal and spatial expression patterns. In silico comparative analysis was used to predict transcriptional regulatory features in individual mammalian GABAA receptor subunit genes, and to identify potential transcriptional regulatory components involved in the coordinate regulation of the GABAA receptor gene clusters. Results Previously unreported putative promoters were identified for the β2, γ1, γ3, ε, θ and π subunit genes. Putative core elements and proximal transcriptional factors were identified within these predicted promoters, and within the experimentally determined promoters of other subunit genes. Conserved intergenic regions of sequence in the mammalian GABAA receptor gene cluster comprising the α1, β2, γ2 and α6 subunits were identified as potential long range transcriptional regulatory components involved in the coordinate regulation of these genes. A region of predicted DNase I hypersensitive sites within the cluster may contain transcriptional regulatory features coordinating gene expression. A novel model is proposed for the coordinate control of the gene cluster and parallel expression of the α1 and β2 subunits, based upon the selective action of putative Scaffold/Matrix Attachment Regions (S/MARs. Conclusion The putative regulatory features identified by genomic analysis of GABAA receptor genes were substantiated by cross-species comparative analysis and now require experimental verification. The proposed model for the coordinate regulation of genes in the cluster accounts for the head-to-head orientation and parallel expression of the α1 and β2 subunit genes, and for the disruption of transcription caused by insertion of a neomycin gene in the close vicinity of the α6 gene, which is proximal to a putative critical S/MAR.

  10. Model-based safety analysis of a control system using Simulink and Simscape extended models

    Directory of Open Access Journals (Sweden)

    Shao Nian

    2017-01-01

    Full Text Available The aircraft or system safety assessment process is an integral part of the overall aircraft development cycle. It is usually characterized by a very high timely and financial effort and can become a critical design driver in certain cases. Therefore, an increasing demand of effective methods to assist the safety assessment process arises within the aerospace community. One approach is the utilization of model-based technology, which is already well-established in the system development, for safety assessment purposes. This paper mainly describes a new tool for Model-Based Safety Analysis. A formal model for an example system is generated and enriched with extended models. Then, system safety analyses are performed on the model with the assistance of automation tools and compared to the results of a manual analysis. The objective of this paper is to improve the increasingly complex aircraft systems development process. This paper develops a new model-based analysis tool in Simulink/Simscape environment.

  11. The Generational Change in Family Businesses: Comparative Analysis between Italy and Peru

    Directory of Open Access Journals (Sweden)

    César Cáceres Dagnino

    2017-07-01

    Full Text Available The aim of this paper is to understand how family firms in Italy and Peru prepare for generational change, by comparing three companies from each of these countries. After a theoretical analysis, having examined and compared the literature to define the family business, the business family and the generational change, an empirical analysis has been made using a quantitative survey (STEP 2013-2014 and its model, which makes a revision of a set of constructs, to identify if there is transgenerational potential in the business families. From the comparison of the six companies it appears that, contrary to what was initially thought, there are no such relevant differences. They are only diverse approaches to different problems of the same phenomenon. It is concluded that the six companies have an adequate transgenerational potential and are ready for a successful generational change.

  12. Comparing coefficients of nested nonlinear probability models

    DEFF Research Database (Denmark)

    Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders

    2011-01-01

    In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coeffcients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements the method. The KHB-method is a general decomposi......In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coeffcients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements the method. The KHB-method is a general...... decomposition method that is unaffected by the rescaling or attenuation bias that arise in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability...

  13. Effective comparative analysis of protein-protein interaction networks by measuring the steady-state network flow using a Markov model.

    Science.gov (United States)

    Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun

    2016-10-06

    Comparative analysis of protein-protein interaction (PPI) networks provides an effective means of detecting conserved functional network modules across different species. Such modules typically consist of orthologous proteins with conserved interactions, which can be exploited to computationally predict the modules through network comparison. In this work, we propose a novel probabilistic framework for comparing PPI networks and effectively predicting the correspondence between proteins, represented as network nodes, that belong to conserved functional modules across the given PPI networks. The basic idea is to estimate the steady-state network flow between nodes that belong to different PPI networks based on a Markov random walk model. The random walker is designed to make random moves to adjacent nodes within a PPI network as well as cross-network moves between potential orthologous nodes with high sequence similarity. Based on this Markov random walk model, we estimate the steady-state network flow - or the long-term relative frequency of the transitions that the random walker makes - between nodes in different PPI networks, which can be used as a probabilistic score measuring their potential correspondence. Subsequently, the estimated scores can be used for detecting orthologous proteins in conserved functional modules through network alignment. Through evaluations based on multiple real PPI networks, we demonstrate that the proposed scheme leads to improved alignment results that are biologically more meaningful at reduced computational cost, outperforming the current state-of-the-art algorithms. The source code and datasets can be downloaded from http://www.ece.tamu.edu/~bjyoon/CUFID .

  14. Comparative analysis of hydraulic crane-manipulating installations transport and technological machines and industrial robots hydraulic manipulators

    Directory of Open Access Journals (Sweden)

    Lagerev I.A.

    2016-09-01

    Full Text Available The article presents results of comparative analysis of hydraulic crane-manipulator installations of mobile transport and technological machines and hydraulic manipulators of industrial robots. The comparative analysis is based on consid-eration of a wide range of types and sizes indicated technical devices of both domestic and foreign production: 1580 structures of cranes and more than 450 structures of industrial robots. It was performed in the following areas: func-tional purpose and basic technical characteristics; a design; the loading conditions of the model and failures in operation process; approaches to the design, calculation methods and mathematical modeling. The conclusions about the degree of similarity and the degree of difference hydraulic crane-manipulator installations of transport and technological ma-chines and hydraulic industrial robot manipulators from the standpoint of their design and modeling occurring in them during operation of dynamic and structural processes.

  15. High Altitude Long Endurance UAV Analysis Model Development and Application Study Comparing Solar Powered Airplane and Airship Station-Keeping Capabilities

    Science.gov (United States)

    Ozoroski, Thomas A.; Nickol, Craig L.; Guynn, Mark D.

    2015-01-01

    There have been ongoing efforts in the Aeronautics Systems Analysis Branch at NASA Langley Research Center to develop a suite of integrated physics-based computational utilities suitable for modeling and analyzing extended-duration missions carried out using solar powered aircraft. From these efforts, SolFlyte has emerged as a state-of-the-art vehicle analysis and mission simulation tool capable of modeling both heavier-than-air (HTA) and lighter-than-air (LTA) vehicle concepts. This study compares solar powered airplane and airship station-keeping capability during a variety of high altitude missions, using SolFlyte as the primary analysis component. Three Unmanned Aerial Vehicle (UAV) concepts were designed for this study: an airplane (Operating Empty Weight (OEW) = 3285 kilograms, span = 127 meters, array area = 450 square meters), a small airship (OEW = 3790 kilograms, length = 115 meters, array area = 570 square meters), and a large airship (OEW = 6250 kilograms, length = 135 meters, array area = 1080 square meters). All the vehicles were sized for payload weight and power requirements of 454 kilograms and 5 kilowatts, respectively. Seven mission sites distributed throughout the United States were selected to provide a basis for assessing the vehicle energy budgets and site-persistent operational availability. Seasonal, 30-day duration missions were simulated at each of the sites during March, June, September, and December; one-year duration missions were simulated at three of the sites. Atmospheric conditions during the simulated missions were correlated to National Climatic Data Center (NCDC) historical data measurements at each mission site, at four flight levels. Unique features of the SolFlyte model are described, including methods for calculating recoverable and energy-optimal flight trajectories and the effects of shadows on solar energy collection. Results of this study indicate that: 1) the airplane concept attained longer periods of on

  16. Comparative analysis of hourly and dynamic power balancing models for validating future energy scenarios

    DEFF Research Database (Denmark)

    Pillai, Jayakrishnan R.; Heussen, Kai; Østergaard, Poul Alberg

    2011-01-01

    Energy system analyses on the basis of fast and simple tools have proven particularly useful for interdisciplinary planning projects with frequent iterations and re-evaluation of alternative scenarios. As such, the tool “EnergyPLAN” is used for hourly balanced and spatially aggregate annual......, the model is verified on the basis of the existing energy mix on Bornholm as an islanded energy system. Future energy scenarios for the year 2030 are analysed to study a feasible technology mix for a higher share of wind power. Finally, the results of the hourly simulations are compared to dynamic frequency...... simulations incorporating the Vehicle-to-grid technology. The results indicate how the EnergyPLAN model may be improved in terms of intra-hour variability, stability and ancillary services to achieve a better reflection of energy and power capacity requirements....

  17. Comparative study of void fraction models

    International Nuclear Information System (INIS)

    Borges, R.C.; Freitas, R.L.

    1985-01-01

    Some models for the calculation of void fraction in water in sub-cooled boiling and saturated vertical upward flow with forced convection have been selected and compared with experimental results in the pressure range of 1 to 150 bar. In order to know the void fraction axial distribution it is necessary to determine the net generation of vapour and the fluid temperature distribution in the slightly sub-cooled boiling region. It was verified that the net generation of vapour was well represented by the Saha-Zuber model. The selected models for the void fraction calculation present adequate results but with a tendency to super-estimate the experimental results, in particular the homogeneous models. The drift flux model is recommended, followed by the Armand and Smith models. (F.E.) [pt

  18. Comparing the Discrete and Continuous Logistic Models

    Science.gov (United States)

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)

  19. Comparative analysis on the probability of being a good payer

    Science.gov (United States)

    Mihova, V.; Pavlov, V.

    2017-10-01

    Credit risk assessment is crucial for the bank industry. The current practice uses various approaches for the calculation of credit risk. The core of these approaches is the use of multiple regression models, applied in order to assess the risk associated with the approval of people applying for certain products (loans, credit cards, etc.). Based on data from the past, these models try to predict what will happen in the future. Different data requires different type of models. This work studies the causal link between the conduct of an applicant upon payment of the loan and the data that he completed at the time of application. A database of 100 borrowers from a commercial bank is used for the purposes of the study. The available data includes information from the time of application and credit history while paying off the loan. Customers are divided into two groups, based on the credit history: Good and Bad payers. Linear and logistic regression are applied in parallel to the data in order to estimate the probability of being good for new borrowers. A variable, which contains value of 1 for Good borrowers and value of 0 for Bad candidates, is modeled as a dependent variable. To decide which of the variables listed in the database should be used in the modelling process (as independent variables), a correlation analysis is made. Due to the results of it, several combinations of independent variables are tested as initial models - both with linear and logistic regression. The best linear and logistic models are obtained after initial transformation of the data and following a set of standard and robust statistical criteria. A comparative analysis between the two final models is made and scorecards are obtained from both models to assess new customers at the time of application. A cut-off level of points, bellow which to reject the applications and above it - to accept them, has been suggested for both the models, applying the strategy to keep the same Accept Rate as

  20. Cryogenic Fuel Tank Draining Analysis Model

    Science.gov (United States)

    Greer, Donald

    1999-01-01

    One of the technological challenges in designing advanced hypersonic aircraft and the next generation of spacecraft is developing reusable flight-weight cryogenic fuel tanks. As an aid in the design and analysis of these cryogenic tanks, a computational fluid dynamics (CFD) model has been developed specifically for the analysis of flow in a cryogenic fuel tank. This model employs the full set of Navier-Stokes equations, except that viscous dissipation is neglected in the energy equation. An explicit finite difference technique in two-dimensional generalized coordinates, approximated to second-order accuracy in both space and time is used. The stiffness resulting from the low Mach number is resolved by using artificial compressibility. The model simulates the transient, two-dimensional draining of a fuel tank cross section. To calculate the slosh wave dynamics the interface between the ullage gas and liquid fuel is modeled as a free surface. Then, experimental data for free convection inside a horizontal cylinder are compared with model results. Finally, cryogenic tank draining calculations are performed with three different wall heat fluxes to demonstrate the effect of wall heat flux on the internal tank flow field.

  1. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of sever accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models using, for example, regression techniques, with simplified models for example, regression techniques, with simplified models for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  2. Modelling optimization involving different types of elements in finite element analysis

    International Nuclear Information System (INIS)

    Wai, C M; Rivai, Ahmad; Bapokutty, Omar

    2013-01-01

    Finite elements are used to express the mechanical behaviour of a structure in finite element analysis. Therefore, the selection of the elements determines the quality of the analysis. The aim of this paper is to compare and contrast 1D element, 2D element, and 3D element used in finite element analysis. A simple case study was carried out on a standard W460x74 I-beam. The I-beam was modelled and analyzed statically with 1D elements, 2D elements and 3D elements. The results for the three separate finite element models were compared in terms of stresses, deformation and displacement of the I-beam. All three finite element models yield satisfactory results with acceptable errors. The advantages and limitations of these elements are discussed. 1D elements offer simplicity although lacking in their ability to model complicated geometry. 2D elements and 3D elements provide more detail yet sophisticated results which require more time and computer memory in the modelling process. It is also found that the choice of element in finite element analysis is influence by a few factors such as the geometry of the structure, desired analysis results, and the capability of the computer

  3. COMPARATIVE ANALYSIS BETWEEN THE TRADITIONAL MODEL OF CORPORATE GOVERNANCE AND ISLAMIC MODEL

    Directory of Open Access Journals (Sweden)

    DAN ROXANA LOREDANA

    2016-08-01

    Full Text Available Corporate governance represents a set of processes and policies by which a company is administered, controlled and directed to achieve the predetermined management objectives settled by the shareholders. The most important benefits of the corporate governance to the organisations are related to business success, investor confidence and minimisation of wastage. For business, the improved controls and decision-making will aid corporate success as well as growth in revenues and profits. For the investor confidence, corporate governance will mean that investors are more likely to trust that the company is being well run. This will not only make it easier and cheaper for the company to raise finance, but also has a positive effect on the share price. When we talk about the minimisation of wastage we relate to the strong corporate governance that should help to minimise waste within the organisation, as well as the corruption, risks and mismanagement. Thus, in our research, we are trying to determine the common elements, and also, the differences that have occured between two well known models of corporate governance, the traditional Anglo – Saxon model and also, the Islamic model of corporate governance.

  4. Measuring populism: comparing two methods of content analysis

    NARCIS (Netherlands)

    Rooduijn, M.; Pauwels, T.

    2011-01-01

    The measurement of populism - particularly over time and space - has received only scarce attention. In this research note two different ways to measure populism are compared: a classical content analysis and a computer-based content analysis. An analysis of political parties in the United Kingdom,

  5. Analysis of the resolution processes of three modeling tasks

    Directory of Open Access Journals (Sweden)

    Cèsar Gallart Palau

    2017-08-01

    Full Text Available In this paper we present a comparative analysis of the resolution process of three modeling tasks performed by secondary education students (13-14 years, designed from three different points of view: The Modelling-eliciting Activities, the LEMA project, and the Realistic Mathematical Problems. The purpose of this analysis is to obtain a methodological characterization of them in order to provide to secondary education teachers a proper selection and sequencing of tasks for their implementation in the classroom.

  6. Diagnosing MOV problems using comparative trace analysis

    International Nuclear Information System (INIS)

    Leon, R.L.

    1992-01-01

    The paper presents the concept of comparative trace analysis and shows it to be very effective in diagnosing motor operated valve (MOV) problems. Comparative trace analysis is simply the process of interpreting simultaneously gathered traces, each presenting a different perspective on the same series of events. The opening and closing of a motor operated valve is such a series of events. The simultaneous traces are obtained using Liberty Technologies' Valve Operation Test and Evaluation System (VOTES)reg-sign. The traces include stem thrust, motor current, motor power factor, motor power, switch actuations, vibration in three different frequency bands, spring pack displacement, and spring pack force. Spare and auxiliary channels enable additional key parameters to be measured, such as differential pressure and stem displacement. Though not specifically illustrated in this paper, the VOTES system also provides for FFT analysis on all traces except switches

  7. Mathematical annuity models application in cash flow analysis ...

    African Journals Online (AJOL)

    Mathematical annuity models application in cash flow analysis. ... We also compare the cost efficiency between Amortisation and Sinking fund loan repayment as prevalent in financial institutions. Keywords: Annuity, Amortisation, Sinking Fund, Present and Future Value Annuity, Maturity date and Redemption value.

  8. A comparative analysis of currently used microscopic and macroscopic traffic simulation software

    International Nuclear Information System (INIS)

    Ratrout Nedal T; Rahman Syed Masiur

    2009-01-01

    The significant advancements of information technology have contributed to increased development of traffic simulation models. These include microscopic models and broadening the areas of applications ranging from the modeling of specific components of the transportation system to a whole network having different kinds of intersections and links, even in a few cases combining travel demand models. This paper mainly reviews the features of traditionally used macroscopic and microscopic traffic simulation models along with a comparative analysis focusing on freeway operations, urban congested networks, project-level emission modeling, and variations in delay and capacity estimates. The models AIMSUN, CORSIM, and VISSIM are found to be suitable for congested arterials and freeways, and integrated networks of freeways and surface streets. The features of AIMSUN are favorable for creating large urban and regional networks. The models AIMSUN, PARAMICS, INTEGRATION, and CORSIM are potentially useful for Intelligent Transportation System (ITS). There are a few simulation models which are developed focusing on ITS such as MITSIMLab. The TRAF-family and HUTSIM models attempt a system-level simulation approach and develop open environments where several analysis models can be used interactively to solve traffic simulation problems. In Saudi Arabia, use of simulation software with the capability of analyzing an integrated system of freeways and surface streets has not been reported. Calibration and validation of simulation software either for freeways or surface streets has been reported. This paper suggests that researchers evaluate the state-of-the-art simulation tools and find out the suitable tools or approaches for the local conditions of Saudi Arabia. (author)

  9. Comparative evaluation of kinetic, equilibrium and semi-equilibrium models for biomass gasification

    Energy Technology Data Exchange (ETDEWEB)

    Buragohain, Buljit [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Chakma, Sankar; Kumar, Peeush [Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Mahanta, Pinakeswar [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Moholkar, Vijayanand S. [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India)

    2013-07-01

    Modeling of biomass gasification has been an active area of research for past two decades. In the published literature, three approaches have been adopted for the modeling of this process, viz. thermodynamic equilibrium, semi-equilibrium and kinetic. In this paper, we have attempted to present a comparative assessment of these three types of models for predicting outcome of the gasification process in a circulating fluidized bed gasifier. Two model biomass, viz. rice husk and wood particles, have been chosen for analysis, with gasification medium being air. Although the trends in molar composition, net yield and LHV of the producer gas predicted by three models are in concurrence, significant quantitative difference is seen in the results. Due to rather slow kinetics of char gasification and tar oxidation, carbon conversion achieved in single pass of biomass through the gasifier, calculated using kinetic model, is quite low, which adversely affects the yield and LHV of the producer gas. Although equilibrium and semi-equilibrium models reveal relative insensitivity of producer gas characteristics towards temperature, the kinetic model shows significant effect of temperature on LHV of the gas at low air ratios. Kinetic models also reveal volume of the gasifier to be an insignificant parameter, as the net yield and LHV of the gas resulting from 6 m and 10 m riser is same. On a whole, the analysis presented in this paper indicates that thermodynamic models are useful tools for quantitative assessment of the gasification process, while kinetic models provide physically more realistic picture.

  10. Comparative Analysis Of Three Largest World Models Of Business Excellence

    Directory of Open Access Journals (Sweden)

    Jasminka Samardžija

    2009-07-01

    Business excellence has become the strongest means of achieving competitive advantage for companies, while total quality management has become the road that ensures the support of excellent results, as recognized by many world companies. Despite many differences, we can conclude that the models have many common elements. With the 2005 audit, the DP and MBNQA moved the focus from the excellence of the product, i.e. service, onto the excellence of the quality of the entire organization process. Thus, quality acquired a strategic dimension instead of a technical one, and the accent passed from technical quality to the total excellence of all organizational processes. The joint movement goes in the direction of good management and appreciation of systems thinking. The very structure of the EFQM model criteria is already adjusted to the strategic dimension of quality, which is why the model underwent only minor audits within the criteria themselves; essentially, the model remained unchanged. In all models, the accent is on the satisfaction of buyers, employees and the community. National quality awards have an important role in promoting and rewarding excellence in organizational performance. Moreover, they raise the quality standards of companies and the profile of the country as a whole. Considering its GDP per capita and the percentage of certified companies, Croatia has all the predispositions for introducing the EFQM model of business excellence, with the basic aim of decreasing the deficit in the foreign trade balance and strengthening competitiveness as the necessary groundwork for entering the competitive market of the EU. Quality management has been introduced in many organizations. The methods used have developed over the years, and what is to be expected is the continued evolution of both the models and the methods of business excellence.

  11. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions, compared to fifty executions in the statistical case.
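
    To illustrate the derivative-based idea on the same benchmark, here is a minimal sketch applying finite-difference derivatives (standing in for the automated GRESS/ADGEN-style differentiation) and first-order variance propagation to the classic borehole flow function; the nominal values and uncertainties are illustrative assumptions, not those of the paper:

      import numpy as np

      def borehole(x):
          """Classic borehole test function: water flow rate through a borehole."""
          rw, r, Tu, Hu, Tl, Hl, L, Kw = x
          lnr = np.log(r / rw)
          return (2 * np.pi * Tu * (Hu - Hl)) / (
              lnr * (1 + 2 * L * Tu / (lnr * rw**2 * Kw) + Tu / Tl))

      # Illustrative nominal values and standard deviations (assumed)
      x0    = np.array([0.10, 25050.0, 89335.0, 1050.0, 89.55, 760.0, 1400.0, 10950.0])
      sigma = 0.05 * x0  # assume 5% uncertainty on each parameter

      # Finite-difference sensitivities df/dx_i (two extra runs per parameter here;
      # code-calculus approaches obtain all derivatives without these extra runs)
      grad = np.empty_like(x0)
      for i in range(len(x0)):
          h = 1e-6 * x0[i]
          xp, xm = x0.copy(), x0.copy()
          xp[i] += h; xm[i] -= h
          grad[i] = (borehole(xp) - borehole(xm)) / (2 * h)

      # First-order (delta-method) propagation: Var(f) ~ sum_i (df/dx_i)^2 Var(x_i)
      var_f = np.sum((grad * sigma) ** 2)
      print(f"flow = {borehole(x0):.1f}, std ~ {np.sqrt(var_f):.1f}")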

  12. WORKABILITY OF A MANAGEMENT CONTROL MODEL IN SERVICE ORGANIZATIONS: A COMPARATIVE STUDY OF REACTIVE, PROACTIVE AND COACTIVE PHILOSOPHIES

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    2006-11-01

    The main objective of this study was to compare and contrast three philosophies of management control models in the process of decision-making, namely the reactive, proactive and coactive. The research methodology was based on a literature review and a descriptive/exploratory approach. Additionally, a survey of 20 service organizations was carried out in order to make the analysis wider-reaching. To do so, the following steps were followed: firstly, the fundamentals of the reactive, proactive and coactive models were highlighted; secondly, management behaviors in the three approaches were compared, with concepts and their practical application being highlighted, thus retrieving managerial relationships in the organization. In so doing, we put forward the hypothesis that middle and top managers who adopt control models distant from a more coactive one usually spend a greater number of working hours on problem-solving, leaving little or no time for planning purposes. Finally, for consolidation purposes, we adopted qualitative data collection, whereby a content analysis was carried out with the assistance of six categories. Results have shown the need for a change in management paradigms, so that firms are not compared only through financial perspectives, without considering the analysis of management control models which, according to this study, directly influence the operational results of organizations.

  13. Government Debt Reduction in the USA and Greece: A Comparative VECM Analysis

    Directory of Open Access Journals (Sweden)

    Gisele MAH

    2016-11-01

    The purpose of this paper is to estimate comparative debt reduction models for the USA and Greece using Vector Error Correction Model (VECM) analysis and Granger causality tests. The study provides an empirical framework that could assist in policy formulation for countries with high debt rates as well as those experiencing debt crises. The US model revealed a negative and significant relationship between general government debt and inflation, as well as a negative and significant relationship with the primary balance. In Greece, the relationship between general government debt and the primary balance is found to be positive and significant, while that with net transfers from abroad is negative and significant. Granger causality runs from general government debt to inflation in the USA, and from the primary balance to general government debt in Greece.
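
    As a rough sketch of this workflow in Python with statsmodels, on synthetic series standing in for the actual debt, inflation and primary-balance data (the variable choices and lag orders are illustrative, not those of the paper):

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      n = 120  # quarterly observations (synthetic stand-in for the real series)
      debt = np.cumsum(rng.normal(0.5, 1.0, n))        # I(1)-like government debt
      inflation = 0.2 * debt + rng.normal(0, 1.0, n)   # cointegrated with debt
      primary_balance = rng.normal(0, 1.0, n)
      df = pd.DataFrame({"debt": debt, "inflation": inflation,
                         "primary_balance": primary_balance})

      # Choose the cointegration rank with Johansen-style trace tests
      rank = select_coint_rank(df, det_order=0, k_ar_diff=1).rank

      # Fit the VECM and inspect the long-run (beta) and adjustment (alpha) terms
      res = VECM(df, k_ar_diff=1, coint_rank=rank, deterministic="ci").fit()
      print(res.beta)   # cointegrating vectors
      print(res.alpha)  # speed-of-adjustment coefficients

      # Granger causality: does debt help predict inflation?
      grangercausalitytests(df[["inflation", "debt"]], maxlag=2)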

  14. Critical Analysis of Underground Coal Gasification Models. Part II: Kinetic and Computational Fluid Dynamics Models

    Directory of Open Access Journals (Sweden)

    Alina Żogała

    2014-01-01

    Originality/value: This paper presents the state of the art in the field of coal gasification modeling using kinetic and computational fluid dynamics approaches. The paper also presents the authors' own comparative analysis (concerning mathematical formulation, input data and parameters, basic assumptions, obtained results, etc.) of the most important models of underground coal gasification.

  15. Comparative analysis of the development of franchising in Serbia and worldwide

    OpenAIRE

    Stefanović, Suzana; Stanković, Milica

    2013-01-01

    The research of franchising as a business model is of great importance for the further development of this concept. At the global level, there is a constant tendency towards the development of existing and new franchise systems. This area is still underexplored in Serbia. The aim of the paper is to point out the importance of franchising at the global level and the need for its more intensive development in Serbia, based on a comparative analysis of the development of this concept in Serbia and worldwide. The pa...

  16. Comparative analysis of the development of franchising in Serbia and worldwide

    Directory of Open Access Journals (Sweden)

    Stefanović Suzana

    2013-01-01

    The research of franchising as a business model is of great importance for the further development of this concept. At the global level, there is a constant tendency towards the development of existing and new franchise systems. This area is still underexplored in Serbia. The aim of the paper is to point out the importance of franchising at the global level and the need for its more intensive development in Serbia, based on a comparative analysis of the development of this concept in Serbia and worldwide. The paper presents the development of franchising as a business concept in the modern economy, with special focus on the fundamental characteristics, advantages and disadvantages of franchising systems. Analysis of franchising worldwide and in Serbia indicates that franchising is still insufficiently regulated by law in Serbia. For the purpose of a comparative analysis of the best-known franchise systems in developed market economies and Serbia, we start from representative examples of franchise concepts in the United States, Great Britain and Serbia. Based on a review of selected franchise systems, the presence of foreign franchises in Serbia, and Serbian franchises all over the world, the paper indicates that only a small number of domestic franchises have been internationalized.

  17. Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies

    Science.gov (United States)

    Klopf, M.; Pietsch, S. A.; Hasenauer, H.

    2009-04-01

    The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997, management activities have been successively reduced; standing volume and coarse woody debris (CWD) have increased, and degraded soils have begun to recover. One option for studying the rehabilitation process towards a natural virgin forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare standing tree volume as simulated by (i) the individual tree growth model MOSES, and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis, 40 plots of this inventory were selected which comprise the required dead wood components and are dominated by a single tree species. First, we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment, and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts

  18. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    Science.gov (United States)

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson-like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in the training set. Eleven public data sets were chosen as test cases for comparing the performance of the new method with several traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models achieve better prediction accuracy than Free-Wilson analysis in general. Moreover, the predictions of the R-group signature models are also comparable to those of models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient with respect to the R-group signatures; for most of the studied data sets, these show a significant correlation with the contributions from a corresponding Free-Wilson analysis. These results suggest that the R-group contributions can be used to interpret bioactivity data, and highlight that the R-group signature-based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.
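
    The interpretability argument is easiest to see in the linear case: with simple indicator features per R-group, the gradient of a linear SVR is just its weight vector, which plays the role of Free-Wilson group contributions. A minimal scikit-learn sketch on invented data (the actual method uses R-group signatures, which are richer than the plain indicators used here):

      import numpy as np
      from sklearn.svm import LinearSVR

      # Hypothetical design matrix: rows = compounds, columns = R-group indicators
      # (R1 = {H, Me, Cl}, R2 = {H, OMe}), one-hot per substitution site.
      X = np.array([
          [1, 0, 0, 1, 0],
          [0, 1, 0, 1, 0],
          [0, 0, 1, 1, 0],
          [1, 0, 0, 0, 1],
          [0, 1, 0, 0, 1],
      ])
      y = np.array([5.1, 5.9, 6.4, 5.6, 6.3])  # e.g. pIC50 (synthetic numbers)

      model = LinearSVR(C=1.0, epsilon=0.05, max_iter=10000).fit(X, y)

      # For a linear model, d(prediction)/d(feature) = coef_: per-R-group
      # contributions, directly analogous to Free-Wilson group contributions.
      for name, w in zip(["R1:H", "R1:Me", "R1:Cl", "R2:H", "R2:OMe"], model.coef_):
          print(f"{name:7s} {w:+.2f}")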

  19. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    Science.gov (United States)

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
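
    The core operation (reducing each time series to a vector of summary statistics so that series can be compared in a common feature space) can be sketched in a few lines; the four toy features below stand in for the thousands of analysis methods used in the paper:

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      def features(ts):
          """Toy feature vector: a stand-in for a large library of analysis methods."""
          ts = np.asarray(ts, dtype=float)
          diffs = np.diff(ts)
          ac1 = np.corrcoef(ts[:-1], ts[1:])[0, 1]   # lag-1 autocorrelation
          return np.array([ts.mean(), ts.std(), ac1, np.abs(diffs).mean()])

      rng = np.random.default_rng(1)
      noise = [rng.normal(size=500) for _ in range(20)]              # i.i.d. noise
      walks = [np.cumsum(rng.normal(size=500)) for _ in range(20)]   # random walks
      X = np.array([features(ts) for ts in noise + walks])

      # In the normalized feature space, series with similar dynamics end up close
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
          StandardScaler().fit_transform(X))
      print(labels)  # the noise and walk groups should separate into the two clusters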

  20. Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.

    Science.gov (United States)

    Melis, Alessandro; Clayton, Richard H; Marzo, Alberto

    2017-12-01

    One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by 99.96% compared to a Monte Carlo approach. Despite the reduced computational time, sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of the order O(d), rather than the O(d×10³) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with the capacity to estimate the impact of uncertain parameters on model outputs, will enable the development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
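
    The emulator trick, i.e. train a cheap surrogate on a handful of mechanistic runs and then spend the Monte Carlo budget on the surrogate, can be sketched with scikit-learn and SALib (v1.x saltelli/sobol API); the one-line "expensive model", parameter names and sample sizes below are placeholders for a real 1D vascular solver:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import Matern
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      def expensive_model(x):          # placeholder for a 1D haemodynamics solver
          return np.sin(x[:, 0]) + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

      problem = {"num_vars": 3,
                 "names": ["E_wall", "R_peripheral", "C_compliance"],  # illustrative
                 "bounds": [[-1, 1]] * 3}

      # A small training set of mechanistic runs, O(d) in spirit
      rng = np.random.default_rng(0)
      X_train = rng.uniform(-1, 1, size=(60, 3))
      gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
      gp.fit(X_train, expensive_model(X_train))

      # Spend the large Saltelli sample on the emulator, not on the solver
      X_big = saltelli.sample(problem, 1024)     # 1024 * (2*3 + 2) evaluations
      Si = sobol.analyze(problem, gp.predict(X_big))
      print(dict(zip(problem["names"], np.round(Si["ST"], 2))))  # total-order indices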

  1. National Launch System comparative economic analysis

    Science.gov (United States)

    Prince, A.

    1992-01-01

    Results are presented from an analysis of the economic benefits (or losses), in the form of life-cycle cost savings, resulting from the development of the National Launch System (NLS) family of launch vehicles. The analysis was carried out by comparing various NLS-based architectures with the current Shuttle/Titan IV fleet. The basic methodology behind this NLS analysis was to develop a set of annual payload requirements for Space Station Freedom and LEO, to design launch vehicle architectures around these requirements, and to perform life-cycle cost analyses on all of the architectures. An SEI requirement was included. Launch failure costs were estimated and combined with the relative reliability assumptions to measure the effects of losses. Based on the analysis, a Shuttle/NLS architecture evolving into a pressurized-logistics-carrier/NLS architecture appears to offer the best long-term cost benefit.

  2. Net energy analysis in a Ramsey–Hotelling growth model

    International Nuclear Information System (INIS)

    Macías, Arturo; Matilla-García, Mariano

    2015-01-01

    This article presents a dynamic growth model with energy as an input in the production function. The available stock of energy resources is ordered by a quality parameter based on energy accounting: the “Energy Return on Energy Invested” (EROI). To our knowledge this is the first paper in which EROI fits into a neoclassical growth model (with individual utility maximization and market equilibrium), establishing the economic use of “net energy analysis” on a firmer theoretical ground. All concepts necessary to link neoclassical economics and EROI are discussed before their use in the model, and a comparative static analysis of the steady states of a simplified version of the model is presented. - Highlights: • A neoclassical growth model with EROI (“Energy Return on Energy Invested”) is shown. • All concepts linking neoclassical economics and net energy analysis are discussed. • Any EROI decline can be compensated by increasing gross activity in the energy sector. • The economic impact of EROI depends on some non-energy cost in the energy sector. • Comparative steady-state statics for different EROI levels are performed and discussed. • Policy implications are suggested.
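
    The accounting identity behind the third highlight is simple enough to state in code: if gross energy output is E_g and EROI = E_g / E_invested, the energy available to the rest of the economy is E_net = E_g (1 - 1/EROI), so a falling EROI can be offset by scaling up E_g. A minimal sketch (the numbers are illustrative):

      def net_energy(gross_output, eroi):
          """Energy delivered to the non-energy economy: E_net = E_g * (1 - 1/EROI)."""
          return gross_output * (1.0 - 1.0 / eroi)

      def gross_needed(net_target, eroi):
          """Gross activity required in the energy sector to deliver a net target."""
          return net_target / (1.0 - 1.0 / eroi)

      for eroi in (20.0, 10.0, 5.0, 2.0):
          print(f"EROI {eroi:4.1f}: gross output needed for 100 net units = "
                f"{gross_needed(100.0, eroi):6.1f}")
      # As EROI -> 1 the required gross activity diverges (the 'net energy cliff').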

  3. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.
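
    For readers who want to reproduce this kind of exploration numerically, here is a minimal sketch of simulating a ratio-dependent predator-prey model (the Arditi-Ginzburg form) with SciPy; the parameter values are illustrative, and the paper's actual model and parameters may differ:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Ratio-dependent (Arditi-Ginzburg) predator-prey model:
      #   dN/dt = r N (1 - N/K) - a N P / (P + a h N)
      #   dP/dt = e a N P / (P + a h N) - m P
      r, K, a, h, e, m = 1.0, 10.0, 1.0, 0.5, 0.5, 0.2  # illustrative values

      def rhs(t, y):
          N, P = y
          predation = a * N * P / (P + a * h * N) if (N > 0 or P > 0) else 0.0
          return [r * N * (1 - N / K) - predation, e * predation - m * P]

      sol = solve_ivp(rhs, (0.0, 200.0), [5.0, 2.0], dense_output=True, rtol=1e-8)
      N_end, P_end = sol.y[:, -1]
      print(f"state at t=200: N={N_end:.3f}, P={P_end:.3f}")
      # Sweeping a parameter (e.g. m or a) and tracking the long-run state is a
      # crude stand-in for the continuation/bifurcation tools discussed above.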

  4. Comparative Analysis of Context-Dependent Mutagenesis Using Human and Mouse Models

    Directory of Open Access Journals (Sweden)

    Sofya A. Medvedeva

    2013-01-01

    Substitution rates strongly depend on their nucleotide context. One of the most studied examples is the excess of C > T mutations in the CG context in various groups of organisms, including vertebrates. Studies of the molecular mechanisms underlying this mutation regularity have provided insights into evolution, mutagenesis, and cancer development. Recently, several other hypermutable motifs were identified in the human genome: an increased frequency of T > C mutations in the second position of the words ATTG and ATAG, and an increased frequency of A > C mutations in the first position of the word ACAA. For a better understanding of evolution, it is of interest whether these mutation regularities are human-specific or present in other vertebrates, as their presence might affect the validity of currently used substitution models and molecular clocks. A comprehensive analysis of mutagenesis in 4 bp mutation contexts requires a vast amount of mutation data. Such data may be derived from comparisons of individual genomes or from single nucleotide polymorphism (SNP) databases. Using this approach, we performed a systematic comparison of mutation regularities within 2-4 bp contexts in Mus musculus and Homo sapiens and uncovered that even closely related organisms may have notable differences in context-dependent mutation regularities.
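
    The basic counting step behind such comparisons (tallying how often each substitution occurs within each k-bp context) is straightforward; a toy sketch over an aligned ancestral/derived sequence pair follows, whereas real analyses work from genome alignments or SNP tables with many additional filters:

      from collections import Counter

      def context_mutations(ancestral, derived, k=4, offset=1):
          """Count substitutions keyed by (k-bp ancestral context, base change).

          `offset` is the 0-based position of the mutated base inside the context,
          e.g. offset=1 targets the second position of words like ATTG.
          """
          assert len(ancestral) == len(derived)
          counts = Counter()
          for i in range(len(ancestral) - k + 1):
              ctx = ancestral[i:i + k]
              j = i + offset
              if ancestral[j] != derived[j]:
                  counts[(ctx, f"{ancestral[j]}>{derived[j]}")] += 1
          return counts

      anc = "CCATTGAAATAGCCATTGA"
      der = "CCACTGAAACAGCCACTGA"  # toy derived sequence: T>C inside ATTG/ATAG
      print(context_mutations(anc, der, k=4, offset=1))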

  5. A Comparative Analysis of Spatial Visualization Ability and Drafting Models for Industrial and Technology Education Students

    Science.gov (United States)

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2014-01-01

    The main purpose of this study was to determine whether the use of three different types of drafting models had significant positive effects, and to identify whether any differences exist in their promotion of spatial visualization ability for students in Industrial Technology and Technology Education courses. In particular, the study compared the use of…

  6. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    Science.gov (United States)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

    A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, showing the underlying algorithm of each and indicating their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (a genetic algorithm (GA)). To project the comparison in a sensible way, the methods are used for the stability-indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components), in raw materials and pharmaceutical dosage form, via handling the UV spectral data. For proper analysis, a 6-factor 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results manifest the problem of nonlinearity and how models like SVR and ANN can handle it. The methods indicate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, while using cheap and easy-to-handle instruments like the UV spectrophotometer.
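
    To make the calibration setup concrete, here is a toy sketch of the same idea: learn a mapping from overlapped UV spectra to component concentrations and validate on an independent test set, using simulated Gaussian bands in place of the real spectra (all numbers are invented, and one SVR is fitted per component via a multi-output wrapper):

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.multioutput import MultiOutputRegressor

      rng = np.random.default_rng(0)
      wl = np.linspace(200, 400, 101)                  # wavelengths, nm
      # Hypothetical, heavily overlapped component spectra (Gaussian bands)
      centers = [250, 260, 270, 280, 290, 300]
      S = np.array([np.exp(-((wl - c) / 18.0) ** 2) for c in centers])  # (6, 101)

      def mixtures(n):
          C = rng.uniform(0.2, 1.0, size=(n, 6))       # concentrations
          A = C @ S + rng.normal(0, 0.005, size=(n, wl.size))  # Beer-Lambert + noise
          return A, C

      A_train, C_train = mixtures(25)                  # mimics the 25-mixture design
      A_test, C_test = mixtures(5)

      model = MultiOutputRegressor(SVR(kernel="linear", C=10.0, epsilon=0.01))
      model.fit(A_train, C_train)
      rmse = np.sqrt(np.mean((model.predict(A_test) - C_test) ** 2))
      print(f"test RMSE: {rmse:.3f}")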

  7. Comparative Genome Analysis of Enterobacter cloacae

    Science.gov (United States)

    Liu, Wing-Yee; Wong, Chi-Fat; Chung, Karl Ming-Kar; Jiang, Jing-Wei; Leung, Frederick Chi-Ching

    2013-01-01

    The Enterobacter cloacae species includes an extremely diverse group of bacteria that are associated with plants, soil and humans. Publication of the complete genome sequence of the plant growth-promoting endophytic E. cloacae subsp. cloacae ENHKU01 provided an opportunity to perform the first comparative genome analysis between strains of this dynamic species. Examination of the pan-genome of E. cloacae showed that the conserved core genome retains the general physiological and survival genes of the species, while genomic factors in plasmids and variable regions determine the virulence of the human pathogenic E. cloacae strain; additionally, the diversity of fimbriae contributes to variation in colonization and host determination of different E. cloacae strains. Comparative genome analysis further illustrated that E. cloacae strains possess multiple mechanisms for antagonistic action against other microorganisms, which involve the production of siderophores and various antimicrobial compounds, such as bacteriocins, chitinases and antibiotic resistance proteins. The presence of Type VI secretion systems is expected to provide further fitness advantages for E. cloacae in microbial competition, thus allowing it to survive in different environments. Competition assays were performed to support our observations in genomic analysis, where E. cloacae subsp. cloacae ENHKU01 demonstrated antagonistic activities against a wide range of plant pathogenic fungal and bacterial species. PMID:24069314

  8. 1991 comparative analysis of tritium in water

    International Nuclear Information System (INIS)

    Krause, W.J.; Mundschenk, H.

    1992-06-01

    For environmental monitoring of radioactive materials, the competent authorities of the States and the Federal Government of Germany continuously perform measurements and make their results accessible to the public in an appropriate way. In order to guarantee the comparability of measured values and a high degree of reliability of the applied methods, the authorities in charge of carrying out such tasks are obliged to take part in the comparative analyses (ring tests) organized by the central offices of the Federal Government. Therefore, the aim of this comparative analysis, performed on behalf of the Federal Ministry of the Environment, Nature Protection and Reactor Safety, consists mainly in providing the measuring offices in charge of monitoring waters with samples of known tritium content in order to: obtain an overview of the accuracy of currently used processes; check the accuracy of the determinations performed and, if necessary, detect and eliminate systematic errors; and check, in particular by means of samples T2 and T3, the calibration of the measuring devices and, if necessary, make corrections. To this effect, the comparative analysis fulfils the function of quality control for the processes used in environmental monitoring. (orig./BBR)

  9. Development and assessment of multi-dimensional flow model in MARS compared with the RPI air-water experiment

    International Nuclear Information System (INIS)

    Lee, Seok Min; Lee, Un Chul; Bae, Sung Won; Chung, Bub Dong

    2004-01-01

    Multi-dimensional flow models in system codes have been developed over many years. RELAP5-3D, CATHARE and TRACE each have their own multi-dimensional flow models and have successfully applied them to system safety analysis. In KAERI, the MARS (Multi-dimensional Analysis of Reactor Safety) code was likewise developed, by integrating the RELAP5/MOD3 code and the COBRA-TF code. Even though the COBRA-TF module can analyze three-dimensional flows, it is limited when applied to 3D shear-stress-dominated phenomena or to cylindrical geometries. Therefore, multi-dimensional analysis models have been newly developed by implementing three-dimensional momentum flux and diffusion terms. The multi-dimensional model has been assessed against multi-dimensional conceptual problems and CFD code results. Although the assessment results were reasonable, the multi-dimensional model had not been validated against two-phase flow experimental data. In this paper, the multi-dimensional air-water two-phase flow experiment was simulated and analyzed.

  10. The digital storytelling process: A comparative analysis from various experts

    Science.gov (United States)

    Hussain, Hashiroh; Shiratuddin, Norshuhada

    2016-08-01

    Digital Storytelling (DST) is a method of delivering information to an audience. It combines narrative and digital media content infused with multimedia elements. In order for educators (i.e., the designers) to create a compelling digital story, there are sets of processes introduced by experts; however, the experts suggest a variety of processes to guide them, some of which are redundant. The main aim of this study is to propose a single guiding process for the creation of DST. A comparative analysis is employed in which ten DST models from various experts are analysed. The process can also be implemented in other multimedia materials that use the concept of DST.

  11. COMPARATIVE ANALYSIS OF VAT EVOLUTION IN THE EUROPEAN ECONOMIC SYSTEM

    Directory of Open Access Journals (Sweden)

    MIHAELA ANDREEA STROE

    2011-04-01

    In this paper we present a comparative analysis of VAT in different states of the world. I made some observations on this theme because I believe that VAT is very important in carrying out transactions, and the increase or decrease of this tax has a major impact upon national economies and also on the quality of life in developing countries. The paper's purpose is to make a comparison between the American and European systems of taxation, with their advantages and disadvantages, and, in the end, to render an economic model and its statistical components. VAT is a value-added tax which appeared about 50 years ago, initially with two purposes: one to replace certain indirect taxes, and another to reduce the budget deficit, according to the belief of that time. The first country to adopt this model was France; today the tax is known as the value-added tax.

  12. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    Science.gov (United States)

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer-assisted design/computer-assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)

  13. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
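
    The first (screening) step can be sketched with SALib's Morris implementation; the toy function and parameter names below are placeholders for the vascular access model:

      import numpy as np
      from SALib.sample.morris import sample as morris_sample
      from SALib.analyze import morris

      problem = {"num_vars": 4,
                 "names": ["R_prox", "R_dist", "C_art", "L_inertance"],  # illustrative
                 "bounds": [[0.5, 2.0], [0.5, 2.0], [0.1, 1.0], [0.01, 0.1]]}

      def model(X):                    # placeholder for the expensive flow model
          return X[:, 0] ** 2 + 0.1 * X[:, 1] + 5.0 * X[:, 2] + 0.0 * X[:, 3]

      X = morris_sample(problem, N=100, num_levels=4)   # (num_vars+1)*N model runs
      Si = morris.analyze(problem, X, model(X), num_levels=4)

      # mu_star ranks parameters by overall influence; parameters with small values
      # can be frozen before the more expensive variance-based (gPCE) step.
      for name, mu in zip(problem["names"], Si["mu_star"]):
          print(f"{name:12s} mu* = {mu:.3f}")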

  14. Initial implementation of a comparative data analysis ontology.

    Science.gov (United States)

    Prosdocimi, Francisco; Chisham, Brandon; Pontelli, Enrico; Thompson, Julie D; Stoltzfus, Arlin

    2009-07-03

    Comparative analysis is used throughout biology. When entities under comparison (e.g. proteins, genomes, species) are related by descent, evolutionary theory provides a framework that, in principle, allows N-ary comparisons of entities, while controlling for non-independence due to relatedness. Powerful software tools exist for specialized applications of this approach, yet it remains under-utilized in the absence of a unifying informatics infrastructure. A key step in developing such an infrastructure is the definition of a formal ontology. The analysis of use cases and existing formalisms suggests that a significant component of evolutionary analysis involves a core problem of inferring a character history, relying on key concepts: "Operational Taxonomic Units" (OTUs), representing the entities to be compared; "character-state data" representing the observations compared among OTUs; "phylogenetic tree", representing the historical path of evolution among the entities; and "transitions", the inferred evolutionary changes in states of characters that account for observations. Using the Web Ontology Language (OWL), we have defined these and other fundamental concepts in a Comparative Data Analysis Ontology (CDAO). CDAO has been evaluated for its ability to represent token data sets and to support simple forms of reasoning. With further development, CDAO will provide a basis for tools (for semantic transformation, data retrieval, validation, integration, etc.) that make it easier for software developers and biomedical researchers to apply evolutionary methods of inference to diverse types of data, so as to integrate this powerful framework for reasoning into their research.

  15. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    The paper asks what "would be needed by a Cyber Security Operations Centre in order to perform offensive cyber operations?". The analysis was performed, using as a springboard seven models of cyber-attack, and resulted in the development of what is described as a canonical...

  16. Assessing Heterogeneity for Factor Analysis Model with Continuous and Ordinal Outcomes

    Directory of Open Access Journals (Sweden)

    Ye-Mao Xia

    2016-01-01

    Factor analysis models with continuous and ordinal responses are a useful tool for assessing relations between latent variables and mixed observed responses. These models have been successfully applied to many different fields, including the behavioral, educational, and social-psychological sciences. However, within the Bayesian analysis framework, most developments are constrained within parametric families, in which particular distributions are specified for the parameters of interest. This leads to difficulty in dealing with outliers and/or distribution deviations. In this paper, we propose Bayesian semiparametric modeling for the factor analysis model with continuous and ordinal variables. A truncated stick-breaking prior is used to model the distributions of the intercept and/or covariance structural parameters. Bayesian posterior analysis is carried out through a simulation-based method, with a blocked Gibbs sampler implemented to draw observations from the complicated posterior. For model selection, the logarithm of the pseudo-marginal likelihood is developed to compare the competing models. Empirical results are presented to illustrate the application of the methodology.
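
    A truncated stick-breaking prior is easy to simulate, which may help readers unfamiliar with the construction: mixture weights are built by repeatedly breaking off Beta-distributed fractions of the remaining "stick". A minimal sketch (the truncation level and concentration parameter are illustrative):

      import numpy as np

      def stick_breaking_weights(alpha, K, rng):
          """Draw mixture weights from a stick-breaking prior truncated at K atoms.

          v_k ~ Beta(1, alpha);  w_k = v_k * prod_{j<k} (1 - v_j);  w_K takes the rest.
          """
          v = rng.beta(1.0, alpha, size=K)
          v[-1] = 1.0                     # truncation: last break takes the remainder
          remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
          return v * remaining

      rng = np.random.default_rng(42)
      w = stick_breaking_weights(alpha=2.0, K=10, rng=rng)
      print(w.round(3), w.sum())          # weights sum to 1 by construction
      # Small alpha concentrates mass on few atoms; large alpha spreads it out.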

  17. Inverse Analysis and Modeling for Tunneling Thrust on Shield Machine

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2013-01-01

    With the rapid development of sensor and detection technologies, measured data analysis plays an increasingly important role in the design and control of heavy engineering equipment. This paper proposes a method for inverse analysis and modeling based on large volumes of on-site measured data, in which dimensional analysis and data mining techniques are combined. The method was applied to the modeling of the tunneling thrust on shield machines, and an explicit expression for thrust prediction was established. Combined with on-site data from a tunneling project in China, the inverse identification of the model coefficients was carried out using the multiple regression method. The model residual was analyzed by statistical methods. By comparing the on-site data with the model predictions in two other projects with different tunneling conditions, the feasibility of the model is discussed. The work may provide a scientific basis for the rational design and control of shield tunneling machines, and also a new way to analyze massive on-site data from complex engineering systems with nonlinear, multivariable, time-varying characteristics.
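
    As a sketch of how dimensional analysis and multiple regression combine in this kind of inverse identification, consider a hypothetical power-law thrust form fitted to synthetic data; the paper's actual expression and variables are not reproduced here, so the form below is purely illustrative:

      import numpy as np

      # Hypothetical dimensional-analysis form: F = k * D^a * H^b * v^c
      # (D: cutterhead diameter, H: cover depth, v: advance rate) -- assumed, not the paper's.
      rng = np.random.default_rng(3)
      n = 200
      D = rng.uniform(6.0, 12.0, n)
      H = rng.uniform(10.0, 40.0, n)
      v = rng.uniform(20.0, 60.0, n)
      F = 50.0 * D**1.8 * H**0.4 * v**0.2 * rng.lognormal(0.0, 0.05, n)  # synthetic

      # Taking logs turns the power law into multiple linear regression:
      # log F = log k + a log D + b log H + c log v
      A = np.column_stack([np.ones(n), np.log(D), np.log(H), np.log(v)])
      coef, *_ = np.linalg.lstsq(A, np.log(F), rcond=None)
      logk, a, b, c = coef
      print(f"k={np.exp(logk):.1f}, a={a:.2f}, b={b:.2f}, c={c:.2f}")

      residuals = np.log(F) - A @ coef    # basis for the statistical residual analysis
      print(f"residual std: {residuals.std():.3f}")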

  18. Comparative analysis on the selection of number of clusters in community detection

    Science.gov (United States)

    Kawamoto, Tatsuro; Kabashima, Yoshiyuki

    2018-02-01

    We conduct a comparative analysis of various estimates of the number of clusters in community detection. An exhaustive comparison requires testing all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on a stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, the map equation, the Bethe free energy, prediction errors, and isolated eigenvalues. From the analysis, the tendencies of the assessment criteria and algorithms to overfit or underfit become apparent. In addition, we propose the alluvial diagram as a suitable tool to visualize statistical inference results, which can be useful for determining the number of clusters.
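
    As a small, runnable illustration of one of the assessment criteria above, here is modularity-based selection of the number of communities with networkx (recent versions expose the cutoff/best_n controls used below); the planted-partition test graph and its sizes are illustrative:

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities, modularity

      # Planted-partition (stochastic block model) test graph: 3 groups of 40 nodes,
      # dense within groups (p_in=0.25), sparse between (p_out=0.02).
      G = nx.planted_partition_graph(l=3, k=40, p_in=0.25, p_out=0.02, seed=7)

      # Greedy modularity maximization chooses the number of communities itself...
      parts = greedy_modularity_communities(G)
      print(f"greedy picks {len(parts)} communities, Q = {modularity(G, parts):.3f}")

      # ...but one can also scan a forced number of communities via cutoff/best_n
      for n in range(2, 6):
          parts_n = greedy_modularity_communities(G, cutoff=n, best_n=n)
          print(f"n={n}: Q = {modularity(G, parts_n):.3f}")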

  19. A comparative analysis of biclustering algorithms for gene expression data

    Science.gov (United States)

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V.

    2013-01-01

    The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters. PMID:22772837
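
    For readers wanting to experiment, scikit-learn ships a simple checkerboard-model biclustering that can serve as a baseline before reaching for the specialized packages compared in the article (matrix sizes and noise level below are illustrative):

      import numpy as np
      from sklearn.datasets import make_checkerboard
      from sklearn.cluster import SpectralBiclustering
      from sklearn.metrics import consensus_score

      # Synthetic expression-like matrix with a planted 3x3 checkerboard of biclusters
      data, rows, cols = make_checkerboard(shape=(120, 80), n_clusters=(3, 3),
                                           noise=5.0, random_state=0)

      model = SpectralBiclustering(n_clusters=(3, 3), random_state=0)
      model.fit(data)

      # consensus_score compares recovered biclusters with the planted ones (1.0 = perfect)
      score = consensus_score(model.biclusters_, (rows, cols))
      print(f"consensus score: {score:.2f}")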

  20. Static response of deformable microchannels: a comparative modelling study

    Science.gov (United States)

    Shidhore, Tanmay C.; Christov, Ivan C.

    2018-02-01

    We present a comparative modelling study of fluid-structure interactions in microchannels. Through a mathematical analysis based on plate theory and the lubrication approximation for low-Reynolds-number flow, we derive models for the flow rate-pressure drop relation for long shallow microchannels with both thin and thick deformable top walls. These relations are tested against full three-dimensional two-way-coupled fluid-structure interaction simulations. Three types of microchannels, representing different elasticity regimes and having been experimentally characterized previously, are chosen as benchmarks for our theory and simulations. Good agreement is found in most cases for the predicted, simulated and measured flow rate-pressure drop relationships. The numerical simulations performed allow us to also carefully examine the deformation profile of the top wall of the microchannel in any cross section, showing good agreement with the theory. Specifically, the prediction that span-wise displacement in a long shallow microchannel decouples from the flow-wise deformation is confirmed, and the predicted scaling of the maximum displacement with the hydrodynamic pressure and the various material and geometric parameters is validated.
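
    For orientation, the rigid-wall baseline that these models build on is the classical lubrication result for a long, shallow rectangular channel; the deformable-wall theories effectively replace the fixed height with a pressure-dependent one, making the flow rate-pressure drop relation nonlinear. In LaTeX, a textbook baseline plus a simple linear-compliance closure, stated here as an illustration rather than the paper's full result:

      q = \frac{w h_0^3}{12\,\mu}\,\frac{\Delta p}{L},
      \qquad
      h(x) = h_0 + \alpha\, p(x),

    where q is the volumetric flow rate, w the channel width, h_0 the undeformed height, \mu the viscosity, \Delta p the pressure drop over length L, and \alpha a lumped compliance set by the top wall's thickness and elastic modulus (an assumed closure). Substituting the compliant h(x) into the lubrication equations is what produces the thin- and thick-plate corrections compared in the study.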

  1. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this type is available for creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, and gives a brief introduction to the strengths and weaknesses of the four image-based techniques, with some personal comments on what can and cannot be done with each software package. In the end, the study concludes that each software package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  2. A Bayesian Framework for Analysis of Pseudo-Spatial Models of Comparable Engineered Systems with Application to Spacecraft Anomaly Prediction Based on Precedent Data

    Science.gov (United States)

    Ndu, Obibobi Kamtochukwu

    To ensure that estimates of risk and reliability inform design and resource allocation decisions in the development of complex engineering systems, early engagement in the design life cycle is necessary. An unfortunate constraint on the accuracy of such estimates at this stage of concept development is the limited amount of high fidelity design and failure information available on the actual system under development. Applying the human ability to learn from experience and augment our state of knowledge to evolve better solutions mitigates this limitation. However, the challenge lies in formalizing a methodology that takes this highly abstract, but fundamentally human cognitive, ability and extending it to the field of risk analysis while maintaining the tenets of generalization, Bayesian inference, and probabilistic risk analysis. We introduce an integrated framework for inferring the reliability, or other probabilistic measures of interest, of a new system or a conceptual variant of an existing system. Abstractly, our framework is based on learning from the performance of precedent designs and then applying the acquired knowledge, appropriately adjusted based on degree of relevance, to the inference process. This dissertation presents a method for inferring properties of the conceptual variant using a pseudo-spatial model that describes the spatial configuration of the family of systems to which the concept belongs. Through non-metric multidimensional scaling, we formulate the pseudo-spatial model based on rank-ordered subjective expert perception of design similarity between systems that elucidate the psychological space of the family. By a novel extension of Kriging methods for analysis of geospatial data to our "pseudo-space of comparable engineered systems", we develop a Bayesian inference model that allows prediction of the probabilistic measure of interest.
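
    The first step of the framework, embedding rank-ordered expert similarity judgments into a "pseudo-space", corresponds to non-metric multidimensional scaling, which can be sketched with scikit-learn; the dissimilarity matrix below is a made-up stand-in for elicited expert rankings:

      import numpy as np
      from sklearn.manifold import MDS

      # Hypothetical expert-elicited dissimilarities among 5 spacecraft designs
      # (symmetric, zero diagonal; only the rank order is meaningful).
      D = np.array([
          [0, 1, 4, 5, 6],
          [1, 0, 3, 5, 6],
          [4, 3, 0, 2, 4],
          [5, 5, 2, 0, 1],
          [6, 6, 4, 1, 0],
      ], dtype=float)

      # Non-metric MDS preserves the *ordering* of dissimilarities, not their
      # values, which suits rank-ordered subjective judgments.
      mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
                random_state=0)
      coords = mds.fit_transform(D)
      print(coords.round(2))
      # Kriging-style inference then treats `coords` as spatial locations,
      # predicting a new design's reliability from its pseudo-space neighbours.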

  3. Comparing the Cognitive Process of Circular Causality in Two Patients with Strokes through Qualitative Analysis.

    Science.gov (United States)

    Derakhshanrad, Seyed Alireza; Piven, Emily; Ghoochani, Bahareh Zeynalzadeh

    2017-10-01

    Walter J. Freeman pioneered the neurodynamic model of brain activity when he described the brain dynamics for cognitive information transfer as the process of circular causality at intention, meaning, and perception (IMP) levels. This view contributed substantially to establishment of the Intention, Meaning, and Perception Model of Neuro-occupation in occupational therapy. As described by the model, IMP levels are three components of the brain dynamics system, with nonlinear connections that enable cognitive function to be processed in a circular causality fashion, known as Cognitive Process of Circular Causality (CPCC). Although considerable research has been devoted to study the brain dynamics by sophisticated computerized imaging techniques, less attention has been paid to study it through investigating the adaptation process of thoughts and behaviors. To explore how CPCC manifested thinking and behavioral patterns, a qualitative case study was conducted on two matched female participants with strokes, who were of comparable ages, affected sides, and other characteristics, except for their resilience and motivational behaviors. CPCC was compared by matrix analysis between two participants, using content analysis with pre-determined categories. Different patterns of thinking and behavior may have happened, due to disparate regulation of CPCC between two participants.

  4. A comparative study of 3D FZI and electrofacies modeling using seismic attribute analysis and neural network technique: A case study of Cheshmeh-Khosh Oil field in Iran

    Directory of Open Access Journals (Sweden)

    Mahdi Rastegarnia

    2016-09-01

    Electrofacies are used to determine reservoir rock properties, especially permeability, to simulate fluid flow in porous media. They are determined by classifying similar logs among different groups of logging data, accomplished by statistical analyses such as principal component analysis, cluster analysis and differential analysis. The aim of this study is to predict 3D FZI (flow zone index) and electrofacies (EFACT) volumes from a large volume of 3D seismic data. The study is divided into two parts. In the first part, in order to build the EFACT model, nuclear magnetic resonance (NMR) log parameters were employed to develop an electrofacies diagram based on pore size distribution and porosity variations. Then, a graph-based clustering method, known as multi-resolution graph-based clustering (MRGC), was employed to classify and obtain the optimum number of electrofacies. Seismic attribute analysis was then applied to model each relaxation group in order to build the initial 3D model, which was refined into the final model by applying a probabilistic neural network (PNN). In the second part of the study, the FZI 3D model was created by the multi-attribute technique and then improved by three different artificial intelligence systems: a PNN, a multilayer feed-forward network (MLFN) and a radial basis function network (RBFN). Finally, the models of FZI and EFACT were compared. The results revealed that the two models are in good agreement and that the PNN method is successful in modeling FZI and EFACT from 3D seismic data when no Stoneley or NMR log data are available. Moreover, the models may be used to detect hydrocarbon-bearing zones and to locate producing wells for future development plans. In addition, the result provides a geologically realistic spatial FZI and reservoir facies distribution, which helps to understand the subsurface reservoirs
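
    Although the abstract does not reproduce them, FZI is conventionally computed from core porosity and permeability via the rock quality index (the standard Amaefule-style relations); a short sketch of that calculation, with assumed units (permeability in mD, porosity as a fraction):

      import numpy as np

      def fzi(permeability_md, porosity):
          """Flow zone index from the standard RQI relations:

          RQI   = 0.0314 * sqrt(k / phi)   [micrometres; k in mD, phi a fraction]
          phi_z = phi / (1 - phi)          (normalized pore volume)
          FZI   = RQI / phi_z
          """
          rqi = 0.0314 * np.sqrt(permeability_md / porosity)
          phi_z = porosity / (1.0 - porosity)
          return rqi / phi_z

      k = np.array([10.0, 150.0, 800.0])   # mD (illustrative core samples)
      phi = np.array([0.12, 0.18, 0.24])
      print(fzi(k, phi).round(2))          # samples with similar FZI share a flow unit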

  5. CORPORATE FINANCIAL DISTRESS AND BANKRUPTCY: A COMPARATIVE ANALYSIS IN FRANCE, ITALY AND SPAIN

    Directory of Open Access Journals (Sweden)

    Alessandra Amendola

    2013-11-01

    The paper presents a competing-risks approach for investigating the determinants of corporate financial distress. In particular, a comparative analysis of three European markets (France, Italy and Spain) is performed in order to find the similarities and differences in the determinants of distress. Using the AMADEUS dataset, two possible causes of exit from the market are considered: bankruptcy and liquidation. To identify the variables that influence the risk of leaving the market, a competing-risks model is estimated for each country and compared with a pooled model including all three countries. In addition, the performance of the competing-risks approach is evaluated against the single-risk model, in which all states are considered without distinction. The results show that the competing-risks approach leads to a saving in the number of selected variables, which becomes more significant when the model is estimated for each country separately. Moreover, the variables selected for each country make it possible to identify similarities between the different exit routes across the markets. Some of the differences between Spain and the other two countries may be related to the dissimilar definition of the distress states.
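
    A cause-specific Cox formulation is one common way to implement such a competing-risks model; a toy sketch with lifelines on synthetic covariates follows (the paper's actual specification and variables are not reproduced here):

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(0)
      n = 500
      df = pd.DataFrame({
          "leverage":  rng.normal(0.5, 0.2, n),
          "liquidity": rng.normal(1.0, 0.3, n),
          "T":         rng.exponential(8.0, n),        # years observed (synthetic)
          "event":     rng.choice([0, 1, 2], n, p=[0.6, 0.25, 0.15]),
      })  # event: 0 = still active (censored), 1 = bankruptcy, 2 = liquidation

      # Cause-specific hazards: fit one Cox model per exit route, treating the
      # competing exit as censoring.
      for cause, label in [(1, "bankruptcy"), (2, "liquidation")]:
          d = df.assign(E=(df["event"] == cause).astype(int))
          cph = CoxPHFitter().fit(d[["leverage", "liquidity", "T", "E"]],
                                  duration_col="T", event_col="E")
          print(label, cph.params_.round(3).to_dict())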

  6. Development of a CANDU Moderator Analysis Model; Based on Coupled Solver

    International Nuclear Information System (INIS)

    Yoon, Churl; Park, Joo Hwan

    2006-01-01

    A CFD model for predicting the CANDU-6 moderator temperature has been under development for several years at KAERI, based on CFX-4. This analytic model (CFX4-CAMO) has strengths in the modeling of hydraulic resistance in the core region and in the treatment of the heat source term in the energy equations. However, convergence difficulties and slow computing speed turn out to be the limitations of this model, because the CFX-4 code adopts a segregated solver to solve governing equations with strongly coupled effects. Compared to CFX-4 with its segregated solver, CFX-10 adopts a highly efficient and robust coupled solver. Before December 2005, when CFX-10 was distributed, the previous versions (the CFX-5 series) also adopted a coupled solver but lacked the capability to apply porous media approaches correctly. In this study, the moderator analysis model based on CFX-4 (CFX4-CAMO) is transformed into a new moderator analysis model based on CFX-10. The new model is examined and the results are compared to the former.

  7. Do Breast Implants Influence Breastfeeding? A Meta-Analysis of Comparative Studies.

    Science.gov (United States)

    Cheng, Fengrui; Dai, Shuiping; Wang, Chiyi; Zeng, Shaoxue; Chen, Junjie; Cen, Ying

    2018-06-01

    Aesthetic breast implant augmentation surgery is the most popular plastic surgery worldwide. Many women choose to receive breast implants during their reproductive years, although the long-term effects are still controversial. Research aim: We conducted a meta-analysis to assess the influence of aesthetic breast augmentation on breastfeeding. We also compared the exclusive breastfeeding rates of periareolar versus inframammary incisions. A systematic search for comparative studies of breast implants and breastfeeding was performed in PubMed, MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, ScienceDirect, Scopus, and Web of Science through May 2018. The meta-analysis was conducted with a random-effects model (or fixed effects, if heterogeneity was absent). Four cohort studies and one cross-sectional study were included. There was a significant reduction in the exclusive breastfeeding rate for women with breast implants compared with women without implants, pooled relative risk = 0.63, 95% confidence interval [0.46, 0.86], as well as in the overall breastfeeding rate, pooled relative risk = 0.88, 95% confidence interval [0.81, 0.95]. There was no evidence that periareolar incision was associated with a reduction in the exclusive breastfeeding rate, pooled relative risk = 0.84, 95% confidence interval [0.45, 1.58]. Participants with breast implants are less likely to establish breastfeeding, especially exclusive breastfeeding. Periareolar incision does not appear to reduce the exclusive breastfeeding rate.
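
    The pooling step reported above can be illustrated with a minimal inverse-variance computation on the log relative risk scale; the per-study numbers below are invented, and a real analysis would add a between-study variance (e.g. DerSimonian-Laird) for the random-effects model:

      import numpy as np

      # Invented per-study 2x2 data: (events_implant, n_implant, events_control, n_control)
      studies = [(30, 60, 50, 70), (45, 80, 72, 90), (25, 40, 55, 75)]

      log_rr, var = [], []
      for a, n1, c, n2 in studies:
          log_rr.append(np.log((a / n1) / (c / n2)))
          var.append(1/a - 1/n1 + 1/c - 1/n2)   # variance of log RR
      log_rr, var = np.array(log_rr), np.array(var)

      # Fixed-effect inverse-variance pooling
      w = 1.0 / var
      pooled = np.sum(w * log_rr) / np.sum(w)
      se = np.sqrt(1.0 / np.sum(w))
      lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
      print(f"pooled RR = {np.exp(pooled):.2f}, "
            f"95% CI [{np.exp(lo):.2f}, {np.exp(hi):.2f}]")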

  8. Modelling of pesticide emissions for Life Cycle Inventory analysis: Model development, applications and implications

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes

    with variations in the climates and soils present in Europe. Emissions of pesticides to surface water and groundwater calculated by PestLCI 2.0 were compared with models used for risk assessment. Compared to the MACRO module in SWASH 3.1 model, which calculates surface water emissions by runoff and drainage...... chromatographic flow of water through the soil), which was attributed to the omission of emissions via macropore flow in the latter model. The comparison was complicated by the fact that the scenarios used were not fully identical. In order to quantify the implications of using PestLCI 2.0, human toxicity......The work presented in this thesis deals with quantification of pesticide emissions in the Life Cycle Inventory (LCI) analysis phase of Life Cycle Assessment (LCA). The motivation to model pesticide emissions is that reliable LCA results not only depend on accurate impact assessment models, but also...

  9. Comparative Analysis of 37 Acinetobacter Bacteriophages

    Directory of Open Access Journals (Sweden)

    Dann Turner

    2017-12-01

    Full Text Available Members of the genus Acinetobacter are ubiquitous in the environment and the multiple-drug resistant species A. baumannii is of significant clinical concern. This clinical relevance is currently driving research on bacterial viruses infecting A. baumannii, in an effort to implement phage therapy and phage-derived antimicrobials. Initially, a total of 42 Acinetobacter phage genome sequences were available in the international nucleotide sequence databases, corresponding to a total of 2.87 Mbp of sequence information and representing all three families of the order Caudovirales and a single member of the Leviviridae. A comparative bioinformatics analysis of 37 Acinetobacter phages revealed that they form six discrete clusters and two singletons based on genomic organisation and nucleotide sequence identity. The assignment of these phages to clusters was further supported by proteomic relationships established using OrthoMCL. The 4067 proteins encoded by the 37 phage genomes formed 737 groups and 974 orphans. Notably, over half of the proteins encoded by the Acinetobacter phages are of unknown function. The comparative analysis and clustering presented enables an updated taxonomic framing of these clades.

  10. Comparative analysis of tumor spheroid generation techniques for differential in vitro drug toxicity

    Science.gov (United States)

    Raghavan, Shreya; Rowley, Katelyn R.; Mehta, Geeta

    2016-01-01

    Multicellular tumor spheroids are powerful in vitro models to perform preclinical chemosensitivity assays. We compare different methodologies to generate tumor spheroids in terms of resultant spheroid morphology, cellular arrangement and chemosensitivity. We used two cancer cell lines (MCF7 and OVCAR8) to generate spheroids using (i) hanging drop array plates; (ii) liquid overlay on ultra-low attachment plates; (iii) liquid overlay on ultra-low attachment plates with rotating mixing (nutator plates). Analysis of spheroid morphometry indicated that cellular compaction was increased in spheroids generated on nutator and hanging drop array plates. Collagen staining also indicated higher compaction and remodeling in tumor spheroids on nutator and hanging drop arrays compared to conventional liquid overlay. Consequently, spheroids generated on nutator or hanging drop plates had increased chemoresistance to cisplatin treatment (20-60% viability) compared to spheroids on ultra-low attachment plates (10-20% viability). Lastly, we used a mathematical model to demonstrate minimal changes in oxygen and cisplatin diffusion within experimentally generated spheroids. Our results demonstrate that in vitro methods of tumor spheroid generation result in varied cellular arrangement and chemosensitivity. PMID:26918944

  11. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.

  12. A comparative analysis of projected impacts of climate change on river runoff from global and catchment-scale hydrological models

    Science.gov (United States)

    Gosling, S. N.; Taylor, R. G.; Arnell, N. W.; Todd, M. C.

    2011-01-01

    We present a comparative analysis of projected impacts of climate change on river runoff from two types of distributed hydrological model, a global hydrological model (GHM) and catchment-scale hydrological models (CHM). Analyses are conducted for six catchments that are global in coverage and feature strong contrasts in spatial scale as well as climatic and developmental conditions. These include the Liard (Canada), Mekong (SE Asia), Okavango (SW Africa), Rio Grande (Brazil), Xiangxi (China) and Harper's Brook (UK). A single GHM (Mac-PDM.09) is applied to all catchments whilst different CHMs are applied for each catchment. The CHMs include SLURP v. 12.2 (Liard), SLURP v. 12.7 (Mekong), Pitman (Okavango), MGB-IPH (Rio Grande), AV-SWAT-X 2005 (Xiangxi) and Cat-PDM (Harper's Brook). The CHMs typically simulate water resource impacts based on a more explicit representation of catchment water resources than that available from the GHM and the CHMs include river routing, whereas the GHM does not. Simulations of mean annual runoff, mean monthly runoff and high (Q5) and low (Q95) monthly runoff under baseline (1961-1990) and climate change scenarios are presented. We compare the simulated runoff response of each hydrological model to (1) prescribed increases in global-mean air temperature of 1.0, 2.0, 3.0, 4.0, 5.0 and 6.0 °C relative to baseline from the UKMO HadCM3 Global Climate Model (GCM) to explore response to different amounts of climate forcing, and (2) a prescribed increase in global-mean air temperature of 2.0 °C relative to baseline for seven GCMs to explore response to climate model structural uncertainty. We find that the differences in projected changes of mean annual runoff between the two types of hydrological model can be substantial for a given GCM (e.g. an absolute GHM-CHM difference in mean annual runoff percentage change for UKMO HadCM3 2 °C warming of up to 25%), and they are generally larger for indicators of high and low monthly runoff. However

  13. A comparative analysis of projected impacts of climate change on river runoff from global and catchment-scale hydrological models

    Directory of Open Access Journals (Sweden)

    S. N. Gosling

    2011-01-01

    Full Text Available We present a comparative analysis of projected impacts of climate change on river runoff from two types of distributed hydrological model, a global hydrological model (GHM) and catchment-scale hydrological models (CHM). Analyses are conducted for six catchments that are global in coverage and feature strong contrasts in spatial scale as well as climatic and developmental conditions. These include the Liard (Canada), Mekong (SE Asia), Okavango (SW Africa), Rio Grande (Brazil), Xiangxi (China) and Harper's Brook (UK). A single GHM (Mac-PDM.09) is applied to all catchments whilst different CHMs are applied for each catchment. The CHMs include SLURP v. 12.2 (Liard), SLURP v. 12.7 (Mekong), Pitman (Okavango), MGB-IPH (Rio Grande), AV-SWAT-X 2005 (Xiangxi) and Cat-PDM (Harper's Brook). The CHMs typically simulate water resource impacts based on a more explicit representation of catchment water resources than that available from the GHM and the CHMs include river routing, whereas the GHM does not. Simulations of mean annual runoff, mean monthly runoff and high (Q5) and low (Q95) monthly runoff under baseline (1961–1990) and climate change scenarios are presented. We compare the simulated runoff response of each hydrological model to (1) prescribed increases in global-mean air temperature of 1.0, 2.0, 3.0, 4.0, 5.0 and 6.0 °C relative to baseline from the UKMO HadCM3 Global Climate Model (GCM) to explore response to different amounts of climate forcing, and (2) a prescribed increase in global-mean air temperature of 2.0 °C relative to baseline for seven GCMs to explore response to climate model structural uncertainty.

    We find that the differences in projected changes of mean annual runoff between the two types of hydrological model can be substantial for a given GCM (e.g. an absolute GHM-CHM difference in mean annual runoff percentage change for UKMO HadCM3 2 °C warming of up to 25%), and they are generally larger for indicators of high and low monthly runoff
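
    For clarity on the indicators used in this record: Qp denotes the monthly runoff exceeded p% of the time, so Q5 characterizes high flows and Q95 low flows. A minimal sketch, with hypothetical monthly runoff series standing in for the model output:

        import numpy as np

        rng = np.random.default_rng(0)
        baseline = rng.gamma(shape=2.0, scale=20.0, size=360)   # 30 years of monthly runoff (mm)
        scenario = baseline * rng.normal(0.9, 0.1, size=360)    # hypothetical perturbed series

        def q_exceeded(runoff, p):
            """Runoff exceeded p% of the time (p=5 -> high flow, p=95 -> low flow)."""
            return np.percentile(runoff, 100 - p)

        for p in (5, 95):
            change = 100 * (q_exceeded(scenario, p) / q_exceeded(baseline, p) - 1)
            print(f"Q{p} change: {change:+.1f}%")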

  14. Data analysis and approximate models model choice, location-scale, analysis of variance, nonparametric regression and image analysis

    CERN Document Server

    Davies, Patrick Laurie

    2014-01-01

    Introduction: Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis; Approximation, Randomness, Chaos, Determinism. Approximation: A Concept of Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness. Discrete Models: The Empirical Density; Metrics and Discrepancies; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data. Outliers: Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data; The Location...

  15. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  16. comparative analysis of the compressive strength of hollow

    African Journals Online (AJOL)

    user

    2016-04-02

    Previous analyses showed that cavity size and number, on the one hand, and thickness combinations, on the other, affect the compressive strength of hollow sandcrete blocks. Series arrangement of the cavities is common, but parallel arrangement has been recommended. This research performed a comparative analysis of ...

  17. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  18. Initial Implementation of a comparative Data Analysis Ontology

    Directory of Open Access Journals (Sweden)

    Francisco Prosdocimi

    2009-01-01

    Full Text Available Comparative analysis is used throughout biology. When entities under comparison (e.g. proteins, genomes, species) are related by descent, evolutionary theory provides a framework that, in principle, allows N-ary comparisons of entities, while controlling for non-independence due to relatedness. Powerful software tools exist for specialized applications of this approach, yet it remains under-utilized in the absence of a unifying informatics infrastructure. A key step in developing such an infrastructure is the definition of a formal ontology. The analysis of use cases and existing formalisms suggests that a significant component of evolutionary analysis involves a core problem of inferring a character history, relying on key concepts: “Operational Taxonomic Units” (OTUs), representing the entities to be compared; “character-state data”, representing the observations compared among OTUs; “phylogenetic tree”, representing the historical path of evolution among the entities; and “transitions”, the inferred evolutionary changes in states of characters that account for observations. Using the Web Ontology Language (OWL), we have defined these and other fundamental concepts in a Comparative Data Analysis Ontology (CDAO). CDAO has been evaluated for its ability to represent token data sets and to support simple forms of reasoning. With further development, CDAO will provide a basis for tools (for semantic transformation, data retrieval, validation, integration, etc.) that make it easier for software developers and biomedical researchers to apply evolutionary methods of inference to diverse types of data, so as to integrate this powerful framework for reasoning into their research.

  19. Initial Implementation of a Comparative Data Analysis Ontology

    Directory of Open Access Journals (Sweden)

    Francisco Prosdocimi

    2009-07-01

    Full Text Available Comparative analysis is used throughout biology. When entities under comparison (e.g. proteins, genomes, species) are related by descent, evolutionary theory provides a framework that, in principle, allows N-ary comparisons of entities, while controlling for non-independence due to relatedness. Powerful software tools exist for specialized applications of this approach, yet it remains under-utilized in the absence of a unifying informatics infrastructure. A key step in developing such an infrastructure is the definition of a formal ontology. The analysis of use cases and existing formalisms suggests that a significant component of evolutionary analysis involves a core problem of inferring a character history, relying on key concepts: “Operational Taxonomic Units” (OTUs), representing the entities to be compared; “character-state data”, representing the observations compared among OTUs; “phylogenetic tree”, representing the historical path of evolution among the entities; and “transitions”, the inferred evolutionary changes in states of characters that account for observations. Using the Web Ontology Language (OWL), we have defined these and other fundamental concepts in a Comparative Data Analysis Ontology (CDAO). CDAO has been evaluated for its ability to represent token data sets and to support simple forms of reasoning. With further development, CDAO will provide a basis for tools (for semantic transformation, data retrieval, validation, integration, etc.) that make it easier for software developers and biomedical researchers to apply evolutionary methods of inference to diverse types of data, so as to integrate this powerful framework for reasoning into their research.

  20. On the applications of nanofluids to enhance the performance of solar collectors: A comparative analysis of Atangana-Baleanu and Caputo-Fabrizio fractional models

    Science.gov (United States)

    Sheikh, Nadeem Ahmad; Ali, Farhad; Khan, Ilyas; Gohar, Madeha; Saqib, Muhammad

    2017-12-01

    In the modern era, solar energy has attracted a great deal of attention from researchers. The reasons are twofold: first, researchers are concerned with designing new devices such as solar collectors and solar water heaters; second, with using new approaches to improve the performance of solar energy equipment. The aim of this paper is to model the enhancement of the heat transfer rate of solar energy devices using nanoparticles, and to find exact solutions of the considered problem. The classical model is transformed into a generalized model using two different types of time-fractional derivatives, namely the Caputo-Fabrizio and Atangana-Baleanu derivatives, and their comparative analysis is presented. The solutions for the flow profile and heat transfer are obtained using the Laplace transform method. The variation in the heat transfer rate has been observed for different nanoparticles and different volume fractions. Theoretical results show that by adding aluminum oxide nanoparticles, the efficiency of solar collectors may be enhanced by 5.2%. Furthermore, the effect of the volume fraction of nanoparticles on the velocity distribution is discussed with graphical illustrations. The solutions reduce to the corresponding classical nanofluid model.
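
    For reference, the two time-fractional operators named in the abstract are commonly defined as follows (a sketch of the standard definitions, with M(α) and B(α) denoting normalization functions and E_α the one-parameter Mittag-Leffler function; the paper's own notation may differ):

        {}^{\mathrm{CF}}\!D_t^{\alpha} f(t)
          = \frac{M(\alpha)}{1-\alpha} \int_0^t f'(\tau)\,
            \exp\!\left( -\frac{\alpha (t-\tau)}{1-\alpha} \right) \mathrm{d}\tau,
        \qquad 0 < \alpha < 1,

        {}^{\mathrm{ABC}}\!D_t^{\alpha} f(t)
          = \frac{B(\alpha)}{1-\alpha} \int_0^t f'(\tau)\,
            E_{\alpha}\!\left( -\frac{\alpha (t-\tau)^{\alpha}}{1-\alpha} \right) \mathrm{d}\tau,
        \qquad
        E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}.

    The exponential kernel of the Caputo-Fabrizio derivative is non-singular, and the Atangana-Baleanu derivative replaces it with the non-singular, non-local Mittag-Leffler kernel, which is what makes the side-by-side comparison of the two generalized models informative.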

  1. CloVR-Comparative: automated, cloud-enabled comparative microbial genome sequence analysis pipeline.

    Science.gov (United States)

    Agrawal, Sonia; Arze, Cesar; Adkins, Ricky S; Crabtree, Jonathan; Riley, David; Vangala, Mahesh; Galens, Kevin; Fraser, Claire M; Tettelin, Hervé; White, Owen; Angiuoli, Samuel V; Mahurkar, Anup; Fricke, W Florian

    2017-04-27

    The benefit of increasing genomic sequence data to the scientific community depends on easy-to-use, scalable bioinformatics support. CloVR-Comparative combines commonly used bioinformatics tools into an intuitive, automated, and cloud-enabled analysis pipeline for comparative microbial genomics. CloVR-Comparative runs on annotated complete or draft genome sequences that are uploaded by the user or selected via a taxonomic tree-based user interface and downloaded from NCBI. CloVR-Comparative runs reference-free multiple whole-genome alignments to determine unique, shared and core coding sequences (CDSs) and single nucleotide polymorphisms (SNPs). Output includes short summary reports and detailed text-based results files, graphical visualizations (phylogenetic trees, circular figures), and a database file linked to the Sybil comparative genome browser. Data up- and download, pipeline configuration and monitoring, and access to Sybil are managed through the CloVR-Comparative web interface. CloVR-Comparative and Sybil are distributed as part of the CloVR virtual appliance, which runs on local computers or the Amazon EC2 cloud. Representative datasets (e.g. 40 draft and complete Escherichia coli genomes) can be processed in typical genomics projects, eliminating the need for on-site computational resources and expertise.

  2. Key Process Uncertainties in Soil Carbon Dynamics: Comparing Multiple Model Structures and Observational Meta-analysis

    Science.gov (United States)

    Sulman, B. N.; Moore, J.; Averill, C.; Abramoff, R. Z.; Bradford, M.; Classen, A. T.; Hartman, M. D.; Kivlin, S. N.; Luo, Y.; Mayes, M. A.; Morrison, E. W.; Riley, W. J.; Salazar, A.; Schimel, J.; Sridhar, B.; Tang, J.; Wang, G.; Wieder, W. R.

    2016-12-01

    Soil carbon (C) dynamics are crucial to understanding and predicting C cycle responses to global change and soil C modeling is a key tool for understanding these dynamics. While first order model structures have historically dominated this area, a recent proliferation of alternative model structures representing different assumptions about microbial activity and mineral protection is providing new opportunities to explore process uncertainties related to soil C dynamics. We conducted idealized simulations of soil C responses to warming and litter addition using models from five research groups that incorporated different sets of assumptions about processes governing soil C decomposition and stabilization. We conducted a meta-analysis of published warming and C addition experiments for comparison with simulations. Assumptions related to mineral protection and microbial dynamics drove strong differences among models. In response to C additions, some models predicted long-term C accumulation while others predicted transient increases that were counteracted by accelerating decomposition. In experimental manipulations, doubling litter addition did not change soil C stocks in studies spanning as long as two decades. This result agreed with simulations from models with strong microbial growth responses and limited mineral sorption capacity. In observations, warming initially drove soil C loss via increased CO2 production, but in some studies soil C rebounded and increased over decadal time scales. In contrast, all models predicted sustained C losses under warming. The disagreement with experimental results could be explained by physiological or community-level acclimation, or by warming-related changes in plant growth. In addition to the role of microbial activity, assumptions related to mineral sorption and protected C played a key role in driving long-term model responses. In general, simulations were similar in their initial responses to perturbations but diverged over
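
    The structural contrast described above can be made concrete with a toy pair of pool models: a first-order pool, whose steady state scales linearly with litter input, versus a microbially explicit pool, in which growing biomass accelerates decomposition and damps the long-term stock change. This is a schematic sketch with illustrative parameters, not any of the five models in the study.

        import numpy as np
        from scipy.integrate import solve_ivp

        I = 1.0                   # litter input
        k = 0.02                  # first-order decay rate (1/yr)
        vmax, km = 2.0, 200.0     # microbial uptake parameters
        eps, m = 0.4, 0.3         # carbon-use efficiency, microbial turnover (1/yr)

        def first_order(t, y):
            C, = y
            return [I - k * C]            # steady state C* = I/k, doubles if I doubles

        def microbial(t, y):
            C, B = y
            uptake = vmax * B * C / (km + C)
            return [I - uptake, eps * uptake - m * B]   # here C* is independent of I

        t_eval = np.linspace(0, 200, 201)
        fo = solve_ivp(first_order, (0, 200), [50.0], t_eval=t_eval)
        mi = solve_ivp(microbial, (0, 200), [50.0, 2.0], t_eval=t_eval)
        print(fo.y[0, -1], mi.y[0, -1])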

  3. COMPARATIVE ANALYSIS OF THE ORGANIZATIONAL MODELS IN ORGANIC FARMING

    Directory of Open Access Journals (Sweden)

    Alexandra MUSCĂNESCU

    2013-10-01

    Full Text Available Organic farms face many difficulties in ensuring the smooth organization of production, owing to climatic factors, crop sensitivity and the action of pests and diseases, but especially to the high cost of inputs, reduced subsidies and difficulties in obtaining fair prices on the market. Understanding how the organizational structure of the business can contribute to efficiency at the farm level is an important means of resolving these deficiencies. In this context, this paper aims to identify the characteristics of the organization of organic crop farms, starting from an interview-based analysis of two large crop-specialised farms in Tulcea and Calaraşi Counties. The information obtained through this method of investigation was translated into a SWOT analysis and served as the basis for comparison with information gathered from interviews at two organic farms in Scotland. The main conclusions highlight two types of organization systems, one without integration and another with supply-chain integration, very similar to the Scottish ones, but also a very obvious difference in the mentality of the farm owners: the Romanians focus on meeting the conditions for certification and keeping crops in organic status, while the Scots focus on finding new markets.

  4. rCAD: A Novel Database Schema for the Comparative Analysis of RNA.

    Science.gov (United States)

    Ozer, Stuart; Doshi, Kishore J; Xu, Weijia; Gutell, Robin R

    2011-12-31

    Beyond its direct involvement in protein synthesis with mRNA, tRNA, and rRNA, RNA is now being appreciated for its significance in the overall metabolism and regulation of the cell. Comparative analysis has been very effective in the identification and characterization of RNA molecules, including the accurate prediction of their secondary structure. We are developing an integrative scalable data management and analysis system, the RNA Comparative Analysis Database (rCAD), implemented with SQL Server, to support RNA comparative analysis. The platform-agnostic database schema of rCAD captures the essential relationships between the different dimensions of information for RNA comparative analysis datasets. The rCAD implementation enables a variety of comparative analysis manipulations with multiple integrated data dimensions for advanced RNA comparative analysis workflows. In this paper, we describe details of the rCAD schema design and illustrate its usefulness with two usage scenarios.

  5. Value Frameworks in Oncology: Comparative Analysis and Implications to the Pharmaceutical Industry.

    Science.gov (United States)

    Slomiany, Mark; Madhavan, Priya; Kuehn, Michael; Richardson, Sasha

    2017-07-01

    As the cost of oncology care continues to rise, composite value models that variably capture the diverse concerns of patients, physicians, payers, policymakers, and the pharmaceutical industry have begun to take shape. To review the capabilities and limitations of 5 of the most notable value frameworks in oncology that have emerged in recent years and to compare their relative value and application among the intended stakeholders. We compared the methodology of the American Society of Clinical Oncology (ASCO) Value Framework (version 2.0), the National Comprehensive Cancer Network Evidence Blocks, Memorial Sloan Kettering Cancer Center DrugAbacus, the Institute for Clinical and Economic Review Value Assessment Framework, and the European Society for Medical Oncology Magnitude of Clinical Benefit Scale, using a side-by-side comparative approach in terms of the input, scoring methodology, and output of each framework. In addition, we gleaned stakeholder insights about these frameworks and their potential real-world applications through dialogues with physicians and payers, as well as through secondary research and an aggregate analysis of previously published survey results. The analysis identified several framework-specific themes in their respective focus on clinical trial elements, breadth of evidence, evidence weighting, scoring methodology, and value to stakeholders. Our dialogues with physicians and our aggregate analysis of previous surveys revealed a varying level of awareness of, and use of, each of the value frameworks in clinical practice. For example, although the ASCO Value Framework appears nascent in clinical practice, physicians believe that the frameworks will be more useful in practice in the future as they become more established and as their outputs are more widely accepted. Along with patients and payers, who bear the burden of treatment costs, physicians and policymakers have waded into the discussion of defining value in oncology care, as well

  6. Comparative analysis of early ontogeny in Bursatella leachii and Aplysia californica

    Directory of Open Access Journals (Sweden)

    Zer Vue

    2014-12-01

    Full Text Available Opisthobranch molluscs exhibit fascinating body plans associated with the evolution of shell loss in multiple lineages. Sea hares in particular are interesting because Aplysia californica is a well-studied model organism that offers a large suite of genetic tools. Bursatella leachii is a related tropical sea hare that lacks a shell as an adult and therefore lends itself to comparative analysis with A. californica. We have established an enhanced culturing procedure for B. leachii in husbandry that enabled the study of shell formation and loss in this lineage with respect to A. californica life staging.

  7. Comparative Assessment of Two Vegetation Fractional Cover Estimating Methods and Their Impacts on Modeling Urban Latent Heat Flux Using Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2017-05-01

    Full Text Available Quantifying vegetation fractional cover (VFC) and assessing its role in heat flux modeling using medium-resolution remotely sensed data has received less attention than it deserves in heterogeneous urban regions. This study examined two approaches, the Normalized Difference Vegetation Index (NDVI)-derived and Multiple Endmember Spectral Mixture Analysis (MESMA)-derived methods, that are commonly used to map VFC based on Landsat imagery, in modeling surface heat fluxes in urban landscapes. For this purpose, two different heat flux models, the Two-Source Energy Balance (TSEB) model and the Pixel Component Arranging and Comparing Algorithm (PCACA) model, were adopted for model evaluation and analysis. A comparative analysis of the NDVI-derived and MESMA-derived VFCs showed that the latter achieved more accurate estimates in complex urban regions. When the two sources of VFCs were used as inputs to both the TSEB and PCACA models, MESMA-derived urban VFC produced more accurate urban heat fluxes (Bowen ratio and latent heat flux) relative to NDVI-derived urban VFC. Moreover, our study demonstrated that Landsat imagery-retrieved VFC exhibited greater uncertainty in obtaining urban heat fluxes for the TSEB model than for the PCACA model.
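
    The NDVI-derived VFC referred to above is commonly computed with a linear "dimidiate pixel" model, in which each pixel is treated as a mixture of bare soil and full vegetation. The endmember NDVI values below are illustrative assumptions (in practice they are taken from the image histogram); MESMA instead solves a constrained spectral unmixing per pixel with multiple candidate endmembers.

        import numpy as np

        def vfc_from_ndvi(ndvi, ndvi_soil=0.05, ndvi_veg=0.80):
            """Linear dimidiate pixel model; endmember values are assumptions."""
            fc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
            return np.clip(fc, 0.0, 1.0)   # VFC is a fraction in [0, 1]

        ndvi = np.array([0.02, 0.25, 0.55, 0.85])
        print(vfc_from_ndvi(ndvi))         # -> [0.   0.27 0.67 1.  ]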

  8. Comparative analysis of customer satisfaction in postal and banking services

    Directory of Open Access Journals (Sweden)

    Ratković Milijanka

    2017-01-01

    Full Text Available The goal of this study is a comparative analysis of customer satisfaction with postal and banking services in Serbia. In addition, the paper provides guidance on how the managements of the Post Office and the Bank should behave on the market. The survey was conducted throughout the whole Serbian territory. The subject of the research is to measure the perception of postal and banking services, in order to assess the quality of services and the impact of expectations on the level of perceived quality. Testing and final conclusions about the level of quality of postal and banking services were carried out on the basis of the existing literature and a modified SERVQUAL model.

  9. Stochastic modeling of friction force and vibration analysis of a mechanical system using the model

    International Nuclear Information System (INIS)

    Kang, Won Seok; Choi, Chan Kyu; Yoo, Hong Hee

    2015-01-01

    The squeal noise generated by a disk brake, or the chatter that occurs in a machine tool, primarily results from friction-induced vibration. Since friction-induced vibration is usually accompanied by abrasion and reduced lifespan of mechanical parts, it is necessary to develop a reliable analysis model by which friction-induced vibration phenomena can be accurately analyzed. The original Coulomb friction model, and the modified Coulomb friction model employed in most commercial programs, use deterministic friction coefficients. However, observing friction phenomena between two contact surfaces, one may find that friction coefficients keep changing due to the unevenness of the contact surfaces, temperature, lubrication and humidity. Therefore, in this study, friction coefficients are modeled as random parameters that keep changing during the motion of a mechanical system subject to friction forces. The integrity of the proposed stochastic friction model was validated by comparing the analysis results obtained with the proposed model against experimental results.
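
    The core idea, propagating a random friction coefficient through a dynamic model by Monte Carlo, can be sketched with a one-degree-of-freedom mass on a moving belt. All system and distribution parameters below are hypothetical, and the simple constant-coefficient Coulomb law here merely stands in for the paper's stochastic friction model.

        import numpy as np

        m_mass, k_stiff, c_damp = 1.0, 100.0, 0.4   # mass, stiffness, damping
        v_belt, g, dt, n_steps = 0.5, 9.81, 1e-3, 5000
        rng = np.random.default_rng(1)

        def peak_displacement(mu):
            """Semi-implicit Euler with a constant Coulomb coefficient mu per run."""
            x, v, peak = 0.0, 0.0, 0.0
            for _ in range(n_steps):
                friction = -mu * m_mass * g * np.sign(v - v_belt)
                a = (friction - k_stiff * x - c_damp * v) / m_mass
                v += a * dt
                x += v * dt
                peak = max(peak, abs(x))
            return peak

        # Randomize mu across runs to mimic surface unevenness, temperature, humidity
        mus = rng.normal(0.35, 0.05, size=100).clip(0.05, None)
        peaks = np.array([peak_displacement(mu) for mu in mus])
        print(f"peak displacement: mean={peaks.mean():.4f}, std={peaks.std():.4f}")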

  10. Comparison of composite rotor blade models: A coupled-beam analysis and an MSC/NASTRAN finite-element model

    Science.gov (United States)

    Hodges, Robert V.; Nixon, Mark W.; Rehfield, Lawrence W.

    1987-01-01

    A methodology was developed for the structural analysis of composite rotor blades. This coupled-beam analysis is relatively simple to use compared with alternative analysis techniques. The beam analysis was developed for thin-wall single-cell rotor structures and includes the effects of elastic coupling. This paper demonstrates the effectiveness of the new composite-beam analysis method through comparison of its results with those of an established baseline analysis technique. The baseline analysis is an MSC/NASTRAN finite-element model built up from anisotropic shell elements. Deformations are compared for three linear static load cases of centrifugal force at design rotor speed, applied torque, and lift for an ideal rotor in hover. A D-spar designed to twist under axial loading is the subject of the analysis. Results indicate the coupled-beam analysis is well within engineering accuracy.

  11. Using multi-criteria analysis of simulation models to understand complex biological systems

    Science.gov (United States)

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...

  12. Recent results on the spatiotemporal modelling and comparative analysis of Black Death and bubonic plague epidemics

    Science.gov (United States)

    Christakos, G.; Olea, R.A.; Yu, H.-L.

    2007-01-01

    Background: This work demonstrates the importance of spatiotemporal stochastic modelling in constructing maps of major epidemics from fragmentary information, assessing population impacts, searching for possible etiologies, and performing comparative analysis of epidemics. Methods: Based on the theory previously published by the authors and incorporating new knowledge bases, informative maps of the composite space-time distributions were generated for important characteristics of two major epidemics: Black Death (14th century Western Europe) and bubonic plague (19th-20th century Indian subcontinent). Results: The comparative spatiotemporal analysis of the epidemics led to a number of interesting findings: (1) the two epidemics exhibited certain differences in their spatiotemporal characteristics (correlation structures, trends, occurrence patterns and propagation speeds) that need to be explained by means of an interdisciplinary effort; (2) geographical epidemic indicators confirmed in a rigorous quantitative manner the partial findings of isolated reports and time series that Black Death mortality was two orders of magnitude higher than that of bubonic plague; (3) modern bubonic plague is a rural disease hitting harder the small villages in the countryside whereas Black Death was a devastating epidemic that indiscriminately attacked large urban centres and the countryside, and while the epidemic in India lasted uninterruptedly for five decades, in Western Europe it lasted three and a half years; (4) the epidemics had reverse areal extension features in response to annual seasonal variations. Temperature increase at the end of winter led to an expansion of infected geographical area for Black Death and a reduction for bubonic plague, reaching a climax at the end of spring when the infected area in Western Europe was always larger than in India. Conversely, without exception, the infected area during winter was larger for the Indian bubonic plague; (5) during the

  13. Robust Linear Models for Cis-eQTL Analysis.

    Science.gov (United States)

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
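
    A minimal sketch of the comparison, using the statsmodels library: an ordinary least-squares fit next to a robust (Huber M-estimator) fit of a single cis-eQTL, on simulated genotype/expression data with heavy-tailed noise. The data and effect size are hypothetical.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 300
        genotype = rng.binomial(2, 0.3, size=n).astype(float)        # allelic dosage 0/1/2
        expression = 0.25 * genotype + rng.standard_t(df=3, size=n)  # heavy-tailed errors

        X = sm.add_constant(genotype)
        ols = sm.OLS(expression, X).fit()
        rlm = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()

        print(f"OLS beta = {ols.params[1]:.3f} (p = {ols.pvalues[1]:.3g})")
        print(f"RLM beta = {rlm.params[1]:.3f} (p = {rlm.pvalues[1]:.3g})")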

  14. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    Science.gov (United States)

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

    The stability and adaptability of models for near-infrared spectra qualitative analysis were studied. Separate modeling can significantly improve the stability and adaptability of a model, but its ability to improve adaptability is limited. Joint modeling improves not only the adaptability of the model but also its stability; at the same time, compared with separate modeling, it shortens the modeling time, reduces the modeling workload, extends the model's term of validity, and improves modeling efficiency. The model adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet application requirements, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances recognition. The model stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, demonstrating good application value.

  15. Comparative analysis of the modified enclosed energy metric for self-focusing holograms from digital lensless holographic microscopy.

    Science.gov (United States)

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2015-06-01

    A comparative analysis of the performance of the modified enclosed energy (MEE) method for self-focusing of holograms recorded with digital lensless holographic microscopy is presented. Although the MEE method has been published previously, no extended analysis of its performance has been reported. We have tested the MEE in terms of the minimum axial distance allowed between the reconstructed holograms in the search for the focal plane, and the elapsed time to obtain the focused image. These parameters have been compared with those of some methods already reported in the literature. The MEE achieves better results in terms of self-focusing quality, but at a higher computational cost. Despite its longer processing time, the method remains within a time frame that keeps it technologically attractive. Both modeled and experimental holograms have been utilized in this work to perform the comparative study.

  16. Standardization: using comparative maintenance costs in an economic analysis

    OpenAIRE

    Clark, Roger Nelson

    1987-01-01

    Approved for public release; distribution is unlimited. This thesis investigates the use of comparative maintenance costs of functionally interchangeable equipment in similar U.S. Navy shipboard applications in an economic analysis of standardization. The economics of standardization, life-cycle costing, and the Navy 3-M System are discussed in general. An analysis of 3-M System maintenance costs for a selected equipment type, diesel engines, is conducted. The potential use of comparative ma...

  17. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
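
    The rescaling statistic described above reduces to two lines of code once a relationship matrix is in hand; the 4x4 matrix and variance component below are toy values.

        import numpy as np

        K = np.array([[1.00, 0.25, 0.10, 0.05],
                      [0.25, 1.00, 0.20, 0.10],
                      [0.10, 0.20, 1.05, 0.50],
                      [0.05, 0.10, 0.50, 1.05]])   # toy relationship matrix

        Dk = np.mean(np.diag(K)) - np.mean(K)   # avg self-relationship minus avg relationship
        sigma2_hat = 2.0                        # variance component from the mixed model (toy)
        sigma2_ref = sigma2_hat * Dk            # expected genetic variance in the reference set
        print(Dk, sigma2_ref)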

  18. A comparative analysis of Serbian phonemes: Linear and non-linear models/Uporedna analiza fonema srpskog jezika: linearni i nelinearni modeli

    Directory of Open Access Journals (Sweden)

    Danijela D. Protić

    2014-10-01

    Full Text Available This paper presents the results of a comparative analysis of Serbian phonemes. Vowels are characterized by quasi-periodicity and clearly visible formants; non-vowels are short-term quasi-periodic signals with a low-power excitation signal. For the purpose of this work, speech production systems were modelled with linear AR models and corresponding non-linear models based on feed-forward neural networks with one hidden layer. Sum-squared-error minimization and the back-propagation algorithm were used to train the models. The selection of the optimal model was based on two stopping criteria: the normalized mean-squared test error and the final prediction error. The Levenberg-Marquardt method was used for the Hessian matrix calculation, and the Optimal Brain Surgeon method was used for pruning. The generalization properties are presented in the time and frequency domains, and cross-correlation analysis was used to establish the relationships among the signals at the hidden-layer neuron outputs.
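
    The linear half of the comparison, fitting an AR model to a quasi-stationary speech frame, can be sketched by least squares; the synthetic AR(2) "frame" below stands in for a real phoneme recording, and the model order and coefficients are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        a1, a2 = 1.8, -0.95                      # stable AR(2) with a vowel-like resonance
        frame = np.zeros(400)
        for t in range(2, 400):                  # synthesize the frame
            frame[t] = a1 * frame[t-1] + a2 * frame[t-2] + 0.1 * rng.standard_normal()

        def fit_ar(x, p):
            """Least-squares AR(p): x[t] ~ a1*x[t-1] + ... + ap*x[t-p]."""
            X = np.column_stack([x[p - i - 1: len(x) - i - 1] for i in range(p)])
            a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
            return a

        print(fit_ar(frame, p=2))   # close to [1.8, -0.95]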

  19. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
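
    As an illustration of the two-step screening-then-quantification workflow described above, the sketch below uses the SALib library on a toy three-parameter function. SALib is our choice for the sketch, not the paper's tool; the paper couples the Sobol method to a response-surface meta-model (RSMSobol) precisely to avoid running the full hydrological model at every sample point.

        import numpy as np
        from SALib.sample import morris as morris_sample
        from SALib.analyze import morris as morris_analyze
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["k", "c", "b"],            # hypothetical parameter names
            "bounds": [[0.1, 1.0], [0.0, 0.5], [0.5, 2.0]],
        }

        def model(X):
            # toy stand-in for the hydrological model's objective output
            return X[:, 0] ** 2 + 5.0 * X[:, 1] + 0.1 * X[:, 2]

        # Step 1: Morris screening -- qualitative ranking by elementary effects
        X_m = morris_sample.sample(problem, N=100, num_levels=4)
        Si_m = morris_analyze.analyze(problem, X_m, model(X_m), num_levels=4)
        print(dict(zip(problem["names"], Si_m["mu_star"])))

        # Step 2: variance-based Sobol indices for the retained parameters
        X_s = saltelli.sample(problem, 1024)
        Si_s = sobol.analyze(problem, model(X_s))
        print(dict(zip(problem["names"], Si_s["S1"])))   # first-order indices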

  20. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
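
    One of the classic techniques the book covers, static (Guyan) condensation, reduces a stiffness/mass pair onto retained "master" degrees of freedom. A minimal sketch on a toy 4-DOF spring-mass chain (exact for static loads applied at the masters, approximate for dynamics):

        import numpy as np

        def guyan(K, M, masters):
            """Reduce (K, M) to master DOFs via T = [I; -Kss^-1 Ksm]."""
            n = K.shape[0]
            slaves = [i for i in range(n) if i not in masters]
            Kms = K[np.ix_(masters, slaves)]
            Kss = K[np.ix_(slaves, slaves)]
            T = np.vstack([np.eye(len(masters)), -np.linalg.solve(Kss, Kms.T)])
            P = np.zeros((n, len(masters)))          # map back to original DOF order
            P[masters + slaves, :] = T
            return P.T @ K @ P, P.T @ M @ P

        k = 1000.0
        K = k * np.array([[ 2., -1.,  0.,  0.],
                          [-1.,  2., -1.,  0.],
                          [ 0., -1.,  2., -1.],
                          [ 0.,  0., -1.,  1.]])
        M = np.eye(4)
        Kr, Mr = guyan(K, M, masters=[0, 3])
        print(np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)))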

  1. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to handle various types of data and complex relationships among them. A rich variety of advanced and recent statistical modelling approaches is available, mostly in open-source software (one of which is R). However, these advanced modelling approaches are not very friendly to novice R users, since they are based on programming scripts or a command-line interface. Our research aims to develop a web interface (based on R and Shiny) so that the most recent and advanced statistical models are readily available, accessible, and applicable on the web. We previously built interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM, and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis tools, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and makes it easier to compare models in order to find the most appropriate one for the data.

  2. Turbulent diffusion modelling for windflow and dispersion analysis

    International Nuclear Information System (INIS)

    Bartzis, J.G.

    1988-01-01

    Simple but reliable models of turbulent diffusion for windflow and atmospheric dispersion analysis are a necessity today, considering the relatively high demand in computer time and costs for such analyses, arising mainly from the often large solution domains needed, the terrain complexity and the transient nature of the phenomena. In accident consequence assessment there is often a need for a relatively large number of cases to be analysed, further increasing the computer time and costs. Within the framework of the search for relatively simple and universal eddy viscosity/diffusivity models, a new three-dimensional non-isotropic model is proposed, applicable to any domain complexity and any atmospheric stability conditions. The model utilizes the transport equation for turbulent kinetic energy but introduces a new approach to effective length scale estimation based on the global flow characteristics and local atmospheric stability. The model is discussed in detail and predictions are given for the flow field and boundary layer thickness. The predictions are compared with experimental data, with satisfactory results.
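
    The closure the abstract describes ties the eddy viscosity to the turbulent kinetic energy k and an effective length scale ℓ. A standard one-equation (k–ℓ) form of this idea — a sketch of the general family, not necessarily the author's exact formulation — reads:

        \nu_t = c_\mu \, k^{1/2} \, \ell,
        \qquad
        \frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
          = \frac{\partial}{\partial x_j}\!\left( \frac{\nu_t}{\sigma_k} \frac{\partial k}{\partial x_j} \right)
          + P_k + G_k - c_D \frac{k^{3/2}}{\ell},

    where P_k is shear production, G_k is buoyancy production or destruction (carrying the dependence on local atmospheric stability), and c_\mu, \sigma_k and c_D are model constants. The novelty claimed above lies in how ℓ is estimated from the global flow characteristics and local stability.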

  3. Comparative proteomics analysis of oral cancer cell lines: identification of cancer associated proteins

    Science.gov (United States)

    2014-01-01

    Background A limiting factor in performing proteomics analysis on cancerous cells is the difficulty in obtaining sufficient amounts of starting material. Cell lines can be used as a simplified model system for studying changes that accompany tumorigenesis. This study used two-dimensional gel electrophoresis (2DE) to compare the whole cell proteome of oral cancer cell lines vs normal cells in an attempt to identify cancer associated proteins. Results Three primary cell cultures of normal cells with a limited lifespan without hTERT immortalization have been successfully established. 2DE was used to compare the whole cell proteome of these cells with that of three oral cancer cell lines. Twenty four protein spots were found to have changed in abundance. MALDI TOF/TOF was then used to determine the identity of these proteins. Identified proteins were classified into seven functional categories – structural proteins, enzymes, regulatory proteins, chaperones and others. IPA core analysis predicted that 18 proteins were related to cancer with involvements in hyperplasia, metastasis, invasion, growth and tumorigenesis. The mRNA expressions of two proteins – 14-3-3 protein sigma and Stress-induced-phosphoprotein 1 – were found to correlate with the corresponding proteins’ abundance. Conclusions The outcome of this analysis demonstrated that a comparative study of whole cell proteome of cancer versus normal cell lines can be used to identify cancer associated proteins. PMID:24422745

  4. Comparative Party System Analysis in Central and Eastern Europe: the Case of the Baltic States

    Directory of Open Access Journals (Sweden)

    Tõnis Saarts

    2011-11-01

    Full Text Available The nature of the party systems in Central and Eastern Europe (CEE) has puzzled many scholars. The high instability of the party systems and their specific evolution make the application of theoretical models designed predominantly for Western European party politics problematic. The paper puts forward the argument that we should further elaborate and specify the models for a small-N comparative party system analysis in CEE countries and incorporate some region-specific components into the framework. The essential dimensions included in the proposed comparative framework are as follows: (1) the stability of the party system, (2) party system fragmentation, (3) parties' penetration into society, (4) the ideology and origins of the major parties, (5) the dominant cleavage constellations framing the party competition, and (6) the strength of the party organizations. The above-mentioned dimensions are expected to capture the most important aspects that differentiate party systems in general, and each dimension is complemented with specific additional variables suitable for party system analysis in CEE in particular. The framework will be tested on the Baltic States, whose party systems are often regarded as very similar to each other. However, the analysis will demonstrate that, based on the above-mentioned framework, very significant and noteworthy differences are revealed.

  5. Comparative analysis among deterministic and stochastic collision damage models for oil tanker and bulk carrier reliability

    Directory of Open Access Journals (Sweden)

    A. Campanile

    2018-01-01

    Full Text Available The incidence of collision damage models on oil tanker and bulk carrier reliability is investigated considering the IACS deterministic model against GOALDS/IMO database statistics for collision events, substantiating the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of incremental-iterative method, to account for neutral axis rotation and equilibrium of horizontal bending moment, due to cross-section asymmetry after collision events. Reliability analysis is performed, to investigate the incidence of collision penetration depth and height statistical properties on hull girder sagging/hogging failure probabilities. Besides, the incidence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.

  6. Structural modeling and docking studies of ribose 5-phosphate isomerase from Leishmania major and Homo sapiens: a comparative analysis for Leishmaniasis treatment.

    Science.gov (United States)

    Capriles, Priscila V S Z; Baptista, Luiz Phillippe R; Guedes, Isabella A; Guimarães, Ana Carolina R; Custódio, Fabio L; Alves-Ferreira, Marcelo; Dardenne, Laurent E

    2015-02-01

    Leishmaniases are caused by protozoa of the genus Leishmania and are considered the second-highest cause of death worldwide by parasitic infection. The drugs available for treatment in humans are becoming ineffective mainly due to parasite resistance; therefore, it is extremely important to develop a new chemotherapy against these parasites. A crucial aspect of drug design development is the identification and characterization of novel molecular targets. In this work, through an in silico comparative analysis between the genomes of Leishmania major and Homo sapiens, the enzyme ribose 5-phosphate isomerase (R5PI) was indicated as a promising molecular target. R5PI is an important enzyme that acts in the pentose phosphate pathway and catalyzes the interconversion of d-ribose-5-phosphate (R5P) and d-ribulose-5-phosphate (5RP). R5PI activity is found in two analogous groups of enzymes called RpiA (found in H. sapiens) and RpiB (found in L. major). Here, we present the first report of the three-dimensional (3D) structures and active sites of RpiB from L. major (LmRpiB) and RpiA from H. sapiens (HsRpiA). Three-dimensional models were constructed by applying a hybrid methodology that combines comparative and ab initio modeling techniques, and the active site was characterized based on docking studies of the substrates R5P (furanose and ring-opened forms) and 5RP. Our comparative analyses show that these proteins are structural analogs and that distinct residues participate in the interconversion of R5P and 5RP. We propose two distinct reaction mechanisms for the reversible isomerization of R5P to 5RP, which is catalyzed by LmRpiB and HsRpiA. We expect that the present results will be important in guiding future molecular modeling studies to develop new drugs that are specially designed to inhibit the parasitic form of the enzyme without significant effects on the human analog. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Comparative Study of Lectin Domains in Model Species: New Insights into Evolutionary Dynamics

    Directory of Open Access Journals (Sweden)

    Sofie Van Holle

    2017-05-01

    Lectins are present throughout the plant kingdom and are reported to be involved in diverse biological processes. In this study, we provide a comparative analysis of the lectin families from model species in a phylogenetic framework. The analysis focuses on the different plant lectin domains identified in five representative core angiosperm genomes (Arabidopsis thaliana, Glycine max, Cucumis sativus, Oryza sativa ssp. japonica and Oryza sativa ssp. indica). The genomes were screened for genes encoding lectin domains using a combination of Basic Local Alignment Search Tool (BLAST), hidden Markov models, and InterProScan analysis. Additionally, phylogenetic relationships were investigated by constructing maximum likelihood phylogenetic trees. The results demonstrate that the majority of the lectin families are present in each of the species under study. Domain organization analysis showed that most identified proteins are multi-domain proteins, owing to the modular rearrangement of protein domains during evolution. Most of these multi-domain proteins are widespread, while others display a lineage-specific distribution. Furthermore, the phylogenetic analyses reveal that some lectin families evolved to be similar to the phylogeny of the plant species, while others share a closer evolutionary history based on the corresponding protein domain architecture. Our results yield insights into the evolutionary relationships and functional divergence of plant lectins.
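
    A domain screen of this kind can be sketched as follows; the HMM profile name, file paths and E-value cutoff are hypothetical, and a local HMMER installation plus Biopython are assumed.

    ```python
    import subprocess
    from Bio import SearchIO

    # Hypothetical inputs: a Pfam-style HMM for a lectin domain and a
    # proteome FASTA file; both names are placeholders.
    hmm_file = "Lectin_legB.hmm"
    proteome = "Athaliana_proteins.fasta"

    # Run hmmsearch (HMMER must be installed); E-value cutoff illustrative.
    subprocess.run(
        ["hmmsearch", "--tblout", "hits.tbl", "-E", "1e-5", hmm_file, proteome],
        check=True,
    )

    # Collect protein IDs whose sequences match the lectin domain profile.
    hits = [hit.id for result in SearchIO.parse("hits.tbl", "hmmer3-tab")
            for hit in result]
    print(f"{len(hits)} candidate lectin-domain proteins")
    ```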

  8. Verify Super Double-Heterogeneous Spherical Lattice Model for Equilibrium Fuel Cycle Analysis AND HTR Spherical Super Lattice Model for Equilibrium Fuel Cycle Analysis

    International Nuclear Information System (INIS)

    Gray S. Chang

    2005-01-01

    The advanced High Temperature gas-cooled Reactors (HTRs) currently being developed are able to achieve a simplification of safety through reliance on innovative features and passive systems. One of the innovative features in these HTRs is reliance on ceramic-coated fuel particles to retain the fission products even under extreme accident conditions. Traditionally, the effect of the random fuel kernel distribution in the fuel pebble/block is addressed through the use of the Dancoff correction factor in the resonance treatment. However, the Dancoff correction factor is a function of burnup and fuel kernel packing factor, which requires that the Dancoff correction factor be updated during Equilibrium Fuel Cycle (EqFC) analysis. An advanced KbK-sph model and a whole pebble super lattice model (PSLM) have been developed, which can address and update the burnup-dependent Dancoff effect during the EqFC analysis. The pebble homogeneous lattice model (HLM) is verified against the burnup characteristics of the double-heterogeneous KbK-sph lattice model. This study summarizes and compares the KbK-sph lattice model and HLM burnup results. Finally, we discuss the Monte Carlo coupling with the fuel depletion and buildup code ORIGEN-2 as a fuel burnup analysis tool, and its PSLM-calculated results for the HTR EqFC burnup analysis.

  9. Public transportation systems: Comparative analysis of quality of service

    Energy Technology Data Exchange (ETDEWEB)

    Negri, L.; Florio, L. (Rome Univ. La Sapienza (Italy). Facolta' di Ingegneria, Dipt. di Idraulica, Trasporti e Strade)

    The evaluation, choice and design of public transportation systems for urban areas require, in addition to consolidated use parameters, other dimensions essential to qualitative supply-demand realignment, e.g.: 'door-to-door time', which allows system differentiation in terms of commercial velocity, frequency and length of route; technical productivity, expressed as 'transport power' and 'specific transport power'; and 'system/service quality'. By means of surveys, these factors can be incorporated into suitable mathematical models representing, in a complete and reliable way, all the functions which a given system actually delivers and those which its users expect it to deliver. This paper illustrates the application of these concepts in a comparative analysis of different public transportation options - light rail rapid transit, tram and bus networks.

  10. Analysis of WWER-440 and PWR RPV welds surveillance data to compare irradiation damage evolution

    Energy Technology Data Exchange (ETDEWEB)

    Debarberis, L. [Joint Research Centre of the European Commission, Institute for Energy, P.O. Box 2, 1755 ZG Petten (Netherlands)]. E-mail: luigi.debarberis@cec.eu.int; Acosta, B. [Joint Research Centre of the European Commission, Institute for Energy, P.O. Box 2, 1755 ZG Petten (Netherlands)]. E-mail: beatriz.acosta-iborra@jrc.nl; Zeman, A. [Joint Research Centre of the European Commission, Institute for Energy, P.O. Box 2, 1755 ZG Petten (Netherlands); Sevini, F. [Joint Research Centre of the European Commission, Institute for Energy, P.O. Box 2, 1755 ZG Petten (Netherlands); Ballesteros, A. [Tecnatom, Avd. Montes de Oca 1, San Sebastian de los Reyes, E-28709 Madrid (Spain); Kryukov, A. [Russian Research Centre Kurchatov Institute, Kurchatov Square 1, 123182 Moscow (Russian Federation); Gillemot, F. [AEKI Atomic Research Institute, Konkoly Thege M. ut 29-33, 1121 Budapest (Hungary); Brumovsky, M. [NRI, Nuclear Research Institute, Husinec-Rez 130, 25068 Rez (Czech Republic)

    2006-04-15

    It is known that for Russian-type and Western water reactor pressure vessel steels there is a similar degradation in mechanical properties during equivalent neutron irradiation. Available surveillance results from WWER and PWR vessels are used in this article to compare the evolution of irradiation damage for the different reactor pressure vessel welds. The analysis is done through the semi-mechanistic model for radiation embrittlement developed by JRC-IE. A consistency analysis with BWR vessel materials and model alloys has also been performed within this study. Globally, the two families of studied materials follow similar trends regarding the evolution of irradiation damage. Moreover, in the high-fluence range typical of WWER operation, the radiation stability of these vessels is greater than that foreseen for PWR.

  11. Gene Ontology-Based Analysis of Zebrafish Omics Data Using the Web Tool Comparative Gene Ontology.

    Science.gov (United States)

    Ebrahimie, Esmaeil; Fruzangohar, Mario; Moussavi Nik, Seyyed Hani; Newman, Morgan

    2017-10-01

    Gene Ontology (GO) analysis is a powerful tool in systems biology, which uses a defined nomenclature to annotate genes/proteins within three categories: "Molecular Function," "Biological Process," and "Cellular Component." GO analysis can assist in revealing functional mechanisms underlying observed patterns in transcriptomic, genomic, and proteomic data. The already extensive and increasing use of zebrafish for modeling genetic and other diseases highlights the need to develop a GO analytical tool for this organism. The web tool Comparative GO was originally developed for GO analysis of bacterial data in 2013 (www.comparativego.com). We have now upgraded and elaborated this web tool for analysis of zebrafish genetic data using GOs and annotations from the Gene Ontology Consortium.
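
    The enrichment statistics that GO tools of this kind typically report can be computed with a hypergeometric test; a minimal sketch with invented counts:

    ```python
    from scipy.stats import hypergeom

    # Illustrative numbers: M annotated genes in the zebrafish background,
    # n of them carrying a given GO term, a study set of N genes of which
    # k carry the term.
    M, n, N, k = 20_000, 150, 300, 12

    # P(X >= k): chance of seeing at least k term-annotated genes at random.
    p_value = hypergeom.sf(k - 1, M, n, N)
    print(f"Enrichment p-value for the GO term: {p_value:.3e}")
    ```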

  12. Simplified model for DNB analysis

    International Nuclear Information System (INIS)

    Silva Filho, E.

    1979-08-01

    In a pressurized water reactor (PWR), the operating power is restricted by the possibility of the occurrence of departure from nucleate boiling (DNB) in the hottest channel of the core. The present work proposes a simplified model that analyses the thermal-hydraulic conditions of the coolant in the hottest channel of PWRs with the objective of evaluating DNB in this channel. For this, coupling between the hot channel and typical nominal channels is assumed, imposing the existence of a cross flow between these channels such that a uniform axial pressure distribution results along the channels. The model is applied to the Angra-I reactor and the results are compared with those of the Final Safety Analysis Report (FSAR) obtained by Westinghouse through the THINC program, and are considered satisfactory.
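
    A minimal sketch of the resulting DNB check along the hot channel follows; the heat flux profile and the critical-heat-flux correlation are placeholders, whereas an analysis such as THINC's would use a validated correlation (e.g., W-3).

    ```python
    import numpy as np

    x = np.linspace(0.0, 3.6, 37)                 # axial positions [m]
    q_local = 1.2e6 * np.sin(np.pi * x / 3.6)     # local heat flux [W/m^2]
    q_crit = 2.5e6 - 2.0e5 * x                    # placeholder CHF [W/m^2]

    # DNB ratio: margin between critical and local heat flux at each node.
    dnbr = q_crit / np.maximum(q_local, 1.0)      # avoid division by zero
    print(f"Minimum DNBR = {dnbr.min():.2f} at x = {x[np.argmin(dnbr)]:.2f} m")
    ```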

  13. Modeling time-to-event (survival) data using classification tree analysis.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (i.e., easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
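
    As a point of reference for the comparison, the Cox regression baseline can be fitted in a few lines, for example with the lifelines package; the toy data and column names below are invented.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Toy censored data: follow-up time, event indicator (0 = censored)
    # and one covariate; all values are illustrative.
    df = pd.DataFrame({
        "time":  [5, 8, 12, 3, 9, 14, 7, 11, 2, 6],
        "event": [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],
        "age":   [62, 55, 70, 48, 66, 59, 73, 50, 68, 61],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()  # hazard ratios a CTA model can be compared against
    ```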

  14. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
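
    The role played by the GRESS/ADGEN computer-calculus systems, computing model derivatives analytically rather than by finite differences, is played today by automatic differentiation; a minimal sketch with JAX and an invented two-parameter model:

    ```python
    import jax
    import jax.numpy as jnp

    # Toy model response y = f(p) standing in for a large simulation code.
    def model(p):
        k, q = p
        return q * jnp.exp(-k) + k * q ** 2

    p0 = jnp.array([0.5, 2.0])            # nominal parameter values
    sensitivities = jax.grad(model)(p0)   # dy/dk, dy/dq at the nominal point
    print(sensitivities)
    ```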

  15. A Comparative Study Of Stock Price Forecasting Using Nonlinear Models

    Directory of Open Access Journals (Sweden)

    Diteboho Xaba

    2017-03-01

    This study compared the in-sample forecasting accuracy of three nonlinear forecasting models, namely the Smooth Transition Regression (STR) model, the Threshold Autoregressive (TAR) model and the Markov-switching Autoregressive (MS-AR) model. Nonlinearity tests were used to confirm the validity of the assumptions of the study. The study used the model selection criterion SBC to select the optimal lag order and the appropriate models. The Mean Square Error (MSE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) served as the error measures in evaluating the forecasting ability of the models. The MS-AR models proved to perform well, with lower error measures compared to the LSTR and TAR models in most cases.
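
    A sketch of fitting the MS-AR model and computing the in-sample error measures, using statsmodels on a simulated two-regime series (all parameters illustrative):

    ```python
    import numpy as np
    from statsmodels.tsa.api import MarkovAutoregression

    # Simulated two-regime return series standing in for the stock data.
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0.0, 0.5, 200), rng.normal(0.3, 1.5, 200)])

    # Two-regime MS-AR(1); regime count and lag order would be chosen by SBC.
    res = MarkovAutoregression(y, k_regimes=2, order=1,
                               switching_variance=True).fit()

    # In-sample error measures used to compare the competing models.
    fitted = res.predict()                 # one-step-ahead in-sample fit
    resid = y[-len(fitted):] - fitted
    mse = np.mean(resid ** 2)
    print(f"MSE={mse:.4f}  RMSE={np.sqrt(mse):.4f}  "
          f"MAE={np.mean(np.abs(resid)):.4f}")
    ```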

  16. Determinants of Banking Credit Default in Indonesia: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Muhammad Imaduddin

    2008-08-01

    This study aims to analyze the determinants of Islamic banking credit default compared with conventional banking in Indonesia. The study utilized time-series analysis, for which the ordinary least squares method is adopted; 40 monthly observations from January 2003 until April 2006 are used. The study is divided into two models, namely the Islamic banking model and the conventional banking model. The values of non-performing financing (NPF) in Islamic banking and non-performing loans (NPL) in conventional banking are treated as the dependent variables. The results showed that the two-month lag of non-performing financing (NPF), total assets (ASSET), the amount of third-party funds (TPF), the one-month lag of total financing (DFIN), and growth of gross domestic product (GDPG) have a significant impact on the ratio of non-performing financing (NPF) in Islamic banking. Meanwhile, the three-month lag of non-performing loans (DDDNPL), total assets (CASSET), the three-month and two-month lags of total loans (DDDCRED and DDCRED), the inter-bank money market (PUAB), and growth of gross domestic product (GDPG) significantly influence the ratio of non-performing loans (NPL) in conventional banking. The results also implied that the general election in 2004 had a significant influence on the ratio of non-performing financing (NPF) in Islamic banking. Even though from the outset Islamic banking seems to perform better than conventional banking by having a relatively low NPF, this study has found the opposite. Albeit Islamic banking showed good long-run as well as short-run dynamics among all variables in the beginning, after modifying the model into an autoregressive one in the main analysis, results showed that conventional banking performs better than Islamic banking, with a higher coefficient of determination. In this regard, we cannot assume that Islamic banking is performing poorly in managing credit default problems. This is because the result
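
    The lag structure described in the abstract can be estimated by ordinary least squares as sketched below; the variable names mirror the abstract, but the monthly data are synthetic.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic monthly series standing in for the study's variables.
    rng = np.random.default_rng(1)
    n = 40
    df = pd.DataFrame({
        "npf":   rng.normal(4.0, 0.5, n),    # non-performing financing ratio
        "asset": rng.normal(100, 10, n),     # total assets
        "tpf":   rng.normal(80, 8, n),       # third-party funds
        "fin":   rng.normal(60, 6, n),       # total financing
        "gdpg":  rng.normal(5.0, 1.0, n),    # GDP growth
    })

    # Build the lag structure (two-month lag of NPF, one-month lag of
    # financing), then estimate by ordinary least squares.
    X = pd.DataFrame({
        "npf_lag2": df["npf"].shift(2),
        "asset":    df["asset"],
        "tpf":      df["tpf"],
        "fin_lag1": df["fin"].shift(1),
        "gdpg":     df["gdpg"],
    }).dropna()
    y = df.loc[X.index, "npf"]

    res = sm.OLS(y, sm.add_constant(X)).fit()
    print(res.summary())
    ```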

  17. Stress distribution patterns of implant supported overdentures-analog versus finite element analysis: A comparative in-vitro study

    Directory of Open Access Journals (Sweden)

    Soumyadev Satpathy

    2015-01-01

    Aims and Objectives: The aim of this study was to assess and compare the load transfer characteristics of Ball/O-ring and Bar/Clip attachment systems in implant-supported overdentures using analog and finite element analysis models. Methodology: For the analog part of the study, a castable bar was used for the bar and clip attachment, and a metallic housing with a rubber O-ring component was used for the ball/O-ring attachment. The stress on the implant surface was measured using the strain-gauge technique. For the finite element analysis, the models were fabricated and loads were applied in the same manner as in the analog study. Results: The difference between the two attachment systems was found to be statistically significant (P < 0.001). Conclusion: The Ball/O-ring attachment system transmitted less stress to the implants on the non-loading side than the Bar-Clip attachment system. When overall stress distribution is compared, the Bar-Clip attachment seems to perform better than the Ball/O-ring attachment, because the force was distributed better.

  18. The Job Demands-Resources model as predictor of work identity and work engagement: A comparative analysis

    OpenAIRE

    Roslyn De Braine; Gert Roodt

    2011-01-01

    Orientation: Research shows that engaged employees experience high levels of energy and strong identification with their work, hence this study’s focus on work identity and dedication. Research purpose: This study explored possible differences in the Job Demands-Resources model (JD-R) as predictor of overall work engagement, dedication only and work-based identity, through comparative predictive analyses. Motivation for the study: This study may shed light on the dedication component o...

  19. Error analysis of short term wind power prediction models

    International Nuclear Information System (INIS)

    De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco

    2011-01-01

    The integration of wind farms in power networks has become an important problem. This is because the electricity produced cannot be stored, owing to the high cost of storage, and electricity production must follow market demand. Short- to long-range wind forecasting over different time horizons is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast the power production of a wind farm with three wind turbines, using real load data and comparing different prediction periods. This comparative analysis takes in, for the first time, various forecasting methods and time horizons, together with a deep performance analysis focused upon the normalised mean error and its statistical distribution, in order to identify forecasting methods whose errors are distributed within a narrower curve and with which it is therefore less probable to make prediction errors. (author)

  20. Error analysis of short term wind power prediction models

    Energy Technology Data Exchange (ETDEWEB)

    De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco [Dipartimento di Ingegneria dell' Innovazione, Universita del Salento, Via per Monteroni, 73100 Lecce (Italy)

    2011-04-15

    The integration of wind farms in power networks has become an important problem. This is because the electricity produced cannot be stored, owing to the high cost of storage, and electricity production must follow market demand. Short- to long-range wind forecasting over different time horizons is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast the power production of a wind farm with three wind turbines, using real load data and comparing different prediction periods. This comparative analysis takes in, for the first time, various forecasting methods and time horizons, together with a deep performance analysis focused upon the normalised mean error and its statistical distribution, in order to identify forecasting methods whose errors are distributed within a narrower curve and with which it is therefore less probable to make prediction errors. (author)
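
    A minimal sketch of the ARMA branch of such a comparison, fitting statsmodels' ARIMA to a synthetic power series and reporting a normalised mean error (all numbers illustrative):

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic hourly power series standing in for the three-turbine farm.
    rng = np.random.default_rng(7)
    power = np.empty(500)
    power[0] = 1000.0
    for t in range(1, 500):
        power[t] = 1000 + 0.8 * (power[t - 1] - 1000) + rng.normal(0, 20)

    train, test = power[:480], power[480:]
    res = ARIMA(train, order=(2, 0, 1)).fit()   # ARMA(2,1) on levels
    forecast = res.forecast(steps=len(test))

    # Normalised mean error over the 20 h horizon, as analysed in the study.
    nme = np.mean(forecast - test) / np.mean(test)
    print(f"Normalised mean error: {nme:.3%}")
    ```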

  1. Managing Information Uncertainty in Wave Height Modeling for the Offshore Structural Analysis through Random Set

    Directory of Open Access Journals (Sweden)

    Keqin Yan

    2017-01-01

    This chapter presents a reliability study for an offshore jacket structure with emphasis on the features of nonconventional modeling. Firstly, a random set model is formulated for modeling the random waves at an ocean site. Then, a jacket structure is investigated in a pushover analysis to identify the critical wave direction and key structural elements, based on the ultimate base shear strength. The selected probabilistic models are adopted for the important structural members, and the wave direction is specified in the weakest direction of the structure for a conservative safety analysis. The wave height model is processed in a P-box format when it is used in the numerical analysis. The models are applied to find the bounds of the failure probabilities for the jacket structure. The propagation of the uncertainty in this wave model to the results is investigated in both an interval analysis and Monte Carlo simulation. The results are compared in the context of information content and numerical accuracy. Further, the failure probability bounds are compared with the conventional probabilistic approach.
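
    Processing the wave height model in P-box format can be sketched by sampling from two bounding distributions, which yields bounds on the failure probability rather than a point value; the Weibull parameters and the capacity threshold below are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    N = 100_000

    # A crude P-box for the significant wave height: two Weibull CDFs whose
    # parameters bracket the imprecise statistical information.
    h_lower = stats.weibull_min(c=1.8, scale=2.5).rvs(N, random_state=rng)
    h_upper = stats.weibull_min(c=1.6, scale=3.2).rvs(N, random_state=rng)

    capacity = 6.0   # wave height at which the ultimate base shear is reached

    # Sampling from the bounding distributions bounds the failure probability.
    pf_low = np.mean(h_lower > capacity)
    pf_high = np.mean(h_upper > capacity)
    print(f"Failure probability bounds: [{pf_low:.2e}, {pf_high:.2e}]")
    ```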

  2. Comparability of river suspended-sediment sampling and laboratory analysis methods

    Science.gov (United States)

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference in laboratory analysis methods was slightly greater than field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate there is less of a difference among samples collected with grab field sampling and analyzed for TSS and concentration of fines in SSC. Even though differences are present, the presence of strong correlations between SSC and TSS concentrations provides the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.

  3. Factor Analysis of Drawings: Application to college student models of the greenhouse effect

    Science.gov (United States)

    Libarkin, Julie C.; Thomas, Stephen R.; Ording, Gabriel

    2015-09-01

    Exploratory factor analysis was used to identify models underlying drawings of the greenhouse effect made by over 200 entering university freshmen. Initial content analysis allowed deconstruction of drawings into salient features, with grouping of these features via factor analysis. A resulting 4-factor solution explains 62% of the data variance, suggesting that 4 archetype models of the greenhouse effect dominate thinking within this population. Factor scores, indicating the extent to which each student's drawing aligned with representative models, were compared to performance on conceptual understanding and attitudes measures, demographics, and non-cognitive features of drawings. Student drawings were also compared to drawings made by scientists to ascertain the extent to which student models reflect the more sophisticated and accurate models of experts. Results indicate that student and scientist drawings share some similarities, most notably the presence of some features of the most sophisticated non-scientific model held among the study population. Prior knowledge, prior attitudes, gender, and non-cognitive components are also predictive of an individual student's model. This work presents a new technique for analyzing drawings, with general implications for the use of drawings in investigating student conceptions.
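
    The factor-analytic step can be sketched with scikit-learn on a binary feature matrix of the kind produced by the content analysis; the study's exact procedure (e.g., extraction and rotation choices) may differ, and the data below are random placeholders.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Rows are student drawings, columns are coded features (1 = present).
    rng = np.random.default_rng(5)
    features = rng.integers(0, 2, size=(200, 12)).astype(float)

    fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
    scores = fa.fit_transform(features)   # per-student factor scores

    # Loadings link each drawing feature to the four archetype models.
    print(np.round(fa.components_, 2))
    ```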

  4. A new tool for accelerator system modeling and analysis

    International Nuclear Information System (INIS)

    Gillespie, G.H.; Hill, B.W.; Jameson, R.A.

    1994-01-01

    A novel computer code is being developed to generate system level designs of radiofrequency ion accelerators. The goal of the Accelerator System Model (ASM) code is to create a modeling and analysis tool that is easy to use, automates many of the initial design calculations, supports trade studies used in assessing alternate designs and yet is flexible enough to incorporate new technology concepts as they emerge. Hardware engineering parameters and beam dynamics are modeled at comparable levels of fidelity. Existing scaling models of accelerator subsystems were used to produce a prototype of ASM (version 1.0) working within the Shell for Particle Accelerator Related Codes (SPARC) graphical user interface. A small user group has been testing and evaluating the prototype for about a year. Several enhancements and improvements are now being developed. The current version (1.1) of ASM is briefly described and an example of the modeling and analysis capabilities is illustrated.

  5. YersiniaBase: a genomic resource and analysis platform for comparative analysis of Yersinia.

    Science.gov (United States)

    Tan, Shi Yang; Dutta, Avirup; Jakubovics, Nicholas S; Ang, Mia Yang; Siow, Cheuk Chuen; Mutha, Naresh Vr; Heydari, Hamed; Wee, Wei Yee; Wong, Guat Jah; Choo, Siew Woh

    2015-01-16

    Yersinia is a genus of Gram-negative bacteria that includes serious pathogens such as Yersinia pestis, which causes plague, Yersinia pseudotuberculosis and Yersinia enterocolitica. The remaining species are generally considered non-pathogenic to humans, although there is evidence that at least some of these species can cause occasional infections using distinct mechanisms from the more pathogenic species. With the advances in sequencing technologies, many genomes of Yersinia have been sequenced. However, there is currently no specialized platform to hold the rapidly-growing Yersinia genomic data and to provide analysis tools, particularly for comparative analyses, which are required to provide improved insights into their biology, evolution and pathogenicity. To facilitate the ongoing and future research of Yersinia, especially of those species generally considered non-pathogenic, a well-defined repository and analysis platform is needed to hold the Yersinia genomic data and analysis tools for the Yersinia research community. Hence, we have developed YersiniaBase, a robust and user-friendly Yersinia resource and analysis platform for the analysis of Yersinia genomic data. YersiniaBase has a total of twelve species and 232 genome sequences, of which the majority are Yersinia pestis. In order to smooth the process of searching genomic data in a large database, we implemented an Asynchronous JavaScript and XML (AJAX)-based real-time searching system in YersiniaBase. Besides incorporating existing tools, which include the JavaScript-based genome browser (JBrowse) and the Basic Local Alignment Search Tool (BLAST), YersiniaBase also has in-house developed tools: (1) Pairwise Genome Comparison tool (PGC) for comparing two user-selected genomes; (2) Pathogenomics Profiling Tool (PathoProT) for comparative pathogenomics analysis of Yersinia genomes; (3) YersiniaTree for constructing phylogenetic trees of Yersinia. We ran analyses based on the tools and genomic data in YersiniaBase and the

  6. Comparative analysis of nonperturbative effects in B→X_u lν_l decays

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Kramer, Gustav; Yang Jifeng

    2007-01-01

    In order to extract the Cabibbo-Kobayashi-Maskawa matrix element |V_ub| from B→X_u lν_l decays, the overwhelming background from B→X_c lν_l decays must be reduced by appropriate acceptance cuts. We study the nonperturbative effects due to the motion of the b quark inside the B meson on the phenomenologically relevant decay distributions of B→X_u lν_l in the presence of such cuts in a comparative analysis based on shape functions and the parton model in the light-cone limit. Comparisons with recent data from the CLEO, BABAR, and BELLE collaborations favor the shape-function approach.

  7. Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations.

    Science.gov (United States)

    Kooloos, Jan G M; Schepens-Franke, Annelieke N; Bergman, Esther M; Donders, Rogier A R T; Vorstenbosch, Marc A T M

    2014-01-01

    Clay modeling is increasingly used as a teaching method other than dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulations during exercises in the dissection room involving tissues and organs. We questioned this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observations (Experiment I) or video observations (Experiment II) of the clay-modeling exercise. The effects of learning were measured with multiple choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to elaborate the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises show less gain in anatomical knowledge than students who attentively observe the same exercise being carried out and (2) performing a clay-modeling exercise is better in anatomical knowledge gain compared to the study of a video of the recorded exercise. The most important learning effect seems to be the engagement in the exercise, focusing attention and stimulating time on task. © 2014 American Association of Anatomists.
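
    The pretest-post-test comparison described above, an analysis of covariance with pretest scores as the covariate, can be sketched with statsmodels; the scores below are invented.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy pretest-post-test data for two conditions (clay modeling vs.
    # observation); all scores are illustrative.
    df = pd.DataFrame({
        "pre":   [10, 12, 9, 14, 11, 13, 10, 12, 8, 15],
        "post":  [15, 18, 14, 22, 16, 21, 17, 20, 12, 23],
        "group": ["clay"] * 5 + ["observe"] * 5,
    })

    # ANCOVA: post-test score modelled by group with pretest as covariate.
    model = smf.ols("post ~ C(group) + pre", data=df).fit()
    print(model.summary())
    ```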

  8. ModelMate - A graphical user interface for model analysis

    Science.gov (United States)

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  9. Canis familiaris As a Model for Non-Invasive Comparative Neuroscience.

    Science.gov (United States)

    Bunford, Nóra; Andics, Attila; Kis, Anna; Miklósi, Ádám; Gácsi, Márta

    2017-07-01

    There is an ongoing need to improve animal models for investigating human behavior and its biological underpinnings. The domestic dog (Canis familiaris) is a promising model in cognitive neuroscience. However, before it can contribute to advances in this field in a comparative, reliable, and valid manner, several methodological issues warrant attention. We review recent non-invasive canine neuroscience studies, primarily focusing on (i) variability among dogs and between dogs and humans in cranial characteristics, and (ii) generalizability across dog and dog-human studies. We argue not for methodological uniformity but for functional comparability between methods, experimental designs, and neural responses. We conclude that the dog may become an innovative and unique model in comparative neuroscience, complementing more traditional models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Comparative Analysis of Terrorists’ Communication Strategies

    Directory of Open Access Journals (Sweden)

    Denis Alexandrovich Zhuravliev

    2011-01-01

    There is a widespread approach in the research literature to regard terrorism as a communicative process. From this point of view, the author offers a comparative analysis of the three most common communication strategies of terrorist groups: transforming the role of the mass media, the Internet, and a combined approach. The author also argues that a particular communication strategy determines the structure of a terrorist organization.

  11. The Job Demands-Resources model as predictor of work identity and work engagement: A comparative analysis

    Directory of Open Access Journals (Sweden)

    Roslyn De Braine

    2011-05-01

    Research purpose: This study explored possible differences in the Job Demands-Resources model (JD-R) as predictor of overall work engagement, dedication only and work-based identity, through comparative predictive analyses. Motivation for the study: This study may shed light on the dedication component of work engagement. Currently no literature indicates that the JD-R model has been used to predict work-based identity. Research design: A census-based survey was conducted amongst a target population of 23134 employees that yielded a sample of 2429 (a response rate of about 10.5%). The Job Demands-Resources scale (JDRS) was used to measure job demands and job resources. A work-based identity scale was developed for this study. Work engagement was studied with the Utrecht Work Engagement Scale (UWES). Factor and reliability analyses were conducted on the scales and general multiple regression models were used in the predictive analyses. Main findings: The JD-R model yielded a greater amount of variance in dedication than in work engagement. It, however, yielded the greatest amount of variance in work-based identity, with job resources being its strongest predictor. Practical/managerial implications: Identification and work engagement levels can be improved by managing job resources and demands. Contribution/value-add: This study builds on the literature of the JD-R model by showing that it can be used to predict work-based identity.

  12. A Comparative Analysis of MOOC (Massive Open Online Course) Platforms

    Directory of Open Access Journals (Sweden)

    Maria CONACHE

    2016-01-01

    MOOC platforms have seen considerable development in recent years due to the enlargement of the online space and the shift from traditional to virtual activities. These platforms made it possible for people almost everywhere to take online academic courses offered by top universities, via open access to the web and with unlimited participation. Thus, it came naturally to us to address the question: what makes them so successful? The purpose of this paper is to compare MOOC platforms in terms of features, based on users' involvement and demands. First, we chose four relevant lifelong learning platforms, and then we structured three main categories for the platforms' qualification, on which we built our theory regarding the comparison between them. Our analysis consists of three sets of criteria: business model, course design and popularity among online users. Starting from this perspective, we built a range of representative factors for which we highlighted the major aspects of each platform in our comparative research.

  13. Bayesian uncertainty analysis with applications to turbulence modeling

    International Nuclear Information System (INIS)

    Cheung, Sai Hung; Oliver, Todd A.; Prudencio, Ernesto E.; Prudhomme, Serge; Moser, Robert D.

    2011-01-01

    In this paper, we apply Bayesian uncertainty quantification techniques to the processes of calibrating complex mathematical models and predicting quantities of interest (QoI's) with such models. These techniques also enable the systematic comparison of competing model classes. The processes of calibration and comparison constitute the building blocks of a larger validation process, the goal of which is to accept or reject a given mathematical model for the prediction of a particular QoI for a particular scenario. In this work, we take the first step in this process by applying the methodology to the analysis of the Spalart-Allmaras turbulence model in the context of incompressible, boundary layer flows. Three competing model classes based on the Spalart-Allmaras model are formulated, calibrated against experimental data, and used to issue a prediction with quantified uncertainty. The model classes are compared in terms of their posterior probabilities and their prediction of QoI's. The model posterior probability represents the relative plausibility of a model class given the data. Thus, it incorporates the model's ability to fit experimental observations. Alternatively, comparing models using the predicted QoI connects the process to the needs of decision makers that use the results of the model. We show that by using both the model plausibility and predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges.
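
    The posterior model probabilities used to compare the model classes follow from each class's (log) evidence by Bayes' rule; a numerically stable sketch with invented evidence values:

    ```python
    import numpy as np

    # Hypothetical log-evidences (marginal likelihoods) for three competing
    # model classes, as produced by a Bayesian calibration; values invented.
    log_evidence = np.array([-1052.3, -1049.8, -1050.6])
    prior = np.array([1 / 3, 1 / 3, 1 / 3])   # equal prior plausibility

    # Posterior model probabilities via a numerically stable softmax.
    w = log_evidence + np.log(prior)
    post = np.exp(w - w.max())
    post /= post.sum()
    for i, p in enumerate(post, 1):
        print(f"Model class {i}: posterior probability {p:.3f}")
    ```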

  14. Comparing the line broadened quasilinear model to Vlasov code

    International Nuclear Information System (INIS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-01-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of a Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver both in regards to a mode amplitude's time evolution to a saturated state and its final steady state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to significantly differ from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  15. Comparing the line broadened quasilinear model to Vlasov code

    Energy Technology Data Exchange (ETDEWEB)

    Ghantous, K. [Laboratoire de Physique des Plasmas, Ecole Polytechnique, 91128 Palaiseau Cedex (France); Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States); Berk, H. L. [Institute for Fusion Studies, University of Texas, 2100 San Jacinto Blvd, Austin, Texas 78712-1047 (United States); Gorelenkov, N. N. [Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543-0451 (United States)

    2014-03-15

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of a Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver both in regards to a mode amplitude's time evolution to a saturated state and its final steady state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to significantly differ from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  16. Comparing the line broadened quasilinear model to Vlasov code

    Science.gov (United States)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of a Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver both in regards to a mode amplitude's time evolution to a saturated state and its final steady state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to significantly differ from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  17. Automated economic analysis model for hazardous waste minimization

    International Nuclear Information System (INIS)

    Dharmavaram, S.; Mount, J.B.; Donahue, B.A.

    1990-01-01

    The US Army has established a policy of achieving a 50 percent reduction in hazardous waste generation by the end of 1992. To assist the Army in reaching this goal, the Environmental Division of the US Army Construction Engineering Research Laboratory (USACERL) designed the Economic Analysis Model for Hazardous Waste Minimization (EAHWM). The EAHWM was designed to allow the user to evaluate the life cycle costs for various techniques used in hazardous waste minimization and to compare them to the life cycle costs of current operating practices. The program was developed in C language on an IBM compatible PC and is consistent with other pertinent models for performing economic analyses. The potential hierarchical minimization categories used in EAHWM include source reduction, recovery and/or reuse, and treatment. Although treatment is no longer an acceptable minimization option, its use is widespread and has therefore been addressed in the model. The model allows for economic analysis of minimization of the Army's six most important hazardous waste streams. These include solvents, paint stripping wastes, metal plating wastes, industrial waste sludges, used oils, and batteries and battery electrolytes. The EAHWM also includes a general application which can be used to calculate and compare the life cycle costs for minimization alternatives of any waste stream, hazardous or non-hazardous. The EAHWM has been fully tested and implemented in more than 60 Army installations in the United States
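
    The kind of life cycle cost comparison the model performs reduces to a present-value calculation; in the sketch below the cost figures, discount rate and horizon are illustrative, not EAHWM defaults.

    ```python
    # Minimal life-cycle-cost comparison in the spirit of EAHWM: discounted
    # cost of current practice versus a minimization alternative.
    def life_cycle_cost(capital, annual, rate, years):
        """Present value of capital plus recurring annual costs."""
        pv_annual = annual * (1 - (1 + rate) ** -years) / rate
        return capital + pv_annual

    current = life_cycle_cost(capital=0, annual=120_000, rate=0.05, years=10)
    recovery = life_cycle_cost(capital=250_000, annual=60_000, rate=0.05,
                               years=10)

    print(f"Current practice: ${current:,.0f}")
    print(f"Solvent recovery: ${recovery:,.0f}")
    print(f"PV savings:       ${current - recovery:,.0f}")
    ```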

  18. A comparative Thermal Analysis of conventional parabolic receiver tube and Cavity model tube in a Solar Parabolic Concentrator

    Science.gov (United States)

    Arumugam, S.; Ramakrishna, P.; Sangavi, S.

    2018-02-01

    Improvements in heating technology with solar energy are gaining focus, especially solar parabolic collectors. Solar heating in conventional parabolic collectors is done with the help of radiation concentration on receiver tubes. Conventional receiver tubes are open to the atmosphere and lose heat to ambient air currents. In order to reduce the convection losses and also to improve the aperture area, we designed a tube with a cavity. This study compares the performance behaviour of the conventional tube and the cavity model tube. The performance formulae were derived for the cavity model based on the conventional model. A reduction in the overall heat loss coefficient was observed for the cavity model, though the collector heat removal factor and collector efficiency were nearly the same for both models. An improvement in efficiency was also observed in the cavity model's performance. The approach of designing a cavity model tube as the receiver tube in solar parabolic collectors gave improved results and proved to be a good design consideration.
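
    The performance quantities compared in the study can be illustrated with the standard Hottel-Whillier relation for collector useful gain; all parameter values below are illustrative, with the cavity receiver entering through a lower overall heat loss coefficient U_L.

    ```python
    # Collector useful gain Q_u = F_R * A * (S - U_L * (T_in - T_amb)),
    # with efficiency referred to the incident irradiance G.
    A = 2.0       # aperture area [m^2]
    F_R = 0.8     # heat removal factor [-]
    S = 800.0     # absorbed solar flux [W/m^2]
    U_L = 6.0     # overall heat loss coefficient [W/m^2.K] (lower for cavity)
    T_in, T_amb = 320.0, 300.0   # inlet and ambient temperatures [K]
    G = 1000.0    # incident irradiance [W/m^2]

    Q_useful = F_R * A * (S - U_L * (T_in - T_amb))   # useful heat gain [W]
    efficiency = Q_useful / (A * G)
    print(f"Q_useful = {Q_useful:.0f} W, efficiency = {efficiency:.2%}")
    ```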

  19. Comparative analysis of European bat lyssavirus 1 pathogenicity in the mouse model.

    Directory of Open Access Journals (Sweden)

    Elisa Eggerbauer

    2017-06-01

    European bat lyssavirus 1 (EBLV-1) is responsible for most bat rabies cases in Europe. Although EBLV-1 isolates display a high degree of sequence identity, different sublineages exist. In individual isolates various insertions and deletions have been identified, with unknown impact on viral replication and pathogenicity. In order to assess whether different genetic features of EBLV-1 isolates correlate with phenotypic changes, different EBLV-1 variants were compared for pathogenicity in the mouse model. Groups of three mice were infected intracranially (i.c.) with 10^2 TCID50/ml, and groups of six mice were infected intramuscularly (i.m.) with 10^5 TCID50/ml and 10^2 TCID50/ml, as well as intranasally (i.n.) with 10^2 TCID50/ml. Significant differences in survival following i.m. inoculation with low doses as well as i.n. inoculation were observed. Also, striking variations in incubation periods following i.c. inoculation and i.m. inoculation with high doses were seen. Hereby, the clinical picture differed between general symptoms, spasms and aggressiveness, depending on the inoculation route. Immunohistochemistry of mouse brains showed that the virus distribution in the brain depended on the inoculation route. In conclusion, different EBLV-1 isolates differ in pathogenicity, indicating variation which is not reflected in studies of single isolates.

  20. Comparative analysis of European bat lyssavirus 1 pathogenicity in the mouse model.

    Science.gov (United States)

    Eggerbauer, Elisa; Pfaff, Florian; Finke, Stefan; Höper, Dirk; Beer, Martin; Mettenleiter, Thomas C; Nolden, Tobias; Teifke, Jens-Peter; Müller, Thomas; Freuling, Conrad M

    2017-06-01

    European bat lyssavirus 1 (EBLV-1) is responsible for most bat rabies cases in Europe. Although EBLV-1 isolates display a high degree of sequence identity, different sublineages exist. In individual isolates various insertions and deletions have been identified, with unknown impact on viral replication and pathogenicity. In order to assess whether different genetic features of EBLV-1 isolates correlate with phenotypic changes, different EBLV-1 variants were compared for pathogenicity in the mouse model. Groups of three mice were infected intracranially (i.c.) with 10^2 TCID50/ml, and groups of six mice were infected intramuscularly (i.m.) with 10^5 TCID50/ml and 10^2 TCID50/ml, as well as intranasally (i.n.) with 10^2 TCID50/ml. Significant differences in survival following i.m. inoculation with low doses as well as i.n. inoculation were observed. Also, striking variations in incubation periods following i.c. inoculation and i.m. inoculation with high doses were seen. Hereby, the clinical picture differed between general symptoms, spasms and aggressiveness, depending on the inoculation route. Immunohistochemistry of mouse brains showed that the virus distribution in the brain depended on the inoculation route. In conclusion, different EBLV-1 isolates differ in pathogenicity, indicating variation which is not reflected in studies of single isolates.

  1. Comparison of 3D quantitative structure-activity relationship methods: Analysis of the in vitro antimalarial activity of 154 artemisinin analogues by hypothetical active-site lattice and comparative molecular field analysis

    Science.gov (United States)

    Woolfrey, John R.; Avery, Mitchell A.; Doweyko, Arthur M.

    1998-03-01

    Two three-dimensional quantitative structure-activity relationship (3D-QSAR) methods, comparative molecular field analysis (CoMFA) and hypothetical active site lattice (HASL), were compared with respect to the analysis of a training set of 154 artemisinin analogues. Five models were created, including a complete HASL and two trimmed versions, as well as two CoMFA models (leave-one-out standard CoMFA and the guided-region selection protocol). Similar r2 and q2 values were obtained by each method, although some striking differences existed between CoMFA contour maps and the HASL output. Each of the four predictive models exhibited a similar ability to predict the activity of a test set of 23 artemisinin analogues, although some differences were noted as to which compounds were described well by either model.

  2. Advancing team-based primary health care: a comparative analysis of policies in western Canada.

    Science.gov (United States)

    Suter, Esther; Mallinson, Sara; Misfeldt, Renee; Boakye, Omenaa; Nasmith, Louise; Wong, Sabrina T

    2017-07-17

    We analyzed and compared primary health care (PHC) policies in British Columbia, Alberta and Saskatchewan to understand how they inform the design and implementation of team-based primary health care service delivery. The goal was to develop policy imperatives that can advance team-based PHC in Canada. We conducted comparative case studies (n = 3). The policy analysis included: Context review: We reviewed relevant information (2007 to 2014) from databases and websites. Policy review and comparative analysis: We compared and contrasted publically available PHC policies. Key informant interviews: Key informants (n = 30) validated narratives prepared from the comparative analysis by offering contextual information on potential policy imperatives. Advisory group and roundtable: An expert advisory group guided this work and a key stakeholder roundtable event guided prioritization of policy imperatives. The concept of team-based PHC varies widely across and within the three provinces. We noted policy gaps related to team configuration, leadership, scope of practice, role clarity and financing of team-based care; few policies speak explicitly to monitoring and evaluation of team-based PHC. We prioritized four policy imperatives: (1) alignment of goals and policies at different system levels; (2) investment of resources for system change; (3) compensation models for all members of the team; and (4) accountability through collaborative practice metrics. Policies supporting team-based PHC have been slow to emerge, lacking a systematic and coordinated approach. Greater alignment with specific consideration of financing, reimbursement, implementation mechanisms and performance monitoring could accelerate systemic transformation by removing some well-known barriers to team-based care.

  3. Methods of international health technology assessment agencies for economic evaluations--a comparative analysis.

    Science.gov (United States)

    Mathes, Tim; Jacobs, Esther; Morfeld, Jana-Carina; Pieper, Dawid

    2013-09-30

    The number of Health Technology Assessment (HTA) agencies is increasing. One component of HTAs is economic aspects, which are commonly incorporated through economic evaluations. A convergence of recommendations for methods of health economic evaluations between international HTA agencies would facilitate the adaptation of results to different settings and avoid unnecessary expense. A first step in this direction is a detailed analysis of existing similarities and differences in recommendations to identify potential for harmonization. The objective is to provide an overview and comparison of the methodological recommendations of international HTA agencies for economic evaluations. The webpages of 127 international HTA agencies were searched for guidelines containing recommendations on methods for the preparation of economic evaluations. Additionally, the HTA agencies were asked to provide information on methods for economic evaluations. Recommendations of the included guidelines were extracted into standardized tables according to 13 methodological aspects. All process steps were performed independently by two reviewers. Finally, 25 publications of 14 HTA agencies were included in the analysis. Methods for economic evaluations vary widely. The greatest accordance could be found for the type of analysis and the comparator: cost-utility analyses or cost-effectiveness analyses are recommended, and the comparator should consistently be usual care. The greatest differences were shown in the recommendations on the measurement/sources of effects, discounting, and sensitivity analysis. The main difference regarding effects is the focus either on efficacy or effectiveness. Recommended discounting rates range from 1.5% to 5% for effects and 3% to 5% for costs, whereby it is mostly recommended to use the same rate for costs and effects. With respect to the analysis of sensitivity, the main difference is that oftentimes either the probabilistic or the deterministic approach is recommended.

  4. Comparative evaluation of life cycle assessment models for solid waste management

    International Nuclear Information System (INIS)

    Winkler, Joerg; Bilitewski, Bernd

    2007-01-01

    This publication compares a selection of six different models developed in Europe and America by research organisations, industry associations and governmental institutions. The comparison of the models reveals the variations in the results and the differences in the conclusions of an LCA study done with these models. The models are compared by modelling a specific case - the waste management system of Dresden, Germany - with each model and comparing the life cycle inventory results in detail. Moreover, a life cycle impact assessment shows whether the LCA results of each model allow for comparable and consecutive conclusions, which do not contradict the conclusions derived from the other models' results. Furthermore, the influence of different levels of detail in the life cycle inventory on the life cycle assessment is demonstrated. The model comparison revealed that the variations in the LCA results calculated by the models for the case are high and not negligible. In some cases the high variations in results lead to contradictory conclusions concerning the environmental performance of the waste management processes. The static, linear modelling approach chosen by all the models analysed is inappropriate for reflecting actual conditions. Moreover, it was found that although the models' approach to LCA is comparable on a general level, the level of detail implemented in the software tools is very different

  5. Comparative Analysis of the Effects of Organization Development ...

    African Journals Online (AJOL)

    Comparative Analysis of the Effects of Organization Development Interventions on Organizational Leadership and Management Practice: A Case Of Green Earth Program (GEP) ... Journal of Language, Technology & Entrepreneurship in Africa.

  6. Cost-Effectiveness Analysis of Stereotactic Body Radiation Therapy Compared With Radiofrequency Ablation for Inoperable Colorectal Liver Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hayeon, E-mail: kimh2@upmc.edu [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, Pennsylvania (United States); Gill, Beant; Beriwal, Sushil; Huq, M. Saiful [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, Pennsylvania (United States); Roberts, Mark S. [Department of Health Policy and Management, University of Pittsburgh School of Public Health, Pittsburgh, Pennsylvania (United States); Smith, Kenneth J. [Department of Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania (United States)

    2016-07-15

    Purpose: To conduct a cost-effectiveness analysis to determine whether stereotactic body radiation therapy (SBRT) is a cost-effective therapy compared with radiofrequency ablation (RFA) for patients with unresectable colorectal cancer (CRC) liver metastases. Methods and Materials: A cost-effectiveness analysis was conducted using a Markov model and 1-month cycle over a lifetime horizon. Transition probabilities, quality of life utilities, and costs associated with SBRT and RFA were captured in the model on the basis of a comprehensive literature review and Medicare reimbursements in 2014. Strategies were compared using the incremental cost-effectiveness ratio, with effectiveness measured in quality-adjusted life years (QALYs). To account for model uncertainty, 1-way and probabilistic sensitivity analyses were performed. Strategies were evaluated with a willingness-to-pay threshold of $100,000 per QALY gained. Results: In base case analysis, treatment costs for 3 fractions of SBRT and 1 RFA procedure were $13,000 and $4397, respectively. Median survival was assumed the same for both strategies (25 months). The SBRT costs $8202 more than RFA while gaining 0.05 QALYs, resulting in an incremental cost-effectiveness ratio of $164,660 per QALY gained. In 1-way sensitivity analyses, results were most sensitive to variation of median survival from both treatments. Stereotactic body radiation therapy was economically reasonable if better survival was presumed (>1 month gain) or if used for large tumors (>4 cm). Conclusions: If equal survival is assumed, SBRT is not cost-effective compared with RFA for inoperable colorectal liver metastases. However, if better local control leads to small survival gains with SBRT, this strategy becomes cost-effective. Ideally, these results should be confirmed with prospective comparative data.
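
    The base-case ICER arithmetic can be reproduced directly from the reported figures; the small gap between $8,202/0.05 QALY and the reported $164,660/QALY presumably reflects rounding of the incremental values in the abstract.

    ```python
    # Base-case figures reported in the abstract.
    cost_sbrt, cost_rfa = 13_000, 4_397   # treatment costs [$]
    delta_cost = 8_202                    # incremental lifetime cost [$]
    delta_qaly = 0.05                     # incremental effectiveness [QALYs]

    icer = delta_cost / delta_qaly        # dollars per QALY gained
    wtp = 100_000                         # willingness-to-pay [$/QALY]
    print(f"SBRT (${cost_sbrt:,}) vs RFA (${cost_rfa:,}): "
          f"ICER = ${icer:,.0f}/QALY; "
          f"cost-effective at ${wtp:,}/QALY? {icer <= wtp}")
    ```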

  7. Cost-Effectiveness Analysis of Stereotactic Body Radiation Therapy Compared With Radiofrequency Ablation for Inoperable Colorectal Liver Metastases

    International Nuclear Information System (INIS)

    Kim, Hayeon; Gill, Beant; Beriwal, Sushil; Huq, M. Saiful; Roberts, Mark S.; Smith, Kenneth J.

    2016-01-01

    Purpose: To conduct a cost-effectiveness analysis to determine whether stereotactic body radiation therapy (SBRT) is a cost-effective therapy compared with radiofrequency ablation (RFA) for patients with unresectable colorectal cancer (CRC) liver metastases. Methods and Materials: A cost-effectiveness analysis was conducted using a Markov model and 1-month cycle over a lifetime horizon. Transition probabilities, quality of life utilities, and costs associated with SBRT and RFA were captured in the model on the basis of a comprehensive literature review and Medicare reimbursements in 2014. Strategies were compared using the incremental cost-effectiveness ratio, with effectiveness measured in quality-adjusted life years (QALYs). To account for model uncertainty, 1-way and probabilistic sensitivity analyses were performed. Strategies were evaluated with a willingness-to-pay threshold of $100,000 per QALY gained. Results: In base case analysis, treatment costs for 3 fractions of SBRT and 1 RFA procedure were $13,000 and $4397, respectively. Median survival was assumed the same for both strategies (25 months). The SBRT costs $8202 more than RFA while gaining 0.05 QALYs, resulting in an incremental cost-effectiveness ratio of $164,660 per QALY gained. In 1-way sensitivity analyses, results were most sensitive to variation of median survival from both treatments. Stereotactic body radiation therapy was economically reasonable if better survival was presumed (>1 month gain) or if used for large tumors (>4 cm). Conclusions: If equal survival is assumed, SBRT is not cost-effective compared with RFA for inoperable colorectal liver metastases. However, if better local control leads to small survival gains with SBRT, this strategy becomes cost-effective. Ideally, these results should be confirmed with prospective comparative data.

  8. Development Of The Computer Code For Comparative Neutron Activation Analysis

    International Nuclear Information System (INIS)

    Purwadi, Mohammad Dhandhang

    2001-01-01

    The qualitative and quantitative chemical analysis with Neutron Activation Analysis (NAA) is an important utilization of a nuclear research reactor, and its application and development should be accelerated and promoted to raise the utilization of the reactor. The application of the comparative NAA technique in the GA Siwabessy Multi Purpose Reactor (RSG-GAS) needs special software (not commercially available yet) for analyzing the spectra of multiple elements in one analysis at once. The analysis was previously carried out using a single-spectrum analyzer software and comparing each result manually. This method degrades the quality of the analysis significantly. To solve the problem, a computer code was designed and developed for comparative NAA. Spectrum analysis in the code is carried out using a non-linear fitting method. Before the spectrum is analyzed, it is passed through a numerical filter, which improves the signal-to-noise ratio for the deconvolution operation. The software was developed using the G language and named PASAN-K. The developed software was benchmarked against the IAEA test spectrum and operated well, with less than 10% deviation.
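
    The core of such a code is the non-linear fit of photopeaks; the hedged sketch below illustrates the principle with a Gaussian-plus-linear-background fit in Python/SciPy on synthetic channel data. PASAN-K itself is written in the G language, so the function names, peak shape and noise model here are illustrative assumptions, not the original implementation.

        # Minimal non-linear photopeak fit for a gamma-ray spectrum region
        import numpy as np
        from scipy.optimize import curve_fit

        def peak(x, amplitude, centroid, sigma, bg0, bg1):
            """Gaussian photopeak on a linear background."""
            return (amplitude * np.exp(-0.5 * ((x - centroid) / sigma) ** 2)
                    + bg0 + bg1 * x)

        # synthetic channel data standing in for a measured spectrum region
        channels = np.arange(480, 520, dtype=float)
        counts = peak(channels, 1500.0, 500.0, 2.5, 40.0, 0.1)
        counts = counts + np.random.default_rng(0).poisson(10, channels.size)

        p0 = [1000.0, 500.0, 3.0, 30.0, 0.0]            # initial guesses
        popt, _ = curve_fit(peak, channels, counts, p0=p0)
        net_area = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)  # Gaussian area
        print(f"centroid {popt[1]:.2f}, net peak area {net_area:.0f} counts")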

  9. Comparative risk analysis for the Rocky Flats Plant integrated project planning

    International Nuclear Information System (INIS)

    Jones, M.E.; Shain, D.I.

    1994-01-01

    The Rocky Flats Plant is developing, with active stakeholder participation, a comprehensive planning strategy that will support transition of the Rocky Flats Plant from a nuclear weapons production facility to site cleanup and final disposition. Consideration of the interrelated nature of sitewide problems, such as material movement and disposition, facility and land use endstates, costs, relative risks to workers and the public, and waste disposition, is needed. Comparative Risk Analysis employs both incremental and cumulative risk evaluations to compare risks from postulated options or endstates, and is an analytical tool for Rocky Flats Plant Integrated Project Planning which can assist a decision-maker in evaluating relative risks among proposed remediation activities. However, risks from all of the remediation activities, decontamination and decommissioning activities, and normal ongoing operations are imposed upon the Rocky Flats workers, the surrounding public, and the environment. Comparative Risk Analysis will provide risk information, both human health and ecological, to aid in reducing unnecessary resource and monetary expenditures by focusing these resources on the largest risks first. Comparative Risk Analysis has been developed to aggregate various incremental risk estimates into a site cumulative risk estimate. A Comparative Risk Analysis methodology group, consisting of community stakeholders, was established. Early stakeholder involvement in the risk analysis methodology development provides an opportunity for stakeholders to influence the risk information delivered to a decision-maker. This paper discusses the development of the Comparative Risk Analysis methodology, stakeholder participation, and lessons learned from these challenges.

  10. Models of Economic Analysis

    OpenAIRE

    Adrian Ioana; Tiberiu Socaciu

    2013-01-01

    The article presents specific aspects of management and models for economic analysis. Thus, we present the main types of economic analysis: statistical analysis, dynamic analysis, static analysis, mathematical analysis, psychological analysis. Also we present the main object of the analysis: the technological activity analysis of a company, the analysis of the production costs, the economic activity analysis of a company, the analysis of equipment, the analysis of labor productivity, the anal...

  11. Comparative Analysis of the Main Business Intelligence Solutions

    OpenAIRE

    Alexandra RUSANEANU

    2013-01-01

    Nowadays, Business Intelligence solutions are the main tools for analyzing and monitoring the company’s performance at any organizational level. This paper presents a comparative analysis of the most powerful Business Intelligence solutions using a set of technical features such as infrastructure of the platform, development facilities, complex analysis tools, interactive dashboards and scorecards, mobile integration and complex implementation of performance management methodologies.

  12. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.
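
    As a rough open-source analogue of the Gaussian-process-based analysis performed with GEM-SA, the hedged sketch below computes Sobol' main-effect and total sensitivity indices with the SALib package for a toy control-rod drop-time surrogate; the parameter names, bounds and the surrogate function are invented for illustration and are not taken from the paper.

        # Variance-based global sensitivity analysis with SALib (Sobol' indices)
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["friction", "gas_pressure", "rod_mass"],
            "bounds": [[0.1, 0.5], [1.0, 4.0], [50.0, 80.0]],
        }

        def drop_time(x):
            # toy surrogate: friction and pressure slow the drop, mass speeds it
            return 2.0 + 3.0 * x[:, 0] + 0.5 * x[:, 1] - 0.01 * x[:, 2]

        X = saltelli.sample(problem, 1024)   # Saltelli sampling scheme
        Y = drop_time(X)
        Si = sobol.analyze(problem, Y)
        print("main effect indices S1:", Si["S1"])
        print("total sensitivity  ST:", Si["ST"])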

  13. Genome-wide comparative analysis of four Indian Drosophila species.

    Science.gov (United States)

    Mohanty, Sujata; Khanna, Radhika

    2017-12-01

    Comparative analysis of multiple genomes of closely or distantly related Drosophila species undoubtedly creates excitement among evolutionary biologists in exploring genomic changes from an ecological and evolutionary perspective. We present herewith the de novo assembled whole genome sequences of four Drosophila species, D. bipectinata, D. takahashii, D. biarmipes and D. nasuta, of Indian origin, using Next Generation Sequencing technology on an Illumina platform, along with their detailed assembly statistics. Comparative genomics analyses, e.g. gene predictions and annotations, functional and orthogroup analysis of coding sequences and genome-wide SNP distribution, were performed. The whole genome of Zaprionus indianus of Indian origin, published earlier by us, and the genome sequences of the 12 previously sequenced Drosophila species available in the NCBI database were included in the analysis. The present work is part of our ongoing genomics project on Indian Drosophila species.

  14. Comparative genomic analysis of Brazilian Leptospira kirschneri serogroup Pomona serovar Mozdok

    Directory of Open Access Journals (Sweden)

    Luisa Z Moreno

    2016-08-01

    Leptospira kirschneri is one of the pathogenic species of the Leptospira genus. Human and animal infections with L. kirschneri have gained further attention over the last few decades. Here we present the isolation and characterisation of the Brazilian L. kirschneri serogroup Pomona serovar Mozdok strain M36/05 and its comparative genomic analysis with the Brazilian human strain 61H. The M36/05 strain caused pulmonary hemorrhagic lesions in the hamster model, showing high virulence. The studied genomes presented high symmetrical identity, and the in silico multilocus sequence typing analysis resulted in a new allelic profile (ST101) that so far has only been associated with the Brazilian L. kirschneri serogroup Pomona serovar Mozdok strains. Considering the environmental conditions and the high genomic similarity observed between the strains, we suggest the existence of a Brazilian L. kirschneri serogroup Pomona serovar Mozdok lineage that could represent a high public health risk; further studies are necessary to confirm the lineage's significance and distribution.

  15. Comparative Economic Analysis of Beekeeping Using Traditional ...

    African Journals Online (AJOL)

    The study was carried out in Tabora and Katavi regions in the miombo woodlands of Tanzania. The overall objective of the study was to undertake a comparative economic analysis of beekeeping using improved or traditional beehives. Data were collected from 198 beekeepers that were randomly selected from a sampling ...

  16. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry-lab simulation practice. After a warm-up period of 2 h, tasks were performed on high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and global rating score (GRS). The participants rated a face-validity questionnaire at the end of each session. Results: The GRS improved statistically significantly at the evaluation performed after the second rotation (P<0.001 for batches 1, 2 and 3). Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model had greater improvement in pass ratings on the second rotation, but this did not achieve statistical significance. Most of the realism domains were rated higher for the VR model compared with the non-VR models, except for the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings irrespective of their VR status.

  17. Methods and theory in bone modeling drift: comparing spatial analyses of primary bone distributions in the human humerus.

    Science.gov (United States)

    Maggiano, Corey M; Maggiano, Isabel S; Tiesler, Vera G; Chi-Keb, Julio R; Stout, Sam D

    2016-01-01

    This study compares two novel methods quantifying bone shaft tissue distributions, and relates observations on human humeral growth patterns for applications in anthropological and anatomical research. Microstructural variation in compact bone occurs due to developmental and mechanically adaptive circumstances that are 'recorded' by forming bone and are important for interpretations of growth, health, physical activity, adaptation, and identity in the past and present. Those interpretations hinge on a detailed understanding of the modeling process by which bones achieve their diametric shape, diaphyseal curvature, and general position relative to other elements. Bone modeling is a complex aspect of growth, potentially causing the shaft to drift transversely through formation and resorption on opposing cortices. Unfortunately, the specifics of modeling drift are largely unknown for most skeletal elements. Moreover, bone modeling has seen little quantitative methodological development compared with secondary bone processes, such as intracortical remodeling. The techniques proposed here, starburst point-count and 45° cross-polarization hand-drawn histomorphometry, permit the statistical and populational analysis of human primary tissue distributions and provide similar results despite being suitable for different applications. This analysis of a pooled archaeological and modern skeletal sample confirms the importance of extreme asymmetry in bone modeling as a major determinant of microstructural variation in diaphyses. Specifically, humeral drift is posteromedial in the human humerus, accompanied by a significant rotational trend. In general, results encourage the usage of endocortical primary bone distributions as an indicator and summary of bone modeling drift, enabling quantitative analysis by direction and proportion in other elements and populations. © 2015 Anatomical Society.

  18. Modeling and analysis of advanced binary cycles

    Energy Technology Data Exchange (ETDEWEB)

    Gawlik, K.

    1997-12-31

    A computer model (Cycle Analysis Simulation Tool, CAST) and a methodology have been developed to perform value analysis for small, low- to moderate-temperature binary geothermal power plants. The value analysis method allows for incremental changes in the levelized electricity cost (LEC) to be determined between a baseline plant and a modified plant. Thermodynamic cycle analyses and component sizing are carried out in the model followed by economic analysis which provides LEC results. The emphasis of the present work is on evaluating the effect of mixed working fluids instead of pure fluids on the LEC of a geothermal binary plant that uses a simple Organic Rankine Cycle. Four resources were studied spanning the range of 265°F to 375°F. A variety of isobutane and propane based mixtures, in addition to pure fluids, were used as working fluids. This study shows that the use of propane mixtures at a 265°F resource can reduce the LEC by 24% when compared to a base case value that utilizes commercial isobutane as its working fluid. The cost savings drop to 6% for a 375°F resource, where an isobutane mixture is favored. Supercritical cycles were found to have the lowest cost at all resources.
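
    The value-analysis step reduces to comparing levelized electricity costs between a baseline and a modified plant. The sketch below shows one conventional LEC formulation using a capital recovery factor; the cost figures and the fixed-charge treatment are assumptions for illustration, not CAST's actual economic model.

        # Illustrative levelized electricity cost (LEC) comparison
        def levelized_cost(capital_cost, om_cost_per_year, annual_kwh,
                           discount_rate=0.08, lifetime_years=30):
            """LEC in $/kWh using a capital recovery factor (CRF)."""
            r, n = discount_rate, lifetime_years
            crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
            return (capital_cost * crf + om_cost_per_year) / annual_kwh

        baseline = levelized_cost(30e6, 1.2e6, 40e6)   # pure-isobutane plant
        modified = levelized_cost(31e6, 1.2e6, 46e6)   # mixed-working-fluid case
        change = 100.0 * (modified - baseline) / baseline
        print(f"incremental LEC change: {change:+.1f}%")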

  19. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    Science.gov (United States)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a wide-spread tool in understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem are model reduction methods. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the
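
    The snapshot POD step described above can be summarized in a few lines of linear algebra: collect snapshots, take their SVD, and project the system matrix onto the leading singular vectors. The sketch below uses a synthetic linear system in NumPy; it is a minimal illustration of the technique, not the authors' groundwater code, and the matrix sizes are arbitrary.

        # Snapshot POD: SVD of snapshots, then Galerkin projection of a linear system
        import numpy as np

        rng = np.random.default_rng(1)
        n, n_snapshots, k = 500, 60, 10

        # synthetic snapshot matrix with low-rank structure (stand-in for model states)
        snapshots = rng.standard_normal((n, 5)) @ rng.standard_normal((5, n_snapshots))
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        basis = U[:, :k]                       # leading POD modes

        A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stand-in system matrix
        b = rng.standard_normal(n)

        A_r = basis.T @ A @ basis              # reduced k-by-k operator
        b_r = basis.T @ b
        x_r = np.linalg.solve(A_r, b_r)        # cheap reduced solve
        x_full = basis @ x_r                   # lift back to full dimension
        print("reduced system size:", A_r.shape)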

  20. Development of the tube bundle structure for fluid-structure interaction analysis model

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Kim, Jae Yong

    2010-02-01

    Tube bundle structures within a boiler or heat exchanger are subject to fluid-structure, thermal-structure and fluid-thermal-structure coupled boundary conditions. Under these complicated boundary conditions, fluid-structure interaction (FSI) occurs when fluid flow causes deformation of the structure. This deformation, in turn, changes the boundary conditions for the fluid flow. Traditionally, the fluid dynamics and structural analysis disciplines were analyzed independently of each other. However, the fluid dynamic force affects the behavior of the structure, and the vibration amplitude of the structure affects the fluid. In the FSI analysis model, the fluid and structure models were created separately, the FSI boundary condition was defined, and both were analyzed simultaneously in one domain. The analysis results were compared with those of the experimental method to validate the analysis model. A flow-induced vibration test was executed with a single-rod configuration. The vibration amplitudes of a fuel rod were measured by a laser vibrometer system in the x and y directions. The analysis results did not match the test data closely, but the trend was very similar to the test result. In the FSI coupled analysis case, the turbulence model was very important for the reliability and accuracy of the analysis model. Therefore, the analysis model will need further study.

  1. The comparative kinetic analysis of Acetocell and Lignoboost® lignin pyrolysis: the estimation of the distributed reactivity models.

    Science.gov (United States)

    Janković, Bojan

    2011-10-01

    The non-isothermal pyrolysis kinetics of Acetocell (organosolv) and Lignoboost® (kraft) lignins in an inert atmosphere have been studied by thermogravimetric analysis. Using isoconversional analysis, it was concluded that the apparent activation energy for all lignins strongly depends on conversion, showing that the pyrolysis of lignins is not a single chemical process. It was identified that the pyrolysis of Acetocell and Lignoboost® lignin takes place over three reaction steps, which was confirmed by the appearance of the corresponding isokinetic relationships (IKR). It was found that the major pyrolysis stage of both lignins is characterized by stilbene pyrolysis reactions, which were subsequently followed by decomposition reactions of products derived from the stilbene pyrolytic process. It was concluded that the non-isothermal pyrolysis of Acetocell and Lignoboost® lignins can be best described by n-th order (n>1) reaction kinetics, using the Weibull mixture model (as the distributed reactivity model) with alternating shape parameters. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. BGI-RIS: an integrated information resource and comparative analysis workbench for rice genomics

    DEFF Research Database (Denmark)

    Zhao, Wenming; Wang, Jing; He, Ximiao

    2004-01-01

    Rice is a major food staple for the world's population and serves as a model species in cereal genome research. The Beijing Genomics Institute (BGI) has long been devoting itself to sequencing, information analysis and biological research of rice and other crop genomes. In order to facilitate.... Designed as a basic platform, BGI-RIS presents the sequenced genomes and related information in systematic and graphical ways for the convenience of in-depth comparative studies (http://rise.genomics.org.cn/).

  3. Annular dispersed flow analysis model by Lagrangian method and liquid film cell method

    International Nuclear Information System (INIS)

    Matsuura, K.; Kuchinishi, M.; Kataoka, I.; Serizawa, A.

    2003-01-01

    A new annular dispersed flow analysis model was developed. In this model, both droplet behavior and liquid film behavior were analyzed simultaneously. Droplet behavior in turbulent flow was analyzed by the Lagrangian method with a refined stochastic model. Liquid film behavior, on the other hand, was simulated by the boundary condition of a moving rough wall and a liquid film cell model, which was used to estimate the liquid film flow rate. The height of the moving rough wall was estimated by a disturbance wave height correlation. In each liquid film cell, the liquid film flow rate was calculated by considering the droplet deposition and entrainment flow rates. The droplet deposition flow rate was calculated by the Lagrangian method, and the entrainment flow rate by an entrainment correlation. For the verification of the moving rough wall model, turbulent flow analysis results under the annular flow condition were compared with the experimental data. Agreement between analysis and experimental results was fairly good. Furthermore, annular dispersed flow experiments were analyzed in order to verify the droplet behavior model and the liquid film cell model. The experimental results for the radial distribution of droplet mass flux were compared with analysis results. The agreement was good under low liquid flow rate conditions and poor under high liquid flow rate conditions. However, by modifying the entrainment rate correlation, the agreement became good even at high liquid flow rates. This means that the basic analysis method for droplet and liquid film behavior was sound. In future work, verification calculations should be carried out under different experimental conditions, and the entrainment rate correlation should also be corrected.

  4. Evaluating the risks of clinical research: direct comparative analysis.

    Science.gov (United States)

    Rid, Annette; Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S; Wendler, David

    2014-09-01

    Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed "risks of daily life" standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. This study employed a conceptual and normative analysis, together with an illustrative example. Different risks are composed of the same basic elements: type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the "risks of daily life" standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Direct comparative analysis is a systematic method for applying the "risks of daily life" standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks.

  5. [Cost-effectiveness analysis of etanercept compared with other biologic therapies in the treatment of rheumatoid arthritis].

    Science.gov (United States)

    Salinas-Escudero, Guillermo; Vargas-Valencia, Juan; García-García, Erika Gabriela; Munciño-Ortega, Emilio; Galindo-Suárez, Rosa María

    2013-01-01

    to conduct a cost-effectiveness analysis of etanercept compared with other biologic therapies in the treatment of moderate or severe rheumatoid arthritis in patients with previous failure of selective immunomodulatory anti-inflammatory agents. a pharmacoeconomic model based on decision analysis was employed to assess the clinical outcome after giving etanercept, infliximab, adalimumab or tocilizumab for moderate or severe rheumatoid arthritis. The effectiveness of the medications was assessed with improvement rates of 20% or 70% in the parameters established by the American College of Rheumatology (ACR 20 and ACR 70). the model showed that etanercept had the most effective therapeutic response rate: 79.7% for ACR 20 and 31.4% for ACR 70, compared with the response to the other treatments. Etanercept also had the lowest cost ($149,629.10 per patient) and the lowest average cost-effectiveness ratio ($187,740.40 per clinical success for ACR 20 and $476,525.80 per clinical success for ACR 70) among the biologic therapies. we demonstrated that treatment with etanercept is more effective and less expensive than the other drugs, making it the more efficient therapeutic option in terms of both average and incremental cost-effectiveness ratios for the treatment of rheumatoid arthritis.

  6. A comparative study of three different gene expression analysis methods.

    Science.gov (United States)

    Choe, Jae Young; Han, Hyung Soo; Lee, Seon Duk; Lee, Hanna; Lee, Dong Eun; Ahn, Jae Yun; Ryoo, Hyun Wook; Seo, Kang Suk; Kim, Jong Kun

    2017-12-04

    TNF-α regulates immune cells and acts as an endogenous pyrogen. Reverse transcription polymerase chain reaction (RT-PCR) is one of the most commonly used methods for gene expression analysis. Among the alternatives to PCR, loop-mediated isothermal amplification (LAMP) shows good potential in terms of specificity and sensitivity. However, few studies have compared RT-PCR and LAMP for human gene expression analysis. Therefore, in the present study, we compared one-step RT-PCR, two-step RT-LAMP and one-step RT-LAMP for human gene expression analysis, using the human TNF-α gene as a biomarker from peripheral blood cells. Total RNA from the three selected febrile patients was subjected to the three different methods of gene expression analysis. In the comparison of the three methods, the detection limits of one-step RT-PCR and one-step RT-LAMP were the same, while that of two-step RT-LAMP was inferior. One-step RT-LAMP takes less time, and the experimental result is easy to determine. One-step RT-LAMP is a potentially useful and complementary tool that is fast and reasonably sensitive. In addition, one-step RT-LAMP could be useful in environments lacking specialized equipment or expertise.

  7. The Elements of Competitive Environment of an Enterprise: A Case of Oligopolic Markets Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Algirdas Krivka

    2011-03-01

    The article raises the problem of the complex analysis of the competitive environment of an enterprise, which is considered to be the main source of factors influencing the enterprise's strategic behaviour and performance. The elements of the competitive environment are derived from "traditional" market structure characteristics developed by the scholars of classical economics and modern microeconomics, with additional factors coming from industrial organization, theoretical oligopoly models, and M. Porter's five competitive forces and diamond. The developed set of elements of the competitive environment is applied to the comparative analysis of three Lithuanian oligopolistic markets. The results obtained confirm the potential for practical application of the developed classification in similar analyses. (Article in Lithuanian)

  8. Computer model verification for seismic analysis of vertical pumps and motors

    International Nuclear Information System (INIS)

    McDonald, C.K.

    1993-01-01

    The general principles of modeling vertical pumps and motors are discussed, and two examples of verifying the models are presented in detail. The first example is a vertical pump and motor assembly. The model and computer analysis are presented, and the first four modes (frequencies) calculated are compared to the values of the same modes obtained from a shaker test. The model used for this example is a lumped-mass model with masses connected by massless beams. The shaker test was performed by National Technical Services, Los Angeles, CA. The second example is a larger vertical motor. The model used for this example is a three-dimensional finite element shell model. The first frequency obtained from this model is compared to the first frequency obtained from shop tests for several different motors. The shop tests were performed by Reliance Electric, Stratford, Ontario and Siemens-Allis, Inc., Norwood, Ohio.

  9. Bluetooth security attacks comparative analysis, attacks, and countermeasures

    CERN Document Server

    Haataja, Keijo; Pasanen, Sanna; Toivanen, Pekka

    2013-01-01

    This overview of Bluetooth security examines network vulnerabilities and offers a comparative analysis of recent security attacks. It also examines related countermeasures and proposes a novel attack that works against all existing Bluetooth versions.

  10. COMPARATIVE STUDY ON MAIN SOLVENCY ASSESSMENT MODELS FOR INSURANCE FIELD

    Directory of Open Access Journals (Sweden)

    Daniela Nicoleta SAHLIAN

    2015-07-01

    During the recent financial crisis in the insurance domain, new aspects emerged that have to be taken into account concerning risk management and surveillance activity. Insurance societies may develop internal models in order to determine the minimum capital requirement imposed by the new regulations that are to be adopted on 1 January 2016. In this respect, the purpose of this research paper is to present and compare the main solvency regulation systems used worldwide, the accent being on their common characteristics and current tendencies. Thereby, we would like to offer a better understanding of the similarities and differences between the existing solvency regimes in order to develop the best solvency regime for Romania within the Solvency II project. The study will show that there are clear differences between the existing Solvency I regime and the new risk-based approaches, and will also point out that even though the key principles supporting the new solvency regimes are convergent, there are many approaches to the application of these principles. In this context, the questions we try to answer are "how could the global solvency models be useful for the financial surveillance authority of Romania in the implementation of the general model and in the development of internal solvency models according to the requirements of Solvency II?" and "what would be the requirements for the implementation of this type of approach?". This makes the analysis of solvency models an interesting exercise.

  11. Cost Utility Analysis of Topical Steroids Compared With Dietary Elimination for Treatment of Eosinophilic Esophagitis.

    Science.gov (United States)

    Cotton, Cary C; Erim, Daniel; Eluri, Swathi; Palmer, Sarah H; Green, Daniel J; Wolf, W Asher; Runge, Thomas M; Wheeler, Stephanie; Shaheen, Nicholas J; Dellon, Evan S

    2017-06-01

    Topical corticosteroids or dietary elimination are recommended as first-line therapies for eosinophilic esophagitis, but data to directly compare these therapies are scant. We performed a cost utility comparison of topical corticosteroids and the 6-food elimination diet (SFED) in treatment of eosinophilic esophagitis, from the payer perspective. We used a modified Markov model based on current clinical guidelines, in which transition between states depended on histologic response simulated at the individual cohort-member level. Simulation parameters were defined by systematic review and meta-analysis to determine the base-case estimates and bounds of uncertainty for sensitivity analysis. Meta-regression models included adjustment for differences in study and cohort characteristics. In the base-case scenario, topical fluticasone was about as effective as SFED but more expensive at a 5-year time horizon ($9261.58 vs $5719.72 per person). SFED was more effective and less expensive than topical fluticasone and topical budesonide in the base-case scenario. Probabilistic sensitivity analysis revealed little uncertainty in relative treatment effectiveness. There was somewhat greater uncertainty in the relative cost of treatments; most simulations found SFED to be less expensive. In a cost utility analysis comparing topical corticosteroids and SFED for first-line treatment of eosinophilic esophagitis, the therapies were similar in effectiveness. SFED was on average less expensive, and more cost effective in most simulations, than topical budesonide and topical fluticasone, from a payer perspective and not accounting for patient-level costs or quality of life. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.

  12. Comparing soil moisture memory in satellite observations and models

    Science.gov (United States)

    Stacke, Tobias; Hagemann, Stefan; Loew, Alexander

    2013-04-01

    A major obstacle to a correct parametrization of soil processes in large scale global land surface models is the lack of long term soil moisture observations for large parts of the globe. Currently, a compilation of soil moisture data derived from a range of satellites is released by the ESA Climate Change Initiative (ECV_SM). Comprising the period from 1978 until 2010, it provides the opportunity to compute climatologically relevant statistics on a quasi-global scale and to compare these to the output of climate models. Our study is focused on the investigation of soil moisture memory in satellite observations and models. As a proxy for memory we compute the autocorrelation length (ACL) of the available satellite data and of the uppermost soil layer of the models. In addition to the ECV_SM data, AMSR-E soil moisture is used as an observational estimate. Simulated soil moisture fields are taken from the ERA-Interim reanalysis and generated with the land surface model JSBACH, which was driven with quasi-observational meteorological forcing data. The satellite data show ACLs between one week and one month for the greater part of the land surface, while the models simulate a longer memory of up to two months. Some patterns are similar in models and observations, e.g. a longer memory in the Sahel Zone and on the Arabian Peninsula, but the models are not able to reproduce regions with a very short ACL of just a few days. If the long term seasonality is subtracted from the data, the memory is strongly shortened, indicating the importance of seasonal variations for the memory in most regions. Furthermore, we analyze the change of soil moisture memory in the different soil layers of the models to investigate to what extent the surface soil moisture includes information about the whole soil column. A first analysis reveals that the ACL is increasing for deeper layers. However, its increase is stronger in the soil moisture anomaly than in its absolute values and the first even exceeds the
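
    As a minimal illustration of the ACL proxy used above, the sketch below computes the lag at which the sample autocorrelation of a daily soil moisture series first drops below 1/e; the threshold convention and the AR(1) stand-in series are assumptions, since the abstract does not state the exact estimator.

        # Autocorrelation length (ACL) of a soil moisture time series
        import numpy as np

        def autocorrelation_length(series, threshold=np.exp(-1)):
            x = series - series.mean()
            acf = np.correlate(x, x, mode="full")[x.size - 1:]
            acf = acf / acf[0]                   # normalize so acf[0] == 1
            below = np.nonzero(acf < threshold)[0]
            return int(below[0]) if below.size else x.size

        rng = np.random.default_rng(0)
        sm = np.zeros(365)
        for t in range(1, sm.size):              # AR(1) stand-in for daily soil moisture
            sm[t] = 0.9 * sm[t - 1] + rng.standard_normal()

        print("ACL:", autocorrelation_length(sm), "days")  # ~10 days for phi = 0.9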

  13. Uncertainty analysis in agent-based modelling and consequential life cycle assessment coupled models : a critical review

    NARCIS (Netherlands)

    Baustert, P.M.; Benetto, E.

    2017-01-01

    The evolution of life cycle assessment (LCA) from a merely comparative tool for the assessment of products to a policy analysis tool proceeds by incorporating increasingly complex modelling approaches. In more recent studies of complex systems, such as the agriculture sector or mobility, agent-based

  14. MODELING OF THE PROCESS OF FORMATION OF INDIVIDUAL MARKETING DEMAND: A COMPARATIVE ANALYSIS AND GENERALIZATION OF THE PRECEDING CORRESPONDING RESULTS

    Directory of Open Access Journals (Sweden)

    Anatoly V. Korotkov

    2015-01-01

    The article focuses on modeling the staged formation of individual market demand. Three well-known marketing models, which exhaust the currently known approaches, are analyzed and then synthesized. The article shows that all three models differ significantly in the number of stages and in terminology. The results obtained are the basis for the development of the author's model of the gradual development of demand - «need - desire - requirement - demand», abbreviated as the «model NDRD» - and can be considered a contribution to the methodology of studying demand.

  15. Comparative analysis of the mitochondrial genomes in gastropods

    International Nuclear Information System (INIS)

    Arquez, Moises; Uribe, Juan Esteban; Castro, Lyda Raquel

    2012-01-01

    In this work we present a comparative analysis of the mitochondrial genomes in gastropods. Nucleotide and amino acid composition was calculated, and a comparative visual analysis of the start and termination codons was performed. The organization of the genome was compared by calculating the number of intergenic sequences, the location of the genes and the number of reorganized genes (breakpoints) in comparison with the sequence that is presumed to be ancestral for the group. In order to calculate variations in the rates of molecular evolution within the group, the relative rate test was performed. In spite of the differences in the size of the genomes, the number of amino acids is conserved. The nucleotide and amino acid composition is similar among Vetigastropoda, Caenogastropoda and Neritimorpha in comparison to Heterobranchia and Patellogastropoda. The mitochondrial genomes of the group are very compact, with few intergenic sequences; the only exception is the genome of Patellogastropoda with 26,828 bp. Start codons of Heterobranchia and Patellogastropoda are very variable, and there is also an increase in genome rearrangements in these two groups. Generally, the hypothesis of constant rates of molecular evolution between the groups is rejected, except when the genomes of Caenogastropoda and Vetigastropoda are compared.

  16. Comparative analysis of technical efficiencies between compound ...

    African Journals Online (AJOL)

    This study was designed to compare the level of technical efficiency of compound and non-compound farms in Imo State. A multi-stage random sampling technique was used to select 120 food crop farmers from two out of the three agricultural zones in Imo State. Using the Chow (1960) analysis of covariance technique ...

  17. Comparative international management of human resources and human resources management in Brazil: An analysis in view of the calculative and collaborative models

    Directory of Open Access Journals (Sweden)

    Tatiani dos Santos Zuppani

    2016-09-01

    The aim of this study was to analyze the adoption of the calculative and collaborative practices that dominate comparative international human resources management, according to the different profiles of the Human Resources Management (HRM) areas of private organizations operating in Brazil. The method employed was a survey, administered by means of an electronic questionnaire on HRM practices and organizational characteristics. A total of 326 responses was obtained. Initially, a cluster analysis was conducted, in which respondents were grouped into four clusters with different HRM profiles. The use of calculative and collaborative practices was then compared across the four groups through ANOVA (analysis of variance). The main findings showed that the Strategic HRM group had the highest average adoption of both calculative and collaborative practices. The Communicative HRM group showed a higher propensity for collaborative practices and the Formalized HRM group for calculative practices, although neither showed a higher average adoption than the Strategic HRM group. This suggests that it is necessary to learn how to deal with different aspects of the management of people in organizations operating in Brazil.
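
    As a small illustration of the group comparison just described, the sketch below runs a one-way ANOVA on an adoption-score variable across four HRM clusters; the cluster labels, score scale and group sizes are invented (they merely sum to the 326 respondents), since the original data are not public.

        # One-way ANOVA across four HRM profile clusters
        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(7)
        strategic     = rng.normal(4.2, 0.5, 90)   # highest practice adoption
        communicative = rng.normal(3.8, 0.5, 80)
        formalized    = rng.normal(3.6, 0.5, 80)
        residual      = rng.normal(3.1, 0.5, 76)   # fourth, less defined profile

        stat, p = f_oneway(strategic, communicative, formalized, residual)
        print(f"F = {stat:.2f}, p = {p:.4f}")      # small p: group means differ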

  18. Jump Model / Comparability Ratio Model — Joinpoint Help System 4.4.0.0

    Science.gov (United States)

    The Jump Model / Comparability Ratio Model in the Joinpoint software provides a direct estimation of trend data (e.g. cancer rates) where there is a systematic scale change, which causes a “jump” in the rates, but is assumed not to affect the underlying trend.

  19. Comparative Analysis of Thermohydraulic Margins in Embalse Power Station, CARA Vs. CANDU with Cobra IV-HW

    International Nuclear Information System (INIS)

    Daverio, H; Juanico, L

    2000-01-01

    A comparative analysis of the thermohydraulic margins of the CANDU 37-rod and CARA fuel bundles (FB) in the Embalse power station was carried out with the COBRA IV-HW code. The geometry of the bundle lying in the channel was explicitly modelled, and the results are discussed in comparison with former calculations that assumed 1/6 symmetry. The CARA design with enriched uranium (0.9%) and extended burnup maintains the current nominal thermohydraulic margins, while, compared with the enriched CANDU 37-rod FB, the CARA design substantially improves on the current margins.

  20. Cost-Effectiveness Analysis Comparing Pre-Diagnosis Autism Spectrum Disorder (ASD)-Targeted Intervention with Ontario's Autism Intervention Program

    Science.gov (United States)

    Penner, Melanie; Rayar, Meera; Bashir, Naazish; Roberts, S. Wendy; Hancock-Howard, Rebecca L.; Coyte, Peter C.

    2015-01-01

    Novel management strategies for autism spectrum disorder (ASD) propose providing interventions before diagnosis. We performed a cost-effectiveness analysis comparing the costs and dependency-free life years (DFLYs) generated by pre-diagnosis intensive Early Start Denver Model (ESDM-I); pre-diagnosis parent-delivered ESDM (ESDM-PD); and the Ontario…

  1. A comparative study of time series modeling methods for reactor noise analysis

    International Nuclear Information System (INIS)

    Kitamura, Masaharu; Shigeno, Kei; Sugiyama, Kazusuke

    1978-01-01

    Two modeling algorithms were developed to study at-power reactor noise as a multi-input, multi-output process. A class of linear, discrete-time descriptions called the autoregressive-moving average model was used as a compact mathematical expression of the process under study. One of the model estimation (modeling) algorithms is based on the theory of Kalman filtering, and the other on a conjugate gradient method. By introducing some modifications in the formulation of the problem, practically usable algorithms were realized. Through testing with several simulation models, the reliability and effectiveness of these algorithms were confirmed. By applying these algorithms to experimental data obtained from a nuclear power plant, interesting knowledge about at-power reactor noise was obtained. (author)
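
    A present-day, single-channel analogue of the ARMA fitting described above can be written with statsmodels; the simulated noise series and the (2, 1) order choice below are illustrative assumptions, and the original multi-input, multi-output algorithms are not reproduced here.

        # Fitting an ARMA(2,1) model to a simulated reactor-noise-like series
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(42)
        e = rng.standard_normal(2000)
        y = np.zeros(2000)
        for t in range(2, y.size):               # stationary ARMA(2,1) process
            y[t] = 1.2 * y[t-1] - 0.5 * y[t-2] + e[t] + 0.4 * e[t-1]

        result = ARIMA(y, order=(2, 0, 1)).fit() # AR order 2, no differencing, MA order 1
        print(result.summary().tables[1])        # estimated AR/MA coefficients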

  2. Comparative Analysis on Chemical Composition of Bentonite Clays ...

    African Journals Online (AJOL)

    2017-09-12

    Comparative analysis on the chemical composition of bentonite clays obtained from Ashaka and ... versatile material for geotechnical engineering as well as their demand for ... A PhD thesis submitted to the Chemical ...

  3. Comparative Assessment of Nonlocal Continuum Solvent Models Exhibiting Overscreening

    Directory of Open Access Journals (Sweden)

    Ren Baihua

    2017-01-01

    Nonlocal continua have been proposed to offer a more realistic model for the electrostatic response of solutions such as the electrolyte solvents prominent in biology and electrochemistry. In this work, we review three nonlocal models based on the Landau-Ginzburg framework which have been proposed but not directly compared previously, due to different expressions of the nonlocal constitutive relationship. To understand the relationships between these models and the underlying physical insights from which they are derived, we situate these models in a single, unified Landau-Ginzburg framework. One of the models offers the capacity to interpret how temperature changes affect dielectric response, and we note that the variations with temperature are qualitatively reasonable even though predictions at ambient temperatures are not quantitatively in agreement with experiment. Two of these models correctly reproduce overscreening (oscillations between positive and negative polarization charge densities), and we observe small differences between them when we simulate the potential between parallel plates held at constant potential. These computations require reformulating the two models as coupled systems of local partial differential equations (PDEs), and we use spectral methods to discretize both problems. We propose further assessments to discriminate between the models, particularly in regard to establishing boundary conditions and comparing to explicit-solvent molecular dynamics simulations.

  4. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce that short time resolution information from rough rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn-rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated thanks to different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee deterministic model (Ormsbee, 1989), Artificial Neural Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), multiplicative cascade based model (Olsson and Berndtsson, 1998), Ormsbee stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5mm) were extracted from the 21-year chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland. The models were also evaluated when applied to different rainfall classes depending on the season first and on the
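
    For concreteness, the hedged sketch below implements the two simplest schemes named in the list: the constant model spreads each hourly total uniformly over six 10mn steps, and a linear variant weights the six steps by a trend interpolated from the neighbouring hours. Both are written from the description above, not from the original Hydram code, and the weighting detail of the linear model is an assumption.

        # Constant vs. linear disaggregation of hourly rainfall into 10mn amounts
        import numpy as np

        def disaggregate_constant(hourly_mm):
            """Spread each hourly total uniformly over six 10mn intervals."""
            return np.repeat(np.asarray(hourly_mm, dtype=float) / 6.0, 6)

        def disaggregate_linear(hourly_mm):
            """Weight the six 10mn intervals by a trend between neighbouring hours."""
            h = np.asarray(hourly_mm, dtype=float)
            out = []
            for i, total in enumerate(h):
                prev_ = h[i - 1] if i > 0 else total
                next_ = h[i + 1] if i < h.size - 1 else total
                weights = np.linspace(prev_, next_, 6) + 1e-9
                out.append(total * weights / weights.sum())   # mass-conserving
            return np.concatenate(out)

        hourly = [6.0, 12.0, 3.0]
        print(disaggregate_constant(hourly))
        print(disaggregate_linear(hourly).round(2))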

  5. A receptor model for urban aerosols based on oblique factor analysis

    DEFF Research Database (Denmark)

    Keiding, Kristian; Sørensen, Morten S.; Pind, Niels

    1987-01-01

    A procedure is outlined for the construction of receptor models of urban aerosols, based on factor analysis. The advantage of the procedure is that the covariation of source impacts is included in the construction of the models. The results are compared with results obtained by other receptor-modelling procedures. It was found that procedures based on correlating sources were physically sound as well as in mutual agreement. Procedures based on non-correlating sources were found to generate physically obscure models.
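
    A present-day sketch of the oblique factor-analysis step of such a receptor model is given below, using the factor_analyzer package with a promax (oblique) rotation so that correlated source impacts are allowed; the synthetic element concentrations, the two-source structure and the tracer labels are assumptions for illustration, not the paper's data or code.

        # Oblique (promax-rotated) factor analysis of synthetic aerosol data
        import numpy as np
        from factor_analyzer import FactorAnalyzer

        rng = np.random.default_rng(3)
        n = 200
        traffic = rng.lognormal(size=n)                 # latent source impacts,
        soil = 0.4 * traffic + rng.lognormal(size=n)    # deliberately correlated

        # element concentrations as mixtures of the two sources plus noise
        X = np.column_stack([
            2.0 * traffic + 0.1 * soil,    # Pb-like tracer
            1.5 * traffic + 0.3 * soil,    # Br-like tracer
            0.2 * traffic + 2.5 * soil,    # Si-like tracer
            0.1 * traffic + 1.8 * soil,    # Al-like tracer
        ]) + 0.1 * rng.standard_normal((n, 4))

        fa = FactorAnalyzer(n_factors=2, rotation="promax")  # oblique rotation keeps
        fa.fit(X)                                            # factor correlations
        print(fa.loadings_.round(2))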

  6. Rotordynamic analysis for stepped-labyrinth gas seals using Moody's friction-factor model

    International Nuclear Information System (INIS)

    Ha, Tae Woong

    2001-01-01

    The governing equations are derived for the analysis of a stepped labyrinth gas seal generally used in high-performance compressors, gas turbines, and steam turbines. Bulk flow is assumed for a single cavity control volume set up in a stepped labyrinth cavity, and the flow is assumed to be completely turbulent in the circumferential direction. Moody's wall-friction-factor model is used for the calculation of wall shear stresses in the single cavity control volume. For the reaction force developed by the stepped labyrinth gas seal, linearized zeroth-order and first-order perturbation equations are developed for small motion about a centered position. Integration of the resultant first-order pressure distribution along and around the seal defines the rotordynamic coefficients of the stepped labyrinth gas seal. The resulting leakage and rotordynamic characteristics of the stepped labyrinth gas seal are presented and compared with Scharrer's theoretical analysis using Blasius' wall-friction-factor model. The present analysis shows good qualitative agreement of the leakage characteristics with Scharrer's analysis, but underpredicts by about 20%. For the rotordynamic coefficients, the present analysis generally yields smaller predicted values compared with Scharrer's analysis.
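
    Moody's explicit approximation to the friction factor is simple enough to state directly; the sketch below implements the commonly quoted closed form. Whether the paper evaluates exactly this expression inside its bulk-flow iteration is an assumption, so treat the snippet as illustrative.

        # Moody (1947) explicit approximation to the Darcy friction factor
        def moody_friction_factor(reynolds, relative_roughness):
            """f = 0.0055 * [1 + (2e4*(e/D) + 1e6/Re)^(1/3)]"""
            return 0.0055 * (1.0 + (2.0e4 * relative_roughness
                                    + 1.0e6 / reynolds) ** (1.0 / 3.0))

        # e.g. Re = 5e4 and e/D = 1e-3 give f ≈ 0.024
        print(moody_friction_factor(5.0e4, 1.0e-3))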

  7. Comparing Structural Brain Connectivity by the Infinite Relational Model

    DEFF Research Database (Denmark)

    Ambrosen, Karen Marie Sandø; Herlau, Tue; Dyrby, Tim

    2013-01-01

    The growing focus in neuroimaging on analyzing brain connectivity calls for powerful and reliable statistical modeling tools. We examine the Infinite Relational Model (IRM) as a tool to identify and compare structure in brain connectivity graphs by contrasting its performance on graphs from...

  8. Mind and consciousness in yoga - Vedanta: A comparative analysis with western psychological concepts.

    Science.gov (United States)

    Prabhu, H R Aravinda; Bhat, P S

    2013-01-01

    Study of mind and consciousness through established scientific methods is often difficult due to the observer-observed dichotomy. The Cartesian approach of dualism, considering mind and matter as two diverse and unconnected entities, has been questioned by the oriental schools of Yoga and Vedanta as well as by the recent quantum theories of modern physics. Freudian and neo-Freudian schools based on the Cartesian model have been criticized by the humanistic schools, which come much closer to the Vedantic approach of unitariness. A comparative analysis of the two approaches is discussed.

  9. MATHEMATICAL RISK ANALYSIS: VIA NICHOLAS RISK MODEL AND BAYESIAN ANALYSIS

    Directory of Open Access Journals (Sweden)

    Anass BAYAGA

    2010-07-01

    The objective of this second part of a two-phased study was to explore the predictive power of the quantitative risk analysis (QRA) method and process within a Higher Education Institution (HEI). The method and process investigated impact analysis via the Nicholas risk model and Bayesian analysis, with a sample of one hundred (100) risk analysts in a historically black South African university in the greater Eastern Cape Province. The first findings supported and confirmed previous literature (King III report, 2009; Nicholas and Steyn, 2008; Stoney, 2007; COSA, 2004) that there is a direct relationship between a risk factor, its likelihood and its impact, ceteris paribus. The second finding, in relation to controlling either the likelihood or the impact of occurrence of risk (Nicholas risk model), was that to have a better risk reward it is important to control the likelihood of occurrence of risks rather than their impact, so as to have a direct effect on the entire university. On the Bayesian analysis, the third finding was that the impact of risk should be predicted along three aspects: the human impact (decisions made), the property impact (student and infrastructure based) and the business impact. Lastly, although business cycles vary considerably depending on the industry and/or the institution, the study revealed that most impacts in an HEI (university) fall within the period of one academic year. The recommendation was that the application of quantitative risk analysis should be related to the current legislative framework that affects HEIs.
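
    To make the risk relation discussed above concrete, the sketch below combines the likelihood-times-impact view of a risk factor with a simple Bayesian (Beta-binomial) update of the likelihood; every number, including the prior, is invented for illustration and is not taken from the study.

        # Risk exposure and a Beta-binomial update of the likelihood of occurrence
        likelihood = 0.30          # prior probability that a risk materialises
        impact = 500_000           # monetary impact if it does
        print("expected risk exposure:", likelihood * impact)

        # update after observing 2 incidents in 12 comparable academic periods
        a, b = 3, 7                # Beta(3, 7) prior (assumed)
        incidents, periods = 2, 12
        posterior_mean = (a + incidents) / (a + b + periods)
        print("updated likelihood:", round(posterior_mean, 3))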

  10. Critical Factors Analysis for Offshore Software Development Success by Structural Equation Modeling

    Science.gov (United States)

    Wada, Yoshihisa; Tsuji, Hiroshi

    In order to analyze the success/failure factors in offshore software development services by structural equation modeling, this paper proposes to follow two approaches together: domain-knowledge-based heuristic analysis and factor-analysis-based rational analysis. The former works for generating and verifying hypotheses to find factors and causalities. The latter works for verifying factors introduced by theory to build the model without heuristics. Applying the proposed combined approach to the questionnaire responses from skilled project managers, this paper found that vendor properties have a stronger causal link to success than software properties and project properties.

  11. Comparative analysis of magnetic resonance in the polaron pair recombination and the triplet exciton-polaron quenching models

    Science.gov (United States)

    Mkhitaryan, V. V.; Danilović, D.; Hippola, C.; Raikh, M. E.; Shinar, J.

    2018-01-01

    We present a comparative theoretical study of magnetic resonance within the polaron pair recombination (PPR) and the triplet exciton-polaron quenching (TPQ) models. Both models have been invoked to interpret the photoluminescence detected magnetic resonance (PLDMR) results in π-conjugated materials and devices. We show that resonance line shapes calculated within the two models differ dramatically in several regards. First, in the PPR model, the line shape exhibits unusual behavior upon increasing the microwave power: it evolves from fully positive at weak power to fully negative at strong power. In contrast, in the TPQ model, the PLDMR is completely positive, showing a monotonic saturation. Second, the two models predict different dependencies of the resonance signal on the photoexcitation power, P_L. At low P_L, the resonance amplitude ΔI/I is ∝ P_L within the PPR model, while it is ∝ P_L^2, crossing over to P_L^3, within the TPQ model. On the physical level, the differences stem from different underlying spin dynamics. Most prominently, a negative resonance within the PPR model has its origin in the microwave-induced spin-Dicke effect, leading to the resonant quenching of photoluminescence. The spin-Dicke effect results from the spin-selective recombination, leading to a highly correlated precession of the on-resonance pair partners under the strong microwave power. This effect is not relevant for the TPQ mechanism, where the strong zero-field splitting renders the majority of triplets off resonance. On the technical level, the analytical evaluation of the line shapes for the two models is enabled by the fact that these shapes can be expressed via the eigenvalues of a complex Hamiltonian. This bypasses the necessity of solving the much larger complex linear system of the stochastic Liouville equations. Our findings pave the way towards a reliable discrimination between the two mechanisms via cw PLDMR.

  12. Application of mass-spring model in seismic analysis of liquid storage tank

    International Nuclear Information System (INIS)

    Liu Jiayi; Bai Xinran; Li Xiaoxuan

    2013-01-01

There are many tanks for storing liquid in nuclear power plants. When seismic analysis is performed, swaying of the liquid may change the mechanical parameters of those tanks, such as the center of mass and the moment of inertia, so the load due to swaying of the liquid cannot be neglected. The mass-spring model is a simplified model for calculating the dynamic pressure of liquid in a tank under earthquake loading; it is derived from the theory of Housner and given in the specification Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4-98 for short hereinafter). According to the theory of Housner and ASCE 4-98, a mass-spring 3-D FEM model of a storage tank and the liquid in it was established, with which the force of the stored liquid acting on a liquid storage tank in a nuclear power plant under horizontal seismic load was calculated. The calculated frequency of liquid swaying and the effect of liquid convection on the storage tank were compared with those calculated by the simplified formula. It is shown that the results of the 3-D FEM model are reasonable and reliable. Furthermore, applying the mass-spring model to a 3-D FEM model for seismic analysis is more direct and convenient than the description in ASCE 4-98, since the displacement and stress distributions of the plate-shell elements or the 3-D solid finite elements can be obtained directly from the seismic input model. (authors)
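
For orientation, a minimal sketch of the quantities such a mass-spring idealization lumps together is given below, using the standard first-sloshing-mode results of linear potential-flow theory for an upright cylindrical tank. The tank dimensions are assumed example values, and design work should use the ASCE 4-98 coefficients themselves.

```python
import numpy as np

# First convective (sloshing) mode of liquid in an upright cylindrical
# tank, from the linear potential-flow theory underlying Housner-type
# mass-spring models (illustrative only; ASCE 4-98 tabulates its own
# design coefficients).
g = 9.81          # m/s^2
R = 5.0           # tank radius, m (assumed example value)
H = 8.0           # liquid depth, m (assumed example value)
rho = 1000.0      # liquid density, kg/m^3

lam1 = 1.8412     # first root of J1'(x) = 0
m_total = rho * np.pi * R**2 * H

# Sloshing natural frequency and convective mass of the first mode
omega_c = np.sqrt(g * lam1 / R * np.tanh(lam1 * H / R))
m_c = m_total * 2 * np.tanh(lam1 * H / R) / (lam1 * (H / R) * (lam1**2 - 1))
m_i = m_total - m_c        # crude impulsive remainder in a two-mass lumping

k_c = m_c * omega_c**2     # spring constant attaching m_c to the tank wall
print(f"f_sloshing = {omega_c / (2 * np.pi):.3f} Hz, m_c/m = {m_c / m_total:.2f}")
```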

  13. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening approach with relatively cheap computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the models' outputs and to gain insights into predator–prey systems. The structures and sensitivity analysis results of the Sumatran tiger model – the Panthera Population Persistence (PPP) – and of the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model, as well as the attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and of screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in the tiger and backswimmer models. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► The Morris method is applicable for the sensitivity analysis even of complex IBMs.
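
A minimal sketch of a Morris screening of this kind, assuming the SALib package and placeholder parameter names, bounds and model in place of the actual PPP/NFM implementations:

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris

# Morris screening sketch for an individual-based model wrapped as a
# function of its parameters (names and bounds are placeholders).
problem = {
    "num_vars": 3,
    "names": ["prey_growth_rate", "hunting_radius", "encounter_distance"],
    "bounds": [[0.1, 1.0], [1.0, 10.0], [0.5, 5.0]],
}

def run_ibm(theta):
    # Stand-in for one IBM run returning e.g. the number of prey killed.
    r, radius, enc = theta
    return r * radius**2 + 0.1 * enc

X = morris_sample(problem, N=100, num_levels=4)
Y = np.array([run_ibm(x) for x in X])
Si = morris.analyze(problem, X, Y, num_levels=4)

# mu_star: overall influence; sigma: non-linearity and interactions
for name, mu, sg in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name}: mu* = {mu:.3f}, sigma = {sg:.3f}")
```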

  14. Lithium-ion battery models: a comparative study and a model-based powerline communication

    Directory of Open Access Journals (Sweden)

    F. Saidani

    2017-09-01

Full Text Available In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with a detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
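
As a concrete example of the abstract-model class recommended for battery management, a first-order Thevenin equivalent circuit (one ohmic resistance plus one RC pair) can be simulated in a few lines. The parameter values below are illustrative assumptions, not the fitted values of the 20 Ah cell in the paper.

```python
import numpy as np

# Minimal first-order Thevenin equivalent circuit model (R0 + one RC
# pair), the kind of abstract model recommended above for BMS use.
Q = 20.0 * 3600        # capacity, As (20 Ah)
R0 = 2e-3              # ohmic resistance, ohm (illustrative)
R1, C1 = 1e-3, 20e3    # polarization resistance, ohm / capacitance, F

def ocv(soc):
    # Placeholder open-circuit-voltage curve (illustrative)
    return 3.0 + 1.2 * soc

def simulate(i_load, dt=1.0, soc0=0.9):
    soc, v_rc, out = soc0, 0.0, []
    for i in i_load:                          # discharge current > 0
        soc -= i * dt / Q                     # coulomb counting
        v_rc += dt * (i / C1 - v_rc / (R1 * C1))  # RC polarization state
        out.append(ocv(soc) - R0 * i - v_rc)  # terminal voltage
    return np.array(out)

v = simulate(np.full(600, 20.0))  # 10 minutes at 1C
print(v[:5], v[-1])
```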

  15. The Constant Comparative Analysis Method Outside of Grounded Theory

    Science.gov (United States)

    Fram, Sheila M.

    2013-01-01

    This commentary addresses the gap in the literature regarding discussion of the legitimate use of Constant Comparative Analysis Method (CCA) outside of Grounded Theory. The purpose is to show the strength of using CCA to maintain the emic perspective and how theoretical frameworks can maintain the etic perspective throughout the analysis. My…

  16. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

    Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600
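
One plausible form of such a model is the Hahnfeldt-type ODE system, in which tumor volume grows toward a dynamic, angiogenesis-driven carrying capacity. The sketch below uses coefficients of the order reported in that literature purely for illustration, not the nominal values fitted in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hahnfeldt-type tumor-vasculature system: tumor volume V grows toward
# a dynamic carrying capacity K supplied by angiogenesis. Coefficients
# are illustrative assumptions, not this paper's fitted values.
lam, b, d = 0.192, 5.85, 0.00873   # per day; d in 1/(day mm^2)
e_drug, conc = 0.66, 0.0           # anti-angiogenic kill term, drug conc.

def rhs(t, y):
    V, K = y
    dV = -lam * V * np.log(V / K)                       # Gompertz-like growth
    dK = b * V - d * K * V ** (2.0 / 3.0) - e_drug * conc * K
    return [dV, dK]

sol = solve_ivp(rhs, (0, 100), [200.0, 625.0], dense_output=True)
print(sol.y[:, -1])   # (V, K) after 100 days, mm^3
```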

  17. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    Science.gov (United States)

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., whether an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients).
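
The link between a model's mixing distribution and its site-level predictions can be illustrated by simulation: draw Poisson mixtures whose gamma, lognormal and inverse-gamma mixing distributions share the same mean and variance, then compare their upper tails. The sketch below uses made-up mean and dispersion values purely for illustration.

```python
import numpy as np
from scipy import stats

# Compare upper tails of Poisson-gamma, Poisson-lognormal and
# Poisson-inverse-gamma mixtures with matched mixing mean (1) and
# variance (1/phi) - a quick way to see why PIGam > PLN > PG
# predictions emerge at over-dispersed sites.
rng = np.random.default_rng(1)
mu, phi, n = 2.0, 0.5, 200_000     # site mean, inverse-dispersion, draws

theta_pg = rng.gamma(phi, 1.0 / phi, n)            # mean 1, var 1/phi
sig2 = np.log(1.0 + 1.0 / phi)
theta_pln = rng.lognormal(-sig2 / 2.0, np.sqrt(sig2), n)
a = 2.0 + phi                                      # invgamma: var 1/phi needs a = phi + 2
theta_pig = stats.invgamma.rvs(a, scale=a - 1.0, size=n, random_state=rng)

for name, th in [("PG", theta_pg), ("PLN", theta_pln), ("PIGam", theta_pig)]:
    y = rng.poisson(mu * th)
    print(name, "P(Y >= 10) =", (y >= 10).mean())
```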

  18. A Comparative Analysis of Social Media Marketing by Transportation Network Companies in the Sharing Economy

    OpenAIRE

    Heymans, Alice

    2017-01-01

    The sharing economy is a new business model rapidly expanding. In transportation, many people use innovative services proposed by ride-hailing mobile applications. These technological platforms, operated by networking companies, rely extensively on social media to promote their services, and reach new customers (riders) and providers (drivers). This dissertation focuses on e-marketing communication. It makes a comparative analysis of the information published on several social media (Facebook...

  19. Comparative analysis of direct and indirect property investment ...

    African Journals Online (AJOL)

    Comparative analysis of direct and indirect property investment returns in Abuja. ... in property shares is more risky than commercial property due to the risk ... of the stock market, it was discovered that there is a strong positive relationship ...

  20. Comparative study between 2 methods of mounting models in semiadjustable articulator for orthognathic surgery.

    Science.gov (United States)

    Mayrink, Gabriela; Sawazaki, Renato; Asprino, Luciana; de Moraes, Márcio; Fernandes Moreira, Roger William

    2011-11-01

Compare the traditional method of mounting dental casts on a semiadjustable articulator and the new method suggested by Wolford and Galiano,1 analyzing the inclination of the maxillary occlusal plane in relation to the FHP. Two casts of 10 patients were obtained. One of them was used for mounting of models on a traditional articulator, using a face-bow transfer system, and the other one was used for mounting models on the Occlusal Plane Indicator platform (OPI), using the SAM articulator. After that, an analysis of the accuracy of the mounted models was performed. The angle made by the occlusal plane and the FHP on the cephalogram should equal the angle between the occlusal plane and the upper member of the articulator. The measures were tabulated in Microsoft Excel(®) and compared using a 1-way analysis of variance. Statistically, the results did not reveal significant differences among the measures. The OPI and the face bow present similar results, but more studies are needed to verify the accuracy relative to the maxillary cant in the OPI or to develop new techniques able to overcome the disadvantages of each technique. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
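
For reference, the statistical comparison described here is a plain one-way analysis of variance; a minimal sketch with invented angle-discrepancy data:

```python
from scipy import stats

# Sketch of the comparison reported above: angle discrepancies (degrees)
# between cephalogram and articulator for the two mounting methods.
# The numbers are made up for illustration.
facebow = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4, 0.7, 1.2]
opi = [1.0, 1.1, 1.3, 0.9, 1.2, 1.0, 1.4, 0.8, 1.1, 1.0]

f, p = stats.f_oneway(facebow, opi)   # 1-way ANOVA with two groups
print(f"F = {f:.3f}, p = {p:.3f}")    # p > 0.05: no significant difference
```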

  1. Survival analysis models and applications

    CERN Document Server

    Liu, Xian

    2012-01-01

Survival analysis concerns sequential occurrences of events governed by probabilistic laws. Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading onto some of the most advanced topics in survival analysis. Assumes only a minimal knowledge of SAS whilst enablin

  2. Comparative analysis of chicken chromosome 28 provides new clues to the evolutionary fragility of gene-rich vertebrate regions

    NARCIS (Netherlands)

    Gordon, L.; Yang, S.; Tran-Gyamfi, M.; Baggott, D.; Christensen, M.; Hamilton, A.; Crooijmans, R.P.M.A.; Groenen, M.A.M.; Lucas, S.; Ovcharenko, I.; Stubbs, L.

    2007-01-01

    The chicken genome draft sequence has provided a valuable resource for studies of an important agricultural and experimental model species and an important data set for comparative analysis. However, some of the most gene-rich segments are missing from chicken genome draft assemblies, limiting the

  3. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction

    International Nuclear Information System (INIS)

    Wennberg, Berit M.; Baumann, Pia; Gagliardi, Giovanna

    2011-01-01

Background. In SBRT of lung tumours no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this will improve knowledge of this relationship. Material and methods. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters used were: α/β = 3 Gy, D_0 = 1.0 Gy, n = 10, α = 0.206 Gy⁻¹ and d_T = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This might give an insight into the question of whether 'high doses to small volumes' or 'low doses to large volumes' are most important for lung toxicity. Results and Discussion. NTCP analysis with the LKB model using parameters m = 0.4, D_50 = 30 Gy gave a volume dependence parameter n = 0.87 with LQ correction and n = 0.71 with USC correction. Using parameters m = 0.3, D_50 = 20 Gy gave n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when using the USC model. Comparing NTCP modelling of SBRT data with data from breast cancer, lung cancer and whole lung irradiation implies that the response of the lung is treatment specific. More data are, however, needed in order to have more reliable modelling.
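
A minimal sketch of the LKB calculation described here, assuming a simple binned DVH and using the standard gEUD and probit formulas with an EQD2-style LQ correction (the USC correction would substitute its own dose conversion below d_T):

```python
import numpy as np
from scipy.stats import norm

# LKB NTCP from a binned DVH after LQ fractionation correction
# (EQD2 with alpha/beta = 3 Gy). DVH values are invented examples.
def eqd2(D, n_frac, ab=3.0):
    d = D / n_frac                       # dose per fraction in each bin
    return D * (d + ab) / (2.0 + ab)     # standard LQ (EQD2) conversion

def ntcp_lkb(doses, volumes, n, m, td50):
    v = volumes / volumes.sum()
    geud = (v * doses ** (1.0 / n)).sum() ** n      # generalized EUD
    return norm.cdf((geud - td50) / (m * td50))     # probit response

doses = np.array([2.0, 10.0, 20.0, 45.0])   # Gy, physical dose bins
vols = np.array([0.5, 0.3, 0.15, 0.05])     # relative volumes
print(ntcp_lkb(eqd2(doses, n_frac=3), vols, n=0.87, m=0.4, td50=30.0))
```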

  4. Comparing Realistic Subthalamic Nucleus Neuron Models

    Science.gov (United States)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of different firing patterns: silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABA_A input conductance above the threshold of 3.75 mS/cm². On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that the Mahalanobis distance, the Victor-Purpura metric, and the interspike interval distribution are sensitive to different firing regimes, whereas mutual information seems undiscriminative for these functional changes.
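
Of the four measures, the Victor-Purpura metric is the easiest to state compactly: it is the minimal cost of transforming one spike train into the other, with unit cost per spike insertion or deletion and cost q per unit time shift. A standard dynamic-programming sketch:

```python
import numpy as np

# Victor-Purpura spike train metric: cost 1 per insertion/deletion,
# cost q per unit time shift of a spike.
def victor_purpura(t1, t2, q):
    n, m = len(t1), len(t2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)      # delete all spikes of train 1
    G[0, :] = np.arange(m + 1)      # insert all spikes of train 2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,
                          G[i, j - 1] + 1,
                          G[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n, m]

a = [0.10, 0.35, 0.60, 0.92]   # spike times, s (example data)
b = [0.12, 0.40, 0.95]
print(victor_purpura(a, b, q=10.0))
```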

  5. Development of local TDC model in core thermal hydraulic analysis

    International Nuclear Information System (INIS)

    Kwon, H.S.; Park, J.R.; Hwang, D.H.; Lee, S.K.

    2004-01-01

The local TDC model, consisting of a natural mixing part and a forced mixing part, was developed to obtain more realistic local fluid properties in core subchannel analysis. To evaluate the performance of the local TDC model, the CHF prediction capability was tested with various CHF correlations and with local fluid properties at the CHF location based on the local TDC model. The results show that the standard deviation of the measured-to-predicted CHF ratio (M/P) based on the local TDC model can be reduced by about 7% compared to that based on the global TDC model when the CHF correlation has no term to account for distance from the spacer grid. (author)

  6. Comparing the costs of three prostate cancer follow-up strategies: a cost minimisation analysis.

    Science.gov (United States)

    Pearce, Alison M; Ryan, Fay; Drummond, Frances J; Thomas, Audrey Alforque; Timmons, Aileen; Sharp, Linda

    2016-02-01

Prostate cancer follow-up is traditionally provided by clinicians in a hospital setting. Growing numbers of prostate cancer survivors mean that this model of care may not be economically sustainable, and a number of alternative approaches have been suggested. The aim of this study was to develop an economic model to compare the costs of three alternative strategies for prostate cancer follow-up in Ireland - the European Association of Urology (EAU) guidelines, the National Institute of Health Care Excellence (NICE) guidelines and current practice. A cost minimisation analysis was performed using a Markov model with three arms (EAU guidelines, NICE guidelines and current practice) comparing follow-up for men with prostate cancer treated with curative intent. The model took a health care payer's perspective over a 10-year time horizon. Current practice was the least cost efficient arm of the model, the NICE guidelines were most cost efficient (74 % of current practice costs) and the EAU guidelines intermediate (92 % of current practice costs). For the 2562 new cases of prostate cancer diagnosed in 2009, the Irish health care system could have saved €760,000 over a 10-year period if the NICE guidelines were adopted. This is the first study investigating costs of prostate cancer follow-up in the Irish setting. While economic models are designed as a simplification of complex real-world situations, these results suggest potential for significant savings within the Irish health care system associated with implementation of alternative models of prostate cancer follow-up care.
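
The structure of such a cost-minimisation Markov model is simple to sketch. The three-state cohort model below uses invented transition probabilities, per-strategy follow-up costs and a 4% discount rate purely to illustrate the mechanics, not the study's Irish inputs.

```python
import numpy as np

# Minimal 3-state Markov cohort sketch (disease-free, recurrence, dead)
# of a cost-minimisation comparison. All numbers are illustrative.
P = np.array([[0.93, 0.05, 0.02],     # annual transition matrix
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
annual_cost = {"current": 420.0, "EAU": 390.0, "NICE": 310.0}  # per person-year
disc = 0.04                           # annual discount rate

def total_cost(c_followup, years=10):
    state = np.array([1.0, 0.0, 0.0])  # whole cohort disease-free at t = 0
    total = 0.0
    for t in range(years):
        # only the disease-free state receives routine follow-up here
        total += state[0] * c_followup / (1 + disc) ** t
        state = state @ P
    return total

base = total_cost(annual_cost["current"])
for s, c in annual_cost.items():
    print(f"{s}: {total_cost(c) / base:.0%} of current-practice cost")
```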

  7. Fluid Petri Nets and hybrid model-checking: a comparative case study

    International Nuclear Information System (INIS)

    Gribaudo, M.; Horvath, A.; Bobbio, A.; Tronci, E.; Ciancamerla, E.; Minichino, M.

    2003-01-01

The modeling and analysis of hybrid systems is a recent and challenging research area which is currently dominated by two main lines: a functional analysis based on the description of the system in terms of discrete state (hybrid) automata (whose goal is to ascertain conformity and reachability properties), and a stochastic analysis (whose aim is to provide performance and dependability measures). This paper investigates a unifying view between formal methods and stochastic methods by proposing an analysis methodology of hybrid systems based on Fluid Petri Nets (FPNs). FPNs can be analyzed directly using appropriate tools. Our paper shows that the same FPN model can be fed to different functional analyzers for model checking. In order to extensively explore the capability of the technique, we have converted the original FPN into the languages of discrete, hybrid and stochastic model checkers. In this way, a first comparison among the modeling power of well known tools can be carried out. Our approach is illustrated by means of a 'real world' hybrid system: the temperature control system of a co-generative plant

  8. A new method for comparing rankings through complex networks: Model and analysis of competitiveness of major European soccer leagues

    Science.gov (United States)

    Criado, Regino; García, Esther; Pedroche, Francisco; Romance, Miguel

    2013-12-01

In this paper, we show a new technique to analyze families of rankings. In particular, we focus on sports rankings and, more precisely, on soccer leagues. We consider that two teams compete when they change their relative positions in consecutive rankings. This allows us to define a graph by linking teams that compete. We show how to use some structural properties of this competitivity graph to measure to what extent the teams in a league compete. These structural properties are the mean degree, the mean strength, and the clustering coefficient. We give a generalization of Kendall's correlation coefficient to more than two rankings. We also show how to make a dynamic analysis of a league and how to compare different leagues. We apply this technique to analyze the four major European soccer leagues: Bundesliga, Italian Lega, Spanish Liga, and Premier League. We compare our results with the classical analysis of sport ranking based on measures of competitive balance.
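
A toy version of the competitivity-graph construction, assuming networkx and scipy and a made-up three-round league, might look as follows; teams are linked whenever their relative order swaps between consecutive rankings, after which the mean degree and clustering coefficient are read off the graph.

```python
from itertools import combinations

import networkx as nx
from scipy.stats import kendalltau

# Toy 3-round league (made-up rankings, best to worst).
rounds = [["A", "B", "C", "D"],
          ["B", "A", "C", "D"],
          ["B", "C", "A", "D"]]

# Link two teams whenever their relative order swaps between
# consecutive rankings (the weighted variant would count repeats
# as edge strength; a simple Graph ignores duplicates).
G = nx.Graph()
G.add_nodes_from(rounds[0])
for r1, r2 in zip(rounds, rounds[1:]):
    pos1 = {t: i for i, t in enumerate(r1)}
    pos2 = {t: i for i, t in enumerate(r2)}
    for u, v in combinations(r1, 2):
        if (pos1[u] - pos1[v]) * (pos2[u] - pos2[v]) < 0:   # order swapped
            G.add_edge(u, v)

print("mean degree:", 2 * G.number_of_edges() / G.number_of_nodes())
print("clustering:", nx.average_clustering(G))

# Kendall correlation between two consecutive rankings
teams = rounds[0]
r0 = {t: i for i, t in enumerate(rounds[0])}
r1 = {t: i for i, t in enumerate(rounds[1])}
tau, _ = kendalltau([r0[t] for t in teams], [r1[t] for t in teams])
print("Kendall tau, rounds 1-2:", tau)
```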

  9. Analyzing Multiple-Choice Questions by Model Analysis and Item Response Curves

    Science.gov (United States)

    Wattanakasiwich, P.; Ananta, S.

    2010-07-01

In physics education research, the main goal is to improve physics teaching so that most students understand physics conceptually and are able to apply concepts in solving problems. Therefore many multiple-choice instruments have been developed to probe students' conceptual understanding in various topics. Two techniques, model analysis and item response curves, were used to analyze students' responses from the Force and Motion Conceptual Evaluation (FMCE). For this study, FMCE data from more than 1000 students at Chiang Mai University were collected over the past three years. With model analysis, we can obtain students' alternative knowledge and the probabilities that students use such knowledge in a range of equivalent contexts. The model analysis consists of two algorithms: the concentration factor and model estimation. This paper only presents results from using the model estimation algorithm to obtain a model plot. The plot helps to identify whether a class model state is in the misconception region or not. An item response curve (IRC), derived from item response theory, is a plot of the percentage of students selecting a particular choice versus their total score. Pros and cons of both techniques are compared and discussed.
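
An item response curve of the kind described is straightforward to compute: bin students by total score and plot, for each answer choice, the fraction of students in each bin selecting it. The sketch below uses synthetic responses in place of the FMCE data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Item response curve sketch: fraction of students selecting each
# choice of one item versus total score (synthetic data standing in
# for the FMCE responses).
rng = np.random.default_rng(0)
n = 1000
total = rng.integers(0, 34, n)                 # total scores, 0..33

# Synthetic choice behaviour: the correct answer (choice 0) is more
# likely at high score, a common distractor (choice 1) at low score.
p_correct = 1 / (1 + np.exp(-(total - 17) / 4))
choice = np.where(rng.random(n) < p_correct, 0,
                  np.where(rng.random(n) < 0.7, 1, 2))

scores = np.arange(34)
present = np.array([np.any(total == s) for s in scores])
for c in (0, 1, 2):
    frac = [np.mean(choice[total == s] == c) for s in scores[present]]
    plt.plot(scores[present], frac, label=f"choice {c}")
plt.xlabel("total score")
plt.ylabel("fraction selecting")
plt.legend()
plt.show()
```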

  10. Transcriptomics and comparative analysis of three antarctic notothenioid fishes.

    Directory of Open Access Journals (Sweden)

    Seung Chul Shin

    Full Text Available For the past 10 to 13 million years, Antarctic notothenioid fish have undergone extraordinary periods of evolution and have adapted to a cold and highly oxygenated Antarctic marine environment. While these species are considered an attractive model with which to study physiology and evolutionary adaptation, they are poorly characterized at the molecular level, and sequence information is lacking. The transcriptomes of the Antarctic fishes Notothenia coriiceps, Chaenocephalus aceratus, and Pleuragramma antarcticum were obtained by 454 FLX Titanium sequencing of a normalized cDNA library. More than 1,900,000 reads were assembled in a total of 71,539 contigs. Overall, 40% of the contigs were annotated based on similarity to known protein or nucleotide sequences, and more than 50% of the predicted transcripts were validated as full-length or putative full-length cDNAs. These three Antarctic fishes shared 663 genes expressed in the brain and 1,557 genes expressed in the liver. In addition, these cold-adapted fish expressed more Ub-conjugated proteins compared to temperate fish; Ub-conjugated proteins are involved in maintaining proteins in their native state in the cold and thermally stable Antarctic environments. Our transcriptome analysis of Antarctic notothenioid fish provides an archive for future studies in molecular mechanisms of fundamental genetic questions, and can be used in evolution studies comparing other fish.

  11. Religious Education in Russia: A Comparative and Critical Analysis

    Science.gov (United States)

    Blinkova, Alexandra; Vermeer, Paul

    2018-01-01

    RE in Russia has been recently introduced as a compulsory regular school subject during the last year of elementary school. The present study offers a critical analysis of the current practice of Russian RE by comparing it with RE in Sweden, Denmark and Britain. This analysis shows that Russian RE is ambivalent. Although it is based on a…

  12. Multiscale Signal Analysis and Modeling

    CERN Document Server

    Zayed, Ahmed

    2013-01-01

Multiscale Signal Analysis and Modeling presents recent advances in multiscale analysis and modeling using wavelets and other systems. This book also presents applications in digital signal processing using sampling theory and techniques from various function spaces, filter design, feature extraction and classification, signal and image representation/transmission, coding, nonparametric statistical signal processing, and statistical learning theory. This book also: discusses recently developed signal modeling techniques, such as the multiscale method for complex time series modeling, multiscale positive density estimations, Bayesian shrinkage strategies, and algorithms for data-adaptive statistics; introduces new sampling algorithms for multidimensional signal processing; provides comprehensive coverage of wavelets with presentations on waveform design and modeling, wavelet analysis of ECG signals and wavelet filters; and reviews feature extraction and classification algorithms for multiscale signal and image proce...

  13. Greenfields and acquisitions: a comparative analysis

    Directory of Open Access Journals (Sweden)

    Nicolae MARINESCU

    2016-07-01

    Full Text Available This paper compares greenfields and acquisitions as foreign direct investment (FDI alternatives used by transnational corporations (TNCs. First, the determinants leading to the choice of companies between the two modes of entry into a foreign market are laid out. Then, specific features of each alternative are highlighted, by contrasting the advantages and disadvantages of both types of FDI. Based on this analysis, some conclusions are drawn in the end concerning the most important factors that influence the decision of a company whether to choose a greenfield investment or an acquisition.

  14. Computer Models for IRIS Control System Transient Analysis

    International Nuclear Information System (INIS)

    Gary D Storrick; Bojan Petrovic; Luca Oriani

    2007-01-01

    This report presents results of the Westinghouse work performed under Task 3 of this Financial Assistance Award and it satisfies a Level 2 Milestone for the project. Task 3 of the collaborative effort between ORNL, Brazil and Westinghouse for the International Nuclear Energy Research Initiative entitled 'Development of Advanced Instrumentation and Control for an Integrated Primary System Reactor' focuses on developing computer models for transient analysis. This report summarizes the work performed under Task 3 on developing control system models. The present state of the IRIS plant design--such as the lack of a detailed secondary system or I and C system designs--makes finalizing models impossible at this time. However, this did not prevent making considerable progress. Westinghouse has several working models in use to further the IRIS design. We expect to continue modifying the models to incorporate the latest design information until the final IRIS unit becomes operational. Section 1.2 outlines the scope of this report. Section 2 describes the approaches we are using for non-safety transient models. It describes the need for non-safety transient analysis and the model characteristics needed to support those analyses. Section 3 presents the RELAP5 model. This is the highest-fidelity model used for benchmark evaluations. However, it is prohibitively slow for routine evaluations and additional lower-fidelity models have been developed. Section 4 discusses the current Matlab/Simulink model. This is a low-fidelity, high-speed model used to quickly evaluate and compare competing control and protection concepts. Section 5 describes the Modelica models developed by POLIMI and Westinghouse. The object-oriented Modelica language provides convenient mechanisms for developing models at several levels of detail. We have used this to develop a high-fidelity model for detailed analyses and a faster-running simplified model to help speed the I and C development process. Section

  15. Current, voltage and temperature distribution modeling of light-emitting diodes based on electrical and thermal circuit analysis

    International Nuclear Information System (INIS)

    Yun, J; Shim, J-I; Shin, D-S

    2013-01-01

    We demonstrate a modeling method based on the three-dimensional electrical and thermal circuit analysis to extract current, voltage and temperature distributions of light-emitting diodes (LEDs). In our model, the electrical circuit analysis is performed first to extract the current and voltage distributions in the LED. Utilizing the result obtained from the electrical circuit analysis as distributed heat sources, the thermal circuit is set up by using the duality between Fourier's law and Ohm's law. From the analysis of the thermal circuit, the temperature distribution at each epitaxial film is successfully obtained. Comparisons of experimental and simulation results are made by employing an InGaN/GaN multiple-quantum-well blue LED. Validity of the electrical circuit analysis is confirmed by comparing the light distribution at the surface. Since the temperature distribution at each epitaxial film cannot be obtained experimentally, the apparent temperature distribution is compared at the surface of the LED chip. Also, experimentally obtained average junction temperature is compared with the value calculated from the modeling, yielding a very good agreement. The analysis method based on the circuit modeling has an advantage of taking distributed heat sources as inputs, which is essential for high-power devices with significant self-heating. (paper)
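
The duality invoked here means the thermal problem reduces to the same nodal equations as a resistive circuit, G·T = q, with thermal conductances in G and the electrically computed dissipated powers in q. Below is a one-dimensional, four-node sketch with invented values standing in for the LED layer stack.

```python
import numpy as np

# Duality sketch: Fourier's law maps onto Ohm's law, so temperatures
# solve the same nodal equations as a resistive circuit, G @ T = q.
# Conductance k_th[i] links node i to node i+1; the last one ties the
# final node to ambient. All values are illustrative.
k_th = np.array([50.0, 30.0, 30.0, 200.0])  # W/K, thermal conductances
q = np.array([0.8, 0.4, 0.2, 0.0])          # W, heat injected per node
T_amb = 300.0                               # K, ambient temperature

n = len(q)
G = np.zeros((n, n))
for i, g in enumerate(k_th):
    G[i, i] += g
    if i + 1 < n:                 # internal link between nodes i and i+1
        G[i, i + 1] -= g
        G[i + 1, i] -= g
        G[i + 1, i + 1] += g
# the final conductance grounds node n-1 to ambient via G[n-1, n-1]

T = np.linalg.solve(G, q) + T_amb
print(T)   # nodal temperatures, K
```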

  16. Creation and Reliability Analysis of Vehicle Dynamic Weighing Model

    Directory of Open Access Journals (Sweden)

    Zhi-Ling XU

    2014-08-01

Full Text Available In this paper, the portable axle-load meter of a dynamic weighing system is modeled using ADAMS, and the weighing process is simulated while controlling a single variable, yielding simulated weighing data for different speeds and weights. At the same time, a portable weighing system with the same parameters was used to obtain actual measurements, and the simulation results were compared with the measurements under the same conditions. At 30 km/h or less, the simulated and measured values do not differ by more than 5 %. This not only verifies the reliability of the dynamic weighing model, but also makes it possible to improve the efficiency of algorithm studies by using simulations based on the dynamic weighing model.

  17. Comparing several boson mappings with the shell model

    International Nuclear Information System (INIS)

    Menezes, D.P.; Yoshinaga, Naotaka; Bonatsos, D.

    1990-01-01

Boson mappings are an essential step in establishing a connection between the successful phenomenological interacting boson model and the shell model. The boson mapping developed by Bonatsos, Klein and Li is applied to a single j-shell, and the resulting energy levels and E2 transitions are shown for a pairing plus quadrupole-quadrupole Hamiltonian. The results are compared to the exact shell model calculation, as well as to those obtained through use of the Otsuka-Arima-Iachello mapping and the Zirnbauer-Brink mapping. In all cases good results are obtained for the spherical and near-vibrational cases

  18. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Directory of Open Access Journals (Sweden)

    Claudimar Pereira da Veiga

    2012-08-01

Full Text Available The importance of demand forecasting as a management tool is a well documented issue. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately captures the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of demand forecasting errors on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, one can affirm that all models tested presented good results (much better than the current forecasting method used), with a mean absolute percent error (MAPE) around 10%. The total financial impact for the company was 6.05% of annual sales.
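
The accuracy measure quoted, MAPE, is simple to compute; a toy comparison with invented demand figures:

```python
import numpy as np

# Mean absolute percent error (MAPE) and a toy comparison of two
# forecasts against actual demand (illustrative numbers only).
def mape(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual = [120, 135, 150, 160, 140, 155]
linear = [118, 140, 147, 152, 149, 150]
current = [100, 120, 170, 140, 160, 130]

print(f"linear model MAPE  : {mape(actual, linear):.1f}%")
print(f"current method MAPE: {mape(actual, current):.1f}%")
```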

  19. Comparative study between a QCD inspired model and a multiple diffraction model

    International Nuclear Information System (INIS)

    Luna, E.G.S.; Martini, A.F.; Menon, M.J.

    2003-01-01

    A comparative study between a QCD Inspired Model (QCDIM) and a Multiple Diffraction Model (MDM) is presented, with focus on the results for pp differential cross section at √s = 52.8 GeV. It is shown that the MDM predictions are in agreement with experimental data, except for the dip region and that the QCDIM describes only the diffraction peak region. Interpretations in terms of the corresponding eikonals are also discussed. (author)

  20. A finite element model for the stress and flexibility analysis of curved pipes

    International Nuclear Information System (INIS)

    Guerreiro, J.N.C.

    1987-03-01

We present a finite element model for the analysis of pipe bends with flanged ends or flanged tangents. Comments are made on the treatment of the internal pressure load. Flexibility and stress intensification factors obtained with the present model are compared with others available. (Author) [pt

  1. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  2. Signal analysis of accelerometry data using gravity-based modeling

    Science.gov (United States)

    Davey, Neil P.; James, Daniel A.; Anderson, Megan E.

    2004-03-01

Triaxial accelerometers have been used to measure human movement parameters in swimming. Interpretation of the data is difficult due to interference sources, including interaction with external bodies. In this investigation the authors developed a model to simulate the physical movement of the lower back. Theoretical accelerometry outputs were derived, giving an ideal, noiseless dataset. An experimental data collection apparatus was developed by adapting a system to the aquatic environment for the investigation of swimming. Model data were compared against recorded data and showed strong correlation. Comparison of recorded and modeled data can be used to identify changes in body movement; this is especially useful when cyclic patterns are present in the activity. The strong correlation between data sets allowed signal processing algorithms for swimming stroke analysis to be developed first on the pure, noiseless data set and then applied to performance data. Video analysis was also used to validate the study results and has shown potential to provide acceptable results.

  3. Comparative Analysis of Virtual Education Applications

    Directory of Open Access Journals (Sweden)

    Mehmet KURT

    2006-10-01

Full Text Available The research was conducted in order to make a comparative analysis of virtual education applications. It was carried out using a survey model. The study group consists of a total of 300 institutions providing virtual education in the fall, spring and summer semesters of 2004: 246 in the USA, 10 in Australia, 3 in South Africa, 10 in India, 21 in the UK, 6 in Japan and 4 in Turkey. The information was collected by an online questionnaire sent to the target group by e-mail. The questionnaire was developed in two information categories: personal information, and institutions and their virtual education applications. The English web design of the online questionnaire and the database were prepared with Microsoft ASP code, the script language of the Microsoft FrontPage editor, and tested on a personal web site. The questionnaire was piloted in institutions providing virtual education in Australia. The English text of the questionnaire and the web site design were sent to educational technology and virtual education specialists in the countries of the study group. With the feedback received, spelling mistakes were corrected and concept and language validity were established. The application of the questionnaire took 40 weeks during March-November 2004. Only 135 institutions replied. Two of the questionnaires were discarded because they included mistaken coding of the names of the institutions and countries. The 133 valid questionnaires cover approximately 44% of the study group. Questionnaires saved in the online database were transferred to Microsoft Excel and then to SPSS via an external database connection. In line with the research objectives, the collected data were analyzed on computer using the SPSS statistics package. In the data analysis, frequency (f), percentage (%) and arithmetic mean were used. In comparisons of country, institution, year and other variables, the chi-square test and independent t-test were used.

  4. Comparison of global sensitivity analysis methods – Application to fuel behavior modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, Timo, E-mail: timo.ikonen@vtt.fi

    2016-02-15

    Highlights: • Several global sensitivity analysis methods are compared. • The methods’ applicability to nuclear fuel performance simulations is assessed. • The implications of large input uncertainties and complex models are discussed. • Alternative strategies to perform sensitivity analyses are proposed. - Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show as cross terms in the sensitivity analysis. Due to the large uncertainties of the inputs these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes. More sophisticated methods are typically needed in the analysis. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs. In some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.
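
As an illustration of the variance-based end of the methods compared, a Sobol analysis with SALib on a stand-in algebraic model (with a real code, the model call would wrap a FRAPCON run) might look like:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Variance-based (Sobol) sensitivity sketch on a stand-in model.
# Parameter names and bounds are placeholders, not FRAPCON inputs.
problem = {
    "num_vars": 3,
    "names": ["gap_size", "conductivity", "power"],
    "bounds": [[0.5, 1.5], [0.8, 1.2], [0.9, 1.1]],
}

X = saltelli.sample(problem, 1024)        # N * (2D + 2) model evaluations
Y = X[:, 0] * X[:, 2] + 0.5 * X[:, 1] * X[:, 2] ** 2   # toy response
Si = sobol.analyze(problem, Y)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1 = {s1:.2f}, ST = {st:.2f}")  # ST - S1 ~ interactions
```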

  5. Analysis of pipe mitred bends using beam models - by finite element method

    International Nuclear Information System (INIS)

    Salles, A.C.S.L. de.

    1984-01-01

The formulation of a recently proposed displacement-based straight pipe element for the analysis of pipe mitred bends is summarized in this work. The element kinematics include axial, bending, torsional and ovalisation displacements, all varying cubically along the axis of the element. Interaction effects between angle-adjoined straight pipe sections are modeled by including the appropriate additional strain terms in the stiffness matrix formulation and by using a penalty procedure to enforce continuity of pipe skin flexural rotations at the common helical edge. The element model capabilities are illustrated in some sample analyses and the results are compared with other available experimental, analytical or more complex numerical models. (Author) [pt

  6. Comparative secretome analysis of rat stomach under different nutritional status.

    Science.gov (United States)

    Senin, Lucia L; Roca-Rivada, Arturo; Castelao, Cecilia; Alonso, Jana; Folgueira, Cintia; Casanueva, Felipe F; Pardo, Maria; Seoane, Luisa M

    2015-02-26

Obesity is a major public health threat for many industrialised countries. Bariatric surgery is the most effective treatment against obesity, suggesting that gut derived signals are crucial for energy balance regulation. Several descriptive studies have proven the presence of gastric endogenous systems that modulate energy homeostasis; however, these systems and the interactions between them are still not well known. In the present study, we show for the first time the comparative 2-DE gastric secretome analysis under different nutritional status. We have identified 38 differently secreted proteins by comparing stomach secretomes from tissue explant cultures of rats under feeding, fasting and re-feeding conditions. Among the proteins identified, glyceraldehyde-3-phosphate dehydrogenase was found to be more abundant in gastric secretome and plasma after re-feeding, and downregulated in obesity. Additionally, two calponin-1 species were decreased in the feeding state, and others were modulated by nutritional and metabolic conditions. These and other secreted proteins identified in this work may be considered as potential gastrokines implicated in food intake regulation. The present work has an important impact in the field of obesity, especially in the regulation of body weight maintenance by the stomach. Nowadays, the most effective treatment in the fight against obesity is bariatric surgery, which suggests that stomach derived signals might be crucial for the regulation of energy homeostasis. However, until now, the knowledge about the gastrokines and their mechanisms of action has been poorly elucidated. In the present work, we have updated a previously validated explant secretion model for proteomic studies; this analysis allowed us, for the first time, to study the gastric secretome without interference from other organs. We identified 38 differently secreted proteins comparing ex vivo cultured stomachs from rats under feeding, fasting and re-feeding regimes

  7. Selected examples of practical approaches for the assessment of model reliability - parameter uncertainty analysis

    International Nuclear Information System (INIS)

    Hofer, E.; Hoffman, F.O.

    1987-02-01

The uncertainty analysis of model predictions has to discriminate between two fundamentally different types of uncertainty. The presence of stochastic variability (Type 1 uncertainty) necessitates the use of a probabilistic model instead of the much simpler deterministic one. Lack of knowledge (Type 2 uncertainty), however, applies to deterministic as well as to probabilistic model predictions and often dominates over uncertainties of Type 1. The term 'probability' is interpreted differently in the probabilistic analysis of either type of uncertainty. After these distinctions have been explained, the discussion centers on the propagation of parameter uncertainties through the model, the derivation of quantitative uncertainty statements for model predictions and the presentation and interpretation of the results of a Type 2 uncertainty analysis. Various alternative approaches are compared for a very simple deterministic model
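
The separation of the two uncertainty types is often implemented as a two-level (nested) Monte Carlo: an outer loop samples the Type 2 lack-of-knowledge parameters and an inner loop samples the Type 1 stochastic variability, yielding a family of distributions rather than a single one. A minimal sketch with an invented transfer model:

```python
import numpy as np

# Two-level Monte Carlo sketch separating the two uncertainty types:
# outer loop = Type 2 (uncertain parameter k), inner loop = Type 1
# (stochastic variability of input x). Model and ranges are invented.
rng = np.random.default_rng(42)

def model(k, x):
    return k * x**2          # toy deterministic transfer model

p5_p95 = []
for _ in range(200):                       # Type 2: sample parameter k
    k = rng.uniform(0.5, 2.0)
    x = rng.lognormal(0.0, 0.4, 1000)      # Type 1: sample variable input x
    y = model(k, x)
    p5_p95.append(np.percentile(y, [5, 95]))

p5_p95 = np.array(p5_p95)
# Each outer sample gives one distribution; report the band of percentiles.
print("5th percentile band :", p5_p95[:, 0].min(), p5_p95[:, 0].max())
print("95th percentile band:", p5_p95[:, 1].min(), p5_p95[:, 1].max())
```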

  8. A comparative study on effective dynamic modeling methods for flexible pipe

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Ho; Hong, Sup; Kim, Hyung Woo [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of); Kim, Sung Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-07-15

In this paper, in order to select a suitable method applicable to the large-deflection, small-strain problem of pipe systems in the deep-seabed mining system, the finite difference method with lumped mass from the field of cable dynamics and the substructure method from the field of flexible multibody dynamics were compared. Due to the difficulty of obtaining experimental results from an actual pipe system in the deep-seabed mining system, a thin cantilever beam model with experimental results was employed for the comparative study. The accuracy of the methods was investigated by comparing the experimental results and simulation results from the cantilever beam model with different numbers of elements. The efficiency of the methods was also examined by comparing the operation counts required for solving the equations of motion. Finally, this cantilever beam model together with the comparative study results can serve as a benchmark problem for flexible multibody dynamics.

  9. COGNAT: a web server for comparative analysis of genomic neighborhoods.

    Science.gov (United States)

    Klimchuk, Olesya I; Konovalov, Kirill A; Perekhvatov, Vadim V; Skulachev, Konstantin V; Dibrova, Daria V; Mulkidjanian, Armen Y

    2017-11-22

    In prokaryotic genomes, functionally coupled genes can be organized in conserved gene clusters enabling their coordinated regulation. Such clusters could contain one or several operons, which are groups of co-transcribed genes. Those genes that evolved from a common ancestral gene by speciation (i.e. orthologs) are expected to have similar genomic neighborhoods in different organisms, whereas those copies of the gene that are responsible for dissimilar functions (i.e. paralogs) could be found in dissimilar genomic contexts. Comparative analysis of genomic neighborhoods facilitates the prediction of co-regulated genes and helps to discern different functions in large protein families. We intended, building on the attribution of gene sequences to the clusters of orthologous groups of proteins (COGs), to provide a method for visualization and comparative analysis of genomic neighborhoods of evolutionary related genes, as well as a respective web server. Here we introduce the COmparative Gene Neighborhoods Analysis Tool (COGNAT), a web server for comparative analysis of genomic neighborhoods. The tool is based on the COG database, as well as the Pfam protein families database. As an example, we show the utility of COGNAT in identifying a new type of membrane protein complex that is formed by paralog(s) of one of the membrane subunits of the NADH:quinone oxidoreductase of type 1 (COG1009) and a cytoplasmic protein of unknown function (COG3002). This article was reviewed by Drs. Igor Zhulin, Uri Gophna and Igor Rogozin.

  10. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis to evaluate the sensitivity of the WAVEWATCH III model to the selected parameters to determine how many of these parameters should be considered for further discussion, and to justify the significance priority of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal being to find optimal parameter values for improved modeling of wave development. The procedure adopting optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs based on field observations at two buoys.

  11. Analysis of Whole-Sky Imager Data to Determine the Validity of PCFLOS models

    Science.gov (United States)

    1992-12-01

This model uses the multidimensional Boehm Sawtooth Wave Model to establish climatic probabilities through repetitive simulations of ... analysis techniques to develop an ARIMA model for each direction at the Columbia and Kirtland sites. Then, the models can be compared and analyzed to

  12. Arms control verification costs: the need for a comparative analysis

    International Nuclear Information System (INIS)

    MacLean, G.; Fergusson, J.

    1998-01-01

    The end of the Cold War era has presented practitioners and analysts of international non-proliferation, arms control and disarmament (NACD) the opportunity to focus more intently on the range and scope of NACD treaties and their verification. Aside from obvious favorable and well-publicized developments in the field of nuclear non-proliferation, progress also has been made in a wide variety of arenas, ranging from chemical and biological weapons, fissile material, conventional forces, ballistic missiles, to anti-personnel landmines. Indeed, breaking from the constraints imposed by the Cold War United States-Soviet adversarial zero-sum relationship that impeded the progress of arms control, particularly on a multilateral level, the post Cold War period has witnessed significant developments in NACD commitments, initiatives, and implementation. The goals of this project - in its final iteration - will be fourfold. First, it will lead to the creation of a costing analysis model adjustable for uses in several current and future arms control verification tasks. Second, the project will identify data accumulated in the cost categories outlined in Table 1 in each of the five cases. By comparing costs to overall effectiveness, the application of the model will demonstrate desirability in each of the cases (see Chart 1). Third, the project will identify and scrutinize 'political costs' as well as real expenditures and investment in the verification regimes (see Chart 2). And, finally, the project will offer some analysis on the relationship between national and multilateral forms of arms control verification, as well as the applicability of multilateralism as an effective tool in the verification of international non-proliferation, arms control, and disarmament agreements. (author)

  13. A Review and Comparative Analysis of Security Risks and Safety Measures of Mobile Health Apps

    Directory of Open Access Journals (Sweden)

    Karen Scott

    2015-11-01

Full Text Available In line with a patient-centred model of healthcare, Mobile Health applications (mHealth apps) provide convenient and equitable access to health and well-being resources and programs that can enable consumers to monitor their health related problems, understand specific medical conditions and attain personal fitness goals. This increase in access and control comes with an increase in risk and responsibility to identify and manage the associated risks, such as the privacy and security of consumers' personal and health information. Based on a review of the literature, this paper identifies a set of risk and safety features for evaluating mHealth apps and uses those features to conduct a comparative analysis of the 20 most popular mHealth apps. The comparative analysis reveals that current mHealth apps do pose a risk to consumers. To address the safety and privacy concerns, recommendations to consumers and app developers are offered, together with consideration of future trends in mHealth apps.

  14. Comparative analysis of national and regional models of the silver economy in the European Union

    Directory of Open Access Journals (Sweden)

    Andrzej Klimczuk

    2016-08-01

    Full Text Available The approach to analysing population ageing and its impacts on the economy has evolved in recent years. There is increasing interest in the development and use of products and services related to gerontechnology as well as other social innovations that may be considered as central parts of the "silver economy." However, the concept of silver economy is still being formed and requires detailed research. This article proposes a typology of models of the silver economy in the European Union (EU at the national and regional levels. This typology was created by comparing the Active Ageing Index to the typology of varieties and cultures of capitalism and typology of the welfare states. Practical recommendations for institutions of the EU and directions for further research are discussed.

  15. Comparative risk analysis for the Rocky Flats Plant integrated project planning

    International Nuclear Information System (INIS)

    Jones, M.E.; Shain, D.I.

    1994-01-01

The Rocky Flats Plant is developing, with active stakeholder involvement, a comprehensive planning strategy that will support the transition of the Rocky Flats Plant from a nuclear weapons production facility to site cleanup and final disposition. Final disposition of the Rocky Flats Plant materials and contaminants requires consideration of the interrelated nature of sitewide problems, such as material movement and disposition, facility and land use endstates, costs, relative risks to workers and the public, and waste disposition. Comparative Risk Analysis employs both incremental risk and cumulative risk evaluations to compare risks from postulated options or endstates. These postulated options or endstates can be various remedial alternatives or future endstate uses of federal agency land. Currently, there does not exist any approved methodology that aggregates various incremental risk estimates. Comparative Risk Analysis has been developed to aggregate various incremental risk estimates into a site cumulative risk estimate. This paper discusses the development of the Comparative Risk Analysis methodology, stakeholder participation and lessons learned from these challenges

  16. Comparison of instrumental and sensory methods in fermented milk beverages texture quality analysis

    Directory of Open Access Journals (Sweden)

    Jovica Hardi

    2001-04-01

    Full Text Available The texture of the curd of fermented dairy products is one of the primary factors of their overall quality. The flow properties of fermented dairy products are characteristic of thixotropic (pseudoplastic) liquids. At the same time, these products are viscoelastic systems, i.e. they are capable of texture renewal after an applied deformation. A complex analysis of several properties is therefore essential for describing the system. The aim of the present work was to describe the texture of fermented milk beverages completely. Three basic parameters were taken into consideration: structure, hardness (consistency) and stability of the curd. A descriptive model of these three parameters was applied on the basis of the experimental results obtained. Results obtained with the present model were compared with the results of sensory analysis. The influence of milk fat content and skimmed milk powder addition on acidophilus milk texture quality was also examined using this model. It was shown that, by using this model on the basis of instrumental and sensory analyses, a complete and objective determination of the texture quality of fermented milk beverages can be obtained. A high degree of correlation between instrumental and sensory results (r = 0.8975) was found. The results of this work indicated that both factors (milk fat content and skimmed milk powder addition) had an influence on texture quality. Samples with higher milk fat content had better texture properties in comparison with low-fat samples. The texture of all examined samples was improved by increasing the skimmed milk powder content. The optimal amount of skimmed milk powder addition, with regard to the milk fat content, is determined using the proposed model.

  17. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Directory of Open Access Journals (Sweden)

    Erin E Poor

    Full Text Available Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.

  18. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Science.gov (United States)

    Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.
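
    A minimal sketch of the corridor-identification step described in this record: a habitat suitability raster (such as a Maxent output) is inverted into a traversal-cost surface, and a least-cost path is traced between seasonal ranges. This is not the authors' pipeline; the raster, endpoints and cost conversion are invented, and scikit-image's route_through_array stands in for a dedicated corridor tool.

    ```python
    # Illustrative least-cost corridor sketch (not the authors' pipeline).
    # Assumes a habitat suitability raster in [0, 1], e.g. exported from Maxent.
    import numpy as np
    from skimage.graph import route_through_array

    rng = np.random.default_rng(0)
    suitability = rng.random((100, 100))     # placeholder for a Maxent output
    cost = 1.0 / (suitability + 1e-6)        # high suitability -> low traversal cost

    start, end = (5, 5), (95, 95)            # hypothetical seasonal range endpoints
    path, total_cost = route_through_array(cost, start, end,
                                           fully_connected=True, geometric=True)

    # A corridor can then be delineated as all cells within some cost distance of
    # the least-cost path; GPS fixes falling inside it measure corridor performance.
    print(f"path length: {len(path)} cells, accumulated cost: {total_cost:.1f}")
    ```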

  19. Comparative Analysis between Conventional PI and Fuzzy Logic PI Controllers for Indoor Benzene Concentrations

    Directory of Open Access Journals (Sweden)

    Nun Pitalúa-Díaz

    2015-05-01

    Full Text Available Exposure to hazardous concentrations of volatile organic compounds indoors in small workshops can affect the health of workers, resulting in respiratory diseases, severe intoxication or even cancer. Controlling the concentration of volatile organic compounds is required to prevent harmful conditions for workers in indoor environments. In this document, PI and fuzzy PI controllers were used to reduce hazardous indoor air benzene concentrations in small workplaces. The workshop is represented by means of a well-mixed room model. From the knowledge obtained from the model, PI and fuzzy PI controllers were designed and their performances were compared. Both controllers were able to maintain the benzene concentration within safe levels for the workers. The fuzzy PI controller performed more efficiently than the PI controller. Both approaches could be expanded to control multiple extractor fans in order to reduce the air pollution in a shorter time. The results from the comparative analysis showed that implementing a fuzzy logic PI controller is promising for assuring indoor air quality in this kind of hazardous work environment.
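
    As a rough illustration of the control problem described above, the sketch below closes a PI loop around a well-mixed room mass balance, dC/dt = (G - Q*C)/V, with the extractor flow Q as the manipulated variable. All gains and physical parameters are assumptions for illustration, not values from the paper.

    ```python
    # Toy PI control of benzene concentration in a well-mixed room (illustrative).
    # All parameter values are assumptions, not taken from the paper.
    import numpy as np

    V, G = 50.0, 5.0          # room volume (m^3), benzene emission rate (mg/min)
    setpoint = 0.5            # target concentration (mg/m^3)
    Kp, Ki = 40.0, 2.0        # hypothetical PI gains
    dt, T = 0.1, 60.0         # time step and horizon (min)

    C, Q, integral = 2.0, 1.0, 0.0
    for _ in np.arange(0.0, T, dt):
        error = C - setpoint              # positive error -> increase extraction
        integral += error * dt
        Q = np.clip(Kp * error + Ki * integral, 0.5, 20.0)  # fan flow (m^3/min)
        C += (G - Q * C) / V * dt         # well-mixed mass balance
    print(f"final concentration: {C:.3f} mg/m^3 with flow {Q:.1f} m^3/min")
    ```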

  20. Comparing two models for post-wildfire debris flow susceptibility mapping

    Science.gov (United States)

    Cramer, J.; Bursik, M. I.; Legorreta Paulin, G.

    2017-12-01

    Traditionally, probabilistic post-fire debris flow susceptibility mapping has been performed based on the typical method of failure for debris flows/landslides, where slip occurs along a basal shear zone as a result of rainfall infiltration. Recent studies have argued that post-fire debris flows are fundamentally different in their method of initiation, which is not infiltration-driven but surface runoff-driven. We test these competing models by comparing the accuracy of the susceptibility maps produced by each initiation method. Debris flow susceptibility maps are generated according to each initiation method for a mountainous region of Southern California that recently experienced wildfire and subsequent debris flows. A multiple logistic regression (MLR), which uses the occurrence of past debris flows and the values of environmental parameters, was used to determine the probability of future debris flow occurrence. The independent variables used in the MLR depend on the initiation method; for example, depth to the slip plane and shear strength of the soil are relevant to infiltration-driven initiation, but not to surface runoff. A post-fire debris flow inventory serves as the standard against which to compare the two susceptibility maps, and was generated by LiDAR analysis and field-based ground-truthing. The amount of overlap between the true locations where debris flow erosion can be documented and where the MLR predicts a high probability of debris flow initiation was statistically quantified. The Figure of Merit in Space (FMS) was used to compare the two models, and the results of the FMS comparison suggest that surface runoff-driven initiation better explains debris flow occurrence. Wildfire can breed conditions that induce debris flows in areas that normally would not be prone to them. Because of this, nearby communities at risk may not be equipped to protect themselves against debris flows. In California, there are just a few months between wildland fire season and the wet season.
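
    A minimal sketch of the MLR susceptibility step, assuming a per-cell table of environmental predictors and a binary inventory label; the predictors, coefficients and data here are synthetic stand-ins, not the study's variables.

    ```python
    # Sketch of susceptibility mapping by multiple logistic regression (MLR);
    # predictor names and the synthetic inventory are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000
    X = np.column_stack([
        rng.random(n),        # e.g. slope (normalised)
        rng.random(n),        # e.g. burn severity (normalised)
        rng.random(n),        # e.g. contributing drainage area (normalised)
    ])
    # Synthetic inventory: steeper, more severely burned cells fail more often.
    p_true = 1 / (1 + np.exp(-(3 * X[:, 0] + 2 * X[:, 1] - 2.5)))
    y = rng.random(n) < p_true

    mlr = LogisticRegression().fit(X, y)
    susceptibility = mlr.predict_proba(X)[:, 1]   # per-cell probability map
    print("fitted coefficients:", np.round(mlr.coef_, 2))
    ```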

  1. Sensitivity analysis of an environmental model: an application of different analysis methods

    International Nuclear Information System (INIS)

    Campolongo, Francesca; Saltelli, Andrea

    1997-01-01

    A parametric sensitivity analysis (SA) was conducted on a well-known model for the production of a key sulphur-bearing compound from algal biota. The model is of interest because of the climatic relevance of the gas (dimethylsulphide, DMS), an initiator of cloud particles. A screening test at low sample size is applied first (Morris method), followed by a computationally intensive variance-based measure. Standardised regression coefficients are also computed. The various SA measures are compared with each other, and the use of the bootstrap is suggested to extract empirical confidence bounds on the SA estimators. For some of the input factors, the investigators' prior assumptions about parameter relevance were confirmed; for others, the results shed new light on the system mechanism and on the data parametrisation.
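
    The Morris screening step described above can be sketched as follows, assuming the SALib package and substituting a toy three-parameter function for the DMS production model; parameter names and bounds are illustrative.

    ```python
    # Morris screening sketch with SALib on a stand-in model (the DMS model
    # itself is not reproduced here).
    import numpy as np
    from SALib.sample.morris import sample
    from SALib.analyze.morris import analyze

    problem = {
        "num_vars": 3,
        "names": ["k1", "k2", "k3"],          # hypothetical rate parameters
        "bounds": [[0.1, 1.0], [0.1, 1.0], [0.1, 1.0]],
    }
    X = sample(problem, N=100, num_levels=4)   # one-at-a-time trajectories
    Y = X[:, 0] * np.exp(X[:, 1]) + 0.1 * X[:, 2]   # placeholder model output

    Si = analyze(problem, X, Y, num_levels=4)
    # mu_star ranks overall influence; sigma flags interactions/nonlinearity.
    for name, mu, sg in zip(problem["names"], Si["mu_star"], Si["sigma"]):
        print(f"{name}: mu* = {mu:.3f}, sigma = {sg:.3f}")
    ```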

  2. Comparative study of chemo-electro-mechanical transport models for an electrically stimulated hydrogel

    International Nuclear Information System (INIS)

    Elshaer, S E; Moussa, W A

    2014-01-01

    The main objective of this work is to introduce a new expression for the hydrogel's hydration for use within the Poisson Nernst-Planck chemo-electro-mechanical (PNP CEM) transport models. This new contribution to the models supports large deformation by considering the higher-order terms in the Green-Lagrangian strain tensor. A detailed discussion of the CEM transport models using Poisson Nernst-Planck (PNP) and Poisson logarithmic Nernst-Planck (PLNP) equations for chemically and electrically stimulated hydrogels is presented. The assumptions made to simplify both CEM transport models for an applied electric field of the order of 0.833 kV m⁻¹ and a highly diluted electrolyte solution (97% water) are explained. The PNP CEM model has been verified accurately against experimental and numerical results. In addition, different normalizations of the parameters are used to derive the dimensionless forms of both the PNP and PLNP CEM models. Four models (PNP CEM, PLNP CEM, dimensionless PNP CEM and dimensionless PLNP CEM transport models) were employed on an axially symmetric cylindrical hydrogel problem with an aspect ratio (diameter to thickness) of 175:3. The displacement and osmotic pressure obtained for the four models are compared against the number of elements in the finite element analysis, the simulation duration and the solution rate when using the direct numerical solver.

  3. Global sensitivity analysis applied to drying models for one or a population of granules

    DEFF Research Database (Denmark)

    Mortier, Severine Therese F. C.; Gernaey, Krist; Thomas, De Beer

    2014-01-01

    The development of mechanistic models for pharmaceutical processes is of increasing importance due to a noticeable shift toward continuous production in the industry. Sensitivity analysis is a powerful tool during the model building process. A global sensitivity analysis (GSA), exploring sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models: one for the drying of a single granule and one for the drying of a population of granules [using a population balance model (PBM)], the latter extended by including the gas velocity as an extra input compared to our earlier work. β2 was found to be the most important factor for the single-granule model, which is useful information when performing model calibration. For the PBM model, the granule radius and gas temperature were found to be most sensitive. The former indicates that the granulator...

  4. VALUING BENEFITS FROM WATER QUALITY IMPROVEMENTS USING KUHN TUCKER MODEL - A COMPARATIVE ANALYSIS ON UTILITY FUNCTIONAL FORMS-

    Science.gov (United States)

    Okuyama, Tadahiro

    The Kuhn-Tucker model, which has been studied in recent years, is a benefit valuation technique using revealed-preference data, and its feature is to treat various patterns of corner solutions flexibly. It is widely known in benefit calculation with revealed-preference data that the value of a benefit changes depending on the functional form. However, there are few studies that examine the relationship between utility functions and values of benefits in the Kuhn-Tucker model. The purpose of this study is to analyse the influence of the functional form on the value of a benefit. Six types of utility functions are employed for benefit calculations. Data on the recreational activity at 26 beaches of Miyagi Prefecture were employed. Calculation results indicated that the functional forms of Phaneuf and Siderelis (2003) and Whitehead et al. (2010) are useful for benefit calculations.

  5. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    Science.gov (United States)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., error in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative and meant to improve the reliability of modeling results. The uncertainty analysis must overcome difficulties in the calibration of hydrological models, which further increase in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor > 0.83, R-factor 0.91, NSE > 0.89 and PBIAS 0.18, in the uncertainty analysis. Indeed, uncertainty analysis must be accounted for when the outcomes of the model are used for policy or management decisions.
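
    Of the four algorithms, GLUE is the simplest to illustrate. The sketch below shows its core loop under stated assumptions: a toy surrogate model in place of SWAT, uniform Monte Carlo sampling, NSE as the informal likelihood, and a 0.5 behavioural threshold.

    ```python
    # Compact GLUE-style sketch: sample parameters, score with NSE, keep the
    # 'behavioural' sets. The rainfall-runoff model is a stand-in, not SWAT.
    import numpy as np

    rng = np.random.default_rng(2)
    obs = np.sin(np.linspace(0, 6, 50)) + 1.5          # synthetic observed flow

    def model(a, b):                                   # toy surrogate model
        return a * np.sin(np.linspace(0, 6, 50)) + b

    samples = rng.uniform([0.5, 0.5], [1.5, 2.5], size=(5000, 2))
    nse = np.array([
        1 - np.sum((obs - model(a, b))**2) / np.sum((obs - obs.mean())**2)
        for a, b in samples
    ])
    behavioural = samples[nse > 0.5]                   # GLUE acceptance threshold
    lo, hi = np.percentile(behavioural, [2.5, 97.5], axis=0)
    print(f"kept {len(behavioural)} of {len(samples)} sets; 95% bands: {lo} .. {hi}")
    ```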

  6. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. Highlights: ► The paper discusses the validation of creep rupture models derived from statistical analysis. ► It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. ► The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. ► The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  7. A Comparative Analysis of Motivations for Occupational Choice or ...

    African Journals Online (AJOL)

    A Comparative Analysis of Motivations for Occupational Choice or Preference between ... The results showed that these factors (external influence, extrinsic ... are drawn, and recommendations made for career counselling of students.

  8. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  9. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components, either in service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions of strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis, as well as from analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10¹ to 10² image descriptors instead of the 10⁵ or 10⁶ pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
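
    A minimal sketch of the idea, assuming synthetic strain maps and using low-order 2-D Fourier coefficients as the image descriptors (Zernike moments would be handled analogously):

    ```python
    # Compare a predicted strain field to a measured one via a handful of
    # low-order Fourier descriptors, in the spirit of image decomposition.
    import numpy as np

    rng = np.random.default_rng(3)
    y, x = np.mgrid[0:64, 0:64] / 64.0
    measured = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) * 20)   # stand-in strain map
    predicted = measured + 0.02 * rng.standard_normal(measured.shape)

    def descriptors(field, k=6):
        """Keep only the k x k lowest spatial frequencies (10^1-10^2 numbers)."""
        F = np.fft.fft2(field)
        return np.abs(F[:k, :k]).ravel()

    d_meas, d_pred = descriptors(measured), descriptors(predicted)
    # A simple quantitative acceptance metric on the descriptor sets:
    rel_err = np.linalg.norm(d_meas - d_pred) / np.linalg.norm(d_meas)
    print(f"relative descriptor discrepancy: {rel_err:.3%}")
    ```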

  10. Neutronics comparative analysis between MNSR and slowpoke-II reactors

    International Nuclear Information System (INIS)

    Khamis, I.; Khattab, K.

    1999-01-01

    A neutronics analysis of both the MNSR and Slowpoke reactors was made. Calculations including flux distribution, power estimation, excess and shutdown reactivity margins, flooding effects of irradiation sites, and an initial investigation of fuel conversion from high- to low-enriched uranium are discussed. A neutronic 3-D model, dedicated mainly to the MNSR, has been developed to perform such neutronic calculations for both reactors. Well-known cell and core calculation codes such as WIMSD4 and CITATION have been used. It was found that it is possible to lower the fuel enrichment of the Miniature Neutron Source Reactor (MNSR) to 20% using UO2 as fuel instead of UAl4. The number of fuel elements required for the new core is 199. The use of a double thickness of the bottom reflector in the Slowpoke reactor made it possible to load the reactor with lower-enriched fuel compared to the MNSR. Values of reactivity flooding effects for single inner irradiation sites or combinations thereof were obtained accurately. Results show good agreement with reported data for the MNSR. (author)

  11. Serpent-COREDAX analysis of CANDU-6 time-average model

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)

    2015-07-01

    COREDAX-2 is a nuclear core analysis nodal code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology developed in Korea. The AFEN method outperforms other conventional nodal methods in terms of accuracy. To evaluate the possibility of CANDU-type core analysis using COREDAX-2, a time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using the Monte Carlo code Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)

  12. Model Construction and Analysis of Respiration in Halobacterium salinarum.

    Directory of Open Access Journals (Sweden)

    Cherryl O Talaue

    Full Text Available The archaeon Halobacterium salinarum can produce energy using three different processes, namely photosynthesis, oxidative phosphorylation and fermentation of arginine, and is thus a model organism in bioenergetics. Compared to its bacteriorhodopsin-driven photosynthesis, less attention has been devoted to modeling its respiratory pathway. We created a system of ordinary differential equations that models its oxidative phosphorylation. The model consists of the electron transport chain, the ATP synthase, the potassium uniport and the sodium-proton antiport. By fitting the model parameters to experimental data, we show that the model can explain data on proton motive force generation, ATP production, and the charge balancing of ions between the sodium-proton antiporter and the potassium uniport. We performed sensitivity analysis of the model parameters to determine how the model will respond to perturbations in parameter values. The model and the parameters we derived provide a resource that can be used for analytical studies of the bioenergetics of H. salinarum.

  13. Model Construction and Analysis of Respiration in Halobacterium salinarum.

    Science.gov (United States)

    Talaue, Cherryl O; del Rosario, Ricardo C H; Pfeiffer, Friedhelm; Mendoza, Eduardo R; Oesterhelt, Dieter

    2016-01-01

    The archaeon Halobacterium salinarum can produce energy using three different processes, namely photosynthesis, oxidative phosphorylation and fermentation of arginine, and is thus a model organism in bioenergetics. Compared to its bacteriorhodopsin-driven photosynthesis, less attention has been devoted to modeling its respiratory pathway. We created a system of ordinary differential equations that models its oxidative phosphorylation. The model consists of the electron transport chain, the ATP synthase, the potassium uniport and the sodium-proton antiport. By fitting the model parameters to experimental data, we show that the model can explain data on proton motive force generation, ATP production, and the charge balancing of ions between the sodium-proton antiporter and the potassium uniport. We performed sensitivity analysis of the model parameters to determine how the model will respond to perturbations in parameter values. The model and the parameters we derived provide a resource that can be used for analytical studies of the bioenergetics of H. salinarum.
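
    The kind of ODE system described in these two records can be sketched compactly. The toy model below tracks only a proton motive force and an ATP pool with invented kinetics; it illustrates the structure (coupled ODEs integrated to steady state), not the published parameterisation.

    ```python
    # Minimal ODE sketch in the spirit of the respiration model: a two-variable
    # toy for proton motive force (pmf) and ATP with invented rate constants.
    from scipy.integrate import solve_ivp

    def rhs(t, s, k_etc=1.0, k_syn=0.8, k_leak=0.1, k_use=0.3):
        pmf, atp = s
        pump = k_etc / (1.0 + pmf)        # electron transport chain pumps protons
        synth = k_syn * pmf               # ATP synthase consumes the gradient
        return [pump - synth - k_leak * pmf,
                synth - k_use * atp]      # ATP production vs. cellular consumption

    sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.0], dense_output=True)
    pmf_ss, atp_ss = sol.y[:, -1]
    print(f"steady state: pmf = {pmf_ss:.3f}, ATP = {atp_ss:.3f}")
    ```

    Fitting such a model to data, as the authors describe, then amounts to adjusting the rate constants until the simulated steady states match the measurements.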

  14. Models for dynamic analysis of backup ball bearings of an AMB-system

    Science.gov (United States)

    Halminen, Oskari; Aceituno, Javier F.; Escalona, José L.; Sopanen, Jussi; Mikkola, Aki

    2017-10-01

    Two detailed models of backup bearings are introduced for dynamic analysis of the drop-down event of a rotor supported by an active magnetic bearing (AMB). The proposed two-dimensional models of the backup bearings are based on a multibody approach. All parts of the bearing are modeled as rigid bodies with geometrical surfaces, and the bodies interact with each other through contact forces. The first model describes a backup bearing without a cage, and the second model describes a backup bearing with a cage. The introduced models, which incorporate a realistic elastic contact model, are compared with previously presented simplified models through a parametric study. In order to ensure the durability of backup bearings in challenging applications where ball bearings with an oversized bore are necessary, analysis of the forces affecting the bearing's cage and balls is required; the models introduced in this work assist in this task, as they enable optimal properties for the bearing's cage and balls to be found.

  15. COMPARATIVE MODELLING AND LIGAND BINDING SITE PREDICTION OF A FAMILY 43 GLYCOSIDE HYDROLASE FROM Clostridium thermocellum

    Directory of Open Access Journals (Sweden)

    Shadab Ahmed

    2012-06-01

    Full Text Available The phylogenetic analysis of Clostridium thermocellum family 43 glycoside hydrolase (CtGH43) showed a close evolutionary relationship with carbohydrate-binding family 6 proteins from C. cellulolyticum, C. papyrosolvens, C. cellulyticum, and A. cellulyticum. Comparative modeling of CtGH43 was performed based on crystal structures with PDB IDs 3C7F, 1YIF, 1YRZ, 2EXH and 1WL7. The structure having the lowest MODELLER objective function was selected. The three-dimensional structure revealed the typical 5-fold β-propeller architecture. Energy minimization and validation of the predicted model with VERIFY 3D indicated acceptability of the proposed atomic structure. Ramachandran plot analysis by RAMPAGE confirmed that family 43 glycoside hydrolase (CtGH43) contains little or negligible helical content. It also showed that, of 301 residues, 267 (89.3%) were in the most favoured region, 23 (7.7%) in the allowed region and 9 (3.0%) in the outlier region. IUPred analysis of CtGH43 showed no disordered region. Active site analysis showed the presence of two Asp and one Glu, assumed to form a catalytic triad. This study gives information about the three-dimensional structure and reaffirms that CtGH43 has the same core 5-fold β-propeller architecture and therefore probably the same inverting mechanism of action, with the formation of the above-mentioned catalytic triad for catalysis of polysaccharides.

  16. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  17. Comparative analysis of different approaches to the computation of long-wave radiation balance of water air systems

    International Nuclear Information System (INIS)

    Zhukovskii, K.; Nourani, Y.; Monte, L.

    1999-01-01

    In the present paper, the net long-wave radiation balance of water-air environmental systems is analysed on the basis of several semi-empirical approaches. Various theoretical models of infrared atmospheric radiation are reviewed, and the factors affecting their behavior are considered. Special attention is paid to the physical conditions under which those models are applicable. Atmospheric and net infrared radiation fluxes are computed and compared under clear and cloudy sky. Results are presented in graphical form. Conclusions are drawn on the applicability of the models considered for evaluating infrared radiation fluxes under the environmental conditions of Central Italy. On the basis of the present analysis, Anderson's model is chosen for future calculations of the heat budget of lakes in Central Italy.

  18. Comparative floorplan-analysis as a means to develop design guidelines

    NARCIS (Netherlands)

    Van Hoogdalem, H.; van der Voordt, D.J.M.; van Wegen, H.B.R.

    1985-01-01

    This study explores the usefulness of comparative floorplan-analysis for the development of spatio-organizational concepts in architectural design processes. Each floorplan can be considered as a reflection of the goals and activities of the users as interpreted by the architect. By comparing a wide

  19. Comparative study of standard space and real space analysis of quantitative MR brain data.

    Science.gov (United States)

    Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M

    2011-06-01

    To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space, and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological disease were recruited; high-resolution T1-weighted, quantitative T1, and B0 field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T1 datasets. Regional relaxation values and histograms for both gray and white matter tissue classes were then extracted and compared. Regional mean T1 values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T1 histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors, biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.

  20. Physician-patient argumentation and communication, comparing Toulmin's model, pragma-dialectics, and American sociolinguistics.

    Science.gov (United States)

    Rivera, Francisco Javier Uribe; Artmann, Elizabeth

    2015-12-01

    This article discusses the application of theories of argumentation and communication to the field of medicine. Based on a literature review, the authors compare Toulmin's model, pragma-dialectics, and the work of Todd and Fisher, derived from American sociolinguistics. These approaches were selected because they belong to the pragmatic field of language. The main results were: pragma-dialectics characterizes medical reasoning more comprehensively, highlighting specific elements of the three disciplines of argumentation: dialectics, rhetoric, and logic; Toulmin's model helps substantiate the declaration of diagnostic and therapeutic hypotheses, and as part of an interpretive medicine, approximates the pragma-dialectical approach by including dialectical elements in the process of formulating arguments; Fisher and Todd's approach allows characterizing, from a pragmatic analysis of speech acts, the degree of symmetry/asymmetry in the doctor-patient relationship, while arguing the possibility of negotiating treatment alternatives.

  1. A microbial model of economic trading and comparative advantage.

    Science.gov (United States)

    Enyeart, Peter J; Simpson, Zachary B; Ellington, Andrew D

    2015-01-07

    The economic theory of comparative advantage postulates that beneficial trading relationships can be arrived at by two self-interested entities producing the same goods as long as they have opposing relative efficiencies in producing those goods. The theory predicts that upon entering trade, in order to maximize consumption both entities will specialize in producing the good they can produce at higher efficiency, that the weaker entity will specialize more completely than the stronger entity, and that both will be able to consume more goods as a result of trade than either would be able to alone. We extend this theory to the realm of unicellular organisms by developing mathematical models of genetic circuits that allow trading of a common good (specifically, signaling molecules) required for growth in bacteria in order to demonstrate comparative advantage interactions. In Conception 1, the experimenter controls production rates via exogenous inducers, allowing exploration of the parameter space of specialization. In Conception 2, the circuits self-regulate via feedback mechanisms. Our models indicate that these genetic circuits can demonstrate comparative advantage, and that cooperation in such a manner is particularly favored under stringent external conditions and when the cost of production is not overly high. Further work could involve implementing the models in living bacteria and searching for naturally occurring cooperative relationships between bacteria that conform to the principles of comparative advantage. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
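
    The core prediction of the theory can be made concrete with a small worked example, using invented production rates; it shows both entities consuming more after specialising and trading at a price between their opportunity costs.

    ```python
    # Worked Ricardian example of the predictions described above, with invented
    # production rates (goods per unit effort; each producer has 1 unit of effort).
    rates = {"strong": {"A": 4.0, "B": 2.0},   # absolutely better at both goods
             "weak":   {"A": 1.0, "B": 3.0}}   # relatively better at good B

    # Autarky: each splits effort evenly between the two goods.
    autarky = {p: {g: 0.5 * r for g, r in goods.items()}
               for p, goods in rates.items()}

    # Trade: each fully specialises in its comparative advantage (strong -> A,
    # weak -> B) and they exchange 1.4 A for 1.4 B, a price between the two
    # opportunity costs of 0.5 and 3 B per A.
    swap = 1.4
    trade = {"strong": {"A": rates["strong"]["A"] - swap, "B": swap},
             "weak":   {"A": swap, "B": rates["weak"]["B"] - swap}}

    print("autarky:   ", autarky)   # strong: 2.0 A, 1.0 B; weak: 0.5 A, 1.5 B
    print("with trade:", trade)     # strong: 2.6 A, 1.4 B; weak: 1.4 A, 1.6 B
    ```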

  2. Comparative analysis of 60Co intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Fox, Christopher; Romeijn, H Edwin; Lynch, Bart; Dempsey, James F; Men, Chunhua; Aleman, Dionne M

    2008-01-01

    In this study, we perform a scientific comparative analysis of using 60Co beams in intensity-modulated radiation therapy (IMRT). In particular, we evaluate the treatment plan quality obtained with (i) 6 MV, 18 MV and 60Co IMRT; (ii) different numbers of static multileaf collimator (MLC) delivered 60Co beams and (iii) a helical tomotherapy 60Co beam geometry. We employ a convex fluence map optimization (FMO) model, which allows for the comparison of plan quality between different beam energies and configurations for a given case. A total of 25 clinical patient cases, each containing volumetric CT studies, primary and secondary delineated targets, and contoured structures, were studied: 5 head-and-neck (H and N), 5 prostate, 5 central nervous system (CNS), 5 breast and 5 lung cases. The DICOM plan data were anonymized and exported to the University of Florida optimized radiation therapy (UFORT) treatment planning system. The FMO problem was solved for each case for 5-71 equidistant beams as well as a helical geometry for H and N, prostate, CNS and lung cases, and for 3-7 equidistant beams in the upper hemisphere for breast cases, all with 6 MV, 18 MV and 60Co dose models. In all cases, 95% of the target volumes received at least the prescribed dose, with clinical sparing criteria for critical organs being met for all structures that were not wholly or partially contained within the target volume. Improvements in critical organ sparing were found with an increasing number of equidistant 60Co beams, yet were marginal above 9 beams for H and N, prostate, CNS and lung. Breast cases produced similar plans for 3-7 beams. A helical 60Co beam geometry achieved similar plan quality to static plans with 11 equidistant 60Co beams. Furthermore, 18 MV plans were initially found not to provide the same target coverage as 6 MV and 60Co plans; however, adjusting the trade-offs in the optimization model allowed equivalent target coverage for 18 MV. For plans with comparable...
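
    The convex FMO formulation can be illustrated in miniature: with a dose-deposition matrix D and a prescribed dose d, beamlet weights w ≥ 0 are found by nonnegative least squares. The matrix and prescription below are synthetic; the study's actual FMO model includes organ-sparing terms not shown here.

    ```python
    # Minimal convex fluence-map-optimisation sketch (synthetic dose matrix).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    n_voxels, n_beamlets = 200, 40
    D = rng.random((n_voxels, n_beamlets))   # dose-deposition matrix (Gy per weight)
    target = np.full(n_voxels, 60.0)         # uniform prescription to the target

    weights, residual = nnls(D, target)      # convex: min ||D w - d||, w >= 0
    dose = D @ weights
    print(f"beamlets used: {(weights > 1e-9).sum()}/{n_beamlets}, "
          f"dose range: {dose.min():.1f}-{dose.max():.1f} Gy")
    ```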

  3. Comparing live and remote models in eating conformity research.

    Science.gov (United States)

    Feeney, Justin R; Polivy, Janet; Pliner, Patricia; Sullivan, Margot D

    2011-01-01

    Research demonstrates that people conform to how much other people eat. This conformity occurs in the presence of other people (live model) and when people view information about how much food prior participants ate (remote models). The assumption in the literature has been that remote models produce a similar effect to live models, but this has never been tested. To investigate this issue, we randomly paired participants with a live or remote model and compared their eating to those who ate alone. We found that participants exposed to both types of model differed significantly from those in the control group, but there was no significant difference between the two modeling procedures. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  4. A cost-effectiveness analysis of celecoxib compared with diclofenac in the treatment of pain in osteoarthritis (OA) within the Swedish health system using an adaptation of the NICE OA model.

    Science.gov (United States)

    Brereton, Nicholas; Pennington, Becky; Ekelund, Mats; Akehurst, Ronald

    2014-09-01

    Celecoxib for the treatment of pain resulting from osteoarthritis (OA) was reviewed by Tandvårds- och läkemedelsförmånsverket (the Dental and Pharmaceutical Benefits Board, TLV) in Sweden in late 2010. This study aimed to evaluate the incremental cost-effectiveness ratio (ICER) of celecoxib plus a proton pump inhibitor (PPI) compared to diclofenac plus a PPI in a Swedish setting. The National Institute for Health and Care Excellence (NICE) in the UK developed a health economic model as part of their 2008 assessment of treatments for OA. In this analysis, the model was reconstructed and adapted to a Swedish perspective. Drug costs were updated using the TLV database. Adverse event costs were calculated using the regional price list of Southern Sweden and the standard treatment guidelines from the county council of Stockholm. Costs for treating cardiovascular (CV) events were taken from the Swedish DRG codes and the literature. Over a patient's lifetime, treatment with celecoxib plus a PPI was associated with a quality-adjusted life year (QALY) gain of 0.006 per patient when compared to diclofenac plus a PPI. There was an increase in discounted costs of 529 kr per patient, which resulted in an ICER of 82,313 kr ($12,141). Sensitivity analysis showed that treatment was more cost-effective in patients with an increased risk of bleeding or gastrointestinal (GI) complications. The results suggest that celecoxib plus a PPI is a cost-effective treatment for OA when compared to diclofenac plus a PPI. Treatment is shown to be more cost-effective in Sweden for patients with a high risk of bleeding or GI complications. It was in this population that the TLV gave a positive recommendation. There are known limitations on efficacy in the original NICE model.
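
    The ICER arithmetic underlying such comparisons is simple to sketch: discount the cost and QALY streams of both strategies and form the ratio of the differences. All input numbers below are placeholders, not the study's data; only the 3% discount rate matches the text.

    ```python
    # Back-of-envelope ICER sketch with hypothetical cost/QALY streams.
    def discounted(stream, rate=0.03):
        return sum(v / (1 + rate)**t for t, v in enumerate(stream))

    years = 30
    cost_celecoxib = discounted([1200.0] * years)    # kr/year, hypothetical
    cost_diclofenac = discounted([1150.0] * years)
    qaly_celecoxib = discounted([0.7800] * years)
    qaly_diclofenac = discounted([0.7798] * years)

    icer = (cost_celecoxib - cost_diclofenac) / (qaly_celecoxib - qaly_diclofenac)
    print(f"ICER: {icer:,.0f} kr per QALY gained")
    ```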

  5. The modeling and analysis of the word-of-mouth marketing

    Science.gov (United States)

    Li, Pengdeng; Yang, Xiaofan; Yang, Lu-Xing; Xiong, Qingyu; Wu, Yingbo; Tang, Yuan Yan

    2018-03-01

    Compared to traditional advertising, word-of-mouth (WOM) communications have striking advantages, such as significantly lower cost and much faster propagation, especially given the popularity of online social networks. This paper focuses on the modeling and analysis of WOM marketing. A dynamic model, known as the SIPNS model, capturing WOM marketing processes with both positive and negative comments, is established. On this basis, a measure of the overall profit of a WOM marketing campaign is proposed. The SIPNS model is shown to admit a unique equilibrium, and the equilibrium is determined. The impact of different factors on the equilibrium of the SIPNS model is illuminated through theoretical analysis. Extensive experimental results suggest that the equilibrium is very likely to be globally attracting. Finally, the influence of different factors on the expected overall profit of a WOM marketing campaign is ascertained both theoretically and experimentally. Thereby, some promotion strategies are recommended. To our knowledge, this is the first time the WOM marketing is treated in this way.

  6. Cost effectiveness analysis comparing repetitive transcranial magnetic stimulation to antidepressant medications after a first treatment failure for major depressive disorder in newly diagnosed patients - A lifetime analysis.

    Science.gov (United States)

    Voigt, Jeffrey; Carpenter, Linda; Leuchter, Andrew

    2017-01-01

    Repetitive Transcranial Magnetic Stimulation (rTMS) commonly is used for the treatment of Major Depressive Disorder (MDD) after patients have failed to benefit from trials of multiple antidepressant medications. No analysis to date has examined the cost-effectiveness of rTMS used earlier in the course of treatment and over a patient's lifetime. We used lifetime Markov simulation modeling to compare the direct costs and quality adjusted life years (QALYs) of rTMS and medication therapy in patients with newly diagnosed MDD (ages 20-59) who had failed to benefit from one pharmacotherapy trial. Patients' life expectancies, rates of response and remission, and quality of life outcomes were derived from the literature, and treatment costs were based upon published Medicare reimbursement data. Baseline costs, aggregate per year quality of life assessments (QALYs), Monte Carlo simulation, tornado analysis, assessment of dominance, and one-way sensitivity analysis were also performed. The discount rate applied was 3%. Lifetime direct treatment costs and QALYs identified rTMS as the dominant therapy compared to antidepressant medications (i.e., lower costs with better outcomes) in all age ranges, with costs/improved QALYs ranging from $2,952/0.32 (older patients) to $11,140/0.43 (younger patients). One-way sensitivity analysis demonstrated that the model was most sensitive to the input variables of cost per rTMS session, monthly prescription drug cost, and the number of rTMS sessions per year. rTMS was identified as the dominant therapy compared to antidepressant medication trials across the lifespan of adults with MDD, given current costs of treatment. These models support the use of rTMS after a single failed antidepressant medication trial versus further attempts at medication treatment in adults with MDD.

  7. Cost effectiveness analysis comparing repetitive transcranial magnetic stimulation to antidepressant medications after a first treatment failure for major depressive disorder in newly diagnosed patients - A lifetime analysis.

    Directory of Open Access Journals (Sweden)

    Jeffrey Voigt

    Full Text Available Repetitive Transcranial Magnetic Stimulation (rTMS) commonly is used for the treatment of Major Depressive Disorder (MDD) after patients have failed to benefit from trials of multiple antidepressant medications. No analysis to date has examined the cost-effectiveness of rTMS used earlier in the course of treatment and over a patient's lifetime. We used lifetime Markov simulation modeling to compare the direct costs and quality adjusted life years (QALYs) of rTMS and medication therapy in patients with newly diagnosed MDD (ages 20-59) who had failed to benefit from one pharmacotherapy trial. Patients' life expectancies, rates of response and remission, and quality of life outcomes were derived from the literature, and treatment costs were based upon published Medicare reimbursement data. Baseline costs, aggregate per year quality of life assessments (QALYs), Monte Carlo simulation, tornado analysis, assessment of dominance, and one-way sensitivity analysis were also performed. The discount rate applied was 3%. Lifetime direct treatment costs and QALYs identified rTMS as the dominant therapy compared to antidepressant medications (i.e., lower costs with better outcomes) in all age ranges, with costs/improved QALYs ranging from $2,952/0.32 (older patients) to $11,140/0.43 (younger patients). One-way sensitivity analysis demonstrated that the model was most sensitive to the input variables of cost per rTMS session, monthly prescription drug cost, and the number of rTMS sessions per year. rTMS was identified as the dominant therapy compared to antidepressant medication trials across the lifespan of adults with MDD, given current costs of treatment. These models support the use of rTMS after a single failed antidepressant medication trial versus further attempts at medication treatment in adults with MDD.

  8. Interactions of cisplatin analogues with lysozyme: a comparative analysis.

    Science.gov (United States)

    Ferraro, Giarita; De Benedictis, Ilaria; Malfitano, Annamaria; Morelli, Giancarlo; Novellino, Ettore; Marasco, Daniela

    2017-10-01

    The biophysical characterization of drug binding to proteins plays a key role in structural biology and in drug discovery and optimization processes. The search for optimal combinations of biophysical techniques that can correctly and efficiently identify and quantify the binding of metal-based drugs to their final target is challenging, due to the physicochemical properties of these agents. Different cisplatin derivatives have shown different cytotoxicities in the most common cancer cell lines, suggesting that they exert their biological activity via different mechanisms of action. Here we carried out a comparative analysis, studying the behaviour of three Pt compounds under the same experimental conditions and binding assays, to probe the determinants of their different mechanisms of action. Specifically, we compared the results obtained using surface plasmon resonance, isothermal titration calorimetry, fluorescence spectroscopy and thermal shift assays based on circular dichroism experiments in characterizing the formation of adducts obtained upon reaction of cisplatin, carboplatin and the iodinated analogue of cisplatin, cis-Pt(NH3)2I2, with the model protein hen egg white lysozyme, at both neutral and acidic pH. Furthermore, we considered the applicability of the employed techniques for studying the thermodynamics and kinetics of the reaction of a metallodrug with a protein, and which information can be obtained using a combination of these analyses. Data are discussed in light of the existing structural data collected on the platinated protein.

  9. Sensitivity analysis of the terrestrial food chain model FOOD III

    International Nuclear Information System (INIS)

    Zach, Reto.

    1980-10-01

    As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis involved a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathways are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)

  10. Replica Analysis for Portfolio Optimization with Single-Factor Model

    Science.gov (United States)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
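
    A numerical counterpart to this comparison, under stated assumptions (invented loadings and variances), builds the single-factor covariance Σ = σ_f² ββᵀ + σ_ε² I and evaluates the minimum-variance portfolio risk with and without the factor-induced correlation.

    ```python
    # Minimum-variance risk: single-factor-correlated vs. independent returns.
    # All parameter values are illustrative.
    import numpy as np

    rng = np.random.default_rng(5)
    N = 200                                  # number of assets
    beta = rng.normal(1.0, 0.3, N)           # factor loadings
    sigma_f2, sigma_e2 = 0.04, 0.1           # factor and idiosyncratic variances

    cov_factor = sigma_f2 * np.outer(beta, beta) + sigma_e2 * np.eye(N)
    cov_indep = (sigma_f2 + sigma_e2) * np.eye(N)   # same variance, no correlation

    def min_var_risk(cov):
        """Risk of the fully invested minimum-variance portfolio."""
        ones = np.ones(len(cov))
        w = np.linalg.solve(cov, ones)
        w /= w.sum()
        return w @ cov @ w

    print(f"risk with factor correlation:  {min_var_risk(cov_factor):.5f}")
    print(f"risk with independent returns: {min_var_risk(cov_indep):.5f}")
    ```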

  11. A Comparative Analysis of Fuzzy Inference Engines in Context of ...

    African Journals Online (AJOL)

    Fuzzy inference engine has found successful applications in a wide variety of fields, such as automatic control, data classification, decision analysis, expert engines, time series prediction, robotics, pattern recognition, etc. This paper presents a comparative analysis of three fuzzy inference engines, max-product, max-min ...

  12. Mind and consciousness in yoga – Vedanta: A comparative analysis with western psychological concepts

    Science.gov (United States)

    Prabhu, H. R. Aravinda; Bhat, P. S.

    2013-01-01

    Study of mind and consciousness through established scientific methods is often difficult due to the observed-observer dichotomy. Cartesian approach of dualism considering the mind and matter as two diverse and unconnected entities has been questioned by oriental schools of Yoga and Vedanta as well as the recent quantum theories of modern physics. Freudian and Neo-freudian schools based on the Cartesian model have been criticized by the humanistic schools which come much closer to the vedantic approach of unitariness. A comparative analysis of the two approaches is discussed. PMID:23858252

  13. Comparative Time Series Analysis of Aerosol Optical Depth over Sites in United States and China Using ARIMA Modeling

    Science.gov (United States)

    Li, X.; Zhang, C.; Li, W.

    2017-12-01

    Long-term spatiotemporal analysis and modeling of aerosol optical depth (AOD) distributions is of paramount importance for studying radiative forcing, climate change, and human health. This study focuses on the trends and variations of AOD over six stations located in the United States and China from 2003 to 2015, using satellite-retrieved Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 retrievals and ground measurements from the Aerosol Robotic NETwork (AERONET). An autoregressive integrated moving average (ARIMA) model is applied to simulate and predict AOD values. The R2, adjusted R2, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Bayesian Information Criterion (BIC) are used as indices to select the best-fitted model. Results show a persistent decreasing trend in AOD for both MODIS and AERONET data over three stations. Monthly and seasonal AOD variations reveal consistent aerosol patterns over stations along the mid-latitudes. Regional differences, shaped by climatology and land cover types, are observed for the selected stations. Statistical validation of the time series models indicates that the non-seasonal ARIMA model performs better for AERONET AOD data than for MODIS AOD data over most stations, suggesting the method works better for data of higher quality. By contrast, the seasonal ARIMA model reproduces the seasonal variations of MODIS AOD data much more precisely. Overall, the reasonably predicted results indicate the applicability and feasibility of the stochastic ARIMA modeling technique to forecast future and missing AOD values.
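
    A sketch of the seasonal ARIMA fit using statsmodels, with a synthetic monthly AOD series standing in for the MODIS/AERONET data; the (p, d, q) and seasonal orders are placeholders that would be chosen by the BIC/RMSE comparison described in the record.

    ```python
    # Seasonal ARIMA sketch on a synthetic monthly AOD series.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    idx = pd.date_range("2003-01", "2015-12", freq="MS")
    aod = pd.Series(0.2 + 0.05 * np.sin(2 * np.pi * idx.month / 12)
                    + np.random.default_rng(6).normal(0, 0.02, len(idx)),
                    index=idx)

    model = ARIMA(aod, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
    fit = model.fit()
    print(fit.summary().tables[0])       # includes AIC/BIC for model selection
    forecast = fit.forecast(steps=12)    # predict the next year of monthly AOD
    ```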

  14. Beta-binomial model for meta-analysis of odds ratios.

    Science.gov (United States)

    Bakbergenuly, Ilyas; Kulinskaya, Elena

    2017-05-20

    In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of the ICC on bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study the performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of the ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n ≥ 100. The Mantel-Haenszel-based estimator of the OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs ≠ 1, but this bias is not very large in practical settings. The developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
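
    The beta-binomial overdispersion at the heart of this model is easy to verify by simulation: drawing the event probability from a Beta(α, β) distribution inflates the binomial variance by the factor 1 + (n - 1)ρ, with ICC ρ = 1/(α + β + 1). The sketch below checks this numerically (parameter values are arbitrary).

    ```python
    # Simulation check of beta-binomial variance inflation 1 + (n - 1) * rho.
    import numpy as np

    rng = np.random.default_rng(7)
    a, b, n, reps = 4.0, 12.0, 50, 200_000
    rho = 1.0 / (a + b + 1.0)                 # ICC of the beta-binomial
    p = a / (a + b)

    events = rng.binomial(n, rng.beta(a, b, reps))   # beta-binomial draws
    var_bb = events.var()
    var_binom = n * p * (1 - p)
    print(f"empirical inflation: {var_bb / var_binom:.3f}, "
          f"theoretical 1 + (n-1)*rho: {1 + (n - 1) * rho:.3f}")
    ```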

  15. A Multi-Process Test Case to Perform Comparative Analysis of Coastal Oceanic Models

    Science.gov (United States)

    Lemarié, F.; Burchard, H.; Knut, K.; Debreu, L.

    2016-12-01

    Due to the wide variety of choices that must be made during the development of the dynamical kernels of oceanic models, there is a strong need for an effective and objective assessment of the various methods and approaches that predominate in the community. We present here an idealized multi-scale scenario for coastal ocean models combining estuarine, coastal and shelf-sea scales at mid-latitude. The bathymetry, initial conditions and external forcings are defined analytically, so that any model developer or user can reproduce the test case with their own numerical code. Thermally stratified conditions are prescribed, and a tidal forcing is imposed as a propagating coastal Kelvin wave. The following physical processes can be assessed from the model results: estuarine processes driven by tides and buoyancy gradients, river plume dynamics, tidal fronts, and the interaction between tides and inertial oscillations. We show results obtained using the GETM (General Estuarine Transport Model) and CROCO (Coastal and Regional Ocean Community model) models. These two models are representative of the diversity of numerical methods in use in coastal models: GETM is based on a quasi-Lagrangian vertical coordinate, a coupled space-time approach for advective terms and a TVD (Total Variation Diminishing) tracer advection scheme, while CROCO is discretized with a quasi-Eulerian vertical coordinate, uses a method of lines for advective terms, and its tracer advection satisfies the TVB (Total Variation Bounded) property. The multiple scales are properly resolved thanks to nesting strategies: 1-way nesting for GETM and 2-way nesting for CROCO. Such a test case can be an interesting experiment for continuing research on numerical approaches, as well as an efficient tool for intercomparison between structured-grid and unstructured-grid approaches. Reference: Burchard, H., Debreu, L., Klingbeil, K., Lemarié, F.: The numerics of hydrostatic structured-grid coastal ocean models: state of...

  16. Hypersonic - Model Analysis as a Service

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald

    2014-01-01

    Hypersonic is a Cloud-based tool that proposes a new approach to the deployment of model analysis facilities. It is implemented as a RESTful Web service API offering analysis features such as model clone detection. This approach allows the migration of resource intensive analysis algorithms from...

  17. Comparative study of wine tannin classification using Fourier transform mid-infrared spectrometry and sensory analysis.

    Science.gov (United States)

    Fernández, Katherina; Labarca, Ximena; Bordeu, Edmundo; Guesalaga, Andrés; Agosin, Eduardo

    2007-11-01

    Wine tannins are fundamental to the determination of wine quality. However, the chemical and sensorial analysis of these compounds is not straightforward and a simple and rapid technique is necessary. We analyzed the mid-infrared spectra of white, red, and model wines spiked with known amounts of skin or seed tannins, collected using Fourier transform mid-infrared (FT-MIR) transmission spectroscopy (400-4000 cm⁻¹). The spectral data were classified according to their tannin source, skin or seed, and tannin concentration by means of discriminant analysis (DA) and soft independent modeling of class analogy (SIMCA) to obtain a probabilistic classification. Wines were also classified sensorially by a trained panel and compared with FT-MIR. SIMCA models gave the most accurate classification (over 97%) and prediction (over 60%) among the wine samples. The prediction was increased (over 73%) using the leave-one-out cross-validation technique. Sensory classification of the wines was less accurate than that obtained with FT-MIR and SIMCA. Overall, these results show the potential of FT-MIR spectroscopy, in combination with adequate statistical tools, to discriminate wines with different tannin levels.
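
    A minimal sketch of SIMCA-style classification, assuming synthetic stand-in spectra: one PCA model is fitted per tannin class, and a new spectrum is assigned to the class with the smallest scaled residual distance. The data and class structure are hypothetical; the study's actual preprocessing and probabilistic thresholds are not reproduced.

        import numpy as np
        from sklearn.decomposition import PCA

        # Toy stand-in for FT-MIR spectra: rows are wines, columns are wavenumbers.
        rng = np.random.default_rng(0)
        X_skin = rng.normal(0.0, 1.0, (40, 200)) + np.linspace(0, 1, 200)
        X_seed = rng.normal(0.0, 1.0, (40, 200)) - np.linspace(0, 1, 200)

        def fit_class_model(X, n_components=3):
            pca = PCA(n_components=n_components).fit(X)
            # Residuals of the training set define the scale of the class envelope
            resid = X - pca.inverse_transform(pca.transform(X))
            s0 = np.sqrt((resid ** 2).sum(axis=1).mean())
            return pca, s0

        models = {"skin": fit_class_model(X_skin), "seed": fit_class_model(X_seed)}

        def simca_classify(x):
            """Assign a spectrum to the class with the smallest scaled residual."""
            scores = {}
            for name, (pca, s0) in models.items():
                r = x - pca.inverse_transform(pca.transform(x.reshape(1, -1)))[0]
                scores[name] = np.sqrt((r ** 2).sum()) / s0
            return min(scores, key=scores.get), scores

        label, scores = simca_classify(X_skin[0])
        print(label, scores)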

  18. Comparative analysis of wholesale and retail frozen fish marketing ...

    African Journals Online (AJOL)

    Comparative analysis of wholesale and retail frozen fish marketing in Port Harcourt Metropolis, Rivers State, Nigeria. ... from each market giving 30 retail marketers and 30 wholesale marketers. ... EMAIL FULL TEXT EMAIL FULL TEXT

  19. Comparative analysis of crayfish marketing in selected markets of ...

    African Journals Online (AJOL)

    Comparative analysis of crayfish marketing in selected markets of Akwa Ibom and Abia States, Nigeria. ... It specifically looked at market integration, costs and return, marketing margin, marketing ... EMAIL FULL TEXT EMAIL FULL TEXT

  20. Comparative Reannotation of 21 Aspergillus Genomes

    Energy Technology Data Exchange (ETDEWEB)

    Salamov, Asaf; Riley, Robert; Kuo, Alan; Grigoriev, Igor

    2013-03-08

    We used comparative gene modeling to reannotate 21 Aspergillus genomes. Initial automatic annotation of individual genomes may contain errors of different kinds, e.g. missing genes, incorrect exon-intron structures, and 'chimeras', which fuse two or more real genes or split a real gene into two or more models. The main premise behind the comparative modeling approach is that for closely related genomes most orthologous families have the same conserved gene structure. The algorithm maps all gene models predicted in each individual Aspergillus genome to the other genomes and, for each locus, selects from the potentially many competing models the one which most closely resembles the orthologous genes from the other genomes. This procedure is iterated until no further change in gene models is observed. For the Aspergillus genomes we predicted in total 4503 new gene models (~2% per genome), supported by comparative analysis, and additionally corrected ~18% of the old gene models. This resulted in a total of 4065 more genes with annotated PFAM domains (~3% increase per genome). Analysis of a few genomes with EST/transcriptomics data shows that the new annotation sets also have a higher number of EST-supported splice sites at exon-intron boundaries.
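
    A hypothetical sketch of the iterative consensus step described above; the data structures and the structure-similarity score are placeholders for illustration, not the actual JGI pipeline.

        # For each locus, keep whichever competing gene model most resembles
        # its orthologs in the other genomes; iterate until nothing changes.

        def structure_similarity(model, ortholog):
            """Fraction of exon-intron boundaries shared by two gene models."""
            a, b = set(model["boundaries"]), set(ortholog["boundaries"])
            return len(a & b) / max(len(a | b), 1)

        def reannotate(genomes, orthologs_of):
            changed = True
            while changed:                      # iterate until models stabilize
                changed = False
                for genome in genomes:
                    for locus in genome["loci"]:
                        best = max(
                            locus["candidate_models"],
                            key=lambda m: sum(structure_similarity(m, o)
                                              for o in orthologs_of(locus)),
                        )
                        if best is not locus["model"]:
                            locus["model"] = best
                            changed = True
            return genomes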

  1. Linear support vector regression and partial least squares chemometric models for determination of Hydrochlorothiazide and Benazepril hydrochloride in presence of related impurities: A comparative study

    Science.gov (United States)

    Naguib, Ibrahim A.; Abdelaleem, Eglal A.; Draz, Mohammed E.; Zaazaa, Hala E.

    2014-09-01

    Partial least squares regression (PLSR) and support vector regression (SVR) are two popular chemometric models that are compared in the present work. The comparison shows their characteristics by applying them to the analysis of Hydrochlorothiazide (HCZ) and Benazepril hydrochloride (BZ) in the presence of the HCZ impurities Chlorothiazide (CT) and Salamide (DSA) as a case study. The analysis results prove to be valid for analysis of the two active ingredients in raw materials and pharmaceutical dosage form through handling UV spectral data in the range 220-350 nm. For proper analysis, a 4-factor, 4-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of 8 mixtures was used to validate the prediction ability of the suggested models. The results presented indicate the ability of the mentioned multivariate calibration models to analyze HCZ and BZ in the presence of the HCZ impurities CT and DSA with high selectivity and accuracy, with mean percentage recoveries of (101.01 ± 0.80) and (100.01 ± 0.87) for HCZ and BZ respectively using the PLSR model, and (99.78 ± 0.80) and (99.85 ± 1.08) for HCZ and BZ respectively using the SVR model. The analysis results of the dosage form were statistically compared to a reference HPLC method, with no significant differences regarding accuracy and precision. The SVR model gives more accurate results than the PLSR model and shows high generalization ability; however, PLSR keeps the advantage of being fast to optimize and implement.
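
    A minimal sketch of the PLSR-versus-SVR comparison using scikit-learn, with random arrays standing in for the UV spectra and concentrations; the estimator names are from scikit-learn, while the data shapes and hyperparameters are assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.svm import SVR
        from sklearn.metrics import mean_absolute_percentage_error

        # Toy stand-in for the UV spectral data (220-350 nm); in the study a
        # 4-factor, 4-level design gave 16 training and 8 test mixtures.
        rng = np.random.default_rng(3)
        X_train, X_test = rng.random((16, 131)), rng.random((8, 131))
        y_train = rng.uniform(5, 50, 16)        # e.g. HCZ concentration
        y_test = rng.uniform(5, 50, 8)

        pls = PLSRegression(n_components=4).fit(X_train, y_train)
        svr = SVR(kernel="linear", C=10.0).fit(X_train, y_train)

        for name, model in [("PLSR", pls), ("SVR", svr)]:
            pred = np.ravel(model.predict(X_test))
            print(name, "MAPE:", mean_absolute_percentage_error(y_test, pred))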

  2. Bilateral Cavernous Nerve Crush Injury in the Rat Model: A Comparative Review of Pharmacologic Interventions.

    Science.gov (United States)

    Haney, Nora M; Nguyen, Hoang M T; Honda, Matthew; Abdel-Mageed, Asim B; Hellstrom, Wayne J G

    2018-04-01

    It is common for men to develop erectile dysfunction after radical prostatectomy. The anatomy of the rat allows the cavernous nerve (CN) to be identified, dissected, and injured in a controlled fashion. Therefore, bilateral CN injury (BCNI) in the rat model is routinely used to study post-prostatectomy erectile dysfunction. To compare and contrast the available literature on pharmacologic intervention after BCNI in the rat. A literature search was performed on PubMed for cavernous nerve and injury and erectile dysfunction and rat. Only articles with BCNI and pharmacologic intervention that could be grouped into categories of immune modulation, growth factor therapy, receptor kinase inhibition, phosphodiesterase type 5 inhibition, and anti-inflammatory and antifibrotic interventions were included. To assess outcomes of pharmaceutical intervention on erectile function recovery after BCNI in the rat model. The ratio of maximum intracavernous pressure to mean arterial pressure was the main outcome measure chosen for this analysis. All interventions improved erectile function recovery after BCNI based on the ratio of maximum intracavernous pressure to mean arterial pressure results. Additional end-point analysis examined the corpus cavernosa and/or the major pelvic ganglion and CN. There was extreme heterogeneity within the literature, making accurate comparisons between crush injury and therapeutic interventions difficult. BCNI in the rat is the accepted animal model used to study nerve-sparing post-prostatectomy erectile dysfunction. However, an important limitation is extreme variability. Efforts should be made to decrease this variability and increase the translational utility toward clinical trials in humans. Haney NM, Nguyen HMT, Honda M, et al. Bilateral Cavernous Nerve Crush Injury in the Rat Model: A Comparative Review of Pharmacologic Interventions. Sex Med Rev 2018;6:234-241. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier

  3. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to changes in biological parameters, and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
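
    The local-versus-global distinction can be made concrete with a small numpy sketch: local sensitivities as finite-difference derivatives at a nominal point, and global first-order indices as Var(E[Y|X_i])/Var(Y) estimated by crude Monte Carlo binning. The toy model and parameter ranges are hypothetical.

        import numpy as np

        def model(x):
            """Toy response; stands in for a systems-biology model output."""
            return x[0] ** 2 + 2.0 * x[0] * x[1] + np.sin(x[2])

        x0 = np.array([1.0, 0.5, 0.3])

        # Local sensitivity: central finite differences around a nominal point
        h = 1e-6
        local = np.array([(model(x0 + h * e) - model(x0 - h * e)) / (2 * h)
                          for e in np.eye(3)])
        print("local sensitivities:", local)

        # Global (variance-based) first-order indices by brute-force Monte Carlo:
        # S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning X_i over [0, 1]
        rng = np.random.default_rng(0)
        N = 20000
        X = rng.random((N, 3))
        Y = np.array([model(x) for x in X])
        for i in range(3):
            bins = np.digitize(X[:, i], np.linspace(0, 1, 21))
            means = np.array([Y[bins == b].mean() for b in np.unique(bins)])
            weights = np.array([np.mean(bins == b) for b in np.unique(bins)])
            S_i = np.average((means - Y.mean()) ** 2, weights=weights) / Y.var()
            print(f"S_{i + 1} ~ {S_i:.2f}")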

  4. A Comprehensive Method for Comparing Mental Models of Dynamic Systems

    OpenAIRE

    Schaffernicht, Martin; Grösser, Stefan N.

    2011-01-01

    Mental models are the basis on which managers make decisions even though external decision support systems may provide help. Research has demonstrated that more comprehensive and dynamic mental models seem to be at the foundation for improved policies and decisions. Eliciting and comparing such models can systematically explicate key variables and their main underlying structures. In addition, superior dynamic mental models can be identified. This paper reviews existing studies which measure ...

  5. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    Science.gov (United States)

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Directory of Open Access Journals (Sweden)

    Catherine L Worth

    Full Text Available BACKGROUND: Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and, without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. METHODOLOGY: We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. CONCLUSIONS: The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling.

  7. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Science.gov (United States)

    Worth, Catherine L; Kleinau, Gunnar; Krause, Gerd

    2009-09-16

    Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and, without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling.

  8. GEOQUIMICO : an interactive tool for comparing sorption conceptual models (surface complexation modeling versus K[D])

    International Nuclear Information System (INIS)

    Hammond, Glenn E.; Cygan, Randall Timothy

    2007-01-01

    Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K_D approach has been the method of choice due to its ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use, that is, to determine when the variation in material properties becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The tool currently supports the K_D and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given
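
    As a crude illustration of why the choice matters, the sketch below contrasts a linear K_D isotherm with a site-limited Langmuir isotherm, used here as a simple stand-in for a full surface complexation model (which would additionally depend on pH, ionic strength and surface speciation); all parameter values are hypothetical.

        import numpy as np

        # Sorbed concentration under a linear K_D isotherm versus a site-limited
        # Langmuir isotherm; the two agree at trace levels and diverge as the
        # finite sorption sites saturate.
        C = np.logspace(-9, -3, 7)      # aqueous concentration (mol/L)

        K_D = 5.0e3                     # distribution coefficient (L/kg), hypothetical
        S_kd = K_D * C                  # sorbed concentration (mol/kg)

        S_max, K_L = 1.0e-4, 1.0e6      # site density (mol/kg) and affinity (L/mol)
        S_langmuir = S_max * K_L * C / (1.0 + K_L * C)

        for c, a, b in zip(C, S_kd, S_langmuir):
            print(f"C={c:.1e}  K_D model: {a:.2e}  site-limited: {b:.2e}")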

  9. Comparative calculations and validation studies with atmospheric dispersion models

    International Nuclear Information System (INIS)

    Paesler-Sauer, J.

    1986-11-01

    This report presents the results of an intercomparison of different mesoscale dispersion models and measured data from tracer experiments. The types of models taking part in the intercomparison are Gaussian-type, numerical Eulerian, and Lagrangian dispersion models. They are suited to the calculation of the atmospheric transport of radionuclides released from a nuclear installation. For the model intercomparison, artificial meteorological situations were defined and corresponding arithmetical problems were formulated. For the purpose of model validation, real dispersion situations from tracer experiments were used as input data for model calculations; in these cases calculated and measured time-integrated concentrations close to the ground are compared. Finally, an evaluation of the models concerning their efficiency in solving the problems is carried out with the aid of objective methods. (orig./HP) [de]
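
    A Gaussian-type model of the kind entering such intercomparisons can be written in a few lines. The sketch below implements the standard ground-reflected Gaussian plume formula with hypothetical source and dispersion parameters; in practice the dispersion widths would come from stability-class curves evaluated at the downwind distance of interest.

        import numpy as np

        def gaussian_plume(y, z, Q, u, H, sigma_y, sigma_z):
            """Ground-reflected Gaussian plume concentration at crosswind
            distance y and height z, for a continuous point source of strength
            Q (g/s) at effective height H (m) in a uniform wind u (m/s); the
            dispersion widths are evaluated at the downwind distance of interest."""
            lateral = np.exp(-y**2 / (2 * sigma_y**2))
            vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                        + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
            return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Hypothetical release: 1 g/s at 50 m stack height, 5 m/s wind,
        # receptor at 1.5 m above ground on the plume centreline.
        print(gaussian_plume(y=0.0, z=1.5, Q=1.0, u=5.0,
                             H=50.0, sigma_y=80.0, sigma_z=40.0))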

  10. Comparative analysis of a LOCA for a German PWR with ASTEC and ATHLET-CD

    International Nuclear Information System (INIS)

    Reinke, N.; Chan, H.W.; Sonnenkalb, M.

    2013-01-01

    This paper presents the results of a comparative analysis performed with ASTEC V2.02 and a coupled ATHLET-CD V2.2c/COCOSYS V2.4 calculation for a German 1300 MWe KONVOI type PWR. The purpose of this analysis is mainly to assess the ASTEC code behaviour in modelling both the thermal-hydraulic phenomena in the coolant circuit arising during a hypothetical severe accident and the early phase of core degradation, versus the more mechanistic code system ATHLET-CD/COCOSYS. The performed analyses cover a loss of coolant accident (LOCA) sequence. Such a comparison has been performed for the first time. The integral code ASTEC (Accident Source Term Evaluation Code), commonly developed since 1996 by IRSN and GRS, is a fast running programme which allows the calculation of entire sequences of severe accidents (SA) in light water reactors from the initiating event up to the release of fission products into the environment, thereby covering all important in-vessel and containment phenomena. The thermal-hydraulic mechanistic system code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by GRS for the analysis of the whole spectrum of leaks and transients in PWRs and BWRs. For modelling of core degradation processes, the CD part (Core Degradation) of ATHLET can be activated. For analyses of the containment behaviour, ATHLET-CD has been coupled to the GRS code COCOSYS (COntainment COde SYStem). (orig.)

  11. Intra-arterial therapy of neuroendocrine tumour liver metastases: comparing conventional TACE, drug-eluting beads TACE and yttrium-90 radioembolisation as treatment options using a propensity score analysis model

    Energy Technology Data Exchange (ETDEWEB)

    Minh, Duc Do; Gorodetski, Boris; Smolka, Susanne; Savic, Lynn Jeanette; Wainstejn, David [Charite Universitaetsmedizin, Campus Virchow Klinikum, Department of Diagnostic and Interventional Radiology, Berlin (Germany); Yale University School of Medicine, Department of Radiology and Biomedical Imaging, New Haven, CT (United States); Chapiro, Julius; Schlachter, Todd [Yale University School of Medicine, Department of Radiology and Biomedical Imaging, New Haven, CT (United States); Huang, Qiang [Yale University School of Medicine, Department of Radiology and Biomedical Imaging, New Haven, CT (United States); Capital Medical University, Department of Interventional Radiology, Beijing Chaoyang Hospital, Beijing (China); Liu, Cuihong [Yale University School of Medicine, Department of Radiology and Biomedical Imaging, New Haven, CT (United States); Shandong Provincial Hospital Affiliated to Shandong University, The Ultrasound Department, Jinan (China); Lin, MingDe [Yale University School of Medicine, Department of Radiology and Biomedical Imaging, New Haven, CT (United States); Philips Research North America, U/S Imaging and Interventions (UII), Cambridge, MA (United States); Gebauer, Bernhard [Charite Universitaetsmedizin, Campus Virchow Klinikum, Department of Diagnostic and Interventional Radiology, Berlin (Germany); Geschwind, Jean-Francois [Yale University School of Medicine, Department of Radiology and Biomedical Imaging, New Haven, CT (United States)

    2017-12-15

    To compare efficacy, survival outcome and prognostic factors of conventional transarterial chemoembolisation (cTACE), drug-eluting beads TACE (DEB-TACE) and yttrium-90 radioembolisation (Y90) for the treatment of liver metastases from gastroenteropancreatic (GEP) neuroendocrine tumours (NELM). This retrospective analysis included 192 patients (mean age 58.6 years, 56% men) with NELM treated with cTACE (N = 122), DEB-TACE (N = 26) or Y90 (N = 44) between 2000 and 2014. Radiologic response to therapy was assessed according to Response Evaluation Criteria in Solid Tumours (RECIST) and World Health Organization (WHO) criteria using periprocedural MR imaging. Survival analysis included propensity score analysis (PSA), median overall survival (MOS), hepatic progression-free survival, Kaplan-Meier analysis using the log-rank test, and uni- and multivariate Cox proportional hazards models (MVA). MOS of the entire study population was 28.8 months. For cTACE, DEB-TACE and Y90, MOS was 33.8 months, 21.7 months and 23.6 months, respectively. According to the MVA, cTACE demonstrated a significantly longer MOS compared to DEB-TACE (p <.01) or Y90 (p =.02). The 5-year survival rate after initial cTACE, DEB-TACE and Y90 was 28.2%, 10.3% and 18.5%, respectively. Upon PSA, our study suggests significant survival benefits for patients treated with cTACE as compared to DEB-TACE and Y90. These data support the therapeutic decision for cTACE as the primary intra-arterial therapy option in patients with unresectable NELM until proven otherwise. (orig.)

  12. An experimental-numerical method for comparative analysis of joint prosthesis; Un metodo numerico-experimental para el analisis comparativo de protesis articulares

    Energy Technology Data Exchange (ETDEWEB)

    Claramunt, R.; Rincon, E.; Zubizarreta, V.; Ros, A.

    2001-07-01

    Analyzing mechanical stresses in bone is difficult because of its complex mechanical and morphological characteristics. This complexity makes generalized modelling, and conclusions derived from prototype tests, very questionable. In this article a relatively simple systematic method for comparative analysis is presented that allows us to establish behavioural differences between different kinds of prostheses. The method, applicable in principle to any joint problem, is based on analysing the perturbations produced in the natural stress state of a bone after insertion of a joint prosthesis, and combines numerical analysis using a 3-D finite element model with experimental studies based on photoelastic coating and electric extensometry. The experimental method is applied to compare two cement-free femoral stems for total hip prostheses of different design philosophies: one anatomic and of a new generation, with oblique seating on cancellous bone, and the other madreporic, with trochantero-diaphyseal support on cortical bone. (Author) 4 refs.

  13. Comparative Analysis of Disabled Accessibility Needs of Heritage Building in Perak

    Directory of Open Access Journals (Sweden)

    Zahari Nurul Fadzila

    2016-01-01

    Full Text Available The tourism sector was the sixth-highest contributor of national income to the Malaysian economy in 2014. To strengthen the Malaysian economy through tourism, the Malaysian government has to diversify the present tourism products and offer a wide variety of tourism packages. This is highlighted in the National Key Results Area (NKRA) development platform of the 10th Malaysian Plan. The tourism sector therefore needs to continuously re-engineer and adapt its business model to suit every customer's needs and demands, including those of disabled people. At the moment, one of the highest contributors of tourist attractions in Malaysia is the heritage building sector. The heritage building sector in Malaysia is popular because of the country's diverse historical background and culture, attracting local and international tourists. However, the lack of facilities provided for disabled people has hindered its prospects of becoming globally popular. The national heritage should be viewed, explored and enjoyed by everybody without discrimination. Insufficient provision for disabled facilities in the heritage act has created barriers that prevent disabled people from visiting and enjoying heritage sites. The objective of this research is to analyze comparative data gathered in the field at selected case-study sites, through site visits, observation and documentary analysis. This research aims to provide a comparative analysis of the disabled accessibility needs of heritage buildings in Perak. The findings will highlight the needs of disabled visitors to heritage buildings and document them for future research.

  14. Comparing numerically exact and modelled static friction

    Directory of Open Access Journals (Sweden)

    Krengel Dominik

    2017-01-01

    Full Text Available Currently there exists no mechanically consistent “numerically exact” implementation of static and dynamic Coulomb friction for general soft particle simulations with arbitrary contact situations in two or three dimensions, but only along one dimension. We outline a differential-algebraic equation approach for a “numerically exact” computation of friction in two dimensions and compare its application to the Cundall-Strack model in some test cases.

  15. Advanced spatial metrics analysis in cellular automata land use and cover change modeling

    International Nuclear Information System (INIS)

    Zamyatin, Alexander; Cabral, Pedro

    2011-01-01

    This paper proposes an approach for a more effective definition of cellular automata transition rules for landscape change modeling using an advanced spatial metrics analysis. This approach considers a four-stage methodology based on: (i) the search for the appropriate spatial metrics with minimal correlations; (ii) the selection of the appropriate neighborhood size; (iii) the selection of the appropriate technique for spatial metrics application; and (iv) the analysis of the contribution level of each spatial metric for joint use. The case study uses an initial set of 7 spatial metrics of which 4 are selected for modeling. Results show a better model performance when compared to modeling without any spatial metrics or with the initial set of 7 metrics.

  16. Comparative thermodynamic analysis of the Pb-Au0.7Sn0.3 section in the Pb-Au-Sn ternary system

    International Nuclear Information System (INIS)

    Trumic, B.; Zivkovic, D.; Zivkovic, Z.; Manasijevic, D.

    2005-01-01

    The results of a comparative thermodynamic analysis of the Pb-Au0.7Sn0.3 section in the Pb-Au-Sn system are presented in this paper. The investigation was carried out comparatively by calorimetric measurements and by thermodynamic calculation according to the general solution model. Thermodynamic parameters, such as partial and integral molar quantities, were determined at different temperatures. The comparison between experimental and calculated results showed mutual agreement. The demixing tendency of lead, reflected in the positive deviation from ideal behavior, was confirmed through the study of concentration fluctuations in the long-wavelength limit. Also, chosen alloys in the investigated section were characterized using SEM-EDX analysis.

  17. Comparative Study of Elastic Network Model and Protein Contact Network for Protein Complexes: The Hemoglobin Case

    Directory of Open Access Journals (Sweden)

    Guang Hu

    2017-01-01

    Full Text Available The overall topology and interfacial interactions play key roles in understanding the structural and functional principles of protein complexes. The Elastic Network Model (ENM) and the Protein Contact Network (PCN) are two widely used methods for high-throughput investigation of structures and interactions within protein complexes. In this work, a comparative analysis of ENM and PCN applied to hemoglobin (Hb) was taken as a case study. We examine four types of structural and dynamical paradigms, namely conformational change between different states of Hbs, modular analysis, allosteric mechanism studies, and interface characterization of an Hb. The comparative study shows that ENM has an advantage in studying dynamical properties and protein-protein interfaces, while PCN is better for describing protein structures quantitatively at both the local and the global level. We suggest that the integration of ENM and PCN would give a potentially powerful tool in structural systems biology.

  18. Analysis and optimization of a camber morphing wing model

    Directory of Open Access Journals (Sweden)

    Bing Li

    2016-09-01

    Full Text Available This article proposes a camber morphing wing model that can continuously change its camber. A mathematical model is proposed and a kinematic simulation is performed to verify the wing’s ability to change camber. An aerodynamic model is used to test its aerodynamic characteristics. Some important aerodynamic analyses are performed. A comparative analysis is conducted to explore the relationships between aerodynamic parameters, the rotation angle of the trailing edge, and the angle of attack. An improved artificial fish swarm optimization algorithm is proposed, referred to as the weighted adaptive artificial fish-swarm with embedded Hooke–Jeeves search method. Some comparison tests are used to test the performance of the improved optimization algorithm. Finally, the proposed optimization algorithm is used to optimize the proposed camber morphing wing model.
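
    The local search embedded in the optimizer named above is a classic pattern-search method; a generic sketch of Hooke-Jeeves (shown alone, not the paper's weighted adaptive fish-swarm hybrid) might look like this:

        import numpy as np

        def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
            """Basic Hooke-Jeeves pattern search: exploratory moves along each
            coordinate, then a pattern move through the improved point."""
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(max_iter):
                xe, fe = x.copy(), fx          # exploratory search around x
                for i in range(len(x)):
                    for d in (step, -step):
                        trial = xe.copy()
                        trial[i] += d
                        ft = f(trial)
                        if ft < fe:
                            xe, fe = trial, ft
                            break
                if fe < fx:
                    xp = 2.0 * xe - x          # pattern move through the improved point
                    fp = f(xp)
                    x, fx = (xp, fp) if fp < fe else (xe, fe)
                else:
                    step *= shrink             # no improvement: refine the mesh
                    if step < tol:
                        break
            return x, fx

        # Example: minimize a 2-D quadratic bowl with minimum at (1, -2)
        print(hooke_jeeves(lambda v: (v[0] - 1)**2 + 10 * (v[1] + 2)**2, [0.0, 0.0]))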

  19. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  20. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
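
    A small sketch of the combination described in these records, under toy assumptions: runs of a dynamic model over a full factorial design are expanded in a PCA basis, and a first-order, ANOVA-style sensitivity index is computed for each factor on each principal-component score. The model, design and factor names are hypothetical.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        levels = np.linspace(0.5, 1.5, 4)
        design = np.array([(a, b) for a in levels for b in levels])   # 2 factors
        t = np.linspace(0, 1, 50)

        # Toy dynamic model: each run returns a 50-point time series
        runs = np.array([a * np.exp(-b * t) for a, b in design])

        pca = PCA(n_components=2).fit(runs)
        scores = pca.transform(runs)          # dynamics expanded in a functional basis

        for pc in range(2):
            y = scores[:, pc]
            total = y.var()
            for j, name in enumerate(["factor_a", "factor_b"]):
                # Balanced design: Var(E[Y | X_j]) is the variance of group means
                group_means = [y[design[:, j] == lv].mean() for lv in levels]
                S = np.var(group_means) / total
                print(f"PC{pc + 1} {name}: S ~ {S:.2f}")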

  1. Elastic-plastic analysis of AS4/PEEK composite laminate using a one-parameter plasticity model

    Science.gov (United States)

    Sun, C. T.; Yoon, K. J.

    1992-01-01

    A one-parameter plasticity model was shown to adequately describe the plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The elastic-plastic stress-strain relations of coupon specimens were measured and compared with those predicted by the finite element analysis using the one-parameter plasticity model. The results show that the one-parameter plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  2. Nonlinear analysis of AS4/PEEK thermoplastic composite laminate using a one parameter plasticity model

    Science.gov (United States)

    Sun, C. T.; Yoon, K. J.

    1990-01-01

    A one-parameter plasticity model was shown to adequately describe the orthotropic plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The nonlinear stress-strain relations were measured and compared with those predicted by the finite element analysis using the one-parameter elastic-plastic constitutive model. The results show that the one-parameter orthotropic plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  3. Wallerstein's World-Systems Analysis in Comparative Education: A Case Study

    Science.gov (United States)

    Griffiths, Tom G.; Knezevic, Lisa

    2010-01-01

    Since the 1970s, using his world-systems analysis, Immanuel Wallerstein has developed a wide-ranging framework for the social sciences, with potential applications for comparative educational research. In this paper we outline key aspects of Wallerstein's theorising, and then analyse the uptake, understandings, and applications of his analysis in…

  4. A comparative analysis of capacity adequacy policies

    International Nuclear Information System (INIS)

    Doorman, Gerard; Botterud, Audun; Wolfgang, Ove

    2007-06-01

    In this paper a stochastic dynamic optimization model is used to analyze the effect of different generation adequacy policies in restructured power systems. The expansion decisions of profit-maximizing investors are simulated under a number of different market designs: Energy Only with and without a price cap, Capacity Payment, Capacity Obligation, Capacity Subscription, and Demand Elasticity. The results show that the overall social welfare is reduced compared to a centralized social welfare optimization for all policies except Capacity Subscription and Demand Elasticity. In particular, an energy only market with a low price cap leads to a significant increase in involuntary load shedding. Capacity payments and obligations give additional investment incentives and more generating capacity, but also result in a considerable transfer of wealth from consumers to producers due to the capacity payments. Increased demand elasticity increases social welfare, but also results in a transfer from producers to consumers, compared to the theoretical social welfare optimum. In contrast, the capacity subscription policy increases the social welfare, and both producers and consumers benefit. This is possible because capacity subscription explicitly utilizes differences in consumers' preferences for uninterrupted supply. This advantage must be weighed against the cost of implementation, which is not included in the model.

  5. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  6. NHR dynamic analysis of control rod and fuel assembly of test model

    International Nuclear Information System (INIS)

    Wang Jiachun; Cai Laizhong

    2001-01-01

    The basic purpose is to analyze the dynamic response, under seismic excitation, of important components of the 200 MW Heating Reactor, including the control rod, fuel assembly, zirconium alloy boxes and related parts. The author presents the simplification and construction of the model. By comparing the effects of different constraint conditions, the final analysis model is determined after a preliminary analysis. The model is then used to obtain its natural frequencies, the response spectrum analysis, and the time-series responses under several seismic excitations. From these results, the influence of the fundamental frequency is discussed, and the displacement and acceleration responses at different sample points are obtained and analyzed to assess the safety of the reactor

  7. Comparative analysis of catfish BAC end sequences with the zebrafish genome

    Directory of Open Access Journals (Sweden)

    Abernathy Jason

    2009-12-01

    Full Text Available Abstract Background Comparative mapping is a powerful tool for transferring genomic information from sequenced genomes to closely related species for which whole genome sequence data are not yet available. However, such an approach is still very limited in catfish, the most important aquaculture species in the United States. This project was initiated to generate additional BAC end sequences and demonstrate their applications in comparative mapping in catfish. Results We report the generation of 43,000 BAC end sequences and their applications for comparative genome analysis in catfish. Using these and the additional 20,000 existing BAC end sequences as a resource, along with linkage mapping and the existing physical map, conserved syntenic regions were identified between the catfish and zebrafish genomes. A total of 10,943 catfish BAC end sequences (17.3%) had significant BLAST hits to the zebrafish genome (E-value ≤ 1e-5), of which 3,221 were unique gene hits, providing a platform for comparative mapping based on the locations of these genes in catfish and zebrafish. Genetic linkage mapping of microsatellites associated with contigs allowed identification of large conserved genomic segments and construction of super scaffolds. Conclusion BAC end sequences and their associated polymorphic markers are great resources for comparative genome analysis in catfish. Highly conserved chromosomal regions were identified between catfish and zebrafish. However, it appears that the level of conservation in local genomic regions is high, while a high level of chromosomal shuffling and rearrangement exists between the catfish and zebrafish genomes. Orthologous regions established through comparative analysis should facilitate both structural and functional genome analysis in catfish.

  8. Gentrification and models for real estate analysis

    Directory of Open Access Journals (Sweden)

    Gianfranco Brusa

    2013-08-01

    Full Text Available This research proposes a deep analysis of the Milanese real estate market, based on data supplied by three real estate organizations; gentrification appears in some neighborhoods, such as Tortona, Porta Genova, Bovisa and Isola Garibaldi: the last is the subject of the final analysis, through a survey of the physical and social state of the area. The survey was conducted in two periods (2003 and 2009) to compare the evolution of gentrification. The survey results were then used in a simulation with a multi-agent system model to forecast the long-term evolution of the phenomenon. These neighborhood micro-indicators make it possible to highlight current trends conditioning the local real estate market, which can translate into phenomena such as gentrification. In the present analysis, the application of cellular automata models to a neighborhood in Milan (Isola Garibaldi) produced a dynamic simulation of the gentrification trend over a very long time: the cyclical phenomenon (one loop spans a period of twenty to thirty years) appears several times over a theoretical time of 100-150 years. Simulation of long-period scenarios by multi-agent systems and cellular automata provides the estimator with a powerful and readily implementable tool to support appraisal judgements. It also stands to reason that such a tool can support urban planning and related evaluation processes.

  9. Comparative analysis of the intestinal flora in type 2 diabetes and nondiabetic mice.

    Science.gov (United States)

    Horie, Masanori; Miura, Takamasa; Hirakata, Satomi; Hosoyama, Akira; Sugino, Sakiko; Umeno, Aya; Murotomi, Kazutoshi; Yoshida, Yasukazu; Koike, Taisuke

    2017-10-30

    A relationship between type 2 diabetes mellitus (T2DM) and the intestinal flora has been suggested since the development of analysis technology for intestinal flora. An animal model of T2DM is important for the investigation of T2DM. Although there are several animal models of T2DM, a comparison of the intestinal flora of healthy animals with that of T2DM animals has not yet been reported. The intestinal flora of Tsumura Suzuki Obese Diabetes (TSOD) mice was compared with that of Tsumura Suzuki Non Obesity (TSNO) mice in the present study. The TSOD mice showed typical type 2 diabetes symptoms, which were high-fat diet-independent. The TSOD and TSNO mouse models were derived from the same strain, ddY. In this study, we compared the intestinal flora of TSOD mice with that of TSNO mice at 5 and 12 weeks of age. We determined that the number of operational taxonomic units (OTUs) was significantly higher in the cecum of TSOD mice than in that of TSNO mice. The intestinal flora of the cecum and that of the feces were similar between the TSNO and TSOD strains. The dominant bacteria in the cecum and feces were of the phyla Firmicutes and Bacteroidetes. However, the content of some bacterial species varied between the two strains. The percentage of Lactobacillus spp. within the general intestinal flora was higher in TSOD mice than in TSNO mice. In contrast, the percentages of the order Bacteroidales and the family Lachnospiraceae were higher in TSNO mice than in TSOD mice. Some species were observed only in TSOD mice, such as the genera Turicibacter and SMB53 (family Clostridiaceae), whose percentages were 3.8% and 2.0%, respectively. Although further analysis of the metabolism of the individual bacteria in the intestinal flora is essential, the genera Turicibacter and SMB53 may be important for the abnormal metabolism of type 2 diabetes.

  10. Financial analysis of technology acquisition using fractionated lasers as a model.

    Science.gov (United States)

    Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R

    2010-08-01

    Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase option with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
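
    The framework reduces to a simple annual cash-flow calculation; the sketch below uses entirely hypothetical placeholder numbers, not figures from the study.

        # Annual ROI of a leased laser: revenue from procedures against lease,
        # service, labor and disposable costs. All values are illustrative.
        monthly_lease = 3500.0          # 5-year lease payment ($/month), hypothetical
        service_contract = 12000.0      # $/year
        labor_per_procedure = 75.0
        disposables_per_procedure = 50.0
        price_per_procedure = 900.0
        procedures_per_year = 150

        annual_cost = (12 * monthly_lease + service_contract
                       + procedures_per_year * (labor_per_procedure
                                                + disposables_per_procedure))
        annual_revenue = price_per_procedure * procedures_per_year
        roi = (annual_revenue - annual_cost) / annual_cost
        print(f"annual profit: ${annual_revenue - annual_cost:,.0f}, ROI: {roi:.0%}")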

  11. Comparative structural and functional analysis of genes encoding pectin methylesterases in Phytophthora spp.

    Science.gov (United States)

    Mingora, Christina; Ewer, Jason; Ospina-Giraldo, Manuel

    2014-03-15

    We have scanned the Phytophthora infestans, P. ramorum, and P. sojae genomes for the presence of putative pectin methylesterase genes and conducted a sequence analysis of all gene models found. We also searched for potential regulatory motifs in the promoter region of the proposed P. infestans models, and investigated the gene expression levels throughout the course of P. infestans infection on potato plants, using in planta and detached leaf assays. We found that genes located on contiguous chromosomal regions contain similar motifs in the promoter region, indicating the possibility of a shared regulatory mechanism. Results of our investigations also suggest that, during the pathogenicity process, the expression levels of some of the analyzed genes vary considerably when compared to basal expression observed in in vitro cultures of non-sporulating mycelium. These results were observed both in planta and in detached leaf assays. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se

  13. Comparative genomic hybridization analysis of benign and invasive male breast neoplasms

    DEFF Research Database (Denmark)

    Ojopi, Elida Paula Benquique; Cavalli, Luciane Regina; Cavalieri, Luciane Mara Bogline

    2002-01-01

    Comparative genomic hybridization (CGH) analysis was performed for the identification of chromosomal imbalances in two benign gynecomastias and one malignant breast carcinoma derived from patients with male breast disease and compared with cytogenetic analysis in two of the three cases. CGH...... analysis demonstrated overrepresentation of 8q in all three cases. One case of gynecomastia presented gain of 1p34.3 through pter, 11p14 through q12, and 17p11.2 through qter, and loss of 1q41 through qter and 4q33 through qter. The other gynecomastia presented del(1)(q41) as detected by both cytogenetic...

  14. Comparing models of the periodic variations in spin-down and beamwidth for PSR B1828-11

    Science.gov (United States)

    Ashton, G.; Jones, D. I.; Prix, R.

    2016-05-01

    We build a framework using tools from Bayesian data analysis to evaluate models explaining the periodic variations in spin-down and beamwidth of PSR B1828-11. The available data consist of the time-averaged spin-down rate, which displays a distinctive double-peaked modulation, and measurements of the beamwidth. Two concepts exist in the literature that are capable of explaining these variations; we formulate predictive models from these and quantitatively compare them. The first concept is phenomenological and stipulates that the magnetosphere undergoes periodic switching between two metastable states as first suggested by Lyne et al. The second concept, precession, was first considered as a candidate for the modulation of B1828-11 by Stairs et al. We quantitatively compare models built from these concepts using a Bayesian odds ratio. Because the phenomenological switching model itself was informed by these data in the first place, it is difficult to specify appropriate parameter-space priors that can be trusted for an unbiased model comparison. Therefore, we first perform a parameter estimation using the spin-down data, and then use the resulting posterior distributions as priors for model comparison on the beamwidth data. We find that a precession model with a simple circular Gaussian beam geometry fails to appropriately describe the data, while allowing for a more general beam geometry provides a good fit to the data. The resulting odds between the precession model (with a general beam geometry) and the switching model are estimated as 10^(2.7±0.5) in favour of the precession model.
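
    The model-comparison machinery can be illustrated on a toy problem: the sketch below computes a log Bayes factor by direct marginalization of a likelihood over a parameter grid under flat priors, mirroring in miniature (and with made-up models and data, not the pulsar analysis) the odds-ratio strategy of the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.linspace(0, 10, 100)
        data = np.sin(2 * np.pi * t / 5.0) + rng.normal(0, 0.3, t.size)
        sigma = 0.3

        def log_like(model_curve):
            # Gaussian log-likelihood up to a constant that cancels in the odds
            return -0.5 * np.sum((data - model_curve) ** 2) / sigma**2

        def log_evidence(periods):
            """Marginal likelihood under a flat prior over the period grid."""
            logL = np.array([log_like(np.sin(2 * np.pi * t / P)) for P in periods])
            m = logL.max()
            return m + np.log(np.mean(np.exp(logL - m)))   # log-mean-exp

        # Model 1: period known well a priori; Model 2: broad period prior
        logZ1 = log_evidence(np.linspace(4.5, 5.5, 200))
        logZ2 = log_evidence(np.linspace(1.0, 10.0, 2000))
        print("log10 odds (M1/M2):", (logZ1 - logZ2) / np.log(10))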

  15. A Comparative study of two RVE modelling methods for chopped carbon fiber SMC

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zhangxing; Li, Yi; Shao, Yimin; Huang, Tianyu; Xu, Hongyi; Li, Yang; Chen, Wei; Zeng, Danielle; Avery, Katherine; Kang, HongTae; Su, Xuming

    2017-04-06

    To achieve vehicle light-weighting, chopped carbon fiber sheet molding compound (SMC) has been identified as a promising material to replace metals. However, there are no effective tools and methods to predict the mechanical properties of chopped carbon fiber SMC, due to the high complexity of its microstructural features and its anisotropic properties. In this paper, the Representative Volume Element (RVE) approach is used to model the SMC microstructure. Two modeling methods, the Voronoi diagram-based method and the chip packing method, are developed for RVE material property prediction. The two methods are compared in terms of the predicted elastic modulus, and the predicted results are validated against Digital Image Correlation (DIC) tensile test results. Furthermore, the advantages and shortcomings of these two methods are discussed in terms of the required input information and the convenience of use in an integrated processing-microstructure-property analysis.

  16. Comparative study of boron transport models in NRC Thermal-Hydraulic Code Trace

    Energy Technology Data Exchange (ETDEWEB)

    Olmo-Juan, Nicolás; Barrachina, Teresa; Miró, Rafael; Verdú, Gumersindo; Pereira, Claubia, E-mail: nioljua@iqn.upv.es, E-mail: tbarrachina@iqn.upv.es, E-mail: rmiro@iqn.upv.es, E-mail: gverdu@iqn.upv.es, E-mail: claubia@nuclear.ufmg.br [Institute for Industrial, Radiophysical and Environmental Safety (ISIRYM). Universitat Politècnica de València (Spain); Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2017-07-01

    Recently, interest in the study of various types of transients involving changes in the boron concentration inside the reactor has led to increased interest in developing and studying new models and tools that allow a correct study of boron transport. Consequently, a significant variety of boron transport models and spatial difference schemes are available in thermal-hydraulic codes such as TRACE. In line with this interest, in this work the results obtained using the different boron transport models implemented in the NRC thermal-hydraulic code TRACE are compared. To do this, a set of models has been created using the different options and configurations that could influence boron transport. These models reproduce a simple event of filling or emptying of the boron concentration in a long pipe. Moreover, with the aim of comparing the differences obtained when one-dimensional or three-dimensional components are chosen, many different cases have been modeled using only pipe components or a mix of pipe and vessel components. In addition, the influence of the void fraction on boron transport has been studied and compared under conditions close to those of a commercial BWR model. Finally, the different cases and boron transport models are compared with each other and with the analytical solution provided by the Burgers equation. From this comparison, important conclusions are drawn that will be the basis for adequately modeling boron transport in TRACE. (author)
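
    The flavour of such a benchmark can be shown with a pure-advection stand-in (the paper compares against the Burgers equation): a first-order upwind scheme smears a sharp boron front through numerical diffusion, which is exactly the behaviour a boron-transport model comparison must quantify. Grid and velocity values below are arbitrary.

        import numpy as np

        nx, L, u, cfl = 200, 10.0, 1.0, 0.5
        dx = L / nx
        dt = cfl * dx / u
        x = (np.arange(nx) + 0.5) * dx

        c = np.where(x < 2.0, 1.0, 0.0)        # initial sharp boron front
        steps = int(4.0 / (u * dt))            # advect the front by 4 m
        for _ in range(steps):
            # First-order upwind update; c[0] acts as a fixed inflow boundary
            c[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])

        exact = np.where(x < 6.0, 1.0, 0.0)    # analytic pure-advection solution
        print("max |error| at the front (numerical smearing):",
              np.abs(c - exact).max())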

  17. Multivariate Analysis and Modeling of Sediment Pollution Using Neural Network Models and Geostatistics

    Science.gov (United States)

    Golay, Jean; Kanevski, Mikhaïl

    2013-04-01

    The present research deals with the exploration and modeling of a complex dataset of 200 measurement points of sediment pollution by heavy metals in Lake Geneva. The fundamental idea was to use multivariate Artificial Neural Networks (ANN) along with geostatistical models and tools in order to improve the accuracy and the interpretability of data modeling. The results obtained with ANN were compared to those of traditional geostatistical algorithms like ordinary (co)kriging and (co)kriging with an external drift. Exploratory data analysis highlighted a great variety of relationships (i.e. linear, non-linear, independence) between the 11 variables of the dataset (i.e. Cadmium, Mercury, Zinc, Copper, Titanium, Chromium, Vanadium and Nickel as well as the spatial coordinates of the measurement points and their depth). Then, exploratory spatial data analysis (i.e. anisotropic variography, local spatial correlations and moving window statistics) was carried out. It was shown that the different phenomena to be modeled were characterized by high spatial anisotropies, complex spatial correlation structures and heteroscedasticity. A feature selection procedure based on General Regression Neural Networks (GRNN) was also applied to create subsets of variables enabling improved predictions during the modeling phase. The basic modeling was conducted using a Multilayer Perceptron (MLP), a workhorse of ANNs. MLP models are robust and highly flexible tools which can incorporate different kinds of high-dimensional information in a nonlinear manner. In the present research, the input layer was made of either two neurons (spatial coordinates) or three neurons (when depth as auxiliary information could possibly capture an underlying trend) and the output layer was composed of one (univariate MLP) to eight neurons corresponding to the heavy metals of the dataset (multivariate MLP). MLP models with three input neurons can be referred to as Artificial Neural Networks with EXternal drift.
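
    As a minimal sketch of the univariate setup described above, an MLP with spatial inputs (x, y) and a single heavy-metal output can be fit as follows. The data here are synthetic stand-ins; the Lake Geneva dataset, network sizing, and training choices of the study are not reproduced.

      # Hedged sketch: univariate MLP regression on spatial coordinates.
      # Synthetic data stand in for the 200 sediment measurement points.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      coords = rng.uniform(0.0, 1.0, size=(200, 2))      # (x, y) inputs
      conc = (np.sin(3 * coords[:, 0]) * coords[:, 1]    # synthetic "metal"
              + 0.05 * rng.standard_normal(200))         # concentration field

      mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                         random_state=0)
      mlp.fit(coords, conc)
      print("training R^2:", round(mlp.score(coords, conc), 3))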

  18. Model based population PK-PD analysis of furosemide for BP lowering effect: A comparative study in primary and secondary hypertension.

    Science.gov (United States)

    Shukla, Mahendra; Ibrahim, Moustafa M A; Jain, Moon; Jaiswal, Swati; Sharma, Abhisheak; Hanif, Kashif; Lal, Jawahar

    2017-11-15

    Numerous reports have demonstrated multiple mechanisms by which furosemide can exert an anti-hypertensive response. However, the lack of studies describing a PK-PD relationship for furosemide featuring its anti-hypertensive property has limited its usage as a blood pressure (BP) lowering agent. Serum concentrations and mean arterial BP were monitored following multiple oral doses of 40 and 80 mg kg⁻¹ furosemide in spontaneously hypertensive rats (SHR) and DOCA-salt induced hypertensive (DOCA-salt) rats. A simultaneous population PK-PD relationship using an Emax model with an effect compartment was developed to compare the anti-hypertensive efficacy of furosemide in these rat models. A two-compartment PK model with Weibull-type absorption and first-order elimination best described the serum concentration-time profile of furosemide. In the present study, post-dose serum concentrations of furosemide were found to be lower than the EC50. The EC50 predicted in DOCA-salt rats was 4.5-fold lower, whereas tolerance development was higher, than in the SHR model. The PK-PD parameter estimates, particularly the lower values of EC50, Ke and Q in DOCA-salt rats as compared to SHR, pinpointed the higher BP lowering efficacy of furosemide in volume-overload induced hypertensive conditions. Insignificantly altered serum creatinine and electrolyte levels indicated a favorable side effect profile of furosemide. In conclusion, the final PK-PD model described the data well and provides detailed insights into the use of furosemide as an anti-hypertensive agent. Copyright © 2017. Published by Elsevier B.V.
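
    The core of such a model is an Emax relation driven by a first-order effect compartment. The sketch below illustrates that structure with a simplified mono-exponential serum profile instead of the study's two-compartment, Weibull-absorption PK; all parameter values are illustrative, not the population estimates.

      # Hedged sketch: Emax model linked to a first-order effect compartment.
      # Parameters are illustrative placeholders, not the study's estimates.
      import numpy as np
      from scipy.integrate import odeint

      Emax, EC50, ke0 = 30.0, 5.0, 0.4   # max BP drop, potency, equilibration
      k_el, C0 = 0.3, 8.0                # elimination rate [1/h], initial conc.

      def effect_compartment(Ce, t):
          C = C0 * np.exp(-k_el * t)     # simplified serum concentration
          return ke0 * (C - Ce)          # first-order effect-site equilibration

      t = np.linspace(0.0, 24.0, 100)
      Ce = odeint(effect_compartment, 0.0, t).ravel()
      bp_drop = Emax * Ce / (EC50 + Ce)  # Emax relation at the effect site
      print("peak BP lowering: %.1f mmHg at t = %.1f h"
            % (bp_drop.max(), t[bp_drop.argmax()]))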

  19. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, used separately as outcomes, we compare the three methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate of the three methods at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
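
    A minimal sketch of method (c), a random-intercept linear mixed model on longitudinal BP measurements, is given below. It uses a simple per-individual random intercept; the kinship-based variance-covariance structure of GRAMMAR is not reproduced, and the data are synthetic stand-ins.

      # Hedged sketch: random-intercept LMM for longitudinal diastolic BP,
      # with genotype and age as fixed effects. Synthetic data throughout.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n_ind, n_visits = 100, 3
      df = pd.DataFrame({
          "id": np.repeat(np.arange(n_ind), n_visits),
          "snp": np.repeat(rng.integers(0, 3, n_ind), n_visits),  # 0/1/2 copies
          "age": rng.uniform(30, 70, n_ind * n_visits),
      })
      subj = rng.standard_normal(n_ind)            # true random intercepts
      df["dbp"] = (80 + 1.5 * df["snp"] + 0.1 * df["age"]
                   + np.repeat(subj, n_visits)
                   + rng.standard_normal(len(df)))

      # Fixed effects for snp and age; a random intercept per individual.
      result = smf.mixedlm("dbp ~ snp + age", df, groups=df["id"]).fit()
      print(result.summary())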

  20. A comparative study of multiple regression analysis and back ...

    Indian Academy of Sciences (India)

    Abhijit Sarkar

    artificial neural network (ANN) models to predict weld bead geometry and HAZ width in submerged arc welding ... Keywords. Submerged arc welding (SAW); multi-regression analysis (MRA); artificial neural network ... Degree of freedom.