WorldWideScience

Sample records for models compare favorably

  1. Geothermal Favorability Map Derived From Logistic Regression Models of the Western United States (favorabilitysurface.zip)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is a surface showing relative favorability for the presence of geothermal systems in the western United States. It is an average of 12 models that correlates...

  2. Using data from colloid transport experiments to parameterize filtration model parameters for favorable conditions

    Science.gov (United States)

    Kamai, Tamir; Nassar, Mohamed K.; Nelson, Kirk E.; Ginn, Timothy R.

    2017-04-01

    Colloid filtration in porous media spans many disciplines and includes scenarios such as in-situ bioremediation, colloid-facilitated transport, water treatment of suspended particles and pathogenic bacteria, and transport of natural and engineered nanoparticles in the environment. Transport and deposition of colloid particles in porous media are determined by a combination of complex processes and forces. Given the convoluted physical, chemical, and biological processes involved, and the complexity of porous media in natural settings, it should not come as a surprise that colloid filtration theory does not always sufficiently predict colloidal transport, and that there is still a pressing need for improved predictive capabilities. Here, instead of developing the macroscopic equation from pore-scale models, we parameterize the terms of the macroscopic collection equation by fitting it to experimental data. In this way we combine a mechanistically based filtration equation with empirical evidence. The impact of different properties of colloids and porous media is studied by comparing experimental properties with the different terms of the correlation equation. This comparison provides insight into the processes that occur during colloid transport and retention in porous media under favorable conditions, and points to directions for future theoretical developments.
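
    The core fitting step can be illustrated with a deliberately simplified sketch: estimating an effective filter coefficient from column breakthrough data by nonlinear least squares. The single-exponential form and all numbers below are illustrative assumptions, not the authors' full correlation equation.

        # Sketch: fit a filter coefficient to column breakthrough data (illustrative).
        import numpy as np
        from scipy.optimize import curve_fit

        def breakthrough(x, lambda_f):
            """Relative effluent concentration C/C0 after travel distance x (m)."""
            return np.exp(-lambda_f * x)

        # hypothetical sampling ports (m) and measured C/C0 from one column experiment
        x_obs = np.array([0.05, 0.10, 0.20, 0.30, 0.50])
        c_obs = np.array([0.78, 0.61, 0.37, 0.22, 0.09])

        popt, pcov = curve_fit(breakthrough, x_obs, c_obs, p0=[5.0])
        print(f"fitted filter coefficient: {popt[0]:.2f} 1/m "
              f"(+/- {np.sqrt(pcov[0, 0]):.2f})")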

  3. Web-streamed didactic instruction on substance use disorders compares favorably with live-lecture format.

    Science.gov (United States)

    Karam-Hage, Maher; Brower, Kirk J; Mullan, Patricia B; Gay, Tamara; Gruppen, Larry D

    2013-05-01

    Education about substance use disorders in medical schools and, subsequently, physicians' identification of and intervention in these diagnoses lag behind those for most other disabling disorders. To reduce barriers and improve access to education about this major public health concern, medical schools are increasingly adopting web-based instruction on substance use and other psychiatric disorders as part of their curricula; however, it is not well known how a web-streamed lecture compares with a traditional one. The authors hypothesized that the two formats would be equally efficacious in terms of knowledge acquisition and student satisfaction. The authors conducted a prospective study to test this hypothesis among third-year medical students who received a web-streamed lecture on substance use/addiction versus those who received a traditional live lecture. Of the 243 students, significantly more completed the on-line lecture series. Of the 216 students in the final study sample, 130 (60%) were assigned to the web-streamed lecture and 86 (40%) to the live lecture. Within-subject comparisons of pre- and post-lecture scores for the entire cohort indicated a significant improvement in the percentage of correct answers (a 21.0% difference). Although no differences in improved scores between the two groups were found, students in the live-lecture group reported small but significantly higher levels of satisfaction. This preliminary work supports the hypothesis that a web-streamed lecture can be at least as efficacious as a traditional lecture in terms of knowledge acquisition. However, attention needs to be paid to the lower satisfaction levels associated with the web-streamed format.

  4. Predicting favorable conditions for early leaf spot of peanut using output from the Weather Research and Forecasting (WRF) model.

    Science.gov (United States)

    Olatinwo, Rabiu O; Prabha, Thara V; Paz, Joel O; Hoogenboom, Gerrit

    2012-03-01

    Early leaf spot of peanut (Arachis hypogaea L.), a disease caused by Cercospora arachidicola S. Hori, is responsible for an annual crop loss of several million dollars in the southeastern United States alone. The development of early leaf spot on peanut and the subsequent spread of the spores of C. arachidicola rely on favorable weather conditions. Accurate spatio-temporal weather information is crucial for monitoring the progression of favorable conditions and determining the potential threat of the disease. Therefore, the development of a prediction model for mitigating the risk of early leaf spot in peanut production is important. The specific objective of this study was to demonstrate the application of the high-resolution Weather Research and Forecasting (WRF) model for management of early leaf spot in peanut. We coupled high-resolution weather output of the WRF, i.e. relative humidity and temperature, with the Oklahoma peanut leaf spot advisory model to predict favorable conditions for early leaf spot infection over Georgia in 2007. Results showed more favorable infection conditions along the southeastern coastline of Georgia, where infection thresholds were met sooner, compared to the southwestern and central parts of Georgia, where the disease risk was lower. A newly introduced infection threat index indicates that the leaf spot threat threshold was met sooner at Alma, GA, than at Tifton and Cordele, GA. The short-term prediction of weather parameters and their use in the management of peanut diseases is a viable and promising technique, which could help growers make accurate management decisions and lower disease impact through optimum timing of fungicide applications.
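
    The abstract does not give the advisory model's actual rules, but the general shape of such an index can be sketched as the number of hours in which WRF-style hourly output falls inside an assumed favorable window of relative humidity and temperature. All thresholds and values below are illustrative assumptions, not the Oklahoma advisory values.

        # Generic infection-threat index from hourly weather output (illustrative
        # thresholds only; the actual Oklahoma peanut leaf spot advisory differs).
        import numpy as np

        def favorable_hours(rel_humidity, temperature,
                            rh_min=93.0, t_min=16.0, t_max=30.0):
            """Count hours in which RH and temperature are simultaneously favorable."""
            ok = (rel_humidity >= rh_min) & (temperature >= t_min) & (temperature <= t_max)
            return int(np.sum(ok))

        # hypothetical 48 h of WRF-derived hourly values for one grid cell
        rng = np.random.default_rng(0)
        rh = rng.uniform(70, 100, size=48)
        temp = rng.uniform(14, 32, size=48)

        hours = favorable_hours(rh, temp)
        threat_met = hours >= 12          # assumed advisory threshold
        print(hours, threat_met)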

  6. The time-profile of cell growth in fission yeast: model selection criteria favoring bilinear models over exponential ones

    Directory of Open Access Journals (Sweden)

    Sveiczer Akos

    2006-03-01

    Background: There is considerable controversy concerning the exact growth profile of size parameters during the cell cycle. Linear, exponential and bilinear models are commonly considered, and the same model may not apply for all species. Selection of the most adequate model to describe a given data-set requires the use of quantitative model selection criteria, such as the partial (sequential) F-test, the Akaike information criterion and the Schwarz Bayesian information criterion, which are suitable for comparing differently parameterized models in terms of the quality and robustness of the fit but have not yet been used in cell growth-profile studies. Results: Length increase data from representative individual fission yeast (Schizosaccharomyces pombe) cells measured on time-lapse films have been reanalyzed using these model selection criteria. To fit the data, an extended version of a recently introduced linearized biexponential (LinBiExp) model was developed, which makes possible a smooth, continuously differentiable transition between two linear segments and, hence, allows fully parametrized bilinear fittings. Despite relatively small differences, essentially all the quantitative selection criteria considered here indicated that the bilinear model was somewhat more adequate than the exponential model for fitting these fission yeast data. Conclusion: A general quantitative framework was introduced to judge the adequacy of bilinear versus exponential models in the description of growth time-profiles. For single cell growth, because of the relatively limited data-range, the statistical evidence is not strong enough to favor one model clearly over the other and to settle the bilinear versus exponential dispute. Nevertheless, for the present individual cell growth data for fission yeast, the bilinear model seems more adequate according to all metrics, especially in the case of wee1Δ cells.
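
    The kind of comparison described here can be sketched with a synthetic single-cell length series: fit an exponential curve and a smooth bilinear (LinBiExp-style) curve, then rank them by the least-squares form of the Akaike information criterion. The bilinear parameterization and the data below are illustrative assumptions, not the paper's exact model or measurements.

        # Compare exponential vs. smooth-bilinear fits to a cell-length time course.
        import numpy as np
        from scipy.optimize import curve_fit

        def exponential(t, l0, k):
            return l0 * np.exp(k * t)

        def bilinear(t, l0, a1, a2, tc, d):
            """Two linear segments joined by a smooth transition of width d around tc."""
            return l0 + a1 * t + (a2 - a1) * d * np.log1p(np.exp((t - tc) / d))

        def aic(y, y_hat, n_params):
            n = len(y)
            rss = np.sum((y - y_hat) ** 2)
            return n * np.log(rss / n) + 2 * n_params

        # hypothetical length measurements (um) over a 120-min cycle
        t = np.linspace(0, 120, 25)
        length = bilinear(t, 7.0, 0.02, 0.05, 70.0, 5.0) \
                 + np.random.default_rng(1).normal(0, 0.05, t.size)

        p_exp, _ = curve_fit(exponential, t, length, p0=[7.0, 0.005])
        p_bil, _ = curve_fit(bilinear, t, length,
                             p0=[7.0, 0.02, 0.05, 60.0, 5.0], maxfev=20000)

        print("AIC exponential:", aic(length, exponential(t, *p_exp), 2))
        print("AIC bilinear:   ", aic(length, bilinear(t, *p_bil), 5))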

  7. Search for nucleon decay via modes favored by supersymmetric grand unification models in Super-Kamiokande-I

    CERN Document Server

    Kobayashi, K; Ashie, Y; Hosaka, J; Ishihara, K; Itow, Y; Kameda, J; Koshio, Y; Minamino, A; Mitsuda, C; Miura, M; Moriyama, S; Nakahata, M; Namba, T; Nambu, R; Obayashi, Y; Shiozawa, M; Suzuki, Y; Takeuchi, Y; Taki, K; Yamada, S; Ishitsuka, M; Kajita, T; Kaneyuki, K; Nakayama, S; Okada, A; Okumura, K; Ooyabu, T; Saji, C; Takenaga, Y; Desai, S; Kearns, E; Likhoded, S; Stone, J L; Sulak, L R; Wang, W; Goldhaber, M; Casper, D; Cravens, J P; Gajewski, W; Kropp, W R; Liu, D W; Mine, S; Smy, M B; Sobel, H W; Sterner, C W; Vagins, M R; Ganezer, K S; Hill, J E; Keig, W E; Jang, J S; Kim, J Y; Lim, I T; Scholberg, K; Walter, C W; Ellsworth, R W; Tasaka, S; Guillian, G; Kibayashi, A; Learned, J G; Matsuno, S; Takemori, D; Messier, M D; Hayato, Y; Ichikawa, A K; Ishida, T; Ishii, T; Iwashita, T; Kobayashi, T; Maruyama, T; Nakamura, K; Nitta, K; Oyama, Y; Sakuda, M; Totsuka, Y; Suzuki, A T; Hasegawa, M; Hayashi, K; Kato, I; Maesaka, H; Morita, T; Nakadaira, T; Nakaya, T; Nishikawa, K; Sasaki, T; Ueda, S; Yamamoto, S; Yokoyama, M; Haines, T J; Dazeley, S; Hatakeyama, S; Svoboda, R; Blaufuss, E; Goodman, J A; Sullivan, G W; Turcan, D; Habig, A; Fukuda, Y; Jung, C K; Kato, T; Malek, M; Mauger, C; McGrew, C; Sarrat, A; Sharkey, E; Yanagisawa, C; Toshito, T; Miyano, K; Tamura, N; Ishii, J; Kuno, Y; Yoshida, M; Kim, S B; Yoo, J; Okazawa, H; Ishizuka, T; Choi, Y; Seo, H K; Gando, Y; Hasegawa, T; Inoue, K; Shirai, J; Suzuki, A; Koshiba, M; Nakajima, Y; Nishijima, K; Harada, T; Ishino, H; Watanabe, Y; Kielczewska, D; Zalipska, J; Berns, H G; Gran, R; Shiraishi, K K; Stachyra, A; Washburn, K; Wilkes, R J

    2005-01-01

    We report the results for nucleon decay searches via modes favored by supersymmetric grand unified models in Super-Kamiokande. Using 1489 days of full Super-Kamiokande-I data, we searched for $p \to \bar{\nu}K^{+}$ ...

  8. Treatment tolerance and patient-reported outcomes favor online hemodiafiltration compared to high-flux hemodialysis in the elderly.

    Science.gov (United States)

    Morena, Marion; Jaussent, Audrey; Chalabi, Lotfi; Leray-Moragues, Hélène; Chenine, Leila; Debure, Alain; Thibaudin, Damien; Azzouz, Lynda; Patrier, Laure; Maurice, Francois; Nicoud, Philippe; Durand, Claude; Seigneuric, Bruno; Dupuy, Anne-Marie; Picot, Marie-Christine; Cristol, Jean-Paul; Canaud, Bernard

    2017-03-15

    Large cohort studies suggest that high convective volumes associated with online hemodiafiltration may reduce the risk of mortality/morbidity compared to optimal high-flux hemodialysis. By contrast, intradialytic tolerance is not well studied. The aim of the FRENCHIE (French Convective versus Hemodialysis in Elderly) study was to compare high-flux hemodialysis and online hemodiafiltration in terms of intradialytic tolerance. In this prospective, open-label randomized controlled trial, 381 elderly chronic hemodialysis patients (over age 65) were randomly assigned in a one-to-one ratio to either high-flux hemodialysis or online hemodiafiltration. The primary outcome was intradialytic tolerance (day 30-day 120). Secondary outcomes included health-related quality of life, cardiovascular risk biomarkers, morbidity, and mortality. During the observational period for intradialytic tolerance, 85% and 84% of patients in high-flux hemodialysis and online hemodiafiltration arms, respectively, experienced at least one adverse event without significant difference between groups. As exploratory analysis, intradialytic tolerance was also studied, considering the sessions as a statistical unit according to treatment actually received. Over a total of 11,981 sessions, 2,935 were complicated by the occurrence of at least one adverse event, with a significantly lower occurrence in online hemodiafiltration with fewer episodes of intradialytic symptomatic hypotension and muscle cramps. By contrast, health-related quality of life, morbidity, and mortality were not different in both groups. An improvement in the control of metabolic bone disease biomarkers and β2-microglobulin level without change in serum albumin concentration was observed with online hemodiafiltration. Thus, overall outcomes favor online hemodiafiltration over high-flux hemodialysis in the elderly.

  9. Deficit of sand in a sediment transport model favors coral reef development in Brazil

    Directory of Open Access Journals (Sweden)

    Abílio C.S.P. Bittencourt

    2008-03-01

    This paper shows that the location of the shoreface bank reefs along the northeastern and eastern coasts of Brazil, in a first-order approximation, seems to be controlled by the deficit of sediment in the coastal system. The sediment transport pattern defined by numerical modeling of wave refraction diagrams, representing circa 2000 km of the northeastern and eastern coasts of Brazil, permitted the regional-scale reproduction of several drift cells of net longshore sediment transport. Those drift cells can reasonably explain the coastal sections that present sediment surplus or sediment deficit, which correspond, respectively, to regions where there is deposition and erosion or little/no deposition of sand. The sediment deficit allows the exposure and maintenance of rocky substrates free of sediment, a favorable condition for the fixation and development of coral larvae.

  10. Protein Models Comparator

    CERN Document Server

    Widera, Paweł

    2011-01-01

    The process of comparison of computer generated protein structural models is an important element of protein structure prediction. It has many uses including model quality evaluation, selection of the final models from a large set of candidates or optimisation of parameters of energy functions used in template free modelling and refinement. Although many protein comparison methods are available online on numerous web servers, their ability to handle a large scale model comparison is often very limited. Most of the servers offer only a single pairwise structural comparison, and they usually do not provide a model-specific comparison with a fixed alignment between the models. To bridge the gap between the protein and model structure comparison we have developed the Protein Models Comparator (pm-cmp). To be able to deliver the scalability on demand and handle large comparison experiments the pm-cmp was implemented "in the cloud". Protein Models Comparator is a scalable web application for a fast distributed comp...

  11. Time-based partitioning model for predicting neurologically favorable outcome among adults with witnessed bystander out-of-hospital CPA.

    Directory of Open Access Journals (Sweden)

    Toshikazu Abe

    BACKGROUND: Optimal acceptable time intervals from collapse to bystander cardiopulmonary resuscitation (CPR) for neurologically favorable outcome among adults with witnessed out-of-hospital cardiopulmonary arrest (CPA) have been unclear. Our aim was to assess the optimal acceptable thresholds of the time intervals of CPR for neurologically favorable outcome and survival using a recursive partitioning model. METHODS AND FINDINGS: From January 1, 2005 through December 31, 2009, we conducted a prospective population-based observational study across Japan involving consecutive out-of-hospital CPA patients (N = 69,648) who received witnessed bystander CPR. Of 69,648 patients, 34,605 were assigned to the derivation data set and 35,043 to the validation data set. The outcomes of interest were survival and neurologically favorable outcome at one month, defined as category one (good cerebral performance) or two (moderate cerebral disability) of the cerebral performance categories. Based on the recursive partitioning model built from the derivation dataset (n = 34,605) to predict the neurologically favorable outcome at one month, 5 min was the acceptable time interval from collapse to CPR initiation; 11 min from collapse to ambulance arrival; 18 min from collapse to return of spontaneous circulation (ROSC); and 19 min from collapse to hospital arrival. In the validation dataset (n = 35,043), 209/2,292 (9.1%) of all patients with the acceptable time intervals and 1,388/2,706 (52.1%) of the subgroup with the acceptable time intervals and pre-hospital ROSC showed neurologically favorable outcome. CONCLUSIONS: Initiation of CPR should occur within 5 min to obtain a neurologically favorable outcome among adults with witnessed out-of-hospital CPA. Patients with the acceptable time intervals of bystander CPR and pre-hospital ROSC within 18 min could have a 50% chance of neurologically favorable outcome.
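
    A minimal sketch of the threshold-finding idea: a depth-one classification tree applied to a synthetic collapse-to-CPR delay variable recovers a single data-driven cut point. The data, decay model and tree settings are illustrative assumptions, not the registry analysis.

        # Derive a collapse-to-CPR time threshold for favorable neurological outcome
        # with a shallow decision tree (synthetic data, not the Japanese registry).
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(42)
        n = 5000
        time_to_cpr = rng.uniform(0, 20, n)                  # minutes
        # assume the chance of a favorable outcome decays with delay
        p_favorable = 0.45 * np.exp(-time_to_cpr / 6.0)
        favorable = rng.random(n) < p_favorable

        tree = DecisionTreeClassifier(max_depth=1, min_samples_leaf=200)
        tree.fit(time_to_cpr.reshape(-1, 1), favorable)

        # the root split is the data-driven "acceptable" threshold
        print(f"threshold: {tree.tree_.threshold[0]:.1f} min")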

  12. Loblolly pine growth following operational vegetation management treatments compares favorably to that achieved in complete vegetation control research trials

    Science.gov (United States)

    Dwight K. Lauer; Harold E. Quicke

    2010-01-01

    Different combinations of chemical site prep and post-plant herbaceous weed control installed at three Upper Coastal Plain locations were compared in terms of year 3 loblolly (Pinus taeda L.) pine response to determine the better vegetation management regimes. Site prep treatments were different herbicide rates applied in either July or October. Site...

  13. Acceptability of a low-fat vegan diet compares favorably to a step II diet in a randomized, controlled trial.

    Science.gov (United States)

    Barnard, Neal D; Scialli, Anthony R; Turner-McGrievy, Gabrielle; Lanou, Amy J

    2004-01-01

    This study aimed to assess the acceptability of a low-fat vegan diet, as compared with a more typical fat-modified diet, among overweight and obese adults. Through newspaper advertisements, 64 overweight, postmenopausal women were recruited, 59 of whom completed the study. The participants were assigned randomly to a low-fat vegan diet or, for comparison, to a National Cholesterol Education Program Step II (NCEP) diet. At baseline and 14 weeks later, dietary intake, dietary restraint, disinhibition, and hunger, as well as the acceptability and perceived benefits and adverse effects of each diet, were assessed. Dietary restraint increased in the NCEP group compared with the vegan group. Disinhibition and hunger scores fell in each group. Vegan-group participants rated their diet as less easy to prepare than their usual diets. The acceptability of the low-fat vegan diet is high and not demonstrably different from that of a more moderate low-fat diet among well-educated, postmenopausal women in a research environment.

  14. Mixed observation favors motor learning through better estimation of the model's performance.

    Science.gov (United States)

    Andrieux, Mathieu; Proteau, Luc

    2014-10-01

    Observation contributes to motor learning. It was recently demonstrated that the observation of both a novice and an expert model (mixed observation) resulted in better learning of a complex spatio-temporal task than the observation of either a novice or an expert model alone. In the present study, we sought to determine whether the advantage of mixed observation resulted from the development of a better error detection mechanism. The results revealed that mixed observation resulted in a better estimation of the model's performance than that with other regimens of observation. The results also suggest that observational learning is improved when observation with knowledge of the results (KR) is followed by an observation phase without KR.

  15. Reparametrization of the least favorable submodel in semi-parametric multisample models

    CERN Document Server

    Hirose, Yuichi (doi:10.3150/10-BEJ342)

    2012-01-01

    The method of estimation in Scott and Wild (Biometrika 84 (1997) 57--71 and J. Statist. Plann. Inference 96 (2001) 3--27) uses a reparametrization of the profile likelihood that often reduces the computation times dramatically. Showing the efficiency of estimators for this method has been a challenging problem. In this paper, we try to solve the problem by investigating conditions under which the efficient score function and the efficient information matrix can be expressed in terms of the parameters in the reparametrized model.

  16. Favorable pendant-amino metal chelation in VX nerve agent model systems.

    Science.gov (United States)

    Bandyopadhyay, Indrajit; Kim, Min Jeong; Lee, Yoon Sup; Churchill, David G

    2006-03-16

    We have performed DFT computational studies [B3LYP, 6-31+G] to obtain metal ion coordination isomers of VX-Me [MeP(O)(OMe)(SCH2CH2NMe2)], a model of two of the most lethal nerve agents: VX [MeP(O)(OEt)(SCH2CH2N(iPr)2)] and Russian-VX [MeP(O)(OCH2CHMe2)(SCH2CH2N(Et)2)]. Our calculations involved geometry optimizations of the neutral VX-Me model as well as complexes with H+, Li+, Na+, K+, Be2+, Mg2+, and Ca2+ that yielded 2-8 different stable chelation modes for each ion that involved mainly mono- and bidentate binding. Importantly, our studies revealed that the [O(P),N] bidentate binding mode, long thought to be the active mode in differentiating the hydrolytic path of VX from other nerve agents, was the most stable for all ions studied here. Binding energy depended mainly on ionic size as well as charge, with binding energies ranging from 364 kcal mol(-1) for Be2+ to 33 kcal mol(-1) for K+. Furthermore, calculated NMR shifts for VX-Me correlate to experimental values of VX.

  17. Comparing root architectural models

    Science.gov (United States)

    Schnepf, Andrea; Javaux, Mathieu; Vanderborght, Jan

    2017-04-01

    Plant roots play an important role in several soil processes (Gregory 2006). Root architecture development determines the sites in soil where roots provide input of carbon and energy and take up water and solutes. However, root architecture is difficult to determine experimentally when roots are grown in opaque soil. Thus, root architectural models have been widely used and further developed into functional-structural models that are able to simulate the fate of water and solutes in the soil-root system (Dunbabin et al. 2013). Still, a systematic comparison of the different root architectural models is missing. In this work, we focus on discrete root architecture models where roots are described by connected line segments. These models differ (a) in their model concepts, such as the description of the distance between branches based on a prescribed distance (inter-nodal distance) or based on a prescribed time interval. Furthermore, these models differ (b) in the implementation of the same concept, such as the time step size, the spatial discretization along the root axes or the way stochasticity of parameters such as root growth direction, growth rate, branch spacing and branching angles is treated. Based on the example of two such different root models, the root growth modules of R-SWMS and RootBox, we show the impact of these differences on simulated root architecture and on aggregated information computed from these detailed simulation results, taking into account the stochastic nature of those models. References: Dunbabin, V.M., Postma, J.A., Schnepf, A., Pagès, L., Javaux, M., Wu, L., Leitner, D., Chen, Y.L., Rengel, Z., Diggle, A.J. (2013) Modelling root-soil interactions using three-dimensional models of root growth, architecture and function. Plant and Soil, 372 (1-2), pp. 93-124. Gregory (2006) Roots, rhizosphere and soil: the route to a better understanding of soil science? European Journal of Soil Science 57: 2-12.

  18. Simulating the 'other-race' effect with autoassociative neural networks: further evidence in favor of the face-space model.

    Science.gov (United States)

    Caldara, Roberto; Abdi, Hervé

    2006-01-01

    Other-race (OR) faces are less accurately recognized than same-race (SR) faces, but are classified by race faster. This phenomenon has often been reported as the 'other-race' effect (ORE). Valentine (1991, Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 43, 161-204) proposed a theoretical multidimensional face-space model that explains both of these results in terms of variations in exemplar density between races. According to this model, SR faces are more widely distributed across the dimensions of the space than OR faces. However, this model neither quantifies nor specifies the dimensions coded within this face space. The aim of the present study was to test the face-space explanation of the ORE with neural network simulations by quantifying its dimensions. We found the predicted density properties of Valentine's framework in the face-projection spaces of the autoassociative memories. This was supported by an interaction for exemplar density between the race of the learned face set and the race of the faces. In addition, the elaborated face representations showed optimal responses for SR but not for OR faces within SR face spaces when explored at the individual level, as gender errors occurred significantly more often in OR than in SR face-space representations. Altogether, our results add further evidence in favor of a statistical exemplar-density explanation of the ORE, as suggested by Valentine, and question the plausibility of such coding for faces in the framework of recent neuroimaging studies.

  19. Genetic network models: a comparative study

    Science.gov (United States)

    van Someren, Eugene P.; Wessels, Lodewyk F. A.; Reinders, Marcel J. T.

    2001-06-01

    Currently, the need arises for tools capable of unraveling the functionality of genes based on the analysis of microarray measurements. Modeling genetic interactions by means of genetic network models provides a methodology to infer functional relationships between genes. Although a wide variety of different models have been introduced so far, it remains, in general, unclear what the strengths and weaknesses of each of these approaches are and where these models overlap and differ. This paper compares different genetic modeling approaches that attempt to extract the gene regulation matrix from expression data. A taxonomy of continuous genetic network models is proposed and the following important characteristics are suggested and employed to compare the models: inferential power; predictive power; robustness; consistency; stability and computational cost. Where possible, synthetic time series data are employed to investigate some of these properties. The comparison shows that although genetic network modeling might provide valuable information regarding genetic interactions, current models show disappointing results on simple artificial problems. For now, the simplest models are favored because they generalize better, but more complex models will probably prevail once their bias is more thoroughly understood and their variance is better controlled.
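
    As a toy illustration of the simplest continuous/linear model class surveyed here, a gene regulation matrix can be estimated from time-series expression data by least squares under an assumed linear update x[t+1] ≈ W x[t]. The data, dimensions and noise levels below are synthetic assumptions.

        # Infer a gene regulation matrix from synthetic expression time series
        # under an assumed linear model x[t+1] = W @ x[t] + noise.
        import numpy as np

        rng = np.random.default_rng(7)
        n_genes, n_steps = 5, 200

        W_true = rng.normal(0, 0.3, (n_genes, n_genes))
        X = np.zeros((n_steps, n_genes))
        X[0] = rng.normal(1.0, 0.1, n_genes)
        for t in range(n_steps - 1):
            X[t + 1] = W_true @ X[t] + rng.normal(0, 0.05, n_genes)

        # least squares: X[1:] ~= X[:-1] @ W.T, solved column by column
        W_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
        W_hat = W_hat.T

        print("largest error in a recovered interaction weight:",
              float(np.abs(W_hat - W_true).max()))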

  20. The time-profile of cell growth in fission yeast: model selection criteria favoring bilinear models over exponential ones

    OpenAIRE

    Sveiczer Akos; Buchwald Peter

    2006-01-01

    Background: There is considerable controversy concerning the exact growth profile of size parameters during the cell cycle. Linear, exponential and bilinear models are commonly considered, and the same model may not apply for all species. Selection of the most adequate model to describe a given data-set requires the use of quantitative model selection criteria, such as the partial (sequential) F-test, the Akaike information criterion and the Schwarz Bayesian information criterion, whi...

  1. Comparing different dynamic stall models

    Energy Technology Data Exchange (ETDEWEB)

    Holierhoek, J.G. [Unit Wind Energy, Energy research Centre of the Netherlands, ZG, Petten (Netherlands); De Vaal, J.B.; Van Zuijlen, A.H.; Bijl, H. [Aerospace Engineering, Delft University of Technology, Delft (Netherlands)

    2012-07-16

    The dynamic stall phenomenon and its importance for load calculations and aeroelastic simulations are well known. Different models exist to describe the effect of dynamic stall; however, a systematic comparison is still lacking. To investigate whether one model performs better than another, three models are used to simulate the Ohio State University measurements and a set of data from the National Aeronautics and Space Administration (NASA) Ames experimental study of dynamic stall, and the results are compared. These measurements were made at conditions and for aerofoils that are typical for wind turbines, and the results are publicly available. The three selected dynamic stall models are the ONERA model, the Beddoes-Leishman model and the Snel model. The simulations show that there are still significant differences between measurements and models and that none of the models is significantly better than the others in all cases. Especially in the deep stall regime, the accuracy of each of the dynamic stall models is limited.

  2. Comparative Protein Structure Modeling Using MODELLER.

    Science.gov (United States)

    Webb, Benjamin; Sali, Andrej

    2016-06-20

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc.
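
    A minimal MODELLER script along the lines of the TvLDH example mentioned above might look as follows; it assumes a PIR-format target-template alignment file and the template PDB file are already in the working directory, and the wildcard imports follow the library's own documented usage.

        # Minimal comparative-modeling run with MODELLER (TvLDH example;
        # file names assume an existing alignment and template structure).
        from modeller import *              # environ and core classes
        from modeller.automodel import *    # automodel and assess methods

        env = environ()
        env.io.atom_files_directory = ['.']          # where template PDB files live

        a = automodel(env,
                      alnfile='TvLDH-1bdmA.ali',     # target-template alignment (PIR)
                      knowns='1bdmA',                # template structure
                      sequence='TvLDH',              # target sequence
                      assess_methods=(assess.DOPE,)) # score each model with DOPE
        a.starting_model = 1
        a.ending_model = 5                           # build five candidate models
        a.make()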

  3. The sizes of mini-voids in the local universe: an argument in favor of a warm dark matter model?

    CERN Document Server

    Tikhonov, A V; Yepes, G; Hoffman, Y

    2009-01-01

    Using high-resolution simulations within the Cold and Warm Dark Matter models we study the evolution of small scale structure in the Local Volume, a sphere of 8 Mpc radius around the Local Group. We compare the observed spectrum of mini-voids in the Local Volume with the spectrum of mini-voids determined from the simulations. We show that the ΛWDM model can easily explain both the observed spectrum of mini-voids and the presence of low-mass galaxies observed in the Local Volume, provided that all haloes with circular velocities greater than 20 km/s host galaxies. On the contrary, within the ΛCDM model the distribution of the simulated mini-voids reflects the observed one if haloes with maximal circular velocities larger than 35 km/s host galaxies. This assumption is in contradiction with observations of galaxies with circular velocities as low as 20 km/s in our Local Universe. A potential problem of the ΛWDM model could be the late formation of the haloes in which the gas can be efficiently photo-evaporat...

  4. The Genetic Mechanism and Model of Deep-Basin Gas Accumulation and Methods for Predicting the Favorable Areas

    Institute of Scientific and Technical Information of China (English)

    WANG Tao; PANG Xiongqi; MA Xinhua; JIN Zhijun; JIANG Zhenxue

    2003-01-01

    As a kind of abnormal natural gas formed with a special mechanism, the deep-basin gas, accumulated in the lower parts of a basin or syncline and trapped by a tight reservoir, has such characteristics as gas-water inversion, abnormal pressure, continuous distribution and tremendous reserves. Being a geological product of the evolution of petroliferous basins by the end of the middle-late stages, the formation of a deep-basin gas accumulation must meet four conditions, i.e., continuous and sufficient gas supply, tight reservoirs in continuous distribution, good sealing caps and stable structures. The areas where the expansion force of natural gas is smaller than the sum of the capillary force and the hydrostatic pressure within tight reservoirs are favorable for forming deep-basin gas pools. The range delineated by the above two forces corresponds to that of the deep-basin gas trap. Within the scope of the deep-basin gas trap, the balance relationship between the amounts of ingoing and overflowing gases determines the gas-bearing area of the deep-basin gas pool. The gas volume in regions with high porosity and high permeability is worth exploring under current technical conditions and it is equivalent to the practical resources (about 10%-20% of the deep-basin gas). Based on studies of deep-basin gas formation conditions, the theory of force balance and the equation of material balance, the favorable areas and gas-containing ranges, as well as possible gas-rich regions, are preliminarily predicted in the deep-basin gas pools in the Upper Paleozoic He-8 segment of the Ordos basin.

  5. High host density favors greater virulence: a model of parasite-host dynamics based on multi-type branching processes

    CERN Document Server

    Borovkov, Konstantin; Rice, Timothy

    2011-01-01

    We use a multitype continuous time Markov branching process model to describe the dynamics of the spread of parasites of two types that can mutate into each other in a common host population. Instead of using a single virulence characteristic which is typical of most mathematical models for infectious diseases, our model uses a combination of two characteristics: lethality and transmissibility. This makes the model capable of reproducing the empirically observed fact that the increase in the host density can lead to the prevalence of the more virulent pathogen type. We provide some numerical illustrations and discuss the effects of the size of the enclosure containing the host population on the encounter rate in our model that plays the key role in determining what pathogen type will eventually prevail. We also present a multistage extension of the model to situations where there are several populations and parasites can be transmitted from one of them to another.
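
    The paper's model is a continuous-time multitype branching process; the discrete-step toy simulation below only illustrates the key ingredients it names: a lethality/transmissibility pair per parasite type, mutation between types, and a host-density-scaled encounter rate. All parameter values are arbitrary assumptions.

        # Toy two-type birth-death-mutation simulation (small time steps
        # approximating a continuous-time branching process): each lineage
        # transmits at rate beta * host_density and dies at rate delta;
        # offspring mutate to the other type with a small probability.
        import numpy as np

        rng = np.random.default_rng(3)
        # (death rate delta, transmission rate beta) -- arbitrary illustrative values
        params = {"mild": (0.1, 1.2), "virulent": (0.8, 2.0)}
        mutation, dt, host_density = 0.01, 0.05, 1.5

        counts = {"mild": 100, "virulent": 100}
        for step in range(200):                          # 10 time units
            new = {}
            for ptype, n in counts.items():
                delta, beta = params[ptype]
                survivors = rng.binomial(n, 1.0 - delta * dt)
                births = int(rng.poisson(beta * host_density * n * dt))
                mutants = rng.binomial(births, mutation) if births else 0
                other = "virulent" if ptype == "mild" else "mild"
                new[ptype] = new.get(ptype, 0) + survivors + births - mutants
                new[other] = new.get(other, 0) + mutants
            counts = new

        # at this host density the more lethal type outgrows the mild one;
        # lowering host_density to about 0.3 reverses the outcome
        print(counts)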

  6. Are Fit Indices Biased in Favor of Bi-Factor Models in Cognitive Ability Research?: A Comparison of Fit in Correlated Factors, Higher-Order, and Bi-Factor Models via Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Grant B. Morgan

    2015-02-01

    Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.

  7. Is Model-Based Development a Favorable Approach for Complex and Safety-Critical Computer Systems on Commercial Aircraft?

    Science.gov (United States)

    Torres-Pomales, Wilfredo

    2014-01-01

    A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.

  8. Gaze patterns reveal how texts are remembered: A mental model of what was described is favored over the text itself

    DEFF Research Database (Denmark)

    Traub, Franziska; Johansson, Roger; Holmqvist, Kenneth

    Several studies have reported that spontaneous eye movements occur when visuospatial information is recalled from memory. Such gazes closely reflect the content and spatial relations from the original scene layout (e.g., Johansson et al., 2012). However, when someone has originally read a scene description, the memory of the physical layout of the text itself might compete with the memory of the spatial arrangement of the described scene. The present study was designed to address this fundamental issue by having participants read scene descriptions that were manipulated to be either congruent … Recollection was performed orally while gazing at a blank screen. Results demonstrate that participants' gaze patterns during recall more closely reflect the spatial layout of the scene than the physical locations of the text. We conclude that participants formed a mental model that represents the content…

  9. The B/C and sub-Iron/Iron Cosmic ray ratios - further evidence in favor of the spiral arm diffusion model

    CERN Document Server

    Benyamin, David; Piran, Tsvi; Shaviv, Nir J.

    2016-01-01

    The Boron to Carbon (B/C) and sub-Fe/Fe ratios provide an important clue to Cosmic Ray (CR) propagation within the Galaxy. These ratios estimate the grammage that the CRs traverse as they propagate from their sources to Earth. Attempts to explain these ratios within standard CR propagation models require ad hoc modifications, and even then the models need inconsistent grammages to explain the two ratios. As an alternative, physically motivated model, we have proposed that CRs originate preferentially within the galactic spiral arms. CR propagation from dynamic spiral arms has important imprints on various secondary to primary ratios, such as the B/C ratio and the positron fraction. We use our spiral arm diffusion model with the spallation network extended up to Nickel to calculate the sub-Fe/Fe ratio. We show that without any additional parameters the spiral arm model consistently explains both ratios with the same grammage, providing further evidence in favor of this model.

  10. Comparing the Discrete and Continuous Logistic Models

    Science.gov (United States)

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
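
    The same comparison the article makes in a spreadsheet can be sketched in a few lines of code, assuming the standard parameterizations of the two models.

        # Compare the discrete logistic map P[n+1] = P[n] + r*P[n]*(1 - P[n]/K)
        # with the continuous logistic solution P(t) = K / (1 + ((K - P0)/P0) e^{-rt}).
        import numpy as np

        r, K, P0, steps = 0.5, 100.0, 5.0, 20

        discrete = [P0]
        for _ in range(steps):
            P = discrete[-1]
            discrete.append(P + r * P * (1.0 - P / K))

        t = np.arange(steps + 1)
        continuous = K / (1.0 + ((K - P0) / P0) * np.exp(-r * t))

        for n in (0, 5, 10, 20):
            print(n, round(discrete[n], 2), round(float(continuous[n]), 2))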

  12. Comparative dynamics in a health investment model.

    Science.gov (United States)

    Eisenring, C

    1999-10-01

    The method of comparative dynamics fully exploits the inter-temporal structure of optimal control models. I derive comparative dynamic results in a simplified demand for health model. The effect of a change in the depreciation rate on the optimal paths for health capital and investment in health is studied by use of a phase diagram.

  13. Loss Given Default Modelling: Comparative Analysis

    OpenAIRE

    Yashkir, Olga; Yashkir, Yuriy

    2013-01-01

    In this study we investigated several of the most popular Loss Given Default (LGD) models (LSM, Tobit, Three-Tiered Tobit, Beta Regression, Inflated Beta Regression, Censored Gamma Regression) in order to compare their performance. We show that for a given input data set, the quality of the model calibration depends mainly on the proper choice (and availability) of explanatory variables (model factors), but not on the fitting model. Model factors were chosen based on the amplitude of their correlati...

  14. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    Many business process modelling techniques are now available. This article investigates the differences among them. For each technique, we explain its definition and structure. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: the notation and how the technique works when implemented in Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and that can serve as the basis for evaluating further modelling techniques.

  15. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  16. Major models used in comparative management

    OpenAIRE

    Ioan Constantin Dima; Codruta Dura

    2001-01-01

    Comparative management literature emphasizes the following models: the Farmer-Richman model (based on the assumption that the environment represents the main factor whose influence upon management is decisive); the Rosalie Tung model (using the following variables: environment, or extra-organisational, variables, intra-organisational variables, personal and result variables); and the Child model (including the three determinative domains - contingency, culture and economic system - treated as items objectively connecte...

  17. Bofu-tsu-shosan, an oriental herbal medicine, exerts a combinatorial favorable metabolic modulation including antihypertensive effect on a mouse model of human metabolic disorders with visceral obesity.

    Directory of Open Access Journals (Sweden)

    Kengo Azushima

    Accumulating evidence indicates that metabolic dysfunction with visceral obesity is a major medical problem associated with the development of hypertension, type 2 diabetes (T2DM) and dyslipidemia, and ultimately severe cardiovascular and renal disease. Therefore, an effective anti-obesity treatment with a concomitant improvement in metabolic profile is important for the treatment of metabolic dysfunction with visceral obesity. Bofu-tsu-shosan (BOF) is an oriental herbal medicine that is clinically available to treat obesity in Japan. Although BOF is a candidate novel therapeutic strategy to improve metabolic dysfunction with obesity, the mechanism of its beneficial effect is not fully elucidated. Here, we investigated the mechanism of the therapeutic effects of BOF in KKAy mice, a model of human metabolic disorders with obesity. Chronic treatment of KKAy mice with BOF persistently decreased food intake, body weight gain, low-density lipoprotein cholesterol and systolic blood pressure. In addition, both the tissue weight and cell size of white adipose tissue (WAT) were decreased, with concomitant increases in the expression of adiponectin and peroxisome proliferator-activated receptor genes in WAT as well as in the circulating adiponectin level, by BOF treatment. Furthermore, gene expression of uncoupling protein-1, a thermogenesis factor, in brown adipose tissue and rectal temperature were both elevated by BOF. Intriguingly, plasma acylated ghrelin, an active form of the orexigenic hormone, and short-term food intake were significantly decreased by a single bolus administration of BOF. These results indicate that BOF exerts a combinatorial favorable metabolic modulation, including an antihypertensive effect, at least partially via its beneficial effect on adipose tissue function and its appetite-inhibitory property through suppression of the ghrelin system.

  18. Is it Worth Comparing Different Bankruptcy Models?

    Directory of Open Access Journals (Sweden)

    Miroslava Dolejšová

    2015-01-01

    The aim of this paper is to compare the performance of small enterprises in the Zlín and Olomouc Regions. These enterprises were assessed using the Altman Z-Score model, the IN05 model, the Zmijewski model and the Springate model. The batch selected for this analysis included 16 enterprises from the Zlín Region and 16 enterprises from the Olomouc Region. Financial statements subjected to the analysis are from 2006 and 2010. The statistical data analysis was performed using the one-sample z-test for proportions and the paired t-test. The outcomes of the evaluation run using the Altman Z-Score model, the IN05 model and the Springate model revealed the enterprises to be financially sound, but the Zmijewski model identified them as being insolvent. The one-sample z-test for proportions confirmed that at least 80% of these enterprises show a sound financial condition. A comparison of all models has emphasized the substantial difference produced by the Zmijewski model. The paired t-test showed that the financial performance of small enterprises had remained the same during the years involved. It is recommended that small enterprises assess their financial performance using two different bankruptcy models. They may wish to combine the Zmijewski model with any bankruptcy model (the Altman Z-Score model, the IN05 model or the Springate model) to ensure a proper method of analysis.
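
    For reference, the original Altman Z-score can be computed as below. The 1968 coefficients and cut-offs shown are for publicly traded manufacturing firms, whereas small-enterprise studies such as this one typically rely on variant coefficients, and the balance-sheet figures are invented.

        # Altman Z-score (original 1968 coefficients for public manufacturing firms).
        def altman_z(working_capital, retained_earnings, ebit,
                     equity_market_value, sales, total_assets, total_liabilities):
            x1 = working_capital / total_assets
            x2 = retained_earnings / total_assets
            x3 = ebit / total_assets
            x4 = equity_market_value / total_liabilities
            x5 = sales / total_assets
            return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

        # hypothetical balance-sheet figures (thousand CZK)
        z = altman_z(working_capital=800, retained_earnings=1200, ebit=600,
                     equity_market_value=2500, sales=7000,
                     total_assets=5000, total_liabilities=2000)
        zone = "safe" if z > 2.99 else ("grey" if z >= 1.81 else "distress")
        print(round(z, 2), zone)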

  19. Comparing repetition-based melody segmentation models

    NARCIS (Netherlands)

    Rodríguez López, M.E.; de Haas, Bas; Volk, Anja

    2014-01-01

    This paper reports on a comparative study of computational melody segmentation models based on repetition detection. For the comparison we implemented five repetition-based segmentation models, and subsequently evaluated their capacity to automatically find melodic phrase boundaries in a corpus of 2...

  20. A microbial model of economic trading and comparative advantage.

    Science.gov (United States)

    Enyeart, Peter J; Simpson, Zachary B; Ellington, Andrew D

    2015-01-07

    The economic theory of comparative advantage postulates that beneficial trading relationships can be arrived at by two self-interested entities producing the same goods as long as they have opposing relative efficiencies in producing those goods. The theory predicts that upon entering trade, in order to maximize consumption both entities will specialize in producing the good they can produce at higher efficiency, that the weaker entity will specialize more completely than the stronger entity, and that both will be able to consume more goods as a result of trade than either would be able to alone. We extend this theory to the realm of unicellular organisms by developing mathematical models of genetic circuits that allow trading of a common good (specifically, signaling molecules) required for growth in bacteria in order to demonstrate comparative advantage interactions. In Conception 1, the experimenter controls production rates via exogenous inducers, allowing exploration of the parameter space of specialization. In Conception 2, the circuits self-regulate via feedback mechanisms. Our models indicate that these genetic circuits can demonstrate comparative advantage, and that cooperation in such a manner is particularly favored under stringent external conditions and when the cost of production is not overly high. Further work could involve implementing the models in living bacteria and searching for naturally occurring cooperative relationships between bacteria that conform to the principles of comparative advantage.
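
    A tiny numeric check of the comparative-advantage prediction with two producers and two goods is sketched below; the production rates are arbitrary assumptions rather than the paper's genetic-circuit parameters.

        # Two strains, two signaling molecules: check that specialization plus trade
        # beats self-sufficiency (arbitrary efficiencies; weak strain's opportunity
        # cost of good B is lower, so it holds the comparative advantage in B).
        import numpy as np

        strong = {"A": 10.0, "B": 8.0}   # units per hour at full effort
        weak = {"A": 2.0, "B": 4.0}

        def autarky(r):
            """Output if a strain alone splits effort to get equal amounts of A and B."""
            x = r["A"] * r["B"] / (r["A"] + r["B"])
            return np.array([x, x])

        alone = autarky(strong) + autarky(weak)

        # trade: weak specializes fully in B; strong makes the remaining B and
        # spends the rest of its effort on A
        b_from_strong = alone[1] - weak["B"]
        effort_b = b_from_strong / strong["B"]
        with_trade = np.array([strong["A"] * (1 - effort_b),
                               b_from_strong + weak["B"]])

        print("total without trade (A, B):", alone.round(2))       # [5.78 5.78]
        print("total with trade    (A, B):", with_trade.round(2))  # same B, more A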

  1. Comparing models of Red Knot population dynamics

    Science.gov (United States)

    McGowan, Conor

    2015-01-01

    Predictive population modeling contributes to our basic scientific understanding of population dynamics, but can also inform management decisions by evaluating alternative actions in virtual environments. Quantitative models mathematically reflect scientific hypotheses about how a system functions. In Delaware Bay, mid-Atlantic Coast, USA, to more effectively manage horseshoe crab (Limulus polyphemus) harvests and protect Red Knot (Calidris canutus rufa) populations, models are used to compare harvest actions and predict the impacts on crab and knot populations. Management has been chiefly driven by the core hypothesis that horseshoe crab egg abundance governs the survival and reproduction of migrating Red Knots that stopover in the Bay during spring migration. However, recently, hypotheses proposing that knot dynamics are governed by cyclical lemming dynamics garnered some support in data analyses. In this paper, I present alternative models of Red Knot population dynamics to reflect alternative hypotheses. Using 2 models with different lemming population cycle lengths and 2 models with different horseshoe crab effects, I project the knot population into the future under environmental stochasticity and parametric uncertainty with each model. I then compare each model's predictions to 10 yr of population monitoring from Delaware Bay. Using Bayes' theorem and model weight updating, models can accrue weight or support for one or another hypothesis of population dynamics. With 4 models of Red Knot population dynamics and only 10 yr of data, no hypothesis clearly predicted population count data better than another. The collapsed lemming cycle model performed best, accruing ~35% of the model weight, followed closely by the horseshoe crab egg abundance model, which accrued ~30% of the weight. The models that predicted no decline or stable populations (i.e. the 4-yr lemming cycle model and the weak horseshoe crab effect model) were the most weakly supported.
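
    The weight-updating step described here can be sketched as a single application of Bayes' theorem: each model's predicted count distribution is evaluated at the observed count, and the prior model weights are renormalized. All numbers below are illustrative, not the Delaware Bay monitoring data.

        # Update weights over competing population models from one year of count data.
        import numpy as np
        from scipy.stats import norm

        models = ["4-yr lemming cycle", "collapsed lemming cycle",
                  "strong crab effect", "weak crab effect"]
        prior = np.full(4, 0.25)

        # each model's prediction for this year's count (mean, sd) -- illustrative
        predicted_mean = np.array([46000, 42000, 40000, 45000])
        predicted_sd = np.array([4000, 4000, 4000, 4000])

        observed = 41500   # hypothetical monitoring count

        likelihood = norm.pdf(observed, loc=predicted_mean, scale=predicted_sd)
        posterior = prior * likelihood
        posterior /= posterior.sum()

        for m, w in zip(models, posterior):
            print(f"{m:25s} {w:.2f}")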

  2. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  3. Comparing and Using Occupation-Focused Models.

    Science.gov (United States)

    Wong, Su Ren; Fisher, Gail

    2015-01-01

    As health care moves toward understanding the importance of function, participation and occupation, occupational therapists would be well served to use occupation-focused theories to guide intervention. Most therapists understand that applying occupation-focused models supports best practice, but many do not routinely use these models. Barriers to application of theory include lack of understanding of the models and limited strategies to select and apply them for maximum client benefit. The aim of this article is to compare occupation-focused models and provide recommendations on how to choose and combine these models in practice; and to provide a systematic approach for integrating occupation-focused models with frames of reference to guide assessment and intervention.

  4. Comparing coefficients of nested nonlinear probability models

    DEFF Research Database (Denmark)

    Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders

    2011-01-01

    In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability...
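
    The core of the KHB idea can be sketched for a logit model: instead of comparing the full model with a naive reduced model, Z is replaced in the reduced model by the residual of Z regressed on X, so both X coefficients are on the same scale and their difference measures what Z mediates. The data below are synthetic, and this shows only the central step, not the khb program itself.

        # KHB-style comparison of a logit coefficient across nested models.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 20000
        x = rng.normal(size=n)
        z = 0.6 * x + rng.normal(size=n)                   # mediator correlated with x
        y_star = 1.0 * x + 0.8 * z + rng.logistic(size=n)  # latent outcome
        y = (y_star > 0).astype(int)

        X_full = sm.add_constant(np.column_stack([x, z]))
        full = sm.Logit(y, X_full).fit(disp=0)

        # "reduced" model: replace z by the residual of z regressed on x
        resid = sm.OLS(z, sm.add_constant(x)).fit().resid
        X_reduced = sm.add_constant(np.column_stack([x, resid]))
        reduced = sm.Logit(y, X_reduced).fit(disp=0)

        print("beta_x, full model:   ", round(float(full.params[1]), 3))
        print("beta_x, reduced model:", round(float(reduced.params[1]), 3))
        print("mediated by z:        ", round(float(reduced.params[1] - full.params[1]), 3))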

  5. Using occupancy modeling and logistic regression to assess the distribution of shrimp species in lowland streams, Costa Rica: Does regional groundwater create favorable habitat?

    Science.gov (United States)

    Snyder, Marcia; Freeman, Mary C.; Purucker, S. Thomas; Pringle, Catherine M.

    2016-01-01

    Freshwater shrimps are an important biotic component of tropical ecosystems. However, they can have a low probability of detection when abundances are low. We sampled 3 of the most common freshwater shrimp species, Macrobrachium olfersii, Macrobrachium carcinus, and Macrobrachium heterochirus, and used occupancy modeling and logistic regression models to improve our limited knowledge of distribution of these cryptic species by investigating both local- and landscape-scale effects at La Selva Biological Station in Costa Rica. Local-scale factors included substrate type and stream size, and landscape-scale factors included presence or absence of regional groundwater inputs. Capture rates for 2 of the sampled species (M. olfersii and M. carcinus) were sufficient to compare the fit of occupancy models. Occupancy models did not converge for M. heterochirus, but M. heterochirus had high enough occupancy rates that logistic regression could be used to model the relationship between occupancy rates and predictors. The best-supported models for M. olfersii and M. carcinus included conductivity, discharge, and substrate parameters. Stream size was positively correlated with occupancy rates of all 3 species. High stream conductivity, which reflects the quantity of regional groundwater input into the stream, was positively correlated with M. olfersii occupancy rates. Boulder substrates increased occupancy rate of M. carcinus and decreased the detection probability of M. olfersii. Our models suggest that shrimp distribution is driven by factors that function at local (substrate and discharge) and landscape (conductivity) scales.
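
    The logistic-regression part of such an analysis (the approach used when occupancy models do not converge) can be sketched as follows; the column names, coefficients and data are hypothetical stand-ins for the covariates named in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch: model site occupancy (0/1) as a function of local-
# and landscape-scale covariates (conductivity as a proxy for regional
# groundwater, discharge as stream size, a substrate indicator).
rng = np.random.default_rng(7)
n = 120
sites = pd.DataFrame({
    "conductivity": rng.normal(100, 30, n),
    "discharge": rng.lognormal(0, 0.5, n),
    "boulder": rng.integers(0, 2, n),
})
logit_p = -6 + 0.04 * sites["conductivity"] + 1.0 * np.log(sites["discharge"])
sites["occupied"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("occupied ~ conductivity + np.log(discharge) + boulder",
                  data=sites).fit(disp=0)
print(model.summary().tables[1])
```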

  6. “Covalent Hydration” Reactions in Model Monomeric Ru 2,2'-Bipyridine Complexes: Thermodynamic Favorability as a Function of Metal Oxidation and Overall Spin States

    Energy Technology Data Exchange (ETDEWEB)

    Ozkanlar, Abdullah; Cape, Jonathan L.; Hurst, James K.; Clark, Aurora E.

    2011-09-05

    Density functional theory (DFT) has been used to investigate the plausibility of water addition to the simple mononuclear ruthenium complexes, [(NH₃)₃(bpy)Ru=O]²⁺/³⁺ and [(NH₃)₃(bpy)RuOH]³⁺, in which the OH fragment adds to the 2,2′-bipyridine (bpy) ligand. Activation of bpy toward water addition has frequently been postulated within the literature, although there exists little definitive experimental evidence for this type of 'covalent hydration'. In this study, we examine the energetic dependence of the reaction upon metal oxidation state, overall spin state of the complex, as well as selectivity for various positions on the bipyridine ring. The thermodynamic favorability is found to be highly dependent upon all three parameters, with free energies of reaction that span favorable and unfavorable regimes. Aqueous addition to [(NH₃)₃(bpy)Ru=O]³⁺ was found to be highly favorable for the S = 1/2 state, while reduction of the formal oxidation state on the metal center makes the reaction highly unfavorable. Examination of both facial and meridional isomers reveals that when bipyridine occupies the position trans to the ruthenyl oxo atom, reactivity toward OH addition decreases and the site preferences are altered. The electronic structure and spectroscopic signatures (EPR parameters and simulated spectra) have been determined to aid in recognition of 'covalent hydration' in experimental systems. EPR parameters are found to uniquely characterize the position of the OH addition to the bpy as well as the overall spin state of the system.

  7. Comparing numerically exact and modelled static friction

    Directory of Open Access Journals (Sweden)

    Krengel Dominik

    2017-01-01

    Currently there exists no mechanically consistent “numerically exact” implementation of static and dynamic Coulomb friction for general soft-particle simulations with arbitrary contact situations in two or three dimensions, but only along one dimension. We outline a differential-algebraic equation approach for a “numerically exact” computation of friction in two dimensions and compare its application to the Cundall-Strack model in some test cases.

  8. Comparative analysis of Goodwin's business cycle models

    Science.gov (United States)

    Antonova, A. O.; Reznik, S.; Todorov, M. D.

    2016-10-01

    We compare the behavior of solutions of Goodwin's business cycle equation in the form of a neutral delay differential equation with fixed delay (NDDE model) and in the form of differential equations of 3rd, 4th and 5th order (ODE models). Such ODE models (Taylor series expansions of the NDDE in powers of θ) were proposed by N. Dharmaraj and K. Vela Velupillai [6] for investigation of the short periodic sawtooth oscillations in the NDDE. We show that the ODEs of 3rd, 4th and 5th order may approximate the asymptotic behavior of only the main Goodwin mode, but not the sawtooth modes. If the order of the Taylor series expansion exceeds 5, then the approximate ODE becomes unstable independently of the time lag θ.

  9. Minimalist models for proteins: a comparative analysis.

    Science.gov (United States)

    Tozzini, Valentina

    2010-08-01

    The last decade has witnessed a renewed interest in the coarse-grained (CG) models for biopolymers, also stimulated by the needs of modern molecular biology, dealing with nano- to micro-sized bio-molecular systems and larger than microsecond timescale. This combination of size and timescale is, in fact, hard to access by atomic-based simulations. Coarse graining the system is a route to be followed to overcome these limits, but the ways of practically implementing it are many and different, making the landscape of CG models very vast and complex. In this paper, the CG models are reviewed and their features, applications and performances compared. This analysis, restricted to proteins, focuses on the minimalist models, namely those reducing at minimum the number of degrees of freedom without losing the possibility of explicitly describing the secondary structures. This class includes models using a single or a few interacting centers (beads) for each amino acid. From this analysis several issues emerge. The difficulty in building these models resides in the need for combining transferability/predictive power with the capability of accurately reproducing the structures. It is shown that these aspects could be optimized by accurately choosing the force field (FF) terms and functional forms, and combining different parameterization procedures. In addition, in spite of the variety of the minimalist models, regularities can be found in the parameters values and in FF terms. These are outlined and schematically presented with the aid of a generic phase diagram of the polypeptide in the parameter space and, hopefully, could serve as guidelines for the development of minimalist models incorporating the maximum possible level of predictive power and structural accuracy.

  10. Danish Holsteins favor bull offspring

    DEFF Research Database (Denmark)

    Græsbøll, Kaare; Kirkeby, Carsten; Nielsen, Søren Saxmose

    2015-01-01

    In a previous study from 2014 it was found that US Holstein cows that gave birth to heifer calves produced more milk than cows having bull calves. We wanted to assess whether this is also true for Danish cattle. Data from 578 Danish Holstein herds were analysed with a mixed effect model and, contrary to the findings in the US, we found that cows produced higher volumes of milk if they had a bull calf compared to a heifer calf. We found a significantly higher milk production of 0.28% in the first lactation period for cows giving birth to a bull calf, compared to a heifer calf. This difference was even higher when cows gave birth to another bull calf, so having two bull calves resulted in a difference of 0.52% in milk production compared to any other combination of sex of the offspring. Furthermore, we found that farmer assisted calvings were associated with a higher milk yield. Cows...

  11. Accuracy of stereolithographically printed digital models compared to plaster models.

    Science.gov (United States)

    Camardella, Leonardo Tavares; Vilella, Oswaldo V; van Hezel, Marleen M; Breuning, Karel H

    2017-03-30

    This study compared the accuracy of plaster models from alginate impressions and printed models from intraoral scanning. A total of 28 volunteers were selected, and alginate impressions and intraoral scans were used to make plaster models and digital models of their dentition, respectively. The digital models were printed using a stereolithographic (SLA) 3D printer with a horseshoe-shaped design. Two calibrated examiners measured distances on the plaster and printed models with a digital caliper. The paired t test was used to determine intraobserver error and compare the measurements. The Pearson correlation coefficient was used to evaluate the reliability of measurements for each model type. The measurements on plaster and printed models showed some significant differences in tooth dimensions and interarch parameters, but these differences were not clinically relevant, except for the transversal measurements. The upper and lower intermolar distances on the printed models were smaller to a statistically significant and clinically relevant degree. Printed digital models made with the SLA 3D printer studied, with a horseshoe-shaped base, from intraoral scans cannot replace conventional plaster models from alginate impressions in orthodontics for diagnosis and treatment planning, because of their clinically relevant transversal contraction.
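
    The statistical comparison described here (paired t test between measurement methods, Pearson correlation for reliability) can be sketched with hypothetical measurements; the values below are invented for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch: paired measurements of the same distance
# (e.g. an intermolar width, in mm) taken on plaster and on SLA-printed
# models for the same 28 patients.
rng = np.random.default_rng(3)
plaster = rng.normal(35.0, 2.0, 28)
printed = plaster - 0.4 + rng.normal(0, 0.3, 28)   # small transversal contraction

t, p = stats.ttest_rel(plaster, printed)   # paired t test between model types
r, _ = stats.pearsonr(plaster, printed)    # reliability of the measurements
print(f"paired t = {t:.2f}, p = {p:.4f}, Pearson r = {r:.3f}")
```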

  12. The perception of the illness with subsequent outcome measure is more favorable in continuous peritoneal dialysis vs hemodialysis in the framework of the appraisal model of stress.

    Science.gov (United States)

    Nowak, Zbigniew; Laudański, Krzysztof

    2014-01-01

    The aim of the study was to use the appraisal model of stress to compare hemodialysis (HD) and continuous peritoneal dialysis (CAPD) patients, with special focus on the perception of end-stage renal disease and the subsequent emotional profile and health related quality of life (HQoL). We hypothesize that the different circumstances related to the two modes of therapy will result in dissimilar perception of chronic illness, with subsequent changes in emotional profile and health related quality of life. A total of 88 patients with end stage renal disease (ESRD) enrolled in hemodialysis (n=52; HD) or continuous peritoneal dialysis (n=36; CAPD) were given a battery of psychological tests: The Profile of Mood States, The Nottingham Health Profile, The Stress Situation Assessment Questionnaire, The Social Appreciation Questionnaire and The State-Trait Anxiety Inventory. All patients perceived ESRD in terms of a loss and a threat. Moreover, CAPD patients evaluated ESRD as a challenge. Despite different perception of ESRD, no significant difference in the level of fear, anxiety or emotional profile was found. Both HD and CAPD patients reported more fatigue/inertia and confusion/bewilderment than control groups. The main health related complaints were similar in both ESRD groups, with major complaints of sleeping disturbances, motor limitations and lack of energy. From the psychological point of view, CAPD treatment seems more like a challenge to the enrolled patient, which is a positive outcome. Despite the different appraisal of stress, mood and health related complaints were similar in both groups. This may be a result of optimal regulation of cognitive perception of the stress depending on the circumstances of therapy.

  13. A Dynamic Equivalence Model for Favorable Scoring Policy of Ethnic Minority Examinees in Gaokao

    Institute of Scientific and Technical Information of China (English)

    文东茅

    2014-01-01

    China's provinces differ widely in the basis, target groups and point values of the bonus-point policies for ethnic minority students in college admissions, yet all of the current policies can be classified as a static, fixed-amount bonus model. On the basis of a comprehensive review of the current favorable scoring policies in different provinces, and after a case study analyzing the effects of one province's current policy, a dynamic equivalence model and its operational scheme are put forward. According to the data simulation results, the new dynamic model has clear advantages in terms of scientific soundness, flexibility and fairness.

  14. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    A rational reconstruction was performed, using as a springboard seven models of cyber-attack, and resulted in the development of what is described as a canonical model. Keywords: Offensive cyber operations; Process models; Rational reconstructions; Canonical models; Structured...

  15. Familial risk factors favoring drug addiction onset.

    Science.gov (United States)

    Zimić, Jadranka Ivandić; Jukić, Vlado

    2012-01-01

    This study, primarily aimed at identification of familial risk factors favoring drug addiction onset, was carried out throughout 2008 and 2009. The study comprised a total of 146 addicts and 134 control subjects. Based on the study outcome, it can be concluded that in the families the addicts were born into, familial risk factors capable of influencing their psychosocial development and favoring drug addiction onset had been statistically more frequently encountered during childhood and adolescence as compared to the controls. The results also indicated the need for further research into familial interrelations and the structure of the families addicts were born into, as well as the need for the implementation of family-based approaches to both drug addiction prevention and therapy.

  16. Intrinsic up-regulation of 2-AG favors an area specific neuronal survival in different in vitro models of neuronal damage.

    Directory of Open Access Journals (Sweden)

    Sonja Kallendrusch

    BACKGROUND: The endocannabinoid 2-arachidonoyl glycerol (2-AG) acts as a retrograde messenger and modulates synaptic signaling, e.g. in the hippocampus. 2-AG also exerts neuroprotective effects under pathological situations. To better understand the mechanism beyond physiological signaling we used organotypic entorhino-hippocampal slice cultures (OHSC) and investigated the temporal regulation of 2-AG in different cell subsets during excitotoxic lesion and dendritic lesion of long-range projections in the entorhinal cortex (EC), dentate gyrus (DG) and the cornu ammonis region 1 (CA1). RESULTS: 2-AG levels were elevated 24 h after excitotoxic lesion in CA1 and DG (but not EC) and 24 h after perforant pathway transection (PPT) in the DG only. After PPT, diacylglycerol lipase alpha (DAGL) protein, the synthesizing enzyme of 2-AG, was decreased, whereas Dagl mRNA expression and 2-AG levels were enhanced. In contrast to DAGL, the 2-AG hydrolyzing enzyme monoacylglycerol lipase (MAGL) showed no alterations in total protein and mRNA expression after PPT in OHSC. MAGL immunoreaction underwent a redistribution after PPT and excitotoxic lesion, since MAGL IR disappeared in astrocytes of lesioned OHSC. DAGL and MAGL immunoreactions were not detectable in microglia at all investigated time points. Thus, induction of the neuroprotective endocannabinoid 2-AG might be generally accomplished by down-regulation of MAGL in astrocytes after neuronal lesions. CONCLUSION: The increase in 2-AG levels during secondary neuronal damage reflects a general neuroprotective mechanism, since it occurred independently in both different lesion models. This intrinsic up-regulation of 2-AG is synergistically controlled by DAGL and MAGL in neurons and astrocytes and thus represents a protective system for neurons that is involved in dendritic reorganisation.

  17. Atmosphere of Mars - Mariner IV models compared.

    Science.gov (United States)

    Eshleman, V. R.; Fjeldbo, G.; Fjeldbo, W. C.

    1966-01-01

    Mariner IV models of three Mars atmospheric layers analogous to terrestrial E, F-1 and F-2 layers, considering relative mass densities, temperatures, carbon dioxide photodissociation and ionization profile

  18. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We also present the flexibility of hazard modeling distributions that approach different distributions.

  19. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2012-03-01

    Cyber operations denote the response of governments and organisations to cyber crime, terrorism, and warfare. To date, cyber operations have been.... This could include responding to an (impending) attack by counter-attacking or by proactively neutralizing the source of an impending attack. A good starting point to improving understanding would be to model the offensive cyber operations process...

  20. Comparative molecular modelling of biologically active sterols

    Science.gov (United States)

    Baran, Mariusz; Mazerski, Jan

    2015-04-01

    Membrane sterols are targets for a clinically important antifungal agent - amphotericin B. The relatively specific antifungal action of the drug is based on a stronger interaction of amphotericin B with fungal ergosterol than with mammalian cholesterol. Conformational space occupied by six sterols has been defined using the molecular dynamics method to establish if the conformational features correspond to the preferential interaction of amphotericin B with ergosterol as compared with cholesterol. The compounds studied were chosen on the basis of structural features characteristic for cholesterol and ergosterol and on available experimental data on the ability to form complexes with the antibiotic. Statistical analysis of the data obtained has been performed. The results show similarity of the conformational spaces occupied by all the sterols tested. This suggests that the conformational differences of sterol molecules are not the major feature responsible for the differential sterol - drug affinity.

  1. GRIZZLY/FAVOR Interface Project Report

    Energy Technology Data Exchange (ETDEWEB)

    Dickson, Terry L [ORNL; Williams, Paul T [ORNL; Yin, Shengjun [ORNL; Klasky, Hilda B [ORNL; Tadinada, Sashi [ORNL; Bass, Bennett Richard [ORNL

    2013-06-01

    As part of the Light Water Reactor Sustainability (LWRS) Program, the objective of the GRIZZLY/FAVOR Interface project is to create the capability to apply GRIZZLY 3-D finite element (thermal and stress) analysis results as input to FAVOR probabilistic fracture mechanics (PFM) analyses. The principal benefit that FAVOR brings to GRIZZLY is its probabilistic capability. This document describes the implementation of the GRIZZLY/FAVOR Interface, the preliminary verification and test results, and a user guide that provides detailed step-by-step instructions to run the program.

  2. Comparing Poisson Sigma Model with A-model

    Science.gov (United States)

    Bonechi, F.; Cattaneo, A. S.; Iraso, R.

    2016-10-01

    We discuss the A-model as a gauge fixing of the Poisson Sigma Model with target a symplectic structure. We complete the discussion in [4], where a gauge fixing defined by a compatible complex structure was introduced, by showing how to recover the A-model hierarchy of observables in terms of the AKSZ observables. Moreover, we discuss the off-shell supersymmetry of the A-model as a residual BV symmetry of the gauge fixed PSM action.

  3. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  4. Comparing SVARs and SEMs : Two models of the UK economy

    NARCIS (Netherlands)

    Jacobs, J.P.A.M.; Wallis, K.F.

    2005-01-01

    The structural vector autoregression (SVAR) and simultaneous equation macroeconometric model (SEM) styles of empirical macroeconomic modelling are compared and contrasted, with reference to two models of the UK economy, namely the long-run structural VAR model of Garratt, Lee, Pesaran and Shin and t

  5. Evolution and physics in comparative protein structure modeling.

    Science.gov (United States)

    Fiser, András; Feig, Michael; Brooks, Charles L; Sali, Andrej

    2002-06-01

    From a physical perspective, the native structure of a protein is a consequence of physical forces acting on the protein and solvent atoms during the folding process. From a biological perspective, the native structure of proteins is a result of evolution over millions of years. Correspondingly, there are two types of protein structure prediction methods, de novo prediction and comparative modeling. We review comparative protein structure modeling and discuss the incorporation of physical considerations into the modeling process. A good starting point for achieving this aim is provided by comparative modeling by satisfaction of spatial restraints. Incorporation of physical considerations is illustrated by an inclusion of solvation effects into the modeling of loops.

  6. Comparing Poisson Sigma Model with A-model

    CERN Document Server

    Bonechi, Francesco; Iraso, Riccardo

    2016-01-01

    We discuss the A-model as a gauge fixing of the Poisson Sigma Model with target a symplectic structure. We complete the discussion in [arXiv:0706.3164], where a gauge fixing defined by a compatible complex structure was introduced, by showing how to recover the A-model hierarchy of observables in terms of the AKSZ observables. Moreover, we discuss the off-shell supersymmetry of the A-model as a residual BV symmetry of the gauge-fixed PSM action.

  7. Theory-based Practice: Comparing and Contrasting OT Models

    DEFF Research Database (Denmark)

    Nielsen, Kristina Tomra; Berg, Brett

    2012-01-01

    Theory-Based Practice: Comparing and Contrasting OT Models. The workshop will present a critical analysis of the major models of occupational therapy: A Model of Human Occupation, Enabling Occupation II, and the Occupational Therapy Intervention Process Model. Similarities and differences among the models will be discussed, including each model's limitations and unique contributions to the profession. Workshop format will include short lectures and group discussions.

  8. What controls biological productivity in coastal upwelling systems? Insights from a comparative modeling study

    Science.gov (United States)

    Lachkar, Z.; Gruber, N.

    2011-06-01

    The magnitude of the biological productivity in Eastern Boundary Upwelling Systems (EBUS) is traditionally viewed as directly reflecting the upwelling intensity. Yet, different EBUS show different sensitivities of productivity to upwelling-favorable winds (Carr and Kearns, 2003). Here, using a comparative modeling study of the California Current System (California CS) and Canary Current System (Canary CS), we show how physical and environmental factors, such as light, temperature and cross-shore circulation modulate the response of biological productivity to upwelling strength. To this end, we made a series of eddy-resolving simulations of the California CS and Canary CS using the Regional Ocean Modeling System (ROMS), coupled to a nitrogen based Nutrient-Phytoplankton-Zooplankton-Detritus (NPZD) ecosystem model. We find the nutrient content of the euphotic zone to be 20 % smaller in the Canary CS relative to the California CS. Yet, the biological productivity is 50 % smaller in the latter. This is due to: (1) a faster nutrient-replete growth in the Canary CS relative to the California CS, related to a more favorable light and temperature conditions in the Canary CS, and (2) the longer nearshore water residence times in the Canary CS which lead to larger buildup of biomass in the upwelling zone, thereby enhancing the productivity. The longer residence times in the Canary CS appear to be associated with the wider continental shelves and the lower eddy activity characterizing this upwelling system. This results in a weaker offshore export of nutrients and organic matter, thereby increasing local nutrient recycling and enhancing the coupling between new and export production in the Northwest African system. Our results suggest that climate change induced perturbations such as upwelling favorable wind intensification might lead to contrasting biological responses in the California CS and the Canary CS, with major implications for the biogeochemical cycles and fisheries
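
    The nitrogen-based NPZD component referred to in this record can be illustrated with a minimal box-model sketch. This is a generic textbook-style formulation with made-up parameter values, not the ROMS ecosystem code used in the study.

```python
import numpy as np

# Minimal nitrogen-based NPZD (Nutrient-Phytoplankton-Zooplankton-Detritus)
# box model; parameters are illustrative and nitrogen is conserved.
def npzd_step(N, P, Z, D, dt=0.1):
    mu, kN = 1.0, 0.5           # phytoplankton growth rate (1/day), half-saturation
    g, kP = 0.6, 0.5            # zooplankton grazing rate (1/day), half-saturation
    mP, mZ, r = 0.05, 0.1, 0.1  # mortality and remineralization rates (1/day)
    uptake = mu * N / (kN + N) * P
    grazing = g * P / (kP + P) * Z
    dN = -uptake + r * D
    dP = uptake - grazing - mP * P
    dZ = 0.3 * grazing - mZ * Z           # 30 % assimilation efficiency
    dD = 0.7 * grazing + mP * P + mZ * Z - r * D
    return N + dN * dt, P + dP * dt, Z + dZ * dt, D + dD * dt

N, P, Z, D = 8.0, 0.1, 0.05, 0.0   # mmol N m^-3, e.g. a fresh upwelling event
for _ in np.arange(0, 30, 0.1):    # integrate 30 days with Euler steps
    N, P, Z, D = npzd_step(N, P, Z, D)
print(f"after 30 days: N={N:.2f}, P={P:.2f}, Z={Z:.2f}, D={D:.2f}")
```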

  9. What controls biological productivity in coastal upwelling systems? Insights from a comparative modeling study

    Directory of Open Access Journals (Sweden)

    Z. Lachkar

    2011-06-01

    The magnitude of the biological productivity in Eastern Boundary Upwelling Systems (EBUS) is traditionally viewed as directly reflecting the upwelling intensity. Yet, different EBUS show different sensitivities of productivity to upwelling-favorable winds (Carr and Kearns, 2003). Here, using a comparative modeling study of the California Current System (California CS) and Canary Current System (Canary CS), we show how physical and environmental factors, such as light, temperature and cross-shore circulation modulate the response of biological productivity to upwelling strength. To this end, we made a series of eddy-resolving simulations of the California CS and Canary CS using the Regional Ocean Modeling System (ROMS), coupled to a nitrogen based Nutrient-Phytoplankton-Zooplankton-Detritus (NPZD) ecosystem model. We find the nutrient content of the euphotic zone to be 20 % smaller in the Canary CS relative to the California CS. Yet, the biological productivity is 50 % smaller in the latter. This is due to: (1) a faster nutrient-replete growth in the Canary CS relative to the California CS, related to more favorable light and temperature conditions in the Canary CS, and (2) the longer nearshore water residence times in the Canary CS which lead to larger buildup of biomass in the upwelling zone, thereby enhancing the productivity. The longer residence times in the Canary CS appear to be associated with the wider continental shelves and the lower eddy activity characterizing this upwelling system. This results in a weaker offshore export of nutrients and organic matter, thereby increasing local nutrient recycling and enhancing the coupling between new and export production in the Northwest African system. Our results suggest that climate change induced perturbations such as upwelling favorable wind intensification might lead to contrasting biological responses in the California CS and the Canary CS, with major implications for the biogeochemical cycles and fisheries.

  10. Parmodel: a web server for automated comparative modeling of proteins.

    Science.gov (United States)

    Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira

    2004-12-24

    Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users to perform modeling, assessment, visualization, and optimization of protein models, as well as crystallographers to evaluate structures solved experimentally. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows the building of several models for the same protein in a reduced time, through the distribution of modeling processes on a Beowulf cluster. Parmodel automates and integrates the main software used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .

  11. Comparing Structural Brain Connectivity by the Infinite Relational Model

    DEFF Research Database (Denmark)

    Ambrosen, Karen Marie Sandø; Herlau, Tue; Dyrby, Tim;

    2013-01-01

    The growing focus in neuroimaging on analyzing brain connectivity calls for powerful and reliable statistical modeling tools. We examine the Infinite Relational Model (IRM) as a tool to identify and compare structure in brain connectivity graphs by contrasting its performance on graphs from...... modeling tool for the identification of structure and quantification of similarity in graphs of brain connectivity in general....

  12. Comparative Study of Various SDLC Models on Different Parameters

    Directory of Open Access Journals (Sweden)

    Prateek Sharma

    2015-04-01

    The success of a software development project greatly depends upon which process model is used. This paper emphasizes the need to use an appropriate model for the application to be developed. In this paper we have done a comparative study of the following software models, namely Waterfall, Prototype, RAD (Rapid Application Development), Incremental, Spiral, Build and Fix, and V-shaped. Our aim is to create reliable and cost effective software, and these models provide us a way to develop it. The main objective of this research is to present different models of software development and make a comparison between them to show the features of each model.

  13. Comparative Performance of Volatility Models for Oil Price

    Directory of Open Access Journals (Sweden)

    Afees A. Salisu

    2012-07-01

    In this paper, we compare the performance of volatility models for oil price using daily returns of WTI. The innovations of this paper are twofold: (i) we analyse the oil price across three sub-samples, namely the periods before, during and after the global financial crisis; (ii) we also analyse the comparative performance of both symmetric and asymmetric volatility models for the oil price. We find that oil price was most volatile during the global financial crisis compared to the other sub-samples. Based on the appropriate model selection criteria, the asymmetric GARCH models appear superior to the symmetric ones in dealing with oil price volatility. This finding indicates evidence of leverage effects in the oil market, and ignoring these effects in oil price modelling will lead to serious biases and misleading results.
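
    The symmetric/asymmetric distinction discussed here can be sketched with the variance recursions of a GARCH(1,1) and a GJR-type GARCH(1,1); the parameters below are placeholders, whereas the paper estimates them by maximum likelihood on WTI returns.

```python
import numpy as np

# Illustrative conditional-variance recursions for a symmetric GARCH(1,1)
# and an asymmetric GJR-GARCH(1,1); gamma adds extra variance after
# negative returns (the "leverage effect").
def garch_variance(returns, omega=0.02, alpha=0.08, beta=0.90, gamma=0.0):
    h = np.empty_like(returns)
    h[0] = returns.var()
    for t in range(1, len(returns)):
        neg = 1.0 if returns[t - 1] < 0 else 0.0      # leverage indicator
        h[t] = (omega
                + (alpha + gamma * neg) * returns[t - 1] ** 2
                + beta * h[t - 1])
    return h

rng = np.random.default_rng(0)
r = rng.normal(0, 1.5, 1000)                 # stand-in for daily WTI returns (%)
h_sym = garch_variance(r)                    # symmetric GARCH(1,1)
h_asym = garch_variance(r, alpha=0.05, gamma=0.06)   # GJR: negative shocks add more
print("mean conditional variance (symmetric): ", round(float(h_sym.mean()), 3))
print("mean conditional variance (asymmetric):", round(float(h_asym.mean()), 3))
```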

  14. Comparing various multi-component global heliosphere models

    CERN Document Server

    Müller, H -R; Heerikhuisen, J; Izmodenov, V V; Scherer, K; Alexashov, D; Fahr, H -J

    2008-01-01

    Modeling of the global heliosphere seeks to investigate the interaction of the solar wind with the partially ionized local interstellar medium. Models that treat neutral hydrogen self-consistently and in great detail, together with the plasma, but that neglect magnetic fields, constitute a sub-category within global heliospheric models. There are several different modeling strategies used for this sub-category in the literature. Differences and commonalities in the modeling results from different strategies are pointed out. Plasma-only models and fully self-consistent models from four research groups, for which the neutral species is modeled with either one, three, or four fluids, or else kinetically, are run with the same boundary parameters and equations. They are compared to each other with respect to the locations of key heliospheric boundary locations and with respect to the neutral hydrogen content throughout the heliosphere. In many respects, the models' predictions are similar. In particular, the loca...

  15. Reciprocal Ontological Models Show Indeterminism Comparable to Quantum Theory

    Science.gov (United States)

    Bandyopadhyay, Somshubhro; Banik, Manik; Bhattacharya, Some Sankar; Ghosh, Sibasish; Kar, Guruprasad; Mukherjee, Amit; Roy, Arup

    2016-12-01

    We show that within the class of ontological models due to Harrigan and Spekkens, those satisfying preparation-measurement reciprocity must allow indeterminism comparable to that in quantum theory. Our result implies that one can design a quantum random number generator for which it is impossible, even in principle, to construct a reciprocal deterministic model.

  16. Reciprocal Ontological Models Show Indeterminism Comparable to Quantum Theory

    Science.gov (United States)

    Bandyopadhyay, Somshubhro; Banik, Manik; Bhattacharya, Some Sankar; Ghosh, Sibasish; Kar, Guruprasad; Mukherjee, Amit; Roy, Arup

    2017-02-01

    We show that within the class of ontological models due to Harrigan and Spekkens, those satisfying preparation-measurement reciprocity must allow indeterminism comparable to that in quantum theory. Our result implies that one can design a quantum random number generator for which it is impossible, even in principle, to construct a reciprocal deterministic model.

  17. A Mathematical Model for Comparing Holland's Personality and Environmental Codes.

    Science.gov (United States)

    Kwak, Junkyu Christopher; Pulvino, Charles J.

    1982-01-01

    Presents a mathematical model utilizing three-letter codes of personality patterns determined from the Self Directed Search. This model compares personality types over time or determines relationships between personality types and person-environment interactions. This approach is consistent with Holland's theory yet more comprehensive than one- or…

  18. Comparative Analysis of Uncertainties in Urban Surface Runoff Modelling

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Schaarup-Jensen, Kjeld

    2007-01-01

    In the present paper a comparison between three different surface runoff models, in the numerical urban drainage tool MOUSE, is conducted. Analysing parameter uncertainty, it is shown that the models are very sensitive with regards to the choice of hydrological parameters, when combined overflow volumes are compared - especially when the models are uncalibrated. The occurrences of flooding and surcharge are highly dependent on both hydrological and hydrodynamic parameters. Thus, the conclusion of the paper is that if the use of model simulations is to be a reliable tool for drainage system analysis, further research in improved parameter assessment for surface runoff models is needed.

  19. Comparative analysis of model assessment in community detection

    CERN Document Server

    Kawamoto, Tatsuro

    2016-01-01

    Bayesian cluster inference with a flexible generative model allows us to detect various types of structures. However, it has problems stemming from computational complexity and difficulties in model assessment. We consider the stochastic block model with restricted hyperparameter space, which is known to correspond to modularity maximization. We show that it not only reduces computational complexity, but is also beneficial for model assessment. Using various criteria, we conduct a comparative analysis of the model assessments, and analyze whether each criterion tends to overfit or underfit. We also show that the learning of hyperparameters leads to qualitative differences in Bethe free energy and cross-validation errors.

  20. Comparative Study of Path Loss Models in Different Environments

    Directory of Open Access Journals (Sweden)

    Tilotma Yadav,

    2011-04-01

    By using propagation path loss models to estimate the received signal level as a function of distance, it becomes possible to predict the SNR for a mobile communication system. Both theoretical and measurement-based propagation models indicate that average received signal power decreases logarithmically with distance. For comparative analysis we use Okumura's model, the Hata model, the COST-231 extension to the Hata model, the ECC-33 model and the SUI model, along with practical data. Most of these models are based on a systematic interpretation of measurement data for different service areas, such as urban (built-up city or large town crowded with large buildings), suburban (having some obstacles near the mobile radio, but still not very congested) and rural (no obstacles like tall trees or buildings, e.g. farmland, rice fields, open fields), in India at 900 MHz and 1800 MHz.
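
    For reference, the commonly cited urban forms of the Hata and COST-231 Hata models compared in the paper can be written down directly; the antenna heights and the city-size correction used below are illustrative defaults, not values from the study.

```python
import numpy as np

# Commonly cited forms of the Hata (150-1500 MHz) and COST-231 Hata
# (1500-2000 MHz) urban path-loss models; f in MHz, distance d in km,
# base-station height hb and mobile height hm in metres; result in dB.
def a_hm(f, hm):
    # mobile-antenna correction for a small/medium city
    return (1.1 * np.log10(f) - 0.7) * hm - (1.56 * np.log10(f) - 0.8)

def hata_urban(f, d, hb=30.0, hm=1.5):
    return (69.55 + 26.16 * np.log10(f) - 13.82 * np.log10(hb)
            - a_hm(f, hm) + (44.9 - 6.55 * np.log10(hb)) * np.log10(d))

def cost231_urban(f, d, hb=30.0, hm=1.5, c=3.0):   # c = 3 dB for metropolitan areas
    return (46.3 + 33.9 * np.log10(f) - 13.82 * np.log10(hb)
            - a_hm(f, hm) + (44.9 - 6.55 * np.log10(hb)) * np.log10(d) + c)

for d in (1, 2, 5):   # km
    print(f"d={d} km: Hata@900MHz={hata_urban(900, d):.1f} dB, "
          f"COST-231@1800MHz={cost231_urban(1800, d):.1f} dB")
```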

  1. Comparing measured and modeled firn compaction rates in Greenland

    Science.gov (United States)

    Stevens, C.; MacFerrin, M. J.; Waddington, E. D.; Vo, H.; Yoon, M.

    2015-12-01

    Quantifying the mass balance of the Greenland and Antarctic ice sheets using satellite and/or airborne altimetry requires a firn-densification model to correct for firn-air content and transient firn-thickness changes. We have developed the Community Firn Model (CFM) that allows users to run firn-densification physics from a suite of published models. Here, we use the CFM to compare model-predicted firn depth-density profiles and compaction rates with observed profiles and compaction rates collected from a network of in situ strain gauges at eight different sites in Greenland. Additionally, we use regional-climate-model output to force the CFM and compare the depth-density profiles and compaction rates predicted by the different models. Many of the models were developed using a steady-state assumption and were tuned for the dry-snow zone. Our results demonstrate the challenges of using these models to simulate firn density in Greenland's expanding wet firn and percolation zones, and they help quantify the uncertainty in firn-density model predictions. Next-generation firn models are incorporating more physics (e.g. meltwater percolation and grain growth), and field measurements are essential to inform continuing development of these new models.

  2. New Study Says CAI May Favor Introverts.

    Science.gov (United States)

    Hopmeier, George

    1981-01-01

    A personality research study using the Myers-Briggs Type Indicator indicates that computer-assisted instruction programs favor introverts, i.e., those learners who can concentrate on details, memorize facts, and stay with a task until it is completed. (JJD)

  3. Generation Favorable Institutional Configuration Regional Business Environment

    Directory of Open Access Journals (Sweden)

    Natalia Zinovievna Solodilova

    2014-12-01

    This article discusses the theoretical issues of creating an enabling business environment, which is the base platform for the successful development of entrepreneurship in the regions. It provides a definition of a favorable institutional configuration of the regional business environment, understood as the forms in which basic and other regional institutions are implemented, taking into account the existing regional system of formal and informal interaction between economic actors. It is argued that, despite the measures taken, the landscape of the Russian business community remains uneven across regions, with differing indices of investment and business attractiveness; business conditions also differ between regions with similar natural and geographical conditions and resource potential, which is primarily determined by differences in the institutional configuration of the regional business environment and the quality of interaction within the region's business community. A hypothesis is put forward that a favorable institutional configuration of the business environment cannot be created in all regions of the country at the same time, and that such a configuration is of limited duration. A theoretical and probabilistic analysis of the parameters of creating an enabling institutional configuration of the business environment in the Russian regions is conducted. The article grounds an approach whereby the institutional configuration of the regional business environment can be subject to management and control actions and, through targeted efforts by the regional authorities, can take on specified parameters favorable to the business community. The necessity of planning and effective management of a favorable institutional configuration of the business environment by regional authorities is stressed, in order to increase the period of its existence.

  4. To Form a Favorable Idea of Chemistry

    Science.gov (United States)

    Heikkinen, Henry W.

    2010-01-01

    "To confess the truth, Mrs. B., I am not disposed to form a very favorable idea of chemistry, nor do I expect to derive much entertainment from it." That 200-year-old statement by Caroline to Mrs. Bryan, her teacher, appeared on the first page of Jane Marcet's pioneering secondary school textbook, "Conversations on Chemistry". It was published 17…

  5. To Form a Favorable Idea of Chemistry

    Science.gov (United States)

    Heikkinen, Henry W.

    2010-01-01

    "To confess the truth, Mrs. B., I am not disposed to form a very favorable idea of chemistry, nor do I expect to derive much entertainment from it." That 200-year-old statement by Caroline to Mrs. Bryan, her teacher, appeared on the first page of Jane Marcet's pioneering secondary school textbook, "Conversations on Chemistry". It was published 17…

  6. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce from rough rainfall data that short time resolution information. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn-rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated thanks to different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory as well as the performance of more sophisticated ones is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neuronal Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with a hourly rainfall amount greater than 5mm) were extracted from the 21 years chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland. The models were also evaluated when applied to different rainfall classes depending on the season first and on the
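
    The baseline constant model and a naive stochastic alternative can be sketched as follows; the stochastic scheme shown is only a stand-in for the more elaborate Hydram, cascade and Ormsbee models evaluated in the study.

```python
import numpy as np

# Two simple schemes for disaggregating an hourly rainfall total into six
# 10-minute amounts: the constant (uniform) baseline model, and a naive
# stochastic split drawn from a Dirichlet distribution (illustrative only).
rng = np.random.default_rng(5)

def constant_disagg(hourly_mm):
    return np.full(6, hourly_mm / 6.0)

def stochastic_disagg(hourly_mm):
    w = rng.dirichlet(np.ones(6) * 0.7)   # uneven random weights summing to 1
    return hourly_mm * w

hourly = 8.4   # mm in one rainy hour
uniform = constant_disagg(hourly)
random_split = stochastic_disagg(hourly)
print("constant  :", np.round(uniform, 2))
print("stochastic:", np.round(random_split, 2))
print("both sum to the hourly total:",
      round(uniform.sum(), 2), round(random_split.sum(), 2))
```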

  7. Comparative Analysis of Vehicle Make and Model Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Faiza Ayub Syed

    2014-03-01

    Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision based systems because of its application in access control systems, traffic control and monitoring systems, security systems and surveillance systems, etc. So far a number of techniques have been developed for vehicle recognition. Each technique follows a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we have pointed out the working of various vehicle make and model recognition techniques and compared these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we concluded that Locally Normalized Harris Corner Strengths (LHNS) performs best as compared to other techniques. LHNS uses Bayes and K-NN classification approaches for vehicle classification. It extracts information from the frontal view of vehicles for vehicle make and model recognition.

  8. Comparative Analysis of Dayside Reconnection Models in Global Magnetosphere Simulations

    CERN Document Server

    Komar, C M; Cassak, P A

    2015-01-01

    We test and compare a number of existing models predicting the location of magnetic reconnection at Earth's dayside magnetopause for various solar wind conditions. We employ robust image processing techniques to determine the locations where each model predicts reconnection to occur. The predictions are then compared to the magnetic separators, the magnetic field lines separating different magnetic topologies. The predictions are tested in distinct high-resolution simulations with interplanetary magnetic field (IMF) clock angles ranging from 30 to 165 degrees in global magnetohydrodynamic simulations using the three-dimensional Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code with a uniform resistivity, although the described techniques can be generally applied to any self-consistent magnetosphere code. Additional simulations are carried out to test location model dependence on IMF strength and dipole tilt. We find that most of the models match large portions of the magnetic separators wh...

  9. Comparing Sediment Yield Predictions from Different Hydrologic Modeling Schemes

    Science.gov (United States)

    Dahl, T. A.; Kendall, A. D.; Hyndman, D. W.

    2015-12-01

    Sediment yield, or the delivery of sediment from the landscape to a river, is a difficult process to accurately model. It is primarily a function of hydrology and climate, but influenced by landcover and the underlying soils. These additional factors make it much more difficult to accurately model than water flow alone. It is not intuitive what impact different hydrologic modeling schemes may have on the prediction of sediment yield. Here, two implementations of the Modified Universal Soil Loss Equation (MUSLE) are compared to examine the effects of hydrologic model choice. Both the Soil and Water Assessment Tool (SWAT) and the Landscape Hydrology Model (LHM) utilize the MUSLE for calculating sediment yield. SWAT is a lumped parameter hydrologic model developed by the USDA, which is commonly used for predicting sediment yield. LHM is a fully distributed hydrologic model developed primarily for integrated surface and groundwater studies at the watershed to regional scale. SWAT and LHM models were developed and tested for two large, adjacent watersheds in the Great Lakes region; the Maumee River and the St. Joseph River. The models were run using a variety of single model and ensemble downscaled climate change scenarios from the Coupled Model Intercomparison Project 5 (CMIP5). The initial results of this comparison are discussed here.
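
    The MUSLE itself, in the form commonly cited for SWAT, makes the dependence on the hydrologic model explicit: sediment yield is driven by the runoff volume and peak flow that SWAT or LHM supplies, not by rainfall erosivity. The factor values in the example below are illustrative.

```python
# MUSLE in the form commonly cited for SWAT (parameter values are illustrative):
#   sed = 11.8 * (Q_surf * q_peak * area)^0.56 * K * C * P * LS * CFRG
def musle_sed_yield(q_surf, q_peak, area_ha, k, c, p, ls, cfrg=1.0):
    """Event sediment yield (metric tons) for one hydrologic response unit.

    q_surf  -- surface runoff volume (mm H2O)
    q_peak  -- peak runoff rate (m^3/s)
    area_ha -- area of the hydrologic response unit (ha)
    k, c, p, ls, cfrg -- USLE soil, cover, practice, topographic and
                         coarse-fragment factors
    """
    return 11.8 * (q_surf * q_peak * area_ha) ** 0.56 * k * c * p * ls * cfrg

# Example event: modest runoff on a 50 ha unit with moderately erodible soil.
print(round(musle_sed_yield(q_surf=12.0, q_peak=0.8, area_ha=50.0,
                            k=0.3, c=0.2, p=1.0, ls=1.2), 1), "t")
```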

  10. MHD models compared with Artemis observations at -60 Re

    Science.gov (United States)

    Gencturk Akay, Iklim; Sibeck, David; Angelopoulos, Vassilis; Kaymaz, Zerefsan; Kuznetsova, Maria

    2016-07-01

    The distant magnetotail is one of the least studied magnetic regions of the Earth's magnetosphere compared to the near-Earth dayside and nightside magnetospheric regions, owing to the limited number of spacecraft observations. Since 2011, the ARTEMIS spacecraft have provided an excellent opportunity to study the magnetotail at lunar distances in terms of data quality and parameter space. This also provides opportunities to improve magnetotail models at -60 Re and encourages modelling studies of the distant magnetotail. Using ARTEMIS data in the distant magnetotail, we create magnetic field and plasma flow vector maps in different planes, separated by IMF orientation, to understand the magnetotail dynamics at this distance. For this study, we use CCMC's Run-on-Request resources of the MHD models, specifically SWMF-BATS-R-US, OpenGGCM, and LFM, and perform a similar analysis with the models. Our main purpose in this study is to measure the performance of the MHD models at the -60 Re distant magnetotail by comparing the model results with ARTEMIS observations. In the literature, such a comprehensive comparative study is lacking in the distant tail. Preliminary results show that in general all three models underestimate the magnetic field structure while overestimating the flow speed. In the cross-sectional view, LFM seems to produce the better agreement with the observations. A clear dipolar magnetic field structure is seen with dawn-dusk asymmetry in all models owing to slightly positive IMF By, but the effect was found to be exaggerated. All models show tailward flows at this distance of the magnetotail, most possibly owing to magnetic reconnection at near-Earth tail distances. A detailed comparison of several tail characteristics from the models will be presented and discussions will be given with respect to the observations from ARTEMIS at this distance.

  11. A comparative analysis of multi-output frontier models

    Institute of Scientific and Technical Information of China (English)

    Tao ZHANG; Eoghan GARVEY

    2008-01-01

    Recently, there have been more debates on the methods of measuring efficiency. The main objective of this paper is to make a sensitivity analysis for different frontier models and compare the results obtained from the different methods of estimating multi-output frontiers for a specific application. The methods include the stochastic distance function frontier, the stochastic ray frontier, and data envelopment analysis. The stochastic frontier regressions with and without the inefficiency effects model are also compared and tested. The results indicate that there are significant correlations between the results obtained from the alternative estimation methods.

  12. What controls biological production in coastal upwelling systems? Insights from a comparative modeling study

    Directory of Open Access Journals (Sweden)

    Z. Lachkar

    2011-10-01

    The magnitude of net primary production (NPP) in Eastern Boundary Upwelling Systems (EBUS) is traditionally viewed as directly reflecting the wind-driven upwelling intensity. Yet, different EBUS show different sensitivities of NPP to upwelling-favorable winds (Carr and Kearns, 2003). Here, using a comparative modeling study of the California Current System (California CS) and Canary Current System (Canary CS), we show how physical and environmental factors, such as light, temperature and cross-shore circulation modulate the response of NPP to upwelling strength. To this end, we made a series of eddy-resolving simulations of the two upwelling systems using the Regional Oceanic Modeling System (ROMS), coupled to a nitrogen-based Nutrient-Phytoplankton-Zooplankton-Detritus (NPZD) ecosystem model. Using identical ecological/biogeochemical parameters, our coupled model simulates a level of NPP in the California CS that is 50 % smaller than that in the Canary CS, in agreement with observationally based estimates. We find this much lower NPP in the California CS despite phytoplankton in this system having nearly 20 % higher nutrient concentrations available to fuel their growth. This conundrum can be explained by: (1) phytoplankton having a faster nutrient-replete growth in the Canary CS relative to the California CS, a consequence of more favorable light and temperature conditions in the Canary CS, and (2) the longer nearshore water residence times in the Canary CS, which permit a larger buildup of biomass in the upwelling zone, thereby enhancing NPP. The longer residence times in the Canary CS appear to be a result of the wider continental shelves and the lower mesoscale activity characterizing this upwelling system. This results in a weaker offshore export of nutrients and organic matter, thereby increasing local nutrient recycling and reducing the spatial decoupling between new and export production in the Canary CS. Our results suggest that climate change

  13. Image based 3D city modeling : Comparative study

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some manmade features belonging to the urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling; the second method is procedural-grammar-based modeling; the third approach is close-range-photogrammetry-based modeling; and the fourth approach is mainly based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no comprehensive comparative study of this kind is available for creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparative study is mainly based on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences. It also gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques. Some practical comments are given on what can and cannot be done with each software package. In conclusion, each software package has its own advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  14. Conceptualizations of Creativity: Comparing Theories and Models of Giftedness

    Science.gov (United States)

    Miller, Angie L.

    2012-01-01

    This article reviews seven different theories of giftedness that include creativity as a component, comparing and contrasting how each one conceptualizes creativity as a part of giftedness. The functions of creativity vary across the models, suggesting that while the field of gifted education often cites the importance of creativity, the…

  16. Theoretical biology: Comparing models of species abundance - Brief Communications Arising

    NARCIS (Netherlands)

    Chave, J.; Alonso, D.; Etienne, R.S.

    2006-01-01

    Ecologists are struggling to explain how so many tropical tree species can coexist in tropical forests, and several empirical studies have demonstrated that negative density dependence is an important mechanism of tree-species coexistence [1, 2]. Volkov et al. [3] compare a model incorporating negative density dependence

  17. Comparing satellite SAR and wind farm wake models

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Vincent, P.; Husson, R.

    2015-01-01

    The aim of the paper is to present offshore wind farm wake observed from satellite Synthetic Aperture Radar (SAR) wind fields from RADARSAT-1/-2 and Envisat and to compare these wakes qualitatively to wind farm wake model results. From some satellite SAR wind maps very long wakes are observed. Th...

  18. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  19. Coevolution of robustness, epistasis, and recombination favors asexual reproduction.

    Science.gov (United States)

    MacCarthy, Thomas; Bergman, Aviv

    2007-07-31

    The prevalence of sexual reproduction remains one of the most perplexing phenomena in evolutionary biology. The deterministic mutation hypothesis postulates that sexual reproduction will be advantageous under synergistic epistasis, a condition in which mutations cause a greater reduction in fitness when combined than would be expected from their individual effects. The inverse condition, antagonistic epistasis, correspondingly is predicted to favor asexual reproduction. To assess this hypothesis, we introduce a finite population evolutionary process that combines a recombination modifier formalism with a gene-regulatory network model. We demonstrate that when reproductive mode and epistasis are allowed to coevolve, asexual reproduction outcompetes sexual reproduction. In addition, no correlation is found between the level of synergistic epistasis and the fixation time of the asexual mode. However, a significant correlation is found between the level of antagonistic epistasis and asexual mode fixation time. This asymmetry can be explained by the greater reduction in fitness imposed by sexual reproduction as compared with asexual reproduction. Our findings present evidence and suggest plausible explanations that challenge both the deterministic mutation hypothesis and recent arguments asserting the importance of emergent synergistic epistasis in the maintenance of sexual reproduction.

  20. Comparative 3-D Modeling of tmRNA

    Directory of Open Access Journals (Sweden)

    Wower Iwona

    2005-06-01

    Full Text Available Abstract Background: Trans-translation releases stalled ribosomes from truncated mRNAs and tags defective proteins for proteolytic degradation using transfer-messenger RNA (tmRNA). This small stable RNA represents a hybrid of tRNA- and mRNA-like domains connected by a variable number of pseudoknots. Comparative sequence analysis of tmRNAs found in bacteria, plastids, and mitochondria provides considerable insights into their secondary structures. Progress toward understanding the molecular mechanism of template switching, which constitutes an essential step in trans-translation, is hampered by our limited knowledge about the three-dimensional folding of tmRNA. Results: To facilitate experimental testing of the molecular intricacies of trans-translation, which often require appropriately modified tmRNA derivatives, we developed a procedure for building three-dimensional models of tmRNA. Using comparative sequence analysis, phylogenetically supported 2-D structures were obtained to serve as input for the program ERNA-3D. Motifs containing loops and turns were extracted from the known structures of other RNAs and used to improve the tmRNA models. Biologically feasible 3-D models for the entire tmRNA molecule could be obtained. The models were characterized by a functionally significant close proximity between the tRNA-like domain and the resume codon. Potential conformational changes which might lead to a more open structure of tmRNA upon binding to the ribosome are discussed. The method, described in detail for the tmRNAs of Escherichia coli, Bacillus anthracis, and Caulobacter crescentus, is applicable to every tmRNA. Conclusion: Improved molecular models of biological significance were obtained. These models will guide the design of experiments and provide a better understanding of trans-translation. The comparative procedure described here for tmRNA is easily adopted for modeling the members of other RNA families.

  1. Comparative analysis of used car price evaluation models

    Science.gov (United States)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles; however, little work compares the performance of different algorithms in used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical analysis and a thorough comparison of two algorithms: linear regression and random forest. The two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
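
    As a rough illustration of the comparison described above, the following sketch fits both algorithms to a hypothetical used-car table with scikit-learn; the file name and column names are placeholders, not the authors' data.

        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        # assumed columns: make, series, age, mileage, price
        df = pd.read_csv("used_cars.csv")
        X = pd.get_dummies(df[["make", "series", "age", "mileage"]], drop_first=True)
        y = df["price"]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        models = {
            "linear regression": LinearRegression(),
            "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            mae = mean_absolute_error(y_te, model.predict(X_te))
            print(f"{name}: mean absolute error = {mae:.0f}")

    With many dummy-encoded make/series levels, a tree ensemble typically handles the resulting high-dimensional, interaction-heavy design better than the linear fit, which is consistent with the pattern reported for the universal model.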

  2. Comparative Analysis of Visco-elastic Models with Variable Parameters

    Directory of Open Access Journals (Sweden)

    Silviu Nastac

    2010-01-01

    Full Text Available The paper presents a theoretical comparative study of the computational behaviour of vibration isolation elements based on viscous and elastic models with variable parameters. The change of elastic and viscous parameters can be produced by natural timed evolution (demotion) or by heating developed in the elements during their working cycle. Both linear and non-linear numerical viscous and elastic models, and their combinations, were considered. The results show the importance of tuning the numerical model to the real behaviour, notably the linearity of the characteristics and the essential parameters for damping and rigidity. Multiple comparisons between linear and non-linear simulation cases highlight the basis of numerical model optimization with regard to mathematical complexity vs. results reliability.

  3. Modeling Error in Quantitative Macro-Comparative Research

    Directory of Open Access Journals (Sweden)

    Salvatore J. Babones

    2015-08-01

    Full Text Available Much quantitative macro-comparative research (QMCR) relies on a common set of published data sources to answer similar research questions using a limited number of statistical tools. Since all researchers have access to much the same data, one might expect quick convergence of opinion on most topics. In reality, of course, differences of opinion abound and persist. Many of these differences can be traced, implicitly or explicitly, to the different ways researchers choose to model error in their analyses. Much careful attention has been paid in the political science literature to the error structures characteristic of time series cross-sectional (TSCS) data, but much less attention has been paid to the modeling of error in broadly cross-national research involving large panels of countries observed at limited numbers of time points. Here, and especially in the sociology literature, multilevel modeling has become a hegemonic – but often poorly understood – research tool. I argue that widely-used types of multilevel models, commonly known as fixed effects models (FEMs) and random effects models (REMs), can produce wildly spurious results when applied to trended data due to mis-specification of error. I suggest that in most commonly-encountered scenarios, difference models are more appropriate for use in QMCR.
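
    A minimal numerical sketch of the point about trended data (illustrative only, not the article's analysis): a trending covariate and a trending outcome that are causally unrelated appear strongly related in a levels regression, while a first-difference regression largely removes the spurious association.

        import numpy as np

        rng = np.random.default_rng(0)
        n_units, n_years = 50, 20
        t = np.arange(n_years)

        slopes_levels, slopes_diff = [], []
        for _ in range(n_units):
            x = 0.5 * t + rng.normal(size=n_years)   # trended regressor
            y = 0.3 * t + rng.normal(size=n_years)   # trended outcome, independent of x
            # levels regression (intercept + x), the within-unit analogue of an FEM
            A = np.column_stack([np.ones(n_years), x])
            slopes_levels.append(np.linalg.lstsq(A, y, rcond=None)[0][1])
            # first-difference regression removes the common deterministic trend
            dx, dy = np.diff(x), np.diff(y)
            A_d = np.column_stack([np.ones(n_years - 1), dx])
            slopes_diff.append(np.linalg.lstsq(A_d, dy, rcond=None)[0][1])

        print("mean slope, levels regression:     ", round(float(np.mean(slopes_levels)), 2))
        print("mean slope, first-difference model:", round(float(np.mean(slopes_diff)), 2))

    The levels slope is spuriously large (the shared trend does the work), while the first-difference slope is close to zero, the true value.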

  4. When and why are reliable organizations favored?

    DEFF Research Database (Denmark)

    Ethiraj, Sendil; Yi, Sangyoon

    In the 1980s, organization theory witnessed a decade long debate about the incentives and consequences of organizational change. Though the fountainhead of this debate was the observation that reliable organizations are the “consequence” rather than the “cause” of selection forces, much... in this assertion. Principally, we show that whether reliable organizations are favored depends on the nature of the environment. When environments are complex, reliability is selected out: in more complex environments, variability is more valued by selection forces. Further, we also examine the consequences...

  5. Comparative assessment of PV plant performance models considering climate effects

    DEFF Research Database (Denmark)

    Tina, Giuseppe; Ventura, Cristina; Sera, Dezso

    2017-01-01

    The paper investigates the effect of climate conditions on the accuracy of PV system performance models (physical and interpolation methods) which are used within a monitoring system as a reference for the power produced by a PV system to detect inefficient or faulty operating conditions. ... The methodological approach is based on comparative tests of the analyzed models applied to two PV plants installed respectively in the north of Denmark (Aalborg) and in the south of Italy (Agrigento). The different ambient, operating and installation conditions allow an understanding of how these factors impact the precision ... and effectiveness of such approaches; among these factors it is worth mentioning the different percentage of the diffuse component of the yearly solar radiation in the global one. The experimental results show the effectiveness of the proposed approach. In order to have the possibility to analyze and compare ...

  6. A Statistical Word-Level Translation Model for Comparable Corpora

    Science.gov (United States)

    2000-06-01

    readily available resources such as corpora, thesauri, bilingual and multilingual lexicons and dictionaries. The acquisition of such resources has...could aid in Monolingual Information Retrieval (MIR) by methods of query expansion, and thesauri construction. To date, most of the existing...testing the limits of its performance. Future directions include testing the model with a monolingual comparable corpus, e.g. WSJ [42M] and either IACA/B

  7. Trial Assessment of Favorable Effects of Danshensu on Offspring Growth in a Preeclampsia Mouse Model

    Institute of Scientific and Technical Information of China (English)

    沈杨; 胡娅莉; 张焱; 谈勇

    2011-01-01

    OBJECTIVE To investigate the favorable effects of Danshensu on offspring growth in a PS/PC-induced preeclampsia mouse model. METHODS Forty-seven pregnant ICR mice were randomly divided into four groups. From day 5.5 to day 16.5 of pregnancy, each group was treated as follows: a control group of 12 injected with 100 μL of filtered phosphate-buffered saline into the tail vein every day; a preeclampsia model group of 15 injected in the same way with 100 μL of filtered PS/PC vesicle suspension; a low-dose Danshensu group of 10 injected with 10 μg/g Danshensu; and a high-dose Danshensu group of 10 injected with 30 μg/g Danshensu. The number of potentially viable fetuses, the weight of fetuses and placentas, the weight of fetal brains, nose-breech length, ponderal index (PI), cerebral index (CI) and neurons with hematoxylin-eosin (H/E) and toluidine blue-eosin (Nissl's) staining were all evaluated as indices of fetal syndrome. RESULTS We found the following changes: increased fetal body weight and length in all Danshensu-treated groups; greater amelioration of maternal body weight, fetal nose-breech length and fetal brain weight in the high-dose Danshensu group; better changes in the number of surviving fetuses in the low-dose Danshensu group; and more corrected brain development in both Danshensu-treated groups. CONCLUSION Danshensu proved effective in ameliorating the prognosis of fetal syndrome (FGR, fetal death and fetal resorption) and improving the curative effect on cerebral dysgenesis in the preeclampsia mouse model. Moreover, the results suggest that high-dose Danshensu is more effective in relieving FGR in general.

  8. A new thermal comfort approach comparing adaptive and PMV models

    Energy Technology Data Exchange (ETDEWEB)

    Orosa, Jose A. [Universidade da Coruna, Departamento de Energia y P. M. Paseo de Ronda, n :51, 15011. A Coruna (Spain); Oliveira, Armando C. [Universidade do Porto, Faculdade de Engenharia, New Energy Tec. Unit. Rua Dr Roberto Frias, 4200-465 Porto (Portugal)

    2011-03-15

    In buildings with heating, ventilation, and air-conditioning (HVAC), the Predicted Mean Vote index (PMV) was successful at predicting comfort conditions, whereas in naturally ventilated buildings, only adaptive models provide accurate predictions. On the other hand, permeable coverings can be considered as a passive control method of indoor conditions and, consequently, have implications in the perception of indoor air quality, local thermal comfort, and energy savings. These energy savings were measured in terms of the set point temperature established in accordance with adaptive methods. Problems appear when the adaptive model suggests the same neutral temperature for ambiences with the same indoor temperature but different relative humidities. In this paper, a new design of the PMV model is described to compare the neutral temperature to real indoor conditions. Results showed that this new PMV model tends to overestimate thermal neutralities but with a lower value than Fanger's PMV index. On the other hand, this new PMV model considers indoor relative humidity, showing a clear differentiation of indoor ambiences in terms of it, unlike adaptive models. Finally, spaces with permeable coverings present indoor conditions closer to thermal neutrality, with corresponding energy savings. (author)

  9. Comparing the ecological relevance of four wave exposure models

    Science.gov (United States)

    Sundblad, G.; Bekkby, T.; Isæus, M.; Nikolopoulos, A.; Norderhaug, K. M.; Rinde, E.

    2014-03-01

    Wave exposure is one of the main structuring forces in the marine environment. Methods that enable large scale quantification of environmental variables have become increasingly important for predicting marine communities in the context of spatial planning and coastal zone management. Existing methods range from cartographic solutions to numerical hydrodynamic simulations, and differ in the scale and spatial coverage of their outputs. Using a biological exposure index we compared the performance of four wave exposure models ranging from simple to more advanced techniques. All models were found to be related to the biological exposure index and their performance, measured as bootstrapped R2 distributions, overlapped. Qualitatively, there were differences in the spatial patterns indicating higher complexity with more advanced techniques. In order to create complex spatial patterns wave exposure models should include diffraction, especially in coastal areas rich in islands. The inclusion of wind strength and frequency, in addition to wind direction and bathymetry, further tended to increase the amount of explained variation. The large potential of high-resolution numerical models to explain the observed patterns of species distribution in complex coastal areas provide exciting opportunities for future research. Easy access to relevant wave exposure models will aid large scale habitat classification systems and the continuously growing field of marine species distribution modelling, ultimately serving marine spatial management and planning.
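
    A hedged sketch of the comparison device described above, bootstrapped R² distributions of each wave-exposure model against a biological exposure index; the variable names are placeholders, not the authors' data.

        import numpy as np

        def bootstrap_r2(model_values, bio_index, n_boot=1000, seed=0):
            rng = np.random.default_rng(seed)
            x = np.asarray(model_values, float)
            y = np.asarray(bio_index, float)
            r2 = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, len(x), len(x))           # resample sites with replacement
                r2[b] = np.corrcoef(x[idx], y[idx])[0, 1] ** 2  # R^2 of a simple linear fit
            return r2

        # r2_a = bootstrap_r2(exposure_model_a, biological_index)
        # r2_b = bootstrap_r2(exposure_model_b, biological_index)
        # overlapping r2_a / r2_b distributions indicate comparable performance,
        # as reported for the four models above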

  10. Comparing Two Strategies to Model Uncertainties in Structural Dynamics

    Directory of Open Access Journals (Sweden)

    Rubens Sampaio

    2010-01-01

    Full Text Available In the modeling of dynamical systems, uncertainties are present and they must be taken into account to improve the prediction of the models. Some strategies have been used to model uncertainties and the aim of this work is to discuss two of those strategies and to compare them. This will be done using the simplest model possible: a two d.o.f. (degrees of freedom) dynamical system. A simple system is used because it is very helpful to assure a better understanding and, consequently, comparison of the strategies. The first strategy (called parametric strategy) consists in taking each spring stiffness as uncertain and a random variable is associated to each one of them. The second strategy (called nonparametric strategy) is more general and considers the whole stiffness matrix as uncertain, and associates a random matrix to it. In both cases, the probability density functions either of the random parameters or of the random matrix are deduced from the Maximum Entropy Principle using only the available information. With this example, some important results can be discussed, which cannot be assessed when complex structures are used, as has been done so far in the literature. One important element for the comparison of the two strategies is the analysis of the sample spaces and how to compare them.
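
    A compact sketch of the two strategies on a two-d.o.f. mass-spring chain (illustrative only; the distributions and dispersion parameter below are assumptions, not the article's choices): the parametric strategy samples the two spring stiffnesses, while the nonparametric strategy samples a whole random stiffness matrix whose mean is the nominal matrix.

        import numpy as np

        rng = np.random.default_rng(1)
        M = np.diag([1.0, 1.0])                      # masses
        k1_mean, k2_mean = 100.0, 150.0              # nominal spring stiffnesses
        n_samples = 5000

        def stiffness(k1, k2):
            return np.array([[k1 + k2, -k2],
                             [-k2,      k2]])

        def natural_freqs(K):
            # eigenvalues of M^-1 K are the squared natural frequencies
            w2 = np.linalg.eigvals(np.linalg.solve(M, K))
            return np.sqrt(np.sort(w2.real))

        # parametric strategy: Gamma-distributed stiffnesses (positive support, fixed mean)
        param = np.array([natural_freqs(stiffness(rng.gamma(25.0, k1_mean / 25.0),
                                                  rng.gamma(25.0, k2_mean / 25.0)))
                          for _ in range(n_samples)])

        # nonparametric strategy: random symmetric positive-definite matrix with mean K0
        K0 = stiffness(k1_mean, k2_mean)
        L = np.linalg.cholesky(K0)
        p = 50                                       # dispersion control (assumed)
        nonparam = np.array([natural_freqs(L @ (G @ G.T / p) @ L.T)
                             for G in rng.normal(size=(n_samples, 2, p))])

        print("parametric    mean natural frequencies:", param.mean(axis=0))
        print("nonparametric mean natural frequencies:", nonparam.mean(axis=0))

    Comparing the two resulting frequency distributions gives a feel for how the choice of strategy, and of its sample space, shapes the predicted scatter.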

  11. Comparative analysis of existing models for power-grid synchronization

    CERN Document Server

    Nishikawa, Takashi

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks -- a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. U...

  12. Comparative analysis of existing models for power-grid synchronization

    Science.gov (United States)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
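
    In generic notation (not necessarily that of the article), the coupled second-order phase-oscillator form referred to above can be written as

        m_i \ddot{\theta}_i + d_i \dot{\theta}_i = P_i + \sum_{j=1}^{N} K_{ij} \sin\left(\theta_j - \theta_i\right), \qquad i = 1, \dots, N,

    where \theta_i is the phase of oscillator i, m_i and d_i are its inertia and damping coefficients, P_i is the forcing (net injected power), and K_{ij} is the coupling strength between oscillators i and j.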

  13. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.

  14. COMPARATIVE STUDY ON MAIN SOLVENCY ASSESSMENT MODELS FOR INSURANCE FIELD

    Directory of Open Access Journals (Sweden)

    Daniela Nicoleta SAHLIAN

    2015-07-01

    Full Text Available During the recent financial crisis in the insurance domain, new aspects emerged that have to be taken into account concerning risk management and surveillance activity. Insurance companies may develop internal models in order to determine the minimum capital requirement imposed by the new regulations to be adopted on 1 January 2016. In this respect, the purpose of this research paper is to present and compare the main solvency regulation systems used worldwide, with the accent on their common characteristics and current tendencies. Thereby, we would like to offer a better understanding of the similarities and differences between the existing solvency regimes in order to develop the best solvency regime for Romania within the Solvency II project. The study shows that there are clear differences between the existing Solvency I regime and the new risk-based approaches, and also points out that even though the key principles supporting the new solvency regimes are convergent, there are many approaches to the application of these principles. In this context, the questions we try to answer are "how could the global solvency models be useful for the financial surveillance authority of Romania for the implementation of the general model and for the development of internal solvency models according to the requirements of Solvency II" and "what would be the requirements for the implementation of this type of approach?". This makes the analysis of solvency models an interesting exercise.

  15. Comparative Study of MHD Modeling of the Background Solar Wind

    CERN Document Server

    Gressl, C; Temmer, M; Odstrcil, D; Linker, J A; Mikic, Z; Riley, P

    2013-01-01

    Knowledge about the background solar wind plays a crucial role in the framework of space weather forecasting. In-situ measurements of the background solar wind are only available for a few points in the heliosphere where spacecraft are located, therefore we have to rely on heliospheric models to derive the distribution of solar wind parameters in interplanetary space. We test the performance of different solar wind models, namely Magnetohydrodynamic Algorithm outside a Sphere/ENLIL (MAS/ENLIL), Wang-Sheeley-Arge/ENLIL (WSA/ENLIL), and MAS/MAS, by comparing model results with in-situ measurements from spacecraft located at 1 AU distance to the Sun (ACE, Wind). To exclude the influence of interplanetary coronal mass ejections (ICMEs), we chose the year 2007 as a time period with low solar activity for our comparison. We found that the general structure of the background solar wind is well reproduced by all models. The best model results were obtained for the parameter solar wind speed. However, the predicted ar...

  16. Dinucleotide controlled null models for comparative RNA gene prediction

    Directory of Open Access Journals (Sweden)

    Gesell Tanja

    2008-05-01

    Full Text Available Abstract Background: Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. Results: We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. Conclusion: SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require

  17. Comparative Modelling of the Spectra of Cool Giants

    Science.gov (United States)

    Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.

    2012-01-01

    Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.

  18. A New Method of Comparing Forcing Agents in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Kravitz, Benjamin S.; MacMartin, Douglas; Rasch, Philip J.; Jarvis, Andrew

    2015-10-14

    We describe a new method of comparing different climate forcing agents (e.g., CO2, CH4, and solar irradiance) that avoids many of the ambiguities introduced by temperature-related climate feedbacks. This is achieved by introducing an explicit feedback loop external to the climate model that adjusts one forcing agent to balance another while keeping global mean surface temperature constant. Compared to current approaches, this method has two main advantages: (i) the need to define radiative forcing is bypassed and (ii) by maintaining roughly constant global mean temperature, the effects of state dependence on internal feedback strengths are minimized. We demonstrate this approach for several different forcing agents and derive the relationships between these forcing agents in two climate models; comparisons between forcing agents are highly linear in concordance with predicted functional forms. Transitivity of the relationships between the forcing agents appears to hold within a wide range of forcing. The relationships between the forcing agents obtained from this method are consistent across both models but differ from relationships that would be obtained from calculations of radiative forcing, highlighting the importance of controlling for surface temperature feedback effects when separating radiative forcing and climate response.
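
    A toy sketch of the control idea (illustrative only, not the authors' implementation): an integral feedback loop, external to a zero-dimensional energy-balance "model", adjusts solar forcing so that a ramped CO2 forcing produces no change in global mean temperature; the ratio of the two forcings at the end is then the trade-off between the agents. All parameter values below are assumptions.

        lam = 1.2      # climate feedback parameter, W m-2 K-1 (assumed)
        C = 8.0        # effective heat capacity, W yr m-2 K-1 (assumed)
        k_i = 0.5      # integral controller gain (assumed)
        dt = 0.1       # time step in years

        T, F_solar, integral = 0.0, 0.0, 0.0
        for step in range(int(150 / dt)):
            year = step * dt
            F_co2 = 3.7 * min(year / 70.0, 1.0)   # ramp to roughly 2xCO2 forcing over 70 yr
            integral += -T * dt                   # integral feedback on the temperature error
            F_solar = k_i * integral              # controller output: compensating solar forcing
            T += dt * (F_co2 + F_solar - lam * T) / C

        print(f"CO2 forcing {F_co2:.2f} W m-2 is balanced by solar forcing {F_solar:.2f} W m-2 "
              f"(residual warming {T:.3f} K)")

    Because the controller drives the temperature error to zero, the surface-temperature feedbacks that complicate radiative-forcing comparisons are largely held fixed, which is the point of the method described above.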

  19. Ionospheric topside models compared with experimental electron density profiles

    Directory of Open Access Journals (Sweden)

    S. M. Radicella

    2005-06-01

    Full Text Available Recently an increasing number of topside electron density profiles has been made available to the scientific community on the Internet. These data are important for ionospheric modeling purposes, since the experimental information on the electron density above the ionosphere maximum of ionization is very scarce. The present work compares NeQuick and IRI models with the topside electron density profiles available in the databases of the ISIS2, IK19 and Cosmos 1809 satellites. Experimental electron content from the F2 peak up to satellite height and electron densities at fixed heights above the peak have been compared under a wide range of different conditions. The analysis performed points out the behavior of the models and the improvements needed to be assessed to have a better reproduction of the experimental results. NeQuick topside is a modified Epstein layer, with thickness parameter determined by an empirical relation. It appears that its performance is strongly affected by this parameter, indicating the need for improvements of its formulation. IRI topside is based on Booker's approach to consider two parts with constant height gradients. It appears that this formulation leads to an overestimation of the electron density in the upper part of the profiles, and overestimation of TEC.
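
    For reference, the general Epstein-layer form referred to above for the NeQuick topside can be written (in generic notation) as

        N(h) = \frac{4 N_m \exp\left((h - h_m)/B\right)}{\left[1 + \exp\left((h - h_m)/B\right)\right]^2},

    where N_m and h_m are the peak electron density and peak height, and B is the thickness parameter; the abstract's point is that the topside behaviour of NeQuick is governed by the empirical relation used for this thickness parameter.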

  20. Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies

    Science.gov (United States)

    Klopf, M.; Pietsch, S. A.; Hasenauer, H.

    2009-04-01

    The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities have been successively reduced, standing volume and coarse woody debris (CWD) have increased, and degraded soils have begun to recover. One option to study the rehabilitation process towards a natural virgin forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare the standing tree volume resulting from (i) the individual tree growth model MOSES, and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis 40 plots of this inventory were selected which comprise the required dead wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts

  1. A Two-Step Model for Assessing Relative Interest in E-Books Compared to Print

    Science.gov (United States)

    Knowlton, Steven A.

    2016-01-01

    Librarians often wish to know whether readers in a particular discipline favor e-books or print books. Because print circulation and e-book usage statistics are not directly comparable, it can be hard to determine the relative interest of readers in the two types of books. This study demonstrates a two-step method by which librarians can assess…

  2. Comparative dynamic analysis of the full Grossman model.

    Science.gov (United States)

    Ried, W

    1998-08-01

    The paper applies the method of comparative dynamic analysis to the full Grossman model. For a particular class of solutions, it derives the equations implicitly defining the complete trajectories of the endogenous variables. Relying on the concept of Frisch decision functions, the impact of any parametric change on an endogenous variable can be decomposed into a direct and an indirect effect. The focus of the paper is on marginal changes in the rate of health capital depreciation. It also analyses the impact of either initial financial wealth or the initial stock of health capital. While the direction of most effects remains ambiguous in the full model, the assumption of a zero consumption benefit of health is sufficient to obtain a definite sign for any direct or indirect effect.

  3. Observations favor the crossing of phantom divide lines

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Using three different parameterized dark energy models, we reconstruct the properties of dark energy from the latest 397 SNe Ia, CMB and BAO data, with the present matter density Ωm0 given as a prior. We find that, when Ωm0 is not small, for example Ωm0 = 0.28 or 0.32, an evolving dark energy with a crossing of the phantom divide line is favored, and this conclusion seems to be model independent. We also find that the evolving properties of dark energy become more and more evident with the increase of the prior value of Ωm0.

  4. Comparing theoretical models of our galaxy with observations

    Directory of Open Access Journals (Sweden)

    Johnston K.V.

    2012-02-01

    Full Text Available With the advent of large scale observational surveys to map out the stars in our galaxy, there is a need for an efficient tool to compare theoretical models of our galaxy with observations. To this end, we describe here the code Galaxia, which uses efficient and fast algorithms for creating a synthetic survey of the Milky Way, and discuss its uses. Given one or more observational constraints like the color-magnitude bounds, a survey size and geometry, Galaxia returns a catalog of stars in accordance with a given theoretical model of the Milky Way. Both analytic and N-body models can be sampled by Galaxia. For N-body models, we present a scheme that disperses the stars spawned by an N-body particle, in such a way that the phase space density of the spawned stars is consistent with that of the N-body particles. The code is ideally suited to generating synthetic data sets that mimic near future wide area surveys such as GAIA, LSST and HERMES. In future, we plan to release the code publicly at http://galaxia.sourceforge.net. As an application of the code, we study the prospect of identifying structures in the stellar halo with future surveys that will have velocity information about the stars.

  5. Modeling and comparative study of fluid velocities in heterogeneous rocks

    Science.gov (United States)

    Hingerl, Ferdinand F.; Romanenko, Konstantin; Pini, Ronny; Balcom, Bruce; Benson, Sally

    2013-04-01

    Detailed knowledge of the distribution of effective porosity and fluid velocities in heterogeneous rock samples is crucial for understanding and predicting spatially resolved fluid residence times and kinetic reaction rates of fluid-rock interactions. The applicability of conventional MRI techniques to sedimentary rocks is limited by internal magnetic field gradients and short spin relaxation times. The approach developed at the UNB MRI Centre combines the 13-interval Alternating-Pulsed-Gradient Stimulated-Echo (APGSTE) scheme and three-dimensional Single Point Ramped Imaging with T1 Enhancement (SPRITE). These methods were designed to reduce the errors due to effects of background gradients and fast transverse relaxation. SPRITE is largely immune to time-evolution effects resulting from background gradients, paramagnetic impurities and chemical shift. Using these techniques quantitative 3D porosity maps as well as single-phase fluid velocity fields in sandstone core samples were measured. Using a new Magnetic Resonance Imaging technique developed at the MRI Centre at UNB, we created 3D maps of porosity distributions as well as single-phase fluid velocity distributions of sandstone rock samples. Then, we evaluated the applicability of the Kozeny-Carman relationship for modeling measured fluid velocity distributions in sandstones samples showing meso-scale heterogeneities using two different modeling approaches. The MRI maps were used as reference points for the modeling approaches. For the first modeling approach, we applied the Kozeny-Carman relationship to the porosity distributions and computed respective permeability maps, which in turn provided input for a CFD simulation - using the Stanford CFD code GPRS - to compute averaged velocity maps. The latter were then compared to the measured velocity maps. For the second approach, the measured velocity distributions were used as input for inversely computing permeabilities using the GPRS CFD code. The computed
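
    One commonly quoted form of the Kozeny-Carman relationship mentioned above (the prefactor and the choice of characteristic length vary between formulations, so this is not necessarily the exact form used in the study) is

        k = \frac{d_p^2}{180} \cdot \frac{\phi^3}{(1 - \phi)^2},

    where k is the permeability, \phi the (effective) porosity and d_p a representative grain diameter; applied cell by cell to the measured porosity maps, it yields the permeability fields that serve as input to the CFD simulation in the first modeling approach.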

  6. Bioactive, mechanically favorable, and biodegradable copolymer nanocomposites for orthopedic applications.

    Science.gov (United States)

    Victor, Sunita Prem; Muthu, Jayabalan

    2014-06-01

    We report the synthesis of mechanically favorable, bioactive, and biodegradable copolymer nanocomposites for potential bone applications. The nanocomposites consist of in situ polymerized biodegradable copolyester with hydroxyapatite (HA). The biodegradable copolyesters comprise carboxy-terminated poly(propylene fumarate) (CT-PPF) and poly(trimethylol propane fumarate co mannitol sebacate) (TF-Co-MS). Raman spectral imaging clearly reveals a uniform, homogeneous distribution of HA in the copolymer matrix. The mechanical studies reveal improved mechanical properties when crosslinked with methyl methacrylate (MMA) as compared to N-vinyl pyrrolidone (NVP). The SEM micrographs of the copolymer nanocomposites reveal a serrated structure reflecting higher mechanical strength, good dispersion, and good interfacial bonding of HA in the polymer matrix. In vitro degradation of the copolymer crosslinked with MMA is relatively greater than that of NVP, and the degradation decreases with an increase in the amount of the HA filler. The mechanically favorable and degradable MMA-based nanocomposites also have favorable bioactivity, blood compatibility, cytocompatibility and cell adhesion. The present nanocomposite is thus a promising material for orthopedic applications.

  7. Comparing models of star formation simulating observed interacting galaxies

    Science.gov (United States)

    Quiroga, L. F.; Muñoz-Cuartas, J. C.; Rodrigues, I.

    2017-07-01

    In this work, we make a comparison between different models of star formation to reproduce observed interacting galaxies. We use observational data to model the evolution of a pair of galaxies undergoing a minor merger. Minor mergers represent situations weakly deviated from the equilibrium configuration, but significant changes in star formation (SF) efficiency can take place; minor mergers therefore provide a unique setting to study SF in galaxies in a realistic but simple way. Reproducing observed systems also gives us the opportunity to compare the results of the simulations with observations, which in the end can be used as probes to characterize the SF models implemented in the comparison. In this work we compare two different star formation recipes implemented in the Gadget3 and GIZMO codes. Both codes share the same numerical background, and differences arise mainly in the star formation recipe they use. We use observations from the Pico dos Dias and GEMINI telescopes and show how we use observational data of the interacting pair in AM2229-735 to characterize the pair. Later we use this information to simulate the evolution of the system and finally reproduce the observations: mass distribution, morphology and the main features of the merger-induced star formation burst. We show that both methods manage to reproduce the star formation activity roughly. We show, through a careful study, that resolution plays a major role in the reproducibility of the system. In that sense, the star formation recipe implemented in the GIZMO code has shown a more robust performance. Acknowledgements: This work is supported by Colciencias, Doctorado Nacional - 617 program.

  8. THE FLAT TAX - A COMPARATIVE STUDY OF THE EXISTING MODELS

    Directory of Open Access Journals (Sweden)

    Schiau (Macavei Laura - Liana

    2011-07-01

    Full Text Available In the last two decades flat tax systems have spread all around the globe, from Eastern and Central Europe to Asia and Central America. Many specialists consider this phenomenon a real fiscal revolution, but others see it as a mistake as long as the new systems are just a feint of the true flat tax designed by the famous Stanford University professors Robert Hall and Alvin Rabushka. In this context this paper tries to determine which of the existing flat tax systems resemble the true flat tax model by comparing and contrasting their main characteristics with the features of the model proposed by Hall and Rabushka. The research also underlines the common features and the differences between the existing models. The idea of this kind of study is not really new; others have done it, but the comparison was limited to one country. For example, Emil Kalchev from New Bulgarian University has assessed the Bulgarian income tax system by comparing it with the flat tax, concluding that taxation in Bulgaria is not simple, neutral and non-distortive. Our research is based on several case studies and on compare-and-contrast qualitative and quantitative methods. The study starts from the fiscal design drawn by the two American professors in the book The Flat Tax. Four main characteristics of the flat tax system were chosen in order to build the comparison: fiscal design, simplicity, avoidance of double taxation and uniformity of the tax rates. The jurisdictions chosen for the case study are countries all around the globe with fiscal systems which are considered flat tax systems. The results obtained show that the fiscal design of Hong Kong is the only flat tax model which is built following an economic logic and not a legal sense, being at the same time a simple and transparent system. Other countries, such as Slovakia, Albania and Macedonia in Central and Eastern Europe, fulfill the requirement regarding the uniformity of taxation. Other jurisdictions avoid the double

  9. Comparing spatial and temporal transferability of hydrological model parameters

    Science.gov (United States)

    Patil, Sopan D.; Stieglitz, Marc

    2015-06-01

    Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal aspects of catchment hydrological variability.
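
    A minimal sketch of the Kling-Gupta efficiency used as the performance metric above (standard 2009 formulation; not the authors' code):

        import numpy as np

        def kling_gupta_efficiency(sim, obs):
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
            alpha = np.std(sim) / np.std(obs)      # variability ratio
            beta = np.mean(sim) / np.mean(obs)     # bias ratio
            return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

        # a perfect simulation gives KGE = 1; the score drops as bias,
        # variability error or decorrelation grow
        obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        print(kling_gupta_efficiency(1.1 * obs, obs))

    The percentage declines quoted in the abstract are declines in this score under the different parameter-transfer schemes.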

  10. Curriculum inventory: Modeling, sharing and comparing medical education programs.

    Science.gov (United States)

    Ellaway, Rachel H; Albright, Susan; Smothers, Valerie; Cameron, Terri; Willett, Timothy

    2014-03-01

    Abstract descriptions of how curricula are structured and run. The American National Standards Institute (ANSI) MedBiquitous Curriculum Inventory Standard provides a technical syntax through which a wide range of different curricula can be expressed and subsequently compared and analyzed. This standard has the potential to shift curriculum mapping and reporting from a somewhat disjointed and institution-specific undertaking to something that is shared among multiple medical schools and across whole medical education systems. Given the current explosion of different models of curricula (time-free, competency-based, socially accountable, distributed, accelerated, etc.), the ability to consider this diversity using a common model has particular value in medical education management and scholarship. This article describes the development and structure of the Curriculum Inventory Standard as a way of standardizing the modeling of different curricula for audit, evaluation and research purposes. It also considers the strengths and limitations of the current standard and the implications for a medical education world in which this level of commonality, precision, and accountability for curricular practice is the norm rather than the exception.

  11. An overview of comparative modelling and resources dedicated to large-scale modelling of genome sequences.

    Science.gov (United States)

    Lam, Su Datt; Das, Sayoni; Sillitoe, Ian; Orengo, Christine

    2017-08-01

    Computational modelling of proteins has been a major catalyst in structural biology. Bioinformatics groups have exploited the repositories of known structures to predict high-quality structural models with high efficiency at low cost. This article provides an overview of comparative modelling, reviews recent developments and describes resources dedicated to large-scale comparative modelling of genome sequences. The value of subclustering protein domain superfamilies to guide the template-selection process is investigated. Some recent cases in which structural modelling has aided experimental work to determine very large macromolecular complexes are also cited.

  12. Comparative and pharmacophore model for deacetylase SIRT1

    Science.gov (United States)

    Huhtiniemi, Tero; Wittekindt, Carsten; Laitinen, Tuomo; Leppänen, Jukka; Salminen, Antero; Poso, Antti; Lahtela-Kakkonen, Maija

    2006-09-01

    Sirtuins are NAD-dependent histone deacetylases, which cleave the acetyl group from acetylated proteins such as histones, but also the acetyl groups from several transcription factors, and in this way can change their activities. Of all seven mammalian SirTs, the human sirtuin SirT1 has been the most extensively studied. However, there is no crystal structure or comparative model reported for SirT1. We have therefore built a three-dimensional comparative model of the SirT1 protein catalytic core (domain area from residues 244 to 498 of the full-length SirT1) in order to assist in the investigation of active site-ligand interactions and in the design of novel SirT1 inhibitors. In this study we also propose the binding mode of a recently reported set of indole-based inhibitors in SirT1. The site of interaction and the ligand conformation were predicted by the use of molecular docking techniques. To distinguish between active and inactive compounds, a post-docking filter based on the H-bond network was constructed. Docking results were used to investigate the pharmacophore and to identify a filter for database mining.

  13. Catalytically favorable surface patterns in Pt-Au nanoclusters

    KAUST Repository

    Mokkath, Junais Habeeb

    2013-01-01

    Motivated by recent experimental demonstrations of novel PtAu nanoparticles with highly enhanced catalytic properties, we present a systematic theoretical study that explores principal catalytic indicators as a function of the particle size and composition. We find that Pt electronic states in the vicinity of the Fermi level combined with a modified electron distribution in the nanoparticle due to Pt-to-Au charge transfer are the origin of the outstanding catalytic properties. From our model we deduce the catalytically favorable surface patterns that induce ensemble and ligand effects. © The Royal Society of Chemistry 2013.

  14. Comparing Transformation Possibilities of Topological Functioning Model and BPMN in the Context of Model Driven Architecture

    Directory of Open Access Journals (Sweden)

    Solomencevs Artūrs

    2016-05-01

    Full Text Available The approach called “Topological Functioning Model for Software Engineering” (TFM4SE) applies the Topological Functioning Model (TFM) for modelling the business system in the context of Model Driven Architecture. TFM is a mathematically formal computation independent model (CIM). TFM4SE is compared to an approach that uses BPMN as a CIM. The comparison focuses on CIM modelling and on the transformation to a UML Sequence diagram on the platform independent (PIM) level. The results show the advantages and drawbacks the formalism of TFM brings into the development.

  15. Comparing spatial and temporal transferability of hydrological model parameters

    Science.gov (United States)

    Patil, Sopan; Stieglitz, Marc

    2015-04-01

    Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. In our view, such comparison is especially pertinent in the context of increasing appeal and popularity of the "trading space for time" approaches that are proposed for assessing the hydrological implications of anthropogenic climate change. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal

  16. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  17. Comparing Entrepreneurship Intention: A Multigroup Structural Equation Modeling Approach

    Directory of Open Access Journals (Sweden)

    Sabrina O. Sihombing

    2012-04-01

    Full Text Available Unemployment is one of the main social and economic problems that many countries face nowadays. One strategic way to overcome this problem is by fostering the entrepreneurship spirit, especially among unemployed graduates. Entrepreneurship is becoming an alternative job for students after they graduate. This is because entrepreneurship offers major benefits, such as setting up one’s own business and the possibility of earning greater financial rewards than working for others. Entrepreneurship is therefore offered by many universities. This research applies the theory of planned behavior (TPB), incorporating attitude toward success as an antecedent variable of attitude, to examine students’ intention to become an entrepreneur. The objective of this research is to compare entrepreneurship intention between business students and non-business students. A self-administered questionnaire was used to collect data for this study. Questionnaires were distributed to respondents using the drop-off/pick-up method. A total of 294 questionnaires were used in the analysis. Data were analyzed using structural equation modeling. Two out of four hypotheses were confirmed: the relationship between the attitude toward becoming an entrepreneur and the intention to try becoming an entrepreneur, and the relationship between perceived behavioral control and the intention to try becoming an entrepreneur. This paper also provides a discussion and offers directions for future research.

  19. Comparative analysis of marine ecosystems: international production modelling workshop.

    Science.gov (United States)

    Link, Jason S; Megrey, Bernard A; Miller, Thomas J; Essington, Tim; Boldt, Jennifer; Bundy, Alida; Moksness, Erlend; Drinkwater, Ken F; Perry, R Ian

    2010-12-23

    Understanding the drivers that dictate the productivity of marine ecosystems continues to be a globally important issue. A vast literature identifies three main processes that regulate the production dynamics of such ecosystems: biophysical, exploitative and trophodynamic. Exploring the prominence among this 'triad' of drivers, through a synthetic analysis, is critical for understanding how marine ecosystems function and subsequently produce fisheries resources of interest to humans. To explore this topic further, an international workshop was held on 10-14 May 2010, at the National Academy of Science's Jonsson Center in Woods Hole, MA, USA. The workshop compiled the data required to develop production models at different hierarchical levels (e.g. species, guild, ecosystem) for many of the major Northern Hemisphere marine ecosystems that have supported notable fisheries. Analyses focused on comparable total system biomass production, functionally equivalent species production, or simulation studies for 11 different marine fishery ecosystems. Workshop activities also led to new analytical tools. Preliminary results suggested common patterns driving overall fisheries production in these ecosystems, but also highlighted variation in the relative importance of each among ecosystems.

  20. Salinity inversions in the thermocline under upwelling favorable winds

    Science.gov (United States)

    Burchard, Hans; Basdurak, N. Berkay; Gräwe, Ulf; Knoll, Michaela; Mohrholz, Volker; Müller, Selina

    2017-02-01

    This paper discusses and explains the phenomenon of salinity inversions in the thermocline offshore from an upwelling region during upwelling favorable winds. Using the nontidal central Baltic Sea as an easily accessible natural laboratory, high-resolution transect and station observations in the upper layers are analyzed. The data show local salinity minima in the strongly stratified seasonal thermocline during summer conditions under the influence of upwelling favorable wind. A simple analytical box model using parameters (including variation by means of a Monte Carlo method) estimated from a hindcast model for the Baltic Sea is constructed to explain the observations. As a result, upwelled water with high salinity and low temperature is warmed up due to downward surface heat fluxes while it is transported offshore by the Ekman transport. The warming of upwelled surface water allows maintenance of stable stratification despite the destabilizing salinity stratification, such that local salinity minima in the thermocline can be generated. Inspection of published observations from the Benguela, Peruvian, and eastern tropical North Atlantic upwelling systems shows that salinity inversions occur in the thermocline there as well, but in these cases thermocline salinity shows local maxima, since upwelled water has a lower salinity than the surface water. It is hypothesized that thermocline salinity inversions should generally occur offshore from upwelling regions whenever winds are steady enough and surface warming is sufficiently strong.

  1. Prehospital Intubation is Associated with Favorable Outcomes and Lower Mortality in ProTECT III.

    Science.gov (United States)

    Denninghoff, Kurt R; Nuño, Tomas; Pauls, Qi; Yeatts, Sharon D; Silbergleit, Robert; Palesch, Yuko Y; Merck, Lisa H; Manley, Geoff T; Wright, David W

    2017-01-01

    Traumatic brain injury (TBI) causes more than 2.5 million emergency department visits, hospitalizations, or deaths annually. Prehospital endotracheal intubation has been associated with poor outcomes in patients with TBI in several retrospective observational studies. We evaluated the relationship between prehospital intubation, functional outcomes, and mortality using high-quality data on clinical practice collected prospectively during a randomized multicenter clinical trial. ProTECT III was a multicenter randomized, double-blind, placebo-controlled trial of early administration of progesterone in 882 patients with acute moderate to severe nonpenetrating TBI. Patients were excluded if they had an index GCS of 3 and nonreactive pupils, if life support was withdrawn on arrival, or if they had documented prolonged hypotension and/or hypoxia. Prehospital intubation was performed as per local clinical protocol in each participating EMS system. Models for favorable outcome and mortality included prehospital intubation, method of transport, index GCS, age, race, and ethnicity as independent variables. Significance was set at α = 0.05. Favorable outcome was defined by a stratified dichotomy of the GOS-E scores in which the definition of favorable outcome depended on the severity of the initial injury. Favorable outcome was more frequent in the 349 subjects with prehospital intubation (57.3%) than in the other 533 patients (46.0%; p = 0.003). Mortality was also lower in the prehospital intubation group (13.8% vs. 19.5%, p = 0.03). Logistic regression analysis of prehospital intubation and mortality, adjusted for index GCS, showed that the odds of dying for those with prehospital intubation were 47% lower than for those who were not intubated (OR = 0.53, 95% CI = 0.36-0.78). 279 patients with prehospital intubation were transported by air. Modeling transport method and mortality, adjusted for index GCS, showed increased odds of dying in those transported by ground
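
The mortality result above is reported as an adjusted odds ratio from logistic regression. As a hedged illustration of that kind of analysis, the sketch below fits a logistic model with statsmodels on entirely synthetic data; the variable names, effect sizes, and sample are invented and are not the ProTECT III records.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration only -- not ProTECT III data.
rng = np.random.default_rng(1)
n = 882
intubated = rng.integers(0, 2, n)                 # 1 = prehospital intubation
gcs = rng.integers(4, 13, n)                      # index GCS (moderate to severe TBI)
# Assumed toy relationship: lower GCS and no intubation raise mortality risk
logit_p = -1.0 - 0.6 * intubated - 0.15 * (gcs - 8)
died = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([intubated, gcs]))
fit = sm.Logit(died, X).fit(disp=0)
odds_ratios = np.exp(fit.params)                  # odds ratios per predictor
ci = np.exp(fit.conf_int())                       # 95% CI on the odds-ratio scale
print("OR (const, intubation, GCS):", np.round(odds_ratios, 2))
print("95% CI:\n", np.round(ci, 2))
```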

  2. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    Full Text Available When developing an information system, it is important to create clear models and to choose suitable modeling languages. The article analyzes the SRML, SBVR, PRR, SWRL and OCL rule-specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. It presents a theoretical comparison of business rules and business process modeling languages. Based on selected modeling aspects, sets of business process modeling languages and business rules representation languages are compared. Finally, the best-fitting set of languages is selected for a three-layer framework for business-rule-based software modeling.

  3. Most Americans Favor Larger Health Warnings on Cigarette Packs

    Science.gov (United States)

    https://medlineplus.gov/news/fullstory_164398.html Most Americans Favor Larger Health Warnings on Cigarette Packs ... According to the study's first author, Sarah Kowitt, "Most adults, including smokers, have favorable attitudes towards larger ...

  4. Modelling constructed wetlands: scopes and aims - a comparative review

    OpenAIRE

    Meyer, D; Chazarenc, Florent; Claveau Mallet, D.; Dittmer, D; Forquet, N.; Molle, P.; Morvannou, A.; Palfy, T.; Petitjean, A; Rizzo, A.; Samso Campa, R.; Scholz, M.; Soric, Audrey; Langergraber, G.

    2015-01-01

    International audience; During the last two decades a number of models have been developed for constructed wetlands, with differing purposes. Although the use of this kind of tool is now generally accepted, misuse of the models still confirms the skepticism. Generally, three groups of models can be distinguished: on the one hand, mechanistic models try to represent the complex and diffuse interaction of the occurring processes; on the other hand, the same kind of models are used to investigate si...

  5. Comparative study on mode split discrete choice models

    Institute of Scientific and Technical Information of China (English)

    Xianlong Chen; Xiaoqian Liu; Fazhi Li

    2013-01-01

    The discrete choice model acts as one of the most important tools for studies involving mode split in the context of transport demand forecasting. As different types of discrete choice models display their merits and restrictions diversely, how to properly select a specific type of discrete choice model for a realistic application remains a tough problem. In this article, five typical discrete choice models for transport mode split are discussed: the multinomial logit model, the nested logit model (NL), the heteroscedastic extreme value model, the multinomial probit model and the mixed multinomial logit model (MMNL). The theoretical basis and application attributes of these five models are analysed in detail, and the models are applied to a realistic intercity case of mode split forecasting. The results indicate that the NL model does well in accommodating similarity and heterogeneity across alternatives, while the MMNL model serves as the most effective method for mode choice prediction, since it shows the highest reliability with the least significant prediction errors and even outperforms the other four models in solving the heterogeneity and similarity problems. This study indicates that conclusions derived from a single discrete choice model are not reliable, and it is better to choose the proper model based on its characteristics.
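
To make the common core of these models concrete, the sketch below evaluates multinomial logit choice probabilities for a hypothetical three-mode intercity case; the utilities, costs, and taste parameters are invented for illustration and are not taken from the study.

```python
import numpy as np

def mnl_choice_probabilities(V):
    """Multinomial logit: P_j = exp(V_j) / sum_k exp(V_k) for systematic utilities V."""
    V = np.asarray(V, float)
    expV = np.exp(V - V.max(axis=-1, keepdims=True))   # shift for numerical stability
    return expV / expV.sum(axis=-1, keepdims=True)

# Hypothetical systematic utilities for three intercity modes: car, rail, bus
beta_cost, beta_time = -0.08, -0.02            # assumed taste parameters
cost = np.array([40.0, 30.0, 20.0])            # monetary cost per trip
time = np.array([120.0, 90.0, 180.0])          # travel time in minutes
asc = np.array([0.0, 0.5, -0.3])               # alternative-specific constants
V = asc + beta_cost * cost + beta_time * time
print(dict(zip(["car", "rail", "bus"], np.round(mnl_choice_probabilities(V), 3))))
```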

  6. Charcoal kiln relicts - a favorable site for tree growth?

    Science.gov (United States)

    Buras, Allan; Hirsch, Florian; van der Maaten, Ernst; Takla, Melanie; Räbiger, Christin; Cruz Garcia, Roberto; Schneider, Anna; Raab, Alexandra; Raab, Thomas; Wilmking, Martin

    2015-04-01

    Soils with incompletely combusted organic material (aka 'black carbon') are considered fertile for plant growth. Considerable enrichment of soils with black carbon is known from Chernozems, from anthropogenically induced alteration of soils such as the 'Terra Preta' in South America (e.g. Glaser, 2001), and from charcoal kiln relicts. Recent studies have reported a high spatial frequency of charcoal kiln relicts in the Northeastern German lowlands (Raab et al., 2015), which today are often overgrown by forest plantations. In this context the question arises whether these sites are favorable for tree growth. Here we compare the performance of 22 Pinus sylvestris individuals - a commonly used tree species in forestry - growing on charcoal kiln relicts with 22 control trees. Growth performance (height growth and diameter growth) of the trees was determined using dendrochronological techniques, i.e. standard ring-width measurements were undertaken on two cores per tree, and tree height was measured in the field. Several other wood properties, such as annual wood density, average resin content and wood chemistry, were analyzed. Our results indicate that trees growing on charcoal kiln relicts grow significantly less and have a significantly lower wood density than control trees. Specific chemical components such as manganese, as well as resin contents, were significantly higher in kiln trees. These results highlight that tree growth on charcoal kiln relicts is actually hampered rather than enhanced. Possibly this is a combined effect of differing physical soil properties, which alter soil water accessibility for plants, and differing chemical soil properties, which may negatively affect tree growth either if toxic limits are surpassed or if soil nutrient availability is decreased. Additional soil analyses with respect to soil texture and soil chemistry shall reveal further insight into this hypothesis. Given the frequent distribution of charcoal kiln relicts in

  7. Comparative assessment of three-phase oil relative permeability models

    Science.gov (United States)

    Ranaee, Ehsan; Riva, Monica; Porta, Giovanni M.; Guadagnini, Alberto

    2016-07-01

    We assess the ability of 11 models to reproduce three-phase oil relative permeability (kro) laboratory data obtained in a water-wet sandstone sample. We do so by considering model performance when (i) solely two-phase data are employed to render predictions of kro and (ii) two and three-phase data are jointly used for model calibration. In the latter case, a Maximum Likelihood (ML) approach is used to estimate model parameters. The tested models are selected among (i) classical models routinely employed in practical applications and implemented in commercial reservoir software and (ii) relatively recent models which are considered to allow overcoming some drawbacks of the classical formulations. Among others, the latter set of models includes the formulation recently proposed by Ranaee et al., which has been shown to embed the critical effects of hysteresis, including the reproduction of oil remobilization induced by gas injection in water-wet media. We employ formal model discrimination criteria to rank models according to their skill to reproduce the observed data and use ML Bayesian model averaging to provide model-averaged estimates (and associated uncertainty bounds) of kro by taking advantage of the diverse interpretive abilities of all models analyzed. The occurrence of elliptic regions is also analyzed for selected models in the framework of the classical fractional flow theory of displacement. Our study confirms that model outcomes based on channel flow theory and classical saturation-weighted interpolation models do not generally yield accurate reproduction of kro data, especially in the regime associated with low oil saturations, where water alternating gas injection (WAG) techniques are usually employed for enhanced oil recovery. This negative feature is not observed in the model of Ranaee et al. (2015) due to its ability to embed key effects of pore-scale phase distributions, such as hysteresis effects and cycle dependency, for modeling kro observed
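
The abstract mentions formal model discrimination criteria and model-averaged estimates across the candidate formulations. One simple, generic way to combine models (not necessarily the criterion used by the authors) is information-criterion weighting; the sketch below, with made-up log-likelihoods and parameter counts, shows the idea.

```python
import numpy as np

def akaike_weights(log_likelihoods, n_params):
    """Turn maximized log-likelihoods and parameter counts into AIC-based model weights."""
    ll = np.asarray(log_likelihoods, float)
    k = np.asarray(n_params, float)
    aic = 2 * k - 2 * ll
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical fits of three k_ro models to the same data set (numbers are invented)
ll = [-120.4, -118.9, -110.2]       # maximized log-likelihoods
k = [3, 4, 6]                       # number of fitted parameters
w = akaike_weights(ll, k)
print("model weights:", np.round(w, 3))

# Model-averaged prediction at one saturation state (hypothetical per-model values)
kro_predictions = np.array([0.12, 0.10, 0.15])
print("averaged k_ro:", np.round(np.dot(w, kro_predictions), 3))
```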

  8. Orthodontic measurements on digital study models compared with plaster models: a systematic review.

    Science.gov (United States)

    Fleming, P S; Marinho, V; Johal, A

    2011-02-01

    The aim of this study is to evaluate the validity of the use of digital models to assess tooth size, arch length, irregularity index, arch width and crowding versus measurements generated on hand-held plaster models with digital callipers in patients with and without malocclusion. Studies comparing linear and angular measurements obtained on digital and standard plaster models were identified by searching multiple databases including MEDLINE, LILACS, BBO, ClinicalTrials.gov, the National Research Register and Pro-Quest Dissertation Abstracts and Thesis database, without restrictions relating to publication status or language of publication. Two authors were involved in study selection, quality assessment and the extraction of data. Items from the Quality Assessment of Studies of Diagnostic Accuracy included in Systematic Reviews checklist were used to assess the methodological quality of included studies. No meta-analysis was conducted. Comparisons between measurements of digital and plaster models made directly within studies were reported, and the difference between the (repeated) measurement means for digital and plaster models were considered as estimates. Seventeen relevant studies were included. Where reported, overall, the absolute mean differences between direct and indirect measurements on plaster and digital models were minor and clinically insignificant. Orthodontic measurements with digital models were comparable to those derived from plaster models. The use of digital models as an alternative to conventional measurement on plaster models may be recommended, although the evidence identified in this review is of variable quality. © 2010 John Wiley & Sons A/S.

  9. 12 CFR 560.110 - Most favored lender usury preemption.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Most favored lender usury preemption. 560.110... INVESTMENT Lending and Investment Provisions Applicable to all Savings Associations § 560.110 Most favored... permits its most favored lender to charge late fees, then a savings association located in that state...

  10. A Comparative Metroscope Model for Urban Information Flows

    Science.gov (United States)

    Fink, J. H.; Shandas, V.; Beaudoin, F.

    2011-12-01

    One of the most promising ways to achieve global sustainability goals of climate stabilization, poverty reduction, and biodiversity preservation is to make the world's cities more efficient, equitable, and healthful. While each city must follow a unique and somewhat idiosyncratic path toward these linked goals based on its history, geography, demography, and politics, movement in this direction can accelerate if cities can learn from each other more effectively. Such learning requires the identification of common characteristics and methodologies. We have created a framework for organizing and applying urban information flows, which we refer to as "Metroscopes." Metroscopes, which are analogous to the large instruments that have advanced the physical and life sciences, integrate six elements: data collection and input; classification through the use of metrics; data storage and retrieval; analytics and modeling; decision support including visualization and scenario generation; and assessment of the effectiveness of policy choices. Standards for each of these elements can be agreed upon by relevant urban science and policy sub-communities, and then can evolve as technologies and practices advance. We are implementing and calibrating this approach using data and relationships from Portland (OR), Phoenix (AZ) and London (UK). Elements that are being integrated include the Global City Indicators Facility at University of Toronto, the J-Earth database system and Decision Theater from Arizona State University, urban mobility analyses performed by the SENSEable City Lab at MIT, and Portland's Ecodistrict approach for urban management. Individual Metroscopes can be compared directly from one city to another, or with larger assemblages of cities like those being classified by ICLEI's STAR program, the Clinton Climate Initiative's C40, and Siemens Green Cities Index. This large-scale integration of urban data sets and approaches and its systematic comparison are key steps

  11. Comparing three models to estimate transpiration of desert shrubs

    Science.gov (United States)

    Xu, Shiqin; Yu, Zhongbo; Ji, Xibin; Sudicky, Edward A.

    2017-07-01

    The role of environmental variables in controlling transpiration (Ec) is an important, but not well-understood, aspect of transpiration modeling in arid desert regions. Taking three dominant desert shrubs, Haloxylon ammodendron, Nitraria tangutorum, and Calligonum mongolicum, as examples, we aim to evaluate the applicability of three transpiration models, i.e. the modified Jarvis-Stewart model (MJS), the simplified process-based model (BTA), and the artificial neural network model (ANN) at different temporal scales. The stem sap flow of each species was monitored using the stem heat balance approach over both the 2014 and 2015 main growing seasons. Concurrent environmental variables were also measured with an automatic weather station. The ANN model generally produced better simulations of Ec than the MJS and BTA models at both hourly and daily scales, indicating its advantage in solving complicated, nonlinear problems between transpiration rate and environmental driving forces. The solar radiation and vapor pressure deficit were crucial variables in modeling Ec for all three species. The performance of the MJS and ANN models was significantly improved by incorporating root-zone soil moisture. We also found that the difference between hourly and daily fitted parameter values was considerable for the MJS and BTA models. Therefore, these models need to be recalibrated when applied at different temporal scales. This study provides insights regarding the application and performance of current transpiration models in arid desert regions, and thus provides a deeper understanding of eco-hydrological processes and sustainable ecosystem management at the study site.
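
As an illustration of the ANN alternative to the MJS and BTA formulations, the sketch below fits a small feed-forward network to synthetic transpiration data driven by radiation, vapor pressure deficit, and root-zone soil moisture; the data-generating function and network size are assumptions, not the study's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic hourly drivers and response (not the field data from the study)
rng = np.random.default_rng(2)
n = 2000
rad = rng.uniform(0, 1000, n)          # solar radiation, W m-2
vpd = rng.uniform(0, 4, n)             # vapor pressure deficit, kPa
soil = rng.uniform(0.05, 0.30, n)      # root-zone soil moisture, m3 m-3
# Assumed nonlinear response used only to generate toy transpiration data
ec = 0.002 * rad * np.tanh(vpd) * (soil / 0.30) + rng.normal(0, 0.05, n)

X = np.column_stack([rad, vpd, soil])
X_tr, X_te, y_tr, y_te = train_test_split(X, ec, test_size=0.3, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
ann.fit(X_tr, y_tr)
print("held-out R^2:", round(ann.score(X_te, y_te), 3))
```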

  12. A Model of Comparative Ethics Education for Social Workers

    Science.gov (United States)

    Pugh, Greg L.

    2017-01-01

    Social work ethics education models have not effectively engaged social workers in practice in formal ethical reasoning processes, potentially allowing personal bias to affect ethical decisions. Using two of the primary ethical models from medicine, a new social work ethics model for education and practical application is proposed. The strengths…

  13. A comparative analysis of pricing models for enterprise cloud platforms

    CSIR Research Space (South Africa)

    Mvelase, P

    2013-09-01

    Full Text Available virtual enterprise (VE)-enabled cloud enterprise architecture for small medium and micro enterprises (SMMEs) against EC2 pricing model to prove that our pricing model is more suitable for small medium and micro enterprises (SMMEs). This model is based...

  14. Predicting the fate of biodiversity using species' distribution models: enhancing model comparability and repeatability.

    Directory of Open Access Journals (Sweden)

    Genoveva Rodríguez-Castañeda

    Full Text Available Species distribution modeling (SDM) is an increasingly important tool to predict the geographic distribution of species. Even though many problems associated with this method have been highlighted and solutions have been proposed, little has been done to increase comparability among studies. We reviewed recent publications applying SDMs and found that seventy nine percent failed to report methods that ensure comparability among studies, such as disclosing the maximum probability range produced by the models and reporting on the number of species occurrences used. We modeled six species of Falco from northern Europe and demonstrate that model results are altered by (1) spatial bias in species' occurrence data, (2) differences in the geographic extent of the environmental data, and (3) the effects of transformation of model output to presence/absence data when applying thresholds. Depending on the modeling decisions, forecasts of the future geographic distribution of Falco ranged from range contraction in 80% of the species to no net loss in any species, with the best model predicting no net loss of habitat in Northern Europe. The fact that predictions of range changes in response to climate change in published studies may be influenced by decisions in the modeling process seriously hampers the possibility of making sound management recommendations. Thus, each of the decisions made in generating SDMs should be reported and evaluated to ensure conclusions and policies are based on the biology and ecology of the species being modeled.

  15. Predicting the fate of biodiversity using species' distribution models: enhancing model comparability and repeatability.

    Science.gov (United States)

    Rodríguez-Castañeda, Genoveva; Hof, Anouschka R; Jansson, Roland; Harding, Larisa E

    2012-01-01

    Species distribution modeling (SDM) is an increasingly important tool to predict the geographic distribution of species. Even though many problems associated with this method have been highlighted and solutions have been proposed, little has been done to increase comparability among studies. We reviewed recent publications applying SDMs and found that seventy nine percent failed to report methods that ensure comparability among studies, such as disclosing the maximum probability range produced by the models and reporting on the number of species occurrences used. We modeled six species of Falco from northern Europe and demonstrate that model results are altered by (1) spatial bias in species' occurrence data, (2) differences in the geographic extent of the environmental data, and (3) the effects of transformation of model output to presence/absence data when applying thresholds. Depending on the modeling decisions, forecasts of the future geographic distribution of Falco ranged from range contraction in 80% of the species to no net loss in any species, with the best model predicting no net loss of habitat in Northern Europe. The fact that predictions of range changes in response to climate change in published studies may be influenced by decisions in the modeling process seriously hampers the possibility of making sound management recommendations. Thus, each of the decisions made in generating SDMs should be reported and evaluated to ensure conclusions and policies are based on the biology and ecology of the species being modeled.
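
Point (3), the sensitivity to thresholding, is easy to demonstrate. The sketch below applies two different presence/absence cut-offs to hypothetical suitability surfaces for current and future climate and shows how the predicted range change shifts with the threshold; all numbers are synthetic, not Falco model output.

```python
import numpy as np

def apply_threshold(prob, threshold):
    """Convert continuous habitat-suitability output to presence/absence (0/1)."""
    return (np.asarray(prob) >= threshold).astype(int)

# Hypothetical suitability values over 10,000 grid cells, current vs. future climate
rng = np.random.default_rng(3)
current = rng.beta(2, 5, 10_000)
future = np.clip(current - rng.normal(0.05, 0.05, current.size), 0, 1)

for thr in (0.2, 0.5):                       # the chosen cut-off changes the conclusion
    cur_cells = apply_threshold(current, thr).sum()
    fut_cells = apply_threshold(future, thr).sum()
    change = 100 * (fut_cells - cur_cells) / cur_cells
    print(f"threshold {thr}: predicted range change {change:+.1f} %")
```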

  16. A comparative study of aerosol deposition in different lung models.

    Science.gov (United States)

    Yu, C P; Diu, C K

    1982-01-01

    Theoretical calculations are made on total and regional deposition of inhaled particles in the human respiratory system based upon various current lung models. It is found that although total deposition does not vary appreciably from model to model, considerably large differences are present in regional deposition. Deposition profiles along the airways from different models also show very different patterns. These differences can be explained in terms of airway dimensions and the number of structures in different models. Extension to explain intersubject variability is also made.

  17. Lithium-ion battery models: a comparative study and a model-based powerline communication

    Science.gov (United States)

    Saidani, Fida; Hutter, Franz X.; Scurtu, Rares-George; Braunwarth, Wolfgang; Burghartz, Joachim N.

    2017-09-01

    In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with a detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.

  18. Lithium-ion battery models: a comparative study and a model-based powerline communication

    Directory of Open Access Journals (Sweden)

    F. Saidani

    2017-09-01

    Full Text Available In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with a detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
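
As a concrete instance of the equivalent-circuit (abstract) model class recommended above for battery management, the following sketch simulates a first-order Thevenin (1-RC) circuit for a 20 Ah cell; the resistances, capacitance, and open-circuit-voltage curve are assumed placeholder values, not parameters from the cited experiments.

```python
import numpy as np

# Assumed parameters for a first-order Thevenin (1-RC) equivalent circuit; not measured values.
R0, R1, C1 = 2e-3, 1.5e-3, 3000.0        # ohmic resistance, polarization R and C
capacity_Ah = 20.0                        # nominal capacity (as in the 20 Ah example)

def ocv(soc):
    """Crude linear open-circuit-voltage curve (assumption, not a measured curve)."""
    return 3.2 + 0.9 * soc

dt = 1.0                                  # time step, s
t = np.arange(0, 1800, dt)
current = np.where(t < 900, 20.0, 0.0)    # 1C discharge for 15 min, then rest

soc, v1 = 0.9, 0.0
v_term = []
a = np.exp(-dt / (R1 * C1))               # RC relaxation factor per step
for i in current:
    soc -= i * dt / (capacity_Ah * 3600.0)        # coulomb counting
    v1 = a * v1 + R1 * (1 - a) * i                # polarization voltage update
    v_term.append(ocv(soc) - R0 * i - v1)         # terminal voltage

print("voltage under load / after rest:", round(v_term[890], 3), "/", round(v_term[-1], 3))
```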

  19. Accuracy of laser-scanned models compared to plaster models and cone-beam computed tomography.

    Science.gov (United States)

    Kim, Jooseong; Heo, Giseon; Lagravère, Manuel O

    2014-05-01

    To compare the accuracy of measurements obtained from three-dimensional (3D) laser scans with those taken from cone-beam computed tomography (CBCT) scans and those obtained from plaster models. Eighteen different measurements, encompassing mesiodistal width of teeth and both maxillary and mandibular arch length and width, were selected using various landmarks. CBCT scans and plaster models were prepared from 60 patients. Plaster models were scanned using the Ortho Insight 3D laser scanner, and the selected landmarks were measured using its software. CBCT scans were imported and analyzed using the Avizo software, and the 26 landmarks corresponding to the selected measurements were located and recorded. The plaster models were also measured using a digital caliper. Descriptive statistics and the intraclass correlation coefficient (ICC) were used to analyze the data. The ICC results showed that the values obtained by the three different methods were highly correlated for all measurements, all having correlations > 0.808. When checking the differences between values and methods, the largest mean difference found was 0.59 mm ± 0.38 mm. In conclusion, plaster models, CBCT models, and laser-scanned models are three different diagnostic records, each with its own advantages and disadvantages. The present results showed that laser-scanned models are highly accurate compared with plaster models and CBCT scans. This gives general clinicians an alternative to consider, taking into account the advantages of laser-scanned models over plaster models and CBCT reconstructions.

  20. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Full Text Available Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry lab simulation practice. After a warm-up period of 2 h, tasks were performed on a high-fidelity non-VR model (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and global rating score (GRS). The participants rated a face validity questionnaire at the end of each session. Results: The GRS improved statistically at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model had more improvement in pass ratings on the second rotation but could not achieve statistical significance. Most of the realistic domains were rated higher for the VR model as compared with the non-VR model, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings irrespective of the VR status.

  1. Comparative Election Forecasting: Further Insights from Synthetic Models

    OpenAIRE

    Michael S. Lewis-Beck; Dassonneville, Ruth

    2015-01-01

    As an enterprise, election forecasting has spread and grown. Initial work began in the 1980s in the United States, eventually travelling to Western Europe, where it now finds an outlet in most of the region’s democracies. However, that work has been confined to traditional approaches – statistical modeling or poll-watching. We import a new approach, which we call synthetic modeling. These forecasts come from hybrid models blending structural knowledge with contemporary p...

  2. Retail value: A more favorable offer?

    Directory of Open Access Journals (Sweden)

    Iván Alirio Ramírez Rusinque

    2014-06-01

    Full Text Available The most important transformation introduced by Law 1150 of 2007 in public procurement was the determination of goodwill as the absolute best deal for State entities when concerned with the acquisition or provision of goods and services of uniform technical characteristics and common use. Therefore, an analysis from a legal and policy perspective is needed in order to establish whether the reform brought the concept of the absolute best deal, or whether the solution was detrimental by affecting the duty of objective selection; the principles of equality, effectiveness, transparency, planning, and financial equilibrium of the contract; and the constitutional principle of competition. Is the model of goodwill as a criterion for the absolute best deal legally effective in the field of public procurement?

  3. Mental Models about Seismic Effects: Students' Profile Based Comparative Analysis

    Science.gov (United States)

    Moutinho, Sara; Moura, Rui; Vasconcelos, Clara

    2016-01-01

    Nowadays, meaningful learning takes a central role in science education and is based on mental models that allow individuals to represent the real world. Thus, it is essential to analyse students' mental models, promoting an easier reconstruction of scientific knowledge by allowing them to become consistent with the curricular…

  4. Criteria for comparing economic impact models of tourism

    NARCIS (Netherlands)

    Klijs, J.; Heijman, W.J.M.; Korteweg Maris, D.; Bryon, J.

    2012-01-01

    There are substantial differences between models of the economic impacts of tourism. Not only do the nature and precision of results vary, but data demands, complexity and underlying assumptions also differ. Often, it is not clear whether the models chosen are appropriate for the specific situation

  6. Competition favors elk over beaver in a riparian willow ecosystem

    Science.gov (United States)

    Baker, B.W.; Peinetti, H.R.; Coughenour, M.C.; Johnson, T.L.

    2012-01-01

    Beaver (Castor spp.) conservation requires an understanding of their complex interactions with competing herbivores. Simulation modeling offers a controlled environment to examine long-term dynamics in ecosystems driven by uncontrollable variables. We used a new version of the SAVANNA ecosystem model to investigate beaver (C. canadensis) and elk (Cervus elaphus) competition for willow (Salix spp.). We initialized the model with field data from Rocky Mountain National Park, Colorado, USA, to simulate a 4-ha riparian ecosystem containing beaver, elk, and willow. We found beaver persisted indefinitely when elk density was ≤ 30 elk km(-2). The loss of tall willow preceded rapid beaver declines, thus willow condition may predict beaver population trajectory in natural environments. Beaver were able to persist with slightly higher elk densities if beaver alternated their use of foraging sites in a rest-rotation pattern rather than maintained continuous use. Thus, we found asymmetrical competition for willow strongly favored elk over beaver in a simulated montane ecosystem. Finally, we discuss application of the SAVANNA model and mechanisms of competition relative to beaver persistence as metapopulations, ecological resistance and alternative state models, and ecosystem regulation.

  7. Structures of Neural Correlation and How They Favor Coding

    Science.gov (United States)

    Franke, Felix; Fiscella, Michele; Sevelev, Maksim; Roska, Botond; Hierlemann, Andreas; da Silveira, Rava Azeredo

    2017-01-01

    The neural representation of information suffers from “noise”—the trial-to-trial variability in the response of neurons. The impact of correlated noise upon population coding has been debated, but a direct connection between theory and experiment remains tenuous. Here, we substantiate this connection and propose a refined theoretical picture. Using simultaneous recordings from a population of direction-selective retinal ganglion cells, we demonstrate that coding benefits from noise correlations. The effect is appreciable already in small populations, yet it is a collective phenomenon. Furthermore, the stimulus-dependent structure of correlation is key. We develop simple functional models that capture the stimulus-dependent statistics. We then use them to quantify the performance of population coding, which depends upon interplays of feature sensitivities and noise correlations in the population. Because favorable structures of correlation emerge robustly in circuits with noisy, nonlinear elements, they will arise and benefit coding beyond the confines of retina. PMID:26796692

  8. How realistic are flat-ramp-flat fault kinematic models? Comparing mechanical and kinematic models

    Science.gov (United States)

    Cruz, L.; Nevitt, J. M.; Hilley, G. E.; Seixas, G.

    2015-12-01

    Rock within the upper crust appears to deform according to elasto-plastic constitutive rules, but structural geologists often employ kinematic descriptions that prescribe particle motions irrespective of these physical properties. In this contribution, we examine the range of constitutive properties that are approximately implied by kinematic models by comparing predicted deformations between mechanical and kinematic models for identical fault geometric configurations. Specifically, we use the ABAQUS finite-element package to model a fault-bend-fold geometry using an elasto-plastic constitutive rule (the elastic component is linear and the plastic failure occurs according to a Mohr-Coulomb failure criterion). We varied physical properties in the mechanical model (i.e., Young's modulus, Poisson ratio, cohesion yield strength, internal friction angle, sliding friction angle) to determine the impact of each on the observed deformations, which were then compared to predictions of kinematic models parameterized with identical geometries. We found that a limited sub-set of physical properties were required to produce deformations that were similar to those predicted by the kinematic models. Specifically, mechanical models with low cohesion are required to allow the kink at the bottom of the flat-ramp geometry to remain stationary over time. Additionally, deformations produced by steep ramp geometries (30 degrees) are difficult to reconcile between the two types of models, while lower slope gradients better conform to the geometric assumptions. These physical properties may fall within the range of those observed in laboratory experiments, suggesting that particle motions predicted by kinematic models may provide an approximate representation of those produced by a physically consistent model under some circumstances.

  9. Lattice Boltzmann modeling of directional wetting: Comparing simulations to experiments

    NARCIS (Netherlands)

    Jansen, H.P.; Sotthewes, K.; Swigchem, van J.; Zandvliet, H.J.W.; Kooij, E.S.

    2013-01-01

    Lattice Boltzmann Modeling (LBM) simulations were performed on the dynamic behavior of liquid droplets on chemically striped patterned surfaces, ultimately with the aim to develop a predictive tool enabling reliable design of future experiments. The simulations accurately mimic experimental results,

  10. Comparative model accuracy of a data-fitted generalized Aw-Rascle-Zhang model

    CERN Document Server

    Fan, Shimao; Seibold, Benjamin

    2013-01-01

    The Aw-Rascle-Zhang (ARZ) model can be interpreted as a generalization of the Lighthill-Whitham-Richards (LWR) model, possessing a family of fundamental diagram curves, each of which represents a class of drivers with a different empty road velocity. A weakness of this approach is that different drivers possess vastly different densities at which traffic flow stagnates. This drawback can be overcome by modifying the pressure relation in the ARZ model, leading to the generalized Aw-Rascle-Zhang (GARZ) model. We present an approach to determine the parameter functions of the GARZ model from fundamental diagram measurement data. The predictive accuracy of the resulting data-fitted GARZ model is compared to other traffic models by means of a three-detector test setup, employing two types of data: vehicle trajectory data, and sensor data. This work also considers the extension of the ARZ and the GARZ models to models with a relaxation term, and conducts an investigation of the optimal relaxation time.
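
To illustrate the difference between a single fundamental diagram (LWR) and a family of curves indexed by the empty-road velocity (the ARZ/GARZ idea), the sketch below evaluates a Greenshields-type flux for several driver classes; the functional form and parameter values are illustrative assumptions, not the paper's data-fitted curves.

```python
import numpy as np

def greenshields_flux(rho, u_max, rho_max):
    """Greenshields fundamental diagram: flux Q(rho) = rho * u_max * (1 - rho / rho_max)."""
    return rho * u_max * (1.0 - rho / rho_max)

rho_max = 120.0                           # jam density, veh/km (assumed)
rho = np.linspace(0, rho_max, 7)          # sample densities

# In the LWR model every driver shares one curve; an ARZ-type family varies the
# empty-road velocity w across driver classes (values here are illustrative).
for w in (80.0, 100.0, 120.0):            # km/h
    q = greenshields_flux(rho, w, rho_max)
    print(f"w = {w:5.0f} km/h  ->  flux (veh/h): {np.round(q, 0)}")
```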

  11. The structure of common fears: comparing three different models.

    Science.gov (United States)

    de Jongh, Ad; Oosterink, Floor M D; Kieffer, Jacobien M; Hoogstraten, Johan; Aartman, Irene H A

    2011-01-01

    Previous studies showed discrepant findings regarding the factor structure of common fears. The purpose of the present study was to expand on these findings and contribute to the development of a descriptive framework for a fear classification. Using data from the Dutch general population (n = 961; 50.9% women), an exploratory factor analysis was performed to delineate the multidimensional structure of 11 common fears previously used in a factor analytic study by Fredrikson, Annas, Fischer, and Wik (1996). An independent sample (n = 998; 48.3% women) was used to confirm the newly derived model by means of confirmatory factor analysis. In addition, the model was tested against the DSM-IV-TR model and a model found earlier by Fredrikson et al. (1996). Although support was found for a 3-factor solution consisting of a blood-injection-injury factor, a situational-animal factor, and a height-related factor, confirmatory factor analysis showed that this 3-factor model and the DSM-IV-TR 4-factor model fitted the data equally well. The findings suggest that the structure of subclinical fears can be inferred from the DSM classification of phobia subtypes and that fears and phobias are two observable manifestations of a fear response along a continuum.

  12. A Comparative Study of Power Consumption Models for CPA Attack

    Directory of Open Access Journals (Sweden)

    Hassen Mestiri

    2013-03-01

    Full Text Available Power analysis attacks are types of side channel attacks that are based on analyzing the power consumption of cryptographic devices. Correlation power analysis (CPA) is a powerful and efficient cryptanalytic technique. It exploits the linear relation between the predicted power consumption and the real power consumption of cryptographic devices in order to recover the correct key. The predicted power consumption is determined by using an appropriate consumption model. Until now, only a few models have been proposed and used. In this paper, we describe the process of conducting the CPA attack against AES on the SASEBO-GII board. We present a comparison between the Hamming Distance model and the Switching Distance model in terms of the number of power traces needed to recover the correct key using these models. The global success rate reaches 100% at 11,100 power traces. The number of power traces needed to recover the correct key was decreased by 12.6% using a CPA attack with the Switching Distance model.
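
The core of a CPA attack is correlating measured power with a leakage model over all key-byte guesses. The sketch below uses a Hamming-weight leakage model on simulated traces, with a random permutation standing in for the AES S-box; it illustrates the correlation step only and does not reproduce the SASEBO-GII measurements or the Switching Distance model.

```python
import numpy as np

hw = np.array([bin(x).count("1") for x in range(256)])     # Hamming weights of all byte values
rng = np.random.default_rng(4)
sbox = rng.permutation(256)          # stand-in substitution box (NOT the real AES S-box)

# Simulate leaky traces for one key byte: power ~ HW(sbox[pt ^ key]) + noise
true_key, n_traces = 0x3C, 2000
plaintexts = rng.integers(0, 256, n_traces)
traces = hw[sbox[plaintexts ^ true_key]] + rng.normal(0, 1.0, n_traces)

# CPA: correlate measured power with the model prediction for every key guess
corr = np.empty(256)
for guess in range(256):
    predicted = hw[sbox[plaintexts ^ guess]]               # Hamming-weight power model
    corr[guess] = np.corrcoef(predicted, traces)[0, 1]

print("recovered key byte:", hex(int(np.argmax(np.abs(corr)))))
```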

  13. Comparative Analysis of Two Models of the Strouma River Ecosystem

    Directory of Open Access Journals (Sweden)

    Mitko Petrov

    2008-04-01

    Full Text Available A modified regression-analysis method for modelling the water quality of river ecosystems is offered. The method differs from conventional regression analysis in that the factors included in the regression dependence are functions of time. Two types of functions are tested, polynomial and periodic, and the investigations show that the periodic functions give better results. In addition, a model for the analysis of river water quality has been developed as a modified time-series analysis method. The model has been applied to assess the water pollution of the Strouma River. An assessment of the adequacy of the obtained models against statistical criteria (correlation coefficient, Fisher's criterion and relative error) shows that the models are adequate and can be used for modelling the water pollution of the Strouma River with respect to these indexes. The analysis of the river pollution shows that there was no material increase in the anthropogenic impact on the Bulgarian part of the Strouma River over the period from 2001 to 2004.

  14. Comparative flood damage model assessment: towards a European approach

    Directory of Open Access Journals (Sweden)

    B. Jongman

    2012-12-01

    Full Text Available There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are being used more and more as a basis for investment and planning decisions on an increasingly large scale, there is a need to reduce the uncertainties involved and develop a harmonised European approach, in particular with respect to the EU Flood Risks Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly, and that current methodologies for estimating infrastructural damage are not as well developed as methodologies for the estimation of damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e. depth–damage functions) and exposure (i.e. asset values), whereby the former has a larger effect than the latter. We conclude that care needs to be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while providing room for including necessary regional adjustments.
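
The vulnerability and exposure sensitivity described above can be illustrated with a minimal depth-damage calculation; the curve points and asset values below are invented for demonstration and are not taken from any of the seven assessed models.

```python
import numpy as np

# Illustrative depth-damage curve for residential buildings (fractions are made up).
depth_pts = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # inundation depth, m
damage_frac = np.array([0.0, 0.15, 0.30, 0.55, 0.85])   # fraction of asset value lost

def flood_damage(depths_m, asset_value_eur, depth_pts, damage_frac):
    """Apply a piecewise-linear depth-damage curve to flooded assets."""
    frac = np.interp(depths_m, depth_pts, damage_frac)
    return frac * asset_value_eur

# Two exposure assumptions (asset values) for the same set of flooded buildings
depths = np.array([0.3, 0.8, 1.6, 2.5])
for value in (150_000, 250_000):                         # EUR per building (assumed)
    total = flood_damage(depths, value, depth_pts, damage_frac).sum()
    print(f"asset value {value:,} EUR -> estimated damage {total:,.0f} EUR")
```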

  15. COMPARING FINANCIAL DISTRESS PREDICTION MODELS BEFORE AND DURING RECESSION

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2011-02-01

    Full Text Available The purpose of this paper is to design three separate financial distress prediction models that track the changes in the relative importance of financial ratios across three consecutive years. The models were based on financial data from 2000 privately-owned small and medium-sized enterprises in Croatia from 2006 to 2009, and were developed by means of logistic regression. Macroeconomic conditions as well as market dynamics changed over this period. Financial ratios that were less important in one period became more important in the next. The composition of the model starting in 2006 changed in the following years, which indicates which financial ratios become more important during an economic downturn. It also helps us to understand the behavior of small and medium-sized enterprises in the pre-recession and recession periods.

  16. Comparative bioenergetics modeling of two Lake Trout morphotypes

    Science.gov (United States)

    Kepler, Megan V.; Wagner, Tyler; Sweka, John A.

    2014-01-01

    Efforts to restore Lake Trout Salvelinus namaycush in the Laurentian Great Lakes have been hampered for decades by several factors, including overfishing and invasive species (e.g., parasitism by Sea Lampreys Petromyzon marinus and reproductive deficiencies associated with consumption of Alewives Alosa pseudoharengus). Restoration efforts are complicated by the presence of multiple body forms (i.e., morphotypes) of Lake Trout that differ in habitat utilization, prey consumption, lipid storage, and spawning preferences. Bioenergetics models constitute one tool that is used to help inform management and restoration decisions; however, bioenergetic differences among morphotypes have not been evaluated. The goal of this research was to investigate bioenergetic differences between two actively stocked morphotypes: lean and humper Lake Trout. We measured consumption and respiration rates across a wide range of temperatures (4–22°C) and size-classes (5–100 g) to develop bioenergetics models for juvenile Lake Trout. Bayesian estimation was used so that uncertainty could be propagated through final growth predictions. Differences between morphotypes were minimal, but when present, the differences were temperature and weight dependent. Basal respiration did not differ between morphotypes at any temperature or size-class. When growth and consumption differed between morphotypes, the differences were not consistent across the size ranges tested. Management scenarios utilizing the temperatures presently found in the Great Lakes (e.g., predicted growth at an average temperature of 11.7°C and 14.4°C during a 30-d period) demonstrated no difference in growth between the two morphotypes. Due to a lack of consistent differences between lean and humper Lake Trout, we developed a model that combined data from both morphotypes. The combined model yielded results similar to those of the morphotype-specific models, suggesting that accounting for morphotype differences may

  17. Comparing Multiple Discrepancies Theory to Affective Models of Subjective Wellbeing

    Science.gov (United States)

    Blore, Jed D.; Stokes, Mark A.; Mellor, David; Firth, Lucy; Cummins, Robert A.

    2011-01-01

    The Subjective Wellbeing (SWB) literature is replete with competing theories detailing the mechanisms underlying the construction and maintenance of SWB. The current study aimed to compare and contrast two of these approaches: multiple discrepancies theory (MDT) and an affective-cognitive theory of SWB. MDT posits SWB to be the result of perceived…

  18. Comparative pharmacodynamic modeling of desflurane, sevoflurane and isoflurane.

    NARCIS (Netherlands)

    Kreuer, S.; Bruhn, J.; Wilhelm, W.; Grundmann, U.; Rensing, H.; Ziegeler, S.

    2009-01-01

    BACKGROUND: We compared dose-response curves of the hypnotic effects of desflurane, sevoflurane and isoflurane. In addition, we analyzed the k(e0) values of the different anesthetics. The EEG parameters Bispectral index (BIS, Aspect Medical Systems, Natick, MA, version XP) and Narcotrend index (Moni

  19. Comparative homology modeling of human rhodopsin with several ...

    African Journals Online (AJOL)

    Yomi

    2012-01-05

    Jan 5, 2012 ... humble effort has been made to identify and remove the loopholes in the existing structure (1EDS) and model the 3D structure ... as seen in the cryo-electron microscopy studies, with the ... cannot be obtained or dissolved in large enough ... of non-bonded interaction between different atom types was done.

  20. Nature of Science and Models: Comparing Portuguese Prospective Teachers' Views

    Science.gov (United States)

    Torres, Joana; Vasconcelos, Clara

    2015-01-01

    Despite the relevance of nature of science and scientific models in science education, studies reveal that students do not possess adequate views regarding these topics. Bearing in mind that both teachers' views and knowledge strongly influence students' educational experiences, the main scope of this study was to evaluate Portuguese prospective…

  1. Comparing State SAT Scores Using a Mixture Modeling Approach

    Science.gov (United States)

    Kim, YoungKoung Rachel

    2009-01-01

    Presented at the national conference for AERA (American Educational Research Association) in April 2009. The large variability of SAT taker population across states makes state-by-state comparisons of the SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…

  3. Comparing the performance of species distribution models of

    NARCIS (Netherlands)

    Valle, M.; van Katwijk, M.M.; de Jong, D.J.; Bouma, T.; Schipper, A.M.; Chust, G.; Benito, B.M.; Garmendia, J.M.; Borja, A.

    2013-01-01

    Intertidal seagrasses show high variability in their extent and location, with local extinctions and (re-)colonizations being inherent in their population dynamics. Suitable habitats are identified usually using Species Distribution Models (SDM), based upon the overall distribution of the species;

  4. Comparative study and error analysis of digital elevation model interpolations

    Institute of Scientific and Technical Information of China (English)

    CHEN Ji-long; WU Wei; LIU Hong-bin

    2008-01-01

    Researchers in P.R. China commonly create triangulated irregular networks (TINs) from contours and then convert the TINs into digital elevation models (DEMs). However, DEMs produced by this method cannot precisely describe and simulate key hydrological features such as rivers and drainage borders. Taking a hilly region in southwestern China as the research area and using ArcGIS™ software, we analyzed the errors of different interpolation methods to obtain the distributions of the errors and the precision of the different algorithms, and to provide references for DEM production. The results show that the interpolation errors follow normal distributions and that large errors occur near terrain structure lines. Furthermore, the results show that the precision of a DEM interpolated with the Australian National University digital elevation model (ANUDEM) algorithm is higher than that of a DEM interpolated from a TIN. The DEM interpolated from a TIN is nevertheless acceptable for generating DEMs in the hilly region of southwestern China.
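
    The error analysis described in this record amounts to computing summary error statistics at check points and testing whether the residuals are normally distributed. A minimal Python sketch, with synthetic residuals standing in for the DEM errors, might look as follows.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Synthetic residuals: interpolated minus reference elevations at check points (m).
        residuals_m = rng.normal(loc=0.0, scale=1.8, size=300)

        rmse = float(np.sqrt(np.mean(residuals_m ** 2)))
        mean_error = float(np.mean(residuals_m))

        # Shapiro-Wilk test for the normality of the interpolation errors.
        w_stat, p_value = stats.shapiro(residuals_m)
        print(f"RMSE={rmse:.2f} m, ME={mean_error:.2f} m, normality p={p_value:.3f}")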

  5. Locating Pleistocene Refugia: Comparing Phylogeographic and Ecological Niche Model Predictions

    Science.gov (United States)

    2007-07-01

    research groups [21,42–46], support the idea that the bioclimatic variables used in our ENM predictions (see Materials and Methods) are of importance to the...calibrating the downscaled LGM climate data to actual observed climate conditions. ENMs were based on the 19 bioclimatic variables in the WorldClim...phylogenetics and bioclimatic modeling. Systematic Biology 55: 785–802. 34. Graham CH, Ron SR, Santos JC, Schneider CJ, Moritz C (2004) Integrating

  6. COMPARATIVE STUDY ON ACCOUNTING MODELS "CASH" AND "ACCRUAL"

    OpenAIRE

    Tatiana Danescu; Luminita Rus

    2013-01-01

    Accounting, as a source of information, can recognize economic transactions taking into account the time of payment or receipt thereof, or as soon as they occur. There are two basic models of accounting: accrual basis and cash basis. In the cash accounting method the transactions are recorded only when cash is received or paid, and no difference is made between the purchase of an asset and the payment of an expenditure - both are considered "payments". Accrual accounting achieves this d...

  7. Successful treatment of acute myelogenous leukemia with favorable cytogenetics by reduced-intensity stem cell transplantation.

    Science.gov (United States)

    Kondo, Takeshi; Yasumoto, Atsushi; Arita, Kotaro; Sugita, Jun-Ichi; Shigematsu, Akio; Okada, Kohei; Takahata, Mutsumi; Onozawa, Masahiro; Kahata, Kaoru; Takeda, Yukari; Obara, Masato; Yamamoto, Satoshi; Endo, Tomoyuki; Nishio, Mitsufumi; Sato, Norihiro; Tanaka, Junji; Hashino, Satoshi; Koike, Takao; Asaka, Masahiro; Imamura, Masahiro

    2010-03-01

    Acute myelogenous leukemia (AML) with favorable cytogenetics responds well to chemotherapy. If the leukemia relapses, allogeneic hematopoietic stem cell transplantation (allo-HSCT) is considered as a treatment option. Since the efficacy of reduced-intensity stem cell transplantation (RIST) for AML with favorable cytogenetics has not been established, we retrospectively analyzed the outcomes of allo-HSCT in AML patients according to cytogenetic risk. The outcome of allo-HSCT for AML patients with favorable cytogenetics appeared superior to that for AML patients with intermediate cytogenetics. In AML patients with favorable cytogenetics, the 3-year overall survival (OS) and relapse-free survival (RFS) rates were 88 and 76%, respectively, in the RIST group. Both the 3-year OS and RFS rates were 81% in the conventional stem cell transplantation (CST) group. The outcome of RIST for AML patients with favorable cytogenetics was comparable to that for patients who received CST, despite the more advanced age and greater organ dysfunction in the RIST group than in the CST group. None of the patients died within 90 days after RIST. Moreover, there was no relapse among patients with favorable cytogenetics who were in hematological remission prior to RIST. Thus, RIST for AML patients with favorable cytogenetics in remission is safe and effective.

  8. Comparing models for vesicant responses in skin cells

    Energy Technology Data Exchange (ETDEWEB)

    Mershon, M.M.; Rhoads, L.S.; Petrali, J.P.; Mills, K.R.; Kim, S.K.

    1993-05-13

    Vesicant challenges have been delivered to NHEK (normal human epidermal keratinocytes) and to artificial human epidermal tissues. Confluent NHEK, grown on plastic surfaces or gel-coated microporous membranes of Millicell CMR inserts, were challenged with vesicants diluted in cell culture medium. Testskin was provided on agarose nutrient gel as a cornified wafer of sufficient diameter to receive vesicant vapor from cups normally used to challenge animal skin. The stratum corneum of preproduction EpiDerm (PreEpiD) specimens was challenged with vesicant vapor from cups suspended inside of Millicells. Inverted phase contrast microscopy of NHEK on plastic revealed dose-related vesicant effects that could facilitate screening of antivesicants. Scanning electron microscopy (SEM) revealed vesicant effects in two distinctly different populations of NHEK on gel-coated inserts. SEM and transmission electron microscopy (TEM) of Testskin and PreEpiD disclosed structural differences between these models that became amplified in vesicant-challenged specimens. PreEpiD shows more promise than Testskin for screening of antivesicant topical skin protectants. However, both epidermal models lack the basal lamina that is needed for advanced antivesicant testing.

  9. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  10. Comparative analysis of calculation models of railway subgrade

    Directory of Open Access Journals (Sweden)

    I.O. Sviatko

    2013-08-01

    Full Text Available Purpose. In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the details of its behaviour under load. When calculating the interaction between the soil subgrade and the upper track structure, it is essential to determine the shear resistance parameters and the parameters governing the development of deep deformations in foundation soils. The aim is to find generalized numerical modelling methods for embankment foundation soils that cover not only the stress state of the foundation but also its deformed state. Methodology. Existing modern and classical methods for the numerical simulation of soil samples under static load were analysed. Findings. With traditional methods of analysing soil masses, limiting and qualitatively estimating subgrade deformations is possible only indirectly, by estimating stresses and comparing the obtained values with boundary values. Originality. A new computational model is proposed that applies not only the classical analysis of the soil subgrade stress state but also takes its deformed state into account. Practical value. The analysis showed that an accurate treatment of soil masses requires a generalized methodology for analysing the rolling stock - railway subgrade interaction that uses not only the classical stress-state approach but also accounts for the deformed state.

  11. Lattice Boltzmann modeling of directional wetting: Comparing simulations to experiments

    Science.gov (United States)

    Jansen, H. Patrick; Sotthewes, Kai; van Swigchem, Jeroen; Zandvliet, Harold J. W.; Kooij, E. Stefan

    2013-07-01

    Lattice Boltzmann Modeling (LBM) simulations were performed on the dynamic behavior of liquid droplets on chemically striped patterned surfaces, ultimately with the aim to develop a predictive tool enabling reliable design of future experiments. The simulations accurately mimic experimental results, which have shown that water droplets on such surfaces adopt an elongated shape due to anisotropic preferential spreading. Details of the contact line motion such as advancing of the contact line in the direction perpendicular to the stripes exhibit pronounced similarities in experiments and simulations. The opposite of spreading, i.e., evaporation of water droplets, leads to a characteristic receding motion first in the direction parallel to the stripes, while the contact line remains pinned perpendicular to the stripes. Only when the aspect ratio is close to unity does the contact line also start to recede in the perpendicular direction. Very similar behavior was observed in the LBM simulations. Finally, droplet movement can be induced by a gradient in surface wettability. LBM simulations show good semiquantitative agreement with experimental results of decanol droplets on a well-defined striped gradient, which move from high- to low-contact angle surfaces. Similarities and differences for all systems are described and discussed in terms of the predictive capabilities of LBM simulations to model directional wetting.

  12. Comparative modelling of chemical ordering in palladium-iridium nanoalloys

    Energy Technology Data Exchange (ETDEWEB)

    Davis, Jack B. A.; Johnston, Roy L., E-mail: r.l.johnston@bham.ac.uk [School of Chemistry, University of Birmingham, Birmingham B15 2TT (United Kingdom); Rubinovich, Leonid; Polak, Micha, E-mail: mpolak@bgu.ac.il [Department of Chemistry, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel)

    2014-12-14

    Chemical ordering in “magic-number” palladium-iridium nanoalloys has been studied by means of density functional theory (DFT) computations, and compared to those obtained by the Free Energy Concentration Expansion Method (FCEM) using derived coordination dependent bond energy variations (CBEV), and by the Birmingham Cluster Genetic Algorithm using the Gupta potential. Several compositions have been studied for 38- and 79-atom particles as well as the site preference for a single Ir dopant atom in the 201-atom truncated octahedron (TO). The 79- and 38-atom nanoalloy homotops predicted for the TO by the FCEM/CBEV are shown to be, respectively, the global minima and competitive low energy minima. Significant reordering of minima predicted by the Gupta potential is seen after reoptimisation at the DFT level.

  13. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter...... approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. RESULTS: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection...... probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely...

  14. Activity Modelling and Comparative Evaluation of WSN MAC Security Attacks

    DEFF Research Database (Denmark)

    Pawar, Pranav M.; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2012-01-01

    and initiate security attacks that disturb the normal functioning of the network in a severe manner. Such attacks affect the performance of the network by increasing the energy consumption, by reducing throughput and by inducing long delays. Of all existing WSN attacks, MAC layer attacks are considered....... The second aim of the paper is to simulate these attacks on hybrid MAC mechanisms, which shows the performance degradation of aWSN under the considered attacks. The modelling and implementation of the security attacks give an actual view of the network which can be useful in further investigating secure......Applications of wireless sensor networks (WSNs) are growing tremendously in the domains of habitat, tele-health, industry monitoring, vehicular networks, home automation and agriculture. This trend is a strong motivation for malicious users to increase their focus on WSNs and to develop...

  15. Comparing and modelling land use organization in cities

    Science.gov (United States)

    Lenormand, Maxime; Picornell, Miguel; Cantú-Ros, Oliva G.; Louail, Thomas; Herranz, Ricardo; Barthelemy, Marc; Frías-Martínez, Enrique; San Miguel, Maxi; Ramasco, José J.

    2015-01-01

    The advent of geolocated information and communication technologies opens the possibility of exploring how people use space in cities, bringing an important new tool for urban scientists and planners, especially for regions where data are scarce or not available. Here we apply a functional network approach to determine land use patterns from mobile phone records. The versatility of the method allows us to run a systematic comparison between Spanish cities of various sizes. The method detects four major land use types that correspond to different temporal patterns. The proportion of these types, their spatial organization and scaling show a strong similarity between all cities that breaks down at a very local scale, where land use mixing is specific to each urban area. Finally, we introduce a model inspired by Schelling's segregation, able to explain and reproduce these results with simple interaction rules between different land uses. PMID:27019730
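
    A simplified stand-in for this kind of analysis is to cluster the normalized temporal activity profiles of spatial cells; the Python sketch below uses k-means on synthetic hourly profiles purely to illustrate the idea of recovering a small number of temporal land use types (the paper itself uses a functional network approach, not k-means).

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)

        # Synthetic activity: 500 spatial cells x 24 hourly bins of phone activity.
        activity = rng.poisson(lam=20, size=(500, 24)).astype(float)
        profiles = activity / activity.sum(axis=1, keepdims=True)  # normalize to temporal signatures

        # Four clusters as a stand-in for the four land use types reported in the paper.
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)
        print(np.bincount(labels))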

  16. Comparing and modeling land use organization in cities

    CERN Document Server

    Lenormand, Maxime; Cantú-Ros, Oliva G; Louail, Thomas; Herranz, Ricardo; Barthelemy, Marc; Frías-Martínez, Enrique; Miguel, Maxi San; Ramasco, José J

    2015-01-01

    The advent of geolocated ICT technologies opens the possibility of exploring how people use space in cities, bringing an important new tool for urban scientists and planners, especially for regions where data is scarce or not available. Here we apply a functional network approach to determine land use patterns from mobile phone records. The versatility of the method allows us to run a systematic comparison between Spanish cities of various sizes. The method detects four major land use types that correspond to different temporal patterns. The proportion of these types, their spatial organization and scaling show a strong similarity between all cities that breaks down at a very local scale, where land use mixing is specific to each urban area. Finally, we introduce a model inspired by Schelling's segregation, able to explain and reproduce these results with simple interaction rules between different land uses.

  17. Comparative digital cartilage histology for human and common osteoarthritis models

    Directory of Open Access Journals (Sweden)

    Pedersen DR

    2013-02-01

    Full Text Available Douglas R Pedersen, Jessica E Goetz, Gail L Kurriger, James A Martin; Department of Orthopaedics and Rehabilitation, University of Iowa, Iowa City, IA, USA. Purpose: This study addresses the species-specific and site-specific details of weight-bearing articular cartilage zone depths and chondrocyte distributions among humans and common osteoarthritis (OA) animal models using contemporary digital imaging tools. Histological analysis is the gold-standard research tool for evaluating cartilage health, OA severity, and treatment efficacy. Historically, evaluations were made by expert analysts. However, state-of-the-art tools have been developed that allow for digitization of entire histological sections for computer-aided analysis. Large volumes of common digital cartilage metrics directly complement elucidation of trends in OA inducement and concomitant potential treatments. Materials and methods: Sixteen fresh human knees, 26 adult New Zealand rabbit stifles, and 104 bovine lateral plateaus were measured for four cartilage zones and the cell densities within each zone. Each knee was divided into four weight-bearing sites: the medial and lateral plateaus and femoral condyles. Results: One-way analysis of variance followed by pairwise multiple comparisons (Holm–Sidak method) at a significance level of 0.05 clearly confirmed the variability between cartilage depths at each site, between sites in the same species, and between weight-bearing articular cartilage definitions in different species. Conclusion: The present study clearly demonstrates multisite, multispecies differences in normal weight-bearing articular cartilage, which can be objectively quantified by a common digital histology imaging technique. The clear site-specific differences in normal cartilage must be taken into consideration when characterizing the pathoetiology of OA models. Together, these provide a path to consistently analyze the volume and variety of histologic slides necessarily generated
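
    The statistical workflow named in this record (one-way ANOVA followed by Holm-Sidak pairwise comparisons at a 0.05 significance level) can be reproduced with standard Python libraries, as in the sketch below; the zone-depth values are synthetic, not the study's measurements.

        import numpy as np
        from itertools import combinations
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(3)

        # Synthetic superficial-zone depths (um) for three species at one site.
        groups = {
            "human":  rng.normal(180, 25, 16),
            "rabbit": rng.normal(110, 20, 26),
            "bovine": rng.normal(150, 30, 30),
        }

        f_stat, p_anova = stats.f_oneway(*groups.values())

        # Pairwise t-tests corrected with the Holm-Sidak procedure.
        pairs = list(combinations(groups, 2))
        raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
        reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
        print(p_anova, dict(zip(pairs, adj_p)))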

  18. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Directory of Open Access Journals (Sweden)

    Göran Ståhl

    2016-02-01

    Full Text Available This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where models play a core role: model-assisted, model-based, and hybrid estimation. The first two are well known, whereas the third has only recently been introduced in forest surveys. Hybrid inference mixes design-based and model-based inference, since it relies on a probability sample of auxiliary data and a model predicting the target variable from the auxiliary data. We review studies on large-area forest surveys based on model-assisted, model-based, and hybrid estimation, and discuss advantages and disadvantages of the approaches. We conclude that no general recommendations can be made about whether model-assisted, model-based, or hybrid estimation should be preferred. The choice depends on the objective of the survey and the possibilities to acquire appropriate field and remotely sensed data. We also conclude that modelling approaches can only be successfully applied for estimating target variables such as growing stock volume or biomass, which are adequately related to commonly available remotely sensed data, and thus purely field-based surveys remain important for several important forest parameters. Keywords: Design-based inference, Model-assisted estimation, Model-based inference, Hybrid inference, National forest inventory, Remote sensing, Sampling
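
    For concreteness, the model-assisted (generalized regression) estimator contrasted in this record can be written as the sum of model predictions over the whole population plus a design-weighted correction based on the sampled prediction errors. The Python sketch below implements that general form for a simple random sample with synthetic data; it does not reproduce any specific inventory design reviewed in the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Population: remotely sensed auxiliary variable x known everywhere (N units).
        N = 10_000
        x_pop = rng.gamma(shape=2.0, scale=50.0, size=N)
        y_pop = 1.5 * x_pop + rng.normal(scale=20.0, size=N)   # true volume, unknown in practice

        # Field sample (simple random sample without replacement).
        n = 200
        sample = rng.choice(N, size=n, replace=False)
        x_s, y_s = x_pop[sample], y_pop[sample]

        # Working model fitted on the sample (simple linear regression).
        beta = np.polyfit(x_s, y_s, deg=1)
        yhat_pop = np.polyval(beta, x_pop)
        yhat_s = np.polyval(beta, x_s)

        # Model-assisted (GREG) total: sum of predictions + design-weighted residual correction.
        t_greg = yhat_pop.sum() + (N / n) * (y_s - yhat_s).sum()
        print(t_greg, y_pop.sum())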

  19. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    Science.gov (United States)

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images relative to plaster models for the assessment of mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were obtained from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical statistics along with Student's t-test were used to evaluate the data. The mean for Moyer's analysis in the left and right lower arch for CBCT and plaster models was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
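
    For reference, the Tanaka-Johnston method applied in this record predicts the combined mesiodistal width of the unerupted canine and premolars in one quadrant as half the summed width of the four mandibular incisors plus a constant, commonly quoted as 10.5 mm for the mandibular arch and 11.0 mm for the maxillary arch. A minimal Python sketch:

        def tanaka_johnston(sum_mandibular_incisors_mm, arch="mandibular"):
            """Predicted width (mm) of canine + premolars in one quadrant."""
            constant = 10.5 if arch == "mandibular" else 11.0
            return sum_mandibular_incisors_mm / 2.0 + constant

        # Example: four lower incisors summing to 23.0 mm.
        print(tanaka_johnston(23.0, "mandibular"), tanaka_johnston(23.0, "maxillary"))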

  20. A Comparative Analysis of Community Wind Power Development Models

    Energy Technology Data Exchange (ETDEWEB)

    Bolinger, Mark; Wiser, Ryan; Wind, Tom; Juhl, Dan; Grace, Robert; West, Peter

    2005-05-20

    For years, farmers in the United States have looked with envy on their European counterparts' ability to profitably farm the wind through ownership of distributed, utility-scale wind projects. Only within the past few years, however, has farmer- or community-owned windpower development become a reality in the United States. The primary hurdle to this type of development in the United States has been devising and implementing suitable business and legal structures that enable such projects to take advantage of tax-based federal incentives for windpower. This article discusses the limitations of such incentives in supporting farmer- or community-owned wind projects, describes four ownership structures that potentially overcome such limitations, and finally conducts a comparative financial analysis of those four structures, using as an example a hypothetical 1.5 MW farmer-owned project located in the state of Oregon. We find that material differences in the competitiveness of each structure do exist, but that choosing the best structure for a given project will largely depend on the conditions at hand, e.g., the ability of the farmer(s) to utilize tax credits, the preference for individual versus cooperative ownership, and the state and utility service territory in which the project will be located.

  1. Spheroid model study comparing the biocompatibility of Biodentine and MTA.

    Science.gov (United States)

    Pérard, Matthieu; Le Clerc, Justine; Watrin, Tanguy; Meary, Fleur; Pérez, Fabienne; Tricot-Doleux, Sylvie; Pellen-Mussi, Pascal

    2013-06-01

    The primary objective of this study was to assess the biological effects of a new dentine substitute based on Ca₃SiO₅ (Biodentine™) for use in pulp-capping treatment, on pseudo-odontoblastic (MDPC-23) and pulp (Od-21) cells. The secondary objective was to evaluate the effects of Biodentine and mineral trioxide aggregate (MTA) on gene expression in cultured spheroids. We used the acid phosphatase assay to compare the biocompatibility of Biodentine and MTA. Cell differentiation was investigated by RT-qPCR. We investigated the expression of genes involved in odontogenic differentiation (Runx2), matrix secretion (Col1a1, Spp1) and mineralisation (Alp). ANOVA and PLSD tests were used for data analysis. MDPC-23 cells cultured in the presence of MTA had higher levels of viability than those cultured in the presence of Biodentine and control cells on day 7 (P = 0.0065 and P = 0.0126, respectively). For Od-21 cells, proliferation rates on day 7 were significantly lower in the presence of Biodentine or MTA than for control (P MTA than in those cultured in the presence of Biodentine and in control cells. Biodentine and MTA may modify the proliferation of pulp cell lines. Their effects may fluctuate over time, depending on the cell line considered. The observed similarity between Biodentine and MTA validates the indication for direct pulp-capping claimed by the manufacturers.

  2. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    Science.gov (United States)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. Evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level result in different population level outcomes under different conditions. Extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes to find that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and sense-making of observed trends are of a different character. Builders notice rules through available blocks-based primitives, often bypassing their enactment while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre and posttests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both

  3. In silico Structural Prediction of E6 and E7 Proteins of Human Papillomavirus Strains by Comparative Modeling

    Directory of Open Access Journals (Sweden)

    Satish Kumar

    2012-07-01

    Full Text Available More than 200 different types of Human papillomavirus (HPV) have been identified, of which 40 are transmitted extensively through sexual contact and affect the genital tract. HPV strains have been etiologically linked to vaginal, vulvar, penile, anal, oral and cervical cancer (99.7% of cervical cancers), a result of mutations leading to cell transformation caused by interference of the E6 and E7 oncoproteins with the p53 and pRB tumor suppressor proteins, respectively, besides other cellular proteins. As structures of the E6 and E7 proteins are not available, a simultaneous structural analysis of the E6 and E7 proteins of 50 different HPV strains was carried out in detail for prediction and validation using bioinformatics tools. E6 and E7 protein sequences were retrieved in FASTA format from NCBI and their structures predicted by comparative modeling using the modeller9v6 software. Most of the HPV strains showed good stereochemistry results, with residues in the most favored regions, when subjected to PROCHECK analysis, and each protein was subsequently validated using the ProSA-web tool. Comparing and exploring the structural variations in these oncogenic proteins might help in genome-based drug and vaccine design.

  4. Pore water pressure variations in Subpermafrost groundwater : Numerical modeling compared with experimental modeling

    Science.gov (United States)

    Rivière, Agnès.; Goncalves, Julio; Jost, Anne; Font, Marianne

    2010-05-01

    Development and degradation of permafrost directly affect numerous hydrogeological processes such as the thermal regime, river-groundwater exchange, groundwater flow patterns and groundwater recharge (Michel, 1994). Groundwater in permafrost areas is subdivided into two zones, suprapermafrost and subpermafrost, separated by the permafrost itself. Because of the volumetric expansion of water upon freezing, and assuming ice lenses and frost heave do not form during freezing of a saturated aquifer, the progressive formation of permafrost leads to pressurization of the subpermafrost groundwater (Wang, 2006). Therefore, the disappearance or aggradation of permafrost modifies the confined or unconfined state of subpermafrost groundwater. Our study focuses on the changes in pore water pressure of subpermafrost groundwater that can appear during thawing and freezing of the soil. Numerical simulation allows some of these processes to be elucidated. Our numerical model accounts for phase changes in coupled heat transport and variably saturated flow involving cycles of freezing and thawing. The flow model combines a one-dimensional channel flow model based on the Manning-Strickler equation with a two-dimensional vertical groundwater flow model based on the Richards equation. The heat transport simulation uses a two-dimensional advection-diffusion model of heat transfer in porous media that accounts for the latent heat of the phase change of water during melting/freezing cycles. Changes in hydraulic conductivity and thermal conductivity are also considered by the model. The model was evaluated by comparing predictions with data from laboratory freezing experiments carried out at the Laboratory M2C (Université de Caen-Basse Normandie, CNRS, France). The device consisted of a Plexiglas box insulated on all sides except the top. Precipitation and ambient temperature are

  5. What favors the occurrence of subduction mega-earthquakes?

    Science.gov (United States)

    Brizzi, Silvia; Funiciello, Francesca; Corbi, Fabio; Sandri, Laura; van Zelst, Iris; Heuret, Arnauld; Piromallo, Claudia; van Dinther, Ylona

    2017-04-01

    Most mega-earthquakes (MEqs; Mw > 8.5) occur at shallow depths along the subduction thrust fault (STF). Neither the contribution of each subduction zone to the globally released seismic moment nor the maximum recorded magnitude MMax is homogeneous across subduction zones. Identifying the ingredients likely responsible for MEq nucleation has major implications for hazard assessment. In this work, we investigate the conditions favoring the occurrence of MEqs with a multi-disciplinary approach based on: i) multivariate statistics, ii) analogue modelling and iii) numerical modelling. Previous works have investigated the potential dependence between STF seismicity and various subduction zone parameters using simple regression models. Correlations are generally weak owing to the limited instrumental seismic record and to multi-parameter influences, which make forecasting the potential MMax rather difficult. To unravel the multi-parameter influence, we perform a multivariate statistical study (i.e., Pattern Recognition, PR) of the global database on convergent margins (Heuret et al., 2011), which includes seismological, geometrical, kinematic and physical parameters of 62 subduction segments. PR is based on the classification of objects (i.e., subduction segments) into different classes through the identification of possible repetitive patterns. Tests have been performed using different MMax datasets and combinations of inputs to indirectly test the stability of the identified patterns. Results show that the trench-parallel width of the subducting slab (Wtrench) and the sediment thickness at the trench (Tsed) are the most recurrent parameters associated with MEq occurrence. These features are largely consistent regardless of the MMax dataset and the combination of inputs used for the analysis. MEqs thus seem to be promoted by high Wtrench and Tsed, as their combination may favor extreme (i.e., on the order of thousands of km) trench-parallel rupture propagation. To tackle the

  6. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
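
    Both prediction routes discussed here are ultimately judged on how well the predicted flow duration curve matches the observed one, typically with the Nash-Sutcliffe efficiency. A short Python sketch of both calculations on synthetic daily flows is given below; the plotting-position convention used is one common choice, not necessarily the one adopted in the paper.

        import numpy as np

        def flow_duration_curve(flows):
            """Return exceedance probabilities and the corresponding sorted flows."""
            q = np.sort(np.asarray(flows))[::-1]
            exceedance = np.arange(1, len(q) + 1) / (len(q) + 1)  # Weibull plotting position
            return exceedance, q

        def nash_sutcliffe(obs, sim):
            obs, sim = np.asarray(obs), np.asarray(sim)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        rng = np.random.default_rng(5)
        obs = rng.lognormal(mean=2.0, sigma=0.8, size=365)      # synthetic observed daily flows
        sim = obs * rng.normal(1.0, 0.15, size=365)             # synthetic simulated flows

        p_obs, q_obs = flow_duration_curve(obs)
        p_sim, q_sim = flow_duration_curve(sim)
        print(nash_sutcliffe(q_obs, q_sim))                     # NSE computed on the FDC quantiles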

  7. MOC: Reducing Favorable Balance, 2007's Highest Priority

    Institute of Scientific and Technical Information of China (English)

    LIU Shuang

    2007-01-01

    When attending the National Working Conference on Commerce on January 15th, Mr. Bo Xilai, the Minister of Commerce, said that the highest priority for China's international trade in 2007 is to reduce the favorable balance.

  8. 18 CFR 706.303 - Gifts, entertainment, and favors.

    Science.gov (United States)

    2010-04-01

    ... solicit from a person having business with the Council anything of value as a gift, gratuity, loan, entertainment, or favor for himself or another person, particularly one with whom he has family, business,...

  9. MultiMetEval : Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models

    NARCIS (Netherlands)

    Zakrzewski, Piotr; Medema, Marnix H.; Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko; Fong, Stephen S.

    2012-01-01

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the co

  10. Development of an instrument to assess the impact of an enhanced experiential model on pharmacy students' learning opportunities, skills and attitudes: A retrospective comparative-experimentalist study

    Directory of Open Access Journals (Sweden)

    Collins John B

    2008-04-01

    Full Text Available Abstract Background Pharmacy schools across North America have been charged to ensure their students are adequately skilled in the principles and practices of pharmaceutical care. Despite this mandate, a large percentage of students experience insufficient opportunities to practice the activities, tasks and processes essential to pharmaceutical care. The objective of this retrospective study of pharmacy students was to: (1) as "proof of concept", test the overall educational impact of an enhanced advanced pharmacy practice experiential (APPE) model on student competencies; (2) develop an instrument to measure students' and preceptors' experiences; and (3) assess the psychometric properties of the instrument. Methods A comparative-experimental design, using student and preceptor surveys, was used to evaluate the impact of the enhanced community-based APPE over the traditional APPE model. The study was grounded in a 5-stage learning model: (1) an enhanced learning climate leads to (2) better utilization of learning opportunities, including (3) more frequent student/patient consultation, then to (4) improved skills acquisition, and thence to (5) more favorable attitudes toward pharmaceutical care practice. The intervention included a one-day preceptor workshop, a comprehensive on-site student orientation, and extending the experience from two four-week placements in different pharmacies to one eight-week placement in one pharmacy. Results The 35 student and 38 preceptor survey results favored the enhanced model, with students conducting many more patient consultations and reporting greater skills improvement. In addition, the student self-assessment suggested changes in attitudes favoring pharmaceutical care principles. Psychometric testing showed the instrument to be sensitive, valid and reliable in ascertaining differences between the enhanced and traditional arms. Conclusion The enhanced experiential model positively affects learning opportunities and competency

  11. Mechanical heterogeneity favors fragmentation of strained actin filaments.

    Science.gov (United States)

    De La Cruz, Enrique M; Martiel, Jean-Louis; Blanchoin, Laurent

    2015-05-05

    We present a general model of actin filament deformation and fragmentation in response to compressive forces. The elastic free energy density along filaments is determined by their shape and mechanical properties, which were modeled in terms of bending, twisting, and twist-bend coupling elasticities. The elastic energy stored in filament deformation (i.e., strain) tilts the fragmentation-annealing reaction free-energy profile to favor fragmentation. The energy gradient introduces a local shear force that accelerates filament intersubunit bond rupture. The severing protein, cofilin, renders filaments more compliant in bending and twisting. As a result, filaments that are partially decorated with cofilin are mechanically heterogeneous (i.e., nonuniform) and display asymmetric shape deformations and energy profiles distinct from mechanically homogenous (i.e., uniform), bare actin, or saturated cofilactin filaments. The local buckling strain depends on the relative size of the compliant segment as well as the bending and twisting rigidities of flanking regions. Filaments with a single bare/cofilin-decorated boundary localize energy and force adjacent to the boundary, within the compliant cofilactin segment. Filaments with small cofilin clusters were predicted to fragment within the compliant cofilactin rather than at boundaries. Neglecting contributions from twist-bend coupling elasticity underestimates the energy density and gradients along filaments, and thus the net effects of filament strain to fragmentation. Spatial confinement causes compliant cofilactin segments and filaments to adopt higher deformation modes and store more elastic energy, thereby promoting fragmentation. The theory and simulations presented here establish a quantitative relationship between actin filament fragmentation thermodynamics and elasticity, and reveal how local discontinuities in filament mechanical properties introduced by regulatory proteins can modulate both the severing efficiency

  12. Modeling Nonlinear Power Amplifiers in OFDM Systems from Subsampled Data: A Comparative Study Using Real Measurements

    Directory of Open Access Journals (Sweden)

    Santamaría Ignacio

    2003-01-01

    Full Text Available A comparative study among several nonlinear high-power amplifier (HPA) models using real measurements is carried out. The analysis is focused on specific models for wideband OFDM signals, which are known to be very sensitive to nonlinear distortion. Moreover, unlike conventional techniques, which typically use a single-tone test signal and power measurements, in this study the models are fitted using subsampled time-domain data. The in-band and out-of-band (spectral regrowth) performances of the following models are evaluated and compared: Saleh's model, envelope polynomial model (EPM), Volterra model, the multilayer perceptron (MLP) model, and the smoothed piecewise-linear (SPWL) model. The study shows that the SPWL model provides the best in-band characterization of the HPA. On the other hand, the Volterra model provides a good trade-off between model complexity (number of parameters) and performance.
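
    Of the models compared in this record, Saleh's is the simplest to state: the AM/AM and AM/PM characteristics are two-parameter rational functions of the input envelope. The Python sketch below applies a Saleh-type nonlinearity to a toy multicarrier signal; the parameter values are commonly quoted textbook examples for a travelling-wave-tube amplifier, not the coefficients fitted to the measured HPA in this study.

        import numpy as np

        def saleh_hpa(x, alpha_a=2.1587, beta_a=1.1517, alpha_p=4.0033, beta_p=9.1040):
            """Apply a Saleh-model nonlinearity to a complex baseband signal x."""
            r = np.abs(x)
            theta = np.angle(x)
            amp_out = alpha_a * r / (1.0 + beta_a * r ** 2)          # AM/AM conversion
            phase_shift = alpha_p * r ** 2 / (1.0 + beta_p * r ** 2) # AM/PM conversion (radians)
            return amp_out * np.exp(1j * (theta + phase_shift))

        # Example: distort a toy multicarrier (OFDM-like) symbol.
        rng = np.random.default_rng(6)
        symbols = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
        ofdm_like = np.fft.ifft(symbols) * np.sqrt(256)
        print(np.max(np.abs(saleh_hpa(0.5 * ofdm_like))))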

  13. Favorability for uranium in tertiary sedimentary rocks, southwestern Montana

    Energy Technology Data Exchange (ETDEWEB)

    Wopat, M A; Curry, W E; Robins, J W; Marjaniemi, D K

    1977-10-01

    Tertiary sedimentary rocks in the basins of southwestern Montana were studied to determine their favorability for potential uranium resources. Uranium in the Tertiary sedimentary rocks was probably derived from the Boulder batholith and from silicic volcanic material. The batholith contains numerous uranium occurrences and is the most favorable plutonic source for uranium in the study area. Subjective favorability categories of good, moderate, and poor, based on the number and type of favorable criteria present, were used to classify the rock sequences studied. Rocks judged to have good favorability for uranium deposits are (1) Eocene and Oligocene strata and undifferentiated Tertiary rocks in the western Three Forks basin and (2) Oligocene rocks in the Helena basin. Rocks having moderate favorability consist of (1) Eocene and Oligocene strata in the Jefferson River, Beaverhead River, and lower Ruby River basins, (2) Oligocene rocks in the Townsend and Clarkston basins, (3) Miocene and Pliocene rocks in the Upper Ruby River basin, and (4) all Tertiary sedimentary formations in the eastern Three Forks basin, and in the Grasshopper Creek, Horse Prairie, Medicine Lodge Creek, Big Sheep Creek, Deer Lodge, Big Hole River, and Bull Creek basins. The following have poor favorability: (1) the Beaverhead Conglomerate in the Red Rock and Centennial basins, (2) Eocene and Oligocene rocks in the Upper Ruby River basin, (3) Miocene and Pliocene rocks in the Townsend, Clarkston, Smith River, and Divide Creek basins, (4) Miocene through Pleistocene rocks in the Jefferson River, Beaverhead River, and Lower Ruby River basins, and (5) all Tertiary sedimentary rocks in the Boulder River, Sage Creek, Muddy Creek, Madison River, Flint Creek, Gold Creek, and Bitterroot basins.

  14. MODBASE: a database of annotated comparative protein structure models and associated resources

    OpenAIRE

    Pieper, Ursula; Eswar, Narayanan; Davis, Fred P.; Braberg, Hannes; Madhusudhan, M. S.; Rossi, Andrea; Marti-Renom, Marc; Karchin, Rachel; Webb, Ben M.; Eramian, David; Shen, Min-Yi; Kelly, Libusha; Melo, Francisco; Sali, Andrej

    2005-01-01

    MODBASE () is a database of annotated comparative protein structure models for all available protein sequences that can be matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on MODELLER for fold assignment, sequence–structure alignment, model building and model assessment (). MODBASE is updated regularly to reflect the growth in protein sequence and structure databases, and improvements in the software for calculat...

  15. Jackson System Development, Entity-relationship Analysis and Data Flow Models: a comparative study

    NARCIS (Netherlands)

    Wieringa, R.J.

    1994-01-01

    This report compares JSD with ER modeling and data flow modeling. It is shown that JSD can be combined with ER modeling and that the result is a richer method than either of the two. The resulting method can serve as a basis for a practical object-oriented modeling method and has some resemblance to

  16. A Comparative Structural Equation Modeling Investigation of the Relationships among Teaching, Cognitive and Social Presence

    Science.gov (United States)

    Kozan, Kadir

    2016-01-01

    The present study investigated the relationships among teaching, cognitive, and social presence through several structural equation models to see which model would better fit the data. To this end, the present study employed and compared several different structural equation models because different models could fit the data equally well. Among…

  17. Jackson System Development, Entity-relationship Analysis and Data Flow Models: a comparative study

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1994-01-01

    This report compares JSD with ER modeling and data flow modeling. It is shown that JSD can be combined with ER modeling and that the result is a richer method than either of the two. The resulting method can serve as a basis for a practical object-oriented modeling method and has some resemblance to

  18. Comparing of four IRT models when analyzing two tests for inductive reasoning

    NARCIS (Netherlands)

    de Koning, E.; Sijtsma, K.; Hamers, J.H.M.

    2002-01-01

    This article discusses the use of the nonparametric IRT Mokken models of monotone homogeneity and double monotonicity and the parametric Rasch and Verhelst models for the analysis of binary test data. First, the four IRT models are discussed and compared at the theoretical level, and for each model,

  19. A Comparative Study of Neural Networks and Fuzzy Systems in Modeling of a Nonlinear Dynamic System

    Directory of Open Access Journals (Sweden)

    Metin Demirtas

    2011-07-01

    Full Text Available The aim of this paper is to compare neural network and fuzzy modeling approaches on a nonlinear system. We have taken Permanent Magnet Brushless Direct Current (PMBDC) motor data and have generated models using both approaches. The predictive performance of both methods was compared on the data set for the model configurations. The paper describes the results of these tests and discusses the effects of changing model parameters on predictive and practical performance. Model sensitivity was also used to compare the two methods.

  20. Transport of solids in protoplanetary disks: Comparing meteorites and astrophysical models

    CERN Document Server

    Jacquet, Emmanuel

    2014-01-01

    We review models of chondrite component transport in the gaseous protoplanetary disk. Refractory inclusions were likely transported by turbulent diffusion and possible early disk expansion, and required low turbulence for their subsequent preservation in the disk, possibly in a dead zone. Chondrules were produced locally but did not necessarily accrete shortly after formation. Water may have been enhanced in the inner disk because of inward drift of solids from further out, but likely not by more than a factor of a few. Incomplete condensation in chondrites may be due to slow reaction kinetics during temperature decrease. While carbonaceous chondrite compositions might be reproduced in a "two-component" picture (Anders 1964), such components would not correspond to simple petrographic constituents, although part of the refractory element fractionations in chondrites may be due to the inward drift of refractory inclusions. Overall, considerations of chondrite component transport alone favor an earlier format...

  1. Applying fuzzy logic to comparative distribution modelling: a case study with two sympatric amphibians.

    Science.gov (United States)

    Barbosa, A Márcia; Real, Raimundo

    2012-01-01

    We modelled the distributions of two toads (Bufo bufo and Epidalea calamita) in the Iberian Peninsula using the favourability function, which makes predictions directly comparable for different species and allows fuzzy logic operations to relate different models. The fuzzy intersection between individual models, representing favourability for the presence of both species simultaneously, was compared with another favourability model built on the presences shared by both species. The fuzzy union between individual models, representing favourability for the presence of any of the two species, was compared with another favourability model based on the presences of either or both of them. The fuzzy intersections between favourability for each species and the complementary of favourability for the other (corresponding to the logical operation "A and not B") were compared with models of exclusive presence of one species versus the exclusive presence of the other. The results of modelling combined species data were highly similar to those of fuzzy logic operations between individual models, proving fuzzy logic and the favourability function valuable for comparative distribution modelling. We highlight several advantages of fuzzy logic over other forms of combining distribution models, including the possibility to combine multiple species models for management and conservation planning.
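
    The fuzzy operations described in this record reduce to element-wise minima, maxima and complements of the two favourability surfaces. A short Python sketch with synthetic favourability values illustrates the three combinations (intersection, union, and "A and not B").

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic favourability values (0-1) for two species over the same grid cells.
        f_bufo = rng.beta(2, 2, size=1000)
        f_epidalea = rng.beta(2, 5, size=1000)

        both = np.minimum(f_bufo, f_epidalea)                    # fuzzy intersection: favourable for both
        either = np.maximum(f_bufo, f_epidalea)                  # fuzzy union: favourable for either
        bufo_not_epidalea = np.minimum(f_bufo, 1 - f_epidalea)   # "A and not B"

        print(both.mean(), either.mean(), bufo_not_epidalea.mean())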

  2. Comparative Studies of Clustering Techniques for Real-Time Dynamic Model Reduction

    CERN Document Server

    Hogan, Emilie; Halappanavar, Mahantesh; Huang, Zhenyu; Lin, Guang; Lu, Shuai; Wang, Shaobu

    2015-01-01

    Dynamic model reduction in power systems is necessary for improving computational efficiency. Traditional model reduction using linearized models or offline analysis would not be adequate to capture power system dynamic behaviors, especially as the new mix of intermittent generation and intelligent consumption makes the power system more dynamic and non-linear. Real-time dynamic model reduction emerges as an important need. This paper explores the use of clustering techniques to analyze real-time phasor measurements to determine generator groups and representative generators for dynamic model reduction. Two clustering techniques -- graph clustering and evolutionary clustering -- are studied in this paper. Various implementations of these techniques are compared, and also compared with a previously developed Singular Value Decomposition (SVD)-based dynamic model reduction approach. The various methods exhibit different levels of accuracy when comparing the reduced model simulation against the original model. But some ...

  3. Comparative study of model prediction of diffuse nutrient losses in response to changes in agricultural practices

    NARCIS (Netherlands)

    Vagstad, N.; French, H.K.; Andersen, H.E.; Groenendijk, P.; Siderius, C.

    2009-01-01

    This article presents a comparative study of modelled changes in nutrient losses from two European catchments caused by modifications in agricultural practices. The purpose was not to compare the actual models used, but rather to assess the uncertainties a manager may be faced with after receiving d

  4. A Comparative Study of the Jerome Model and the Horace Model

    Institute of Scientific and Technical Information of China (English)

    李子英

    2014-01-01

    For many years, the two important translation models, the Jerome model and the Horace model, have been widely studied. Consisting of four parts, this paper focuses on making a comparison between the two translation models.

  5. Frequency Affects Object Relative Clause Processing: Some Evidence in Favor of Usage-Based Accounts

    Science.gov (United States)

    Reali, Florencia

    2014-01-01

    The processing difficulty of nested grammatical structure has been explained by different psycholinguistic theories. Here I provide corpus and behavioral evidence in favor of usage-based models, focusing on the case of object relative clauses in Spanish as a first language. A corpus analysis of spoken Spanish reveals that, as in English, the…

  6. Using data assimilation to compare models of Mars and Venus atmospheres with observations

    Science.gov (United States)

    Navarro, Thomas; Forget, Francois; Millour, Ehouarn

    2016-10-01

    Data assimilation is a technique that optimally reconstructs a best estimate of the atmospheric state by combining observations and an a priori provided by a numerical model. The aim of data assimilation is to extrapolate in space and time observations of the atmosphere by means of a model, in order to recover the state of the atmosphere as completely and as accurately as possible. In this work, we employ a state-of-the-art Martian Global Climate Model to assimilate vertical profiles of atmospheric temperature, airborne dust, and water ice clouds retrieved from observations of the Mars Climate Sounder onboard the Mars Reconnaissance Orbiter. The assimilation is carried out using an Ensemble Kalman Filter technique, which maps covariances between model variables. Therefore, observations of one variable (e.g. temperature) can be used to estimate other, unobserved variables (e.g. winds), using covariances constructed from an ensemble of model simulations whose initial states differ slightly. Using this method, one can estimate dust from temperature observations only, confirming the presence of detached layers of dust in the atmosphere from their thermal signature. The joint assimilation of temperature, dust, and water ice clouds then shows that the performance of the assimilation is limited by model biases, such as an incorrect phasing of the thermal tide and observed dust diurnal variations that the model cannot explain. Dust estimation nevertheless makes the atmosphere predictable up to around ten days in the most favorable cases, a great improvement over previous studies. These results strongly suggest that future developments for an improved assimilation should also assimilate model parameters, such as those used in the representation of parameterized atmospheric gravity waves. Also, in the light of the recent global observations of the Venusian atmosphere from the Akatsuki spacecraft, the case for the first-ever assimilation of Venus will be made.
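
    The covariance-based update at the heart of an Ensemble Kalman Filter can be sketched compactly. The toy implementation below (a stochastic EnKF with a linear observation operator; array shapes and names are assumptions, not the actual assimilation code used in the study) shows how observing one variable corrects unobserved ones through ensemble cross-covariances:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_var):
    """Stochastic ensemble Kalman filter update (toy sketch).

    ensemble: (n_members, n_state) array of model states
    obs: (n_obs,) observation vector
    obs_operator: (n_obs, n_state) linear observation operator H
    obs_err_var: scalar observation error variance
    """
    n_members, _ = ensemble.shape
    x_mean = ensemble.mean(axis=0)
    X = ensemble - x_mean                         # state perturbations
    HX = ensemble @ obs_operator.T                # ensemble mapped to observation space
    HXp = HX - HX.mean(axis=0)
    Pxy = X.T @ HXp / (n_members - 1)             # cross-covariance state/observation
    Pyy = HXp.T @ HXp / (n_members - 1) + obs_err_var * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    rng = np.random.default_rng(0)
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=(n_members, len(obs)))
    return ensemble + (obs_pert - HX) @ K.T
```

    In the paper's setting the temperature profiles play the role of the observations, while winds, dust and ice are among the unobserved state variables corrected through the cross-covariance term.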

  7. Material Models Used to Predict Spring-in of Composite Elements: a Comparative Study

    Science.gov (United States)

    Galińska, Anna

    2017-02-01

    Several approaches to modelling the process-induced deformations of composite parts have been developed so far. The most universal and most frequently used approach is FEM modelling. Within the scope of FEM modelling, several material models have been used to describe the composite behaviour. In the present work, two of the most popular material models, elastic and CHILE (cure hardening instantaneous linear elastic), are used to model the spring-in deformations of composite specimens and a structure fragment. The elastic model is more efficient, whereas the CHILE model is considered more accurate. The results of the models are compared with each other and with the measured deformations of the real composite parts. The comparison shows that both models predict the deformations reasonably well and that there is little difference between their results. This leads to the conclusion that the use of the simpler elastic model is a valid engineering practice.

  8. Functional properties of soybean nodulin 26 from a comparative three-dimensional model.

    Science.gov (United States)

    Biswas, Sampa

    2004-01-30

    A model of the nodulin 26 channel protein has been constructed based on comparative modeling and molecular dynamics simulations. Structural features of the protein indicate a selectivity filter that differs from those of the known structures of Escherichia coli glycerol facilitator and mammalian aquaporin 1. The model structure also reveals important roles of Ser207 and Phe96 in ligand binding and transport.

  9. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Science.gov (United States)

    Daminov, Ildar; Tarasova, Ekaterina; Andreeva, Tatyana; Avazov, Artur

    2016-02-01

    This paper presents a comparison of smart meter deployment business models to determine the most suitable option for providing smart meter deployment. The authors consider three main business models: the distribution grid company, the energy supplier (energosbyt), and the metering company. The goal of the article is to compare the business models of power companies for a massive smart metering roll-out in the power system of the Russian Federation.

  10. Favorable Policy for Foreign Investors in Dagang Oilfield

    Institute of Scientific and Technical Information of China (English)

    Liu Mingfa

    1995-01-01

    Apart from its wide range of good products, advanced techniques, convenient communications, and electric power and water supply in Dagang Oilfield, foreign investors who invest in the New Industrial Area of Dagang Oilfield will enjoy the same favourable policies as the Tianjin Economic Development Zone. The main favourable policies are as follows:

  11. Comparing Supply-Side Specifications in Models of Global Agriculture and the Food System

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, Sherman; van Meijl, Hans; Willenbockel, Dirk; Valin, Hugo; Fujimori, Shinichiro; Masui, Toshihiko; Sands, Ronald; Wise, Marshall A.; Calvin, Katherine V.; Havlik, Petr; Mason d' Croz, Daniel; Tabeau, Andrzej; Kavallari, Aikaterini; Schmitz, Christoph; Dietrich, Jan P.; von Lampe, Martin

    2014-01-01

    This paper compares the theoretical specification of production and technical change across the partial equilibrium (PE) and computable general equilibrium (CGE) models of the global agricultural and food system included in the AgMIP model comparison study. The two modeling approaches have different theoretical underpinnings concerning the scope of economic activity they capture and how they represent technology and the behavior of supply and demand in markets. This paper focuses on their different specifications of technology and supply behavior, comparing their theoretical and empirical treatments. While the models differ widely in their specifications of technology, both within and between the PE and CGE classes of models, we find that the theoretical responsiveness of supply to changes in prices can be similar, depending on parameter choices that define the behavior of supply functions over the domain of applicability defined by the common scenarios used in the AgMIP comparisons. In particular, we compare the theoretical specification of supply in CGE models with neoclassical production functions and PE models that focus on land and crop yields in agriculture. In practice, however, comparability of results given parameter choices is an empirical question, and the models differ in their sensitivity to variations in specification. To illustrate the issues, sensitivity analysis is done with one global CGE model, MAGNET, to indicate how the results vary with different specification of technical change, and how they compare with the results from PE models.

  12. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    Science.gov (United States)

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best-suited freely available software for protein modelling, using a few sample proteins. The proteins used ranged from small to large in size and had available crystal structures for benchmarking. Key servers such as Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-v2, and ModWeb were used for the comparison and model generation. Benchmarking was done for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the most suitable software. The parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model build good models of both small and large proteins compared with the other screened software. The other programs were also good but often failed to provide full-length, properly folded structures.

  13. Comparative analysis of mathematical models of the matrix photodetector used in digital holography

    Science.gov (United States)

    Grebenyuk, K. A.

    2017-08-01

    It is established that, in modern works on digital holography, three fundamentally different mathematical models of the matrix photodetector are used. A comparative analysis of these models, including analysis of each model's formula and test calculations, has been conducted. The possibility of using these models to account for the influence of the geometrical parameters of a matrix photodetector on the properties of recorded digital holograms is considered.

  14. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used, the Virkler data (aluminium alloy) and data...... that the FMF-model gives adequate description of the empirical data using model parameters characteristic of the material....

  15. Comparing clouds and their seasonal variations in 10 atmospheric general circulation models with satellite measurements

    OpenAIRE

    Zhang, M.; Lin, W.; Klein, S.; J. Bacmeister; Bony, S.; Cederwall, R.; Del Genio, A; Hack, J.; Loeb, N.; Lohmann, U.; P. Minnis; Musat, I.; Pincus, R; Stier, P.; Suarez, M.

    2005-01-01

    To assess the current status of climate models in simulating clouds, basic cloud climatologies from ten atmospheric general circulation models are compared with satellite measurements from the International Satellite Cloud Climatology Project (ISCCP) and the Clouds and Earth's Radiant Energy System (CERES) program. An ISCCP simulator is employed in all models to facilitate the comparison. Models simulated a four-fold difference in high-top clouds. There are also, however, large uncertainties ...

  16. Comparative study of PEM fuel cell models for integration in propulsion systems of urban public transport

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, P.; Fernandez, L.M.; Garcia, C.A. [Department of Electrical Engineering, University of Cadiz, EPS Algeciras, Avda. Ramon Puyol, s/n. 11202 Algeciras (Cadiz) (Spain); Jurado, F. [Department of Electrical Engineering, University of Jaen, EPS Linares, C/ Alfonso X, n 28. 23700 Linares (Jaen) (Spain)

    2010-12-15

    Steady state and dynamic simulations are performed in order to compare the models. Considering the external response of the FC system integrated in the tramway hybrid system, both reduced models show similar results with an important reduction of computation time with respect to the complete model. However, reduced model 1 shows better results than reduced model 2 when representing the internal behaviour of the FC system, so this model is considered the most appropriate for propulsion system applications. (Copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  17. Jackson System Development, Entity-relationship Analysis and Data Flow Models: a comparative study

    OpenAIRE

    Wieringa, R.J.

    1994-01-01

    This report compares JSD with ER modeling and data flow modeling. It is shown that JSD can be combined with ER modeling and that the result is a richer method than either of the two. The resulting method can serve as a basis for a practical object-oriented modeling method and has some resemblance to parts of well-known methods, like OMT. It is also argued that JSD and data flow modeling rest on opposite philosophies and cannot be combined in one modeling effort. This is illustrated by transfor...

  18. Comparing predictive validity of four ballistic swing phase models of human walking.

    Science.gov (United States)

    Selles, R W; Bussmann, J B; Wagenaar, R C; Stam, H J

    2001-09-01

    It is unclear to what extent ballistic walking models can be used to qualitatively predict the swing phase at comfortable walking speed. Different study findings regarding the accuracy of the predictions of the swing phase kinematics may have been caused by differences in (1) kinematic input, (2) model characteristics (e.g. the number of segments), and (3) evaluation criteria. In the present study, the predictive validity of four ballistic swing phase models was evaluated and compared, that is, (1) the ballistic walking model as originally introduced by Mochon and McMahon, (2) an extended version of this model in which heel-off of the stance leg is added, (3) a double pendulum model, consisting of a two-segment swing leg with a prescribed hip trajectory, and (4) a shank pendulum model consisting of a shank and rigidly attached foot with a prescribed knee trajectory. The predictive validity was evaluated by comparing the outcome of the model simulations with experimentally derived swing phase kinematics of six healthy subjects. In all models, statistically significant differences were found between model output and experimental data. All models underestimated swing time and step length. In addition, statistically significant differences were found between the output of the different models. The present study shows that although qualitative similarities exist between the ballistic models and normal gait at comfortable walking speed, these models cannot adequately predict swing phase kinematics.
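
    To make the ballistic idea concrete in its simplest form, the sketch below integrates a single passive pendulum, loosely in the spirit of the shank pendulum model (segment length and initial conditions are assumed; the authors' models additionally prescribe hip or knee trajectories):

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 0.45          # gravity (m/s^2) and an assumed shank+foot pendulum length (m)

def pendulum(t, y):
    theta, omega = y       # segment angle from vertical and angular velocity
    return [omega, -(g / L) * np.sin(theta)]

# Assumed initial flexion angle (rad) and zero angular velocity at toe-off.
sol = solve_ivp(pendulum, (0.0, 0.5), [0.6, 0.0], dense_output=True)
print("segment angle at t = 0.4 s:", sol.sol(0.4)[0])
```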

  19. Modeling Mixed Bicycle Traffic Flow: A Comparative Study on the Cellular Automata Approach

    Directory of Open Access Journals (Sweden)

    Dan Zhou

    2015-01-01

    Simulation, as a powerful tool for evaluating transportation systems, has been widely used in transportation planning, management, and operations. Most of the simulation models are focused on motorized vehicles, and the modeling of nonmotorized vehicles is ignored. The cellular automata (CA) model is a very important simulation approach and is widely used for motorized vehicle traffic. The Nagel-Schreckenberg (NS) CA model and the multivalue CA (M-CA) model are two categories of CA model that have been used in previous studies on bicycle traffic flow. This paper improves on these two CA models and also compares their characteristics. It introduces a two-lane NS CA model and M-CA model for both regular bicycles (RBs) and electric bicycles (EBs). In the research for this paper, many cases, featuring different values for the slowing down probability, lane-changing probability, and proportion of EBs, were simulated, while the fundamental diagrams and capacities of the proposed models were analyzed and compared between the two models. Field data were collected for the evaluation of the two models. The results show that the M-CA model exhibits more stable performance than the two-lane NS model and provides results that are closer to real bicycle traffic.
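
    A single-lane Nagel-Schreckenberg update captures the CA mechanics underlying both models; the sketch below uses assumed cell counts, maximum speed and slowdown probability and omits the paper's two-lane RB/EB extensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ns_step(pos, vel, road_len, v_max=3, p_slow=0.3):
    """One update of a single-lane Nagel-Schreckenberg CA (toy bicycle-traffic sketch)."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len       # empty cells ahead on a ring road
    vel = np.minimum(vel + 1, v_max)                     # 1. accelerate
    vel = np.minimum(vel, gaps)                          # 2. brake to avoid collisions
    slow = rng.random(len(vel)) < p_slow
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)    # 3. random slowdown
    pos = (pos + vel) % road_len                         # 4. advance
    return pos, vel

# A ring road of 50 cells with 10 bicycles, run for 100 steps.
pos = np.sort(rng.choice(50, size=10, replace=False))
vel = np.zeros(10, dtype=int)
for _ in range(100):
    pos, vel = ns_step(pos, vel, road_len=50)
```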

  20. The generalization through the treatment with fricatives: favorable environments versus unfavorable and neutral environments

    Directory of Open Access Journals (Sweden)

    Fernanda Marafiga Wiethan

    2013-01-01

    The aim of this study was to analyze and compare the occurrence and the types of generalization observed following treatment of the fricatives /z/, /ζ/ and /k/ in two groups of children, one using words with favorable phonological contexts and the other using unfavorable and neutral contexts. Six children with phonological disorders, aged between 4:7 and 7:8, participated in the study with their parents' authorization. Speech-language and complementary evaluations were carried out to diagnose the phonological disorder. The subjects were matched according to the severity of the disorder, sex, age, and aspects of the phonological system in relation to the altered phonemes. Half of the children were treated with words in which the phonemes /z/, /ζ/ and /k/ occurred in favorable phonological environments, and the other half with words in which they occurred in unfavorable and neutral environments. Eight sessions were carried out and, after these, a new speech evaluation was performed to verify the types of generalization obtained. The percentages of generalization were compared between the groups using the Mann-Whitney statistical test.

  1. Dynamic viscosity modeling of methane plus n-decane and methane plus toluene mixtures: Comparative study of some representative models

    DEFF Research Database (Denmark)

    Baylaucq, A.; Boned, C.; Canet, X.;

    2005-01-01

    .15 and for several methane compositions. Although very far from real petroleum fluids, these mixtures are interesting in order to study the potential of extending various models to the simulation of complex fluids with asymmetrical components (light/heavy hydrocarbon). These data (575 data points) have been...... discussed in the framework of recent representative models (hard sphere scheme, friction theory, and free volume model) and with mixing laws and two empirical models (particularly the LBC model which is commonly used in petroleum engineering, and the self-referencing model). This comparative study shows...

  2. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Background: In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods: We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results: The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age
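
    For intuition about the metapopulation side of the comparison, a toy two-patch stochastic SIR model with travel coupling can be written in a few lines (illustrative parameters only, unrelated to the GLEaM calibration or the Italian scenario):

```python
import numpy as np

rng = np.random.default_rng(1)
beta, gamma, travel = 0.3, 0.1, 0.01     # transmission rate, recovery rate, daily travel fraction
S = np.array([99990.0, 100000.0])        # susceptibles in patch 0 and patch 1
I = np.array([10.0, 0.0])                # seed infections in patch 0 only
R = np.zeros(2)

for day in range(200):
    N = S + I + R
    new_inf = rng.binomial(S.astype(int), 1.0 - np.exp(-beta * I / N))
    new_rec = rng.binomial(I.astype(int), 1.0 - np.exp(-gamma))
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    for X in (S, I, R):                  # symmetric travel coupling between the patches
        X += travel * (X[::-1] - X)

print("final attack rate per patch:", R / (S + I + R))
```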

  3. On Comparing NWP and Radar Nowcast Models for Forecasting of Urban Runoff

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Bøvith, T.; Rasmussen, Michael R.;

    2012-01-01

    The paper compares quantitative precipitation forecasts using weather radars and numerical weather prediction models. In order to test forecasts under different conditions, point-comparisons with quantitative radar precipitation estimates and raingauges are presented. Furthermore, spatial...

  4. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, Watts–Strogatz small world model, Albert–Barabási preferential attachment model, Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
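
    The proposed comparison boils down to computing the entropy of centrality-measure distributions for the empirical graph and for realizations of candidate generators. A hedged sketch using NetworkX and only the degree distribution (the paper also uses betweenness and closeness centrality):

```python
import networkx as nx
import numpy as np
from scipy.stats import entropy

def degree_entropy(g):
    """Shannon entropy of the degree distribution of graph g."""
    degrees = np.array([d for _, d in g.degree()])
    counts = np.bincount(degrees)
    return entropy(counts / counts.sum())

# Hypothetical "empirical" graph vs. candidate theoretical models of the same size.
empirical = nx.barabasi_albert_graph(1000, 3, seed=1)
candidates = {
    "Erdos-Renyi": nx.gnp_random_graph(1000, 0.006, seed=2),
    "Barabasi-Albert": nx.barabasi_albert_graph(1000, 3, seed=3),
    "Watts-Strogatz": nx.watts_strogatz_graph(1000, 6, 0.1, seed=4),
}
target = degree_entropy(empirical)
for name, g in candidates.items():
    print(name, abs(degree_entropy(g) - target))   # smaller difference = more similar model
```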

  5. AeroCom INSITU Project: Comparing modeled and measured aerosol optical properties

    Science.gov (United States)

    Andrews, Elisabeth; Schmeisser, Lauren; Schulz, Michael; Fiebig, Markus; Ogren, John; Bian, Huisheng; Chin, Mian; Easter, Richard; Ghan, Steve; Kokkola, Harri; Laakso, Anton; Myhre, Gunnar; Randles, Cynthia; da Silva, Arlindo; Stier, Phillip; Skeie, Ragnehild; Takemura, Toshihiko; van Noije, Twan; Zhang, Kai

    2016-04-01

    AeroCom, an open international collaboration of scientists seeking to improve global aerosol models, recently initiated a project comparing model output to in-situ, surface-based measurements of aerosol optical properties. The model/measurement comparison project, called INSITU, aims to evaluate the performance of a suite of AeroCom aerosol models with site-specific observational data in order to inform iterative improvements to model aerosol modules. Surface in-situ data has the unique property of being traceable to physical standards, which is an asset in accomplishing the overall goal of bettering the accuracy of aerosol processes and the predictive capability of global climate models. Here we compare dry, in-situ aerosol scattering and absorption data from ~75 surface, in-situ sites from various global aerosol networks (including NOAA, EUSAAR/ACTRIS and GAW) with simulated optical properties from a suite of models participating in the AeroCom project. We report how well models reproduce aerosol climatologies for a variety of time scales, aerosol characteristics and behaviors (e.g., aerosol persistence and the systematic relationships between aerosol optical properties), and aerosol trends. Though INSITU is a multi-year endeavor, preliminary phases of the analysis suggest substantial model biases in absorption and scattering coefficients compared to surface measurements, though the sign and magnitude of the bias varies with location. Spatial patterns in the biases highlight model weaknesses, e.g., the inability of models to properly simulate aerosol characteristics at sites with complex topography. Additionally, differences in modeled and measured systematic variability of aerosol optical properties suggest that some models are not accurately capturing specific aerosol behaviors, for example, the tendency of in-situ single scattering albedo to decrease with decreasing aerosol extinction coefficient. The end goal of the INSITU project is to identify specific

  6. Factorization in Color-Favored B Meson Decays to Charm

    CERN Document Server

    Luo, Zumin; Rosner, Jonathan L.

    2001-01-01

    With the improvement of data on $B$ meson decays to various channels it has become possible to test more incisively some factorization predictions made a number of years ago. A concurrent benefit is the ability to constrain the Cabibbo-Kobayashi-Maskawa matrix element $V_{cb}$. Using a simultaneous fit to the rates for the color-favored decays $\bar{B} \to D^{(*)+} \pi^-$ and $\bar{B} \to D^{(*)+} \rho^-$ and to a differential distribution $d\Gamma(\bar{B} \to D^{*+} l^- \bar{\nu})$ ...

  7. Coevolution of robustness, epistasis, and recombination favors asexual reproduction

    OpenAIRE

    MacCarthy, Thomas; Bergman, Aviv

    2007-01-01

    The prevalence of sexual reproduction remains one of the most perplexing phenomena in evolutionary biology. The deterministic mutation hypothesis postulates that sexual reproduction will be advantageous under synergistic epistasis, a condition in which mutations cause a greater reduction in fitness when combined than would be expected from their individual effects. The inverse condition, antagonistic epistasis, correspondingly is predicted to favor asexual reproduction. To assess this hypothe...

  8. Comparing Epileptiform Behavior of Mesoscale Detailed Models and Population Models of Neocortex

    NARCIS (Netherlands)

    Visser, Sid; Meijer, Hil G.E.; Lee, Hyong C.; Drongelen, van Wim; Putten, van Michel J.A.M; Gils, van Stephan A.

    2010-01-01

    Two models of the neocortex are developed to study normal and pathologic neuronal activity. One model contains a detailed description of a neocortical microcolumn represented by 656 neurons, including superficial and deep pyramidal cells, four types of inhibitory neurons, and realistic synaptic cont

  9. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Background: Computational models of protein structures were proved to be useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results: For our dataset of 615 search models, using the real local accuracy of a model increases the MR success ratio by 101% compared to the corresponding polyalanine templates. On the contrary, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict this accuracy, MetaMQAPclust, a "clustering MQAP", was used. Conclusions: Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict whether an MR search with a template-based model for a given template is likely to find the correct solution.

  10. Menor valor: ¿oferta más favorable? [Lowest price: the most favorable offer?]

    Directory of Open Access Journals (Sweden)

    Iván Alirio Ramírez Rusinque

    2014-07-01

    The most important transformation introduced by Law 1150 of 2007 in public procurement was the designation of the lowest price as the most favorable offer for state entities when acquiring or supplying goods and services of uniform technical characteristics and common use. It is therefore of great importance to carry out an analysis, from a legal and regulatory point of view, that establishes whether this change to the concept of the most favorable offer achieved the aims pursued by the reform or whether the solution was instead harmful, by affecting the duty of objective selection; the public-procurement principles of equality, efficacy, transparency, planning, and the financial equilibrium of the contract; and the constitutional principle of free competition. The aim is thus to establish whether the lowest-price model, as the criterion for the most favorable offer, is legally efficient in the field of public procurement, and whether the rules currently in force allow the harmonious development of the system established by Law 1150 of 2007.

  11. Favorable results with syringosubarachnoid shunts for treatment of syringomyelia.

    Science.gov (United States)

    Tator, C H; Meguro, K; Rowed, D W

    1982-04-01

    From 1969 to 1979, 20 patients with syringomyelia were treated with a syringosubarachnoid shunt. The principal indications for this procedure were: significant progressive neurological deterioration and absent or minimal tonsillar ectopia. There were 15 patients with idiopathic syringomyelia, four with posttraumatic syringomyelia, and one with syringomyelia secondary to spinal arachnoiditis. The operations were performed with an operating microscope, and attention was directed to preserving the arachnoid membrane to ensure proper placement of the distal end of the shunt in an intact subarachnoid space. In all cases, a silicone rubber ventricular catheter was inserted into the syrinx through a posterior midline myelotomy. The average follow-up period was 5 years. A favorable result was obtained in 15 of the 20 patients (75%), including an excellent result with improvement of neurological deficit in 11 patients and a good result with cessation of progression in four patients. In the remaining five patients the result was poor with further progression of neurological deficit. A short duration of preoperative symptoms was usually a favorable prognostic feature. Four patients with a history of less than 6 months all had excellent results. Thirteen patients had a syringosubarachnoid shunt only, and all had good or excellent results. Seven patients had other surgical procedures, before, accompanying, or after shunt placement, and two had favorable results. Thus, the syringosubarachnoid shunt is an effective therapeutic modality for many patients with syringomyelia, particularly if there is little or no tonsillar herniation.

  12. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

    In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in field study. These prediction models were...... between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...... developed on the basis of the original PMV/SET models and consider the influence of occupants' expectations and human adaptive functions, including the extended PMV/SET models and the adaptive PMV/SET models. The results showed that when the indoor air velocity ranged from 0 to 0.2m/s and from 0.2 to 0.8m...

  13. Somatic mutations favorable to patient survival are predominant in ovarian carcinomas.

    Directory of Open Access Journals (Sweden)

    Wensheng Zhang

    Somatic mutation accumulation is a major cause of abnormal cell growth. However, some mutations in cancer cells may be deleterious to the survival and proliferation of the cancer cells, thus offering a protective effect to the patients. We investigated this hypothesis via a unique analysis of the clinical and somatic mutation datasets of ovarian carcinomas published by the Cancer Genome Atlas. We defined and screened 562 macro mutation signatures (MMSs) for their associations with the overall survival of 320 ovarian cancer patients. Each MMS measures the number of mutations present on the member genes (except for TP53) covered by a specific Gene Ontology (GO) term in each tumor. We found that somatic mutations favorable to patient survival are predominant in ovarian carcinomas compared to those indicating poor clinical outcomes. Specifically, we identified 19 (3) predictive MMSs that are, usually through a nonlinear dose-dependent effect, associated with good (poor) patient survival. The false discovery rate for the 19 "positive" predictors is at the level of 0.15. The GO terms corresponding to these MMSs include "lysosomal membrane" and "response to hypoxia", each of which is relevant to the progression and therapy of cancer. Using these MMSs as features, we established a classification tree model which can effectively partition the training samples into three prognosis groups with respect to survival time. We validated this model on an independent dataset of the same disease (log-rank p-value < 2.3 × 10⁻⁴) and a dataset of breast cancer (log-rank p-value < 9.3 × 10⁻³). We compared the GO terms corresponding to these MMSs with those enriched with expression-based predictive genes. The analysis showed that the GO term pairs with large similarity are mainly pertinent to the proteins located on the cell organelles responsible for material transport and waste disposal, suggesting the crucial role of these proteins in cancer mortality.

  14. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    Directory of Open Access Journals (Sweden)

    Erin E Poor

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.
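
    A least-cost corridor of the kind compared here can be sketched by treating the complement of habitat suitability as a resistance surface and running a shortest-path search over the raster grid (illustrative NetworkX version with a random surface and assumed endpoint cells, not the authors' GIS workflow):

```python
import networkx as nx
import numpy as np

# Hypothetical habitat-suitability raster (0-1); resistance is its complement.
rng = np.random.default_rng(0)
suitability = rng.random((30, 30))
cost = 1.0 - suitability

# Build a 4-connected grid graph; each move costs the mean resistance of the two cells.
g = nx.grid_2d_graph(*cost.shape)
for u, v in g.edges():
    g.edges[u, v]["weight"] = 0.5 * (cost[u] + cost[v])

# Least-cost route between an assumed winter-range cell and summer-range cell.
path = nx.shortest_path(g, source=(0, 0), target=(29, 29), weight="weight")
print("corridor length (cells):", len(path))
```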

  15. Modeling discourse management compared to other classroom management styles in university physics

    Science.gov (United States)

    Desbien, Dwain Michael

    2002-01-01

    A classroom management technique called modeling discourse management was developed to enhance the modeling theory of physics. Modeling discourse management is a student-centered management that focuses on the epistemology of science. Modeling discourse is social constructivist in nature and was designed to encourage students to present classroom material to each other. In modeling discourse management, the instructor's primary role is of questioner rather than provider of knowledge. Literature is presented that helps validate the components of modeling discourse. Modeling discourse management was compared to other classroom management styles using multiple measures. Both regular and honors university physics classes were investigated. This style of management was found to enhance student understanding of forces, problem-solving skills, and student views of science compared to traditional classroom management styles for both honors and regular students. Compared to other reformed physics classrooms, modeling discourse classes performed as well or better on student understanding of forces. Outside evaluators viewed modeling discourse classes to be reformed, and it was determined that modeling discourse could be effectively disseminated.

  16. Comparing Habitat Suitability and Connectivity Modeling Methods for Conserving Pronghorn Migrations

    Science.gov (United States)

    Poor, Erin E.; Loucks, Colby; Jakes, Andrew; Urban, Dean L.

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements. PMID:23166656

  17. The use of discrete-event simulation modeling to compare handwritten and electronic prescribing systems.

    Science.gov (United States)

    Ghany, Ahmad; Vassanji, Karim; Kuziemsky, Craig; Keshavjee, Karim

    2013-01-01

    Electronic prescribing (e-prescribing) is expected to bring many benefits to Canadian healthcare, such as a reduction in errors and adverse drug reactions. As there currently is no functioning e-prescribing system in Canada that is completely electronic, we are unable to evaluate the performance of a live system. An alternative approach is to use simulation modeling for evaluation. We developed two discrete-event simulation models, one of the current handwritten prescribing system and one of a proposed e-prescribing system, to compare the performance of these two systems. We were able to compare the number of processes in each model, workflow efficiency, and the distribution of patients or prescriptions. Although we were able to compare these models to each other, using discrete-event simulation software was challenging. We were limited in the number of variables we could measure. We discovered non-linear processes and feedback loops in both models that could not be adequately represented using discrete-event simulation software. Finally, interactions between entities in both models could not be modeled using this type of software. We have come to the conclusion that a more appropriate approach to modeling both the handwritten and electronic prescribing systems would be to use a complex adaptive systems approach using agent-based modeling or systems-based modeling.
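
    A discrete-event comparison of the two prescribing workflows can be prototyped as a small queueing model; the SimPy sketch below uses assumed arrival and service rates and collapses the authors' many process steps into a single pharmacist resource:

```python
import simpy
import random

def prescription(env, pharmacist, mean_service, log):
    """One prescription moving through a single pharmacist queue."""
    arrival = env.now
    with pharmacist.request() as req:
        yield req
        yield env.timeout(random.expovariate(1.0 / mean_service))
    log.append(env.now - arrival)            # total turnaround time

def run(mean_service, seed=0):
    random.seed(seed)
    env = simpy.Environment()
    pharmacist = simpy.Resource(env, capacity=1)
    log = []

    def arrivals():
        while True:
            yield env.timeout(random.expovariate(1.0 / 5.0))   # one prescription ~every 5 min
            env.process(prescription(env, pharmacist, mean_service, log))

    env.process(arrivals())
    env.run(until=480)                        # one 8-hour day
    return sum(log) / len(log)

print("handwritten, mean turnaround (min):", run(mean_service=4.0))
print("electronic,  mean turnaround (min):", run(mean_service=2.5))
```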

  18. Regional disaster impact analysis: comparing input-output and computable general equilibrium models

    Science.gov (United States)

    Koks, Elco E.; Carrera, Lorenzo; Jonkeren, Olaf; Aerts, Jeroen C. J. H.; Husby, Trond G.; Thissen, Mark; Standardi, Gabriele; Mysiak, Jaroslav

    2016-08-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: ones that combine both, or either of them, with noneconomic methods. While both IO and CGE models are widely used, they are mainly compared on theoretical grounds. Few studies have compared disaster impacts of different model types in a systematic way and for the same geographical area, using similar input data. Such a comparison is valuable from both a scientific and policy perspective as the magnitude and the spatial distribution of the estimated losses are likely to vary with the chosen modelling approach (IO, CGE, or hybrid). Hence, regional disaster impact loss estimates resulting from a range of models facilitate better decisions and policy making. Therefore, this study analyses the economic consequences for a specific case study, using three regional disaster impact models: two hybrid IO models and a CGE model. The case study concerns two flood scenarios in the Po River basin in Italy. Modelling results indicate that the difference in estimated total (national) economic losses and the regional distribution of those losses may vary by up to a factor of 7 between the three models, depending on the type of recovery path. Total economic impact, comprising all Italian regions, is nonetheless negative in all models.
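
    The input-output side of such comparisons rests on the Leontief quantity model, x = (I - A)^-1 f, where A is the technical-coefficients matrix and f the final demand. A toy sketch with an assumed three-sector economy and a hypothetical flood shock:

```python
import numpy as np

# A[i, j]: input from sector i needed per unit of output of sector j (assumed values).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.20],
              [0.05, 0.15, 0.10]])
final_demand = np.array([100.0, 150.0, 80.0])

leontief_inverse = np.linalg.inv(np.eye(3) - A)
baseline_output = leontief_inverse @ final_demand

# A flood that removes 10% of final demand in the second sector (hypothetical shock).
shocked_demand = final_demand * np.array([1.0, 0.9, 1.0])
shocked_output = leontief_inverse @ shocked_demand

print("total output loss:", baseline_output.sum() - shocked_output.sum())
```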

  19. Model predictions of metal speciation in freshwaters compared to measurements by in situ techniques.

    NARCIS (Netherlands)

    Unsworth, Emily R; Warnken, Kent W; Zhang, Hao; Davison, William; Black, Frank; Buffle, Jacques; Cao, Jun; Cleven, Rob; Galceran, Josep; Gunkel, Peggy; Kalis, Erwin; Kistler, David; Leeuwen, Herman P van; Martin, Michel; Noël, Stéphane; Nur, Yusuf; Odzak, Niksa; Puy, Jaume; Riemsdijk, Willem van; Sigg, Laura; Temminghoff, Erwin; Tercier-Waeber, Mary-Lou; Toepperwien, Stefanie; Town, Raewyn M; Weng, Liping; Xue, Hanbin

    2006-01-01

    Measurements of trace metal species in situ in a softwater river, a hardwater lake, and a hardwater stream were compared to the equilibrium distribution of species calculated using two models, WHAM 6, incorporating humic ion binding model VI and visual MINTEQ incorporating NICA-Donnan. Diffusive gra

  20. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    Science.gov (United States)

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
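
    The Rasch model itself is a one-parameter logistic: the probability of a correct response depends only on the difference between person ability and item difficulty, which is what places persons and items on a common scale. A small illustrative sketch with assumed parameter values:

```python
import numpy as np

def rasch_probability(theta, b):
    """Probability of a correct response under the Rasch model:
    P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Assumed person abilities (theta) and item difficulties (b) on the same logit scale.
abilities = np.array([-1.0, 0.0, 1.5])
difficulties = np.array([-0.5, 0.3, 1.0, 2.0])

# Expected response matrix: persons x items.
expected = rasch_probability(abilities[:, None], difficulties[None, :])
print(expected.round(2))
```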

  1. A comparative study between EGB gravity and GTR by modelling compact stars

    CERN Document Server

    Bhar, Piyali; Sharma, Ranjan

    2016-01-01

    In this paper we utilise the Krori-Barua ansatz to model compact stars within the framework of Einstein-Gauss-Bonnet (EGB) gravity. The thrust of our investigation is to carry out a comparative analysis of the physical properties of our models in EGB and classical general relativity theory.

  2. A comparative study between EGB gravity and GTR by modeling compact stars

    Energy Technology Data Exchange (ETDEWEB)

    Bhar, Piyali [Government General Degree College, Department of Mathematics, Hooghly, West Bengal (India); Govender, Megan [Durban University of Technology, Department of Mathematics, Faculty of Applied Sciences, Durban (South Africa); Sharma, Ranjan [P. D. Women' s College, Department of Physics, Jalpaiguri (India)

    2017-02-15

    In this paper we utilise the Krori-Barua ansatz to model compact stars within the framework of Einstein-Gauss-Bonnet (EGB) gravity. The thrust of our investigation is to carry out a comparative analysis of the physical properties of our models in EGB and classical general relativity theory with the help of graphical representation. Our analysis shows that the central density and central pressure of the EGB star model are higher than those of the GTR star model. The most notable feature is that for both the GTR and the EGB star models the compactness factor crosses the Buchdahl (Phys Rev 116:1027, 1959) limit. (orig.)

  3. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing...... the probability of generating a given string, or computing the most likely path generating a given string. In this paper we consider the problem of computing the most likely string, or consensus string, generated by a given model, and its implications on the complexity of comparing hidden Markov models. We show...... that computing the consensus string, and approximating its probability within any constant factor, is NP-hard, and that the same holds for the closely related labeling problem for class hidden Markov models. Furthermore, we establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1...
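
    For contrast with the NP-hard consensus-string problem, the "easy" direction mentioned above, the probability that a given model generates a given string, is computed by the standard forward algorithm; a toy sketch with assumed parameters:

```python
import numpy as np

def string_probability(pi, A, B, observations):
    """Forward algorithm: probability that an HMM generates a given string.

    pi: (n_states,) initial distribution
    A:  (n_states, n_states) transition matrix, A[i, j] = P(state j | state i)
    B:  (n_states, n_symbols) emission matrix
    observations: sequence of symbol indices
    """
    alpha = pi * B[:, observations[0]]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# A toy two-state model over a binary alphabet (assumed parameters).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(string_probability(pi, A, B, [0, 1, 1, 0]))
```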

  4. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing...... the probability of generating a given string, or computing the most likely path generating a given string. In this paper we consider the problem of computing the most likely string, or consensus string, generated by a given model, and its implications on the complexity of comparing hidden Markov models. We show...... that computing the consensus string, and approximating its probability within any constant factor, is NP-hard, and that the same holds for the closely related labeling problem for class hidden Markov models. Furthermore, we establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1...

  5. Comparative study on different models for estimation of direct normal irradiance (DNI) over Egypt atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Madkour, M.A.; Hamed, A.B. [Physics Department, Faculty of Science, Mansoura University (Egypt); El-Metwally, M. [Physics Department, Faculty of Education at Suez, Suez Canal University, Suez (Egypt)

    2006-03-01

    The results obtained using seven parameterization broadband models, along with two spectral models, to estimate Direct Normal Irradiance (DNI) at four sites in the Egyptian atmosphere were compared with ground DNI measurements. Statistical indicators (MBE, RMSE and R²) have been used to measure the performance of the models. MBE for the full dataset is <1% for both spectral models (SPCTRAL2, SMARTS2) and for the broadband models (MLWT1, MLWT2 and REST), while it equals 1.2% for the YANG model. RMSE values are around 2% for the spectral models and 3% for the broadband models. The DNI prediction error of these models is thus below the experimental error, apart from the large number of observations. On the other hand, the Louche, Dogniaux and Rodgers models perform relatively poorly, with RMSE in most cases >4%. The determination coefficient (R²) is near 1.0 for all models. Excluding the spectral models, the broadband models MLWT1, MLWT2 and REST, along with the YANG model, provide the best performance in all tests; these models can therefore be used for the Egyptian atmosphere. (author)
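
    The statistical indicators used in the comparison are simple to compute; the sketch below assumes the common convention of expressing MBE and RMSE as percentages of the mean observed DNI (illustrative values, not the study's data):

```python
import numpy as np

def mbe(pred, obs):
    """Mean bias error as a percentage of the mean observed value."""
    return 100.0 * np.mean(pred - obs) / np.mean(obs)

def rmse(pred, obs):
    """Root-mean-square error as a percentage of the mean observed value."""
    return 100.0 * np.sqrt(np.mean((pred - obs) ** 2)) / np.mean(obs)

def r_squared(pred, obs):
    """Coefficient of determination between predicted and observed DNI."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical ground-measured vs. model-estimated DNI (W/m^2).
obs = np.array([820.0, 760.0, 905.0, 640.0, 700.0])
pred = np.array([810.0, 775.0, 890.0, 660.0, 695.0])
print(mbe(pred, obs), rmse(pred, obs), r_squared(pred, obs))
```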

  6. Quantification of the accuracy of MRI generated 3D models of long bones compared to CT generated 3D models.

    Science.gov (United States)

    Rathnayaka, Kanchana; Momot, Konstantin I; Noser, Hansrudi; Volp, Andrew; Schuetz, Michael A; Sahama, Tony; Schmutz, Beat

    2012-04-01

    Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computer tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative to CT. This study aims to quantify the accuracy of MRI-based 3D models compared to CT-based 3D models of long bones. The femora of five intact cadaver ovine limbs were scanned using a 1.5 T MRI and a CT scanner. Image segmentation of CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces free of soft tissue with a mechanical contact scanner. The MRI- and CT-derived models were validated against the reference models. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation shows that there are no significant differences between 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI based 3D models was comparable to that of CT-based models and therefore MRI is a potential alternative to CT for generation of 3D models with high geometric accuracy.

  7. A comparative study of the modeled effects of atrazine on aquatic plant communities in midwestern streams.

    Science.gov (United States)

    Nair, Shyam K; Bartell, Steven M; Brain, Richard A

    2015-11-01

    Potential effects of atrazine on the nontarget aquatic plants characteristic of lower-order streams in the Midwestern United States were previously assessed using the Comprehensive Aquatic System Model (CASMATZ ). Another similar bioenergetics-based, mechanistic model, AQUATOX, was examined in the present study, with 3 objectives: 1) to develop an AQUATOX model simulation similar to the CASMATZ model reference simulation in describing temporal patterns of biomass production by modeled plant populations, 2) to examine the implications of the different approaches used by the models in deriving plant community-based levels of concern (LOCs) for atrazine, and 3) to determine the feasibility of implementing alternative ecological models to assess ecological impacts of atrazine on lower-order Midwestern streams. The results of the present comparative modeling study demonstrated that a similar reference simulation to that from the CASMATZ model could be developed using the AQUATOX model. It was also determined that development of LOCs and identification of streams with exposures in excess of the LOCs were feasible with the AQUATOX model. Compared with the CASMATZ model results, however, the AQUATOX model consistently produced higher estimates of LOCs and generated non-monotonic variations of atrazine effects with increasing exposures. The results of the present study suggest an opportunity for harmonizing the treatments of toxicity and toxicity parameter estimation in the CASMATZ and the AQUATOX models. Both models appear useful in characterizing the potential impacts of atrazine on nontarget aquatic plant populations in lower-order Midwestern streams. The present model comparison also suggests that, with appropriate parameterization, these process-based models can be used to assess the potential effects of other xenobiotics on stream ecosystems.

  8. What can be learned from computer modeling? Comparing expository and modeling approaches to teaching dynamic systems behavior

    NARCIS (Netherlands)

    van Borkulo, S.P.|info:eu-repo/dai/nl/297554727; van Joolingen, W.R.; Savelsbergh, E.R.|info:eu-repo/dai/nl/17345853X; de Jong, T.

    2012-01-01

    Computer modeling has been widely promoted as a means to attain higher order learning outcomes. Substantiating these benefits, however, has been problematic due to a lack of proper assessment tools. In this study, we compared computer modeling with expository instruction, using a tailored assessment

  9. Comparing Video Modeling and Graduated Guidance Together and Video Modeling Alone for Teaching Role Playing Skills to Children with Autism

    Science.gov (United States)

    Akmanoglu, Nurgul; Yanardag, Mehmet; Batu, E. Sema

    2014-01-01

    Teaching play skills is important for children with autism. The purpose of the present study was to compare effectiveness and efficiency of providing video modeling and graduated guidance together and video modeling alone for teaching role playing skills to children with autism. The study was conducted with four students. The study was conducted…

  10. What can be learned from computer modeling? Comparing expository and modeling approaches to teaching dynamic systems behavior

    NARCIS (Netherlands)

    van Borkulo, S.P.; van Joolingen, W.R.; Savelsbergh, E.R.; de Jong, T.

    2012-01-01

    Computer modeling has been widely promoted as a means to attain higher order learning outcomes. Substantiating these benefits, however, has been problematic due to a lack of proper assessment tools. In this study, we compared computer modeling with expository instruction, using a tailored assessment

  11. Explaining Japan's Innovation and Trade: A Model of Quality Competition and Dynamic Comparative Advantage

    OpenAIRE

    Grossman, Gene M.

    1990-01-01

    In this paper, I develop a model of dynamic comparative advantage based on endogenous innovation. Firms devote resources to R&D in order to improve the quality of high-technology products. Research successes generate profit opportunities in the world market. The model predicts that a country such as Japan, with an abundance of skilled labor and scarcity of natural resources, will specialize relatively in industrial innovation and in the production of high-technology goods. I use the model to ...

  12. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    Directory of Open Access Journals (Sweden)

    Daminov Ildar

    2016-01-01

    Full Text Available This paper compares smart meter deployment business models to determine the most suitable option for providing smart meter deployment. The authors consider three main business models: the distribution grid company, the energy supplier (energosbyt), and the metering company. The goal of the article is to compare the business models of power companies for a massive smart metering roll-out in the power system of the Russian Federation.

  13. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  14. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This paper presents a comparative study of the velocity increments between the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into a function of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  15. Does Favorable Selection Among Medicare Advantage Enrollees Affect Measurement of Hospital Readmission Rates?

    Science.gov (United States)

    Wong, Edwin S; Hebert, Paul L; Maciejewski, Matthew L; Perkins, Mark; Bryson, Chris L; Au, David H; Liu, Chuan-Fen

    2014-08-01

    Literature indicates favorable selection among Medicare Advantage (MA) enrollees compared with fee-for-service (FFS) enrollees. This study examined whether favorable selection into MA affected readmission rates among Medicare-eligible veterans following hospitalization for congestive heart failure in the Veterans Affairs Health System (VA). We measured total (VA + Medicare FFS) 30-day all-cause readmission rates across hospitals and all of VA. We used Heckman's correction to adjust readmission rates to be representative of all Medicare-eligible veterans, not just FFS-enrolled veterans. The adjusted all-cause readmission rate among FFS veterans was 27.1% (95% confidence interval [CI] = 26.5% to 27.7%), while the adjusted readmission rate among Medicare-eligible veterans was 25.3% (95% CI = 23.6% to 27.1%) after correcting for favorable selection. Readmission rate estimates among FFS veterans generalize to all Medicare-eligible veterans only after accounting for favorable selection into MA. Estimation of quality metrics should carefully consider sample selection to produce valid policy inferences.
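
    As a rough illustration of the kind of correction described above, the sketch below lays out a generic two-step Heckman-type selection adjustment in Python with statsmodels: a probit model of enrollment into the observed fee-for-service sample supplies an inverse Mills ratio that is then added to the outcome regression. The DataFrame, the column names, and the use of a simple linear outcome model are assumptions for illustration only, not details taken from the study.

        import pandas as pd
        import statsmodels.api as sm
        from scipy.stats import norm

        def heckman_two_step(df, z_cols, x_cols):
            # Step 1: probit model of selection into the observed (FFS) sample.
            Z = sm.add_constant(df[z_cols])
            selection = sm.Probit(df["in_ffs"], Z).fit(disp=0)
            xb = Z @ selection.params
            mills = pd.Series(norm.pdf(xb) / norm.cdf(xb), index=df.index)  # inverse Mills ratio

            # Step 2: outcome model on the selected (FFS) sample, augmented with the Mills ratio.
            ffs = df["in_ffs"] == 1
            X = sm.add_constant(df.loc[ffs, x_cols]).assign(mills=mills[ffs])
            outcome = sm.OLS(df.loc[ffs, "readmit"], X).fit()
            return selection, outcome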

  16. Comparative modeling and benchmarking data sets for human histone deacetylases and sirtuin families.

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Kebede, Eyob Hailu; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-02-23

    Histone deacetylases (HDACs) are an important class of drug targets for the treatment of cancers, neurodegenerative diseases, and other types of diseases. Virtual screening (VS) has become a fairly effective approach for the discovery of novel and highly selective histone deacetylase inhibitors (HDACIs). To facilitate the process, we constructed maximal unbiased benchmarking data sets for HDACs (MUBD-HDACs) using our recently published methods that were originally developed for building unbiased benchmarking sets for ligand-based virtual screening (LBVS). The MUBD-HDACs cover all four classes including Class III (Sirtuins family) and 14 HDAC isoforms, composed of 631 inhibitors and 24609 unbiased decoys. Its ligand sets have been validated extensively as chemically diverse, while the decoy sets were shown to be property-matching with ligands and maximally unbiased in terms of "artificial enrichment" and "analogue bias". We also conducted comparative studies with DUD-E and DEKOIS 2.0 sets against HDAC2 and HDAC8 targets and demonstrate that our MUBD-HDACs are unique in that they can be applied unbiasedly to both LBVS and SBVS approaches. In addition, we defined a novel metric, i.e. NLBScore, to detect the "2D bias" and "LBVS favorable" effect within the benchmarking sets. In summary, MUBD-HDACs are the only comprehensive and maximally unbiased benchmark data sets for HDACs (including Sirtuins) that are available so far. MUBD-HDACs are freely available at http://www.xswlab.org/.

  17. Antibiotic Resistances in Livestock: A Comparative Approach to Identify an Appropriate Regression Model for Count Data

    Directory of Open Access Journals (Sweden)

    Anke Hüls

    2017-05-01

    Full Text Available Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcome. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance and structural zero measurements on the resistance outcome as well as on the exposure side are challenges for the epidemiological model building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model building process, comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model), and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence and associated factors for the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistances in livestock. After model comparison based on evaluation of model predictions, Akaike information criterion, and Pearson residuals, here the hurdle model was judged to be the most appropriate model.
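
    As a minimal sketch of the model-comparison step described in this record, the following Python snippet fits Poisson, negative binomial, and zero-inflated alternatives with statsmodels and ranks them by AIC. The data file, the covariate names, and the availability of a hurdle-model class in a given statsmodels version are assumptions for illustration, not details taken from the study.

        # Compare count-data regressions by AIC. A hurdle model would need
        # statsmodels >= 0.14 (statsmodels.discrete.truncated_model.HurdleCountModel)
        # or a hand-built two-part fit; it is omitted here for brevity.
        import pandas as pd
        import statsmodels.api as sm

        farms = pd.read_csv("resistance_counts.csv")          # hypothetical data set
        y = farms["n_resistant"]                               # count of resistant samples per farm
        X = sm.add_constant(farms[["antibiotic_use", "herd_size", "hygiene_score"]])

        candidates = {
            "poisson":        sm.Poisson(y, X).fit(disp=0),
            "neg_binomial":   sm.NegativeBinomial(y, X).fit(disp=0),
            "zero_infl_pois": sm.ZeroInflatedPoisson(y, X).fit(disp=0),
        }
        for name, res in sorted(candidates.items(), key=lambda kv: kv[1].aic):
            print(f"{name:15s}  AIC = {res.aic:8.1f}")
        # Model checking would also compare predicted zero counts and Pearson residuals.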

  18. Comparative efficacy of oral meloxicam and phenylbutazone in 2 experimental pain models in the horse.

    Science.gov (United States)

    Banse, Heidi; Cribb, Alastair E

    2017-02-01

    The efficacy of oral phenylbutazone [PBZ; 4.4 mg/kg body weight (BW), q12h], a non-selective non-steroidal anti-inflammatory drug (NSAID), and oral meloxicam (MXM; 0.6 mg/kg BW, q24h), a COX-2 selective NSAID, were evaluated in 2 experimental pain models in horses: the adjustable heart bar shoe (HBS) model, primarily representative of mechanical pain, and the lipopolysaccharide-induced synovitis (SYN) model, primarily representative of inflammatory pain. In the HBS model, PBZ reduced multiple indicators of pain compared with the placebo and MXM. Meloxicam did not reduce indicators of pain relative to the placebo. In the SYN model, MXM and PBZ reduced increases in carpal skin temperature compared to the placebo. Meloxicam reduced lameness scores and lameness-induced changes in head movement compared to the placebo and PBZ. Phenylbutazone reduced lameness-induced change in head movement compared to the placebo. Overall, PBZ was more effective than MXM at reducing pain in the HBS model, while MXM was more effective at reducing pain in the SYN model at the oral doses used.

  19. A Comparative Study of Theoretical Graph Models for Characterizing Structural Networks of Human Brain

    Directory of Open Access Journals (Sweden)

    Xiaojin Li

    2013-01-01

    Full Text Available Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs are localized by recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL to address the limitations in the identification of the brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using the state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties between two graph models, namely, stickiness-index-based model (STICKY and scale-free gene duplication model (SF-GD, that have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, STICKY and SF-GD models have better performances in characterizing the structural human brain network.
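
    The comparison strategy in this record can be illustrated with a small NetworkX sketch: measure global graph properties of an observed network and of candidate generative models with matched size. NetworkX does not ship the STICKY or SF-GD generators used in the paper, so common generators stand in here purely for illustration, and the input file name is hypothetical.

        import networkx as nx

        observed = nx.read_graphml("brain.graphml")            # assumed to be an undirected graph
        n, m = observed.number_of_nodes(), observed.number_of_edges()
        p = 2.0 * m / (n * (n - 1))                            # edge density for the random model

        models = {
            "observed":        observed,
            "erdos_renyi":     nx.gnp_random_graph(n, p, seed=0),
            "watts_strogatz":  nx.watts_strogatz_graph(n, max(2, round(2 * m / n)), 0.1, seed=0),
            "barabasi_albert": nx.barabasi_albert_graph(n, max(1, round(m / n)), seed=0),
        }
        for name, g in models.items():
            giant = g.subgraph(max(nx.connected_components(g), key=len))  # largest component
            print(name,
                  "clustering=%.3f" % nx.average_clustering(giant),
                  "path_length=%.2f" % nx.average_shortest_path_length(giant))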

  20. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    Science.gov (United States)

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  1. Two models of nursing practice: a comparative study of motivational characteristics, work satisfaction and stress.

    Science.gov (United States)

    Rantanen, Anja; Pitkänen, Anneli; Paimensalo-Karell, Irmeli; Elovainio, Marko; Aalto, Pirjo

    2016-03-01

    To examine the differences in work-related motivational and stress factors between two nursing allocation models (the primary nursing model and the individual patient allocation model). A number of nursing allocation models are applied in hospital settings, but little is known about the potential associations between various models and work-related psychosocial profiles in nurses. A cross-sectional study using an electronic questionnaire. The data were collected from nurses (n = 643) working in 22 wards. In total, 317 questionnaires were returned (response rate 49.3%). There were no significant differences in motivational characteristics between the different models. The nurses working according to the individual patient allocation model were more satisfied with their supervisors. The work itself and turnover caused more stress to the nurses working in the primary nursing model, whereas patient-related stress was higher in the individual patient allocation model. No consistent evidence to support the use of either of these models over the other was found. Both these models have positive and negative features and more comparative research is required on various nursing practice models from different points of view. Nursing directors and ward managers should be aware of the positive and negative features of the various nursing models. © 2015 John Wiley & Sons Ltd.

  2. A quantitative approach for comparing modeled biospheric carbon flux estimates across regional scales

    Directory of Open Access Journals (Sweden)

    D. N. Huntzinger

    2010-10-01

    Full Text Available Given the large differences between biospheric model estimates of regional carbon exchange, there is a need to understand and reconcile the predicted spatial variability of fluxes across models. This paper presents a set of quantitative tools that can be applied for comparing flux estimates in light of the inherent differences in model formulation. The presented methods include variogram analysis, variable selection, and geostatistical regression. These methods are evaluated in terms of their ability to assess and identify differences in spatial variability in flux estimates across North America among a small subset of models, as well as differences in the environmental drivers that appear to have the greatest control over the spatial variability of predicted fluxes. The examined models are the Simple Biosphere (SiB 3.0, Carnegie Ames Stanford Approach (CASA, and CASA coupled with the Global Fire Emissions Database (CASA GFEDv2, and the analyses are performed on model-predicted net ecosystem exchange, gross primary production, and ecosystem respiration. Variogram analysis reveals consistent seasonal differences in spatial variability among modeled fluxes at a 1°×1° spatial resolution. However, significant differences are observed in the overall magnitude of the carbon flux spatial variability across models, in both net ecosystem exchange and component fluxes. Results of the variable selection and geostatistical regression analyses suggest fundamental differences between the models in terms of the factors that control the spatial variability of predicted flux. For example, carbon flux is more strongly correlated with percent land cover in CASA GFEDv2 than in SiB or CASA. Some of these factors can be linked back to model formulation, and would have been difficult to identify simply by comparing net fluxes between models. Overall, the quantitative approach presented here provides a set of tools for comparing predicted grid-scale fluxes across

  3. A systematic approach for comparing modeled biospheric carbon fluxes across regional scales

    Directory of Open Access Journals (Sweden)

    D. N. Huntzinger

    2011-06-01

    Full Text Available Given the large differences between biospheric model estimates of regional carbon exchange, there is a need to understand and reconcile the predicted spatial variability of fluxes across models. This paper presents a set of quantitative tools that can be applied to systematically compare flux estimates despite the inherent differences in model formulation. The presented methods include variogram analysis, variable selection, and geostatistical regression. These methods are evaluated in terms of their ability to assess and identify differences in spatial variability in flux estimates across North America among a small subset of models, as well as differences in the environmental drivers that best explain the spatial variability of predicted fluxes. The examined models are the Simple Biosphere (SiB 3.0, Carnegie Ames Stanford Approach (CASA, and CASA coupled with the Global Fire Emissions Database (CASA GFEDv2, and the analyses are performed on model-predicted net ecosystem exchange, gross primary production, and ecosystem respiration. Variogram analysis reveals consistent seasonal differences in spatial variability among modeled fluxes at a 1° × 1° spatial resolution. However, significant differences are observed in the overall magnitude of the carbon flux spatial variability across models, in both net ecosystem exchange and component fluxes. Results of the variable selection and geostatistical regression analyses suggest fundamental differences between the models in terms of the factors that explain the spatial variability of predicted flux. For example, carbon flux is more strongly correlated with percent land cover in CASA GFEDv2 than in SiB or CASA. Some of the differences in spatial patterns of estimated flux can be linked back to differences in model formulation, and would have been difficult to identify simply by comparing net fluxes between models. Overall, the systematic approach presented here provides a set of tools for comparing
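
    The first diagnostic used in both of these papers, the empirical semivariogram of a gridded flux field, can be sketched in a few lines of NumPy. The array name, the grid-index distance metric, and the lag bins below are placeholders; a production analysis would use great-circle distances and handle anisotropy.

        import numpy as np

        def empirical_variogram(flux, lags):
            # flux: 2-D array of model NEE on a regular grid (NaN for missing cells).
            ny, nx = flux.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            pts = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
            val = flux.ravel()
            keep = np.isfinite(val)
            pts, val = pts[keep], val[keep]

            d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))   # pairwise distances
            sq = 0.5 * (val[:, None] - val[None, :]) ** 2                     # semivariances
            iu = np.triu_indices_from(d, k=1)
            d, sq = d[iu], sq[iu]
            return np.array([sq[(d >= lo) & (d < hi)].mean()
                             for lo, hi in zip(lags[:-1], lags[1:])])

        # Comparing gamma(lag) curves from two models (e.g. SiB vs. CASA) is one way to
        # quantify differences in the spatial variability of their predicted fluxes.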

  4. Towards a systemic functional model for comparing forms of discourse in academic writing

    Directory of Open Access Journals (Sweden)

    Meriel Bloor

    2008-04-01

    Full Text Available This article reports on research into the variation of texts across disciplines and considers the implications of this work for the teaching of writing. The research was motivated by the need to improve students’ academic writing skills in English and the limitations of some current pedagogic advice. The analysis compares Methods sections of research articles across four disciplines, including applied and hard sciences, on a cline, or gradient, termed slow to fast. The analysis considers the characteristics the texts share, but more importantly identifies the variation between sets of linguistic features. Working within a systemic functional framework, the texts are analysed for length, sentence length, lexical density, readability, grammatical metaphor, Thematic choice, as well as various rhetorical functions. Contextually relevant reasons for the differences are considered and the implications of the findings are related to models of text and discourse. Recommendations are made for developing domain models that relate clusters of features to positions on a cline.

  5. Comparative Validation of Realtime Solar Wind Forecasting Using the UCSD Heliospheric Tomography Model

    Science.gov (United States)

    MacNeice, Peter; Taktakishvili, Alexandra; Jackson, Bernard; Clover, John; Bisi, Mario; Odstrcil, Dusan

    2011-01-01

    The University of California, San Diego 3D Heliospheric Tomography Model reconstructs the evolution of heliospheric structures, and can make forecasts of solar wind density and velocity up to 72 hours in the future. The latest model version, installed and running in realtime at the Community Coordinated Modeling Center (CCMC), analyzes scintillations of meter wavelength radio point sources recorded by the Solar-Terrestrial Environment Laboratory (STELab) together with realtime measurements of solar wind speed and density recorded by the Advanced Composition Explorer (ACE) Solar Wind Electron Proton Alpha Monitor (SWEPAM). The solution is reconstructed using tomographic techniques and a simple kinematic wind model. Since installation, the CCMC has been recording the model forecasts and comparing them with ACE measurements, and with forecasts made using other heliospheric models hosted by the CCMC. We report the preliminary results of this validation work and comparison with alternative models.

  6. Potato: A Favorable Crop for Plant Molecular Farming

    Institute of Scientific and Technical Information of China (English)

    Sunil Kumar G B; Ganapathi T R; Bapat V A

    2006-01-01

    Potato is one of the important food crops with a high yield potential and nutritional value. It has been used extensively for molecular farming to produce vaccines, antibodies and industrial enzymes. It has several desirable attributes as a favorable crop for the production of recombinant proteins. Potato tubers were employed for bulk production of recombinant antibodies. Vaccine production in potato has progressed to human clinical trials. Human milk proteins were successfully expressed in potato tubers. Potato hairy roots offer another attractive system for the production of useful recombinant proteins in both intracellular and secreted forms. This review describes the use of potato as a prospective host for plant molecular farming.

  7. CKM favored semileptonic decays of heavy hadrons at zero recoil

    CERN Document Server

    Yan, T M; Yu, H L; Yan, Tung Mow; Cheng, Hai Yang

    1996-01-01

    We study the properties of Cabibbo-Kobayashi-Maskawa (CKM) favored semileptonic decays of mesons and baryons containing a heavy quark at the point of no recoil. We first use a diagrammatic analysis to rederive the result observed by earlier authors that at this kinematic point the B meson decays via b\\to c transitions can only produce a D or D^* meson. The result is generalized to include photon emissions which violate heavy quark flavor symmetry. We show that photons emitted by the heavy quarks and the charged lepton are the only light particles that can decorate the decays \\bar{B}\\to D(D^*) + \\ell\

  8. Comparing global models of terrestrial net primary productivity (NPP): Global pattern and differentiation by major biomes

    Science.gov (United States)

    Kicklighter, D.W.; Bondeau, A.; Schloss, A.L.; Kaduk, J.; McGuire, A.D.

    1999-01-01

    Annual and seasonal net primary productivity estimates (NPP) of 15 global models across latitudinal zones and biomes are compared. The models simulated NPP for contemporary climate using common, spatially explicit data sets for climate, soil texture, and normalized difference vegetation index (NDVI). Differences among NPP estimates varied over space and time. The largest differences occur during the summer months in boreal forests (50° to 60°N) and during the dry seasons of tropical evergreen forests. Differences in NPP estimates are related to model assumptions about vegetation structure, model parameterizations, and input data sets.

  9. A comparative study on the flow over an airfoil using transitional turbulence models

    DEFF Research Database (Denmark)

    Lin, Mou; Sarlak Chivaee, Hamid

    2016-01-01

    This work addresses the simulation of the flow over the NREL S826 airfoil at a relatively low Reynolds number (Re = 1 × 10^5) using the CFD solvers OpenFoam and ANSYS Fluent. The flow is simulated using two different transition models, the γ − Reθ and the k − kL − ω model, and the results are examined against the k − ω SST model without transitional formulations. By comparing the simulations with the available experimental data, we find that using a transitional model can effectively improve the flow prediction, especially the drag coefficient results, before the stall.

  10. International consensus model for comparative assessment of chemical emissions in LCA

    DEFF Research Database (Denmark)

    Hauschild, Michael Zwicky; Bachmann, Till M.; Huijbregts, Mark A.J.;

    2008-01-01

    Under the UNEP-SETAC Life Cycle Initiative the six most commonly used characterisation models for toxic impacts from chemicals were compared and harmonised through a sequence of workshops removing differences which were unintentional or unnecessary. A parsimonious (as simple as possible but as complex as needed) and transparent consensus model, USEtox, was created producing characterisation factors that fall within the range of factors from the harmonised existing characterisation models. The USEtox model together with factors for several thousand substances are currently under review to form...

  11. Modeling Central Carbon Metabolic Processes in Soil Microbial Communities: Comparing Measured With Modeled

    Science.gov (United States)

    Dijkstra, P.; Fairbanks, D.; Miller, E.; Salpas, E.; Hagerty, S.

    2013-12-01

    Understanding the mechanisms regulating C cycling is hindered by our inability to directly observe and measure the biochemical processes of glycolysis, pentose phosphate pathway, and TCA cycle in intact and complex microbial communities. Position-specific 13C labeled metabolic tracer probing is proposed as a new way to study microbial community energy production, biosynthesis, C use efficiency (the proportion of substrate incorporated into microbial biomass), and enables the quantification of C fluxes through the central C metabolic network processes (Dijkstra et al 2011a,b). We determined the 13CO2 production from U-13C, 1-13C, 2-13C, 3-13C, 4-13C, 5-13C, and 6-13C labeled glucose and 1-13C and 2,3-13C pyruvate in parallel incubations in three soils along an elevation gradient. Qualitative and quantitative interpretation of the results indicate a high pentose phosphate pathway activity in soils. Agreement between modeled and measured CO2 production rates for the six C-atoms of 13C-labeled glucose indicate that the metabolic model used is appropriate for soil community processes, but that improvements can be made. These labeling and modeling techniques may improve our ability to analyze the biochemistry and (eco)physiology of intact microbial communities. Dijkstra, P., Blankinship, J.C., Selmants, P.C., Hart, S.C., Koch, G.W., Schwartz, E., Hungate, B.A., 2011a. Probing C flux patterns of soil microbial metabolic networks using parallel position-specific tracer labeling. Soil Biology & Biochemistry 43, 126-132. Dijkstra, P., Dalder, J.J., Selmants, P.C., Hart, S.C., Koch, G.W., Schwartz, E., Hungate, B.A., 2011b. Modeling soil metabolic processes using isotopologue pairs of position-specific 13C-labeled glucose and pyruvate. Soil Biology & Biochemistry 43, 1848-1857.

  12. Comparative performance of Bayesian and AIC-based measures of phylogenetic model uncertainty.

    Science.gov (United States)

    Alfaro, Michael E; Huelsenbeck, John P

    2006-02-01

    Reversible-jump Markov chain Monte Carlo (RJ-MCMC) is a technique for simultaneously evaluating multiple related (but not necessarily nested) statistical models that has recently been applied to the problem of phylogenetic model selection. Here we use a simulation approach to assess the performance of this method and compare it to Akaike weights, a measure of model uncertainty that is based on the Akaike information criterion. Under conditions where the assumptions of the candidate models matched the generating conditions, both Bayesian and AIC-based methods perform well. The 95% credible interval contained the generating model close to 95% of the time. However, the size of the credible interval differed with the Bayesian credible set containing approximately 25% to 50% fewer models than an AIC-based credible interval. The posterior probability was a better indicator of the correct model than the Akaike weight when all assumptions were met but both measures performed similarly when some model assumptions were violated. Models in the Bayesian posterior distribution were also more similar to the generating model in their number of parameters and were less biased in their complexity. In contrast, Akaike-weighted models were more distant from the generating model and biased towards slightly greater complexity. The AIC-based credible interval appeared to be more robust to the violation of the rate homogeneity assumption. Both AIC and Bayesian approaches suggest that substantial uncertainty can accompany the choice of model for phylogenetic analyses, suggesting that alternative candidate models should be examined in analysis of phylogenetic data. [AIC; Akaike weights; Bayesian phylogenetics; model averaging; model selection; model uncertainty; posterior probability; reversible jump.].
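
    The AIC-based measure compared in this record rests on Akaike weights, w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2) with Δ_i = AIC_i − AIC_min, from which a 95% credible set of models can be assembled. The short sketch below computes the weights; the AIC values are placeholders.

        import numpy as np

        def akaike_weights(aic):
            aic = np.asarray(aic, dtype=float)
            delta = aic - aic.min()               # AIC differences relative to the best model
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        aic = [5012.3, 5014.1, 5020.8, 5037.5]    # hypothetical AIC scores for candidate models
        w = akaike_weights(aic)
        order = np.argsort(w)[::-1]
        cumulative = np.cumsum(w[order])          # keep models until cumulative weight >= 0.95
        print(w, cumulative)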

  13. COMPARATIVE ANALYSIS BETWEEN THE TRADITIONAL MODEL OF CORPORATE GOVERNANCE AND ISLAMIC MODEL

    Directory of Open Access Journals (Sweden)

    DAN ROXANA LOREDANA

    2016-08-01

    Full Text Available Corporate governance represents a set of processes and policies by which a company is administered, controlled and directed to achieve the predetermined management objectives set by the shareholders. The most important benefits of corporate governance to organisations relate to business success, investor confidence and minimisation of wastage. For business, the improved controls and decision-making will aid corporate success as well as growth in revenues and profits. For investor confidence, corporate governance means that investors are more likely to trust that the company is being well run. This will not only make it easier and cheaper for the company to raise finance, but also has a positive effect on the share price. Regarding the minimisation of wastage, strong corporate governance should help to minimise waste within the organisation, as well as corruption, risks and mismanagement. Thus, in our research, we try to determine the common elements, and also the differences, that have occurred between two well-known models of corporate governance: the traditional Anglo-Saxon model and the Islamic model of corporate governance.

  14. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    Science.gov (United States)

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.

  15. Comparing discrete fracture and continuum models to predict contaminant transport in fractured porous media.

    Science.gov (United States)

    Blessent, Daniela; Jørgensen, Peter R; Therrien, René

    2014-01-01

    We used the FRAC3Dvs numerical model (Therrien and Sudicky 1996) to compare the dual-porosity (DP), equivalent porous medium (EPM), and discrete fracture matrix diffusion (DFMD) conceptual models to predict field-scale contaminant transport in a fractured clayey till aquitard. The simulations show that the DP, EPM, and DFMD models could be equally well calibrated to reproduce contaminant breakthrough in the till aquitard for a base case. In contrast, when groundwater velocity and degradation rates are modified with respect to the base case, the DP method simulated contaminant concentrations up to three orders of magnitude different from those calculated by the DFMD model. In previous simulations of well-characterized column experiments, the DFMD method reproduced observed changes in solute transport for a range of flow and transport conditions comparable to those of the field-scale simulations, while the DP and EPM models required extensive recalibration to avoid high magnitude errors in predicted mass transport. The lack of robustness with respect to variable flow and transport conditions suggests that DP models and effective porosity EPM models have limitations for predicting cause-effect relationships in environmental planning. The study underlines the importance of obtaining well-characterized experimental data for further studies and evaluation of model key process descriptions and model suitability. © 2013, National Groundwater Association.

  16. Disregarding RBE variation in treatment plan comparison may lead to bias in favor of proton plans.

    Science.gov (United States)

    Wedenberg, Minna; Toma-Dasu, Iuliana

    2014-09-01

    Currently in proton radiation therapy, a constant relative biological effectiveness (RBE) equal to 1.1 is assumed. The purpose of this study is to evaluate the impact of disregarding variations in RBE on the comparison of proton and photon treatment plans. Intensity modulated treatment plans using photons and protons were created for three brain tumor cases with the target situated close to organs at risk. The proton plans were optimized assuming a standard RBE equal to 1.1, and the resulting linear energy transfer (LET) distribution for the plans was calculated. In the plan evaluation, the effect of a variable RBE was studied. The RBE model used considers the RBE variation with dose, LET, and the tissue specific parameter α/β of photons. The plan comparison was based on dose distributions, DVHs and normal tissue complication probabilities (NTCPs). Under the assumption of RBE=1.1, higher doses to the tumor and lower doses to the normal tissues were obtained for the proton plans compared to the photon plans. In contrast, when accounting for RBE variations, the comparison showed lower doses to the tumor and hot spots in organs at risk in the proton plans. These hot spots resulted in higher estimated NTCPs in the proton plans compared to the photon plans. Disregarding RBE variations might lead to suboptimal proton plans giving lower effect in the tumor and higher effect in normal tissues than expected. For cases where the target is situated close to structures sensitive to hot spot doses, this trend may lead to bias in favor of proton plans in treatment plan comparisons.
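
    For intuition, the sketch below evaluates a generic linear-quadratic-based variable RBE of the RBE_max/RBE_min form, which shows why effect-weighted hot spots can appear where LET is high and the tissue (α/β)_x is low. The LET dependence of RBE_max (slope k) and RBE_min = 1 are placeholders for illustration, not necessarily the exact parameterization used in the study.

        import numpy as np

        def variable_rbe(dose_gy, let_kev_um, ab_x, k=0.4, rbe_min=1.0):
            # Standard LQ-based combination of RBE_max (low-dose limit) and RBE_min (high-dose limit).
            rbe_max = 1.0 + k * let_kev_um / ab_x              # placeholder LET dependence
            root = np.sqrt(ab_x**2 + 4.0 * dose_gy * ab_x * rbe_max
                           + 4.0 * dose_gy**2 * rbe_min**2)
            return (root - ab_x) / (2.0 * dose_gy)

        # Target-like tissue at moderate LET vs. late-responding tissue at the distal edge:
        print(variable_rbe(dose_gy=2.0, let_kev_um=2.0, ab_x=10.0))   # close to the generic 1.1
        print(variable_rbe(dose_gy=2.0, let_kev_um=6.0, ab_x=2.0))    # noticeably higher than 1.1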

  17. Favorable prognosis of female patients with nasopharyngeal carcinoma

    Institute of Scientific and Technical Information of China (English)

    Xing Lu; Fei-Li Wang; Xiang Guo; Lin Wang; Hai-Bo Zhang; Wei-Xiong Xia; Si-Wei Li

    2013-01-01

    The female sex is traditionally considered a favorable prognostic factor for nasopharyngeal carcinoma (NPC). However, no particular study has reported this phenomenon. To explore the prognostic impact of gender on patients with NPC after definitive radiotherapy, we reviewed the clinical data of 2,063 consecutive patients treated between 1st January 2000 and 31st December 2003 in the Sun Yat-sen University Cancer Center. The median follow-up for the whole series was 81 months. The female and male patients with early stage disease comprised 49.4% and 28.1% of the patient population, respectively. Both the 5-year overall survival (OS) and disease-specific survival (DSS) rates of female patients were significantly higher than those of male patients (OS: 79% vs. 69%, P < 0.001; DSS: 81% vs. 70%, P < 0.001). For patients with locoregionally advanced NPC, the 5-year OS and DSS rates of female vs. male patients were 74% vs. 63% (P < 0.001) and 76% vs. 64%, respectively (P < 0.001). A multivariate analysis showed that gender, age, and TNM stage were independent prognostic factors for the 5-year OS and DSS of NPC patients. The favorable prognosis of female patients is not only attributed to the early diagnosis and treatment but might also be attributed to some intrinsic factors of female patients.

  18. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models

    DEFF Research Database (Denmark)

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input there will be a displacement between the time when the rain hits the gauge and the time where the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data is forced upon the model, which therefore will end up including the same rain twice in the model run. This paper compares forecast accuracy of updated models when using time displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either...

  19. Comparative Study of Fatigue Damage Models Using Different Number of Classes Combined with the Rainflow Method

    Directory of Open Access Journals (Sweden)

    S. Zengah

    2013-06-01

    Full Text Available Fatigue damage increases with applied load cycles in a cumulative manner. Fatigue damage models play a key role in the life prediction of components and structures subjected to random loading. The aim of this paper is to examine the performance of the “Damaged Stress Model”, previously proposed and validated, against other fatigue models under random loading before and after reconstruction of the load histories. To achieve this objective, several linear and nonlinear models proposed for fatigue life estimation are considered, and a batch of specimens made of 6082T6 aluminum alloy is subjected to random loading. The damage was accumulated by Miner’s rule, the Damaged Stress Model (DSM), the Henry model and the Unified Theory (UT), and random cycles were counted with a rain-flow algorithm. Experimental data on high-cycle fatigue under complex loading histories with different mean and amplitude stress values are analyzed for life calculation, and model predictions are compared.
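
    As a baseline for the comparison described above, linear damage accumulation (Miner's rule) from rainflow-counted cycles can be sketched as below. The Basquin-type S-N constants and the cycle list are hypothetical placeholders; in practice the (stress range, count) pairs would come from a rainflow counting routine applied to the measured load history.

        import numpy as np

        def cycles_to_failure(stress_range_mpa, sigma_f=750.0, b=-0.1):
            # Basquin-type curve sigma_a = sigma_f * (2N)^b, solved for N (constants are placeholders).
            sigma_a = 0.5 * stress_range_mpa
            return 0.5 * (sigma_a / sigma_f) ** (1.0 / b)

        def miner_damage(cycles):
            # Linear summation D = sum(n_i / N_i); failure is predicted when D reaches 1.
            return sum(count / cycles_to_failure(rng) for rng, count in cycles)

        cycles = [(320.0, 1200), (240.0, 5400), (150.0, 22000)]   # hypothetical rainflow output
        print(f"Accumulated damage D = {miner_damage(cycles):.3f}")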

  20. Comparing Ray-Based and Wave-Based Models of Cross-Beam Energy Transfer

    Science.gov (United States)

    Follett, R. K.; Edgell, D. H.; Shaw, J. G.; Froula, D. H.; Myatt, J. F.

    2016-10-01

    Ray-based models of cross-beam energy transfer (CBET) are used in radiation-hydrodynamics codes to calculate laser-energy deposition. The accuracy of ray-based CBET models is limited by assumptions about the polarization and phase of the interacting laser beams and by the use of a paraxial Wentzel-Kramers-Brillouin (WKB) approximation. A 3-D wave-based solver (LPSE-CBET) is used to study the nonlinear interaction between overlapping laser beams in underdense plasma. A ray-based CBET model is compared to the wave-based model and shows good agreement in simple geometries where the assumptions of the ray-based model are satisfied. Near caustic surfaces, the assumptions of the ray-based model break down and the calculated energy transfer deviates from wave-based calculations. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  1. Model predictions of copper speciation in coastal water compared to measurements by analytical voltammetry.

    Science.gov (United States)

    Ndungu, Kuria

    2012-07-17

    Trace metal toxicity to aquatic biota is highly dependent on the metal's chemical speciation. Accordingly, metal speciation is being incorporated into water quality criteria and toxicity regulations using the Biotic Ligand Model (BLM) but there are currently no BLM for biota in marine and estuarine waters. In this study, I compare copper speciation measurements in a typical coastal water made using competitive ligand exchange-adsorptive cathodic stripping voltammetry (CLE-ACSV) to model calculations using Visual MINTEQ. Both Visual MINTEQ and BLM use similar programs to model copper interactions with dissolved organic matter-DOM (i.e., the Stockholm Humic Model and WHAM-Windermere Humic Aqueous Model, respectively). The total dissolved (14). The modeled [Cu2+] could be fitted to the experimental values better after the conditional stability constant for copper binding to fulvic acid (FA) complexes in DOM in the SHM was adjusted to account for a higher concentration of strong Cu-binding sites in FA.

  2. Comparative testing of dark matter models with 9 HSB and 9 LSB galaxies

    CERN Document Server

    Kun, E; Keresztes, Z; Gergely, L Á

    2016-01-01

    We assemble a database of 9 high-surface brightness (HSB) and 9 low-surface brightness (LSB) galaxies, for which both surface brightness density and spectroscopic rotation curve data are available in the literature, and are representative for the various morphologies. We use this dataset for a comparative testing of the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. We investigate the compatibility of the pure baryonic model and baryonic plus one of the three dark matter models with observations on the assembled galaxy database. When a dark matter component is necessary to explain the spectroscopic rotational curves, we rank the models according to the goodness of fitting to the datasets. We construct the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. An axisymmetric, baryonic mass model with variable axis ratios and three dark matter models are employed to fit the theoretical rotational velocity curves to the...

  3. Comparative systems biology between human and animal models based on next-generation sequencing methods.

    Science.gov (United States)

    Zhao, Yu-Qi; Li, Gong-Hua; Huang, Jing-Fei

    2013-04-01

    Animal models provide myriad benefits to both experimental and clinical research. Unfortunately, in many situations, they fall short of expected results or provide contradictory results. In part, this can be the result of traditional molecular biological approaches that are relatively inefficient in elucidating underlying molecular mechanism. To improve the efficacy of animal models, a technological breakthrough is required. The growing availability and application of the high-throughput methods make systematic comparisons between human and animal models easier to perform. In the present study, we introduce the concept of the comparative systems biology, which we define as "comparisons of biological systems in different states or species used to achieve an integrated understanding of life forms with all their characteristic complexity of interactions at multiple levels". Furthermore, we discuss the applications of RNA-seq and ChIP-seq technologies to comparative systems biology between human and animal models and assess the potential applications for this approach in the future studies.

  4. Comparative evaluation of kinetic, equilibrium and semi-equilibrium models for biomass gasification

    Energy Technology Data Exchange (ETDEWEB)

    Buragohain, Buljit [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Chakma, Sankar; Kumar, Peeush [Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Mahanta, Pinakeswar [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Moholkar, Vijayanand S. [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India)

    2013-07-01

    Modeling of biomass gasification has been an active area of research for past two decades. In the published literature, three approaches have been adopted for the modeling of this process, viz. thermodynamic equilibrium, semi-equilibrium and kinetic. In this paper, we have attempted to present a comparative assessment of these three types of models for predicting outcome of the gasification process in a circulating fluidized bed gasifier. Two model biomass, viz. rice husk and wood particles, have been chosen for analysis, with gasification medium being air. Although the trends in molar composition, net yield and LHV of the producer gas predicted by three models are in concurrence, significant quantitative difference is seen in the results. Due to rather slow kinetics of char gasification and tar oxidation, carbon conversion achieved in single pass of biomass through the gasifier, calculated using kinetic model, is quite low, which adversely affects the yield and LHV of the producer gas. Although equilibrium and semi-equilibrium models reveal relative insensitivity of producer gas characteristics towards temperature, the kinetic model shows significant effect of temperature on LHV of the gas at low air ratios. Kinetic models also reveal volume of the gasifier to be an insignificant parameter, as the net yield and LHV of the gas resulting from 6 m and 10 m riser is same. On a whole, the analysis presented in this paper indicates that thermodynamic models are useful tools for quantitative assessment of the gasification process, while kinetic models provide physically more realistic picture.

  5. Comparative evaluation of kinetic, equilibrium and semi-equilibrium models for biomass gasification

    Directory of Open Access Journals (Sweden)

    Buljit Buragohain, Sankar Chakma, Peeush Kumar, Pinakeswar Mahanta, Vijayanand S. Moholkar

    2013-01-01

    Full Text Available Modeling of biomass gasification has been an active area of research for past two decades. In the published literature, three approaches have been adopted for the modeling of this process, viz. thermodynamic equilibrium, semi-equilibrium and kinetic. In this paper, we have attempted to present a comparative assessment of these three types of models for predicting outcome of the gasification process in a circulating fluidized bed gasifier. Two model biomass, viz. rice husk and wood particles, have been chosen for analysis, with gasification medium being air. Although the trends in molar composition, net yield and LHV of the producer gas predicted by three models are in concurrence, significant quantitative difference is seen in the results. Due to rather slow kinetics of char gasification and tar oxidation, carbon conversion achieved in single pass of biomass through the gasifier, calculated using kinetic model, is quite low, which adversely affects the yield and LHV of the producer gas. Although equilibrium and semi-equilibrium models reveal relative insensitivity of producer gas characteristics towards temperature, the kinetic model shows significant effect of temperature on LHV of the gas at low air ratios. Kinetic models also reveal volume of the gasifier to be an insignificant parameter, as the net yield and LHV of the gas resulting from 6 m and 10 m riser is same. On a whole, the analysis presented in this paper indicates that thermodynamic models are useful tools for quantitative assessment of the gasification process, while kinetic models provide physically more realistic picture.

  6. A Basic Protein Comparative Three-Dimensional Modeling Methodological Workflow Theory and Practice.

    Science.gov (United States)

    Bitar, Mainá; Franco, Glória Regina

    2014-01-01

    When working with proteins and studying their properties, it is crucial to have access to the three-dimensional structure of the molecule. If experimentally solved structures are not available, comparative modeling techniques can be used to generate useful protein models to support structure-based research projects. In recent years, with Bioinformatics becoming the basis for the study of protein structures, there is a growing need to expose the details of the algorithms behind the software and servers, as well as a need for protocols to guide in silico predictive experiments. In this article, we explore the different steps of the comparative modeling technique, such as template identification, sequence alignment, generation of candidate structures and quality assessment, together with their peculiarities and theoretical description. We then present a practical step-by-step workflow to support the Biologist in the in silico generation of protein structures. Finally, we explore further steps in comparative modeling, presenting perspectives on the study of protein structures through Bioinformatics. We trust that this is a thorough guide for beginners who wish to work on the comparative modeling of proteins.
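
    One common way to run the workflow sketched in this record (template identification and alignment done beforehand, candidate generation, quality assessment) is MODELLER's classic automodel interface; the minimal sketch below assumes that tool, a prepared alignment file, and a template PDB file. The file names and sequence codes are hypothetical, and MODELLER itself requires a separate (free academic) license.

        from modeller import environ
        from modeller.automodel import automodel, assess

        env = environ()
        env.io.atom_files_directory = ['.', './templates']      # where template PDB files live

        a = automodel(env,
                      alnfile='target-template.ali',             # alignment prepared beforehand (hypothetical)
                      knowns='1abcA',                            # template code in the alignment (hypothetical)
                      sequence='target_seq',                     # target code in the alignment (hypothetical)
                      assess_methods=(assess.DOPE,))             # quality assessment of each candidate
        a.starting_model = 1
        a.ending_model = 5                                       # generate five candidate structures
        a.make()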

  7. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  8. Explaining Japan's Innovation and Trade: A Model of Quality Competition and Dynamic Comparative Advantage

    OpenAIRE

    Grossman, Gene M.

    1989-01-01

    In this paper, I develop a model of dynamic comparative advantage based on endogenous innovation. Firms in each of two countries devote resources to R&D in order to improve the quality of high-technology products. Research successes generate profit opportunities in the world market. The model predicts that a country such as Japan, with an abundance of skilled labor and a scarcity of natural resources, will specialize relatively in industrial innovation and in the production of high-technology good...

  9. Enhancing prediction power of chemometric models through manipulation of the fed spectrophotometric data: A comparative study

    Science.gov (United States)

    Saad, Ahmed S.; Hamdy, Abdallah M.; Salama, Fathy M.; Abdelkawy, Mohamed

    2016-10-01

    The effect of data manipulation in the preprocessing step preceding construction of chemometric models was assessed. The same set of UV spectral data was used to construct PLS and PCR models directly and after mathematical manipulation as per the well-known first- and second-derivative, ratio-spectra, and derivative-of-ratio-spectra spectrophotometric methods; the optimal working wavelength ranges were carefully selected for each model before construction. Unexpectedly, the number of latent variables used for model construction varied among the different methods. The prediction power of the different models was compared using a validation set of 8 mixtures prepared as per a multilevel multifactor design, and the results were statistically compared using a two-way ANOVA test. Root mean square error of prediction (RMSEP) was used for further comparison of predictability among the constructed models. Although no significant difference was found between results obtained using Partial Least Squares (PLS) and Principal Component Regression (PCR) models, the remaining discrepancies among results were attributed to variation in the discrimination power of the adopted spectrophotometric methods on the spectral data.
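
    The kind of comparison described here can be sketched with scikit-learn: PLS and PCR built on raw versus first-derivative (Savitzky-Golay) spectra, judged by RMSEP on a separate validation set. The spectral matrices, concentration vectors, component counts, and filter settings below are placeholders, not values from the study.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        def rmsep(y_true, y_pred):
            return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred).ravel()) ** 2)))

        def evaluate(X_cal, y_cal, X_val, y_val, n_components=3, derivative=False):
            if derivative:                                    # preprocessing step under test
                X_cal = savgol_filter(X_cal, 11, 3, deriv=1, axis=1)
                X_val = savgol_filter(X_val, 11, 3, deriv=1, axis=1)
            pls = PLSRegression(n_components=n_components).fit(X_cal, y_cal)
            pcr = make_pipeline(PCA(n_components=n_components), LinearRegression()).fit(X_cal, y_cal)
            return {"PLS": rmsep(y_val, pls.predict(X_val)),
                    "PCR": rmsep(y_val, pcr.predict(X_val))}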

  10. Human experts' and a fuzzy model's predictions of outcomes of scoliosis treatment: a comparative analysis.

    Science.gov (United States)

    Chalmers, Eric; Pedrycz, Witold; Lou, Edmond

    2015-03-01

    Brace treatment is the most commonly used nonsurgical treatment for adolescents with idiopathic scoliosis. However, brace treatment is not always successful and the factors influencing its success are not completely clear. This makes treatment outcome difficult to predict. A computer model which can accurately predict treatment outcomes could potentially provide valuable treatment recommendations. This paper describes a fuzzy system that includes a prediction model and a decision support engine. The model was constructed using conditional fuzzy c-means clustering to discover patterns in retrospective patient data. The model's ability to predict treatment outcome was compared to the ability of eight Scoliosis experts. The model and experts each predicted treatment outcome retrospectively for 28 braced patients, and these predictions were compared to the actual outcomes. The model outperformed all but one expert individually and performed similarly to the experts as a group. These results suggest that the fuzzy model is capable of providing meaningful treatment recommendations. This study offers the first model for this application whose performance has been shown to be at or above the human expert level.

  11. Comparing model ensembles in an event attribution study of 2012 West African rainfall

    Science.gov (United States)

    Parker, Hannah; Lott, Fraser C.; Cornforth, Rosalind J.

    2016-04-01

    In 2012, heavy rainfall resulted in flooding and devastating impacts across West Africa. With many people highly vulnerable to such events in this region, we investigate here whether anthropogenic climate change has influenced such heavy precipitation events. We use a probabilistic event attribution approach to assess the contribution of anthropogenic greenhouse gas emissions, by comparing the probability of such an event occurring in climate model simulations with all known climate forcings to that in simulations where only natural forcings are included. An ensemble of simulations from 10 models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) is compared to two much larger ensembles of atmosphere-only simulations, from the Met Office model HadGEM3-A and from climateprediction.net (a regional version of HadAM3P). These are used to assess whether the choice of model ensemble influences the attribution statement that can be made. Results show that anthropogenic greenhouse gas emissions have decreased the probability of high precipitation, although the magnitude and confidence intervals of the decrease depend on the model ensemble used. The influences of significant teleconnections are then removed from the CMIP5 ensemble to see how this affects the results and how they compare with the atmosphere-only ensembles.
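
    As a rough sketch of the probabilistic attribution calculation described above (not the study's code), the probability of exceeding the observed 2012 rainfall is estimated in the all-forcings and natural-only ensembles and their ratio is bootstrapped for a confidence interval; the ensemble arrays and the threshold are hypothetical inputs.

    import numpy as np

    def risk_ratio(all_forcings, nat_only, threshold, n_boot=10000, seed=0):
        all_forcings = np.asarray(all_forcings, dtype=float)
        nat_only = np.asarray(nat_only, dtype=float)
        rng = np.random.default_rng(seed)
        rr = np.mean(all_forcings >= threshold) / np.mean(nat_only >= threshold)
        boots = []
        for _ in range(n_boot):
            a = rng.choice(all_forcings, size=all_forcings.size, replace=True)
            n = rng.choice(nat_only, size=nat_only.size, replace=True)
            p_nat = np.mean(n >= threshold)
            if p_nat > 0:                              # skip degenerate resamples
                boots.append(np.mean(a >= threshold) / p_nat)
        low, high = np.percentile(boots, [5, 95])
        return rr, (low, high)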

  12. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling

    Science.gov (United States)

    Li, Jilong; Cheng, Jianlin

    2016-01-01

    Generating tertiary structural models for a target protein from the known structures of its homologous template proteins and their pairwise sequence alignments is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of the template structures, and the Cα atoms of the superposed templates form a point cloud for each position of the target protein, which is represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions of the Cα atoms of residues whose positions are uncertain from this distribution, and accepts or rejects each new position according to a simulated annealing protocol, which effectively removes the atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96–6.37% and 2.42–5.19%, respectively, on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by the new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html. PMID:27161489

  13. The potent and selective α4β2*/α6*-nicotinic acetylcholine receptor partial agonist 2-[5-[5-((S)Azetidin-2-ylmethoxy)-3-pyridinyl]-3-isoxazolyl]ethanol demonstrates antidepressive-like behavior in animal models and a favorable ADME-tox profile.

    Science.gov (United States)

    Yu, Li-Fang; Brek Eaton, J; Zhang, Han-Kun; Sabath, Emily; Hanania, Taleen; Li, Guan-Nan; van Breemen, Richard B; Whiteaker, Paul; Liu, Qiang; Wu, Jie; Chang, Yong-Chang; Lukas, Ronald J; Brunner, Dani; Kozikowski, Alan P

    2014-04-01

    Preclinical and clinical studies have demonstrated that the inhibition of cholinergic supersensitivity through nicotinic antagonists and partial agonists can be used successfully to treat depressed patients, especially those who are poor responders to selective serotonin reuptake inhibitors (SSRIs). In our effort to develop novel antidepressant drugs, LF-3-88 was identified as a potent nicotinic acetylcholine receptor (nAChR) partial agonist with subnanomolar to nanomolar affinities for β2-containing nAChRs (α2β2, α3β2, α4β2, and α4β2*) and superior selectivity away from α3β4- (Ki > 10⁴ nmol/L) and α7-nAChRs (Ki > 10⁴ nmol/L) as well as 51 other central nervous system (CNS)-related neurotransmitter receptors and transporters. Functional activities at different nAChR subtypes were characterized utilizing ⁸⁶Rb⁺ ion efflux assays, two-electrode voltage-clamp (TEVC) recording in oocytes, and whole-cell current recording measurements. In mouse models, administration of LF-3-88 resulted in antidepressive-like behavioral signatures 15 min post injection in the SmartCube® test (5 and 10 mg/kg, i.p.; about 45-min session), decreased immobility in the forced swim test (1-3 mg/kg, i.p.; 1-10 mg/kg, p.o.; 30 min pretreatment, 6-min trial), and decreased latency to approach food in the novelty-suppressed feeding test after 29 days of once-daily chronic administration (5 mg/kg but not 10 mg/kg, p.o.; 15-min trial). In addition, LF-3-88 exhibited a favorable profile in pharmacokinetic/ADME-Tox (absorption, distribution, metabolism, excretion, and toxicity) assays. This compound was also shown to cause no mortality in wild-type Balb/CJ mice when tested at 300 mg/kg. These results further support the potential of potent and selective nicotinic partial agonists for use in the treatment of depression.

  14. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    BACKGROUND: Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement as to which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address what effect the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. PRINCIPAL FINDINGS: Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy as well as across different kinds and levels of model cross-validation. CONCLUSIONS: The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy.
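
    The following sketch (hypothetical data and feature mappings, not the paper's pipeline) shows how learning technique and feature mapping can be crossed and tested with a two-way ANOVA on cross-validated scores, in the spirit of the factorial design above.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR

    def factorial_comparison(feature_sets, y, cv=5):
        # feature_sets: dict mapping a feature-mapping name to its design matrix X.
        techniques = {"ANN": MLPRegressor(max_iter=2000), "GLM": LinearRegression(), "SVM": SVR()}
        rows = []
        for feat_name, X in feature_sets.items():
            for tech_name, model in techniques.items():
                for score in cross_val_score(model, X, y, cv=cv, scoring="r2"):
                    rows.append({"technique": tech_name, "features": feat_name, "score": score})
        scores = pd.DataFrame(rows)
        anova = sm.stats.anova_lm(ols("score ~ C(technique) * C(features)", data=scores).fit(), typ=2)
        return scores, anova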

  15. A Comparative Study on Sand Transport Modeling for Horizontal Multiphase Pipeline

    Directory of Open Access Journals (Sweden)

    Kan Wai Choong

    2014-02-01

    The presence of sand causes adverse effects on hydrocarbon production, pipeline erosion and problems at the wellbore. If the problems persist, production may be stopped or delayed, which imposes workover costs; hence, operating expenses increase and revenue is reduced. There is no explicit calculation algorithm for sand transportation modeling readily available in flow simulators. Therefore, this study aims to develop an Excel-based spreadsheet on sand transportation to predict the sand critical velocity and the onset of sand deposition based on published literature. The authors reviewed nine sand transportation models for pipelines and compared the selected models based on various criteria. Four of these were then developed into a sand modeling spreadsheet: the Turian et al. (1987), Oudeman (1993), Stevenson et al. (2002b) and Danielson (2007) models. The spreadsheet presently focuses on sand production prediction in horizontal two-phase flow. The Danielson model can predict sand hold-up, while the other models estimate the transportable grain size and the critical velocity of sand. Flowing pipeline properties, sand properties and results of simulations (e.g. from OLGA, for flow rate, velocity and superficial velocity of the different phases) are the necessary inputs to the spreadsheet. A user selects any model based on different operating conditions or user preference. The spreadsheet was validated by comparing its results with data extracted from the research papers. Sensitivity analyses can also be performed with the spreadsheet by manipulating parameters such as grain size and flow rate. This review is useful for the development of flow simulators to include sand transport modeling.

  16. HEADING RECOVERY FROM OPTIC FLOW: COMPARING PERFORMANCE OF HUMANS AND COMPUTATIONAL MODELS

    Directory of Open Access Journals (Sweden)

    Andrew John Foulkes

    2013-06-01

    Human observers can perceive their direction of heading with a precision of about a degree. Several computational models of the processes underpinning the perception of heading have been proposed. In the present study we set out to assess which of four candidate models best captured human performance; the four models we selected reflected key differences in terms of approach and methods to modelling optic flow processing to recover movement parameters. We first generated a performance profile for human observers by measuring how performance changed as we systematically manipulated both the quantity (number of dots in the stimulus per frame) and quality (amount of 2D directional noise) of the flow field information. We then generated comparable performance profiles for the four candidate models. Models varied markedly in terms of both their performance and similarity to human data. To formally assess the match between the models and human performance we regressed the output of each of the four models against the human performance data. We were able to rule out two models that produced very different performance profiles to human observers. The remaining two shared some similarities with human performance profiles in terms of the magnitude and pattern of thresholds. However, none of the models tested could capture all aspects of the human data.

  17. Comparative analysis of system identification techniques for nonlinear modeling of the neuron-microelectrode junction.

    Science.gov (United States)

    Khan, Saad Ahmad; Thakore, Vaibhav; Behal, Aman; Bölöni, Ladislau; Hickman, James J

    2013-03-01

    Applications of non-invasive neuroelectronic interfacing in the fields of whole-cell biosensing, biological computation and neural prosthetic devices depend critically on an efficient decoding and processing of information retrieved from a neuron-electrode junction. This necessitates development of mathematical models of the neuron-electrode interface that realistically represent the extracellular signals recorded at the neuroelectronic junction without being computationally expensive. Extracellular signals recorded using planar microelectrode or field effect transistor arrays have, until now, primarily been represented using linear equivalent circuit models that fail to reproduce the correct amplitude and shape of the signals recorded at the neuron-microelectrode interface. In this paper, to explore viable alternatives for a computationally inexpensive and efficient modeling of the neuron-electrode junction, input-output data from the neuron-electrode junction is modeled using a parametric Wiener model and a Nonlinear Auto-Regressive network with eXogenous input trained using a dynamic Neural Network model (NARX-NN model). Results corresponding to a validation dataset from these models are then employed to compare and contrast the computational complexity and efficiency of the aforementioned modeling techniques with the Lee-Schetzen technique of cross-correlation for estimating a nonlinear dynamic model of the neuroelectronic junction.

  18. Comparing the effects of climate and impact model uncertainty on climate impacts estimates for grain maize

    Science.gov (United States)

    Holzkämper, Annelie; Honti, Mark; Fuhrer, Jürg

    2015-04-01

    Crop models are commonly applied to estimate impacts of projected climate change and to anticipate suitable adaptation measures. Thereby, uncertainties from global climate models, regional climate models, and impact models cascade down to impact estimates. It is essential to quantify and understand uncertainties in impact assessments in order to provide informed guidance for decision making in adaptation planning. A question that has hardly been investigated in this context is how sensitive climate impact estimates are to the choice of the impact model approach. In a case study for Switzerland we compare results of three different crop modelling approaches to assess the relevance of impact model choice in relation to other uncertainty sources. The three approaches include an expert-based, a statistical and a process-based model. With each approach, impact model parameter uncertainty and climate model uncertainty (originating from the climate model chain and the downscaling approach) are accounted for. ANOVA-based uncertainty partitioning is performed to quantify the relative importance of the different uncertainty sources. Results suggest that uncertainty in estimated yield changes originating from the choice of the crop modelling approach can be greater than uncertainty from climate model chains. The uncertainty originating from crop model parameterization is small in comparison. While estimates of yield changes are highly uncertain, the directions of estimated changes in climatic limitations are largely consistent. This leads us to the conclusion that by focusing on estimated changes in climatic limitations, more meaningful information can be provided to support decision making in adaptation planning, especially in cases where yield changes are highly uncertain.
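
    A compact sketch (hypothetical data frame, not the authors' implementation) of ANOVA-based uncertainty partitioning: the share of the total sum of squares attributable to crop-model choice, climate-model chain, and their interaction.

    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    def partition_uncertainty(df):
        # df columns: 'yield_change' (estimate), 'crop_model', 'climate_chain' (factor labels).
        fit = ols("yield_change ~ C(crop_model) * C(climate_chain)", data=df).fit()
        table = sm.stats.anova_lm(fit, typ=2)
        return table["sum_sq"] / table["sum_sq"].sum()   # fraction of variance per uncertainty source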

  19. Comparative study of three commonly used continuous deterministic methods for modeling gene regulation networks

    Directory of Open Access Journals (Sweden)

    Dubitzky Werner

    2010-09-01

    Background: A gene-regulatory network (GRN) refers to DNA segments that interact through their RNA and protein products and thereby govern the rates at which genes are transcribed. Creating accurate dynamic models of GRNs is gaining importance in biomedical research and development. To improve our understanding of continuous deterministic modeling methods employed to construct dynamic GRN models, we have carried out a comprehensive comparative study of three commonly used systems of ordinary differential equations: the S-system (SS), artificial neural networks (ANNs), and the general rate law of transcription (GRLOT) method. These were thoroughly evaluated in terms of their ability to replicate the reference models' regulatory structure and dynamic gene expression behavior under varying conditions. Results: While the ANN and GRLOT methods appeared to produce robust models even when the model parameters deviated considerably from those of the reference models, SS-based models exhibited a notable loss of performance even when the parameters of the reverse-engineered models corresponded closely to those of the reference models: this is due to the high number of power terms in the SS method, and the manner in which they are combined. In cross-method reverse-engineering experiments the different characteristics, biases and idiosyncrasies of the methods were revealed. Based on limited training data, with only one experimental condition, all methods produced dynamic models that were able to reproduce the training data accurately. However, an accurate reproduction of regulatory network features was only possible with training data originating from multiple experiments under varying conditions. Conclusions: The studied GRN modeling methods produced dynamic GRN models exhibiting marked differences in their ability to replicate the reference models' structure and behavior. Our results suggest that care should be taken when a method is chosen for a
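
    For concreteness, a small sketch of the S-system formalism named above, with assumed (not reference-model) parameter values: each gene's rate of change is a difference of two power-law terms.

    import numpy as np
    from scipy.integrate import solve_ivp

    def s_system(alpha, beta, G, H):
        # dX_i/dt = alpha_i * prod_j X_j**G[i, j] - beta_i * prod_j X_j**H[i, j]
        def rhs(t, x):
            x = np.clip(x, 1e-9, None)                 # keep the power laws defined
            return alpha * np.prod(x ** G, axis=1) - beta * np.prod(x ** H, axis=1)
        return rhs

    # Toy two-gene network with assumed kinetic orders.
    alpha = np.array([0.8, 0.6]); beta = np.array([0.5, 0.5])
    G = np.array([[0.0, -0.4], [0.6, 0.0]])            # kinetic orders, production
    H = np.array([[0.5, 0.0], [0.0, 0.5]])             # kinetic orders, degradation
    trajectory = solve_ivp(s_system(alpha, beta, G, H), (0.0, 50.0), [1.0, 1.0], dense_output=True)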

  20. A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations

    Science.gov (United States)

    Parrott, M. H.; Hinze, W. J.; Braile, L. W.

    1985-01-01

    Flat-Earth and spherical-Earth geopotential modeling of crustal anomaly sources at satellite elevations are compared by computing gravity and scalar magnetic anomalies perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Results indicate that the error caused by the flat-Earth approximation is less than 10% under most geometric conditions. Generally, errors increase with larger and wider anomaly sources at higher altitudes. For most crustal source modeling applications at conventional satellite altitudes, flat-Earth modeling can be justified and is numerically efficient.

  1. Waterflooding performance using Dykstra-Parsons as compared with numerical model performance

    Energy Technology Data Exchange (ETDEWEB)

    Mobarak, S.

    1975-01-01

    Multilayered models have been used by a number of investigators to represent heterogeneous reservoirs. The purpose of this note is to present waterflood performance for multilayered systems using the standard Dykstra-Parsons method, as compared with that predicted by the modified form using the equations given, and with results obtained using a numerical model. The predicted oil recovery, using Johnson charts or the standard Dykstra-Parsons recovery modulus chart, is always conservative, if not overly pessimistic. The modified Dykstra-Parsons method, as explained in the text, shows good agreement with the numerical model.
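
    A hedged sketch of the first step of the standard Dykstra-Parsons procedure referred to above: the permeability variation coefficient V = (k50 - k84.1)/k50, estimated here simply from percentiles of a hypothetical set of layer permeabilities rather than from the usual log-probability plot fit.

    import numpy as np

    def dykstra_parsons_coefficient(perm_md):
        # perm_md: layer permeabilities in millidarcies.
        k = np.asarray(perm_md, dtype=float)
        k50 = np.percentile(k, 50.0)               # median permeability
        k841 = np.percentile(k, 100.0 - 84.1)      # permeability exceeded by 84.1% of samples
        return (k50 - k841) / k50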

  2. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.

  3. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2014-07-01

    The demand for automobiles increased drastically in the last two and a half decades in India. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. The United Nations Industrial Development Organization (UNIDO) has taken a proactive approach, in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, to assist Indian SMEs in various clusters since 1999 to make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the fields of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework presents greater insight into the implementation process at a more micro level compared with the UNIDO-ACMA Model.

  4. Training models in laparoscopy: a systematic review comparing their effectiveness in learning surgical skills.

    Science.gov (United States)

    Willaert, W; Van De Putte, D; Van Renterghem, K; Van Nieuwenhove, Y; Ceelen, W; Pattyn, P

    2013-01-01

    Surgery has traditionally been learned on patients in the operating room, which is time-consuming, can have an impact on the patient outcomes, and is of variable effectiveness. As a result, surgical training models have been developed, which are compared in this systematic review. We searched Pubmed, CENTRAL, and Science Citation index expanded for randomised clinical trials and randomised cross-over studies comparing laparoscopic training models. Studies comparing one model with no training were also included. The reference list of identified trials was searched for further relevant studies. Fifty-eight trials evaluating several training forms and involving 1591 participants were included (four studies with a low risk of bias). Training (virtual reality (VR) or video trainer (VT)) versus no training improves surgical skills in the majority of trials. Both VR and VT are as effective in most studies. VR training is superior to traditional laparoscopic training in the operating room. Outcome results for VR robotic simulations versus robot training show no clear difference in effectiveness for either model. Only one trial included human cadavers and observed better results versus VR for one out of four scores. Contrasting results are observed when robotic technology is compared with manual laparoscopy. VR training and VT training are valid teaching models. Practicing on these models similarly improves surgical skills. A combination of both methods is recommended in a surgical curriculum. VR training is superior to unstructured traditional training in the operating room. The reciprocal effectiveness of the other models to learn surgical skills has not yet been established.

  5. New opioid prescribing guidelines favor non-opioid alternatives.

    Science.gov (United States)

    2016-05-01

    Determined to make a dent in the growing problem of opioid addiction, the CDC has unveiled new guidelines for opioid prescribing for chronic pain. The recommendations urge providers to be more judicious in their prescribing, opting for opioids only after carefully weighing substantial risks and benefits. Public health authorities note the rampant use and misuse of opioids have "blurred the lines" between prescription opioids and illicit opioids. The new guidelines are designed to help frontline providers balance the need to manage their patients' chronic pain with the duty to curb dangerous prescribing practices. The recommendations are built around three principles: favor non-opioid alternatives for most cases of chronic pain, use the lowest effective dose when prescribing opioids, and exercise caution/monitor patients who are treated with opioids.

  6. Habitat heterogeneity favors asexual reproduction in natural populations of grassthrips.

    Science.gov (United States)

    Lavanchy, Guillaume; Strehler, Marie; Llanos Roman, Maria Noemi; Lessard-Therrien, Malie; Humbert, Jean-Yves; Dumas, Zoé; Jalvingh, Kirsten; Ghali, Karim; Fontcuberta García-Cuenca, Amaranta; Zijlstra, Bart; Arlettaz, Raphaël; Schwander, Tanja

    2016-08-01

    Explaining the overwhelming success of sex among eukaryotes is difficult given the obvious costs of sex relative to asexuality. Different studies have shown that sex can provide benefits in spatially heterogeneous environments under specific conditions, but whether spatial heterogeneity commonly contributes to the maintenance of sex in natural populations remains unknown. We experimentally manipulated habitat heterogeneity for sexual and asexual thrips lineages in natural populations and under seminatural mesocosm conditions by varying the number of hostplants available to these herbivorous insects. Asexual lineages rapidly replaced the sexual ones, independently of the level of habitat heterogeneity in mesocosms. In natural populations, the success of sexual thrips decreased with increasing habitat heterogeneity, with sexual thrips apparently only persisting in certain types of hostplant communities. Our results illustrate how genetic diversity-based mechanisms can favor asexuality instead of sex when sexual lineages co-occur with genetically variable asexual lineages.

  7. Discriminative accuracy of genomic profiling comparing multiplicative and additive risk models.

    Science.gov (United States)

    Moonesinghe, Ramal; Khoury, Muin J; Liu, Tiebin; Janssens, A Cecile J W

    2011-02-01

    Genetic prediction of common diseases is based on testing multiple genetic variants with weak effect sizes. Standard logistic regression and Cox proportional hazards models that assess the combined effect of multiple variants on disease risk assume multiplicative joint effects of the variants, but this assumption may not be correct. The risk model chosen may affect the predictive accuracy of genomic profiling. We investigated the discriminative accuracy of genomic profiling by comparing additive and multiplicative risk models. We examined genomic profiles of 40 variants with genotype frequencies varying from 0.1 to 0.4 and relative risks varying from 1.1 to 1.5 in separate scenarios assuming a disease risk of 10%. The discriminative accuracy was evaluated by the area under the receiver operating characteristic curve. Predicted risks were more extreme at the lower and higher risks for the multiplicative risk model compared with the additive model. The discriminative accuracy was consistently higher for multiplicative risk models than for additive risk models. The differences in discriminative accuracy were negligible when the effect sizes were small, and larger when risk genotypes were common or when they had stronger effects. Unraveling the exact mode of biological interaction is important when effect sizes of genetic variants are at least moderate, to prevent the incorrect estimation of risks.
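
    An illustrative simulation in the spirit of the scenarios above (all numbers assumed, and the baseline is not recalibrated to hold mean risk exactly at 10%): per-variant relative risks are combined multiplicatively or additively and discrimination is summarized by the AUC.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n, m, baseline = 100_000, 40, 0.10
    freq = rng.uniform(0.1, 0.4, size=m)                # risk-genotype frequencies
    rel_risk = rng.uniform(1.1, 1.5, size=m)            # per-variant relative risks
    carrier = rng.random((n, m)) < freq                 # who carries each risk genotype

    risk_mult = baseline * np.prod(np.where(carrier, rel_risk, 1.0), axis=1)
    risk_add = baseline * (1.0 + np.sum(np.where(carrier, rel_risk - 1.0, 0.0), axis=1))
    for name, risk in (("multiplicative", risk_mult), ("additive", risk_add)):
        p = np.clip(risk, 0.0, 1.0)
        disease = rng.random(n) < p                     # outcomes drawn from the model's own risks
        print(name, "AUC =", round(roc_auc_score(disease, p), 3))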

  8. Comparing mixing-length models of the diabatic wind profile over homogeneous terrain

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Gryning, Sven-Erik; Hasager, Charlotte Bay

    2010-01-01

    Models of the diabatic wind profile over homogeneous terrain for the entire atmospheric boundary layer are developed using mixing-length theory and are compared to wind speed observations up to 300 m at the National Test Station for Wind Turbines at Høvsøre, Denmark. The measurements are performed...

  9. Comparative Analysis of Photogrammetric Methods for 3D Models for Museums

    DEFF Research Database (Denmark)

    Hafstað Ármannsdottir, Unnur Erla; Antón Castro, Francesc/François; Mioc, Darka

    2014-01-01

    to 3D models using Sketchup and Designing Reality. Finally, panoramic photography is discussed as a 2D alternative to 3D. Sketchup is a free-ware 3D drawing program and Designing Reality is a commercial program, which uses Structure from motion. For each program/method, the same comparative analysis...

  10. Earnings Profiles of Department Heads: Comparing Cross-Sectional and Panel Models.

    Science.gov (United States)

    Ragan, James F., Jr.; Rehman, Qazi Najeeb

    1996-01-01

    A cross-sectional study of 842 faculty who served as department heads between 1965-92 was compared with 170 in a panel study for whom earnings were estimated using a personal effects model. The average chair received a 12% wage premium for administrative service. Skill depreciation was most severe and wage growth most adversely affected in the…

  11. Comparing risk attitudes of organic and non-organic farmers with a Bayesian random coefficient model

    NARCIS (Netherlands)

    Gardebroek, C.

    2006-01-01

    Organic farming is usually considered to be more risky than conventional farming, but the risk aversion of organic farmers compared with that of conventional farmers has not been studied. Using a non-structural approach to risk estimation, a Bayesian random coefficient model is used to obtain indivi

  12. Multiple data sets and modelling choices in a comparative LCA of disposable beverage cups

    NARCIS (Netherlands)

    Harst, van der E.J.M.; Potting, J.; Kroeze, C.

    2014-01-01

    This study used multiple data sets and modelling choices in an environmental life cycle assessment (LCA) to compare typical disposable beverage cups made from polystyrene (PS), polylactic acid (PLA; bioplastic) and paper lined with bioplastic (biopaper). Incineration and recycling were considered as

  13. Feeding Behavior of Aplysia: A Model System for Comparing Cellular Mechanisms of Classical and Operant Conditioning

    Science.gov (United States)

    Baxter, Douglas A.; Byrne, John H.

    2006-01-01

    Feeding behavior of Aplysia provides an excellent model system for analyzing and comparing mechanisms underlying appetitive classical conditioning and reward operant conditioning. Behavioral protocols have been developed for both forms of associative learning, both of which increase the occurrence of biting following training. Because the neural…

  14. A Comparative Analysis of Spatial Visualization Ability and Drafting Models for Industrial and Technology Education Students

    Science.gov (United States)

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2014-01-01

    The main purpose of this study was to determine significant positive effects among the use of three different types of drafting models, and to identify whether any differences exist towards promotion of spatial visualization ability for students in Industrial Technology and Technology Education courses. In particular, the study compared the use of…

  15. Comparing Three Methods to Create Multilingual Phone Models for Vocabulary Independent Speech Recognition Tasks

    Science.gov (United States)

    2000-08-01

    Defense Technical Information Center Compilation Part Notice ADP010392. TITLE: Comparing Three Methods to Create Multilingual Phone Models for Vocabulary Independent Speech Recognition Tasks. Joachim Köhler, German National Research Center for ...

  16. Comparative Genre Analysis on RA Introductions of Computer Science Using CARS Model

    Institute of Scientific and Technical Information of China (English)

    杨文静

    2013-01-01

    Using Swales' CARS model, the present study combines quantitative and qualitative analysis and makes a comparative analysis of the moves and steps of RA introductions between Chinese English journals and international English journals, which will benefit researchers in RA writing and publishing in core English journals.

  17. Using ROC curves to compare neural networks and logistic regression for modeling individual noncatastrophic tree mortality

    Science.gov (United States)

    Susan L. King

    2003-01-01

    The performance of two classifiers, logistic regression and neural networks, are compared for modeling noncatastrophic individual tree mortality for 21 species of trees in West Virginia. The output of the classifier is usually a continuous number between 0 and 1. A threshold is selected between 0 and 1 and all of the trees below the threshold are classified as...
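
    A generic sketch of the comparison above, not the original analysis: logistic regression and a small neural network are fit to the same (hypothetical) tree-level covariates X and mortality indicator y, and ROC curves and AUCs are produced from the continuous 0-1 outputs.

    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_curve, roc_auc_score

    def compare_classifiers(X, y, seed=0):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed, stratify=y)
        models = {"logistic": LogisticRegression(max_iter=1000),
                  "neural net": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)}
        results = {}
        for name, model in models.items():
            prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]   # continuous output in [0, 1]
            fpr, tpr, thresholds = roc_curve(y_te, prob)             # thresholds sweep that output
            results[name] = {"auc": roc_auc_score(y_te, prob), "roc": (fpr, tpr, thresholds)}
        return results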

  18. The Development of Community-Based Health Information Exchanges: A Comparative Assessment of Organizational Models

    Science.gov (United States)

    Champagne, Tiffany

    2013-01-01

    The purpose of this dissertation research was to critically examine the development of community-based health information exchanges (HIEs) and to comparatively analyze the various models of exchanges in operation today nationally. Specifically this research sought to better understand several aspects of HIE: policy influences, organizational…

  19. Comparing Infants' Preference for Correlated Audiovisual Speech with Signal-Level Computational Models

    Science.gov (United States)

    Hollich, George; Prince, Christopher G.

    2009-01-01

    How much of infant behaviour can be accounted for by signal-level analyses of stimuli? The current paper directly compares the moment-by-moment behaviour of 8-month-old infants in an audiovisual preferential looking task with that of several computational models that use the same video stimuli as presented to the infants. One type of model…

  20. Comparative genome-scale metabolic modeling of actinomycetes : The topology of essential core metabolism

    NARCIS (Netherlands)

    Alam, Mohammad Tauqeer; Medema, Marnix H.; Takano, Eriko; Breitling, Rainer; Gojobori, Takashi

    2011-01-01

    Actinomycetes are highly important bacteria. On one hand, some of them cause severe human and plant diseases, on the other hand, many species are known for their ability to produce antibiotics. Here we report the results of a comparative analysis of genome-scale metabolic models of 37 species of act

  1. Comparative genome-scale metabolic modeling of actinomycetes: the topology of essential core metabolism.

    NARCIS (Netherlands)

    Alam, M.T.; Medema, M.H.; Takano, E.; Breitling, R.

    2011-01-01

    Actinomycetes are highly important bacteria. On one hand, some of them cause severe human and plant diseases, on the other hand, many species are known for their ability to produce antibiotics. Here we report the results of a comparative analysis of genome-scale metabolic models of 37 species of act

  4. Comparative analysis for traffic flow forecasting models with real-life data in Beijing

    Directory of Open Access Journals (Sweden)

    Yaping Rong

    2015-12-01

    Rational traffic flow forecasting is essential to the development of advanced intelligent transportation systems. Most existing research focuses on methodologies to improve prediction accuracy. However, applications of different forecast models have not been adequately studied yet. This research compares the performance of three representative prediction models with real-life data in Beijing: the autoregressive integrated moving average, a neural network, and nonparametric regression. The results suggest that nonparametric regression significantly outperforms the other models. With the Wilcoxon signed-rank test, the root mean square errors and the error distribution reveal that the nonparametric regression model achieves superior accuracy. In addition, the nonparametric regression model exhibits the best spatially transferred application effect.
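
    A sketch of the kind of accuracy comparison described above (the arrays are hypothetical observed flows and two methods' forecasts): RMSE per method plus a Wilcoxon signed-rank test on the paired absolute errors.

    import numpy as np
    from scipy.stats import wilcoxon

    def rmse(obs, pred):
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        return float(np.sqrt(np.mean((obs - pred) ** 2)))

    def compare_forecasts(observed, pred_a, pred_b):
        err_a = np.abs(np.asarray(observed, float) - np.asarray(pred_a, float))
        err_b = np.abs(np.asarray(observed, float) - np.asarray(pred_b, float))
        stat, p_value = wilcoxon(err_a, err_b)          # paired test on absolute errors
        return {"rmse_a": rmse(observed, pred_a), "rmse_b": rmse(observed, pred_b),
                "wilcoxon_p": float(p_value)}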

  5. Comparative Analysis of Empirical Path Loss Model for Cellular Transmission in Rivers State

    Directory of Open Access Journals (Sweden)

    B.O.H Akinwole, Biebuma J.J

    2013-08-01

    This paper presents a comparative analysis of three empirical path loss models with measured data for urban, suburban, and rural areas in Rivers State. The three models investigated were the COST 231 Hata, SUI and ECC-33 models. Downlink data were collected at an operating frequency of 2100 MHz using a drive-test procedure, consisting of test mobile phones used to determine the received signal power (RSCP) at specified receiver distances from Globacom Node Bs located at several sites in the State. This test was carried out to investigate the effectiveness of the commonly used existing models for cellular transmission. The results were analysed based on Mean Square Error (MSE) and Standard Deviation (SD) and were simulated in MATLAB (7.5.0). The results show that the COST 231 Hata model gives better predictions and is therefore recommended for path loss predictions in Rivers State.
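
    For reference, a hedged sketch of the COST 231 Hata median path loss used in comparisons like the one above; the model is nominally specified for 1500-2000 MHz, so applying it at 2100 MHz is an extrapolation, and measured RSCP-derived losses would be compared against such predictions via MSE.

    import math

    def cost231_hata_path_loss_db(f_mhz, d_km, h_base_m, h_mobile_m, large_city=False):
        # Mobile-antenna correction term for small/medium cities.
        a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(f_mhz) - 0.8)
        c_m = 3.0 if large_city else 0.0
        return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_base_m) - a_hm
                + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km) + c_m)

    # Example: 2100 MHz, 1 km path, 30 m base-station antenna, 1.5 m mobile antenna.
    print(round(cost231_hata_path_loss_db(2100.0, 1.0, 30.0, 1.5), 1), "dB")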

  6. Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and ROMS

    Science.gov (United States)

    Haas, K.A.; Warner, J.C.

    2009-01-01

    Predictions of nearshore and surf zone processes are important for determining coastal circulation, impacts of storms, navigation, and recreational safety. Numerical modeling of these systems facilitates advancements in our understanding of coastal changes and can provide predictive capabilities for resource managers. Many nearshore coastal circulation models exist; however, they are mostly limited to, or typically only applied as, depth-integrated models. SHORECIRC is an established surf zone circulation model that is quasi-3D, allowing for the effect of the vertical variability of the currents while maintaining the computational advantage of a 2DH model. Here we compare SHORECIRC to ROMS, a fully 3D ocean circulation model which now includes a three-dimensional formulation for wave-driven flows. We compare the models with three different test applications: (i) spectral waves approaching a plane beach with an oblique angle of incidence; (ii) monochromatic waves driving longshore currents in a laboratory basin; and (iii) monochromatic waves on a barred beach with rip channels in a laboratory basin. Results identify that the models are very similar for the depth-integrated flows and qualitatively consistent for the vertically varying components. The differences are primarily the result of the vertically varying radiation stress utilized by ROMS and the use of long wave theory for the radiation stress formulation in the vertically varying momentum balance by SHORECIRC. The quasi-3D model is faster; however, the applicability of the fully 3D model allows it to extend over a broader range of processes and temporal and spatial scales. © 2008 Elsevier Ltd.

  7. Comparing Bayesian stable isotope mixing models: Which tools are best for sediments?

    Science.gov (United States)

    Morris, David; Macko, Stephen

    2016-04-01

    Bayesian stable isotope mixing models have received much attention as a means of coping with multiple sources and uncertainty in isotope ecology (e.g. Phillips et al., 2014), enabling the probabilistic determination of the contributions made by each food source to the total diet of the organism in question. We have applied these techniques to marine sediments for the first time. The sediments of the Chukchi Sea and Beaufort Sea offer an opportunity to utilize these models for organic geochemistry, as there are three likely sources of organic carbon; pelagic phytoplankton, sea ice algae and terrestrial material from rivers and coastal erosion, as well as considerable variation in the marine δ13C values. Bayesian mixing models using bulk δ13C and δ15N data from Shelf Basin Interaction samples allow for the probabilistic determination of the contributions made by each of the sources to the organic carbon budget, and can be compared with existing source contribution estimates based upon biomarker models (e.g. Belicka & Harvey, 2009, Faux, Belicka, & Rodger Harvey, 2011). The δ13C of this preserved material varied from -22.1 to -16.7‰ (mean -19.4±1.3‰), while δ15N varied from 4.1 to 7.6‰ (mean 5.7±1.1‰). Using the SIAR model, we found that water column productivity was the source of between 50 and 70% of the organic carbon buried in this portion of the western Arctic with the remainder mainly supplied by sea ice algal productivity (25-35%) and terrestrial inputs (15%). With many mixing models now available, this study will compare SIAR with MixSIAR and the new FRUITS model. Monte Carlo modeling of the mixing polygon will be used to validate the models, and hierarchical models will be utilised to glean more information from the data set.

  8. Comparing factor analytic models of the DSM-IV personality disorders.

    Science.gov (United States)

    Huprich, Steven K; Schmitt, Thomas A; Richard, David C S; Chelminski, Iwona; Zimmerman, Mark A

    2010-01-01

    There is little agreement about the latent factor structure of the Diagnostic and Statistical Manual of Mental Disorders (DSM) personality disorders (PDs). Factor analytic studies over the past 2 decades have yielded different results, in part reflecting differences in factor analytic technique, the measure used to assess the PDs, and the changing DSM criteria. In this study, we explore the latent factor structure of the DSM (4th ed.; IV) PDs in a sample of 1200 psychiatric outpatients evaluated with the Structured Interview for DSM-IV PDs (B. Pfohl, N. Blum, & M. Zimmerman, 1997). We first evaluated 2 a priori models of the PDs with confirmatory factor analysis (CFA), reflecting their inherent organization in the DSM-IV: a 3-factor model and a 10-factor model. Fit statistics did not suggest that these models yielded an adequate fit. We then evaluated the latent structure with exploratory factor analysis (EFA). Multiple solutions produced more statistically and theoretically reasonable results, as well as providing clinically useful findings. On the basis of fit statistics and theory, 3 models were evaluated further--the 4-, 5-, and 10-factor models. The 10-factor model, which did not resemble the 10-factor model of the CFA, was determined to be the strongest of all 3 models. Future research should use contemporary methods of evaluating factor analytic results in order to more thoroughly compare various factor solutions.

  9. Comparing population recovery after insecticide exposure for four aquatic invertebrate species using models of different complexity.

    Science.gov (United States)

    Baveco, J M Hans; Norman, Steve; Roessink, Ivo; Galic, Nika; Van den Brink, Paul J

    2014-07-01

    Population models, in particular individual-based models (IBMs), are becoming increasingly important in chemical risk assessment. They can be used to assess recovery of spatially structured populations after chemical exposure that varies in time and space. The authors used an IBM coupled to a toxicokinetic-toxicodynamic model, the threshold damage model (TDM), to assess recovery times for 4 aquatic organisms, after insecticide application, in a nonseasonal environment and in 3 spatial settings (pond, stream, and ditch). The species had different life histories (e.g., voltinism, reproductive capacity, mobility). Exposure was derived from a pesticide fate model, following standard European Union scenarios. The results of the IBM-TDM were compared with results from simpler models: one in which exposure was linked to effects by means of concentration-effect relationships (IBM-CE) and one in which the IBM was replaced by a nonspatial, logistic growth model (logistic). For the first, exposure was based on peak concentrations only; for the second, exposure was spatially averaged as well. By using comparisons between models of different complexity and species with different life histories, the authors obtained an understanding of the role spatial processes play in recovery and the conditions under which the full time-varying exposure needs to be considered. The logistic model, which is amenable to an analytic approach, provided additional insights into the sensitivity of recovery times to density dependence and spatial dimensions. © 2014 SETAC.

  10. Comparative study of hybrid RANS-LES models for separated flows

    Science.gov (United States)

    Kumar, G.; Lakshmanan, S. K.; Gopalan, H.; De, A.

    2016-06-01

    Hybrid RANS-LES models are proven to be capable of predicting massively separated flows with reasonable computational cost. In this paper, the Spalart-Allmaras (S-A) based detached eddy simulation (DES) model and three SST-based hybrid models with different RANS-to-LES switching criteria are investigated. The flow over a periodic hill at Re = 10,595 is chosen as the benchmark for comparing the performance of the different models due to the complex flow physics and reasonable computational cost. The model performances are evaluated based on their prediction of velocity and stress profiles and of the separation and reattachment points. The simulated results are validated against experimental and numerical results available in the literature. The S-A DES model accurately predicted the separation bubble at the top of the hill, as reported earlier in experiments and other numerical results. This model also correctly predicted velocity and stress profiles in the recirculation region. However, the performance of this model was poor in the post-reattachment region. On the other hand, the k-ω SST based hybrid models performed poorly in the recirculation region, but fairly predicted stress profiles in the post-reattachment region.

  11. Comparing mixing-length models of the diabatic wind profile over homogeneous terrain

    Science.gov (United States)

    Peña, Alfredo; Gryning, Sven-Erik; Hasager, Charlotte Bay

    2010-05-01

    Models of the diabatic wind profile over homogeneous terrain for the entire atmospheric boundary layer are developed using mixing-length theory and are compared to wind speed observations up to 300 m at the National Test Station for Wind Turbines at Høvsøre, Denmark. The measurements are performed within a wide range of atmospheric stability conditions, which allows a comparison of the models with the average wind profile computed in seven stability classes, showing a better agreement than compared to the traditional surface-layer wind profile. The wind profile is measured by combining cup anemometer and lidar observations, showing good agreement at the overlapping heights. The height of the boundary layer, a parameter required for the wind profile models, is estimated under neutral and stable conditions using surface-layer turbulence measurements, and under unstable conditions based on the aerosol backscatter profile from ceilometer observations.
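
    A minimal surface-layer sketch of the diabatic wind profile being compared above, using assumed Businger-Dyer stability functions; the boundary-layer-height extension of the Gryning et al. formulation is omitted here.

    import numpy as np

    KAPPA = 0.4   # von Karman constant

    def psi_m(zeta):
        # Stability correction; zeta = z / L with L the Obukhov length.
        zeta = np.asarray(zeta, dtype=float)
        stable = -4.7 * zeta                                    # stable / neutral branch
        x = (1.0 - 16.0 * np.minimum(zeta, 0.0)) ** 0.25        # unstable branch
        unstable = (2.0 * np.log((1.0 + x) / 2.0) + np.log((1.0 + x ** 2) / 2.0)
                    - 2.0 * np.arctan(x) + np.pi / 2.0)
        return np.where(zeta < 0.0, unstable, stable)

    def wind_profile(z, u_star, z0, L):
        # Mean wind speed at height(s) z for friction velocity u_star and roughness length z0.
        z = np.asarray(z, dtype=float)
        return (u_star / KAPPA) * (np.log(z / z0) - psi_m(z / L))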

  12. A comparative analysis of radiobiological models for cell surviving fractions at high doses.

    Science.gov (United States)

    Andisheh, B; Edgren, M; Belkić, Dž; Mavroidis, P; Brahme, A; Lind, B K

    2013-04-01

    For many years the linear-quadratic (LQ) model has been widely used to describe the effects of total dose and dose per fraction at low-to-intermediate doses in conventional fractionated radiotherapy. Recent advances in stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) have increased the interest in finding a reliable cell survival model that is accurate at high doses as well. Different models have been proposed for improving descriptions of high-dose survival responses, such as the Universal Survival Curve (USC), the Kavanagh-Newman (KN) model and several generalizations of the LQ model, e.g. the Linear-Quadratic-Linear (LQL) model and the Padé Linear Quadratic (PLQ) model. The purpose of the present study is to compare a number of models in order to find the best option(s) which could successfully be used as a fractionation correction method in SRT. In this work, six independent experimental data sets were used: CHOAA8 (Chinese hamster fibroblast), H460 (non-small cell lung cancer, NSCLC), NCI-H841 (small cell lung cancer, SCLC), CP3 and DU145 (human prostate carcinoma cell lines) and U1690 (SCLC). By detailed comparisons with these measurements, the performance of nine different radiobiological models was examined for the entire dose range, including high doses beyond the shoulder of the survival curves. Using the computed and measured cell surviving fractions, comparison of the goodness-of-fit for all the models was performed by means of the reduced χ²-test with a 95% confidence interval. The obtained results indicate that models with dose-independent final slopes and extrapolation numbers generally represent better choices for SRT. This is especially important at high doses where the final slope and extrapolation numbers are presently found to play a major role. The PLQ, USC and LQL models have the least number of shortcomings at all doses. The extrapolation numbers and final slopes of these models do not depend on dose. Their asymptotes
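
    As a small illustration (not the authors' fitting code), the basic LQ model S(D) = exp(-αD - βD²) can be fitted to measured surviving fractions and judged with a reduced χ², the statistic used in the comparison above; the dose, survival and error arrays are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    def lq_survival(dose, alpha, beta):
        return np.exp(-alpha * dose - beta * dose ** 2)

    def fit_lq(dose, surviving_fraction, sf_error):
        popt, _ = curve_fit(lq_survival, dose, surviving_fraction,
                            p0=(0.2, 0.02), sigma=sf_error, absolute_sigma=True)
        residuals = (surviving_fraction - lq_survival(dose, *popt)) / sf_error
        reduced_chi2 = float(np.sum(residuals ** 2) / (len(dose) - len(popt)))
        return popt, reduced_chi2    # fitted (alpha, beta) and goodness-of-fit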

  13. The experience of freedom in decisions - Questioning philosophical beliefs in favor of psychological determinants.

    Science.gov (United States)

    Lau, Stephan; Hiemisch, Anette; Baumeister, Roy F

    2015-05-01

    Six experiments tested two competing models of subjective freedom during decision-making. The process model is mainly based on philosophical conceptions of free will and assumes that features of the process of choosing affect subjective feelings of freedom. In contrast, the outcome model predicts that subjective freedom is due to positive outcomes that can be expected or are achieved by a decision. Results heavily favored the outcome model over the process model. For example, participants felt freer when choosing between two equally good than two equally bad options. Process features including number of options, complexity of decision, uncertainty, having the option to defer the decision, conflict among reasons, and investing high effort in choosing generally had no or even negative effects on subjective freedom. In contrast, participants reported high freedom with good outcomes and low freedom with bad outcomes, and ease of deciding increased subjective freedom, consistent with the outcome model. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Comparing rainfall variability, model complexity and hydrological response at the intra-event scale

    Science.gov (United States)

    Cristiano, Elena; ten Veldhuis, Marie-claire; Ochoa-Rodriguez, Susana; van de Giesen, Nick

    2017-04-01

    represents the surface with a dense 2D mesh, based on a high resolution Digital Elevation Map. Nine storm events measured by a dual polarimetric X-Band weather radar, located in Cabauw (CAESAR weather station, NL) were used, with original resolution of 100mx100m in space and 1min in time. Results show that the FD model presents a slightly higher sensitivity to spatial rainfall variability than the SD1 and SD2 model. Model resolution, however, seems to have a small impact on the sensitivity of model outcomes compared to rainfall variability: intensity and intermittency, as well as spatial range and velocity, have a higher influence than model configuration.

  15. Does the Model Matter? Comparing Video Self-Modeling and Video Adult Modeling for Task Acquisition and Maintenance by Adolescents with Autism Spectrum Disorders

    Science.gov (United States)

    Cihak, David F.; Schrader, Linda

    2009-01-01

    The purpose of this study was to compare the effectiveness and efficiency of learning and maintaining vocational chain tasks using video self-modeling and video adult modeling instruction. Four adolescents with autism spectrum disorders were taught vocational and prevocational skills. Although both video modeling conditions were effective for…

  16. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models.

    Science.gov (United States)

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements for the precipitation input change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present in the system measurements, the data assimilation scheme might already have updated the model to include the impact of the particular rain cell by the time the rain data are forced upon the model, which will therefore end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input with that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10-minute forecast, time displacements of 5 and 10 minutes are comparable to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
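
    A toy time-area sketch, not the paper's model, illustrating the comparison above: runoff is the convolution of the rain series with a time-area unit response, and the same series is applied with a time displacement or a multiplicative bias to see the effect on the simulated hydrograph.

    import numpy as np

    def time_area_runoff(rain, unit_response):
        # Convolve rainfall (one value per time step) with a time-area unit response.
        return np.convolve(rain, unit_response)[: len(rain)]

    def runoff_rmse(rain, unit_response, shift_steps=0, bias_factor=1.0):
        rain = np.asarray(rain, dtype=float)
        reference = time_area_runoff(rain, unit_response)
        perturbed_rain = np.roll(rain, shift_steps) * bias_factor    # displaced and/or biased input
        perturbed = time_area_runoff(perturbed_rain, unit_response)  # (wrap-around ignored for brevity)
        return float(np.sqrt(np.mean((reference - perturbed) ** 2)))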

  17. A comparative analysis of three vector-borne diseases across Australia using seasonal and meteorological models

    Science.gov (United States)

    Stratton, Margaret D.; Ehrlich, Hanna Y.; Mor, Siobhan M.; Naumova, Elena N.

    2017-01-01

    Ross River virus (RRV), Barmah Forest virus (BFV), and dengue are three common mosquito-borne diseases in Australia that display notable seasonal patterns. Although all three diseases have been modeled on localized scales, no previous study has used harmonic models to compare seasonality of mosquito-borne diseases on a continent-wide scale. We fit Poisson harmonic regression models to surveillance data on RRV, BFV, and dengue (from 1993, 1995 and 1991, respectively, through 2015) incorporating seasonal, trend, and climate (temperature and rainfall) parameters. The models captured an average of 50-65% variability of the data. Disease incidence for all three diseases generally peaked in January or February, but peak timing was most variable for dengue. The most significant predictor parameters were trend and inter-annual periodicity for BFV, intra-annual periodicity for RRV, and trend for dengue. We found that a Temperature Suitability Index (TSI), designed to reclassify climate data relative to optimal conditions for vector establishment, could be applied to this context. Finally, we extrapolated our models to estimate the impact of a false-positive BFV epidemic in 2013. Creating these models and comparing variations in periodicities may provide insight into historical outbreaks as well as future patterns of mosquito-borne diseases.
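
    A compact sketch of a Poisson harmonic regression of the kind described above, with a hypothetical monthly case-count series y: a linear trend plus annual sine/cosine terms, from which peak timing can be recovered from the phase of the harmonic pair.

    import numpy as np
    import statsmodels.api as sm

    def fit_harmonic(y, period=12):
        t = np.arange(len(y))
        X = sm.add_constant(np.column_stack([t,
                                             np.sin(2 * np.pi * t / period),
                                             np.cos(2 * np.pi * t / period)]))
        model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        b_sin, b_cos = model.params[2], model.params[3]
        peak_time = (period / (2 * np.pi)) * np.arctan2(b_sin, b_cos) % period   # estimated peak timing
        return model, peak_time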

  19. Geochemical modeling of diagenetic reactions in Snorre Field reservoir sandstones: a comparative study of computer codes

    Directory of Open Access Journals (Sweden)

    Marcos Antonio Klunk

    Diagenetic reactions, characterized by the dissolution and precipitation of minerals at low temperatures, control the quality of sedimentary rocks as hydrocarbon reservoirs. Geochemical modeling, a tool used to understand diagenetic processes, is performed through computer codes based on thermodynamic and kinetic parameters. In a comparative study, we reproduced the diagenetic reactions observed in Snorre Field reservoir sandstones, Norwegian North Sea. These reactions had previously been modeled in the literature using the DISSOL-THERMAL code. In this study, we modeled the diagenetic reactions in the reservoirs using Geochemist's Workbench (GWB) and TOUGHREACT software, based on a convective-diffusive-reactive model and on the thermodynamic and kinetic parameters compiled for each reaction. TOUGHREACT and DISSOL-THERMAL modeling showed dissolution of quartz, K-feldspar and plagioclase in a similar temperature range from 25 to 80°C. In contrast, GWB modeling showed dissolution of albite, plagioclase and illite, as well as precipitation of quartz, K-feldspar and kaolinite in the same temperature range. The modeling generated by the different software for temperatures of 100, 120 and 140°C similarly showed the dissolution of quartz, K-feldspar, plagioclase and kaolinite, but differed in the precipitation of albite and illite. At temperatures of 150 and 160°C, GWB and TOUGHREACT produced results different from DISSOL-THERMAL, except for the dissolution of quartz, plagioclase and kaolinite. The comparative study makes it possible to choose the numerical modeling software whose results are closest to the diagenetic reactions observed in the petrographic analysis of the modeled reservoirs.
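
    For background, reactive-transport codes of this kind typically integrate mineral dissolution and precipitation with a transition-state-theory style rate law whose rate constant follows an Arrhenius temperature dependence. The sketch below shows that generic rate expression; the rate constant, activation energy, surface area and saturation ratio are illustrative values, not the parameters compiled for the Snorre Field study.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def tst_rate(k25, Ea, T_celsius, A_surf, Q_over_K):
        """Transition-state-theory style rate (mol/s):
        r = A * k(T) * (1 - Q/K), with k(T) scaled from the 25 degC
        reference value k25 using an Arrhenius relation."""
        T = T_celsius + 273.15
        kT = k25 * np.exp(-Ea / R * (1.0 / T - 1.0 / 298.15))
        return A_surf * kT * (1.0 - Q_over_K)

    # Illustrative quartz-like parameters (not the study's calibrated values)
    for Tc in (25, 80, 120, 160):
        r = tst_rate(k25=1e-13, Ea=87.7e3, T_celsius=Tc, A_surf=100.0, Q_over_K=0.2)
        print(f"T = {Tc:3d} degC   net dissolution rate = {r:.3e} mol/s")
    ```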

  20. Comparing ESC and iPSC—Based Models for Human Genetic Disorders

    Directory of Open Access Journals (Sweden)

    Tomer Halevy

    2014-10-01

    Traditionally, human disorders were studied using animal models or somatic cells taken from patients. Such studies enabled the analysis of the molecular mechanisms of numerous disorders, and led to the discovery of new treatments. Yet, these systems are limited or even irrelevant in modeling multiple genetic diseases. The isolation of human embryonic stem cells (ESCs) from diseased blastocysts, the derivation of induced pluripotent stem cells (iPSCs) from patients’ somatic cells, and the new technologies for genome editing of pluripotent stem cells have opened a new window of opportunities in the field of disease modeling, and enabled studying diseases that couldn’t be modeled in the past. Importantly, despite the high similarity between ESCs and iPSCs, there are several fundamental differences between these cells, which have important implications regarding disease modeling. In this review we compare ESC-based models to iPSC-based models, and highlight the advantages and disadvantages of each system. We further suggest a roadmap for how to choose the optimal strategy to model each specific disorder.

  1. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    Science.gov (United States)

    Pawlus, Witold; Ebbesen, Morten K.; Hansen, Michael R.; Choux, Martin; Hovland, Geir

    2016-06-01

    Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology is not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper verifies both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and accuracy of results are evaluated to assess which modeling strategy is the most suitable given its eventual application.

  2. A comparative study of variant turbulence modeling in the physical behaviors of diesel spray combustion

    Directory of Open Access Journals (Sweden)

    Amini Behnaz

    2011-01-01

    In this research, the performance of the non-linear k-ε turbulence model in resolving the time delay between mean-flow changes and the corresponding adjustment of the turbulent dissipation rate was investigated. For this purpose, the abilities of the Launder-Spalding linear, Suga non-linear, Yakhot RNG and Rietz modified RNG k-ε models were compared in estimating the axial mean velocity profile and the evolution of the turbulent integral length scale during the engine compression stroke. The computed results showed that although all models predict acceptable velocity profiles, for the turbulent integral length scale the non-linear model is in good agreement with the modified RNG model prediction, which corresponds to experimentally reported data, while the other models show different, unrealistic behaviors. Also, after combustion starts and the piston is expanding, the non-linear model predicts the actual behavior of the integral length scale while the linear one does not. It is concluded that the physical behavior of turbulence model characteristics should be ascertained before they are applied to simulate complex flow fields such as internal combustion engines.
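
    For readers unfamiliar with the family of models compared above, all k-ε variants share the eddy-viscosity idea, with the standard linear model computing mu_t = rho * C_mu * k^2 / eps (C_mu of about 0.09); the non-linear and RNG variants modify the stress-strain closure and the ε-equation coefficients. A tiny illustrative sketch of the background relation (not the paper's implementation) follows.

    ```python
    # Standard linear k-epsilon eddy viscosity; the non-linear and RNG variants
    # compared above modify the stress-strain closure and epsilon-equation terms.
    C_MU = 0.09

    def eddy_viscosity(rho, k, eps):
        """mu_t = rho * C_mu * k**2 / eps (SI units)."""
        return rho * C_MU * k**2 / eps

    # Illustrative in-cylinder values: rho in kg/m^3, k in m^2/s^2, eps in m^2/s^3
    print(eddy_viscosity(rho=1.2, k=5.0, eps=200.0))   # ~0.0135 Pa s
    ```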

  3. Comparing flow-through and static ice cave models for Shoshone Ice Cave

    Directory of Open Access Journals (Sweden)

    Kaj E. Williams

    2015-05-01

    In this paper we suggest a new ice cave type: the “flow-through” ice cave. In a flow-through ice cave, external winds blow into the cave and wet cave walls chill the incoming air to the wet-bulb temperature, thereby achieving extra cooling of the cave air. We have investigated an ice cave in Idaho, located in a lava tube that is reported to have airflow through porous wet end-walls and could therefore be a flow-through cave. We have instrumented the site and collected data for one year. In order to determine the actual ice cave type present at Shoshone, we have constructed numerical models for static and flow-through caves (the dynamic type is not relevant here). The models are driven with exterior measurements of air temperature, relative humidity and wind speed. The model output is interior air temperature and relative humidity. We then compare the output of both models to the measured interior air temperatures and relative humidity. While both the flow-through and static cave models are capable of preserving ice year-round (a net zero or positive ice mass balance), the two models show very different cave air temperature and relative humidity output. We find the empirical data support a hybrid of the static and flow-through models: permitting a static ice cave to have incoming air chilled to the wet-bulb temperature fits the data best for the Shoshone Ice Cave.
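
    The extra cooling in the flow-through hypothesis comes from chilling incoming air toward its wet-bulb temperature. One convenient way to estimate that temperature from the exterior measurements that drive such models is Stull's (2011) empirical fit, sketched below; this is a generic approximation (valid roughly for relative humidity above a few percent at normal surface pressure) and not necessarily the formulation used in the authors' cave models.

    ```python
    import numpy as np

    def wet_bulb_stull(T_c, RH):
        """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
        and relative humidity (%), using Stull's (2011) empirical formula."""
        return (T_c * np.arctan(0.151977 * np.sqrt(RH + 8.313659))
                + np.arctan(T_c + RH) - np.arctan(RH - 1.676331)
                + 0.00391838 * RH**1.5 * np.arctan(0.023101 * RH)
                - 4.686035)

    # Warm, fairly dry exterior air is chilled well below its dry-bulb temperature
    print(wet_bulb_stull(T_c=25.0, RH=30.0))   # roughly 14-15 deg C
    ```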

  4. Comparative Analysis of Bulge Deformation between 2D and 3D Finite Element Models

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2014-02-01

    Bulge deformation of the slab is one of the main factors that affect slab quality in continuous casting. This paper describes an investigation into bulge deformation using ABAQUS to model the solidification process. A three-dimensional finite element analysis model of the slab solidification process was first established, because the bulge deformation is closely related to slab temperature distributions. Based on the slab temperature distributions, a three-dimensional thermomechanical coupling model including the slab, the rollers, and the dynamic contact between them was also constructed and applied to a case study. The thermomechanical coupling model produces outputs such as the patterns of bulge deformation. Moreover, the three-dimensional model was compared with a two-dimensional model to discuss the differences between the two models in calculating the bulge deformation. The results show that a platform zone exists on the wide side of the slab and that the bulge deformation is strongly affected by the width-to-thickness ratio. The results also indicate that the difference in bulge deformation between the two modeling approaches is small when the width-to-thickness ratio is larger than six.

  5. Comparing Process-Based Net Primary Productivity Models in a Mediterranean Watershed

    Science.gov (United States)

    Donmez, C.; Berberoglu, S.; Forrest, M.; Cilek, A.; Hickler, T.

    2013-10-01

    The aim of this study was to compare the estimation capability of two different process-based NPP models (CASA and LPJ-GUESS) in a Mediterranean watershed. Remotely sensed data and climate time series (temperature, precipitation and solar radiation) were input to these models for the Goksu River Basin, located in the eastern Mediterranean part of Turkey. The comparison of the models was based on output variables, divided into three groups: (i) spatially interpolated total NPP estimates, (ii) the NPP distribution of land cover classes, and (iii) annual and monthly NPP variations. The different model approaches were evaluated in terms of their capability to reproduce the relationship between annual/monthly NPP and major climatic variables. The effect of vegetation distribution on the accuracy of the models was examined. The uncertainties of the CASA and LPJ-GUESS models were evaluated by incorporating remotely sensed data, percent tree cover and ground measurements. The differences between the model outputs were used to guide improved modelling strategies by means of remotely sensed data and other input parameters.
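
    CASA-type models estimate NPP with a light-use-efficiency formulation, roughly NPP = PAR x fPAR x eps_max x (temperature and water stress scalars), whereas LPJ-GUESS simulates it from process-based photosynthesis and vegetation dynamics. A hedged sketch of the light-use-efficiency side is given below; eps_max and the monthly inputs are illustrative placeholders, not values calibrated for the Goksu basin.

    ```python
    import numpy as np

    def lue_npp(par, fpar, t_scalar, w_scalar, eps_max=0.389):
        """Light-use-efficiency NPP (g C m-2 per time step):
        NPP = PAR * fPAR * eps_max * Ts * Ws, with PAR in MJ m-2 and
        fPAR, Ts, Ws dimensionless scalars in [0, 1]."""
        return par * fpar * eps_max * t_scalar * w_scalar

    # One illustrative year of monthly inputs for a Mediterranean pixel
    par  = np.array([150, 180, 250, 300, 350, 380, 390, 360, 300, 230, 170, 140.0])
    fpar = np.array([0.30, 0.35, 0.45, 0.55, 0.60, 0.55, 0.45, 0.40, 0.40, 0.40, 0.35, 0.30])
    ts   = np.array([0.40, 0.45, 0.55, 0.70, 0.85, 0.95, 1.00, 1.00, 0.90, 0.75, 0.55, 0.45])
    ws   = np.array([1.00, 1.00, 1.00, 0.90, 0.70, 0.50, 0.40, 0.40, 0.60, 0.80, 1.00, 1.00])

    monthly_npp = lue_npp(par, fpar, ts, ws)
    print("annual NPP (g C m-2):", monthly_npp.sum())
    ```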

  6. The handicap process favors exaggerated, rather than reduced, sexual ornaments.

    Science.gov (United States)

    Tazzyman, Samuel J; Iwasa, Yoh; Pomiankowski, Andrew

    2014-09-01

    Why are traits that function as secondary sexual ornaments generally exaggerated in size compared to the naturally selected optimum, and not reduced? Because they deviate from the naturally selected optimum, traits that are reduced in size will handicap their bearer, and could thus provide an honest signal of quality to a potential mate. Thus if secondary sexual ornaments evolve via the handicap process, current theory suggests that reduced ornamentation should be as frequent as exaggerated ornamentation, but this is not the case. To try to explain this discrepancy, we analyze a simple model of the handicap process. Our analysis shows that asymmetries in costs of preference or ornament with regard to exaggeration and reduction cannot fully explain the imbalance. Rather, the bias toward exaggeration can be best explained if either the signaling efficacy or the condition dependence of a trait increases with size. Under these circumstances, evolution always leads to more extreme exaggeration than reduction: although the two should occur just as frequently, exaggerated secondary sexual ornaments are likely to be further removed from the naturally selected optimum than reduced ornaments.

  7. Verhulst and stochastic models for comparing mechanisms of MAb productivity in six CHO cell lines.

    Science.gov (United States)

    Shirsat, Nishikant; Avesh, Mohd; English, Niall J; Glennon, Brian; Al-Rubeai, Mohamed

    2016-08-01

    The present study validates previously published methodologies, stochastic and Verhulst, for modelling the growth and MAb productivity of six CHO cell lines grown in batch cultures. Cytometric and biochemical data were used to model growth and productivity. The stochastic explanatory models were developed to improve our understanding of the underlying mechanisms of growth and productivity, whereas the Verhulst mechanistic models were developed for their predictability. The parameters of the two sets of models were compared for their biological significance. The stochastic models, based on the cytometric data, indicated that the productivity mechanism is cell specific. However, as shown before, the modelling results indicated that G2 + ER indicate high productivity, while G1 + ER indicate low productivity, where G1 and G2 are the cell cycle phases and ER is the endoplasmic reticulum. In all cell lines, growth proved to be inversely proportional to the cumulative G1 time (CG1T), whereas productivity was directly proportional to ER. Verhulst's rule, "the lower the intrinsic growth factor (r), the higher the growth (K)," did not hold for growth across all cell lines but held for cell lines with the same growth mechanism, i.e., r is cell specific. However, the Verhulst productivity rule, that productivity is inversely proportional to the intrinsic productivity factor (rx), held well across all cell lines in spite of differences in their mechanisms for productivity; that is, rx is not cell specific. The productivity profile, as described by Verhulst's logistic model, is very similar to the Michaelis-Menten enzyme kinetic equation, suggesting that productivity is more likely enzymatic in nature. Comparison of the stochastic and Verhulst models indicated that CG1T in the cytometric data has the same significance as r, the intrinsic growth factor in the Verhulst models. The stochastic explanatory and the Verhulst logistic models can explain the
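
    The Verhulst model referred to above is the logistic equation dN/dt = r N (1 - N/K), whose closed-form solution is N(t) = K / (1 + ((K - N0)/N0) exp(-r t)). A minimal sketch applying it to cell density is shown below; the r, K and N0 values are placeholders, not the parameters fitted for the six CHO lines.

    ```python
    import numpy as np

    def verhulst(t, n0, r, K):
        """Closed-form logistic (Verhulst) growth:
        N(t) = K / (1 + ((K - n0)/n0) * exp(-r*t))."""
        return K / (1.0 + (K - n0) / n0 * np.exp(-r * t))

    t = np.linspace(0, 168, 8)                       # hours of batch culture
    cells = verhulst(t, n0=3e5, r=0.045, K=8e6)      # cells/mL, illustrative values
    for ti, ni in zip(t, cells):
        print(f"t = {ti:5.1f} h   N = {ni:9.3e} cells/mL")
    ```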

  8. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined using 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe efficiency was computed between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
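
    As a point of reference, the Granger-Ramanathan variants obtain member weights by least-squares regression of observed flows on the member simulations (variant A is commonly implemented as unconstrained least squares without an intercept). The numpy sketch below illustrates the idea with three synthetic "members" standing in for the 12 calibrated hydrographs; it is a generic illustration, not the study's code.

    ```python
    import numpy as np

    def granger_ramanathan_a(sims, obs):
        """GRA-style weights: unconstrained least squares of obs on the member
        simulations, no intercept. sims has shape (n_steps, n_members)."""
        w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
        return w

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(7)
    obs = 10 + 5 * np.sin(np.linspace(0, 12, 500)) + rng.normal(0, 1, 500)
    # Three synthetic members: biased, noisy and lagged versions of the truth
    sims = np.column_stack([obs * 0.8 + rng.normal(0, 1, 500),
                            obs + rng.normal(0, 2, 500),
                            np.roll(obs, 3) + rng.normal(0, 1, 500)])

    w = granger_ramanathan_a(sims, obs)
    combined = sims @ w
    print("weights:", w)
    print("best single-member NSE:", max(nse(sims[:, j], obs) for j in range(3)))
    print("GRA-averaged NSE:      ", nse(combined, obs))
    ```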

  9. Comparing the effects of supernovae feedback models on the interstellar medium

    Science.gov (United States)

    Byrne, Lindsey; Christensen, Charlotte; Keller, Benjamin W.

    2017-01-01

    Stellar feedback affects the state of the interstellar medium and plays an important role in the formation of galaxies. However, different ways of modeling that feedback lead to different galaxy morphologies even when using the same initial conditions. We investigated the differences between two models of supernovae feedback, blastwave feedback and superbubble feedback, using a smoothed particle hydrodynamics code to simulate the formation of an isolated galaxy. The two feedback models were compared across three different models of the ISM: primordial cooling, metal-line cooling, and metal-line cooling in addition to molecular hydrogen. The simulations run with metal-line cooling indicate that superbubble feedback creates a greater amount of high-density gas than blastwave feedback does while also regulating star formation more efficiently. Galaxies produced with metal-line cooling or H2 physics created cold, dense gas, and the increased cooling efficiency was also linked to more pronounced spiral structure.

  10. Comparative study of neuroprotective effect of tricyclics vs. trazodone on animal model of depressive disorder.

    Science.gov (United States)

    Marinescu, Ileana P; Predescu, Anca; Udriştoiu, T; Marinescu, D

    2012-01-01

    The neurobiological model of depressive disorder may be correlated with the animal model in the rat: hyperactivity of the hypothalamic-pituitary-adrenal (HPA) axis, with increased cortisol levels, is specific to the model of depression in women. The neurobiological model of depression in women presents vulnerabilities of certain cerebral structures (hippocampus, frontal cortex, cerebral amygdala). A decrease of frontal cortex and hippocampus volumes is recognized in depressive disorder in women, depending on duration of disease and antidepressant therapy. Neurobiological vulnerability may be accentuated through cholinergic blockade. The purpose of the study was to highlight the cytoarchitectural changes in the frontal cortex and hippocampus by comparing two antidepressant substances: amitriptyline, with a strong anticholinergic effect, and trazodone, without anticholinergic effect. The superior neuroprotective qualities of trazodone for the frontal cortex, hippocampus and dentate gyrus are revealed. The particular neurobiological vulnerability of depression in women requires a differentiated therapeutic approach, avoiding the use of antidepressants with anticholinergic action.

  11. Comparative analysis of regression and artificial neural network models for wind speed prediction

    Science.gov (United States)

    Bilgili, Mehmet; Sahin, Besir

    2010-11-01

    In this study, wind speed was modeled by linear regression (LR), nonlinear regression (NLR) and artificial neural network (ANN) methods. A three-layer feedforward artificial neural network structure was constructed and a backpropagation algorithm was used for the training of ANNs. To get a successful simulation, firstly, the correlation coefficients between all of the meteorological variables (wind speed, ambient temperature, atmospheric pressure, relative humidity and rainfall) were calculated taking two variables in turn for each calculation. All independent variables were added to the simple regression model. Then, the method of stepwise multiple regression was applied for the selection of the “best” regression equation (model). Thus, the best independent variables were selected for the LR and NLR models and also used in the input layer of the ANN. The results obtained by all methods were compared to each other. Finally, the ANN method was found to provide better performance than the LR and NLR methods.
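
    A minimal scikit-learn sketch of the kind of comparison described above, a linear regression versus a small feedforward network on meteorological predictors, is shown below. The synthetic data, predictor set and network size are illustrative assumptions and do not reproduce the authors' stepwise variable selection or measured data.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(42)
    n = 2000
    temp, press, rh, rain = (rng.normal(size=n) for _ in range(4))
    # Synthetic wind speed with a mildly non-linear dependence on the predictors
    wind = 5 + 1.2 * temp - 0.8 * press + 0.5 * rh * press + rng.normal(0, 0.5, n)

    X = np.column_stack([temp, press, rh, rain])
    X_tr, X_te, y_tr, y_te = train_test_split(X, wind, random_state=0)

    lr = LinearRegression().fit(X_tr, y_tr)
    ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                       random_state=0).fit(X_tr, y_tr)

    print("LR  test R2:", r2_score(y_te, lr.predict(X_te)))
    print("ANN test R2:", r2_score(y_te, ann.predict(X_te)))
    ```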

  12. Comparing photo modeling methodologies and techniques: the instance of the Great Temple of Abu Simbel

    Directory of Open Access Journals (Sweden)

    Sergio Di Tondo

    2013-10-01

    Fifty years after the salvage of the Abu Simbel temples, it has been possible to experiment with contemporary photo-modeling tools starting from the original data of the photogrammetric survey carried out in the 1950s. This prompted a reflection on “image based” methods and modeling techniques, comparing strict 3D digital photogrammetry with the latest Structure From Motion (SFM) systems. The topographic survey data, the original photogrammetric stereo pairs, the point coordinates and their representation in contour lines made it possible to obtain a model of the monument in its configuration before the temples were moved. Since a direct survey was impossible, tourist photographs were used to create SFM models for geometric comparison.

  13. Assessing monthly average solar radiation models: a comparative case study in Turkey.

    Science.gov (United States)

    Sonmete, Mehmet H; Ertekin, Can; Menges, Hakan O; Hacıseferoğullari, Haydar; Evrendilek, Fatih

    2011-04-01

    Solar radiation data are required by solar engineers, architects, agriculturists, and hydrologists for many applications such as solar heating, cooking, drying, and interior illumination of buildings. In order to achieve this, numerous empirical models have been developed all over the world to predict solar radiation. The main objective of this study is to examine and compare 147 solar radiation models available in the literature for the prediction of monthly solar radiation at Ankara (Turkey), based on selected statistical measures such as percentage error, mean percentage error, root mean square error, mean bias error, and correlation coefficient. Our results showed that the Ball et al. model (Agron J 96:391-397, 2004) and the Chen et al. model (Energy Convers Manag 47:2859-2866, 2006) performed best in the estimation of solar radiation on a horizontal surface for Ankara.
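
    The statistical measures used for the ranking are straightforward to compute; a short numpy sketch with placeholder observed and estimated monthly radiation values (not the Ankara data) is shown below.

    ```python
    import numpy as np

    def model_errors(est, obs):
        """Common goodness-of-fit measures for solar radiation models."""
        mpe  = np.mean((est - obs) / obs) * 100       # mean percentage error
        mbe  = np.mean(est - obs)                     # mean bias error
        rmse = np.sqrt(np.mean((est - obs) ** 2))     # root mean square error
        r    = np.corrcoef(est, obs)[0, 1]            # correlation coefficient
        return mpe, mbe, rmse, r

    # Placeholder monthly global radiation (MJ m-2 day-1): observed vs. one model
    obs = np.array([6.5, 9.0, 13.5, 17.8, 21.9, 24.6, 24.1, 21.7, 17.6, 12.0, 7.8, 5.9])
    est = np.array([6.9, 9.3, 13.1, 18.4, 21.2, 25.0, 23.5, 22.1, 17.0, 12.4, 8.2, 6.3])

    mpe, mbe, rmse, r = model_errors(est, obs)
    print(f"MPE={mpe:.2f}%  MBE={mbe:.2f}  RMSE={rmse:.2f}  r={r:.3f}")
    ```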

  14. A comparative study of the proposed models for the components of the national health information system.

    Science.gov (United States)

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health, quality and effectiveness of health care. In other words, using the national health information system one can improve the quality of health data, information and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system, for better planning and management of the factors influencing performance, seems necessary, this study comparatively explores different perspectives on the components of the system. This is a descriptive, comparative study. The study material includes printed and electronic documents containing components of the national health information system in three parts: input, process and output. In this context, information was gathered using library resources and internet searches, and the data were analyzed using comparative tables and qualitative descriptions. The findings showed that there are three different perspectives presenting the components of the national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's 2009 model. In the input section (resources and structure), all three models require components of management and leadership, planning and program design, supply of staff, software and hardware, and facilities and equipment. In the "process" section, all three models point to actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models have had a brief discussion about the

  15. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Directory of Open Access Journals (Sweden)

    Claudimar Pereira da Veiga

    2012-08-01

    The importance of demand forecasting as a management tool is a well-documented issue. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately captures the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of errors in demand forecasting on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, one can affirm that all models tested presented good results (much better than the current forecasting method used), with mean absolute percentage errors (MAPE) around 10%. The total financial impact for the company was 6.05% of annual sales.

  16. Cold Nuclear Matter effects on J/psi production at RHIC: comparing shadowing models

    Energy Technology Data Exchange (ETDEWEB)

    Ferreiro, E.G.; /Santiago de Compostela U.; Fleuret, F.; /Ecole Polytechnique; Lansberg, J.P.; /SLAC; Rakotozafindrabe, A.; /SPhN, DAPNIA, Saclay

    2009-06-19

    We present an extensive study comparing different shadowing models and their influence on J/{psi} production. We have taken into account the possibility of different partonic processes for c{bar c}-pair production. We notice that the effect of shadowing corrections on J/{psi} production clearly depends on the partonic process considered. Our results are compared to the available data on dAu collisions at RHIC energies. We try different break-up cross sections for each of the studied shadowing models.

  17. Distribution of [sup 32]Si in the world ocean: Model compared to observation

    Energy Technology Data Exchange (ETDEWEB)

    Peng, T.H. (Oak Ridge National Lab., TN (United States)); Maier-Reimer, E. (Max-Planck Institut fuer Meteorologie, Hamburg (Germany)); Broecker, W.S. (Columbia Univ., Palisades, NY (United States))

    1993-06-01

    The most difficult measurement of the GEOSECS survey was that of silicon-32. This paper compares the observed distribution from the survey with the distributions predicted from two global ocean models. Existing measurements show little surface-to-bottom or ocean-to-ocean variation, whereas the models predict three- to five-fold greater ratios in the deep Atlantic Ocean than in the deep Pacific and Indian oceans. A flaw in the measurements is one possibility for the discrepancy. 9 refs., 8 figs., 5 tabs.

  18. Assessment and Challenges of Ligand Docking into Comparative Models of G-Protein Coupled Receptors

    DEFF Research Database (Denmark)

    Nguyen, E.D.; Meiler, J.; Norn, C.;

    2013-01-01

    The rapidly increasing number of high-resolution X-ray structures of G-protein coupled receptors (GPCRs) creates a unique opportunity to employ comparative modeling and docking to provide valuable insight into the function and ligand binding determinants of novel receptors, to assist in virtual...... screening and to design and optimize drug candidates. However, low sequence identity between receptors, conformational flexibility, and chemical diversity of ligands present an enormous challenge to molecular modeling approaches. It is our hypothesis that rapid Monte-Carlo sampling of protein backbone...

  19. A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations

    Science.gov (United States)

    Parrott, M. H.; Hinze, W. J.; Braile, L. W.; Vonfrese, R. R. B.

    1985-01-01

    Flat-Earth modeling is a desirable alternative to the complex spherical-Earth modeling process. These methods were compared using 2 1/2 dimensional flat-earth and spherical modeling to compute gravity and scalar magnetic anomalies along profiles perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Comparison was achieved with percent error computations (spherical-flat/spherical) at critical anomaly points. At the peak gravity anomaly value, errors are less than + or - 5% for all prisms. At 1/2 and 1/10 of the peak, errors are generally less than 10% and 40% respectively, increasing to these values with longer and wider prisms at higher altitudes. For magnetics, the errors at critical anomaly points are less than -10% for all prisms, attaining these magnitudes with longer and wider prisms at higher altitudes. In general, in both gravity and magnetic modeling, errors increase greatly for prisms wider than 500 km, although gravity modeling is more sensitive than magnetic modeling to spherical-Earth effects. Preliminary modeling of both satellite gravity and magnetic anomalies using flat-Earth assumptions is justified considering the errors caused by uncertainties in isolating anomalies.

  20. Considerations for comparing radiation-induced chromosome aberration data with predictions from biophysical models

    Science.gov (United States)

    Wu, H.; Furusawa, Y.; George, K.; Kawata, T.; Cucinotta, F.

    Biophysical models addressing the formation of radiation-induced chromosome aberrations are usually based on the assumption that chromosome aberrations are formed by DNA double strand break (DSB) misrejoining, via either the homologous or the non-homologous repair pathway. However, comparing chromosome aberration data with model predictions is not always straightforward. In this paper we discuss some of the aspects that must be considered to make these comparisons meaningful. Firstly, biophysical models are usually applied to DSB rejoining and misrejoining in the G0/G1 phase of the cell cycle, while most chromosome aberration data reported in the literature are analyzed in metaphase. Since cells must progress through the cell cycle checkpoints in order to reach mitosis, model predictions that differ from the metaphase chromosome analysis may actually agree with the aberration data in chromosomes collected in interphase. Secondly, high-LET radiation generally produces more complex aberrations involving exchanges between three or more DSBs. While some models have successfully provided quantitative predictions of high-LET radiation-induced complex aberrations in human lymphocytes, applying such models to other cell types requires special considerations due to the lack of geometric symmetry of the nucleus. Chromosome aberration data for non-spherical human fibroblast cells bombarded from various directions by high-LET charged particles will be presented, and their implication for physical modeling will be discussed.

  1. Comparing Simulation Output Accuracy of Discrete Event and Agent Based Models: A Quantitive Approach

    CERN Document Server

    Majid, Mazlina Abdul; Siebers, Peer-Olaf

    2010-01-01

    In our research we investigate the output accuracy of discrete event simulation models and agent based simulation models when studying human centric complex systems. In this paper we focus on human reactive behaviour as it is possible in both modelling approaches to implement human reactive behaviour in the model by using standard methods. As a case study we have chosen the retail sector, and here in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation through modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step a multi-scenario experimen...

  2. A simplified MHD model of capillary Z-Pinch compared with experiments

    Energy Technology Data Exchange (ETDEWEB)

    Shapolov, A.A.; Kiss, M.; Kukhlevsky, S.V. [Institute of Physics, University of Pecs (Hungary)

    2016-11-15

    The most accurate models of the capillary Z-pinches used for excitation of soft X-ray lasers and photolithography XUV sources are currently based on magnetohydrodynamic (MHD) theory. The output of MHD-based models greatly depends on details of the mathematical description, such as initial and boundary conditions, approximations of plasma parameters, etc. Small experimental groups who develop soft X-ray/XUV sources often use the simplest Z-pinch models for analysis of their experimental results, even though these models are inconsistent with the MHD equations. In the present study, keeping only the essential terms in the MHD equations, we obtained a simplified MHD model of a cylindrically symmetric capillary Z-pinch. The model gives accurate results compared to experiments with argon plasmas, and provides a simple analysis of the temporal evolution of the main plasma parameters. The results clarify the influence of viscosity, heat flux and approximations of plasma conductivity on the dynamics of capillary Z-pinch plasmas. The model can be useful for researchers, especially experimentalists, who develop soft X-ray/XUV sources. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  3. Comparing and Contrasting View-Based and 3D Models of Navigation

    Directory of Open Access Journals (Sweden)

    A Glennerster

    2011-04-01

    What does it mean to store a 3D location and hence to navigate towards it? Participants in our experiment carried out a homing task on the scale of a room (i.e., 3-4 m square) in an immersive virtual reality environment. In interval one, they were shown three very long coloured vertical poles from one viewing location with some head movement permitted. The poles were easily distinguishable and designed to have constant angular width irrespective of viewing distance. Participants were then transported (virtually) to another location in the scene, and in interval two they tried to navigate to the initial viewing point relative to the poles. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on pole configuration and goal location. We compared the ability of two types of models to predict these variations in the distribution of errors: (i) view-based models, based on simple features such as angles between poles from the cyclopean point, ratios of these angles, or various disparity measures, and (ii) Cartesian models based on a probabilistic 3D reconstruction of the scene geometry. For our data, we find that view-based models capture important characteristics of the end-point distributions very well whereas 3D-based models fare less well. In some ways, a Cartesian model is a very particular case of a view-based model: the two are not as different from one another as they first appear.

  4. Annealing to sequences within the primer binding site loop promotes an HIV-1 RNA conformation favoring RNA dimerization and packaging.

    Science.gov (United States)

    Seif, Elias; Niu, Meijuan; Kleiman, Lawrence

    2013-10-01

    The 5' untranslated region (5' UTR) of HIV-1 genomic RNA (gRNA) includes structural elements that regulate reverse transcription, transcription, translation, tRNA(Lys3) annealing to the gRNA, and gRNA dimerization and packaging into viruses. It has been reported that gRNA dimerization and packaging are regulated by changes in the conformation of the 5'-UTR RNA. In this study, we show that annealing of tRNA(Lys3) or a DNA oligomer complementary to sequences within the primer binding site (PBS) loop of the 5' UTR enhances its dimerization in vitro. Structural analysis of the 5'-UTR RNA using selective 2'-hydroxyl acylation analyzed by primer extension (SHAPE) shows that the annealing promotes a conformational change of the 5' UTR that has been previously reported to favor gRNA dimerization and packaging into virus. The model predicted by SHAPE analysis is supported by antisense experiments designed to test which annealed sequences will promote or inhibit gRNA dimerization. Based on reports showing that the gRNA dimerization favors its incorporation into viruses, we tested the ability of a mutant gRNA unable to anneal to tRNA(Lys3) to be incorporated into virions. We found a ∼60% decrease in mutant gRNA packaging compared with wild-type gRNA. Together, these data further support a model for viral assembly in which the initial annealing of tRNA(Lys3) to gRNA is cytoplasmic, which in turn aids in the promotion of gRNA dimerization and its incorporation into virions.

  5. Favorable outcome in open globe injuries with low OTS score.

    Science.gov (United States)

    Cillino, Giovanni; Ferraro, Lucia; Casuccio, Alessandra; Cillino, Salvatore

    2014-09-01

    Open globe eye injuries can have profound social and economic consequences. Here, we describe two cases of war and outdoor activity open globe eye injury where, despite a low OTS score, current microsurgical technology allowed for a favorable outcome. CASE REPORT 1: A 33-year-old Libyan soldier had been treated for an open-globe grenade blast trauma to his left eye, which showed light perception and OTS score 2. He had undergone a lensectomy and PPV with silicone oil tamponade. Surgical treatment included scleral buckling, cornea trephination, temporary Eckardt keratoprosthesis, PPV revision, intraocular lens (IOL) implantation, and corneal grafting. Six months later, his VA had improved to 20/70. CASE REPORT 2: A 35-year-old man presented with a corneal laceration in his left eye from a meat skewer, with marked hypotony and LP. After primary corneal wound closure, B-scan ultrasonography revealed massive vitreous hemorrhage (OTS score 2). The patient underwent open cataract extraction with IOL implantation, 23 gauge PPV, laser photocoagulation of the retinochoroidal laceration, and a gas tamponade. After three weeks, the patient underwent a 2nd 23G PPV due to a fibrinous reaction. Six months later, the patient exhibited 20/25 VA. These cases confirm that even for patients with a low OTS and poor visual prognosis, an up-to-date surgery protocol may achieve visual results adequate for leading an autonomous daily life.

  6. Musical FAVORS: Reintroducing music to adult cochlear implant users.

    Science.gov (United States)

    Plant, Geoff

    2015-09-01

    Music represents a considerable challenge for many adult users of cochlear implants (CIs). Around half of adult CI users report that they do not find music enjoyable, and, in some cases, despite enhanced speech perception skills, this leads to considerable frustration and disappointment for the CI user. This paper presents suggestions to improve the musical experiences of deafened adults with CIs. Interviews with a number of adult CI users revealed that there were a number of factors which could lead to enhanced music experiences. The acronym FAVORS (familiar music, auditory-visual access, open-mindedness, and simple arrangements) summarizes the factors that have been identified, which can help CI users in their early music listening experiences. Each of these factors is discussed in detail, along with suggestions for how they can be used in therapy sessions. The use of a group approach (music focus groups) is also discussed and an overview of the approach and exercises used is presented. The importance of live music experiences is also discussed.

  7. Favorable Outcome in Open Globe Injuries with Low OTS Score

    Institute of Scientific and Technical Information of China (English)

    Cillino Giovanni; Ferraro Lucia; Casuccio Alessandra; Cillino Salvatore

    2014-01-01

    Purpose: Open globe eye injuries can have profound social and economic consequences. Here, we describe two cases of war and outdoor activity open globe eye injury where, despite a low OTS score, current microsurgical technology allowed for a favorable outcome. Case report 1: A 33-year-old Libyan soldier had been treated for an open-globe grenade blast trauma to his left eye, which showed light perception and OTS score 2. He had undergone a lensectomy and PPV with silicone oil tamponade. Surgical treatment included scleral buckling, cornea trephination, temporary Eckardt keratoprosthesis, PPV revision, intraocular lens (IOL) implantation, and corneal grafting. Six months later, his VA was improved to 20/70. Case report 2: A 35-year-old man presented with a corneal laceration in his left eye from a meat skewer, with marked hypotony and LP. After primary corneal wound closure, B-scan ultrasonography revealed massive vitreous hemorrhage (OTS score 2). The patient underwent open cataract extraction with IOL implantation, 23 gauge PPV, laser photocoagulation of the retinochoroidal laceration, and a gas tamponade. After three weeks, the patient underwent a 2nd 23G PPV due to a fibrinous reaction. Six months later, the patient exhibited 20/25 VA. Conclusion: These cases confirm that even for patients with a low OTS and poor visual prognosis, an up-to-date surgery protocol may achieve visual results adequate for leading an autonomous daily life. (Eye Science 2014; 29:170-173)

  8. Adaptation of Cupriavidus necator to conditions favoring polyhydroxyalkanoate production.

    Science.gov (United States)

    Cavalheiro, João M B T; de Almeida, M Catarina M D; da Fonseca, M Manuela R; de Carvalho, Carla C C R

    2012-12-15

    The fatty acid (FA) composition of the bacterial membrane of Cupriavidus necator DSM 545 was assessed during the time course of two-stage fed-batch cultivations for the production of short-chain polyhydroxyalkanoates (PHA). Changes in the relative proportion of straight, methyl and cyclopropyl saturated, unsaturated, hydroxy substituted and polyunsaturated FA were observed, depending on the C sources and cultivation conditions used to favor the synthesis of poly(3-hydroxybutyrate) (P(3HB)), poly(3-hydroxybutyrate-co-4-hydroxybutyrate) (P(3HB-co-4HB)) or poly(3-hydroxybutyrate-4-hydroxybutyrate-3-hydroxyvalerate) (P(3HB-4HB-3HV)), under N limiting conditions. The relative percentage of each FA class was studied using glucose or waste glycerol (GRP), as main C source for P(3HB) production. The FA profile was also assessed when GRP was used together with i) γ-butyrolactone (GBL) (precursor of 4HB monomers) for P(3HB-4HB) synthesis and ii) GBL and propionic acid (PA) (3HV precursor) to yield P(3HB-4HB-3HV). The effect of GBL and PA utilization as PHA monomer precursors on the FA profile of the cell membrane was studied under two different dissolved oxygen concentrations (DOC). Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Ecological conditions favoring budding in colonial organisms under environmental disturbance.

    Directory of Open Access Journals (Sweden)

    Mayuko Nakamaru

    Dispersal is a topic of great interest in ecology. Many organisms adopt one of two distinct dispersal tactics at reproduction: the production of small offspring that can disperse over long distances (such as seeds and spawned eggs), or budding. The latter is observed in some colonial organisms, such as clonal plants, corals and ants, in which (super)organisms split their body into components of relatively large size that disperse over a short distance. Contrary to the common dispersal viewpoint, short-dispersal colonial organisms often flourish even in environments with frequent disturbances. In this paper, we investigate the conditions that favor budding over long-distance dispersal of small offspring, focusing on the life history of colony growth and the colony division ratio. These conditions are: relatively high mortality of very small colonies, logistic growth, the ability of dispersers to peacefully seek and settle unoccupied spaces, and a small spatial scale of environmental disturbance. If these conditions hold, budding is advantageous even when environmental disturbance is frequent. These results suggest that the demography or life history of the colony underlies the behaviors of the colonial organisms.

  10. A comparative assessment of efficient uncertainty analysis techniques for environmental fate and transport models: application to the FACT model

    Science.gov (United States)

    Balakrishnan, Suhrid; Roy, Amit; Ierapetritou, Marianthi G.; Flach, Gregory P.; Georgopoulos, Panos G.

    2005-06-01

    This work presents a comparative assessment of efficient uncertainty modeling techniques, including Stochastic Response Surface Method (SRSM) and High Dimensional Model Representation (HDMR). This assessment considers improvement achieved with respect to conventional techniques of modeling uncertainty (Monte Carlo). Given that traditional methods for characterizing uncertainty are very computationally demanding, when they are applied in conjunction with complex environmental fate and transport models, this study aims to assess how accurately these efficient (and hence viable) techniques for uncertainty propagation can capture complex model output uncertainty. As a part of this effort, the efficacy of HDMR, which has primarily been used in the past as a model reduction tool, is also demonstrated for uncertainty analysis. The application chosen to highlight the accuracy of these new techniques is the steady state analysis of the groundwater flow in the Savannah River Site General Separations Area (GSA) using the subsurface Flow And Contaminant Transport (FACT) code. Uncertain inputs included three-dimensional hydraulic conductivity fields, and a two-dimensional recharge rate field. The output variables under consideration were the simulated stream baseflows and hydraulic head values. Results show that the uncertainty analysis outcomes obtained using SRSM and HDMR are practically indistinguishable from those obtained using the conventional Monte Carlo method, while requiring orders of magnitude fewer model simulations.
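
    The appeal of response-surface approaches like SRSM is that a low-order polynomial surrogate fitted to a handful of expensive model runs can stand in for the full simulator when propagating input uncertainty. The sketch below illustrates the idea on a toy "model" with a quadratic surrogate; it is a generic polynomial-surrogate illustration under assumed inputs, not the FACT application nor the exact SRSM/HDMR formulations.

    ```python
    import numpy as np

    def expensive_model(x1, x2):
        """Stand-in for a costly flow/transport simulation."""
        return np.exp(0.3 * x1) + 0.5 * x2 ** 2 + 0.2 * x1 * x2

    rng = np.random.default_rng(3)

    # 1) Fit a quadratic response surface to a small number of model runs
    xd = rng.normal(size=(30, 2))                        # 30 "expensive" runs
    yd = expensive_model(xd[:, 0], xd[:, 1])

    def basis(x):                                        # 2nd-order polynomial basis
        x1, x2 = x[:, 0], x[:, 1]
        return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

    coef, *_ = np.linalg.lstsq(basis(xd), yd, rcond=None)

    # 2) Propagate a large input sample through the cheap surrogate
    xs = rng.normal(size=(100_000, 2))
    y_surrogate = basis(xs) @ coef
    y_mc = expensive_model(xs[:, 0], xs[:, 1])           # full Monte Carlo reference

    print("mean  (surrogate vs MC):", y_surrogate.mean(), y_mc.mean())
    print("stdev (surrogate vs MC):", y_surrogate.std(),  y_mc.std())
    ```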

  11. Markov Chains Used to Determine the Model of Stock Value and Compared with Other New Models of Stock Value (P/E Model and Ohlson Model)

    Directory of Open Access Journals (Sweden)

    Abbasali Pouraghajan

    2012-10-01

    The aim of this study is to compare three models for the valuation of stocks on the Tehran Stock Exchange: the P/E model, the Ohlson (residual income) model, and a Markov chain model. Share values were first calculated with the former two models and then with the Markov chain model so that the results could be compared. The results show that in almost all cases there is no significant difference between the explanatory power of these models in determining share value, so investors in the Tehran exchange market can use any of the three models for share valuation. In most cases, however, the residual income model, having a lower standard error of regression, can be considered a somewhat better model for determining a company's value, the main reason possibly being the high overall explanatory power of its two variables, earnings and the book value of shareholders' equity, used through the accounting relation, in comparison with the other two models.
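
    As background, a Markov-chain valuation of this kind typically discretizes price movements into states, estimates a transition matrix from historical data, and uses the chain's stationary distribution to form a long-run expectation. The sketch below is a generic illustration with a synthetic price series and arbitrary state thresholds; it is not the specific valuation procedure used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, 1000))   # synthetic series

    # States from daily returns: 0 = down, 1 = flat, 2 = up (thresholds illustrative)
    returns = np.diff(prices) / prices[:-1]
    states = np.digitize(returns, [-0.002, 0.002])

    # Estimate the transition matrix from observed state pairs
    P = np.zeros((3, 3))
    for s0, s1 in zip(states[:-1], states[1:]):
        P[s0, s1] += 1
    P /= P.sum(axis=1, keepdims=True)

    # Stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()

    mean_return_per_state = [returns[states == s].mean() for s in range(3)]
    print("transition matrix:\n", P.round(3))
    print("stationary distribution:", pi.round(3))
    print("long-run expected daily return:", float(np.dot(pi, mean_return_per_state)))
    ```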

  12. Comparative modeling of vertical and planar organic phototransistors with 2D drift-diffusion simulations

    Science.gov (United States)

    Bezzeccheri, E.; Colasanti, S.; Falco, A.; Liguori, R.; Rubino, A.; Lugli, P.

    2016-05-01

    Vertical Organic Transistors and Phototransistors have been proven to be promising technologies due to the advantages of reduced channel length and larger sensitive area with respect to planar devices. Nevertheless, a real improvement of their performance is subordinate to the quantitative description of their operation mechanisms. In this work, we present a comparative study on the modeling of vertical and planar Organic Phototransistor (OPT) structures. Computer-based simulations of the devices have been carried out with Synopsys Sentaurus TCAD in a 2D Drift-Diffusion framework. The photoactive semiconductor material has been modeled using the virtual semiconductor approach as the archetypal P3HT:PC61BM bulk heterojunction. It has been found that both simulated devices have comparable electrical and optical characteristics, in agreement with recent experimental reports on the subject.
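
    The drift-diffusion framework used in such TCAD simulations writes the electron current density as a drift plus a diffusion term, Jn = q n mu_n E + q Dn dn/dx, with Dn tied to mu_n by the Einstein relation Dn = mu_n kB T / q. A one-dimensional numpy sketch of that relation is given below; the carrier profile, field and mobility are generic organic-semiconductor placeholders, not a calibrated P3HT:PC61BM model.

    ```python
    import numpy as np

    Q  = 1.602e-19   # elementary charge, C
    KB = 1.381e-23   # Boltzmann constant, J/K

    def electron_current_density(n, E, mu_n, dx, T=300.0):
        """1D drift-diffusion current density J_n = q*n*mu_n*E + q*D_n*dn/dx,
        with D_n = mu_n*kB*T/q (Einstein relation). SI units, J in A/m^2."""
        D_n = mu_n * KB * T / Q
        dndx = np.gradient(n, dx)
        return Q * n * mu_n * E + Q * D_n * dndx

    x = np.linspace(0, 100e-9, 101)              # 100 nm organic film
    n = 1e21 * np.exp(-x / 30e-9)                # decaying carrier profile, m^-3
    J = electron_current_density(n, E=1e6, mu_n=1e-7, dx=x[1] - x[0])
    print("J at the injecting contact (A/m^2):", J[0])
    ```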

  13. Using RosettaLigand for small molecule docking into comparative models.

    Directory of Open Access Journals (Sweden)

    Kristian W Kaufmann

    Computational small molecule docking into comparative models of proteins is widely used to query protein function and in the development of small molecule therapeutics. We benchmark RosettaLigand docking into comparative models for nine proteins built during CASP8 that contain ligands. We supplement the study with 21 additional protein/ligand complexes to cover a wider space of chemotypes. In 21 of the 30 cases, a full docking run with RosettaLigand successfully found a native-like binding mode among the top ten scoring binding modes. From the benchmark cases we find that careful template selection based on ligand occupancy provides the best chance of success, while overall sequence identity between template and target does not appear to improve results. We also find that binding energy normalized by atom number is often less than -0.4 in native-like binding modes.

  14. Comparative analysis of cognitive tasks for modeling mental workload with electroencephalogram.

    Science.gov (United States)

    Hwang, Taeho; Kim, Miyoung; Hwangbo, Minsu; Oh, Eunmi

    2014-01-01

    Previous electroencephalogram (EEG) studies have shown that cognitive workload can be estimated by using several types of cognitive tasks. In this study, we attempted to characterize cognitive tasks that have been used to manipulate workload for generating classification models. We carried out a comparative analysis between two representative types of working memory tasks: the n-back task and the mental arithmetic task. Based on experiments with 7 healthy subjects using Emotiv EPOC, we compared the consistency, robustness, and efficiency of each task in determining cognitive workload in a short training session. The mental arithmetic task seems consistent and robust in manipulating clearly separable high and low levels of cognitive workload with less training. In addition, the mental arithmetic task shows consistency despite repeated usage over time and without notable task adaptation in users. The current study successfully quantifies the quality and efficiency of cognitive workload modeling depending on the type and configuration of training tasks.

  15. A comparative study of cosmological models in alternative theory of gravity with LVDP & BVDP

    Science.gov (United States)

    Mishra, R. K.; Chand, Avtar

    2017-08-01

    In this communication we have presented a comparative study of Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models in an alternative theory of gravity with a linearly varying deceleration parameter (LVDP) and a bilinearly varying deceleration parameter (BVDP), as suggested by Mishra and Chand (Astrophys. Space Sci. 361:259, 2016c). The role of viscosity in cosmology has been studied by several researchers. Motivated by such studies, we have also examined bulk viscous fluid cosmological models in the alternative f(R,T) theory of gravity, comparing the results obtained with the LVDP and BVDP laws. The main conclusion of the paper is that the BVDP law provides better results compared with Berman's constant deceleration parameter (CDP) law and the LVDP law. The BVDP law also provides an envelope for the CDP and LVDP laws.

  16. Modelling and Comparative Performance Analysis of a Time-Reversed UWB System

    Directory of Open Access Journals (Sweden)

    Popovski K

    2007-01-01

    The effects of multipath propagation lead to a significant decrease in system performance in most of the proposed ultra-wideband communication systems. A time-reversed system utilises the multipath channel impulse response to decrease receiver complexity, through a prefiltering at the transmitter. This paper discusses the modelling and comparative performance of a UWB system utilising time-reversed communications. System equations are presented, together with a semianalytical formulation on the level of intersymbol interference and multiuser interference. The standardised IEEE 802.15.3a channel model is applied, and the estimated error performance is compared through simulation with the performance of both time-hopped time-reversed and RAKE-based UWB systems.

  17. Feedforward Object-Vision Models Only Tolerate Small Image Variations Compared to Human

    Directory of Open Access Journals (Sweden)

    Masoud eGhodrati

    2014-07-01

    Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modelling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well when images with more complex variations of the same object are applied to them. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only for low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modelling. We show that this approach is not of significant help in solving the computational crux of object recognition (that is, invariant object recognition when the identity-preserving image variations become more complex).

  18. Comparative Efficacies of Antibiotics in a Rat Model of Meningoencephalitis Due to Listeria monocytogenes

    OpenAIRE

    Michelet, Christian; Leib, Stephen L.; Bentue-Ferrer, Daniele; Täuber, Martin G.

    1999-01-01

    The antibacterial activities of amoxicillin-gentamicin, trovafloxacin, trimethoprim-sulfamethoxazole (TMP-SMX) and the combination of trovafloxacin with TMP-SMX were compared in a model of meningoencephalitis due to Listeria monocytogenes in infant rats. At 22 h after intracisternal infection, the cerebrospinal fluid was cultured to document meningitis, and the treatment was started. Treatment was instituted for 48 h, and efficacy was evaluated 24 h after administration of the last dose. All ...

  19. Comparing physically-based and statistical landslide susceptibility model outputs - a case study from Lower Austria

    Science.gov (United States)

    Canli, Ekrem; Thiebes, Benni; Petschko, Helene; Glade, Thomas

    2015-04-01

    By now there is a broad consensus that, due to human-induced global change, the frequency and magnitude of heavy precipitation events are expected to increase in certain parts of the world. Given that rainfall is the most common triggering agent for landslide initiation, increased landslide activity can also be expected there. Landslide occurrence is a globally widespread phenomenon that clearly needs to be addressed. Well-known problems in modelling landslide susceptibility and hazard lead to uncertain predictions, including the lack of a universally applicable modelling solution for adequately assessing landslide susceptibility (which can be seen as the relative indication of the spatial probability of landslide initiation). Generally speaking, there are three major approaches to landslide susceptibility analysis: heuristic, statistical and deterministic models, each with different assumptions, distinctive data requirements and differently interpretable outcomes. Still, detailed comparisons of the resulting landslide susceptibility maps are rare. In this presentation, the susceptibility modelling outputs of a deterministic model (Stability INdex MAPping - SINMAP) and a statistical modelling approach (generalized additive model - GAM) are compared. SINMAP is an infinite slope stability model which requires parameterization of soil mechanical parameters. Modelling with the generalized additive model, which represents a non-linear extension of a generalized linear model, requires a high-quality landslide inventory that serves as the dependent variable in the statistical approach. Both methods rely on topographical data derived from the DTM. The comparison has been carried out in a study area located in the district of Waidhofen/Ybbs in Lower Austria. For the whole district (ca. 132 km²), 1063 landslides have been mapped and partially used within the analysis and the validation of the model outputs. The respective
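
    SINMAP builds on the infinite slope stability model mentioned above. A minimal sketch of that underlying relation (not SINMAP itself, which additionally treats parameters probabilistically and couples them to a topographic wetness index) might look as follows; the soil parameter values are illustrative assumptions only.

```python
import numpy as np

def infinite_slope_fs(slope_deg, soil_depth, wetness,
                      cohesion=2e3, phi_deg=30.0,
                      gamma_soil=18e3, gamma_water=9.81e3):
    """Factor of safety of an infinite slope (illustrative parameter values).

    slope_deg  : slope angle [deg]
    soil_depth : vertical soil depth [m]
    wetness    : saturated fraction of the soil column (0..1)
    cohesion   : effective soil cohesion [Pa]
    phi_deg    : effective friction angle [deg]
    gamma_*    : unit weights [N/m^3]
    """
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    normal_stress = gamma_soil * soil_depth * np.cos(beta) ** 2
    pore_pressure = wetness * gamma_water * soil_depth * np.cos(beta) ** 2
    resisting = cohesion + (normal_stress - pore_pressure) * np.tan(phi)
    driving = gamma_soil * soil_depth * np.sin(beta) * np.cos(beta)
    return resisting / driving

# FS < 1 indicates predicted instability; steeper, wetter cells fail first.
for slope in (17, 30, 41):
    print(slope, round(infinite_slope_fs(slope, soil_depth=1.5, wetness=0.9), 2))
```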

  20. A comparative study of two prediction models for brain tumor progression

    Science.gov (United States)

    Zhou, Deqi; Tran, Loc; Wang, Jihong; Li, Jiang

    2015-03-01

    MR diffusion tensor imaging (DTI) together with traditional T1- or T2-weighted MRI scans supplies rich information sources for brain cancer diagnoses. These images form large-scale, high-dimensional data sets. Because significant correlations exist among these images, we assume low-dimensional geometric data structures (manifolds) are embedded in the high-dimensional space. Those manifolds might be hidden from radiologists because it is challenging for human experts to interpret high-dimensional data. Identification of the manifold is a critical step for successfully analyzing multimodal MR images. We have developed various manifold learning algorithms (Tran et al. 2011; Tran et al. 2013) for medical image analysis. This paper presents a comparative study of an incremental manifold learning scheme (Tran et al. 2013) versus the deep learning model (Hinton et al. 2006) in the application of brain tumor progression prediction. Incremental manifold learning is a variant of manifold learning designed to handle large-scale datasets, in which a representative subset of the original data is sampled first to construct a manifold skeleton and the remaining data points are then inserted into the skeleton by following their local geometry. The incremental manifold learning algorithm aims at mitigating the computational burden associated with traditional manifold learning methods for large-scale datasets. Deep learning is a recently developed multilayer perceptron model that has achieved state-of-the-art performance in many applications. A recent technique named "Dropout" can further boost the deep model by preventing weight co-adaptation to avoid over-fitting (Hinton et al. 2012). We applied the two models on multiple MRI scans from four brain tumor patients to predict tumor progression and compared the performances of the two models in terms of average prediction accuracy, sensitivity, specificity and precision. The quantitative performance metrics were
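
    The "Dropout" technique mentioned above randomly silences hidden units during training so that weights cannot co-adapt. A minimal, framework-free illustration of inverted dropout in a single hidden layer is sketched below; it is not the authors' deep model, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, w, p_drop=0.5, train=True):
    """One hidden layer with inverted dropout (illustrative only)."""
    h = np.maximum(x @ w, 0.0)                 # ReLU hidden activations
    if train:
        mask = rng.random(h.shape) > p_drop    # keep each unit with prob 1 - p_drop
        h = h * mask / (1.0 - p_drop)          # rescale so expectations match test time
    return h

x = rng.standard_normal((4, 16))   # toy batch of 4 samples, 16 features
w = rng.standard_normal((16, 8))
print(dropout_forward(x, w).shape)             # (4, 8); at test time call with train=False
```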

  1. Task decomposition: a framework for comparing diverse training models in human brain plasticity studies

    Directory of Open Access Journals (Sweden)

    Emily B. J. Coffey

    2013-10-01

    Full Text Available Training studies, in which the structural or functional neurophysiology is compared before and after expertise is acquired, are increasingly being used as models for understanding the human brain’s potential for reorganization. It is proving difficult to use these results to answer basic and important questions like how task training leads to both specific and general changes in behaviour and how these changes correspond with modifications in the brain. The main culprit is the diversity of paradigms used as complex task models. An assortment of activities ranging from juggling to deciphering Morse code has been reported. Even when working in the same general domain, few researchers use similar training models. New ways to meaningfully compare complex tasks are needed. We propose a method for characterizing and deconstructing the task requirements of complex training paradigms, which is suitable for application to both structural and functional neuroimaging studies. We believe this approach will aid brain plasticity research by making it easier to compare training paradigms, identify ‘missing puzzle pieces’, and encourage researchers to design training protocols to bridge these gaps.

  2. Thermodynamic Molecular Switch in Sequence-Specific Hydrophobic Interaction: Two Computational Models Compared

    Directory of Open Access Journals (Sweden)

    Paul Chun

    2003-01-01

    Full Text Available We have shown in our published work the existence of a thermodynamic switch in biological systems wherein a change of sign in ΔCp°(T) of reaction leads to a true negative minimum in the Gibbs free energy change of reaction, and hence, a maximum in the related Keq. We have examined 35 pair-wise, sequence-specific hydrophobic interactions over the temperature range of 273–333 K, based on data reported by Nemethy and Scheraga in 1962. A closer look at a single example, the pair-wise hydrophobic interaction of leucine-isoleucine, will demonstrate the significant differences when the data are analyzed using the Nemethy-Scheraga model or treated by the Planck-Benzinger methodology which we have developed. The change in inherent chemical bond energy at 0 K, ΔH°(T0), is 7.53 kcal mol-1 compared with 2.4 kcal mol-1, while Ts is 365 K as compared with 355 K, for the Nemethy-Scheraga and Planck-Benzinger models, respectively. At Tm, the thermal agitation energy is about five times greater than ΔH°(T0) in the Planck-Benzinger model; Tm is 465 K, compared to 497 K in the Nemethy-Scheraga model. The results imply that the negative Gibbs free energy minimum at a well-defined Ts, where TΔS° = 0 at about 355 K, has its origin in the sequence-specific hydrophobic interactions, which are highly dependent on details of molecular structure. The Nemethy-Scheraga model shows no evidence of the thermodynamic molecular switch that we have found to be a universal feature of biological interactions. The Planck-Benzinger method is the best known for evaluating the innate temperature-invariant enthalpy, ΔH°(T0), and provides for better understanding of the heat of reaction for biological molecules.

  3. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment
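
    As a concrete illustration of the aggregation approach discussed above, the sketch below computes the Nash-Sutcliffe efficiency separately for high and low flows and combines them with user-chosen weights; the flow series and weights are made-up placeholders, not the Wolf Creek data or the MESH model.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(1)
obs = np.exp(rng.normal(2.0, 1.0, 365))          # synthetic daily streamflow
sim = obs * rng.normal(1.0, 0.15, obs.size)      # synthetic model output

high = obs >= np.quantile(obs, 0.8)              # high-flow days
low = obs <= np.quantile(obs, 0.2)               # low-flow days
nse_high, nse_low = nse(obs[high], sim[high]), nse(obs[low], sim[low])

# Weighted-sum aggregation: a single calibration objective that hides the
# trade-off a bi-objective (Pareto) calibration would expose explicitly.
w_high, w_low = 0.5, 0.5
aggregate = w_high * nse_high + w_low * nse_low
print(round(nse_high, 3), round(nse_low, 3), round(aggregate, 3))
```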

  4. Water Management in the Camargue Biosphere Reserve: Insights from Comparative Mental Models Analysis

    Directory of Open Access Journals (Sweden)

    Raphael Mathevet

    2011-03-01

    Full Text Available Mental models are the cognitive representations of the world that frame how people interact with the world. Learning implies changing these mental models. The successful management of complex social-ecological systems requires the coordination of actions to achieve shared goals. The coordination of actions requires a level of shared understanding of the system or situation; a shared or common mental model. We first describe the elicitation and analysis of mental models of different stakeholder groups associated with water management in the Camargue Biosphere Reserve in the Rhône River delta on the French Mediterranean coast. We use cultural consensus analysis to explore the degree to which different groups shared mental models of the whole system, of stakeholders, of resources, of processes, and of interactions among these last three. The analysis of the elicited data from this group structure enabled us to tentatively explore the evidence for learning in the non-statutory Water Board, comprising important stakeholders related to the management of the central Rhône delta. The results indicate that learning does occur and results in richer mental models that are more likely to be shared among group members. However, the results also show lower than expected levels of agreement with these consensual mental models. Based on this result, we argue that a careful process and facilitation design can greatly enhance the functioning of the participatory process in the Water Board. We conclude that this methodology holds promise for eliciting and comparing mental models. It enriches group-model building and participatory approaches with a broader view of social learning and knowledge-sharing issues.

  5. Badminton instructional in Malaysian schools: a comparative analysis of TGfU and SDT pedagogical models.

    Science.gov (United States)

    Nathan, Sanmuga

    2016-01-01

    The model-based physical education curriculum of Teaching Games for Understanding (TGfU) is still at an early stage of implementation in Malaysian schools, where the technical or skill-led model continues to dominate the physical education curriculum. Implementing TGfU seems to be problematic and untested in this environment. Therefore, this study examined the effects that a revised TGfU model, compared to Skill Drill Technical (SDT), a technical model, had on learning movement skills in badminton, including returning to base, decision making and skill execution during doubles game play, and also explored teachers' perceptions of navigating between the two models. Participants (N = 32 school badminton players, aged 15.5 ± 1.0 years) were randomly selected and assigned equally to TGfU and SDT groups. Reflective data were gathered from two experienced physical education teachers who were involved in this study. Findings indicated significant improvement in movement to the base in doubles game play after the TGfU intervention. As for decision-making and skill execution in doubles game play, analysis revealed no significant difference after intervention. Findings from the teachers' reflections indicated the importance of mini game play in both the TGfU and SDT models, as the students enjoyed it and built positive attitudes toward both winning and losing in game situations. However, when negotiating the TGfU model, the teacher found it difficult at times to execute the pedagogical model, as students needed guidance to discuss aspects related to tactics. However, to keep this pedagogical model viable, further research findings ought to be circulated among teachers in Malaysia and similar Southeast Asian countries.

  6. A comparative study of independent particle model based approaches for thermal averages

    Indian Academy of Sciences (India)

    Subrata Banik; Tapta Kanchan Roy; M Durga Prasad

    2013-09-01

    A comparative study is done on thermal average calculation using the state-specific vibrational self-consistent field (ss-VSCF) method, the virtual vibrational self-consistent field (v-VSCF) method and the thermal self-consistent field (t-SCF) method. The different thermodynamic properties and expectation values are calculated using these three methods and the results are compared with the full vibrational configuration interaction (FVCI) method. We find that among these three independent particle model based methods, the ss-VSCF method provides the most accurate thermal averages, followed by the t-SCF, while the v-VSCF is the least accurate. However, the ss-VSCF is found to be computationally very expensive for large molecules. The t-SCF gives better accuracy than its v-VSCF counterpart, especially at higher temperatures.
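
    The thermal averages being compared are, at bottom, Boltzmann-weighted expectation values over vibrational states. The following sketch evaluates such an average for an arbitrary level spectrum and property; the harmonic levels used here are a stand-in, not the ss-VSCF/v-VSCF/t-SCF machinery of the paper.

```python
import numpy as np

K_B = 3.166811563e-6  # Boltzmann constant in hartree/K

def thermal_average(energies, values, temperature):
    """Boltzmann-weighted average <A> = sum_n A_n exp(-E_n/kT) / Z."""
    energies = np.asarray(energies, float)
    weights = np.exp(-(energies - energies.min()) / (K_B * temperature))
    return np.sum(np.asarray(values, float) * weights) / np.sum(weights)

# Stand-in spectrum: harmonic-oscillator levels E_n = (n + 1/2) * omega,
# with the property A_n taken to be the energy itself.
omega = 0.01  # hartree (roughly a stiff molecular vibration), illustrative
n = np.arange(50)
levels = (n + 0.5) * omega

for T in (300.0, 1000.0, 3000.0):
    print(T, thermal_average(levels, levels, T))
```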

  7. Comparative study on ATR-FTIR calibration models for monitoring solution concentration in cooling crystallization

    Science.gov (United States)

    Zhang, Fangkun; Liu, Tao; Wang, Xue Z.; Liu, Jingxiang; Jiang, Xiaobin

    2017-02-01

    In this paper, calibration model building based on ATR-FTIR spectroscopy is investigated for in-situ measurement of the solution concentration during a cooling crystallization process. The cooling crystallization of L-glutamic acid (LGA) is studied here as a case. It was found that using the metastable zone (MSZ) data for model calibration can guarantee the prediction accuracy for monitoring the operating window of cooling crystallization, compared to the use of undersaturated zone (USZ) spectra for model building as traditionally practiced. Calibration experiments were made for LGA solutions under different concentrations. Four candidate calibration models were established using different zone data for comparison, by applying a multivariate partial least-squares (PLS) regression algorithm to the collected spectra together with the corresponding temperature values. Experiments under different process conditions, including changes of solution concentration and operating temperature, were conducted. The results indicate that using the MSZ spectra for model calibration gives more accurate prediction of the solution concentration during the crystallization process, while maintaining accuracy under changes of the operating temperature. The primary cause of prediction error was identified as spectral nonlinearity between the USZ and MSZ for in-situ measurement. In addition, an LGA cooling crystallization experiment was performed to verify the sensitivity of these calibration models for monitoring the crystal growth process.
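
    A minimal sketch of the kind of PLS calibration described above, using scikit-learn's PLSRegression on synthetic spectra augmented with temperature as an extra predictor; the data are random placeholders, not LGA spectra.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_wavenumbers = 60, 200
spectra = rng.normal(size=(n_samples, n_wavenumbers))            # placeholder absorbance spectra
temperature = rng.uniform(20, 60, size=(n_samples, 1))           # deg C
concentration = spectra[:, :5].sum(axis=1) + 0.01 * temperature[:, 0]  # synthetic target

X = np.hstack([spectra, temperature])   # spectra plus temperature as predictors
X_tr, X_te, y_tr, y_te = train_test_split(X, concentration, random_state=0)

pls = PLSRegression(n_components=5)
pls.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(pls.score(X_te, y_te), 3))
```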

  8. Realistic and Spherical Head Modeling for EEG Forward Problem Solution: A Comparative Cortex-Based Analysis

    Science.gov (United States)

    Vatta, Federica; Meneghini, Fabio; Esposito, Fabrizio; Mininel, Stefano; Di Salle, Francesco

    2010-01-01

    The accuracy of forward models for electroencephalography (EEG) partly depends on head tissues geometry and strongly affects the reliability of the source reconstruction process, but it is not yet clear which brain regions are more sensitive to the choice of different model geometry. In this paper we compare different spherical and realistic head modeling techniques in estimating EEG forward solutions from current dipole sources distributed on a standard cortical space reconstructed from Montreal Neurological Institute (MNI) MRI data. Computer simulations are presented for three different four-shell head models, two with realistic geometry, either surface-based (BEM) or volume-based (FDM), and the corresponding sensor-fitted spherical-shaped model. Point Spread Function (PSF) and Lead Field (LF) cross-correlation analyses were performed for 26 symmetric dipole sources to quantitatively assess models' accuracy in EEG source reconstruction. Realistic geometry turns out to be a relevant factor of improvement, particularly important when considering sources placed in the temporal or in the occipital cortex. PMID:20169107

  9. Modelling submerged coastal environments: Remote sensing technologies, techniques, and comparative analysis

    Science.gov (United States)

    Dillon, Chris

    Building upon remote sensing and GIS littoral zone characterization methodologies of the past decade, a series of loosely coupled models was developed to test, compare and synthesize multi-beam SONAR (MBES), Airborne LiDAR Bathymetry (ALB), and satellite-based optical data sets in the Gulf of St. Lawrence, Canada, eco-region. Bathymetry and relative intensity metrics for the MBES and ALB data sets were run through a quantitative and qualitative comparison, which included outputs from the Benthic Terrain Modeller (BTM) tool. Substrate classification based on the relative intensities of the respective data sets and textural indices generated using grey level co-occurrence matrices (GLCM) were investigated. A spatial modelling framework built in ArcGIS(TM) for the derivation of bathymetric data sets from optical satellite imagery was also tested for proof of concept and validation. Where possible, efficiencies and semi-automation for repeatable testing were achieved using ArcGIS(TM) ModelBuilder. The findings from this study could assist future decision makers in the field of coastal management and hydrographic studies. Keywords: Seafloor terrain characterization, Benthic Terrain Modeller (BTM), Multi-beam SONAR, Airborne LiDAR Bathymetry, Satellite Derived Bathymetry, ArcGIS(TM) ModelBuilder, Textural analysis, Substrate classification.

  10. The solvatochromism of phenolate betaines: comparing different cavities of a polarized continuum model.

    Science.gov (United States)

    Rezende, Marcos Caroli; Domínguez, Moisés

    2015-08-01

    Two variations of the polarized continuum model employing default ("PCM model") and SMD radii ("SMD model") were compared for the reproduction of the solvatochromic behavior of Reichardt's betaine dye, and of eight other phenolate betaines that exhibit negative, positive or inverted solvatochromic behavior. Molecules were optimized at the CAM-B3LYP/6-31+G(d,p) level of theory, and transition energies were calculated with the TD-DFT method. The PCM model failed to reproduce the negative and the inverted solvatochromism of these dyes in protic solvents. The SMD model, though not entirely accounting for hydrogen-bond effects in small, polar hydroxylic solvents, should be recommended as a better alternative for the theoretical simulation of the solvatochromism of phenolate betaines in medium to highly polar solvents. Graphical Abstract: A comparison of two polarized continuum models ("default PCM" and "PCM/SMD") for reproducing the solvatochromism of phenolate betaines, with nine examples of negative, positive, and inverted behavior.

  11. On biodiversity in river networks: A trade-off metapopulation model and comparative analysis

    Science.gov (United States)

    Muneepeerakul, R.; Levin, S. A.; Rinaldo, A.; Rodriguez-Iturbe, I.

    2007-07-01

    A discrete, structured metapopulation model is coupled with the strictly hierarchical competition-colonization trade-off model, in which competitively superior species have lower fecundity rates and thus lower colonizing ability, to study the resulting biodiversity patterns in river networks. These patterns are then compared with those resulting from the neutral dynamics, in which every species has the same fecundity rate and is competitively equivalent at a per capita level. Significant differences exist between riparian biodiversity patterns and those predicted by theories developed for two-dimensional landscapes. We find that dispersal directionality and network structure promote species that produce a large number of propagules at a species level; such species are considered competitively superior in the neutral model and inferior in the trade-off model. As a result, the two key characteristics of riparian systems, dispersal directionality and network structure, lead to lower and higher overall γ diversity in the former and the latter models, respectively. The network structure, through the containment effect due to limited cross-basin dispersal, always leads to higher between-community, β diversity. The spatial distribution of local, α diversity becomes heterogeneous and thus important under directional dispersal and network structure. A higher degree of dividedness results in higher γ diversity for communities obeying both neutral and trade-off models, but the increase is more dramatic in the latter.
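
    A minimal sketch of the strictly hierarchical competition-colonization trade-off in its classic mean-field (non-spatial) patch-occupancy form. It omits the river-network structure and dispersal directionality that are the paper's focus, and the rates below are illustrative assumptions only.

```python
import numpy as np

def hierarchy_dynamics(p, c, m=0.1, dt=0.01, steps=100_000):
    """Patch-occupancy dynamics under a strict competitive hierarchy.

    Better competitors (lower index) displace inferior ones; inferior species
    persist only through higher colonization rates (the trade-off):
    dp_i/dt = c_i p_i (1 - sum_{j<=i} p_j) - m p_i - sum_{j<i} c_j p_j p_i
    """
    p = np.array(p, dtype=float)
    c = np.asarray(c, dtype=float)
    for _ in range(steps):
        dp = np.empty_like(p)
        for i in range(p.size):
            free = 1.0 - p[: i + 1].sum()                 # space open to species i
            displacement = np.sum(c[:i] * p[:i]) * p[i]   # takeover by superiors
            dp[i] = c[i] * p[i] * free - m * p[i] - displacement
        p = np.clip(p + dt * dp, 0.0, 1.0)
    return p

# Rank 0 is the best competitor; weaker competitors get larger colonization rates
c = np.array([0.2, 0.4, 0.8])
print(np.round(hierarchy_dynamics(p=[0.05, 0.05, 0.05], c=c), 3))
```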

  12. Comparative Study of Shrinkage and Non-Shrinkage Model of Food Drying

    Science.gov (United States)

    Shahari, N.; Jamil, N.; Rasmani, KA.

    2016-08-01

    A single-phase heat and mass transfer model has long been used to represent the moisture and temperature distribution during the drying of food. Several effects of the drying process, such as physical and structural changes, have been considered in order to increase understanding of the movement of water and temperature. However, the comparison between the heat and mass equations with and without structural change (in terms of shrinkage), which can affect the accuracy of the prediction model, has received little investigation. In this paper, two mathematical models describing the heat and mass transfer in food, with and without the assumption of structural change, were analysed. The equations were solved using the finite difference method. A converted coordinate system was introduced within the numerical computations for the shrinkage model. The results show that the shrinkage model predicts a higher temperature at a given time than the non-shrinkage model. Furthermore, the predicted moisture content decreased faster when the shrinkage effect was included in the model.
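
    A minimal sketch of the finite-difference treatment mentioned above, solving a 1D moisture diffusion equation on a fixed (non-shrinking) grid with an explicit scheme. The diffusivity, geometry and boundary values are illustrative assumptions, and the moving-boundary (shrinkage) version would additionally require the coordinate transformation described in the abstract.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper)
L = 0.01           # slab half-thickness [m]
D = 1e-9           # effective moisture diffusivity [m^2/s]
nx, dt = 51, 1.0   # grid points and time step [s]
dx = L / (nx - 1)
assert D * dt / dx**2 < 0.5, "explicit scheme stability condition"

M = np.full(nx, 3.0)   # initial moisture content (dry basis)
M_surface = 0.2        # equilibrium moisture at the drying surface

for _ in range(int(3600 / dt)):          # simulate one hour of drying
    lap = (M[2:] - 2 * M[1:-1] + M[:-2]) / dx**2
    M[1:-1] += dt * D * lap              # interior nodes: dM/dt = D d2M/dx2
    M[0] = M[1]                          # symmetry (zero flux) at the centre
    M[-1] = M_surface                    # fixed moisture at the surface

print("mean moisture after 1 h:", round(M.mean(), 3))
```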

  13. Comparative evaluation of a newly developed 13-valent pneumococcal conjugate vaccine in a mouse model.

    Science.gov (United States)

    Park, Chulmin; Kwon, Eun-Young; Choi, Su-Mi; Cho, Sung-Yeon; Byun, Ji-Hyun; Park, Jung Yeon; Lee, Dong-Gun; Kang, Jin Han; Shin, Jinhwan; Kim, Hun

    2016-12-14

    Animal models facilitate evaluation of vaccine efficacy at relatively low cost. This study was a comparative evaluation of the immunogenicity and protective efficacy of a new 13-valent pneumococcal conjugate vaccine (PCV13) and a control vaccine in a mouse model. After vaccination, anti-capsular antibody levels were evaluated by pneumococcal polysaccharide (PnP) enzyme-linked immunosorbent assay (ELISA) and an opsonophagocytic killing assay (OPA). Mice were also challenged intraperitoneally with 100 times the 50% lethal dose of Streptococcus pneumoniae. The anti-capsular IgG levels against serotypes 1, 4, 7F, 14, 18C, 19A, and 19F were high (quartile 2 >1,600), while those against the other serotypes were low (Q2 ≤ 800). The OPA titres were similar to those determined by PnP ELISA. Comparative analysis between the new PCV13 and the control vaccination group in a mouse model revealed significant differences in serological immunity for a few serotypes and in the range of anti-capsular IgG in the population. Challenge of wild-type or neutropenic mice with serotypes 3, 5, 6A, 6B, and 9V showed protective immunity despite the relatively low levels of anti-capsular antibodies induced. Based on this comparative analysis, a mouse model should be adequate for evaluating serological efficacy and population-level differences in preclinical trials.

  14. Comparative analysis of CFD models for jetting fluidized beds: Effect of particle-phase viscosity

    Institute of Scientific and Technical Information of China (English)

    Pei Pei; Kai Zhang; Gang Xu; Yongping Yang; Dongsheng Wen

    2012-01-01

    Under the Eulerian-Eulerian framework of simulating gas-solid two-phase flow, the accuracy of the hydrodynamic prediction is strongly affected by the selection of the rheology of the particulate phase, for which a detailed assessment is still absent. Using a jetting fluidized bed as an example, this work investigates the influence of solid rheology on the hydrodynamic behavior by employing different particle-phase viscosity models. Both a constant particle-phase viscosity model (CVM) with different viscosity values and a simple two-fluid model without particle-phase viscosity (NVM) are incorporated into the classical two-fluid model and compared with the experimental measurements. Qualitative and quantitative results show that the jet penetration depth, jet frequency and averaged bed pressure drop are not a strong function of the particle-phase viscosity. Compared to CVM, the NVM exhibits better predictions of the jet behaviors, and is more suitable for investigating the hydrodynamics of a gas-solid fluidized bed with a central jet.

  15. Comparing the landscapes of common retroviral insertion sites across tumor models

    Science.gov (United States)

    Weishaupt, Holger; Čančer, Matko; Engström, Cristopher; Silvestrov, Sergei; Swartling, Fredrik J.

    2017-01-01

    Retroviral tagging represents an important technique, which allows researchers to screen for candidate cancer genes. The technique is based on the integration of retroviral sequences into the genome of a host organism, which might then lead to the artificial inhibition or expression of proximal genetic elements. The identification of potential cancer genes in this framework involves the detection of genomic regions (common insertion sites; CIS) which contain a number of such viral integration sites that is greater than expected by chance. During the last two decades, a number of different methods have been discussed for the identification of such loci and the respective techniques have been applied to a variety of different retroviruses and/or tumor models. We have previously established a retrovirus driven brain tumor model and reported the CISs which were found based on a Monte Carlo statistics derived detection paradigm. In this study, we consider a recently proposed alternative graph theory based method for identifying CISs and compare the resulting CIS landscape in our brain tumor dataset to those obtained when using the Monte Carlo approach. Finally, we also employ the graph-based method to compare the CIS landscape in our brain tumor model with those of other published retroviral tumor models.
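
    A minimal sketch of the Monte Carlo flavour of CIS detection referred to above: count insertions falling in fixed windows and compare against the maximum per-window count obtained from uniformly re-randomized insertion positions. Window size, genome length and significance handling are illustrative simplifications, not the published pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

genome_length = 1_000_000
window = 10_000
n_insertions = 200

# Observed insertions: mostly uniform, plus an artificial hotspot for illustration
observed = np.concatenate([
    rng.integers(0, genome_length, n_insertions - 20),
    rng.integers(400_000, 405_000, 20),          # clustered insertions (a "CIS")
])

edges = np.arange(0, genome_length + window, window)
obs_counts = np.histogram(observed, bins=edges)[0]

# Monte Carlo null: maximum per-window count under uniform random insertion
n_trials = 2000
null_max = np.empty(n_trials)
for t in range(n_trials):
    sim = rng.integers(0, genome_length, observed.size)
    null_max[t] = np.histogram(sim, bins=edges)[0].max()

threshold = np.quantile(null_max, 0.95)          # genome-wide 5% level
cis_windows = np.where(obs_counts > threshold)[0]
print("candidate CIS windows (start bp):", edges[cis_windows])
```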

  16. Is tuberculosis treatment really free in China? A study comparing two areas with different management models.

    Directory of Open Access Journals (Sweden)

    Sangsang Qiu

    Full Text Available China has implemented a free-service policy for tuberculosis. However, patients still have to pay a substantial proportion of their annual income for treatment of this disease. This study describes the economic burden on patients with tuberculosis; identifies related factors by comparing two areas with different management models; and provides policy recommendation for tuberculosis control reform in China. There are three tuberculosis management models in China: the tuberculosis dispensary model, specialist model and integrated model. We selected Zhangjiagang (ZJG) and Taixing (TX) as the study sites, which correspond to areas implementing the integrated model and dispensary model, respectively. Patients diagnosed and treated for tuberculosis since January 2010 were recruited as study subjects. A total of 590 patients (316 patients from ZJG and 274 patients from TX) were interviewed, with a response rate of 81%. The economic burden attributed to tuberculosis, including direct costs and indirect costs, was estimated and compared between the two study sites. The Mann-Whitney U Test was used to compare the cost differences between the two groups. Potential factors related to the total out-of-pocket costs were analyzed based on a step-by-step multivariate linear regression model after logarithmic transformation of the costs. The average (median, interquartile range) total cost was 18793.33 (9965, 3200-24400) CNY for patients in ZJG, which was significantly higher than for patients in TX (mean: 6598.33, median: 2263, interquartile range: 983-6688) (Z = 10.42, P < 0.001). After excluding expenses covered by health insurance, the average out-of-pocket costs were 14304.4 CNY in ZJG and 5639.2 CNY in TX. Based on the multivariable linear regression analysis, factors related to the total out-of-pocket costs were study site, age, number of clinical visits, residence, diagnosis delay, hospitalization, intake of liver protective drugs and use of the second

  17. Comparing wall modeled LES and prescribed boundary layer approach in infinite wind farm simulations

    DEFF Research Database (Denmark)

    Sarlak, Hamid; Mikkelsen, Robert; Sørensen, Jens Nørkær

    2015-01-01

    This paper aims at presenting a simple and computationally fast method for simulation of the Atmospheric Boundary Layer (ABL) and comparing the results with the commonly used wall-modelled Large Eddy Simulation (WMLES). The simple method, called Prescribed Mean Shear and Turbulence (PMST) hereafter, is based on imposing body forces over the whole domain to maintain a desired unsteady flow, where the ground is modeled as a slip-free boundary, which in turn removes the need for grid refinement and/or wall modeling close to the solid walls. Another strength of this method, besides being computationally inexpensive, is its high flexibility, meaning that the imposed boundary layer can be read from another CFD simulation or from site measurements. For fundamental studies focusing on the wake structures rather than the ABL, for example, the grid can be refined in the rotor region and any desired shear layer can...

  18. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    Science.gov (United States)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an on-going study that initiates the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can be used as alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a set of specific indication classifications to prioritize their applicability. The results of the extraction process reveal 17 common components for consideration. Later, based on scientific justifications, 16 of them were selected as the proposed components for the model.

  19. Comparing the performance of 11 crop simulation models in predicting yield response to nitrogen fertilization

    DEFF Research Database (Denmark)

    Salo, T J; Palosuo, T; Kersebaum, K C

    2016-01-01

    ..., Finland. This is the largest standardized crop model inter-comparison under different levels of N supply to date. The models were calibrated using data from 2002 and 2008, of which 2008 included six N rates ranging from 0 to 150 kg N/ha. Calibration data consisted of weather, soil, phenology, leaf area index (LAI) and yield observations. The models were then tested against new data for 2009 and their performance was assessed and compared with both the two calibration years and the test year. For the calibration period, root mean square error between measurements and simulated grain dry matter yields... mineralization as a function of soil temperature and moisture. Furthermore, specific weather event impacts such as low temperatures after emergence in 2009, tending to enhance tillering, and a high precipitation event just before harvest in 2008, causing possible yield penalties, were not captured by any...

  20. Comparative study: TQ and Lean Production ownership models in health services

    Directory of Open Access Journals (Sweden)

    Natalia Yuri Eiro

    2015-10-01

    Full Text Available Objective: compare the application of Total Quality (TQ) models used in processes of a health service, cases of lean healthcare and literature from another institution that has also applied this model. Method: this is a qualitative research that was conducted through a descriptive case study. Results: through critical analysis of the institutions studied it was possible to make a comparison between the traditional quality approach checked in one case and the theoretical and practical lean production approach used in another case, and the specifications are described below. Conclusion: the research identified that the lean model was better suited for people that work systemically and generate the flow. It also pointed towards some potential challenges in the introduction and implementation of lean methods in health.

  1. Review of colorectal cancer and its metastases in rodent models: comparative aspects with those in humans

    DEFF Research Database (Denmark)

    Kobaek-Larsen, M; Thorup, I; Diederichsen, Axel Cosmus Pyndt;

    2000-01-01

    BACKGROUND AND PURPOSE: Colorectal cancer (CRC) remains one of the most common cancer forms developing in industrialized countries, and its incidence appears to be rising. Studies of human population groups provide insufficient information about carcinogenesis, pathogenesis, and treatment of CRC... models approximate many of the characteristics of human colonic carcinogenesis and metastasis. So far few comparative evaluations of the various animal models of CRC have been made. CONCLUSION: Animal studies cannot replace human clinical trials, but they can be used as a pre-screening tool, so that human trials become more directed, with greater chances of success. The orthotopic transplantation of colon cancer cells into the cecum of syngeneic animals or intraportal inoculation appears to resemble the human metastatic disease most closely, providing a model for study of the treatment...

  2. Comparative Results on 3D Navigation of Quadrotor using two Nonlinear Model based Controllers

    Science.gov (United States)

    Bouzid, Y.; Siguerdidjane, H.; Bestaoui, Y.

    2017-01-01

    Recently, quadrotors have been increasingly employed in both military and civilian areas, where a broad range of nonlinear flight control techniques has been successfully implemented. With this advancement, it has become necessary to investigate the efficiency of these flight controllers by studying their features and comparing their performance. In this paper, the control of an Unmanned Aerial Vehicle (UAV) quadrotor using two different approaches is presented. The first controller is a Nonlinear PID (NLPID), whilst the second is Nonlinear Internal Model Control (NLIMC); both are used for stabilization as well as for 3D trajectory tracking. Numerical simulations have shown satisfactory results for both, using either the nominal or the disturbed system model. The obtained results are analyzed with respect to several criteria for the sake of comparison.

  3. A comparative study of two models for the seismic analysis of buildings

    Directory of Open Access Journals (Sweden)

    Arnulfo Luévanos Rojas

    2012-12-01

    Full Text Available This paper presents a model for the seismic analysis of buildings, taking two lumped masses at each level at a structure's free nodes and comparing it to the traditional model which considers one lumped mass per level, i.e., a mass for each floor of the entire building. This is usually done in the seismic analysis of buildings; however, not all values from the latter are conservative, as can be seen in the table of results. Both models took shear deformations into account. Therefore, the usual practice of considering a single lumped mass per level would not be a recommended solution; using two lumped masses per level is thus proposed, which is also closer to real conditions.

  4. A comparative analysis of the two typical farmland transfer models in Chongqing

    Institute of Scientific and Technical Information of China (English)

    肖轶; 魏朝富; 尹珂; 罗光莲

    2009-01-01

    This paper attempts to conduct a comparative analysis of the two typical farmland transfer models introduced by Chongqing in its comprehensive coordinated reform experiment for balanced urban and rural development: i) the "pooling of land as shares" in Qilin village, Changshou district; and ii) the "homestead/house swap, contracted land/social security swap" in Jiulongpo district. It is estimated that the former model offers lower land appreciation benefits than the latter; the former faces greater operational risks, whereas the latter can to a certain extent mitigate risks by boosting regulatory control and reasonable government guidance. The homestead/house swap, contracted land/social security swap model is therefore the preferred choice. It can solve a series of social security problems that arise after peasants are divorced from land and enable peasants to garner higher land appreciation benefits through farmland transfer.

  5. Comparing Numerical Integration Schemes for Time-Continuous Car-Following Models

    CERN Document Server

    Treiber, Martin

    2014-01-01

    When simulating trajectories by integrating time-continuous car-following models, standard integration schemes such as the fourth-order Runge-Kutta method (RK4) are rarely used, while the simple Euler's method is popular among researchers. We compare four explicit methods: Euler's method, ballistic update, Heun's method (trapezoidal rule), and the standard fourth-order RK4. As performance metrics, we plot the global discretization error as a function of the numerical complexity. We tested the methods on several time-continuous car-following models in several multi-vehicle simulation scenarios with and without discontinuities such as stops or a discontinuous behavior of an external leader. We find that the theoretical advantage of RK4 (consistency order 4) only plays a role if both the acceleration function of the model and the external data of the simulation scenario are sufficiently often differentiable. Otherwise, we obtain lower (and often fractional) consistency orders. Although, to our knowledge, Heun's met...
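
    A minimal sketch contrasting Euler and classical RK4 updates on a time-continuous car-following acceleration function. The Intelligent Driver Model (IDM) with made-up parameters is used here only as a convenient stand-in for the class of models discussed, not necessarily one of the paper's test cases.

```python
import numpy as np

# IDM parameters (illustrative)
V0, T, A, B, S0 = 30.0, 1.5, 1.0, 1.5, 2.0

def idm_accel(gap, v, v_lead):
    """Intelligent Driver Model acceleration for one follower."""
    s_star = S0 + v * T + v * (v - v_lead) / (2.0 * np.sqrt(A * B))
    return A * (1.0 - (v / V0) ** 4 - (s_star / gap) ** 2)

def step_euler(x, v, v_lead, x_lead, dt):
    a = idm_accel(x_lead - x, v, v_lead)
    return x + v * dt, v + a * dt

def step_rk4(x, v, v_lead, x_lead, dt):
    # The leader state is held fixed within each step for simplicity.
    def f(state):
        xi, vi = state
        return np.array([vi, idm_accel(x_lead - xi, vi, v_lead)])
    y = np.array([x, v])
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[0], y[1]

# Follower starts 30 m behind a leader cruising at 25 m/s
x_e = x_r = 0.0
v_e = v_r = 20.0
for k in range(600):                     # 60 s with dt = 0.1 s
    x_lead = 30.0 + 25.0 * (k * 0.1)
    x_e, v_e = step_euler(x_e, v_e, 25.0, x_lead, 0.1)
    x_r, v_r = step_rk4(x_r, v_r, 25.0, x_lead, 0.1)
print("position difference Euler vs RK4 after 60 s [m]:", round(abs(x_e - x_r), 4))
```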

  6. Comparing non-linear mathematical models to describe growth of different animals

    Directory of Open Access Journals (Sweden)

    Jhony Tiago Teleken

    2017-02-01

    Full Text Available The main objective of this study was to compare the goodness of fit of five non-linear growth models, i.e. Brody, Gompertz, Logistic, Richards and von Bertalanffy, in different animals. It also aimed to evaluate the influence of the shape parameter on the growth curve. To accomplish this task, published growth data of 14 different groups of animals were used and four goodness-of-fit statistics were adopted: coefficient of determination (R2), root mean square error (RMSE), Akaike information criterion (AIC) and Bayesian information criterion (BIC). In general, the Richards growth equation provided better fits to experimental data than the other models. However, for some animals, different models exhibited better performance. A possible interpretation of the shape parameter was obtained, which can provide useful insights for predicting animal growth behavior.
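
    A minimal sketch of fitting two of the named growth curves (Gompertz and logistic) to a small synthetic data set with scipy and comparing them by RMSE; the parameter values and data are placeholders, not the published animal data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    """Asymptote a, displacement b, rate k."""
    return a * np.exp(-b * np.exp(-k * t))

def logistic(t, a, b, k):
    return a / (1.0 + b * np.exp(-k * t))

rng = np.random.default_rng(0)
t = np.linspace(0, 52, 27)                                       # age in weeks
w = gompertz(t, 80.0, 4.0, 0.12) + rng.normal(0, 1.5, t.size)    # synthetic weights

for name, model in [("Gompertz", gompertz), ("Logistic", logistic)]:
    params, _ = curve_fit(model, t, w, p0=[80, 4, 0.1], maxfev=10_000)
    rmse = np.sqrt(np.mean((w - model(t, *params)) ** 2))
    print(f"{name}: params={np.round(params, 3)}, RMSE={rmse:.2f}")
```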

  7. Comparative study: TQ and Lean Production ownership models in health services.

    Science.gov (United States)

    Eiro, Natalia Yuri; Torres-Junior, Alvair Silveira

    2015-01-01

    Compare the application of Total Quality (TQ) models used in processes of a health service, cases of lean healthcare and literature from another institution that has also applied this model. This is a qualitative research that was conducted through a descriptive case study. Through critical analysis of the institutions studied it was possible to make a comparison between the traditional quality approach checked in one case and the theoretical and practical lean production approach used in another case, and the specifications are described below. The research identified that the lean model was better suited for people that work systemically and generate the flow. It also pointed towards some potential challenges in the introduction and implementation of lean methods in health.

  8. Comparative analysis of turbulence models for flow simulation around a vertical axis wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Roy, S.; Saha, U.K. [Indian Institute of Technology Guwahati, Dept. of Mechanical Engineering, Guwahati (India)

    2012-07-01

    An unsteady computational investigation of the static torque characteristics of a drag-based vertical axis wind turbine (VAWT) has been carried out using the finite volume based computational fluid dynamics (CFD) software package Fluent 6.3. A comparative study among various turbulence models was conducted in order to predict the flow over the turbine at static condition, and the results are validated against the available experimental results. CFD simulations were carried out at different turbine angular positions between 0 deg. and 360 deg. in steps of 15 deg. Results have shown that, due to high static pressure on the returning blade of the turbine, the net static torque is negative at angular positions of 105 deg.-150 deg. The realizable k-ε turbulence model has shown a better simulation capability than the other turbulence models for the analysis of the static torque characteristics of the drag-based VAWT. (Author)

  9. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction

    Energy Technology Data Exchange (ETDEWEB)

    Wennberg, Berit M.; Baumann, Pia; Gagliardi, Giovanna (Dept. of Medical Physics, Karolinska Univ. Hospital and the Karolinska Inst., Stockholm (Sweden)), e-mail: berit.wennberg@karolinska.se (and others)

    2011-05-15

    Background. In SBRT of lung tumours no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this improves knowledge of this relationship. Material and methods. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction alpha/beta = 3 Gy was used, and the USC parameters used were: alpha/beta = 3 Gy, D0 = 1.0 Gy, n = 10, alpha = 0.206 Gy-1 and dT = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This might give an insight into the question of whether 'high doses to small volumes' or 'low doses to large volumes' are most important for lung toxicity. Results and Discussion. NTCP analysis with the LKB model using parameters m = 0.4, D50 = 30 Gy resulted in a volume-dependence parameter (n) of n = 0.87 with LQ correction and n = 0.71 with USC correction. Using parameters m = 0.3, D50 = 20 Gy, the values were n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when the USC model is used. Comparing NTCP modelling of SBRT data and data from breast cancer, lung cancer and whole lung irradiation implies that the response of the lung is treatment specific. More data are however needed in order to have a more reliable modelling
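
    A minimal sketch of the LQ-based fractionation correction that underlies DVH rescaling of this kind: each physical dose schedule is converted to an equivalent dose in 2 Gy fractions (EQD2) using the stated alpha/beta = 3 Gy. This illustrates only the LQ branch; the universal survival curve correction switches to a linear extrapolation above a transition dose and is not reproduced here.

```python
def eqd2(dose_per_fraction, n_fractions, alpha_beta=3.0):
    """Equivalent total dose in 2 Gy fractions under the linear-quadratic model.

    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), with D = n * d.
    """
    total = n_fractions * dose_per_fraction
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Example: an SBRT-like hypofractionated schedule vs conventional fractionation
for d, n in [(15.0, 3), (2.0, 30)]:
    print(f"{n} x {d} Gy  ->  physical {n * d:.0f} Gy, EQD2 {eqd2(d, n):.1f} Gy")
```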

  10. Comparing snow models under current and future climates: Uncertainties and implications for hydrological impact studies

    Science.gov (United States)

    Troin, Magali; Poulin, Annie; Baraer, Michel; Brissette, François

    2016-09-01

    Projected climate change effects on snow hydrology are investigated for the 2041-2060 horizon following the SRES A2 emissions scenario over three snowmelt-dominated catchments in Quebec, Canada. A 16-member ensemble of eight snow models (SM) simulations, based on the high-resolution Canadian Regional Climate Model (CRCM-15 km) simulations driven by two realizations of the Canadian Global Climate Model (CGCM3), is established per catchment. This study aims to compare a range of SMs in their ability at simulating snow processes under current climate, and to evaluate how they affect the assessment of the climate change-induced snow impacts at the catchment scale. The variability of snowpack response caused by the use of different models within two different SM approaches (degree-day (DD) versus mixed degree-day/energy balance (DD/EB)) is also evaluated, as well as the uncertainty of natural climate variability. The simulations cover 1961-1990 in the present period and 2041-2060 in the future period. There is a general convergence in the ensemble spread of the climate change signals on snow water equivalent at the catchment scale, with an earlier peak and a decreased magnitude in all basins. The results of four snow indicators show that most of the uncertainty arises from natural climate variability (inter-member variability of the CRCM) followed by the snow model. Both the DD and DD/EB models provide comparable assessments of the impacts of climate change on snow hydrology at the catchment scale.
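
    A minimal sketch of the degree-day (DD) approach used by several of the compared snow models: daily melt is proportional to the excess of air temperature above a threshold, and precipitation is partitioned into rain or snow by the same temperature. The melt factor, threshold and forcing series below are illustrative assumptions, not the CRCM/CGCM3 forcing of the study.

```python
import numpy as np

def degree_day_swe(temp_c, precip_mm, melt_factor=3.0, t_melt=0.0):
    """Snow water equivalent [mm] from a simple degree-day model.

    melt_factor : mm of melt per degree-day above t_melt
    """
    swe, out = 0.0, []
    for t, p in zip(temp_c, precip_mm):
        snowfall = p if t <= t_melt else 0.0                       # snow/rain partition
        melt = min(swe + snowfall, melt_factor * max(t - t_melt, 0.0))
        swe = swe + snowfall - melt
        out.append(swe)
    return np.array(out)

rng = np.random.default_rng(0)
days = np.arange(365)
temp = -10 * np.cos(2 * np.pi * days / 365) + rng.normal(0, 3, days.size)  # synthetic year
precip = rng.gamma(0.8, 3.0, days.size)                                     # mm/day

swe = degree_day_swe(temp, precip)
print("peak SWE [mm]:", round(swe.max(), 1), "on day", int(swe.argmax()))
```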

  11. Polar-Region Distributions of Poynting Flux: Global Models Compared With Observations

    Science.gov (United States)

    Melanson, P. D.; Lotko, W.; Murr, D.; Gagne, J. R.; Wiltberger, M.; Lyon, J. G.

    2007-12-01

    Low-altitude distributions of electric potential, field-aligned current and Poynting flux derived from the Lyon-Fedder-Mobarry global simulation model of the magnetosphere are compared with distributions derived from SuperDARN, the Iridium satellite constellation, and the Weimer 2005 empirical model for a one-hour interval (1400-1500 UT) on 23 November 1999, during which the interplanetary magnetic field was steady and southward. Synthetic measurements along a pseudo-satellite track are also obtained from each distribution and compared with measurements from the DMSP F13 satellite. Previous studies of the event are supplemented here with updated simulation results for the electric potential and field-aligned currents, new simulation diagnostics for the Poynting flux incident on the ionosphere, and comparisons of observational and simulation results with the Weimer empirical model. The location and extent of the simulated Poynting fluxes that occur in the afternoon sector, between the Region-1 and 2 currents, are consistent with the observed and empirically modeled locations, but the magnitudes exhibit significant differences (locally up to ~100%, both higher and lower). Elsewhere, the distribution of simulated fluxes more closely resembles the empirically modeled values than the observed ones and in general is greater in magnitude by about 100%. Additionally, the fraction of simulated Poynting flux that flows into the polar cap region (above 75 deg) is about one third of the total flowing into the ionosphere above 60 deg; a similar value is found for both the observed and the empirically modeled fluxes. The effect of including the parallel potential drop in the self-consistent mapping of electric potential between the ionosphere and inner boundary of the simulation domain is also examined. Globally the effect is small (< 5%); however, in regions where the field-aligned potential drop is appreciable, local changes of 100% or more are found in the magnitude of the

  12. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
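
    A minimal sketch of the conversion-model idea described above: regress ΣPCB from the full 209-congener analysis on ΣPCB from a reduced congener set using paired samples, then apply the fitted relation to convert other reduced-set results. The data below are synthetic placeholders, not the study's sediment or tissue measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic paired samples analysed by both methods (log-normal concentrations)
sum_209 = rng.lognormal(mean=3.0, sigma=1.0, size=40)        # "full" SumPCB (ng/g)
sum_119 = 0.93 * sum_209 * rng.normal(1.0, 0.05, 40)         # reduced set captures ~93%

# Fit the conversion on log-transformed concentrations
model = LinearRegression().fit(np.log(sum_119).reshape(-1, 1), np.log(sum_209))

# Convert new reduced-set measurements to estimated full-set SumPCB
new_119 = np.array([5.0, 20.0, 80.0])
est_209 = np.exp(model.predict(np.log(new_119).reshape(-1, 1)))
print(np.round(est_209, 1))
```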

  13. Comparing a Real-Life WSN Platform Small Network and its OPNET Modeler Model using Hypothesis Testing

    Directory of Open Access Journals (Sweden)

    Gilbert E. Pérez

    2014-12-01

    Full Text Available To avoid the high cost and arduous effort usually associated with field analysis of a Wireless Sensor Network (WSN), Modeling and Simulation (M&S) is used to predict the behavior and performance of the network. However, the simulation models utilized to imitate real-life networks are often general-purpose. Therefore, they are less likely to provide accurate predictions for different real-life networks. In this paper, a comparison methodology based on hypothesis testing is proposed to evaluate and compare simulation output versus real-life network measurements. Performance-related parameters such as traffic generation rates and goodput rates for a small WSN are considered. To execute the comparison methodology, a "Comparison Tool", composed of MATLAB scripts, is developed and used. The comparison tool demonstrates the need for model verification and for analysis of the agreement between the simulation and empirical measurements.
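
    A minimal sketch of the hypothesis-testing step such a comparison tool performs: test whether goodput samples from simulation and from the real network could come from the same distribution. The paper's tool is MATLAB-based; the Python/scipy version below, with fabricated samples, is only illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder goodput samples [kbit/s] from repeated runs / measurements
goodput_sim = rng.normal(loc=48.0, scale=2.0, size=30)
goodput_real = rng.normal(loc=46.5, scale=2.5, size=30)

# Two-sample test (Welch's t-test; a Mann-Whitney U test would drop the normality assumption)
t_stat, p_value = stats.ttest_ind(goodput_sim, goodput_real, equal_var=False)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("reject H0 (model disagrees with measurements)" if p_value < alpha
      else "fail to reject H0 (no evidence of disagreement)")
```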

  14. Patients with Spinal Cord Injuries Favor Administration of Methylprednisolone.

    Directory of Open Access Journals (Sweden)

    Christian A Bowers

    Full Text Available Methylprednisolone sodium succinate (MPSS for treatment of acute spinal cord injury (SCI has been associated with both benefits and adverse events. MPSS administration was the standard of care for acute SCI until recently when its use has become controversial. Patients with SCI have had little input in the debate, thus we sought to learn their opinions regarding administration of MPSS. A summary of the published literature to date on MPSS use for acute SCI was created and adjudicated by 28 SCI experts. This summary was then emailed to 384 chronic SCI patients along with a survey that interrogated the patients' neurological deficits, communication with physicians and their views on MPSS administration. 77 out of 384 patients completed the survey. 28 respondents indicated being able to speak early after injury and of these 24 reported arriving at the hospital within 8 hours of injury. One recalled a physician speaking to them about MPSS and one patient reported choosing whether or not to receive MPSS. 59.4% felt that the small neurological benefits associated with MPSS were 'very important' to them (p<0.0001. Patients had 'little concern' for potential side-effects of MPSS (p = 0.001. Only 1.4% felt that MPSS should not be given to SCI patients regardless of degree of injury (p<0.0001. This is the first study to report SCI patients' preferences regarding MPSS treatment for acute SCI. Patients favor the administration of MPSS for acute SCI, however few had input into whether or not it was administered. Conscious patients should be given greater opportunity to decide their treatment. These results also provide some guidance regarding MPSS administration in patients unable to communicate.

  15. A Comparative Study of Two Decision Models: Frisch’s model and a simple Dutch planning model

    NARCIS (Netherlands)

    J. Tinbergen (Jan)

    1951-01-01

    The significance of Frisch's notion of decision models is, in the first place, that they draw full attention upon "inverted problems" which economic policy puts before us. In these problems the data are no longer those in the traditional economic problems, but partly the political goals

  16. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km²) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers distributions for soil cohesion, internal friction angle and rainfall rate, LISA allows the use of distributed data for all parameters, so it is preferred over SINMAP for evaluating slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).

  17. A comparative verification of high resolution precipitation forecasts using model output statistics

    Science.gov (United States)

    van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees

    2017-04-01

    Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution meso-scale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both double penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, like a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the 3 post-processed models, but larger differences for individual lead times. Besides, the Fractions Skill Score is computed using the 3 deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they only perform similarly or somewhat better than precipitation forecasts from the 2 lower resolution models, at least in the Netherlands.
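    As an illustration of how the post-processed probability forecasts can be scored, the following sketch computes the Brier skill score against a climatological reference; the forecast probabilities and observations are hypothetical stand-ins for the station predictands described above.

```python
import numpy as np

def brier_skill_score(p_forecast, obs):
    """Brier skill score of probability forecasts against a climatological
    reference; obs is a 0/1 array of event occurrence (e.g. precip >= 1 mm)."""
    p_forecast = np.asarray(p_forecast, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bs = np.mean((p_forecast - obs) ** 2)
    p_clim = obs.mean()                      # sample climatology as reference
    bs_ref = np.mean((p_clim - obs) ** 2)
    return 1.0 - bs / bs_ref

# Hypothetical ELR probabilities from two post-processed models for the
# same set of station observations:
obs = np.array([0, 1, 0, 0, 1, 1, 0, 1])
print(brier_skill_score([0.1, 0.8, 0.2, 0.1, 0.7, 0.6, 0.3, 0.9], obs))
print(brier_skill_score([0.2, 0.6, 0.3, 0.2, 0.5, 0.5, 0.4, 0.7], obs))
```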

  18. Comparing models of debris-flow susceptibility in the alpine environment

    Science.gov (United States)

    Carrara, Alberto; Crosta, Giovanni; Frattini, Paolo

    Debris-flows are widespread in Val di Fassa (Trento Province, Eastern Italian Alps) where they constitute one of the most dangerous gravity-induced surface processes. From a large set of environmental characteristics and a detailed inventory of debris flows, we developed five models to predict the location of debris-flow source areas. The models differ in approach (statistical vs. physically-based) and type of terrain unit of reference (slope unit vs. grid cell). In the statistical models, a mix of several environmental factors classified areas with different debris-flow susceptibility; however, the factors that exert a strong discriminant power reduce to conditions of high slope-gradient, pasture or no vegetation cover, availability of detrital material, and active erosional processes. Since slope and land use are also used in the physically-based approach, all model results are largely controlled by the same leading variables. Overlaying susceptibility maps produced by the different methods (statistical vs. physically-based) for the same terrain unit of reference (grid cell) reveals a large difference, nearly 25% spatial mismatch. The spatial discrepancy exceeds 30% for susceptibility maps generated by the same method (discriminant analysis) but different terrain units (slope unit vs. grid cell). The size of the terrain unit also led to different susceptibility maps (almost 20% spatial mismatch). Maps based on different statistical tools (discriminant analysis vs. logistic regression) differed least (less than 10%). Hence, method and terrain unit proved to be equally important in mapping susceptibility. Model performance was evaluated from the percentages of terrain units that each model correctly classifies, the number of debris flows falling within the area classified as unstable by each model, and through ROC curve metrics. Although all the techniques implemented yielded essentially comparable results, the discriminant model based on the partition of the study
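    The spatial mismatch figures quoted above reduce, in principle, to a cell-by-cell comparison of two binary susceptibility maps. A minimal sketch, with random rasters standing in for the real maps:

```python
import numpy as np

def spatial_mismatch(map_a, map_b):
    """Percentage of terrain units (grid cells) classified differently
    (stable vs. unstable) by two susceptibility maps of equal shape."""
    a = np.asarray(map_a, dtype=bool)
    b = np.asarray(map_b, dtype=bool)
    return 100.0 * np.mean(a != b)

# Two hypothetical 0/1 rasters (1 = classified unstable):
stat_map = np.random.default_rng(0).integers(0, 2, size=(100, 100))
phys_map = np.random.default_rng(1).integers(0, 2, size=(100, 100))
print(f"spatial mismatch: {spatial_mismatch(stat_map, phys_map):.1f}%")
```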

  19. Comparative Study of Injury Models for Studying Muscle Regeneration in Mice.

    Directory of Open Access Journals (Sweden)

    David Hardy

    Full Text Available A longstanding goal in regenerative medicine is to reconstitute functional tissues or organs after injury or disease. Attention has focused on the identification and relative contribution of tissue specific stem cells to the regeneration process. Relatively little is known about how the physiological process is regulated by other tissue constituents. Numerous injury models are used to investigate tissue regeneration, however, these models are often poorly understood. Specifically, for skeletal muscle regeneration several models are reported in the literature, yet the relative impact on muscle physiology and the distinct cell types have not been extensively characterised. We have used transgenic Tg:Pax7nGFP and Flk1GFP/+ mouse models to respectively count the number of muscle stem (satellite) cells (SCs) and the number/shape of vessels by confocal microscopy. We performed histological and immunostainings to assess the differences in the key regeneration steps. Infiltration of immune cells and production of chemokines and cytokines were assessed in vivo by Luminex®. We compared the 4 most commonly used injury models, i.e. freeze injury (FI), barium chloride (BaCl2), notexin (NTX) and cardiotoxin (CTX). The FI was the most damaging. In this model, up to 96% of the SCs are destroyed with their surrounding environment (basal lamina and vasculature), leaving a "dead zone" devoid of viable cells. The regeneration process itself is fulfilled in all 4 models with virtually no fibrosis 28 days post-injury, except in the FI model. Inflammatory cells return to basal levels in the CTX and BaCl2 models but remain significantly elevated 1 month post-injury in the FI and NTX models. Interestingly, the number of SCs returned to normal only in the FI model 1 month post-injury, with SCs still cycling up to 3 months after the induction of the injury in the other models. Our studies show that the nature of the injury model should be chosen carefully depending on the experimental design and desired

  20. Bayesian meta-analysis models for microarray data: a comparative study

    Directory of Open Access Journals (Sweden)

    Song Joon J

    2007-03-01

    Full Text Available Background: With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches for meta-analysis of microarrays include either combining gene expression measures across studies or combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods. Results: Two Bayesian meta-analysis models for microarray data have recently been introduced. The first model combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Since the standardized expression integration model includes inter-study variability, it may improve accuracy of results versus the probability integration model. However, due to the small number of studies typical in microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model variability between studies, and thus its implementation is more straightforward. We found in simulations of two and five studies that combining probabilities outperformed combining standardized gene expression measures for three comparison values: the percent of true discovered genes in meta-analysis versus individual studies; the percent of true genes omitted in meta-analysis versus separate studies, and the number of true discovered genes for fixed levels of Bayesian false discovery. We identified similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was produced from the same microarray platform with only two conditions: a treatment and control, and that the data sets

  1. IMPACT OF ISLAMIC RELIGIOUS SYMBOL IN PRODUCING FAVORABLE ATTITUDE TOWARD ADVERTISEMENT

    Directory of Open Access Journals (Sweden)

    Abbas NASERI

    2012-06-01

    Full Text Available A review of the literature on religion and advertisement led to the identification of three lines of studies examining the influence of religion on advertising. These three lines of studies focused on attitudes toward advertising of controversial products, the presence of religious values in advertisement executions, and consumers' reactions to advertisements containing religious cues or symbols. The latter line has been followed modestly in a Christian advertising context but not in an Islamic one. Hijab, as a significant religious cue, might peripherally generate a favorable attitude toward advertisement among Muslims. It is suggested that information processing theories such as the Elaboration Likelihood Model provide a pertinent theoretical framework to examine this effect empirically.

  2. The HII Galaxy Hubble Diagram Strongly Favors $R_{\\rm h}=ct$ over $\\Lambda$CDM

    CERN Document Server

    Wei, Jun-Jie; Melia, Fulvio

    2016-01-01

    We continue to build support for the proposal to use HII galaxies (HIIGx) and giant extragalactic HII regions (GEHR) as standard candles to construct the Hubble diagram at redshifts beyond the current reach of Type Ia supernovae. Using a sample of 25 high-redshift HIIGx, 107 local HIIGx, and 24 GEHR, we confirm that the correlation between the emission-line luminosity and ionized-gas velocity dispersion is a viable luminosity indicator, and use it to test and compare the standard model $\\Lambda$CDM and the $R_{\\rm h}=ct$ Universe by optimizing the parameters in each cosmology using a maximization of the likelihood function. For the flat $\\Lambda$CDM model, the best fit is obtained with $\\Omega_{\\rm m}= 0.40_{-0.09}^{+0.09}$. However, statistical tools, such as the Akaike (AIC), Kullback (KIC) and Bayes (BIC) Information Criteria favor $R_{\\rm h}=ct$ over the standard model with a likelihood of $\\approx 94.8\\%-98.8\\%$ versus only $\\approx 1.2\\%-5.2\\%$. For $w$CDM (the version of $\\Lambda$CDM with a dark-energy...
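    For readers unfamiliar with how the information criteria translate into the quoted model likelihoods, the sketch below computes AIC differences and the corresponding Akaike weights; the log-likelihood values and parameter counts are hypothetical, not those of the HIIGx fits.

```python
import numpy as np

def akaike_weights(log_likelihoods, n_params):
    """AIC values and relative model likelihoods (Akaike weights)."""
    aic = np.array([2 * k - 2 * lnL for lnL, k in zip(log_likelihoods, n_params)])
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return aic, w / w.sum()

# Hypothetical fits: (maximum log-likelihood, number of free parameters)
# for a one-parameter model versus a two-parameter model.
aic, weights = akaike_weights(log_likelihoods=[-60.2, -59.8], n_params=[1, 2])
print(aic)       # lower AIC is preferred
print(weights)   # relative likelihood of each model given the data
```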

  3. Comparative evaluation of spectroscopic models using different multivariate statistical tools in a multicancer scenario.

    Science.gov (United States)

    Ghanate, A D; Kothiwale, S; Singh, S P; Bertrand, Dominique; Krishna, C Murali

    2011-02-01

    Cancer is now recognized as one of the major causes of morbidity and mortality. Histopathological diagnosis, the gold standard, is shown to be subjective, time consuming, prone to interobserver disagreement, and often fails to predict prognosis. Optical spectroscopic methods are being contemplated as adjuncts or alternatives to conventional cancer diagnostics. The most important aspect of these approaches is their objectivity, and multivariate statistical tools play a major role in realizing it. However, rigorous evaluation of the robustness of spectral models is a prerequisite. The utility of Raman spectroscopy in the diagnosis of cancers has been well established. Until now, the specificity and applicability of spectral models have been evaluated for specific cancer types. In this study, we have evaluated the utility of spectroscopic models representing normal and malignant tissues of the breast, cervix, colon, larynx, and oral cavity in a broader perspective, using different multivariate tests. The limit test, which was used in our earlier study, gave high sensitivity but suffered from poor specificity. The performance of other methods such as factorial discriminant analysis and partial least squares discriminant analysis is on par with more complex nonlinear methods such as decision trees, but they provide very little information about the classification model. This comparative study thus demonstrates not just the efficacy of Raman spectroscopic models but also the applicability and limitations of different multivariate tools for discrimination under complex conditions such as the multicancer scenario.

  4. An ECOOP web portal for visualising and comparing distributed coastal oceanography model and in situ data

    Science.gov (United States)

    Gemmell, A. L.; Barciela, R. M.; Blower, J. D.; Haines, K.; Harpham, Q.; Millard, K.; Price, M. R.; Saulter, A.

    2011-07-01

    As part of a large European coastal operational oceanography project (ECOOP), we have developed a web portal for the display and comparison of model and in situ marine data. The distributed model and in situ datasets are accessed via an Open Geospatial Consortium Web Map Service (WMS) and Web Feature Service (WFS) respectively. These services were developed independently and readily integrated for the purposes of the ECOOP project, illustrating the ease of interoperability resulting from adherence to international standards. The key feature of the portal is the ability to display co-plotted timeseries of the in situ and model data and the quantification of misfits between the two. By using standards-based web technology we allow the user to quickly and easily explore over twenty model data feeds and compare these with dozens of in situ data feeds without being concerned with the low level details of differing file formats or the physical location of the data. Scientific and operational benefits to this work include model validation, quality control of observations, data assimilation and decision support in near real time. In these areas it is essential to be able to bring different data streams together from often disparate locations.
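    A minimal sketch of the kind of standards-based request such a portal issues: it builds an OGC WMS 1.3.0 GetMap URL. The endpoint, layer name, and bounding box are hypothetical; only the query-parameter names follow the WMS specification.

```python
from urllib.parse import urlencode

# Hypothetical WMS endpoint and layer name; the query parameters follow the
# OGC WMS 1.3.0 GetMap convention used by portals of this kind.
base_url = "https://example.org/ecoop/wms"
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "model_sst",            # hypothetical model field
    "STYLES": "",
    "CRS": "CRS:84",
    "BBOX": "-10,45,5,60",            # lon/lat box over the NW European shelf
    "WIDTH": "512",
    "HEIGHT": "512",
    "FORMAT": "image/png",
    "TIME": "2011-07-01T00:00:00Z",   # optional time dimension
}
print(base_url + "?" + urlencode(params))
```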

  5. An ECOOP web portal for visualising and comparing distributed coastal oceanography model and in situ data

    Directory of Open Access Journals (Sweden)

    A. L. Gemmell

    2011-07-01

    Full Text Available As part of a large European coastal operational oceanography project (ECOOP), we have developed a web portal for the display and comparison of model and in situ marine data. The distributed model and in situ datasets are accessed via an Open Geospatial Consortium Web Map Service (WMS) and Web Feature Service (WFS), respectively. These services were developed independently and readily integrated for the purposes of the ECOOP project, illustrating the ease of interoperability resulting from adherence to international standards.

    The key feature of the portal is the ability to display co-plotted timeseries of the in situ and model data and the quantification of misfits between the two. By using standards-based web technology we allow the user to quickly and easily explore over twenty model data feeds and compare these with dozens of in situ data feeds without being concerned with the low level details of differing file formats or the physical location of the data.

    Scientific and operational benefits to this work include model validation, quality control of observations, data assimilation and decision support in near real time. In these areas it is essential to be able to bring different data streams together from often disparate locations.

  6. Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise.

    Science.gov (United States)

    Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A

    2015-04-21

    The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.
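    As a rough illustration of the consistency test described above, the sketch below checks whether the observed-minus-forced residual stays inside a percentile envelope derived from unforced samples; all series here are synthetic stand-ins for the empirical EUN and the model signal.

```python
import numpy as np

def within_envelope(observed, forced_signal, unforced_samples, lo=2.5, hi=97.5):
    """Is the observed GMT consistent with forced signal + unforced noise?

    unforced_samples : array (n_samples, n_years) of unforced GMT anomalies
                       drawn from an empirical noise model (synthetic here).
    Returns a boolean array, one value per year.
    """
    residual = np.asarray(observed) - np.asarray(forced_signal)
    lower = np.percentile(unforced_samples, lo, axis=0)
    upper = np.percentile(unforced_samples, hi, axis=0)
    return (residual >= lower) & (residual <= upper)

# Toy example with synthetic data standing in for the empirical EUN:
rng = np.random.default_rng(42)
years = 30
forced = np.linspace(0.0, 0.6, years)            # hypothetical forced warming
noise = rng.normal(0.0, 0.1, size=(1000, years)) # hypothetical unforced noise
obs = forced + rng.normal(0.0, 0.1, size=years)
print(within_envelope(obs, forced, noise).mean())  # fraction of years inside
```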

  7. Comparing risk of failure models in water supply networks using ROC curves

    Energy Technology Data Exchange (ETDEWEB)

    Debon, A., E-mail: andeau@eio.upv.e [Centro de Gestion de la Calidad y del Cambio, Dpt. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Carrion, A. [Centro de Gestion de la Calidad y del Cambio, Dpt. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Cabrera, E. [Dpto. De Ingenieria Hidraulica Y Medio Ambiente, Instituto Tecnologico del Agua, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Solano, H. [Universidad Diego Portales, Santiago (Chile)

    2010-01-15

    The problem of predicting the failure of water mains has been considered from different perspectives and using several methodologies in engineering literature. Nowadays, it is important to be able to accurately calculate the failure probabilities of pipes over time, since water company profits and service quality for citizens depend on pipe survival; forecasting pipe failures could have important economic and social implications. Quantitative tools (such as managerial or statistical indicators and reliable databases) are required in order to assess the current and future state of networks. Companies managing these networks are trying to establish models for evaluating the risk of failure in order to develop a proactive approach to the renewal process, instead of using traditional reactive pipe substitution schemes. The main objective of this paper is to compare models for evaluating the risk of failure in water supply networks. Using real data from a water supply company, this study has identified which network characteristics affect the risk of failure and which models better fit data to predict service breakdown. The comparison using the receiver operating characteristics (ROC) graph leads us to the conclusion that the best model is a generalized linear model. Also, we propose a procedure that can be applied to a pipe failure database, allowing the most appropriate decision rule to be chosen.
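    To illustrate the kind of ROC-based comparison the paper describes, the sketch below fits two candidate failure-risk models to a synthetic pipe database and compares their areas under the ROC curve; the covariates and failure mechanism are invented for illustration, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a pipe database: age, diameter, length, prior failures.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 60, n),      # pipe age (years)
    rng.uniform(50, 500, n),    # diameter (mm)
    rng.uniform(1, 200, n),     # length (m)
    rng.poisson(0.3, n),        # previous failures
])
logit = -3 + 0.05 * X[:, 0] - 0.003 * X[:, 1] + 0.8 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = failure observed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
for name, model in [("GLM (logistic)", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=1))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(name, "AUC =", round(roc_auc_score(y_te, p), 3))
```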

  8. Modelling formulations using gene expression programming--a comparative analysis with artificial neural networks.

    Science.gov (United States)

    Colbourn, E A; Roskilly, S J; Rowe, R C; York, P

    2011-10-09

    This study has investigated the utility and potential advantages of gene expression programming (GEP), a new development in evolutionary computing for modelling data and automatically generating equations that describe the cause-and-effect relationships in a system, as applied to four types of pharmaceutical formulation, and compared the models with those generated by neural networks, a technique now widely used in formulation development. Both methods were capable of discovering subtle and non-linear relationships within the data, with no requirement from the user to specify the functional forms that should be used. Although the neural networks rapidly developed models with higher values for the ANOVA R², these were black-box models and provided little insight into the key relationships. However, GEP, although significantly slower at developing models, generated relatively simple equations describing the relationships that could be interpreted directly. The results indicate that GEP can be considered an effective and efficient modelling technique for formulation data. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Comparative analysis of realistic CT-scan and simplified human airway models in airflow simulation.

    Science.gov (United States)

    Johari, Nasrul Hadi; Osman, Kahar; Helmi, Nor Harris N; Abdul Kadir, Mohammed A Rafiq

    2015-01-01

    Efforts to model the human upper respiratory system have undergone many phases. Geometrical proximity to the realistic shape has been the subject of many research projects. In this study, three different geometries of the trachea and main bronchus were modelled, which were reconstructed from computed tomography (CT) scan images. The geometrical variations were named realistic, simplified and oversimplified. Realistic refers to the lifelike image taken from digital imaging and communications in medicine format CT scan images, simplified refers to the reconstructed image based on natural images without realistic details pertaining to the rough surfaces, and oversimplified describes the straight wall geometry of the airway. The characteristics of steady state flows with different flow rates were investigated, simulating three varied physical activities and passing through each model. The results agree with previous studies where simplified models are sufficient for providing comparable results for airflow in human airways. This work further suggests that, under most exercise conditions, the idealised oversimplified model is not favourable for simulating either airflow regimes or airflow with particle depositions. However, in terms of immediate analysis for the prediction of abnormalities of various dimensions of human airways, the oversimplified techniques may be used.

  10. Comparative evaluation of spectroscopic models using different multivariate statistical tools in a multicancer scenario

    Science.gov (United States)

    Ghanate, A. D.; Kothiwale, S.; Singh, S. P.; Bertrand, Dominique; Krishna, C. Murali

    2011-02-01

    Cancer is now recognized as one of the major causes of morbidity and mortality. Histopathological diagnosis, the gold standard, is shown to be subjective, time consuming, prone to interobserver disagreement, and often fails to predict prognosis. Optical spectroscopic methods are being contemplated as adjuncts or alternatives to conventional cancer diagnostics. The most important aspect of these approaches is their objectivity, and multivariate statistical tools play a major role in realizing it. However, rigorous evaluation of the robustness of spectral models is a prerequisite. The utility of Raman spectroscopy in the diagnosis of cancers has been well established. Until now, the specificity and applicability of spectral models have been evaluated for specific cancer types. In this study, we have evaluated the utility of spectroscopic models representing normal and malignant tissues of the breast, cervix, colon, larynx, and oral cavity in a broader perspective, using different multivariate tests. The limit test, which was used in our earlier study, gave high sensitivity but suffered from poor specificity. The performance of other methods such as factorial discriminant analysis and partial least squares discriminant analysis is on par with more complex nonlinear methods such as decision trees, but they provide very little information about the classification model. This comparative study thus demonstrates not just the efficacy of Raman spectroscopic models but also the applicability and limitations of different multivariate tools for discrimination under complex conditions such as the multicancer scenario.

  11. Comparing personality disorder models: cross-method assessment of the FFM and DSM-IV-TR.

    Science.gov (United States)

    Samuel, Douglas B; Widiger, Thomas W

    2010-12-01

    The current edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR; American Psychiatric Association, 2000) defines personality disorders as categorical entities that are distinct from each other and from normal personality traits. However, many scientists now believe that personality disorders are best conceptualized using a dimensional model of traits that span normal and abnormal personality, such as the Five-Factor Model (FFM). However, if the FFM or any dimensional model is to be considered as a credible alternative to the current model, it must first demonstrate an increment in the validity of the assessment offered within a clinical setting. Thus, the current study extended previous research by comparing the convergent and discriminant validity of the current DSM-IV-TR model to the FFM across four assessment methodologies. Eighty-eight individuals receiving ongoing psychotherapy were assessed for the FFM and the DSM-IV-TR personality disorders using self-report, informant report, structured interview, and therapist ratings. The results indicated that the FFM had an appreciable advantage over the DSM-IV-TR in terms of discriminant validity and, at the domain level, convergent validity. Implications of the findings and directions for future research are discussed.

  12. Numerical study of surface energy partitioning on the Tibetan plateau: comparative analysis of two biosphere models

    Directory of Open Access Journals (Sweden)

    J. Hong

    2010-02-01

    Full Text Available The Tibetan Plateau is a critical region in the research of biosphere-atmosphere interactions on both regional and global scales due to its relation to Asian summer monsoon and El Niño. The unique environment on the Plateau provides valuable information for the evaluation of the models' surface energy partitioning associated with the summer monsoon. In this study, we investigated the surface energy partitioning on this important area through comparative analysis of two biosphere models constrained by the in-situ observation data. Indeed, the characteristics of the Plateau provide a unique opportunity to clarify the structural deficiencies of biosphere models as well as new insight into the surface energy partitioning on the Plateau. Our analysis showed that the observed inconsistency between the two biosphere models was mainly related to: (1) the parameterization for soil evaporation; (2) the way to deal with roughness lengths of momentum and scalars; and (3) the parameterization of the subgrid velocity scale for aerodynamic conductance. Our study demonstrates that one should carefully interpret the modeling results on the Plateau especially during the pre-monsoon period.
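    One of the model differences listed above concerns roughness lengths and aerodynamic conductance. Below is a minimal sketch of a neutral-stability conductance calculation with separate momentum and scalar roughness lengths; stability corrections are omitted, and the surface parameters are assumed values rather than those of either model.

```python
import numpy as np

K = 0.4  # von Karman constant

def aerodynamic_conductance(u, z_ref, d, z0m, z0h):
    """Neutral-stability aerodynamic conductance (m s-1) from the log law,
    with separate roughness lengths for momentum (z0m) and scalars (z0h)."""
    return K**2 * u / (np.log((z_ref - d) / z0m) * np.log((z_ref - d) / z0h))

# Hypothetical short-grass values: treating z0h equal to z0m versus
# z0h = z0m / 10 changes the conductance appreciably.
u, z_ref, d, z0m = 3.0, 2.0, 0.02, 0.003
print(aerodynamic_conductance(u, z_ref, d, z0m, z0m))
print(aerodynamic_conductance(u, z_ref, d, z0m, z0m / 10.0))
```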

  13. Comparing potential recharge estimates from three Land Surface Models across the western US

    Science.gov (United States)

    Niraula, Rewati; Meixner, Thomas; Ajami, Hoori; Rodell, Matthew; Gochis, David; Castro, Christopher L.

    2017-02-01

    Groundwater is a major source of water in the western US. However, there are limited recharge estimates in this region due to the complexity of recharge processes and the challenge of direct observations. Land surface Models (LSMs) could be a valuable tool for estimating current recharge and projecting changes due to future climate change. In this study, simulations of three LSMs (Noah, Mosaic and VIC) obtained from the North American Land Data Assimilation System (NLDAS-2) are used to estimate potential recharge in the western US. Modeled recharge was compared with published recharge estimates for several aquifers in the region. Annual recharge to precipitation ratios across the study basins varied from 0.01% to 15% for Mosaic, 3.2% to 42% for Noah, and 6.7% to 31.8% for VIC simulations. Mosaic consistently underestimates recharge across all basins. Noah captures recharge reasonably well in wetter basins, but overestimates it in drier basins. VIC slightly overestimates recharge in drier basins and slightly underestimates it for wetter basins. While the average annual recharge values vary among the models, the models were consistent in identifying high and low recharge areas in the region. Models agree in seasonality of recharge occurring dominantly during the spring across the region. Overall, our results highlight that LSMs have the potential to capture the spatial and temporal patterns as well as seasonality of recharge at large scales. Therefore, LSMs (specifically VIC and Noah) can be used as a tool for estimating future recharge in data limited regions.
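    The recharge-to-precipitation ratios quoted above reduce to a simple calculation once basin-mean totals are available. A sketch with hypothetical annual totals standing in for the NLDAS-2 output:

```python
import numpy as np

def recharge_ratio(recharge_mm, precip_mm):
    """Recharge-to-precipitation ratio (%) from basin-mean annual totals."""
    recharge_mm = np.asarray(recharge_mm, dtype=float)
    precip_mm = np.asarray(precip_mm, dtype=float)
    return 100.0 * recharge_mm.sum() / precip_mm.sum()

# Hypothetical basin-mean annual totals (mm) for one basin and three LSMs:
precip = np.array([310.0, 280.0, 455.0, 390.0])
for name, recharge in [("Mosaic", [1.0, 0.5, 3.0, 2.0]),
                       ("Noah",   [20.0, 12.0, 60.0, 41.0]),
                       ("VIC",    [28.0, 18.0, 75.0, 55.0])]:
    print(name, round(recharge_ratio(recharge, precip), 1), "%")
```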

  14. Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors

    Science.gov (United States)

    Kalyvas, N.; Liaparinos, P.

    2014-03-01

    Luminescent materials are employed as radiation-to-light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors. Amongst the most important is the interaction of light. Light attenuation (absorption and scattering) can be described either through "diffusion" theory in theoretical models or "quantum" theory in Monte Carlo methods. Although analytical methods, based on photon diffusion equations, have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the analytical modelling assumptions. The present study aimed to compare both methodologies and investigate the dependence of the analytical model's optical parameters on particle size. It was found that the optical photon attenuation coefficients calculated by analytical modeling decrease with particle size (in the range 1-12 μm). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement between the theoretical modulation transfer function and light escape values with respect to the Monte Carlo data.

  15. Numerical study of surface energy partitioning on the Tibetan Plateau: comparative analysis of two biosphere models

    Directory of Open Access Journals (Sweden)

    J. Hong

    2009-11-01

    Full Text Available The Tibetan Plateau is a critical region in the research of biosphere-atmosphere interactions on both regional and global scales due to its relation to Asian summer monsoon and El Niño. The unique environment on the Plateau provides valuable information for the evaluation of the models' surface energy partitioning associated with the summer monsoon. In this study, we investigated the surface energy partitioning on this important area through comparative analysis of two biosphere models constrained by the in-situ observation data. Indeed, the characteristics of the Plateau provide a unique opportunity to clarify the structural deficiencies of biosphere models as well as new insight into the surface energy partitioning on the Plateau. Our analysis showed that the observed inconsistency between the two biosphere models was mainly related to: (1) the parameterization for soil evaporation; (2) the way to deal with roughness lengths of momentum and scalars; and (3) the parameterization of the subgrid velocity scale for aerodynamic conductance. Our study demonstrates that one should carefully interpret the modeling results on the Plateau especially during the pre-monsoon period.

  16. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    Science.gov (United States)

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would

  17. Interaction of ordinary Portland cement and Opalinus Clay: Dual porosity modelling compared to experimental data

    Science.gov (United States)

    Jenni, A.; Gimmi, T.; Alt-Epping, P.; Mäder, U.; Cloet, V.

    2017-06-01

    Interactions between concrete and clays are driven by the strong chemical gradients in pore water and involve mineral reactions in both materials. In the context of a radioactive waste repository, these reactions may influence safety-relevant clay properties such as swelling pressure, permeability or radionuclide retention. Interfaces between ordinary Portland cement and Opalinus Clay show weaker, but more extensive chemical disturbance compared to a contact between low-pH cement and Opalinus Clay. As a consequence of chemical reactions porosity changes occur at cement-clay interfaces. These changes are stronger and may lead to complete pore clogging in the case of low-pH cements. The prediction of pore clogging by reactive transport simulations is very sensitive to the magnitude of diffusive solute fluxes, cement clinker chemistry, and phase reaction kinetics. For instance, the consideration of anion-depleted porosity in clays substantially influences overall diffusion and pore clogging at interfaces. A new concept of dual porosity modelling approximating Donnan equilibrium is developed and applied to an ordinary Portland cement - Opalinus Clay interface. The model predictions are compared with data from the cement-clay interaction (CI) field experiment in the Mt Terri underground rock laboratory (Switzerland), which represent 5 y of interaction. The main observations such as the decalcification of the cement at the interface, the Mg enrichment in the clay detached from the interface, and the S enrichment in the cement detached from the interface, are qualitatively predicted by the new model approach. The model results reveal multiple coupled processes that create the observed features. The quantitative agreement of modelled and measured data can be improved if uncertainties of key input parameters (tortuosities, reaction kinetics, especially of clay minerals) can be reduced.

  18. Comparative study of dental arch width in plaster models, photocopies and digitized images

    Directory of Open Access Journals (Sweden)

    Maria Cristina Rosseto

    2009-06-01

    Full Text Available The aim of this study was to comparatively assess dental arch width, in the canine and molar regions, by means of direct measurements from plaster models, photocopies and digitized images of the models. The sample consisted of 130 pairs of plaster models, photocopies and digitized images of the models of white patients (n = 65), both genders, with Class I and Class II Division 1 malocclusions, treated by standard Edgewise mechanics and extraction of the four first premolars. Maxillary and mandibular intercanine and intermolar widths were measured by a calibrated examiner, prior to and after orthodontic treatment, using the three modes of reproduction of the dental arches. Dispersion of the data relative to pre- and posttreatment intra-arch linear measurements (mm) was represented as box plots. The three measuring methods were compared by one-way ANOVA for repeated measurements (α = 0.05). Initial / final mean values varied as follows: 33.94 to 34.29 mm / 34.49 to 34.66 mm (maxillary intercanine width); 26.23 to 26.26 mm / 26.77 to 26.84 mm (mandibular intercanine width); 49.55 to 49.66 mm / 47.28 to 47.45 mm (maxillary intermolar width); and 43.28 to 43.41 mm / 40.29 to 40.46 mm (mandibular intermolar width). There were no statistically significant differences between mean dental arch widths estimated by the three studied methods, prior to and after orthodontic treatment. It may be concluded that photocopies and digitized images of the plaster models provided reliable reproductions of the dental arches for obtaining transversal intra-arch measurements.

  19. Comparing acceleration and speed tuning in macaque MT: physiology and modeling.

    Science.gov (United States)

    Price, N S C; Ono, S; Mustari, M J; Ibbotson, M R

    2005-11-01

    Studies of individual neurons in area MT have traditionally investigated their sensitivity to constant speeds. We investigated acceleration sensitivity in MT neurons by comparing their responses to constant steps and linear ramps in stimulus speed. Speed ramps constituted constant accelerations and decelerations between 0 and 240 degrees /s. Our results suggest that MT neurons do not have explicit acceleration sensitivity, although speed changes affected their responses in three main ways. First, accelerations typically evoked higher responses than the corresponding deceleration rate at all rates tested. We show that this can be explained by adaptation mechanisms rather than differential processing of positive and negative speed gradients. Second, we inferred a cell's preferred speed from the responses to speed ramps by finding the stimulus speed at the latency-adjusted time when response amplitude peaked. In most cells, the preferred speeds inferred from deceleration were higher than those for accelerations of the same rate or from steps in stimulus speed. Third, neuron responses to speed ramps were not well predicted by the transient or sustained responses to steps in stimulus speed. Based on these findings, we developed a model incorporating adaptation and a neuron's speed tuning that predicted the higher inferred speeds and lower spike rates for deceleration responses compared with acceleration responses. This model did not predict acceleration-specific responses, in accordance with the lack of acceleration sensitivity in the neurons. The outputs of this single-cell model were passed to a population-vector-based model used to estimate stimulus speed and acceleration. We show that such a model can accurately estimate relative speed and acceleration using information from the population of neurons in area MT.
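    As a toy illustration of the population read-out stage, the sketch below decodes stimulus speed from a bank of hypothetical log-spaced speed-tuned units using a rate-weighted average; it is not the authors' model, only a minimal stand-in for a population-vector-style estimate.

```python
import numpy as np

def population_vector_speed(rates, preferred_speeds):
    """Rate-weighted estimate of stimulus speed from a population of
    speed-tuned units (a toy stand-in for the population read-out above)."""
    rates = np.asarray(rates, dtype=float)
    prefs = np.asarray(preferred_speeds, dtype=float)
    # average on a log-speed axis, since speed preferences span 1-240 deg/s
    return np.exp(np.sum(rates * np.log(prefs)) / np.sum(rates))

# Hypothetical population: log-Gaussian tuning curves, stimulus at 32 deg/s
prefs = np.logspace(0, np.log10(240), 20)          # preferred speeds, deg/s
stim = 32.0
rates = np.exp(-0.5 * ((np.log(prefs) - np.log(stim)) / 0.7) ** 2) * 50
print(round(population_vector_speed(rates, prefs), 1))  # ~32 deg/s
```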

  20. Exploring the recent trend in esophageal adenocarcinoma incidence and mortality using comparative simulation modeling.

    Science.gov (United States)

    Kong, Chung Yin; Kroep, Sonja; Curtius, Kit; Hazelton, William D; Jeon, Jihyoun; Meza, Rafael; Heberle, Curtis R; Miller, Melecia C; Choi, Sung Eun; Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Feuer, Eric J; Inadomi, John M; Hur, Chin; Luebeck, E Georg

    2014-06-01

    The incidence of esophageal adenocarcinoma (EAC) has increased five-fold in the United States since 1975. The aim of our study was to estimate future U.S. EAC incidence and mortality and to shed light on the potential drivers in the disease process that are conduits for the dramatic increase in EAC incidence. A consortium of three research groups calibrated independent mathematical models to clinical and epidemiologic data including EAC incidence from the Surveillance, Epidemiology, and End Results (SEER 9) registry from 1975 to 2010. We then used a comparative modeling approach to project EAC incidence and mortality to year 2030. Importantly, all three models identified birth cohort trends affecting cancer progression as a major driver of the observed increases in EAC incidence and mortality. All models predict that incidence and mortality rates will continue to increase until 2030 but with a plateauing trend for recent male cohorts. The predicted ranges of incidence and mortality rates (cases per 100,000 person years) in 2030 are 8.4 to 10.1 and 5.4 to 7.4, respectively, for males, and 1.3 to 1.8 and 0.9 to 1.2 for females. Estimates of cumulative cause-specific EAC deaths between both sexes for years 2011 to 2030 range between 142,300 and 186,298, almost double the number of deaths in the past 20 years. Through comparative modeling, the projected increases in EAC cases and deaths represent a critical public health concern that warrants attention from cancer control planners to prepare potential interventions. Quantifying this burden of disease will aid health policy makers to plan appropriate cancer control measures. Cancer Epidemiol Biomarkers Prev; 23(6); 997-1006. ©2014 AACR. ©2014 American Association for Cancer Research.

  1. Comparing stream-specific to generalized temperature models to guide salmonid management in a changing climate

    Science.gov (United States)

    Andrew K. Carlson,; William W. Taylor,; Hartikainen, Kelsey M.; Dana M. Infante,; Beard, Douglas; Lynch, Abigail

    2017-01-01

    Global climate change is predicted to increase air and stream temperatures and alter thermal habitat suitability for growth and survival of coldwater fishes, including brook charr (Salvelinus fontinalis), brown trout (Salmo trutta), and rainbow trout (Oncorhynchus mykiss). In a changing climate, accurate stream temperature modeling is increasingly important for sustainable salmonid management throughout the world. However, finite resource availability (e.g. funding, personnel) drives a tradeoff between thermal model accuracy and efficiency (i.e. cost-effective applicability at management-relevant spatial extents). Using different projected climate change scenarios, we compared the accuracy and efficiency of stream-specific and generalized (i.e. region-specific) temperature models for coldwater salmonids within and outside the State of Michigan, USA, a region with long-term stream temperature data and productive coldwater fisheries. Projected stream temperature warming between 2016 and 2056 ranged from 0.1 to 3.8 °C in groundwater-dominated streams and 0.2–6.8 °C in surface-runoff dominated systems in the State of Michigan. Despite their generally lower accuracy in predicting exact stream temperatures, generalized models accurately projected salmonid thermal habitat suitability in 82% of groundwater-dominated streams, including those with brook charr (80% accuracy), brown trout (89% accuracy), and rainbow trout (75% accuracy). In contrast, generalized models predicted thermal habitat suitability in runoff-dominated streams with much lower accuracy (54%). These results suggest that, amidst climate change and constraints in resource availability, generalized models are appropriate to forecast thermal conditions in groundwater-dominated streams within and outside Michigan and inform regional-level salmonid management strategies that are practical for coldwater fisheries managers, policy makers, and the public. We recommend fisheries professionals reserve resource

  2. Comparing stochastic differential equations and agent-based modelling and simulation for early-stage cancer.

    Science.gov (United States)

    Figueredo, Grazziela P; Siebers, Peer-Olaf; Owen, Markus R; Reps, Jenna; Aickelin, Uwe

    2014-01-01

    There is great potential to be explored regarding the use of agent-based modelling and simulation as an alternative paradigm to investigate early-stage cancer interactions with the immune system. It does not suffer from some limitations of ordinary differential equation models, such as the lack of stochasticity, representation of individual behaviours rather than aggregates and individual memory. In this paper we investigate the potential contribution of agent-based modelling and simulation when contrasted with stochastic versions of ODE models using early-stage cancer examples. We seek answers to the following questions: (1) Does this new stochastic formulation produce similar results to the agent-based version? (2) Can these methods be used interchangeably? (3) Do agent-based model outcomes reveal any benefit when compared to the Gillespie results? To answer these research questions we investigate three well-established mathematical models describing interactions between tumour cells and immune elements. These case studies were re-conceptualised under an agent-based perspective and also converted to the Gillespie algorithm formulation. Our interest in this work, therefore, is to establish a methodological discussion regarding the usability of different simulation approaches, rather than provide further biological insights into the investigated case studies. Our results show that it is possible to obtain equivalent models that implement the same mechanisms; however, the incapacity of the Gillespie algorithm to retain individual memory of past events affects the similarity of some results. Furthermore, the emergent behaviour of ABMS produces extra patterns of behaviour in the system, which were not obtained by the Gillespie algorithm.
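    For readers unfamiliar with the Gillespie formulation being contrasted with agent-based simulation, the sketch below runs the direct-method algorithm on a toy tumour birth-death process with a fixed immune effector level; the reactions and rate constants are invented for illustration and are not one of the three published case studies.

```python
import numpy as np

def gillespie_tumour(T0=50, E=10, b=0.3, d=0.05, k=0.02, t_end=50.0, seed=0):
    """Direct-method Gillespie simulation of a toy tumour birth-death process
    with a fixed immune effector level E (an illustration of the algorithm,
    not any of the published models)."""
    rng = np.random.default_rng(seed)
    t, T = 0.0, T0
    times, counts = [t], [T]
    while t < t_end and T > 0:
        rates = np.array([b * T,        # tumour cell division
                          d * T,        # natural death
                          k * T * E])   # immune-mediated kill
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # waiting time to next event
        event = rng.choice(3, p=rates / total)  # which reaction fires
        T += 1 if event == 0 else -1
        times.append(t)
        counts.append(T)
    return np.array(times), np.array(counts)

times, counts = gillespie_tumour()
print(counts[-1], "tumour cells at t =", round(times[-1], 2))
```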

  3. Comparing stochastic differential equations and agent-based modelling and simulation for early-stage cancer.

    Directory of Open Access Journals (Sweden)

    Grazziela P Figueredo

    Full Text Available There is great potential to be explored regarding the use of agent-based modelling and simulation as an alternative paradigm to investigate early-stage cancer interactions with the immune system. It does not suffer from some limitations of ordinary differential equation models, such as the lack of stochasticity, representation of individual behaviours rather than aggregates and individual memory. In this paper we investigate the potential contribution of agent-based modelling and simulation when contrasted with stochastic versions of ODE models using early-stage cancer examples. We seek answers to the following questions: (1) Does this new stochastic formulation produce similar results to the agent-based version? (2) Can these methods be used interchangeably? (3) Do agent-based model outcomes reveal any benefit when compared to the Gillespie results? To answer these research questions we investigate three well-established mathematical models describing interactions between tumour cells and immune elements. These case studies were re-conceptualised under an agent-based perspective and also converted to the Gillespie algorithm formulation. Our interest in this work, therefore, is to establish a methodological discussion regarding the usability of different simulation approaches, rather than provide further biological insights into the investigated case studies. Our results show that it is possible to obtain equivalent models that implement the same mechanisms; however, the incapacity of the Gillespie algorithm to retain individual memory of past events affects the similarity of some results. Furthermore, the emergent behaviour of ABMS produces extra patterns of behaviour in the system, which were not obtained by the Gillespie algorithm.

  4. Comparing estimates of climate change impacts from process-based and statistical crop models

    Science.gov (United States)

    Lobell, David B.; Asseng, Senthold

    2017-01-01

    The potential impacts of climate change on crop productivity are of widespread interest to those concerned with addressing climate change and improving global food security. Two common approaches to assess these impacts are process-based simulation models, which attempt to represent key dynamic processes affecting crop yields, and statistical models, which estimate functional relationships between historical observations of weather and yields. Examples of both approaches are increasingly found in the scientific literature, although often published in different disciplinary journals. Here we compare published sensitivities to changes in temperature, precipitation, carbon dioxide (CO2), and ozone from each approach for the subset of crops, locations, and climate scenarios for which both have been applied. Despite a common perception that statistical models are more pessimistic, we find no systematic differences between the predicted sensitivities to warming from process-based and statistical models up to +2 °C, with limited evidence at higher levels of warming. For precipitation, there are many reasons why estimates could be expected to differ, but few estimates exist to develop robust comparisons, and precipitation changes are rarely the dominant factor for predicting impacts given the prominent role of temperature, CO2, and ozone changes. A common difference between process-based and statistical studies is that the former tend to include the effects of CO2 increases that accompany warming, whereas statistical models typically do not. Major needs moving forward include incorporating CO2 effects into statistical studies, improving both approaches’ treatment of ozone, and increasing the use of both methods within the same study. At the same time, those who fund or use crop model projections should understand that in the short-term, both approaches when done well are likely to provide similar estimates of warming impacts, with statistical models generally

  5. Compare Energy Use in Variable Refrigerant Flow Heat Pumps Field Demonstration and Computer Model

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Chandan; Raustad, Richard

    2013-07-01

    Variable Refrigerant Flow (VRF) heat pumps are often regarded as energy efficient air-conditioning systems which offer electricity savings as well as reduction in peak electric demand while providing improved individual zone setpoint control. One of the key advantages of VRF systems is minimal duct losses which provide significant reduction in energy use and duct space. However, there is limited data available to show their actual performance in the field. Since VRF systems are increasingly gaining market share in the US, it is highly desirable to have more actual field performance data of these systems. An effort was made in this direction to monitor VRF system performance over an extended period of time in a US national lab test facility. Due to increasing demand by the energy modeling community, an empirical model to simulate VRF systems was implemented in the building simulation program EnergyPlus. This paper presents the comparison of energy consumption as measured in the national lab and as predicted by the program. For increased accuracy in the comparison, a customized weather file was created by using measured outdoor temperature and relative humidity at the test facility. Other inputs to the model included building construction, VRF system model based on lab measured performance, occupancy of the building, lighting/plug loads, and thermostat set-points etc. Infiltration model inputs were adjusted in the beginning to tune the computer model and then subsequent field measurements were compared to the simulation results. Differences between the computer model results and actual field measurements are discussed. The computer generated VRF performance closely resembled the field measurements.

  6. Comparative characterization of the microbial diversities of an artificial microbialite model and a natural stromatolite.

    Science.gov (United States)

    Havemann, Stephanie A; Foster, Jamie S

    2008-12-01

    Microbialites are organosedimentary structures that result from the trapping, binding, and lithification of sediments by microbial mat communities. In this study we developed a model artificial microbialite system derived from natural stromatolites, a type of microbialite, collected from Exuma Sound, Bahamas. We demonstrated that the morphology of the artificial microbialite was consistent with that of the natural system in that there was a multilayer community with a pronounced biofilm on the surface, a concentrated layer of filamentous cyanobacteria in the top 5 mm, and a lithified layer of fused oolitic sand grains in the subsurface. The fused grain layer was comprised predominantly of the calcium carbonate polymorph aragonite, which corresponded to the composition of the Bahamian stromatolites. The microbial diversity of the artificial microbialites and that of natural stromatolites were also compared using automated ribosomal intergenic spacer analysis (ARISA) and 16S rRNA gene sequencing. The ARISA profiling indicated that the Shannon indices of the two communities were comparable and that the overall diversity was not significantly lower in the artificial microbialite model. Bacterial clone libraries generated from each of the three artificial microbialite layers and natural stromatolites indicated that the cyanobacterial and crust layers most closely resembled the ecotypes detected in the natural stromatolites and were dominated by Proteobacteria and Cyanobacteria. We propose that such model artificial microbialites can serve as experimental analogues for natural stromatolites.
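    The Shannon indices used to compare the two communities follow directly from relative abundances. A minimal sketch with hypothetical ARISA peak areas standing in for the measured profiles:

```python
import numpy as np

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) from an abundance profile,
    e.g. ARISA peak areas or clone counts per taxon (hypothetical below)."""
    a = np.asarray(abundances, dtype=float)
    p = a[a > 0] / a.sum()
    return -np.sum(p * np.log(p))

artificial = [120, 80, 60, 30, 10, 5]    # hypothetical ARISA peak areas
natural = [100, 90, 70, 25, 15, 8, 2]
print(round(shannon_index(artificial), 2), round(shannon_index(natural), 2))
```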

  7. Comparing regional precipitation and temperature extremes in climate model and reanalysis products

    Directory of Open Access Journals (Sweden)

    Oliver Angélil

    2016-09-01

    Full Text Available A growing field of research aims to characterise the contribution of anthropogenic emissions to the likelihood of extreme weather and climate events. These analyses can be sensitive to the shapes of the tails of simulated distributions. If tails are found to be unrealistically short or long, the anthropogenic signal emerges more or less clearly, respectively, from the noise of possible weather. Here we compare the chance of daily land-surface precipitation and near-surface temperature extremes generated by three Atmospheric Global Climate Models typically used for event attribution, with distributions from six reanalysis products. The likelihoods of extremes are compared for area-averages over grid cell and regional sized spatial domains. Results suggest a bias favouring overly strong attribution estimates for hot and cold events over many regions of Africa and Australia, and a bias favouring overly weak attribution estimates over regions of North America and Asia. For rainfall, results are more sensitive to geographic location. Although the three models show similar results over many regions, they do disagree over others. Equally, results highlight the discrepancy amongst reanalyses products. This emphasises the importance of using multiple reanalysis and/or observation products, as well as multiple models in event attribution studies.
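    A simple way to quantify the tail biases described above is to compare exceedance probabilities above a reanalysis-based threshold. The sketch below does this with synthetic data standing in for the model and reanalysis series:

```python
import numpy as np

def exceedance_ratio(model_daily, reanalysis_daily, quantile=0.99):
    """Ratio of the model's probability of exceeding the reanalysis
    99th-percentile threshold to the reanalysis probability itself; values
    far from 1 indicate tails that are too long or too short."""
    threshold = np.quantile(reanalysis_daily, quantile)
    p_model = np.mean(np.asarray(model_daily) > threshold)
    p_ref = 1.0 - quantile
    return p_model / p_ref

# Synthetic stand-ins for area-averaged daily temperature anomalies:
rng = np.random.default_rng(3)
reanalysis = rng.normal(0.0, 1.0, 20000)
model = rng.normal(0.0, 1.15, 20000)     # slightly too-wide simulated tails
print(round(exceedance_ratio(model, reanalysis), 2))  # > 1: extremes too likely
```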

  8. Comparing regional precipitation and temperature extremes in climate model and reanalysis products.

    Science.gov (United States)

    Angélil, Oliver; Perkins-Kirkpatrick, Sarah; Alexander, Lisa V; Stone, Dáithí; Donat, Markus G; Wehner, Michael; Shiogama, Hideo; Ciavarella, Andrew; Christidis, Nikolaos

    2016-09-01

    A growing field of research aims to characterise the contribution of anthropogenic emissions to the likelihood of extreme weather and climate events. These analyses can be sensitive to the shapes of the tails of simulated distributions. If tails are found to be unrealistically short or long, the anthropogenic signal emerges more or less clearly, respectively, from the noise of possible weather. Here we compare the chance of daily land-surface precipitation and near-surface temperature extremes generated by three Atmospheric Global Climate Models typically used for event attribution, with distributions from six reanalysis products. The likelihoods of extremes are compared for area-averages over grid cell and regional sized spatial domains. Results suggest a bias favouring overly strong attribution estimates for hot and cold events over many regions of Africa and Australia, and a bias favouring overly weak attribution estimates over regions of North America and Asia. For rainfall, results are more sensitive to geographic location. Although the three models show similar results over many regions, they do disagree over others. Equally, results highlight the discrepancy amongst reanalyses products. This emphasises the importance of using multiple reanalysis and/or observation products, as well as multiple models in event attribution studies.

  9. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a classification accuracy above 90.0% as a mandatory requirement for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracy rates over 90.0%. Therefore, four SSVEP stimuli were used to control four types of robot behavior, while six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded a faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and to require less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.

  10. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    Science.gov (United States)

    Zhao, Jing; Li, Wei; Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a classification accuracy above 90.0% as a mandatory requirement for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracy rates over 90.0%. Therefore, four SSVEP stimuli were used to control four types of robot behavior, while six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded a faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and to require less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
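
    The information transfer rates quoted above are commonly computed with the Wolpaw formula; the sketch below evaluates it for the reported class counts, accuracies and response times. Because published figures depend on the exact timing convention (for example, whether inter-trial gaps are counted), the numbers it prints are illustrative and need not match the paper's values exactly.

    # Standard Wolpaw ITR formula often used for BCI paradigms such as the
    # 4-class SSVEP and 6-class P300 models above. Timing conventions vary,
    # so the printed bit rates are illustrative only.
    import math

    def itr_bits_per_min(n_classes, accuracy, seconds_per_selection):
        p, n = accuracy, n_classes
        if p >= 1.0:
            bits = math.log2(n)
        else:
            bits = (math.log2(n) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * 60.0 / seconds_per_selection

    print(f"SSVEP (4 classes, 90.3%, 3.65 s): {itr_bits_per_min(4, 0.903, 3.65):.1f} bits/min")
    print(f"P300  (6 classes, 91.3%, 6.60 s): {itr_bits_per_min(6, 0.913, 6.60):.1f} bits/min")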

  11. Context-dependent mutation rates may cause spurious signatures of a fixation bias favoring higher GC-content in humans.

    Science.gov (United States)

    Hernandez, Ryan D; Williamson, Scott H; Zhu, Lan; Bustamante, Carlos D

    2007-10-01

    Understanding the proximate and ultimate causes underlying the evolution of nucleotide composition in mammalian genomes is of fundamental interest to the study of molecular evolution. Comparative genomics studies have revealed that many more substitutions occur from G and C nucleotides to A and T nucleotides than the reverse, suggesting that mammalian genomes are not at equilibrium for base composition. Analysis of human polymorphism data suggests that mutations that increase GC-content tend to be at much higher frequencies than those that decrease or preserve GC-content when the ancestral allele is inferred via parsimony using the chimpanzee genome. These observations have been interpreted as evidence for a fixation bias in favor of G and C alleles due to either positive natural selection or biased gene conversion. Here, we test the robustness of this interpretation to violations of the parsimony assumption using a data set of 21,488 noncoding single nucleotide polymorphisms (SNPs) discovered by the National Institute of Environmental Health Sciences (NIEHS) SNPs project via direct resequencing of n = 95 individuals. Applying standard nonparametric and parametric population genetic approaches, we replicate the signatures of a fixation bias in favor of G and C alleles when the ancestral base is assumed to be the base found in the chimpanzee outgroup. However, upon taking into account the probability of misidentifying the ancestral state of each SNP using a context-dependent mutation model, the corrected distribution of SNP frequencies for GC-content increasing SNPs are nearly indistinguishable from the patterns observed for other types of mutations, suggesting that the signature of fixation bias is a spurious artifact of the parsimony assumption.
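
    A simplified illustration of the mispolarization effect discussed above: if a constant fraction of SNPs has the ancestral state misidentified, derived alleles at count i are recorded at count n - i, inflating the high-frequency tail of the site frequency spectrum. The study itself used a context-dependent mutation model rather than the single error rate assumed here.

    # How ancestral-state misidentification distorts a site frequency spectrum:
    # with probability `error`, a derived allele at count i is recorded at n - i.
    # A single constant error rate is used here for clarity only.
    import numpy as np

    n = 20                                        # sampled chromosomes
    i = np.arange(1, n)                           # derived allele counts 1..n-1
    true_sfs = 1.0 / i                            # neutral expectation, proportional to 1/i
    true_sfs /= true_sfs.sum()

    error = 0.10                                  # assumed probability of mispolarizing a SNP
    observed_sfs = (1 - error) * true_sfs + error * true_sfs[::-1]

    # Mispolarization inflates the high-frequency classes, mimicking a fixation bias
    print("true     high-frequency mass (i > n/2):", round(true_sfs[i > n // 2].sum(), 3))
    print("observed high-frequency mass (i > n/2):", round(observed_sfs[i > n // 2].sum(), 3))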

  12. Comparative Study of Elastic Network Model and Protein Contact Network for Protein Complexes: The Hemoglobin Case

    Directory of Open Access Journals (Sweden)

    Guang Hu

    2017-01-01

    The overall topology and interfacial interactions play key roles in understanding the structural and functional principles of protein complexes. The Elastic Network Model (ENM) and the Protein Contact Network (PCN) are two widely used methods for high-throughput investigation of structures and interactions within protein complexes. In this work, a comparative analysis of ENM and PCN applied to hemoglobin (Hb) was taken as a case study. We examine four types of structural and dynamical paradigms, namely, conformational change between different states of Hbs, modular analysis, allosteric mechanism studies, and interface characterization of Hb. The comparative study shows that ENM has an advantage in studying dynamical properties and protein-protein interfaces, while PCN is better for describing protein structures quantitatively at both local and global levels. We suggest that the integration of ENM and PCN would provide a potentially powerful tool in structural systems biology.
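
    A minimal sketch of the two representations being compared, built from one Cα coordinate array: a Gaussian-network-model Kirchhoff matrix (one common flavour of elastic network model) and a contact-network adjacency matrix with node degrees. Random coordinates stand in for an actual hemoglobin structure; a real analysis would parse a PDB file.

    # Build a protein contact network and a Gaussian network model from the same
    # (placeholder) Calpha coordinates.
    import numpy as np

    rng = np.random.default_rng(1)
    coords = rng.uniform(0, 30, size=(100, 3))          # placeholder Calpha coordinates (angstrom)
    cutoff = 7.0                                        # commonly used contact cutoff

    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    contacts = (dists < cutoff) & ~np.eye(len(coords), dtype=bool)

    # Protein contact network: adjacency matrix and node degrees
    adjacency = contacts.astype(int)
    degree = adjacency.sum(axis=1)

    # Gaussian network model: Kirchhoff (connectivity) matrix
    kirchhoff = -adjacency.astype(float)
    np.fill_diagonal(kirchhoff, degree)

    # Mean-square fluctuations are proportional to the diagonal of the pseudo-inverse
    msf = np.diag(np.linalg.pinv(kirchhoff))

    print("max degree:", degree.max(), " mean MSF (arbitrary units):", round(float(msf.mean()), 3))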

  13. Comparing anesthesia with isoflurane and fentanyl/fluanisone/midazolam in a rat model of cardiac arrest

    DEFF Research Database (Denmark)

    Secher, Niels; Malte, Christian Lind; Tønnesen, Else

    2016-01-01

    CA model. We hypothesize that isoflurane anesthesia improves short-term outcome following resuscitation from CA compared with a subcutaneous fentanyl/fluanisone/midazolam anesthesia. METHODS: Male Sprague Dawley rats were randomized to anesthesia with isoflurane (n=11) or fentanyl… samples for Endothelin-1 and catecholamines were drawn before and after CA. KEY FINDINGS: Compared with fentanyl/fluanisone/midazolam anesthesia, isoflurane resulted in a shorter time to return of spontaneous circulation (ROSC), less use of epinephrine, increased coronary perfusion pressure during CPR…, higher mean arterial pressure post ROSC, increased plasma levels of Endothelin-1 and decreased levels of epinephrine. The choice of anesthesia did not affect ROSC rate or systemic O2 consumption. CONCLUSION: Isoflurane reduces time to ROSC, increases coronary perfusion pressure, and improves hemodynamic…

  14. Modeling vancomycin release kinetics from microporous calcium phosphate ceramics comparing static and dynamic immersion conditions.

    Science.gov (United States)

    Gbureck, Uwe; Vorndran, Elke; Barralet, Jake E

    2008-09-01

    The release kinetics of vancomycin from calcium phosphate dihydrate (brushite) matrices and polymer/brushite composites were compared using different fluid replacement regimes: regular replacement (static conditions) and a continuous flow technique (dynamic conditions). The use of a constantly refreshed flow resulted in faster drug release due to a constantly high diffusion gradient between the drug-loaded matrix and the eluting medium. Drug release was modeled using the Weibull, Peppas and Higuchi equations. The results showed that drug liberation was diffusion controlled for the ceramic matrices, whereas ceramic/polymer composites led to a mixed diffusion- and degradation-controlled release mechanism. For these materials, the continuous flow technique produced a faster release than the regular fluid replacement technique due to an accelerated polymer degradation rate.
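
    A minimal sketch of fitting the three release models named above (Higuchi, Korsmeyer-Peppas and Weibull) to a cumulative-release curve with scipy; the time and release values are synthetic placeholders, not the vancomycin data.

    # Fit Higuchi, Korsmeyer-Peppas and Weibull release models to a synthetic
    # cumulative-release curve and compare their residual errors.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([1, 2, 4, 8, 12, 24, 48, 72], dtype=float)         # hours
    q = np.array([8, 12, 18, 26, 32, 45, 62, 74], dtype=float)      # % released (synthetic)

    higuchi = lambda t, kH: kH * np.sqrt(t)
    peppas  = lambda t, k, n: k * t**n
    weibull = lambda t, qmax, tau, beta: qmax * (1.0 - np.exp(-(t / tau)**beta))

    for name, f, p0 in [("Higuchi", higuchi, (10,)),
                        ("Korsmeyer-Peppas", peppas, (10, 0.5)),
                        ("Weibull", weibull, (100, 20, 1.0))]:
        popt, _ = curve_fit(f, t, q, p0=p0, maxfev=10_000)
        rmse = np.sqrt(np.mean((f(t, *popt) - q) ** 2))
        print(f"{name:17s} params={np.round(popt, 3)}  RMSE={rmse:.2f} %")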

  15. A Comparative Study of Relational and Non-Relational Database Models in a Web- Based Application

    Directory of Open Access Journals (Sweden)

    Cornelia Gyorödi

    2015-11-01

    The purpose of this paper is to present a comparative study between relational and non-relational database models in a web-based application, by executing various operations on both relational and non-relational databases and highlighting the results obtained during performance comparison tests. The study was based on the implementation of a web-based application for population records. For the non-relational database, we used MongoDB and for the relational database, we used MSSQL 2014. We also present the advantages of using a non-relational database compared to a relational database integrated in a web-based application which needs to manipulate a large amount of data.

  16. A computer modeling tool for comparing novel ICD electrode orientations in children and adults.

    Science.gov (United States)

    Jolley, Matthew; Stinstra, Jeroen; Pieper, Steve; Macleod, Rob; Brooks, Dana H; Cecchin, Frank; Triedman, John K

    2008-04-01

    Use of implantable cardiac defibrillators (ICDs) in children and patients with congenital heart disease is complicated by body size and anatomy. A variety of creative implantation techniques has been used empirically in these groups on an ad hoc basis. To rationalize ICD placement in special populations, we used subject-specific, image-based finite element models (FEMs) to compare electric fields and expected defibrillation thresholds (DFTs) using standard and novel electrode configurations. FEMs were created by segmenting normal torso computed tomography scans of subjects ages 2, 10, and 29 years and 1 adult with congenital heart disease into tissue compartments, meshing, and assigning tissue conductivities. The FEMs were modified by interactive placement of ICD electrode models in clinically relevant electrode configurations, and metrics of relative defibrillation safety and efficacy were calculated. Predicted DFTs for standard transvenous configurations were comparable with published results. Although transvenous systems generally predicted lower DFTs, a variety of extracardiac orientations were also predicted to be comparably effective in children and adults. Significant trend effects on DFTs were associated with body size and electrode length. In many situations, small alterations in electrode placement and patient anatomy resulted in significant variation of predicted DFT. We also show patient-specific use of this technique for optimization of electrode placement. Image-based FEMs allow predictive modeling of defibrillation scenarios and predict large changes in DFTs with clinically relevant variations of electrode placement. Extracardiac ICDs are predicted to be effective in both children and adults. This approach may aid both ICD development and patient-specific optimization of electrode placement. Further development and validation are needed for clinical or industrial utilization.

  17. Luminal cells are favored as the cell of origin for prostate cancer

    Science.gov (United States)

    Wang, Zhu A.; Toivanen, Roxanne; Bergren, Sarah K.; Chambon, Pierre; Shen, Michael M.

    2014-01-01

    The identification of cell types of origin for cancer has important implications for tumor stratification and personalized treatment. For prostate cancer, the cell of origin has been intensively studied, but it has remained unclear whether basal or luminal epithelial cells, or both, represent cells of origin under physiological conditions in vivo. Here, we use a novel lineage-tracing strategy to assess the cell of origin in a diverse range of mouse models, including Nkx3.1+/–; Pten+/–, Pten+/–, Hi-Myc, and TRAMP mice, as well as a hormonal carcinogenesis model. Our results show that luminal cells are consistently the observed cell of origin for each model in situ; however, explanted basal cells from these mice can generate tumors in grafts. Consequently, we propose that luminal cells are favored as cells of origin in many contexts, whereas basal cells only give rise to tumors after differentiation into luminal cells. PMID:25176651

  18. Comparative analysis for various redox flow batteries chemistries using a cost performance model

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, Aladsair J.; Viswanathan, Vilayanur V.; Stephenson, David E.; Wang, Wei; Thomsen, Edwin C.; Reed, David M.; Li, Bin; Balducci, Patrick J.; Kintner-Meyer, Michael CW; Sprenkle, Vincent L.

    2015-10-20

    A robust performance-based cost model is developed for all-vanadium, iron-vanadium and iron-chromium redox flow batteries. System aspects such as shunt current losses, pumping losses and thermal management are accounted for. The objective function, set to minimize system cost, allows determination of stack design and operating parameters such as current density, flow rate and depth of discharge (DOD). Component costs obtained from vendors are used to calculate system costs for various time frames. Data from a 2 kW stack were used to estimate unit energy costs and compared with model estimates for the same electrode size. The tool has been shared with the redox flow battery community to both validate their stack data and guide future direction.

  19. Comparing mesoscale chemistry-transport model and remote-sensed Aerosol Optical Depth

    CERN Document Server

    Carnevale, C; Pisoni, E; Volta, M

    2010-01-01

    A comparison of modeled and observed Aerosol Optical Depth (AOD) is presented. The 3D Eulerian multiphase chemistry-transport model TCAM is employed for simulating AOD at the mesoscale. MODIS satellite sensor and AERONET photometer AOD are used for comparing spatial patterns and time series. TCAM simulations for the year 2004 over a domain containing the Po Valley and nearly the whole of Northern Italy are employed. For the computation of AOD, a configuration with external mixing of the chemical species is considered. Furthermore, a parametrization of the effect of moisture on both aerosol size and composition is used. An analysis of the contributions of the granulometric classes to the extinction coefficient reveals the dominant role of the inorganic compounds of submicron size. For the analysis of spatial patterns, summer and winter case studies are considered. TCAM AOD reproduces spatial patterns similar to those retrieved from space, but AOD values are generally smaller by an order of magnitude. However, accounting a…

  20. Comparative modeling of Bronze Age land use in the Malatya Plain (Turkey)

    Science.gov (United States)

    Arıkan, Bülent; Restelli, Francesca Balossi; Masi, Alessia

    2016-03-01

    Computational modeling in archeology has proven to be a useful tool in quantifying changes in the paleoenvironment. This especially useful method combines data from diverse disciplines to answer questions focusing on the complex and non-linear aspects of human-environment interactions. The research presented here uses various proxy records to compare the changes in climate during the Bronze Age in the Malatya Plain in eastern Anatolia, which is situated at the northern extremity of northern Mesopotamia. Extensive agropastoral land use modeling was applied to three sites of different size and function in the Malatya Plain during the Early Bronze Age I period to simulate the varying scale and intensity of human impacts in relation to changes in the level of social organization, demography, and temporal length. The results suggest that even in land use types subjected to a light footprint, the scale and intensity of anthropogenic impacts change significantly in relation to the level of social organization.

  1. Modeling and comparative study of various detection techniques for FMCW LIDAR using optisystem

    Science.gov (United States)

    Elghandour, Ahmed H.; Ren, Chen D.

    2013-09-01

    In this paper we investigated different detection techniques for an FMCW LIDAR system, in particular direct detection, coherent heterodyne detection and coherent homodyne detection, using the Optisystem package. Models for the target, the propagation channel and the various detection techniques were developed in Optisystem, and a comparative study among the detection techniques for FMCW LIDAR systems was carried out analytically and by simulation using the developed model. The performance of direct detection, heterodyne detection and homodyne detection for the FMCW LIDAR system was calculated and simulated in Optisystem, and the simulated performance was checked against results from a MATLAB simulator. The results show that direct detection is sensitive to the intensity of the received electromagnetic signal and has the advantage of low system complexity over the other detection architectures, at the expense of thermal noise being the dominant noise source and relatively poor sensitivity. In addition, much higher detection sensitivity can be achieved using the coherent optical mixing performed by heterodyne and homodyne detection.
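
    Independent of the detection architecture and of the Optisystem model used in the record above, the ranging principle of an FMCW lidar can be illustrated with a short sketch: a sweep of bandwidth B over duration T returns from range R with delay τ = 2R/c, and mixing yields a beat frequency f_b = Bτ/T from which R is recovered. All parameter values below are illustrative.

    # FMCW ranging principle: recover range from the beat frequency of an ideal
    # (noise-free) mixed signal. Parameter values are illustrative only.
    import numpy as np

    c = 3e8            # speed of light (m/s)
    B = 150e6          # sweep bandwidth (Hz)
    T = 1e-3           # sweep duration (s)
    R_true = 600.0     # target range (m)

    fs = 2e6                                   # sampling rate of the beat signal
    t = np.arange(0, T, 1 / fs)
    tau = 2 * R_true / c
    f_beat_true = B * tau / T

    # Ideal beat signal after mixing transmit and delayed receive chirps
    beat = np.cos(2 * np.pi * f_beat_true * t)

    # Estimate the beat frequency from the FFT peak and convert back to range
    spectrum = np.abs(np.fft.rfft(beat))
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    f_est = freqs[np.argmax(spectrum[1:]) + 1]         # skip the DC bin
    R_est = f_est * c * T / (2 * B)

    print(f"true beat {f_beat_true/1e3:.1f} kHz, estimated {f_est/1e3:.1f} kHz, range ~{R_est:.0f} m")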

  2. Mobile Agent-Based Software Systems Modeling Approaches: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Aissam Belghiat

    2016-06-01

    Mobile agent-based applications are a special type of software system that takes advantage of mobile agents in order to provide a new, beneficial paradigm for solving multiple complex problems in several fields and areas such as network management, e-commerce and e-learning. At the same time, we notice a lack of real applications based on this paradigm and a lack of serious evaluations of their modeling approaches. Hence, this paper provides a comparative study of modeling approaches for mobile agent-based software systems. The objective is to give the reader an overview and a thorough understanding of the work that has been done and where the gaps in the research are.

  3. Physician-patient argumentation and communication, comparing Toulmin's model, pragma-dialectics, and American sociolinguistics.

    Science.gov (United States)

    Rivera, Francisco Javier Uribe; Artmann, Elizabeth

    2015-12-01

    This article discusses the application of theories of argumentation and communication to the field of medicine. Based on a literature review, the authors compare Toulmin's model, pragma-dialectics, and the work of Todd and Fisher, derived from American sociolinguistics. These approaches were selected because they belong to the pragmatic field of language. The main results were: pragma-dialectics characterizes medical reasoning more comprehensively, highlighting specific elements of the three disciplines of argumentation: dialectics, rhetoric, and logic; Toulmin's model helps substantiate the declaration of diagnostic and therapeutic hypotheses, and as part of an interpretive medicine, approximates the pragma-dialectical approach by including dialectical elements in the process of formulating arguments; Fisher and Todd's approach allows characterizing, from a pragmatic analysis of speech acts, the degree of symmetry/asymmetry in the doctor-patient relationship, while arguing the possibility of negotiating treatment alternatives.

  4. Comparative analysis of hourly and dynamic power balancing models for validating future energy scenarios

    DEFF Research Database (Denmark)

    Pillai, Jayakrishnan R.; Heussen, Kai; Østergaard, Poul Alberg

    2011-01-01

    Energy system analyses on the basis of fast and simple tools have proven particularly useful for interdisciplinary planning projects with frequent iterations and re-evaluation of alternative scenarios. As such, the tool “EnergyPLAN” is used for hourly balanced and spatially aggregate annual … the model is verified on the basis of the existing energy mix on Bornholm as an islanded energy system. Future energy scenarios for the year 2030 are analysed to study a feasible technology mix for a higher share of wind power. Finally, the results of the hourly simulations are compared to dynamic frequency … simulations incorporating the Vehicle-to-grid technology. The results indicate how the EnergyPLAN model may be improved in terms of intra-hour variability, stability and ancillary services to achieve a better reflection of energy and power capacity requirements …

  5. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  6. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Redshift Space Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts of 0.5 and above. Volume phase holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.
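
    One of the simpler analytic treatments that such a comparison might include is Kogelnik's two-wave coupled-wave approximation, which for a lossless volume phase transmission grating at Bragg incidence predicts a first-order efficiency of sin²(πΔn d / (λ cos θ)). The sketch below evaluates this expression for illustrative grating parameters, not BigBOSS design values.

    # Kogelnik approximation for the first-order efficiency of a lossless volume
    # phase transmission grating at Bragg incidence. Parameters are illustrative.
    import numpy as np

    wavelength = 600e-9        # vacuum wavelength (m)
    dn = 0.05                  # index modulation amplitude
    thickness = 4e-6           # grating thickness (m)
    theta_b = np.deg2rad(15)   # Bragg angle inside the medium

    nu = np.pi * dn * thickness / (wavelength * np.cos(theta_b))
    efficiency = np.sin(nu) ** 2
    print(f"predicted first-order efficiency at Bragg: {efficiency:.2%}")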

  7. A Comparative Study of Relational and Non-Relational Database Models in a Web- Based Application

    OpenAIRE

    Cornelia Gyorödi; Robert Gyorödi; Roxana Sotoc

    2015-01-01

    The purpose of this paper is to present a comparative study between relational and non-relational database models in a web-based application, by executing various operations on both relational and on non-relational databases thus highlighting the results obtained during performance comparison tests. The study was based on the implementation of a web-based application for population records. For the non-relational database, we used MongoDB and for the relational database, we used MSSQL 2014. W...

  8. Comparative transcriptional network modeling of three PPAR-α/γ co-agonists reveals distinct metabolic gene signatures in primary human hepatocytes.

    Directory of Open Access Journals (Sweden)

    Renée Deehan

    AIMS: To compare the molecular and biologic signatures of a balanced dual peroxisome proliferator-activated receptor (PPAR)-α/γ agonist, aleglitazar, with tesaglitazar (a dual PPAR-α/γ agonist) or a combination of pioglitazone (Pio; a PPAR-γ agonist) and fenofibrate (Feno; a PPAR-α agonist) in human hepatocytes. METHODS AND RESULTS: Gene expression microarray profiles were obtained from primary human hepatocytes treated with EC50-aligned low, medium and high concentrations of the three treatments. A systems biology approach, Causal Network Modeling, was used to model the data to infer upstream molecular mechanisms that may explain the observed changes in gene expression. Aleglitazar, tesaglitazar and Pio/Feno each induced unique transcriptional signatures, despite comparable core PPAR signaling. Although all treatments inferred qualitatively similar PPAR-α signaling, aleglitazar was inferred to have greater effects on high- and low-density lipoprotein cholesterol levels than tesaglitazar and Pio/Feno, due to a greater number of gene expression changes in pathways related to high-density and low-density lipoprotein metabolism. Distinct transcriptional and biologic signatures were also inferred for stress responses, which appeared to be less affected by aleglitazar than by the comparators. In particular, Pio/Feno was inferred to increase NFE2L2 activity, a key component of the stress response pathway, while aleglitazar had no significant effect. All treatments were inferred to decrease proliferative signaling. CONCLUSIONS: Aleglitazar induces transcriptional signatures related to lipid parameters and stress responses that are distinct from those of other dual PPAR-α/γ treatments. This may underlie the observed favorable changes in lipid profiles in animal and clinical studies with aleglitazar and suggests a differentiated gene profile compared with other dual PPAR-α/γ agonist treatments.

  9. An Analysis Technique/Automated Tool for Comparing and Tracking Analysis Modes of Different Finite Element Models

    Science.gov (United States)

    Towner, Robert L.; Band, Jonathan L.

    2012-01-01

    An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
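
    A minimal sketch of the Modal Assurance Criterion, one of the numerical indicators mentioned above for pairing mode shapes between two finite element models; the mode-shape matrices are random placeholders that would in practice come from the two models reduced to a common set of degrees of freedom.

    # Modal Assurance Criterion between the mode shapes of two models.
    import numpy as np

    def mac_matrix(phi_a, phi_b):
        """MAC[i, j] = |phi_a_i . phi_b_j|^2 / ((phi_a_i . phi_a_i)(phi_b_j . phi_b_j))."""
        num = (phi_a.T @ phi_b) ** 2
        den = np.outer(np.sum(phi_a**2, axis=0), np.sum(phi_b**2, axis=0))
        return num / den

    rng = np.random.default_rng(2)
    phi_model_a = rng.normal(size=(200, 5))                          # 200 DOFs, 5 modes (placeholder)
    phi_model_b = phi_model_a + 0.05 * rng.normal(size=(200, 5))     # slightly perturbed "other" model

    mac = mac_matrix(phi_model_a, phi_model_b)
    # A simple tracking rule: each model-B mode pairs with the model-A mode of maximum MAC
    print("best-match MAC values:", np.round(mac.max(axis=0), 3))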

  10. A Comparative Study on Several Models of Experimental Renal Calcium Oxalate Stones Formation in Rats

    Institute of Scientific and Technical Information of China (English)

    LIU Jihong; CAO Zhenggno; ZHANG Zhaohui; ZHOU Siwei; YE Zhangqun

    2007-01-01

    In order to compare the effects of several experimental renal calcium oxalate stones formation models in rats and to find a simple and convenient model with significant effect of calcium oxalate crystals deposition in the kidney, several rat models of renal calcium oxalate stones formation were induced by some crystal-inducing drugs (CID) including ethylene glycol (EG), ammonium chloride (AC), vitamin D3 [1α(OH)VitD3, alfacalcidol], calcium gluconate, ammonium oxalate, gentamicin sulfate, L-hydroxyproline. The rats were fed with drugs given singly or unitedly. At the end of experiment, 24-h urines were collected and the serum creatinine (Cr), blood urea nitrogen (BUN), the extents of calcium oxalate crystal deposition in the renal tissue, urinary calcium and oxalate excretion were measured. The serum Cr levels in the stone-forming groups were significantly higher than those in the control group except for the group EG+L-hydroxyproline, group calcium gluconate and group oxalate. Blood BUN concentration was significantly higher in rats fed with CID than that in control group except for group EG+L-hydroxyproline and group ammonium oxalate plus calcium gluconate. In the group of rats administered with EG plus Vitamin D3, the deposition of calcium oxalate crystal in the renal tissue and urinary calcium excretion were significantly greater than other model groups. The effect of the model induced by EG plus AC was similar to that in the group induced by EG plus Vitamin D3. EG plus Vitamin D3 or EG plus AC could stably and significantly induced the rat model of renal calcium oxalate stones formation.

  11. A comparative modeling and molecular docking study on Mycobacterium tuberculosis targets involved in peptidoglycan biosynthesis.

    Science.gov (United States)

    Fakhar, Zeynab; Naiker, Suhashni; Alves, Claudio N; Govender, Thavendran; Maguire, Glenn E M; Lameira, Jeronimo; Lamichhane, Gyanu; Kruger, Hendrik G; Honarparvar, Bahareh

    2016-11-01

    An alarming rise of multidrug-resistant Mycobacterium tuberculosis strains and the continuous high global morbidity of tuberculosis have reinvigorated the need to identify novel targets to combat the disease. The enzymes that catalyze the biosynthesis of peptidoglycan in M. tuberculosis are essential and noteworthy therapeutic targets. In this study, the biochemical function and homology modeling of MurI, MurG, MraY, DapE, DapA, Alr, and Ddl enzymes of the CDC1551 M. tuberculosis strain involved in the biosynthesis of peptidoglycan cell wall are reported. Generation of the 3D structures was achieved with Modeller 9.13. To assess the structural quality of the obtained homology modeled targets, the models were validated using PROCHECK, PDBsum, QMEAN, and ERRAT scores. Molecular dynamics simulations were performed to calculate root mean square deviation (RMSD) and radius of gyration (Rg) of MurI and MurG target proteins and their corresponding templates. For further model validation, RMSD and Rg for selected targets/templates were investigated to compare the close proximity of their dynamic behavior in terms of protein stability and average distances. To identify the potential binding mode required for molecular docking, binding site information of all modeled targets was obtained using two prediction algorithms. A docking study was performed for MurI to determine the potential mode of interaction between the inhibitor and the active site residues. This study presents the first accounts of the 3D structural information for the selected M. tuberculosis targets involved in peptidoglycan biosynthesis.
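
    The two trajectory metrics named above, RMSD after optimal superposition and the radius of gyration, can be computed with plain numpy as in the sketch below (the Kabsch algorithm provides the optimal rotation); random coordinates stand in for MD snapshots of a modeled target.

    # RMSD (after Kabsch superposition) and radius of gyration for two snapshots.
    import numpy as np

    def radius_of_gyration(coords):
        centered = coords - coords.mean(axis=0)
        return np.sqrt((centered ** 2).sum(axis=1).mean())

    def kabsch_rmsd(P, Q):
        """RMSD between conformations P and Q (N x 3) after optimal rotation of P onto Q."""
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        H = Pc.T @ Qc
        U, S, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return np.sqrt(((Pc @ R.T - Qc) ** 2).sum(axis=1).mean())

    rng = np.random.default_rng(3)
    frame0 = rng.normal(scale=10.0, size=(150, 3))              # placeholder Calpha snapshot
    frame1 = frame0 + rng.normal(scale=0.5, size=frame0.shape)  # slightly perturbed later frame

    print(f"Rg(frame0) = {radius_of_gyration(frame0):.2f}  RMSD(frame0, frame1) = {kabsch_rmsd(frame0, frame1):.2f}")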

  12. Comparing uncertainty resulting from two-step and global regression procedures applied to microbial growth models.

    Science.gov (United States)

    Martino, K G; Marks, B P

    2007-12-01

    Two different microbial modeling procedures were compared and validated against independent data for Listeria monocytogenes growth. The most generally used method is two consecutive regressions: growth parameters are estimated from a primary regression of microbial counts, and a secondary regression relates the growth parameters to experimental conditions. A global regression is an alternative method in which the primary and secondary models are combined, giving a direct relationship between experimental factors and microbial counts. The Gompertz equation was the primary model, and a response surface model was the secondary model. Independent data from meat and poultry products were used to validate the modeling procedures. The global regression yielded the lower standard errors of calibration, 0.95 log CFU/ml for aerobic and 1.21 log CFU/ml for anaerobic conditions. The two-step procedure yielded errors of 1.35 log CFU/ml for aerobic and 1.62 log CFU/ml for anaerobic conditions. For food products, the global regression was more robust than the two-step procedure for 65% of the cases studied. The robustness index for the global regression ranged from 0.27 (performed better than expected) to 2.60. For the two-step method, the robustness index ranged from 0.42 to 3.88. The predictions were overestimated (fail safe) in more than 50% of the cases using the global regression and in more than 70% of the cases using the two-step regression. Overall, the global regression performed better than the two-step procedure for this specific application.
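
    A minimal sketch contrasting the two procedures on synthetic data, using the modified Gompertz equation as the primary model and a simple linear secondary model for the growth rate as a function of temperature; the original study used a response-surface secondary model and real growth data, so this is only an illustration of the workflow.

    # Two-step versus global regression with a Gompertz primary model.
    # All data are synthetic; the secondary model is a simple line mu(T) = b0 + b1*T.
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, A, mu, lam):
        """Zwietering-style modified Gompertz: log increase versus time."""
        return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

    rng = np.random.default_rng(4)
    temps = np.array([5.0, 10.0, 15.0])
    t = np.linspace(0, 60, 13)                              # hours
    true = {"A": 6.0, "lam": 5.0, "b0": 0.02, "b1": 0.02}   # mu(T) = b0 + b1*T

    data = []
    for T in temps:
        mu_T = true["b0"] + true["b1"] * T
        y = gompertz(t, true["A"], mu_T, true["lam"]) + rng.normal(0, 0.1, t.size)
        data.append((T, t, y))

    # Two-step procedure: primary fits per curve, then a secondary fit on mu
    mus = []
    for T, tt, yy in data:
        p, _ = curve_fit(gompertz, tt, yy, p0=(6.0, 0.2, 5.0), maxfev=20_000)
        mus.append(p[1])
    sec, _ = curve_fit(lambda T, b0, b1: b0 + b1 * T, temps, mus, p0=(0.0, 0.01))

    # Global procedure: secondary model substituted into the primary, one fit to all data
    T_all = np.concatenate([np.full(t.size, T) for T, _, _ in data])
    t_all = np.concatenate([tt for _, tt, _ in data])
    y_all = np.concatenate([yy for _, _, yy in data])

    def global_model(X, A, lam, b0, b1):
        T, tt = X
        return gompertz(tt, A, b0 + b1 * T, lam)

    glob, _ = curve_fit(global_model, (T_all, t_all), y_all,
                        p0=(6.0, 5.0, 0.0, 0.01), maxfev=20_000)

    print("two-step mu(T) coefficients:", np.round(sec, 4))
    print("global   mu(T) coefficients:", np.round(glob[2:], 4))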

  13. Modeling Regional Dynamics of Human-Rangifer Systems: a Framework for Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Matthew Berman

    2013-12-01

    Theoretical models of interaction between wild and domestic reindeer (Rangifer tarandus; caribou in North America) can help explain observed social-ecological dynamics of arctic hunting and husbandry systems. Different modes of hunting and husbandry incorporate strategies to mitigate the effects of differing patterns of environmental uncertainty. Simulations of simple models of harvested wild and domestic herds with density-dependent recruitment show that random environmental variation produces cycles and crashes in populations that would quickly stabilize at a steady state with nonrandom parameters. Different husbandry goals lead to radically different long-term domestic herd sizes. Wild and domestic herds are typically ecological competitors but social complements. Hypothesized differences in ecological competition and diverse human livelihoods are explored in dynamic social-ecological models in which domestic herds competitively interact with wild herds. These models generate a framework for considering issues in the evolution of Human-Rangifer Systems, such as state-subsidized herding and the use of domestic herds for transportation support in hunting systems. Issues considered include the role of geographic factors, markets for Rangifer products, state-subsidized herding, effects of changes in husbandry goals on the fate of wild herds, and how environmental shocks, herd population cycles, and policy shifts might lead to system state changes. The models also invite speculation on the role of geographic factors in the failure of reindeer husbandry to take hold in the North American Arctic. The analysis concludes with suggested empirical strategies for estimating parameters of the model for use in comparative studies across regions of the Arctic.
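
    A minimal sketch of the kind of dynamics described above: a harvested herd with density-dependent recruitment settles to a steady state under fixed parameters, but cycles and occasional crashes appear once random environmental variation is added. Parameter values are illustrative only.

    # Harvested herd with density-dependent recruitment, with and without
    # random environmental variation. All parameters are illustrative.
    import numpy as np

    def simulate(years=200, r=0.3, K=10_000, harvest=600, env_sd=0.0, seed=5):
        rng = np.random.default_rng(seed)
        n = np.empty(years)
        n[0] = 5_000
        for t in range(1, years):
            shock = rng.normal(0.0, env_sd)
            growth = r * n[t - 1] * (1 - n[t - 1] / K) * (1 + shock)
            n[t] = max(n[t - 1] + growth - harvest, 0.0)
        return n

    deterministic = simulate(env_sd=0.0)
    stochastic = simulate(env_sd=0.6)

    print("deterministic herd settles near:", round(float(deterministic[-1])))
    print("stochastic herd min/max over run:", round(float(stochastic.min())), round(float(stochastic.max())))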

  14. Multiple data sets and modelling choices in a comparative LCA of disposable beverage cups.

    Science.gov (United States)

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2014-10-01

    This study used multiple data sets and modelling choices in an environmental life cycle assessment (LCA) to compare typical disposable beverage cups made from polystyrene (PS), polylactic acid (PLA; bioplastic) and paper lined with bioplastic (biopaper). Incineration and recycling were considered as waste processing options, and for the PLA and biopaper cup also composting and anaerobic digestion. Multiple data sets and modelling choices were systematically used to calculate average results and the spread in results for each disposable cup in eleven impact categories. The LCA results of all combinations of data sets and modelling choices consistently identify three processes that dominate the environmental impact: (1) production of the cup's basic material (PS, PLA, biopaper), (2) cup manufacturing, and (3) waste processing. The large spread in results for impact categories strongly overlaps among the cups, however, and therefore does not allow a preference for one type of cup material. Comparison of the individual waste treatment options suggests some cautious preferences. The average waste treatment results indicate that recycling is the preferred option for PLA cups, followed by anaerobic digestion and incineration. Recycling is slightly preferred over incineration for the biopaper cups. There is no preferred waste treatment option for the PS cups. Taking into account the spread in waste treatment results for all cups, however, none of these preferences for waste processing options can be justified. The only exception is composting, which is least preferred for both PLA and biopaper cups. Our study illustrates that using multiple data sets and modelling choices can lead to considerable spread in LCA results. This makes comparing products more complex, but the outcomes more robust.

  15. A comparative study of physical and numerical modeling of tidal network ontogeny

    Science.gov (United States)

    Zhou, Zeng; Olabarrieta, Maitane; Stefanon, Luana; D'Alpaos, Andrea; Carniello, Luca; Coco, Giovanni

    2014-04-01

    We investigate the initiation and long-term evolution of tidal networks by comparing controlled laboratory experiments and their associated scaling laws with outputs from a numerical model. We conducted numerical experiments at both the experimental laboratory scale (ELS) and natural estuary scale (NES) and compared these simulations with experimental data and field observations. Sensitivity tests show that initial bathymetry, frictional parametrization, sediment transport, and bed slope terms play an important role in determining the morphodynamic evolution and the final landscape. Consistent with experimental observations, the morphodynamic feedbacks between flow, sediment transport, and bathymetry gradually lead the system to a less dynamic state, finally reaching a stable network configuration. In both the ELS and NES simulations, the initially planar lagoon with large intertidal areas is subject to erosion, indicating ebb-dominance. Based on quantitative analyses of the ELS and the NES simulations (e.g., geometric characteristics and relationship between modified tidal prism and cross-sectional area), we conclude that numerical simulations are consistent with laboratory experiments and show that both type of models provide a realistic, albeit simplified, representation of natural systems. The combination of laboratory and numerical experiments also allowed us to explore the possibility of reaching a long-term morphodynamic equilibrium. Both the physical and numerical models approach a dynamic equilibrium characterized by negligible gradients in sediment fluxes. The equilibrium configuration appears to be consistent with traditional relationships linking tidal prism and cross-sectional area of the inlet. Finally, this contribution highlights the significance of complementary research between experimental and numerical modeling in investigating long-term morphodynamics of tidal networks.
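
    The equilibrium check mentioned above, a power-law relationship A = k·P^α between tidal prism and inlet cross-sectional area, can be illustrated with a short least-squares fit in log space; the prism/area pairs below are synthetic placeholders for values that would come from the model output or field data.

    # Fit the tidal prism / cross-sectional area power law A = k * P**alpha
    # on synthetic, noisy data.
    import numpy as np

    rng = np.random.default_rng(6)
    prism = np.logspace(4, 8, 25)                                         # tidal prism (m^3), synthetic
    area = 1e-4 * prism**0.95 * np.exp(rng.normal(0, 0.1, prism.size))    # noisy power law

    # Least-squares fit of log(A) = log(k) + alpha * log(P)
    alpha, log_k = np.polyfit(np.log(prism), np.log(area), 1)
    print(f"fitted exponent alpha = {alpha:.2f}, coefficient k = {np.exp(log_k):.2e}")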

  16. Comparing the Goodness of Different Statistical Criteria for Evaluating the Soil Water Infiltration Models

    Directory of Open Access Journals (Sweden)

    S. Mirzaee

    2016-02-01

    Introduction: The infiltration process is one of the most important components of the hydrologic cycle. Quantifying the infiltration of water into soil is of great importance in watershed management. Prediction of flooding, erosion and pollutant transport all depend on the rate of runoff, which is directly affected by the rate of infiltration. Quantification of water infiltration into soil is also necessary to determine the availability of water for crop growth and to estimate the amount of additional water needed for irrigation. Thus, an accurate model is required to estimate infiltration of water into soil. The ability of physical and empirical models to simulate soil processes is commonly measured through comparisons of simulated and observed values. For these reasons, a large variety of indices have been proposed and used over the years in comparisons of soil water infiltration models. Among the proposed indices, some are absolute criteria, such as the widely used root mean square error (RMSE), while others are relative (i.e. normalized) criteria, such as the Nash and Sutcliffe (1970) efficiency criterion (NSE). Selecting and using appropriate statistical criteria to evaluate and interpret the results of soil water infiltration models is essential because each of the criteria focuses on specific types of errors. Also, descriptions of the various goodness-of-fit indices, including their advantages and shortcomings, and rigorous discussion of the suitability of each index are very important. The objective of this study is to compare the goodness of different statistical criteria for evaluating models of water infiltration into soil. Comparison techniques were considered to define the best models: the coefficient of determination (R2), root mean square error (RMSE), efficiency criteria (NSEI) and modified forms (such as NSEjI, NSESQRTI, NSElnI and NSEiI). Comparatively little work has been carried out on the meaning and …
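
    A minimal sketch of a few of the criteria discussed above (RMSE, R2 and the Nash-Sutcliffe efficiency) applied to observed versus simulated cumulative infiltration; the arrays are placeholders for real observations and model output.

    # RMSE, R2 and Nash-Sutcliffe efficiency for observed versus simulated values.
    import numpy as np

    def rmse(obs, sim):
        return np.sqrt(np.mean((obs - sim) ** 2))

    def nse(obs, sim):
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def r2(obs, sim):
        return np.corrcoef(obs, sim)[0, 1] ** 2

    obs = np.array([2.1, 4.0, 5.6, 7.0, 8.1, 9.0, 9.8])     # cumulative infiltration (cm), placeholder
    sim = np.array([2.3, 3.8, 5.9, 6.8, 8.4, 8.8, 10.1])    # a model prediction, placeholder

    print(f"RMSE = {rmse(obs, sim):.3f} cm, R2 = {r2(obs, sim):.3f}, NSE = {nse(obs, sim):.3f}")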

  17. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
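
    A Python analogue, not the SPSS macros themselves, of the matrix-algebra idea behind them: for an ordinary least squares fit, the variance of the difference between two fitted values x1'b and x2'b is (x1 - x2)' Cov(b) (x1 - x2). The synthetic quadratic model below illustrates a comparison that no single regression coefficient captures.

    # Standard error of the difference between two fitted values from an OLS fit.
    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.uniform(0, 10, size=200)
    y = 1.0 + 0.5 * x + 0.2 * x**2 + rng.normal(0, 1.0, size=x.size)

    X = np.column_stack([np.ones_like(x), x, x**2])          # design matrix with a quadratic term
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (X.shape[0] - X.shape[1])       # residual variance
    cov_beta = sigma2 * np.linalg.inv(X.T @ X)               # Cov(b)

    # Compare fitted values at x = 2 and x = 7
    x1 = np.array([1.0, 2.0, 4.0])
    x2 = np.array([1.0, 7.0, 49.0])
    diff = (x1 - x2) @ beta
    se = np.sqrt((x1 - x2) @ cov_beta @ (x1 - x2))
    print(f"difference = {diff:.2f}, SE = {se:.2f}, 95% CI ~ [{diff - 1.96*se:.2f}, {diff + 1.96*se:.2f}]")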

  18. Groundwater Development Stress: Global-Scale Indices Compared to Regional Modeling.

    Science.gov (United States)

    Alley, William M; Clark, Brian R; Ely, David M; Faunt, Claudia C

    2017-08-15

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem. © 2017, National Ground Water Association.

  19. Comparing the Impact of Mobile Nodes Arrival Patterns in Manets using Poisson and Pareto Models

    Directory of Open Access Journals (Sweden)

    John Tengviel

    2013-10-01

    Mobile Ad hoc Networks (MANETs) are dynamic networks populated by mobile stations, or mobile nodes (MNs). Mobility modeling is a hot topic in many areas, for example protocol evaluation and network performance analysis. How to simulate MN mobility is the problem to consider if we want to build an accurate mobility model, since new nodes can join and other nodes can leave the network, and therefore the topology is dynamic. Specifically, MANETs consist of a collection of nodes randomly placed in a line (not necessarily straight). MANETs appear in many real-world network applications, such as vehicular MANETs built along a highway in a city environment or people gathered in a particular location. MNs in MANETs are usually laptops, PDAs or mobile phones. This paper presents comparative results obtained via Matlab simulation. The study investigates the impact of predictive mobility models on mobile node parameters, such as the arrival rate and the number of mobile nodes in a given area, using Pareto and Poisson distributions. The results indicate that mobile node arrival rates may influence the MN population (as a larger number in a location). The Pareto distribution is more reflective of mobility modeling for MANETs than the Poisson distribution.
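
    A minimal sketch comparing node-arrival counts generated from exponential (Poisson-process) and heavy-tailed Pareto inter-arrival times, the two distributions compared above; the rate and shape parameters are illustrative only.

    # Node arrivals in a fixed window under exponential versus Pareto inter-arrival times.
    import numpy as np

    rng = np.random.default_rng(8)
    observation_window = 1_000.0      # seconds
    mean_interarrival = 5.0           # seconds between node arrivals (illustrative)

    def count_arrivals(interarrivals):
        return int(np.searchsorted(np.cumsum(interarrivals), observation_window))

    # Poisson process: exponential inter-arrival times
    exp_gaps = rng.exponential(mean_interarrival, size=5_000)

    # Pareto inter-arrival times (heavy-tailed), scaled to the same mean
    shape = 2.5                                            # must exceed 1 for a finite mean
    scale = mean_interarrival * (shape - 1) / shape        # mean of Pareto = scale*shape/(shape-1)
    pareto_gaps = (rng.pareto(shape, size=5_000) + 1.0) * scale

    print("nodes arriving in window (Poisson):", count_arrivals(exp_gaps))
    print("nodes arriving in window (Pareto): ", count_arrivals(pareto_gaps))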

  20. Comparative modelling and molecular docking of nitrate reductase from Bacillus weihenstephanensis (DS45

    Directory of Open Access Journals (Sweden)

    R. Seenivasagan

    2016-07-01

    Nitrate reductase catalyses the oxidation of NAD(P)H and the reduction of nitrate to nitrite. NR serves as a central point for the integration of metabolic pathways by governing the flux of reduced nitrogen through several regulatory mechanisms in plants, algae and fungi. Bacteria express nitrate reductases that convert nitrate to nitrite, but mammals lack these specific enzymes. The microbial nitrate reductase reduces toxic compounds to nontoxic compounds with the help of NAD(P)H. In the present study, our results revealed that Bacillus weihenstephanensis expresses a nitrate reductase enzyme, which was used to generate a 3D structure of the enzyme. Six different modelling servers, namely Phyre2, RaptorX, M4T Server, HHpred, SWISS MODEL and Mod Web, were used for comparative modelling of the structure. The model was validated with standard parameters (PROCHECK and Verify 3D). This study will be useful for the functional characterization of the nitrate reductase enzyme and for its docking with nitrate molecules using automated docking.